
Most “database performance” advice fails for one reason: nobody captures proof. This Linux VPS MySQL slow query log tutorial shows you how to enable slow logging cleanly, extract the worst queries, fix common causes (indexes and query cleanup), and confirm the improvement on a production-style VPS.
The steps target MySQL 8.0, with MariaDB notes where it matters. They apply to Ubuntu 24.04 LTS and AlmaLinux/Rocky Linux 9.
You’ll run concrete commands, edit the config files these distros actually use, and verify results instead of guessing.
What you’ll build (and what you need)
- Enabled slow query logging with sensible thresholds and safe file permissions
- Daily rotation so logs don’t fill your disk
- Actionable reporting using pt-query-digest (Percona Toolkit)
- Repeatable fixes: indexes, query rewrites, and MySQL settings that usually matter
- Verification with EXPLAIN ANALYZE and server counters
Prereqs: root or sudo access to your VPS, and a MySQL 8.0 server already running. If you host WordPress/WooCommerce, this still applies.
The slow log is often the fastest way to catch a plugin generating expensive queries.
If you want a predictable baseline, start from a clean HostMyCode VPS. It gives you control over MySQL config and disk I/O, which shared hosting often limits.
Step 1: Confirm your MySQL version and current logging state
First, confirm what’s installed and how it’s configured. MySQL vs MariaDB naming differs slightly.
Distros also organize config includes differently.
mysql --version
sudo systemctl status mysql
Log in and inspect the relevant variables:
sudo mysql
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'slow_query_log_file';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'log_queries_not_using_indexes';
SHOW VARIABLES LIKE 'min_examined_row_limit';
What you want to see (after you enable it): slow logging ON, a log file under /var/log/mysql/ (or the distro equivalent), and a long_query_time that matches your workload.
Step 2: Enable slow query logging (Ubuntu 24.04 and AlmaLinux/Rocky 9)
Make this change in config files, not as one-off runtime tweaks. That keeps the setting across restarts and leaves a clear audit trail of what changed.
Ubuntu 24.04 LTS (MySQL 8.0 packages)
On Ubuntu, overrides usually live in:
- /etc/mysql/mysql.conf.d/mysqld.cnf (common)
- or a drop-in under /etc/mysql/conf.d/
A dedicated drop-in keeps upgrades cleaner. It also makes the intent obvious.
sudo nano /etc/mysql/conf.d/slowlog.cnf
Paste:
[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
min_examined_row_limit = 1000
# Flip to ON only while investigating; it gets noisy on busy servers
log_queries_not_using_indexes = OFF
Create the log directory/file and set tight permissions:
sudo install -d -o mysql -g adm -m 2750 /var/log/mysql
sudo touch /var/log/mysql/mysql-slow.log
sudo chown mysql:adm /var/log/mysql/mysql-slow.log
sudo chmod 0640 /var/log/mysql/mysql-slow.log
Restart MySQL:
sudo systemctl restart mysql
AlmaLinux/Rocky Linux 9 (MySQL community repo or MariaDB)
Common config paths:
- MySQL: /etc/my.cnf with includes from /etc/my.cnf.d/
- MariaDB: similar structure under /etc/my.cnf.d/
Create a drop-in:
sudo nano /etc/my.cnf.d/slowlog.cnf
Paste:
[mysqld]
slow_query_log=1
slow_query_log_file=/var/log/mysqld-slow.log
long_query_time=1
min_examined_row_limit=1000
Create and permission the file:
sudo touch /var/log/mysqld-slow.log
sudo chown mysql:mysql /var/log/mysqld-slow.log
sudo chmod 0640 /var/log/mysqld-slow.log
Restart:
sudo systemctl restart mysqld
Verify it actually stuck
After the restart, confirm the variables match your changes:
sudo mysql -e "SHOW VARIABLES LIKE 'slow_query_log'; SHOW VARIABLES LIKE 'long_query_time'; SHOW VARIABLES LIKE 'slow_query_log_file';"
Then confirm the log file exists and MySQL is writing to it:
sudo ls -lh /var/log/mysql/mysql-slow.log 2>/dev/null || sudo ls -lh /var/log/mysqld-slow.log
sudo tail -n 5 /var/log/mysql/mysql-slow.log 2>/dev/null || sudo tail -n 5 /var/log/mysqld-slow.log
Step 3: Pick a threshold that finds real pain (without logging everything)
long_query_time=1 second is a good starting point for most web workloads.
For a busy WordPress store, you can temporarily drop it to 0.2–0.5 seconds during a known traffic window. Then raise it again.
- Start: 1 second + min_examined_row_limit=1000 to skip tiny queries
- Investigate: temporarily drop to 0.3 seconds for 30–60 minutes
- Stabilize: align it to your SLO (example: log anything > 500ms if your target is 200ms)
Runtime change (useful during a live incident):
sudo mysql -e "SET GLOBAL long_query_time=0.3;"
sudo mysql -e "SHOW VARIABLES LIKE 'long_query_time';"
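Before dropping the threshold, it helps to estimate how much extra volume the lower setting will capture. A minimal sketch of the arithmetic, using made-up response times rather than real measurements:

```shell
# Count how many sample query latencies (in seconds) each threshold captures.
# The latency figures below are invented for illustration.
printf '0.05\n0.22\n0.35\n0.80\n1.40\n2.10\n' > /tmp/latencies.txt
awk '$1 > 1   { over1++ }
     $1 > 0.3 { over03++ }
     END { printf "over 1s: %d, over 0.3s: %d\n", over1, over03 }' /tmp/latencies.txt
```

In this sample, cutting the threshold from 1s to 0.3s doubles what gets logged; on a real server the multiplier can be far larger, which is why the lower setting should stay temporary.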
Pitfall: turning on log_queries_not_using_indexes=ON on a busy server can flood the log.
Use it briefly. Then switch it back off.
Step 4: Rotate slow logs so they don’t cause outages
Slow logs can balloon during an incident. Rotation keeps a performance problem from becoming a “disk full” outage.
For a broader baseline, see our guide on logrotate + systemd log rotation.
Ubuntu (logrotate snippet)
Create:
sudo nano /etc/logrotate.d/mysql-slow
Use:
/var/log/mysql/mysql-slow.log {
daily
rotate 14
compress
delaycompress
missingok
notifempty
create 0640 mysql adm
sharedscripts
postrotate
test -x /usr/bin/mysqladmin && /usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf flush-logs
endscript
}
Test rotation without waiting a day:
sudo logrotate -d /etc/logrotate.d/mysql-slow
sudo logrotate -f /etc/logrotate.d/mysql-slow
Alma/Rocky (logrotate snippet)
Create:
sudo nano /etc/logrotate.d/mysqld-slow
Use:
/var/log/mysqld-slow.log {
daily
rotate 14
compress
delaycompress
missingok
notifempty
create 0640 mysql mysql
sharedscripts
postrotate
/usr/bin/mysqladmin flush-logs || true
endscript
}
Step 5: Turn the slow log into a ranked list (Percona Toolkit)
The slow log is evidence. It’s not readable at scale.
pt-query-digest groups similar queries, ranks them by total time, and produces a short list you can act on.
Install Percona Toolkit
Ubuntu:
sudo apt update
sudo apt install -y percona-toolkit
Alma/Rocky: if your repo doesn’t include it, install from Percona’s official repository for your distro. Then:
sudo dnf install -y percona-toolkit
Run pt-query-digest
On Ubuntu:
sudo pt-query-digest /var/log/mysql/mysql-slow.log --limit=20 > /root/slowlog-report.txt
sudo less /root/slowlog-report.txt
On Alma/Rocky:
sudo pt-query-digest /var/log/mysqld-slow.log --limit=20 > /root/slowlog-report.txt
sudo less /root/slowlog-report.txt
Focus on these parts of the report:
- Total time (not just average) — a 200ms query executed 10,000 times is often the real drag.
- Rows examined — big numbers usually mean missing indexes or inefficient join patterns.
- Query fingerprints — the grouped form you can search for in your application code.
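If you want to track just the ranked list over time, you can strip a digest report down to its profile rows. A minimal sketch, using an inlined sample that mimics the report's "Profile" section (the query IDs and numbers are invented):

```shell
# Keep only the ranked profile rows from a pt-query-digest style report.
# The sample report text below is invented for illustration.
cat > /tmp/slowlog-report.txt <<'EOF'
# Profile
# Rank Query ID   Response time  Calls R/Call
# ==== ========== ============== ===== ======
#    1 0xAAAA1111 120.0000 60.0%  9000 0.0133
#    2 0xBBBB2222  80.0000 40.0%   120 0.6667
EOF
# Match lines that start with "#", a rank number, then a query ID
grep -E '^# +[0-9]+ 0x' /tmp/slowlog-report.txt
```

Saving just these rows each day makes it easy to see when a new fingerprint climbs the ranking.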
Step 6: Reproduce one slow query and measure it correctly
Pick the #1 or #2 query in the digest report. Copy a representative sample.
Run it directly so you can measure changes.
sudo mysql
SET SESSION sql_log_bin=0;
SET SESSION optimizer_trace="enabled=off";
Then:
SELECT NOW();
-- paste the slow query here
For deeper visibility, use EXPLAIN ANALYZE (MySQL 8.0+):
EXPLAIN ANALYZE
-- paste the same SELECT ... query
What you’re checking: whether MySQL scans far too many rows (rows), builds temporary tables, or spills sorts to disk.
If you see “Using temporary” and “Using filesort” on large result sets, you usually need an index that matches the WHERE + ORDER BY pattern.
Step 7: Fix patterns that show up on hosting workloads
On VPS hosting workloads, slow logs tend to repeat the same themes.
Don’t “optimize everything.” Fix the top fingerprints first. Then re-measure.
Pattern A: Missing index on a selective WHERE clause
Example slow query shape:
SELECT * FROM orders WHERE customer_id = 123 AND status = 'paid' ORDER BY created_at DESC LIMIT 20;
Add a composite index that matches the filter and sort:
ALTER TABLE orders
ADD INDEX idx_orders_customer_status_created (customer_id, status, created_at);
Verify it is used:
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 123 AND status = 'paid' ORDER BY created_at DESC LIMIT 20;
Pitfall: piling on indexes can slow writes.
Start with the top 1–3 query fingerprints. Measure the impact before you add more.
Pattern B: Leading wildcard LIKE (can’t use normal index)
Example:
SELECT id FROM users WHERE email LIKE '%@example.com';
This usually forces a scan. Your practical options are:
- Rewrite it as a prefix search if you can (LIKE 'prefix%').
- Store the searchable portion separately (example: email_domain) and index it.
Example schema tweak:
ALTER TABLE users ADD COLUMN email_domain VARCHAR(255);
UPDATE users SET email_domain = SUBSTRING_INDEX(email,'@',-1);
ALTER TABLE users ADD INDEX idx_users_email_domain (email_domain);
Then query:
SELECT id FROM users WHERE email_domain = 'example.com';
Pattern C: Sorting a big result without the right index
If the slow log points at an ORDER BY that an index can’t support, MySQL may sort huge intermediate sets.
The fix is usually an index that matches your WHERE columns, followed by the ORDER BY column.
Quick check:
EXPLAIN ANALYZE
SELECT ...
If the plan shows filesort with a large rows estimate, adjust the index. Then re-run the same EXPLAIN ANALYZE.
Pattern D (WordPress/WooCommerce): wp_postmeta and meta_query pain
WooCommerce and plugin-heavy sites often hammer wp_postmeta.
Slow log fingerprints commonly include filters on meta_key and meta_value.
Don’t guess. Pull the exact query fingerprint from the digest report first.
Then inspect the current indexes:
SHOW INDEX FROM wp_postmeta;
Some sites benefit from an index that fits the plugin’s access pattern, but it’s workload-specific. Add one index and retest.
Stop if it doesn’t move the needle.
If you’re running a production store and want a safer, hosting-managed route, consider moving it to HostMyCode WordPress hosting so tuning happens with guardrails.
For broader app-side performance work (page caching, PHP tuning, object caching), pair this with WordPress performance optimization in 2026.
Step 8: Reduce repeat queries (the “death by 10,000 cuts” problem)
Sometimes a query isn’t “slow” in isolation. It just runs constantly.
The slow log often shows decent average time paired with brutal total time.
Fixes that tend to pay off quickly in hosting environments:
- Application caching: cache expensive query results for 30–300 seconds.
- Object cache for WordPress: Redis/Memcached cuts repeated reads.
- Connection pooling (carefully): useful when app servers churn connections.
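The caching idea can be sketched at the shell level. The function name, TTL handling, and cache path below are illustrative, not a production cache:

```shell
# Minimal file-based result cache: reuse output while it is fresher than TTL.
# cache_run TTL_SECONDS KEY COMMAND...  (all names here are made up)
cache_run() {
  local ttl=$1 key=$2; shift 2
  local f="/tmp/qcache-$key"
  # Serve the cached file if it exists and is younger than TTL seconds
  if [ -f "$f" ] && [ $(( $(date +%s) - $(stat -c %Y "$f") )) -lt "$ttl" ]; then
    cat "$f"
  else
    # Run the command, print its output, and store it for the next caller
    "$@" | tee "$f"
  fi
}
cache_run 60 demo echo "expensive result"
```

An application-level cache (Redis, Memcached, or framework caching) does the same thing closer to the query, which is where the repeat-read savings actually land.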
If the bigger problem is connection churn rather than query latency, you may be CPU-bound on PHP-FPM workers, or simply undersized on the VPS.
Our guide on capacity planning for VPS hosting helps you decide whether to tune or resize.
Step 9: Check MySQL health signals that correlate with slow logs
The slow log tells you what hurts. These counters often explain why it hurts.
Common causes include memory pressure, disk spill, or scan-heavy access patterns.
Run:
sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_running';"
sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';"
sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';"
sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';"
sudo mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';"
- Created_tmp_disk_tables rising quickly often means sorts/joins are spilling to disk.
- Handler_read_rnd_next high usually points to full table scans.
- Buffer pool reads vs requests shows whether reads come from RAM or disk.
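To turn the last two counters into a single hit-ratio number, a quick awk sketch works; the counter values below are sample figures, so substitute your own SHOW GLOBAL STATUS output:

```shell
# Buffer pool hit ratio = 1 - (reads served from disk / total read requests).
# Sample counter values for illustration only.
reads=1200          # Innodb_buffer_pool_reads
requests=4800000    # Innodb_buffer_pool_read_requests
awk -v r="$reads" -v q="$requests" \
  'BEGIN { printf "buffer pool hit ratio: %.4f%%\n", (1 - r/q) * 100 }'
```

Ratios persistently below roughly 99% on a read-heavy workload usually mean the buffer pool is too small for the working set.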
Correlate that with basic OS-level I/O and CPU signals:
sudo apt install -y sysstat 2>/dev/null || sudo dnf install -y sysstat
iostat -xz 1 10
vmstat 1 10
If disk utilization stays pegged and slow queries show high rows examined, fix indexing first.
If indexes look right but you’re still I/O bound, faster storage or more RAM for the buffer pool is often the cleanest path on a busy database VPS.
Step 10: Apply safe MySQL tuning changes (small, reversible, measurable)
Skip random “my.cnf tuning guides.” On typical hosting workloads, two knobs matter early: buffer pool sizing and temporary table sizing.
Change one setting at a time. Then re-check the same signals.
InnoDB buffer pool
On dedicated MySQL servers, 60–70% of RAM is a common starting point for innodb_buffer_pool_size.
On a combined web+db VPS, stay closer to 25–40%. That leaves room for PHP and the OS.
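As a quick arithmetic check for a combined web+db box, here is the 30% calculation for an 8 GB VPS; both the RAM figure and the fraction are example assumptions, not a recommendation for your server:

```shell
# 30% of an 8 GB machine, expressed in MB for innodb_buffer_pool_size.
mem_kb=8388608   # MemTotal for an 8 GB VPS (compare /proc/meminfo)
awk -v m="$mem_kb" \
  'BEGIN { printf "innodb_buffer_pool_size ~ %dM\n", (m * 0.30) / 1024 }'
```

On a live host you would read MemTotal from /proc/meminfo instead of hard-coding it.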
Check current:
sudo mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
Set in your MySQL config drop-in (same file you used for slow log, or a separate innodb.cnf):
[mysqld]
innodb_buffer_pool_size = 2G
Restart MySQL during a maintenance window.
Temporary tables
If tmp tables frequently spill to disk, modest increases can help:
[mysqld]
tmp_table_size = 128M
max_heap_table_size = 128M
Pitfall: these are per-connection limits. With high concurrency, oversized values can create real memory pressure.
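A back-of-envelope worst case makes that risk concrete. The connection count below is an assumed figure; the tmp size matches the config snippet above:

```shell
# Worst case: every connection materializes one full in-memory tmp table at once.
conns=150   # assumed peak connection count
tmp_mb=128  # tmp_table_size in MB, as set above
awk -v c="$conns" -v t="$tmp_mb" \
  'BEGIN { printf "worst-case tmp table memory: %d MB\n", c * t }'
```

Nearly 19 GB of potential tmp-table memory on a small VPS is why these limits should grow in modest steps, not jumps.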
Step 11: Verify the improvement (before/after you can show)
This is where many “optimizations” fall apart. Keep the test consistent.
Capture before/after output. Avoid changing five things at once.
- Run EXPLAIN ANALYZE before the change and save the output.
- Apply the index/query/tuning change.
- Run the same EXPLAIN ANALYZE again.
- Re-run pt-query-digest after 30–60 minutes of normal traffic.
A quick way to compare the “top offenders” day over day is to keep a daily report file:
sudo pt-query-digest /var/log/mysql/mysql-slow.log --limit=20 > /root/slowlog-$(date +%F).txt
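To spot fingerprints that appeared or dropped out between two days, diff the saved lists. The file names and query IDs below are invented samples standing in for real report extracts:

```shell
# Two invented daily "top offender" lists, standing in for real reports.
printf '0xAAAA1111\n0xBBBB2222\n' > /tmp/top-day1.txt
printf '0xBBBB2222\n0xCCCC3333\n' > /tmp/top-day2.txt
# "<" lines fell off the list since day 1; ">" lines are new arrivals.
diff /tmp/top-day1.txt /tmp/top-day2.txt || true
```

A fingerprint vanishing from the top 20 after an index change is exactly the before/after evidence this step is about.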
If you’re also tightening general VPS hygiene (patching, SSH hardening, firewall rules), our VPS hosting security checklist for 2026 pairs well with database work.
MySQL exposure and weak credentials still show up in real incidents.
Common pitfalls that waste hours
- Logging to a path MySQL can’t write: set ownership to mysql and confirm writes with tail.
- “Fixing” it by adding 10 indexes: writes get slower and the real query may remain unchanged. Start with one fingerprint.
- Ignoring total time: average latency and total impact are different problems.
- Testing with warm cache only: run multiple iterations; restart MySQL when appropriate.
- Letting logs fill the disk: rotate early. Incidents often line up with traffic spikes.
Where this fits in a production hosting workflow
Slow query logging isn’t a one-and-done project. Use it during incidents.
Revisit it quarterly as part of routine maintenance.
If you handle client sites as a reseller, the slow log turns “the site is slow” into a specific query you can fix.
If you’re building a broader recovery plan, pair this work with restore testing and backup verification.
Our hands-on guide Linux VPS disaster recovery planning shows how to practice restores before you need them.
Summary: your quick path from slow to stable
- Enable the slow log with a sensible threshold (long_query_time around 1s to start).
- Rotate the log so it can’t fill your disk.
- Use pt-query-digest to rank queries by total impact.
- Fix one query fingerprint at a time—usually a missing composite index.
- Verify with EXPLAIN ANALYZE and a fresh digest report.
If you’re running MySQL on a busy site and need predictable performance headroom, move the workload to a VPS sized for database I/O.
Managed VPS hosting from HostMyCode is a practical choice when you want tuning help, monitored resources, and fewer late-night slowdowns to chase.
If your database is slowing down your site, start with the slow query log and follow the evidence. For production workloads, a properly sized HostMyCode VPS gives you the control you need for MySQL tuning, and managed VPS hosting can handle routine maintenance and monitoring for you.
FAQ
Will enabling the slow query log hurt performance?
On most VPS workloads, the overhead is small if you log to local disk and keep thresholds reasonable.
The bigger risk is log volume. Rotate it, and avoid aggressive settings on high-traffic servers.
Should I set long_query_time to 0 to log everything?
Only for short diagnostic windows, and only if you can tolerate the log growth.
For normal operations, keep it at 0.5–2 seconds. Adjust based on your latency targets.
Is the slow query log enough, or do I need Performance Schema?
For many hosting issues, the slow log plus pt-query-digest is enough to find the top bottlenecks quickly.
Performance Schema helps with deeper analysis, but most fixes don’t require it.
How do I know whether to add an index or rewrite the query?
If EXPLAIN ANALYZE shows a full scan with high rows examined, add an index first.
If the query filters poorly (example: leading wildcard LIKE) or returns far more rows than needed, rewriting usually wins.
What if my slow queries come from a WordPress plugin?
Confirm the exact query fingerprint in the digest report. Then test plugin updates and configuration changes in staging.
If the plugin forces expensive meta queries, replace it or add a narrowly targeted index. Verify the result carefully.