
Most WordPress sites don’t feel slow because PHP is “bad”. They feel slow because each request repeats the same expensive work: bootstrap WordPress, fetch options, build queries, render templates, then discard it all. To optimize WordPress performance on AlmaLinux in 2026, you’ll get better results by stacking caches that each cover a different bottleneck: Redis for persistent object caching, Varnish for HTTP page caching, and Nginx FastCGI cache for consistent full-page caching right at the PHP-FPM boundary.
This tutorial assumes AlmaLinux 10, Nginx, PHP-FPM, and WordPress are already running. If you’re still assembling the base stack, start here: High-performance LEMP stack on HostMyCode VPS.
Architecture: who caches what (and why you want all three)
These layers overlap, but they aren’t duplicates if you give each one a clear job:
- Redis Object Cache: keeps WordPress objects (options, transients, query results) in memory across requests. That cuts MySQL load and reduces PHP work on uncached pages and inside wp-admin.
- Varnish HTTP Accelerator: caches complete HTTP responses for anonymous traffic. It’s built for traffic spikes because it can serve cached pages without waking up Nginx or PHP.
- Nginx FastCGI cache: stores the HTML produced by PHP-FPM. It’s simple to reason about, stable under load, and useful when you want caching rules living directly in Nginx (or when Varnish doesn’t fit your routing).
A common single-server layout looks like this: Varnish (:80) → Nginx (:8080) → PHP-FPM plus static files. Redis runs alongside on localhost. If HTTPS terminates on the same machine, you’ll typically terminate TLS at Nginx and let Varnish cache HTTP only (or terminate TLS at a CDN/load balancer in front). For most WordPress installs, this is plenty.
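With everything on one host, a quick port check confirms each layer is listening where the layout above expects. A sketch (ports match the example layout; the awk fields assume standard `ss -lntp` output):

```shell
# List listeners on the ports used by Varnish (:80), Nginx (:8080), and Redis (:6379).
# Field 4 is the local address:port; field 6 is the owning process (shown because of -p).
sudo ss -lntp | awk '$4 ~ /:(80|8080|6379)$/ {print $4, $6}'
```

If a port is missing from the output, that layer isn’t bound where you think it is.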
Before you touch caching: get a baseline you can trust
Measure first. Otherwise you’re guessing—and guessing breaks the moment traffic changes.
- Check TTFB at the origin (bypass CDN):
curl -s -o /dev/null -w '%{time_starttransfer}\n' https://example.com/
- Confirm PHP-FPM health and error rate:
journalctl -u php-fpm -n 100 --no-pager
- Watch MySQL queries during a page load:
mysqladmin processlist
If TTFB is already high, fix underlying latency before piling on caches. This guide walks through the usual server-side culprits: Fix High TTFB.
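A single TTFB sample can land on a cold cache or a busy CPU, so sample a few times and average before you trust the number. A small sketch (example.com is a placeholder for your origin):

```shell
#!/usr/bin/env bash
# Take 5 TTFB samples against the origin and print the average in seconds.
url="https://example.com/"
for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w '%{time_starttransfer}\n' "$url"
done | awk '{sum += $1} END {printf "avg TTFB over %d runs: %.3fs\n", NR, sum/NR}'
```

Run it before and after each caching change so you can attribute improvements to the right layer.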
Install and tune Redis for WordPress object caching
Redis is often the lowest-risk win because it helps even when full-page caching is bypassed (logged-in users, WooCommerce cart/checkout, previews, and many admin screens).
1) Install Redis and enable it
On AlmaLinux 10, Redis is available via AppStream. Install it, start it, and verify the service responds:
sudo dnf -y install redis
sudo systemctl enable --now redis
redis-cli ping
You want PONG.
2) Basic security + memory settings
Edit /etc/redis/redis.conf:
- Bind to localhost:
bind 127.0.0.1 ::1
- Disable dangerous remote access: keep
protected-mode yes
- Set a sane memory cap. Example for a 4 GB VPS hosting one WordPress site:
maxmemory 256mb
- Choose eviction policy for cache usage:
maxmemory-policy allkeys-lfu
Restart:
sudo systemctl restart redis
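After the restart, confirm the runtime values match what you edited. A sketch (the awk line just converts the byte count that CONFIG GET returns into MB):

```shell
# CONFIG GET prints the setting name on one line and its value on the next.
redis-cli config get maxmemory | awk 'NR == 2 {printf "maxmemory: %d MB\n", $1 / 1024 / 1024}'
redis-cli config get maxmemory-policy
```

If maxmemory reports 0 MB, Redis is still unbounded and your edit didn’t take effect.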
3) Configure WordPress Redis Object Cache plugin
Install the “Redis Object Cache” plugin in WP Admin, then add the following to wp-config.php (above /* That's all, stop editing! */):
define('WP_CACHE', true);
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
// Optional: avoid key collisions if multiple sites share Redis
define('WP_REDIS_PREFIX', 'wp_example_');
Enable the object cache from the plugin page. Then confirm Redis is actually filling:
redis-cli info keyspace
Pitfall: If you run multiple PHP pools/sites, don’t reuse the default prefix. Key collisions can show up as “random” stale settings, broken menus, or odd behavior that’s hard to trace.
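One way to spot a prefix collision is to summarize which prefixes actually exist in Redis. A sketch, assuming keys are namespaced with a prefix before the first colon (the Redis Object Cache plugin’s exact key format can vary by version, so treat the grouping as approximate):

```shell
# Group every key by the text before the first ":" and count each group.
# Two sites sharing one prefix will show up as a single suspiciously large group.
redis-cli --scan | awk -F: '{count[$1]++} END {for (p in count) print count[p], p}' | sort -rn
```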
If you want a deeper hardening pass (auth, ACLs, TLS on other distros, and so on), the same ideas apply from this guide: install and secure Redis.
Set up Nginx FastCGI cache (fast wins, clear rules)
FastCGI cache does one thing well: it stores the HTML output from PHP-FPM and reuses it until it expires or you purge it. It’s a strong fit for WordPress pages that don’t vary by user.
1) Define cache path and key
In your main Nginx config (commonly /etc/nginx/nginx.conf or a file in /etc/nginx/conf.d/), inside http { }:
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=WORDPRESS:100m inactive=60m max_size=2g;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
Create the directory and set permissions:
sudo mkdir -p /var/cache/nginx/fastcgi
sudo chown -R nginx:nginx /var/cache/nginx/fastcgi
2) Apply caching to the PHP location
In your WordPress server block (example /etc/nginx/conf.d/example.com.conf), put the skip-cache rules at server level and the cache directives inside the PHP handler location. Adjust the fastcgi_pass line to match your PHP-FPM socket path.
set $skip_cache 0;
# Skip cache for logged-in users and common WordPress cookies
if ($request_method = POST) { set $skip_cache 1; }
if ($query_string != "") { set $skip_cache 1; }
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|/wp-login.php|/wp-json/|/cart/|/checkout/|/my-account/") { set $skip_cache 1; }
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass|woocommerce_items_in_cart|woocommerce_cart_hash") { set $skip_cache 1; }
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# Example socket on AlmaLinux (verify your pool config)
fastcgi_pass unix:/run/php-fpm/www.sock;
fastcgi_cache WORDPRESS;
fastcgi_cache_valid 200 301 302 10m;
fastcgi_cache_valid 404 1m;
fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
add_header X-FastCGI-Cache $upstream_cache_status always;
}
Reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
3) Verify behavior quickly
Request a public page twice and inspect the header:
curl -I https://example.com/ | grep -i x-fastcgi-cache
You should see MISS on the first request and HIT on the second (for anonymous traffic).
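The two-request check can be scripted so both statuses print side by side. A sketch (swap in your own domain; `tr -d '\r'` strips the CRLF line endings curl preserves in raw headers):

```shell
# Fetch the same public page twice and print the FastCGI cache status each time.
for i in 1 2; do
  status=$(curl -sI https://example.com/ | tr -d '\r' \
    | awk 'tolower($1) == "x-fastcgi-cache:" {print $2}')
  echo "request $i: ${status:-header missing}"
done
```

For anonymous traffic you should see MISS then HIT; "header missing" means the add_header line isn’t active on that response.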
Pitfall: Don’t cache /wp-json/ by default if you run a headless front-end, membership plugins, or personalized endpoints. Cache API responses only if you’re confident they’re identical across users.
Put Varnish in front of Nginx (HTTP caching that survives spikes)
Varnish is a great fit for bursty anonymous traffic because it can serve pages straight from RAM and gives you flexible logic through VCL. On a single host, the clean setup is simple: Varnish listens on port 80, and Nginx moves to 8080.
1) Install Varnish and move Nginx to 8080
sudo dnf -y install varnish
sudo systemctl enable --now varnish
Edit your Nginx server block to listen on 8080 for plain HTTP:
listen 8080;
server_name example.com www.example.com;
Reload Nginx.
2) Configure Varnish backend to Nginx
Edit /etc/varnish/default.vcl:
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "8080";
}
# Basic WordPress-friendly rules
sub vcl_recv {
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Don’t cache admin/login
if (req.url ~ "^/wp-admin/" || req.url ~ "^/wp-login\.php") {
return (pass);
}
# Bypass cache for logged-in users and carts
if (req.http.Cookie ~ "wordpress_logged_in" || req.http.Cookie ~ "woocommerce_items_in_cart" || req.http.Cookie ~ "woocommerce_cart_hash") {
return (pass);
}
# Strip common marketing params to improve hit rate
if (req.url ~ "(\?|&)(utm_|gclid=|fbclid=)") {
set req.url = regsuball(req.url, "(utm_[^&]+|gclid=[^&]+|fbclid=[^&]+)", "");
set req.url = regsuball(req.url, "&&+", "&");
set req.url = regsub(req.url, "[\?&]+$", "");
set req.url = regsub(req.url, "\?&", "?");
}
}
sub vcl_backend_response {
# Respect upstream no-cache signals
if (beresp.http.Cache-Control ~ "no-cache|no-store" || beresp.http.Pragma ~ "no-cache") {
set beresp.uncacheable = true;
return (deliver);
}
# Default TTL for cacheable pages
set beresp.ttl = 10m;
}
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Varnish-Cache = "HIT";
} else {
set resp.http.X-Varnish-Cache = "MISS";
}
}
Restart Varnish:
sudo systemctl restart varnish
3) Make sure Varnish actually listens on :80
On AlmaLinux, Varnish’s systemd parameters may define the listen port. Check:
sudo systemctl cat varnish | sed -n '1,120p'
sudo ss -lntp | egrep ':80|:8080'
If Varnish isn’t on port 80, adjust the ExecStart options (via a drop-in override) to set -a :80. Then reload systemd and restart.
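A drop-in override usually looks like the sketch below. The path and the malloc size are examples; copy the rest of the ExecStart line from `systemctl cat varnish` on your own box rather than from here, changing only the `-a` option:

```ini
# Created with: sudo systemctl edit varnish
# Lands in: /etc/systemd/system/varnish.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m
```

The empty ExecStart= line is required: it clears the packaged command before systemd applies your replacement. Apply with sudo systemctl daemon-reload and sudo systemctl restart varnish.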
Pitfall: If you terminate TLS on the same host, Varnish won’t see HTTPS unless you place an SSL terminator (Nginx/HAProxy) in front of it. Many site owners terminate TLS at a CDN and forward HTTP to Varnish at the origin. If you keep TLS on Nginx, you can still use FastCGI cache for HTTPS and reserve Varnish for HTTP-only paths or internal traffic patterns.
Don’t double-cache blindly: choose who owns full-page caching
Running both Varnish and Nginx FastCGI cache can work, but debugging gets messy if you don’t decide which layer “owns” full-page caching.
- If Varnish sits in front and caches the page, Nginx FastCGI cache will see fewer hits. That’s normal. Check X-Varnish-Cache first, then X-FastCGI-Cache.
- If you run HTTPS-only without a TLS terminator in front of Varnish, real users won’t benefit from Varnish. In that case, FastCGI cache becomes your practical full-page cache.
- For WooCommerce or membership sites, keep full-page caching conservative and let Redis carry more of the load for logged-in sessions.
Purge strategy: how you keep cached pages fresh
TTL-only caching works, but it’s frustrating for sites that publish frequently. Most teams land on one of these approaches:
- Plugin-driven purge: use a WordPress plugin that sends PURGE/BAN to Varnish and/or clears Nginx cache on publish.
- Webhook purge: trigger a purge from your CI/CD or deploy hook after theme/plugin updates.
- Selective short TTL: keep TTL modest (5–15 minutes) and rely on Redis + opcode cache to keep misses fast.
If you implement Varnish PURGE, restrict it by IP (localhost or your admin VPN only). A public purge endpoint turns cache invalidation into an outage.
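In VCL, that restriction is typically an ACL plus a PURGE branch in vcl_recv. A sketch for /etc/varnish/default.vcl (the ACL entries are placeholders for your admin IPs; Varnish concatenates multiple vcl_recv definitions, so this can sit alongside the rules shown earlier):

```vcl
# Only these clients may send PURGE requests.
acl purgers {
    "127.0.0.1";
    "::1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "PURGE not allowed"));
        }
        return (purge);
    }
}
```

A plugin or deploy hook then invalidates a page with a request like curl -X PURGE http://127.0.0.1/some-post/ from an allowed host.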
Quick diagnostics checklist (what to check when performance regresses)
- Are you caching what you think? Inspect headers: X-Varnish-Cache, X-FastCGI-Cache, and plugin cache indicators.
- Is Redis evicting too aggressively? Run redis-cli info stats and look at evicted_keys.
- Are 502/504 errors increasing? That usually points to PHP-FPM overload or upstream timeouts. Use our fix guides: Fix 502 Bad Gateway and Fix 504 Gateway Timeout in PHP-FPM.
- Did a plugin add cache-busting query strings everywhere? Hit rate falls off a cliff. Strip harmless params in Varnish, and audit what your front-end scripts append.
- Is the admin area slower than before? That’s expected if you only cached public pages. Redis helps here; full-page caching usually won’t.
Where HostMyCode fits (and what to provision)
Caching makes good hardware feel great—and weak hardware feel inconsistent. On small instances, Redis and Varnish still help, but cache warmups, cron spikes, and PHP bursts can hit CPU limits quickly. If you want steady performance, give the stack room to breathe.
A HostMyCode VPS is a solid baseline for AlmaLinux + Nginx + Redis + Varnish. If you don’t want to manage memory caps, service restarts, or purge rules, managed VPS hosting keeps the stack maintained without turning performance into a weekly chore.
If your WordPress site is bumping into shared limits, run this stack on a VPS with enough RAM for Redis and enough CPU for PHP bursts. HostMyCode offers WordPress hosting for simpler setups and a flexible HostMyCode VPS for full control on AlmaLinux.
FAQ
Should I use Varnish or Nginx FastCGI cache for WordPress?
If you can route origin HTTP traffic through Varnish, it handles spikes well and is easy to verify with hit/miss headers. If most of your real traffic is HTTPS going straight to Nginx, FastCGI cache is usually the more practical full-page cache.
Will Redis object cache replace full-page caching?
No. Redis cuts database and PHP work, but WordPress still runs. Full-page caching (Varnish or FastCGI cache) avoids PHP entirely for cacheable requests, which is where the biggest TTFB drops come from for anonymous traffic.
What breaks most often after enabling caching?
Personalized pages getting cached (logged-in state, WooCommerce cart/checkout, membership content) and “cache-busting” query strings tanking your hit rate. Start conservative: bypass cache on cookies, POST requests, and checkout/account URLs.
How much RAM should I allocate to Redis?
For a single small-to-medium WordPress site, 128–512 MB is a sensible range in 2026. Watch evicted_keys and raise the limit if eviction climbs during peak hours.
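Checking eviction pressure is one command. A sketch (evicted_keys and the hit/miss counters live in the stats section of INFO; `tr -d '\r'` handles the CRLF line endings redis-cli emits):

```shell
# Print eviction and hit/miss counters from Redis.
redis-cli info stats | tr -d '\r' \
  | grep -E '^(evicted_keys|keyspace_hits|keyspace_misses):'
```

A steadily climbing evicted_keys during peak hours is the signal to raise maxmemory.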
Do I still need a CDN?
Often, yes. A CDN cuts latency for global visitors and offloads static assets. Server-side caching reduces origin work; a CDN reduces distance and connection overhead. Used together, they complement each other.
Summary: the practical stack for WordPress speed on AlmaLinux
If you want repeatable results, treat caching as layers with specific responsibilities. Redis keeps WordPress from repeatedly hitting MySQL. Varnish serves anonymous pages at RAM speed when you can route HTTP through it. Nginx FastCGI cache gives you predictable full-page caching close to PHP-FPM, especially in HTTPS-first deployments.
For a stable AlmaLinux WordPress setup with room to grow, deploy on a HostMyCode VPS and scale as traffic and cache hit rate rise. If you want someone else to handle the services and tuning, managed VPS hosting is the simplest path to consistent performance.