Server response time
MetricSpot measures TTFB — time to first byte. Slow server response caps every other performance metric; LCP cannot be fast if TTFB is two seconds.
What this check does
Times how long the server takes to return the first byte of the HTML document — TTFB. The clock starts when MetricSpot’s crawler sends the request and stops when the first response byte arrives. DNS, TCP, TLS, and server processing are all included.
The check passes under 800 ms (web.dev’s “good” threshold) and warns over 1.8 s.
Why it matters
TTFB is a ceiling on every other Core Web Vital. Largest Contentful Paint (LCP) cannot beat TTFB + render time — if your server takes 2 seconds to respond, your LCP is mathematically at least 2 seconds, no matter how fast your CSS is or how aggressive your image optimization is. Google’s “good LCP” threshold is 2.5 s; a 2 s TTFB leaves you 500 ms for everything else.
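The budget arithmetic above, spelled out (the constant and function name are just labels for the numbers in this paragraph):

```javascript
// LCP >= TTFB + render time, so the render budget is whatever is left of
// Google's 2.5 s "good LCP" threshold after TTFB is spent.
const GOOD_LCP_MS = 2500;

function renderBudgetMs(ttfbMs) {
  return Math.max(0, GOOD_LCP_MS - ttfbMs); // never negative
}

// renderBudgetMs(2000) → 500: a 2 s TTFB leaves 500 ms for everything else.
// renderBudgetMs(200)  → 2300: a fast origin leaves a comfortable margin.
```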
Slow TTFB usually points to one of four things: a cold serverless function, a database query in the request path, no edge caching, or a CMS doing work it shouldn’t (WordPress without object cache is the classic).
How to fix it
Find the bottleneck first. Add Server-Timing headers so you can see where the time goes:
Server-Timing: db;dur=420, render;dur=180, total;dur=620
Then attack whichever stage is biggest.
Cache the HTML at the edge. A static or page-cached response should be served in 20-100 ms from the nearest POP, not 800 ms from your origin.
# nginx — micro-cache dynamic pages for 60s
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=html:50m max_size=1g inactive=10m;
location / {
    proxy_cache html;
    proxy_cache_valid 200 60s;
    proxy_cache_use_stale updating error timeout;
    proxy_cache_lock on;
    proxy_pass http://app;
}
Cloudflare: enable “Cache Everything” via a Cache Rule for HTML, then set Cache-Control: s-maxage=60, stale-while-revalidate=600 from your origin. stale-while-revalidate does the heavy lifting: Cloudflare serves the stale copy instantly and refreshes it in the background.
Caddy:
example.com {
    reverse_proxy localhost:3000
    header Cache-Control "public, max-age=0, s-maxage=60, stale-while-revalidate=600"
}
Next.js (App Router): prefer static generation. If a page is dynamic, set route segment config:
export const revalidate = 60; // ISR
export const dynamic = "force-static"; // when possible
For server components hitting a DB, wrap the fetch in unstable_cache or use fetch(url, { next: { revalidate: 60 } }).
Bun / Node: the usual wins are (a) connection-pool your DB (pg.Pool, not a fresh client per request), (b) avoid N+1 queries — one JOIN beats 50 round-trips, (c) precompute anything you can at build time, (d) gzip/brotli the response.
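The pooling idea in (a), reduced to a sketch in plain JavaScript. This is a simplified stand-in for what pg.Pool does, minus timeouts, health checks, and connection limits; the core reuse pattern is the same:

```javascript
// Connection pooling, stripped to its core: a fixed set of connections is
// reused across requests instead of paying setup cost on every request.
class Pool {
  constructor(createConn, size) {
    this.idle = Array.from({ length: size }, createConn); // pre-built connections
    this.waiters = [];
  }
  acquire() {
    if (this.idle.length > 0) return Promise.resolve(this.idle.pop());
    return new Promise((resolve) => this.waiters.push(resolve)); // queue up
  }
  release(conn) {
    const next = this.waiters.shift();
    if (next) next(conn);      // hand the connection straight to a waiter
    else this.idle.push(conn); // or return it to the idle set
  }
}
```

A request handler does `const conn = await pool.acquire()`, runs its queries, and calls `pool.release(conn)` in a `finally` block so the connection is returned even on error.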
WordPress: install a persistent object cache (Redis via the Redis Object Cache plugin) and a page cache (WP Super Cache, W3 Total Cache, or LiteSpeed Cache). A vanilla WP install with no caching routinely takes 1-3 s per request; with both caches enabled it drops below 200 ms.
Database: add the missing index. Run EXPLAIN ANALYZE on the slowest query in your request path; if it says Seq Scan, you need an index. One missing index can be the entire TTFB problem.
Geographic latency: if your origin is in Frankfurt and your visitors are in Singapore, no amount of server optimization will fix the 180 ms round-trip. Put a CDN in front of HTML (not just images).
Once TTFB is healthy, work on Largest Contentful Paint and Interaction to Next Paint — those are downstream of TTFB and reveal client-side issues.
Frequently asked questions
Is TTFB a Core Web Vital?
No. TTFB is an upstream metric — Google doesn’t use it as a ranking signal directly. But it’s the floor for LCP, which is a ranking signal. A bad TTFB guarantees a bad LCP.
Why does MetricSpot measure a different TTFB than PageSpeed Insights?
Different measurement points. MetricSpot measures from our crawler in a single data center. PSI uses field data (Chrome User Experience Report) when available, which is real users worldwide on real networks. Field data is the truth; lab data is the diagnostic.
My TTFB is fine for the homepage but bad for product pages — why?
The homepage is probably cached and product pages aren’t. Either extend page caching to product URLs (with a short TTL and cache-bust on inventory change), or precompute product pages at build time via ISR / on-demand revalidation.
Last updated 2026-05-11