# Real-user TTFB (field data)
TTFB from real Chrome users over the last 28 days (75th percentile): the wait between the navigation request and the first byte of the response, capturing real network and server delays.
## What this check does
Reads the 75th-percentile Time to First Byte from the Chrome User Experience Report — the time from the moment a real Chrome user’s browser sends the navigation request to the moment the first byte of the response arrives. CrUX aggregates this over a rolling 28-day window. CrUX labels TTFB experimental, and it is neither a Core Web Vital nor a ranking signal in its own right, but it caps every other paint metric: nothing else can render until the first byte arrives.
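As a sketch, you can read this number programmatically through the Chrome UX Report API’s `queryRecord` endpoint. The helper names below are illustrative, and you would need your own API key when actually sending the request:

```javascript
// Build a CrUX API queryRecord request for an origin's p75 TTFB.
// Endpoint and metric name follow the public CrUX API (v1); pass your
// own API key as the `key` query parameter when you send it.
function buildCruxQuery(origin) {
  return {
    endpoint: "https://chromeuxreport.googleapis.com/v1/records:queryRecord",
    body: {
      origin, // aggregate over the whole origin
      formFactor: "PHONE", // mobile is usually the worst case
      metrics: ["experimental_time_to_first_byte"],
    },
  };
}

// Pull p75 (in ms) out of a queryRecord response.
function p75Ttfb(response) {
  return response.record.metrics.experimental_time_to_first_byte
    .percentiles.p75;
}
```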
| Field TTFB (p75) | Verdict |
|---|---|
| ≤ 800 ms | Good |
| 800–1,800 ms | Needs improvement |
| > 1,800 ms | Poor |
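Mechanically, the thresholds in the table map to verdicts like this (a throwaway helper, not part of any tool):

```javascript
// Bucket a field TTFB p75 (in ms) into the verdicts above.
function ttfbVerdict(p75Ms) {
  if (p75Ms <= 800) return "Good";
  if (p75Ms <= 1800) return "Needs improvement";
  return "Poor";
}
```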
TTFB folds in everything between user and origin: DNS lookup, TLS handshake, redirect chains, request queuing on your server, application work, and time spent rendering server-side templates. Anything that happens before the first byte goes out adds to this number.
## Why it matters
TTFB isn’t a Core Web Vital, but it’s the upstream constraint on every Core Web Vital. A 2-second TTFB means LCP can’t possibly be under 2.5 s — there’s only 500 ms left for everything else. Cutting TTFB by 500 ms tends to cut LCP by a similar amount almost for free.
It also exposes problems lab tools miss:
- Distant users. A server in us-east-1 looks fast from a US-based lab tool but terrible to a Brazilian user. Field TTFB shows you the geographic reality.
- Cold server states. If your origin runs serverless functions, cold-start latency only shows up in real-user data; lab runs usually hit an already-warm instance.
- Redirect chains. A 301 chain from `example.com` → `www.example.com` → `https://www.example.com/` adds a full round-trip per hop to TTFB. Lab tools either skip these or report them differently.
## How to improve it
1. Put a CDN in front of your origin. Even for dynamic pages, a CDN can cache TLS sessions and connection state. Cloudflare, Fastly, Bunny, and Vercel Edge all have free or near-free tiers:
```shell
# Cloudflare proxy on (orange-clouded) — instant CDN
# nslookup your domain — you should see Cloudflare IPs, not your origin
nslookup www.example.com
```
2. Cache the HTML at the edge. For pages that don’t need to be per-user (marketing, blog, docs), set a `Cache-Control` header so the CDN returns cached HTML in under 50 ms:

```
Cache-Control: public, s-maxage=86400, stale-while-revalidate=604800
```
The stale-while-revalidate directive lets the CDN serve a stale response instantly while it revalidates in the background — best of both worlds.
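A minimal sketch of applying this selectively, assuming hypothetical route patterns — only non-personalized pages get the long edge TTL:

```javascript
// Hypothetical: marketing/blog/docs pages are safe to cache at the edge;
// everything else is treated as per-user and bypasses the CDN cache.
const EDGE_CACHEABLE = [/^\/$/, /^\/blog(\/|$)/, /^\/docs(\/|$)/];

function cacheControlFor(path) {
  const cacheable = EDGE_CACHEABLE.some((re) => re.test(path));
  return cacheable
    ? "public, s-maxage=86400, stale-while-revalidate=604800" // edge caches for a day
    : "private, no-store"; // per-user pages are never edge-cached
}
```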
3. Eliminate redirect chains. Audit with curl:

```shell
curl -sIL https://example.com/ | grep -E "^(HTTP|Location)"
```
Every Location: line is an extra round-trip. Common offenders: http→https, apex→www (or www→apex), and trailing-slash differences. Collapse these to a single 301 if possible.
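If you capture that curl output, counting hops is mechanical (the parsing helper is illustrative):

```javascript
// Count redirect hops in `curl -sIL` output: each Location header is
// one extra round-trip before the first useful byte arrives.
function redirectHops(curlHeaders) {
  return curlHeaders
    .split("\n")
    .filter((line) => /^location:/i.test(line.trim()))
    .length;
}
```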
4. Profile your server with Server-Timing. Add response headers that tell DevTools where the time went:

```javascript
// Express/Node example
res.setHeader("Server-Timing", "db;dur=120, render;dur=45, total;dur=170");
```
Chrome’s Network tab will show a per-phase breakdown so you can see whether the time is in DB queries, template rendering, or framework overhead.
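Rather than hard-coding the string, you can build it from measured durations (the helper name is made up):

```javascript
// Format measured phase durations (ms) as a Server-Timing header value.
function serverTimingHeader(phases) {
  return Object.entries(phases)
    .map(([name, dur]) => `${name};dur=${dur}`)
    .join(", ");
}
```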
5. Use HTTP/2 or HTTP/3. TCP slow-start and head-of-line blocking on HTTP/1.1 cost real TTFB. All modern CDNs default to HTTP/2; many already serve HTTP/3 (QUIC) for even better mobile-network performance.
6. Cut origin distance. If you have a single-region origin and global users, consider replicating or moving to an edge-rendered framework (Astro, Next.js on Vercel, SvelteKit on Cloudflare Workers).
## Frequently asked questions
### My TTFB is fast on my dev machine but slow in CrUX. Why?
You’re co-located with your origin; your real users aren’t. Add a CDN and your global users will see TTFB closer to your dev experience. The other common cause is cold-start latency on serverless deployments — your first request after idle pays a 500–2,000 ms penalty.
### Should I worry about TTFB if it’s labeled “experimental” in CrUX?
Yes. It’s experimental in the sense that Google hasn’t yet made it a formal Core Web Vital, but the value is well-defined and the thresholds are stable. Treat it as a leading indicator for your LCP — if TTFB is poor, LCP almost certainly will be too.
### My TTFB is good but LCP is poor. What’s going on?
The bottleneck is downstream: rendering, image loading, or render-blocking resources. Look at the LCP rule next. Good field TTFB with poor LCP means your server is fast and the slowness is in the client; poor field TTFB with poor LCP means fix TTFB first, because every millisecond saved there compounds into LCP improvements for free.
Last updated 2026-05-12