TTFB Explained: Why Your Site Can "Feel Slow" Even When LCP Looks Fine

[Image: waterfall chart showing server response time as the first bottleneck in the page load sequence]

Your LCP passes, your INP is green, your CLS is near zero—but visitors still describe your site as “slow.” The culprit is often Time to First Byte (TTFB): the gap between a browser requesting a page and receiving the first byte of the response. TTFB isn’t a Core Web Vital itself, but it directly constrains every metric that is.

What TTFB actually represents

TTFB measures the combined time of DNS resolution, TCP connection, TLS negotiation, and server processing. It’s the time a visitor stares at a blank screen (or the previous page) before anything starts happening. High TTFB delays everything downstream: HTML parsing, resource discovery, rendering, and interactivity.

A TTFB of 200ms means rendering can't begin until at least 200ms after the click. Because TTFB includes DNS resolution, connection setup, and TLS negotiation, network overhead inflates it further, and on mobile networks with higher latency those round trips compound. A server that responds in 100ms can still produce a 400-600ms TTFB on a slow mobile connection.
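To see how those round trips stack up, here is a back-of-the-envelope breakdown. All numbers are illustrative assumptions (a 100ms round-trip mobile network and a 100ms origin), not measurements:

```python
# Rough TTFB breakdown on a high-latency mobile connection.
# Every number here is an illustrative assumption, not a measurement.
RTT_MS = 100  # one round trip on a slow mobile network

dns_lookup = RTT_MS               # DNS query + response
tcp_handshake = RTT_MS            # SYN / SYN-ACK / ACK
tls_handshake = RTT_MS            # TLS 1.3 needs one round trip
request_and_first_byte = RTT_MS   # request goes out, first byte comes back
server_processing = 100           # origin generates the response

ttfb = (dns_lookup + tcp_handshake + tls_handshake
        + request_and_first_byte + server_processing)
print(ttfb)  # 500 -- a "100ms server" seen through four round trips
```

Four network round trips turn a fast origin into a half-second wait, which is exactly the 400-600ms range above.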

Google considers a TTFB under 800ms good, but that's a generous threshold. Users perceive delays above 200-300ms. Sites with TTFB consistently under 200ms feel noticeably snappier than those hovering at 500-800ms, even if both technically “pass.”

Why TTFB stays high even after common optimisations

Page caching helps enormously for cached pages, but cache misses still hit your origin. The first visitor after a cache expires, pages with personalisation, search results, and dynamic content all bypass cache and reveal your true TTFB. If your origin is slow, cache misses expose it.

Database queries are a frequent TTFB bottleneck. WordPress sites making 50-100 database queries per page load spend significant time waiting for query results. Even fast individual queries accumulate. Optimising slow queries and reducing query count directly reduces TTFB.

Geographic distance between server and visitor creates unavoidable latency. Physics sets the floor: light in optical fibre covers roughly 100km per millisecond of round trip, so a server in Virginia serving a visitor in Sydney adds 150-200ms of network latency alone. No amount of server optimisation overcomes distance.
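The lower bound is easy to estimate. This sketch assumes light in fibre travels about 200km per millisecond one way, and uses a rough great-circle distance for Virginia to Sydney:

```python
FIBRE_KM_PER_MS = 200  # light in optical fibre covers ~200 km per ms, one way

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time imposed by distance alone."""
    return 2 * distance_km / FIBRE_KM_PER_MS

# Virginia to Sydney is roughly 15,700 km as the crow flies;
# real cable routes are longer, so the actual RTT is higher still.
print(round(min_rtt_ms(15_700)))  # 157 -- milliseconds before the server does any work
```

That 157ms is a hard floor before routing, congestion, or server processing add anything, which is why edge caching, not server tuning, is the fix for distance.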

Plugin and middleware overhead adds processing time before a response can begin. Security plugins, analytics, redirects, and middleware each add milliseconds. Individually small, collectively significant. Profiling server-side processing reveals which components delay the response.

How TTFB affects perceived speed

Visitors perceive speed through visual feedback. TTFB determines when that feedback can begin. A 100ms TTFB lets the browser start rendering almost immediately. A 1-second TTFB means one full second of nothing before any content appears.

Navigation between pages feels sluggish with high TTFB even if each page renders quickly once data arrives. Clicking a link and waiting 800ms before seeing any response feels broken, regardless of how fast rendering completes afterward.

LCP can technically pass with high TTFB if the largest contentful element loads quickly after the HTML arrives. But the visitor’s subjective experience includes the TTFB wait. A 2.0s LCP built on 1.5s TTFB plus 0.5s rendering “passes” but feels worse than 0.3s TTFB plus 1.5s progressive rendering.

Diagnosing your TTFB problem

Separate network latency from server processing. Use server-side timing headers (Server-Timing) to report actual processing time. If server processing is 50ms but TTFB is 500ms, the problem is network latency, not your server. If processing is 400ms, optimise your application.
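A minimal sketch of that separation: parse the Server-Timing header your application emits, sum the reported durations, and subtract from the browser-measured TTFB. The metric names (db, app, cache) and all durations below are hypothetical:

```python
def parse_server_timing(header: str) -> dict[str, float]:
    """Parse a Server-Timing header value into {metric_name: duration_ms}."""
    timings = {}
    for entry in header.split(","):
        parts = [p.strip() for p in entry.split(";")]
        name = parts[0]
        for param in parts[1:]:
            if param.startswith("dur="):
                timings[name] = float(param[4:])
    return timings

# Hypothetical header emitted by your application:
header = "db;dur=120, app;dur=280, cache;dur=5"
server_ms = sum(parse_server_timing(header).values())  # 405 ms of processing
observed_ttfb = 520                                    # measured in the browser
print(observed_ttfb - server_ms)  # 115.0 -- ms attributable to the network
```

With 405ms of the 520ms accounted for by processing, this hypothetical site should optimise its application first; if the gap had dominated, the fix would be network-side (CDN, closer region).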

Test from multiple geographic locations. A site might have 100ms TTFB when tested locally but 800ms from another continent. Use tools that test from various locations to understand TTFB across your audience’s geography.

Compare cached vs uncached TTFB. If cached pages show 50ms TTFB but uncached pages show 2 seconds, your caching is working but coverage needs improvement. Increasing cache hit rates reduces how often visitors experience slow uncached responses.
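The average experience is just a weighted blend of the two paths. A small sketch with the illustrative numbers above (50ms cached, 2 seconds uncached):

```python
def expected_ttfb(hit_pct: int, hit_ms: float, miss_ms: float) -> float:
    """Average TTFB given a cache hit rate (as a whole-number percentage)."""
    return (hit_pct * hit_ms + (100 - hit_pct) * miss_ms) / 100

# Illustrative numbers: 50 ms cached, 2000 ms uncached.
print(expected_ttfb(80, 50, 2000))  # 440.0 -- average at an 80% hit rate
print(expected_ttfb(95, 50, 2000))  # 147.5 -- average at a 95% hit rate
```

Raising the hit rate from 80% to 95% cuts average TTFB by two thirds without touching the origin at all, which is why cache coverage often matters more than cache speed.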

Check TTFB under load. A server handling one request may respond in 100ms but degrade to 2 seconds under concurrent traffic. Load testing reveals capacity-related TTFB problems that don’t appear during low-traffic testing.
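A toy queueing model shows why the degradation is so sharp. Assume a single worker processing requests serially at 100ms each; under concurrency, later requests wait behind earlier ones:

```python
# Toy model: one worker handling requests serially.
# Under concurrency, each request queues behind the ones before it.
PROCESSING_MS = 100

def ttfb_under_load(concurrent_requests: int) -> int:
    """Worst-case TTFB for the last request in the queue."""
    return concurrent_requests * PROCESSING_MS

print(ttfb_under_load(1))   # 100 -- the low-traffic number you benchmarked
print(ttfb_under_load(20))  # 2000 -- what the last queued visitor sees at peak
```

Real servers have worker pools and more complex queueing, but the shape holds: capacity problems hide completely at low traffic and surface as multi-second TTFB at peak.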

Reducing TTFB effectively

Implement server-level caching if you haven’t already. Full-page caching serves responses from memory without executing application code. This reduces TTFB to near-zero for cached pages. For WordPress, this means page caching plugins or server-level caching solutions.
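The mechanism can be sketched in a few lines. This is a minimal in-memory page cache with a TTL, where render_page is a hypothetical stand-in for the expensive application path:

```python
import time

page_cache: dict[str, tuple[str, float]] = {}  # url -> (html, expires_at)
CACHE_TTL_S = 300  # cache entries live for five minutes

def render_page(url: str) -> str:
    """Stand-in for expensive application code (templates, DB queries)."""
    time.sleep(0.2)  # simulate 200 ms of server processing
    return f"<html>content for {url}</html>"

def serve(url: str) -> str:
    """Serve from cache when possible; only misses pay the rendering cost."""
    cached = page_cache.get(url)
    if cached and cached[1] > time.time():
        return cached[0]  # cache hit: near-zero processing before first byte
    html = render_page(url)  # cache miss: full render
    page_cache[url] = (html, time.time() + CACHE_TTL_S)
    return html
```

The first request for a URL pays the full 200ms; every request within the TTL returns in microseconds. Production plugins add invalidation and vary rules, but the TTFB win comes from this same hit path.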

Use a CDN with edge caching for global audiences. Serving cached responses from edge locations near visitors eliminates geographic latency. Instead of every request traveling to your origin, most serve from nearby CDN nodes. This transforms 500ms cross-continent TTFB into 50ms local responses.

Optimise database queries and application code. Profile server-side execution to identify slow operations. Add database indexes, reduce query count, implement object caching (Redis/Memcached), and eliminate unnecessary processing. Every millisecond of server processing directly adds to TTFB.
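Object caching in particular is cheap to demonstrate. This sketch uses Python's in-process lru_cache as a stand-in for Redis or Memcached; get_option and the query counter are hypothetical:

```python
from functools import lru_cache

query_count = 0  # tracks how many "database queries" actually run

@lru_cache(maxsize=1024)
def get_option(name: str) -> str:
    """Stand-in for a database lookup; the cache absorbs repeats."""
    global query_count
    query_count += 1
    return f"value-of-{name}"

# A page template that reads the same two options fifty times each
# still only queries the database twice:
for _ in range(50):
    get_option("site_title")
    get_option("theme")
print(query_count)  # 2 -- instead of 100 round trips to the database
```

An external store like Redis adds a network hop per lookup but shares the cache across processes and survives restarts; the query-count reduction, and its direct effect on TTFB, is the same idea.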

Consider connection-level improvements. HTTP/2 and HTTP/3 reduce connection overhead. TLS 1.3 reduces handshake round trips. Preconnecting with <link rel="preconnect"> eliminates connection-setup delays for known origins. These reduce the non-processing components of TTFB.

Evaluate your hosting infrastructure. Shared hosting often produces inconsistent TTFB because resources are shared. Dedicated resources or containerised hosting provides more predictable, lower TTFB. The hosting tier directly constrains your TTFB floor.

When TTFB optimisation is worth the effort

If your TTFB is consistently under 200ms, further optimisation yields diminishing returns. Focus on other metrics instead. Good TTFB is a foundation, not a goal in itself.

If TTFB exceeds 500ms for significant portions of traffic, it’s worth investigating. The impact compounds across every page load, every navigation, every visitor. Fixing TTFB improves the experience of every page, not just one.

If your audience is geographically concentrated near your server and TTFB is moderate, the effort may not justify the improvement. If your audience is global and TTFB varies wildly by region, edge caching delivers substantial improvement.

The practical takeaway

TTFB is the foundation that other performance metrics build on. High TTFB constrains how good your site can feel, regardless of other optimisations. Low TTFB gives the browser a head start on rendering, downloading resources, and becoming interactive.

Don’t chase TTFB perfection, but don’t ignore it either. If visitors describe your site as “slow” despite passing Core Web Vitals, TTFB is the first metric to investigate. It’s often the hidden bottleneck that explains the gap between metrics and perception.

Measure TTFB from your users’ perspective—multiple locations, real devices, varying network conditions. Server-side monitoring alone misses the network latency component that visitors actually experience. A holistic performance assessment captures these dimensions and identifies whether TTFB is your binding constraint.
