Performance

Core Web Vitals in 2026: How to Read Search Console vs PageSpeed Insights (Field Data vs Lab Data)

8 min read
[Image: Dashboard showing Core Web Vitals metrics from Search Console and PageSpeed Insights side by side]

Core Web Vitals data appears in two places—Google Search Console and PageSpeed Insights—but the numbers rarely match. This confuses site owners who expect consistency. The difference isn’t a bug; it reflects two fundamentally different measurement approaches. Understanding which data to trust for which decision prevents wasted optimisation effort.

Field data vs lab data: the fundamental distinction

Search Console reports field data, collected through real user monitoring (RUM). This comes from actual Chrome users visiting your site over a rolling 28-day window. It reflects real devices, real networks, and real user behaviour. The data represents what your visitors actually experience.
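This field data comes from the Chrome UX Report (CrUX), and the same dataset can be queried directly. The Python sketch below shows roughly what a CrUX API request and response handling look like; the endpoint and field names follow the public CrUX API, but treat the exact shapes (and the example values) as illustrative, and note that a real call needs an API key.

```python
import json
import urllib.request

# Public CrUX API endpoint (a real call requires an API key).
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_request(url=None, origin=None, form_factor="PHONE"):
    """Build a queryRecord payload for a single URL or a whole origin."""
    body = {"formFactor": form_factor}
    if url:
        body["url"] = url
    elif origin:
        body["origin"] = origin
    return body

def extract_p75(response):
    """Pull the 75th-percentile value for each metric in a CrUX response."""
    metrics = response.get("record", {}).get("metrics", {})
    return {name: data["percentiles"]["p75"]
            for name, data in metrics.items()
            if "percentiles" in data}

def query_crux(api_key, **kwargs):
    """Network call, shown for completeness only."""
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(build_crux_request(**kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_p75(json.load(resp))
```

A response for a high-traffic page carries per-metric histograms plus a `percentiles.p75` value, which is the number Search Console's assessment is based on.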

PageSpeed Insights shows both field data and lab data. Lab data comes from a simulated page load using a throttled device and network. It’s consistent and reproducible—useful for debugging. But it doesn’t reflect your actual visitors because it uses standardised conditions that may not match your audience’s devices or connections.

The distinction matters because optimisation decisions based on the wrong data type waste effort. Lab data tells you what could be slow under specific conditions. Field data tells you what is slow for real visitors. Both are useful, but they answer different questions.

Why the numbers differ

Device diversity drives the largest gaps between lab and field. Lab tests simulate a single mid-range device. Your real visitors use everything from flagship phones to five-year-old budget devices. If most of your audience uses newer hardware, field data looks better than lab data. If your audience skews towards older devices, field data may look worse.

Network conditions create another divergence. Lab simulations use fixed throttling profiles that may not represent your visitors’ actual connections. A site serving mostly urban broadband users performs differently in the field than lab tests assume. Conversely, if your audience includes significant mobile-data traffic, lab simulations might underestimate real-world problems.

User interaction patterns affect Interaction to Next Paint (INP) significantly. Lab tests simulate specific interactions, but your real users click, tap, and type in unpredictable ways: interacting with elements the lab test never touched, often while the main thread is still busy. Field INP captures these real interactions; lab tools typically report Total Blocking Time as a proxy instead, because INP requires real input.

Geographic distribution influences Time to First Byte (TTFB) and Largest Contentful Paint (LCP). If your server is in the US but half your audience is in Europe, field data reflects that latency. Lab tests run from a single location and miss this geographic reality.

How to read Search Console data

Search Console groups URLs by status: Good, Needs Improvement, or Poor. These thresholds are fixed: LCP at or under 2.5s is good, INP at or under 200ms is good, CLS at or under 0.1 is good. The assessment uses the 75th percentile value for each metric, meaning at least 75% of page loads must meet the threshold.
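The 75th-percentile assessment is straightforward to sketch. The Python below uses the nearest-rank percentile and Google's published boundaries (the "poor" floors of 4s, 500ms, and 0.25 are documented by Google, though Search Console's UI only surfaces the buckets); it is a simplified model of the bucketing, not Search Console's exact implementation.

```python
# Core Web Vitals boundaries (good ceiling, poor floor) as published by Google.
THRESHOLDS = {
    "LCP": (2500, 4000),   # milliseconds
    "INP": (200, 500),     # milliseconds
    "CLS": (0.1, 0.25),    # unitless layout-shift score
}

def p75(samples):
    """75th percentile of a list of page-load measurements (nearest rank)."""
    ordered = sorted(samples)
    rank = -(-75 * len(ordered) // 100) - 1  # ceil(0.75 * n) - 1
    return ordered[rank]

def classify(metric, samples):
    """Bucket a metric the way Search Console buckets URLs."""
    good, poor = THRESHOLDS[metric]
    value = p75(samples)
    if value <= good:
        return "Good"
    if value <= poor:
        return "Needs Improvement"
    return "Poor"
```

Note how one slow tail-end load out of four is ignored: `classify("LCP", [1800, 2000, 2200, 9000])` is still "Good", because the 75th percentile sits at 2200ms.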

The 28-day rolling window means changes take time to reflect. If you deploy a fix today, you won’t see the full impact in Search Console for roughly a month. The old slow data gradually cycles out as new faster data accumulates. This delay frustrates teams expecting immediate feedback.
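A toy model makes that lag concrete. Assuming uniform daily traffic and a simple average (real CrUX aggregation computes percentiles over the window, so this is only an approximation), the visible effect of a fix grows linearly as post-fix days displace pre-fix days:

```python
def blended_value(pre_fix, post_fix, days_since_fix, window=28):
    """Toy model of a rolling-window metric: the most recent
    `days_since_fix` days reflect the fix, the rest still carry
    the old value, with equal traffic every day."""
    covered = min(days_since_fix, window)
    return (post_fix * covered + pre_fix * (window - covered)) / window

# A fix that cuts LCP from 4000 ms to 2000 ms surfaces gradually:
# day 7 -> 3500 ms, day 14 -> 3000 ms, only at day 28 -> 2000 ms.
```

This is why a fix that lab tests confirm instantly can still show a "Poor" bucket in Search Console for weeks: the window is still mostly pre-fix traffic.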

URL grouping in Search Console clusters similar pages. A single slow template can drag down hundreds of URLs. Look at which URL groups are failing to identify whether the problem is site-wide or template-specific. Often, one page type (like product pages or archive pages) fails while others pass.

Mobile and desktop are reported separately. Check both. Mobile typically shows worse numbers because of device and network constraints. If desktop passes but mobile fails, your optimisation should focus on mobile-specific issues: render-blocking resources, large images without responsive sizing, or JavaScript that’s too heavy for mobile processors.

How to use PageSpeed Insights effectively

Start with the field data section at the top. If field data is available for your URL, it provides the same real-user assessment Google uses for ranking. The lab data below is supplementary—useful for diagnostics but not what Google evaluates.

Use lab data for identifying specific issues. The Performance score, diagnostic opportunities, and filmstrip view help pinpoint what’s slow and why. Treemap visualisations show JavaScript weight. Waterfall views reveal loading sequences. This diagnostic detail doesn’t exist in field data.

Run lab tests after changes to get immediate feedback. Unlike field data’s 28-day delay, lab tests reflect current page state instantly. Use lab tests during development to verify fixes work before waiting for field data to confirm. Just remember lab improvements don’t guarantee field improvements if your audience’s conditions differ from lab simulation.
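Those post-change lab runs can be scripted against the PageSpeed Insights API for CI-style feedback after each deploy. The sketch below builds a request URL and separates the two data types one response carries; the endpoint and the `loadingExperience`/`lighthouseResult` field names follow the v5 API, but check the exact response shape against the documentation before relying on it.

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_psi_url(page_url, strategy="mobile", api_key=None):
    """Build a PageSpeed Insights v5 request URL for one page.
    `strategy` is "mobile" or "desktop"; run both, since Search
    Console reports them separately."""
    params = {"url": page_url, "strategy": strategy}
    if api_key:
        params["key"] = api_key
    return f"{PSI_ENDPOINT}?{urlencode(params)}"

def split_field_and_lab(response):
    """Separate the two data types a PSI response carries:
    field data (CrUX) vs lab data (a Lighthouse run)."""
    return {
        "field": response.get("loadingExperience", {}).get("metrics"),
        "lab_score": response.get("lighthouseResult", {})
                             .get("categories", {})
                             .get("performance", {})
                             .get("score"),
    }
```

Keeping the two halves separate in your tooling mirrors the advice above: gate deploys on the lab score if you like, but report the field section to stakeholders.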

Compare origin-level and URL-level field data. PageSpeed Insights shows both. Origin-level data aggregates your entire site. If a specific URL lacks sufficient traffic for individual field data, origin data still provides a baseline. High-traffic pages get individual data; low-traffic pages may only show origin aggregates.
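That fallback order is simple to encode. A small helper (hypothetical names, assuming p75 dictionaries like those the CrUX API returns) makes the scope explicit, so you never mistake an origin aggregate for page-level data:

```python
def pick_field_data(url_level, origin_level):
    """Prefer page-level field data; fall back to the origin aggregate.
    Either argument may be None when CrUX lacks sufficient traffic.
    Returns (data, scope) so callers know which baseline they got."""
    if url_level:
        return url_level, "url"
    if origin_level:
        return origin_level, "origin"
    return None, "none"  # lab data is the only signal left
```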

Common misinterpretations to avoid

A perfect lab score doesn’t mean your field data will pass. Lab tests use controlled conditions. If your real visitors use slower devices or networks than the lab simulates, field performance will be worse. Don’t celebrate a 100 lab score if field data shows problems.

Conversely, poor lab scores don’t always mean field data fails. If your audience predominantly uses fast devices and connections, real-world performance may exceed lab predictions. Don’t panic about lab warnings if field data consistently shows good results.

Comparing PageSpeed scores between different sites is misleading. The score reflects performance relative to throttled simulation conditions, not absolute quality. A site scoring 60 might actually deliver better real-user experience than a site scoring 90 if its audience has faster devices and connections.

Small CLS differences in lab vs field are normal. Cumulative Layout Shift depends on viewport size, font loading timing, and ad insertion—all of which vary between lab simulation and real user sessions. Field CLS is what matters for ranking; lab CLS is useful for catching obvious layout instability.

A practical workflow for using both data sources

Check Search Console first to understand real-world status. Are your pages passing Core Web Vitals for real users? If yes, optimisation is lower priority. If specific URL groups fail, those are your targets.

Use PageSpeed Insights lab data to diagnose failures. Once you know which pages fail in the field, run lab tests on representative URLs from failing groups. The diagnostic details reveal what’s causing poor performance.

Fix issues identified in lab diagnostics. Optimise images, defer non-critical JavaScript, fix layout shifts, reduce server response time—whatever the lab diagnostics highlight.

Verify fixes with lab tests immediately, then monitor field data over the following weeks. Lab tests confirm the technical fix works. Field data confirms real users benefit. Both confirmations are necessary.

Re-check Search Console after 28+ days. If fixes were effective, field data should improve as the rolling window incorporates post-fix page loads. If field data doesn’t improve despite lab improvements, the issue may be device/network-specific or interaction-dependent.

When field data is unavailable

New pages or low-traffic pages may lack sufficient Chrome User Experience Report (CrUX) data for field metrics. In this case, PageSpeed Insights shows only lab data. This is common for new sites, niche pages, or recently changed URLs.

Without field data, lab data becomes your primary signal—but interpret it carefully. Lab results represent one scenario, not your audience’s reality. Optimise based on lab data but monitor field data as traffic accumulates. Once enough visits occur, field data may tell a different story.

Origin-level CrUX data may still be available even when page-level data isn’t. Check the origin summary for a broader picture of site performance. This at least confirms whether your site generally meets thresholds.

The practical takeaway

Search Console field data tells you whether real visitors experience good performance. PageSpeed Insights lab data tells you why performance is what it is and what to fix. Use both, but don’t confuse their purposes.

Prioritise field data for business decisions—it reflects actual visitor experience and influences search ranking. Use lab data for engineering decisions—it provides diagnostic detail for identifying and verifying fixes.

When field and lab data disagree, field data wins for understanding user experience. Lab data wins for reproducible debugging. A site that passes in the field but fails in the lab is fine. A site that passes in the lab but fails in the field has real problems worth solving.

Understanding this distinction saves significant time. Teams that chase lab score perfection while ignoring field reality waste effort. Teams that monitor field data and use lab diagnostics strategically fix what actually matters. If you want help interpreting your Core Web Vitals data and building a prioritised action plan, our performance service focuses on exactly this kind of evidence-based analysis.
