Core Web Vitals Pass/Fail in Plain English: Thresholds, 28-Day Windows, and What "Good" Really Means
Core Web Vitals documentation uses precise language that can obscure practical meaning. When Google says “good,” what exactly qualifies? When they mention a 28-day window, how does that affect your optimisation timeline? This article translates the technical framework into plain decisions and expectations.
The three thresholds explained simply
Each Core Web Vital has three zones: Good, Needs Improvement, and Poor. The assessment uses the 75th percentile of page loads—meaning 75 out of every 100 page loads must meet the Good threshold for your page to pass.
Largest Contentful Paint (LCP): Good is under 2.5 seconds. This measures when the biggest visible element (usually a hero image or heading) finishes rendering. If your 75th percentile LCP is 2.6 seconds, you fail—even if 74% of loads are under 2 seconds.
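To make the pass/fail rule concrete, here is a minimal Python sketch. The sample LCP values and the nearest-rank percentile method are illustrative assumptions (Google's real aggregation is more involved); only the 2.5-second threshold comes from the documentation.

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value with at least
    pct percent of samples at or below it."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

GOOD_LCP_SECONDS = 2.5  # documented "Good" boundary

# Invented sample: seven of ten loads are comfortably fast,
# yet the 75th percentile lands at 2.6 s and the page fails.
lcp_samples = [1.6, 1.8, 1.9, 2.0, 2.1, 2.2, 2.4, 2.6, 3.1, 4.0]
p75 = percentile(lcp_samples, 75)
print(p75)                       # 2.6
print(p75 <= GOOD_LCP_SECONDS)   # False: assessment fails
```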
Interaction to Next Paint (INP): Good is under 200 milliseconds. This measures how quickly the page responds when users click, tap, or type. Each page load reports its single worst interaction (with rare outliers discarded on pages that have many interactions), and the site-level assessment then applies the usual 75th percentile across those per-load values.
Cumulative Layout Shift (CLS): Good is under 0.1. This measures unexpected visual movement—elements jumping around as the page loads. The score multiplies how much of the viewport an unstable element affects by how far it moves, so a CLS of 0.1 corresponds roughly to a full-width banner pushing the page down by a tenth of the viewport height.
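Pulling the three metrics together, a hypothetical three-zone classifier might look like the sketch below. The "Good" boundaries are those quoted above; the "Poor" boundaries (4.0 s LCP, 500 ms INP, 0.25 CLS) are the documented upper limits of "Needs Improvement".

```python
# (good, poor) boundary pairs: at or below `good` is Good,
# above `poor` is Poor, everything between is Needs Improvement.
THRESHOLDS = {
    "LCP": (2.5, 4.0),    # seconds
    "INP": (200, 500),    # milliseconds
    "CLS": (0.1, 0.25),   # unitless score
}

def zone(metric, p75_value):
    good, poor = THRESHOLDS[metric]
    if p75_value <= good:
        return "Good"
    if p75_value <= poor:
        return "Needs Improvement"
    return "Poor"

print(zone("LCP", 2.6))   # Needs Improvement
print(zone("INP", 180))   # Good
print(zone("CLS", 0.3))   # Poor
```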
What the 75th percentile means in practice
The 75th percentile means the threshold applies to slower-than-average experiences, not just the average. This is intentional—Google wants most visitors, including those on slower devices and networks, to have a good experience.
Practically, this means your optimisation must work across device types. If 30% of your visitors use older Android phones, those slower loads pull your 75th percentile up. You can’t just optimise for desktop Chrome on broadband and expect to pass.
It also means outliers matter less than you’d expect. Extremely slow loads from unusual circumstances (someone on a 2G connection) don’t dominate the score because the 75th percentile ignores the worst 25%. But consistently moderate slowness across many loads does affect it.
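This asymmetry is easy to demonstrate. In the sketch below (all numbers invented, nearest-rank percentile as a stand-in for the real aggregation), one extreme outlier leaves the 75th percentile untouched, while a uniform 0.3-second slowdown across every load pushes it over the 2.5-second threshold:

```python
import math

def percentile(values, pct):
    ordered = sorted(values)
    return ordered[math.ceil(pct / 100 * len(ordered)) - 1]

baseline = [2.0] * 60 + [2.4] * 40       # p75 = 2.4 s: passes
print(percentile(baseline, 75))          # 2.4

with_outlier = baseline + [30.0]         # one 2G-style disaster load
print(percentile(with_outlier, 75))      # still 2.4

uniformly_slower = [v + 0.3 for v in baseline]   # everyone slightly slower
print(round(percentile(uniformly_slower, 75), 2))  # 2.7: now fails
```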
How the 28-day window works
Google collects field data from real Chrome users (the Chrome User Experience Report, or CrUX) over a rolling 28-day period. Each day, the oldest day’s data drops off and today’s data is added. Your Core Web Vitals assessment is always based on the most recent 28 days of real user data.
This means improvements take time to reflect. Deploy a fix today, and it takes up to 28 days for the old slow data to fully cycle out. In practice, you’ll see gradual improvement as new fast data replaces old slow data. Don’t expect overnight changes in Search Console.
It also means regressions take time to appear. If you deploy something that slows your site, Search Console won’t show it immediately. This delay can mask problems—by the time the regression appears in Search Console, it’s been affecting users for weeks.
The 28-day window explains why Search Console and real-time testing disagree. Your site today might be fast, but if it was slow two weeks ago, Search Console still reflects that older slow data. Patience is required—and monitoring tools that track trends help distinguish “still catching up” from “not actually fixed.”
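The lag can be simulated. In this toy sketch (invented numbers: 100 loads a day at a flat 3.0 s LCP before a fix, 2.0 s after), the windowed 75th percentile doesn't improve until fast data makes up 75% of the 28 days:

```python
import math
from collections import deque

def p75(values):
    ordered = sorted(values)
    return ordered[math.ceil(0.75 * len(ordered)) - 1]

# 28 days of slow history: 100 loads/day at a flat 3.0 s LCP.
window = deque([[3.0] * 100 for _ in range(28)], maxlen=28)

# Deploy a fix; each day after, loads arrive at 2.0 s and the
# oldest slow day falls out of the rolling window.
for day in range(1, 29):
    window.append([2.0] * 100)
    all_loads = [load for daily in window for load in daily]
    if day in (7, 14, 21, 28):
        print(day, p75(all_loads))   # days 7 and 14 still read 3.0
```

With flat toy data the assessed value flips abruptly once fast loads cover three quarters of the window (day 21 here); real traffic has spread-out distributions, so the decline is gradual, but the multi-week lag is the same.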
What “URL group” assessment means
Search Console doesn’t assess every URL individually—it groups similar URLs. If your blog uses a consistent template, all blog posts may share the same assessment. If product pages use a different template, they get their own group assessment.
This grouping means one slow template affects many URLs. If your archive pages are slow, every archive URL in that group fails even if individual pages vary. Fixing the template fixes the entire group.
Low-traffic URLs may inherit their group’s assessment rather than having individual data. This is usually fine—URLs using the same template typically have similar performance characteristics.
When Core Web Vitals actually affect rankings
Core Web Vitals is one of many ranking signals, and not the strongest one. Content relevance, backlinks, and other factors dominate. But among pages with similar content quality, Core Web Vitals can be a tiebreaker.
Failing Core Web Vitals won’t crash your rankings overnight. Passing them won’t rocket a thin-content page to position one. Think of it as a qualifier—passing keeps you in the running; failing may disadvantage you when competing against similar content that passes.
The practical implication: fix Core Web Vitals if you’re failing, but don’t expect ranking miracles from passing. The benefit is real but moderate. The bigger benefit is improved user experience, which indirectly supports engagement metrics that do influence rankings more strongly.
Common scenarios and what they mean
Passing on desktop, failing on mobile: Your site performs well on fast hardware but struggles on mobile devices. This is extremely common. Mobile optimisation—lighter JavaScript, responsive images, reduced DOM complexity—is the fix. Since Google uses mobile-first indexing, the mobile assessment matters most.
Passing LCP and CLS but failing INP: Your page loads fine and is visually stable, but interactions are sluggish. Heavy JavaScript, long tasks, or expensive event handlers are typical causes. This is the most common failure pattern in 2026.
Intermittent failures: Some weeks you pass, other weeks you fail. This suggests you’re on the boundary. Traffic pattern changes, seasonal variations in device mix, or inconsistent server performance can push the 75th percentile above or below thresholds. Building headroom below thresholds prevents this instability.
Recently deployed improvements not showing: The 28-day window is still cycling. Wait. Monitor your own real-user metrics for immediate confirmation that the fix works; Search Console will catch up.
The practical approach
Check Search Console monthly for Core Web Vitals status. Address failures systematically—identify the failing metric, diagnose the cause, fix, verify with lab testing, and monitor field data for improvement.
Build margin below thresholds. Aiming for 2.4s LCP when the threshold is 2.5s leaves no room for variation. Aim for 2.0s or better. Similarly, target INP under 150ms rather than exactly 200ms. Margin absorbs normal variation without triggering failures.
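A hypothetical headroom check makes the margin idea concrete. The thresholds are the documented "Good" boundaries; the 20% margin target and the current values are invented for illustration:

```python
GOOD_THRESHOLDS = {"LCP": 2.5, "INP": 200, "CLS": 0.1}

def headroom(metric, p75_value):
    """Remaining margin as a fraction of the Good threshold."""
    limit = GOOD_THRESHOLDS[metric]
    return (limit - p75_value) / limit

# Flag any metric that passes today but sits within 20% of failing.
current = {"LCP": 2.4, "INP": 150, "CLS": 0.05}
for metric, value in current.items():
    margin = headroom(metric, value)
    status = "comfortable" if margin >= 0.2 else "at risk"
    print(metric, round(margin, 2), status)
```

Here the 2.4 s LCP passes but has only 4% headroom and gets flagged, matching the article's point that scraping under the threshold invites intermittent failures.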
Prioritise mobile performance. Desktop almost always performs better than mobile. If mobile passes, desktop likely passes too. Optimise for mobile first, and desktop benefits automatically.
Don’t over-invest in perfecting already-passing metrics. Once you’re comfortably below thresholds with margin, the performance investment delivers diminishing returns for SEO. Redirect effort to content quality, which has a larger ranking impact. For help building a practical Core Web Vitals improvement plan, our performance service takes a prioritised approach that balances effort against measurable benefit.