Build a Weekly Core Web Vitals Report for Clients (CrUX + Search Console): A Practical Automation Blueprint
Clients want to know whether their site’s performance is improving, stable, or regressing—without learning to navigate Google Search Console or interpret CrUX datasets. A weekly automated report that translates raw performance data into clear trends and actionable status solves this communication problem. This blueprint covers the data sources, assembly logic, and delivery mechanism for building one.
Why weekly reporting matters
The CrUX API updates daily, but each data point is a rolling 28-day aggregate, so day-to-day movement is small. Daily reports add noise without adding signal; monthly reports miss regressions that accumulate over weeks. A weekly cadence balances timeliness with meaningful trend visibility.
Clients who see regular performance reports maintain confidence that their investment is monitored. When regressions occur, weekly reporting catches them within days rather than discovering them during a quarterly review when the damage is already done.
Automated reporting removes the “I’ll check it when I have time” failure mode. Manual performance checks get deprioritised. Automated reports arrive regardless of workload, ensuring continuous monitoring.
Data sources and what they provide
The Chrome User Experience Report (CrUX) API provides real-user performance data. Query it with an API key for origin-level or URL-level Core Web Vitals: LCP, INP, and CLS distributions, plus TTFB and FCP as supplementary metrics. This is the same field data Google uses in its ranking assessment.
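A minimal sketch of an origin-level query, assuming the `requests` library and a placeholder API key; the endpoint and request shape follow the public CrUX API (`chromeuxreport.googleapis.com/v1/records:queryRecord`).

```python
import requests

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
API_KEY = "YOUR_CRUX_API_KEY"  # placeholder: a Google API key with the CrUX API enabled

def query_crux(origin: str, form_factor: str = "PHONE") -> dict:
    """Fetch the rolling 28-day field data for an origin from the CrUX API."""
    response = requests.post(
        CRUX_ENDPOINT,
        params={"key": API_KEY},
        json={
            "origin": origin,           # use {"url": ...} instead for URL-level data
            "formFactor": form_factor,  # PHONE, DESKTOP, or TABLET
            "metrics": [
                "largest_contentful_paint",
                "interaction_to_next_paint",
                "cumulative_layout_shift",
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["record"]

# Example: record = query_crux("https://example.com")
```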
Google Search Console's Core Web Vitals report groups URLs by template, shows which groups pass or fail, and names the metric causing each failure, which ties performance to search visibility directly. One caveat: that report is not exposed through the public Search Console API, which covers search analytics (clicks, impressions, queries), sitemaps, and URL inspection, so group-level status usually has to come from the report's export or be reconstructed from CrUX data.
PageSpeed Insights API provides both field data (from CrUX) and lab data with diagnostic details. Use this as a secondary source for specific URL analysis when the weekly report identifies problems.
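A hedged sketch of a follow-up PSI query for a single URL, again assuming `requests` and a placeholder API key; the v5 `runPagespeed` endpoint returns both `loadingExperience` (CrUX-backed field data) and `lighthouseResult` (lab data).

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_diagnostics(url: str, api_key: str, strategy: str = "mobile") -> dict:
    """Fetch PageSpeed Insights results for one URL (field + lab data)."""
    response = requests.get(
        PSI_ENDPOINT,
        params={"url": url, "key": api_key, "strategy": strategy},
        timeout=60,
    )
    response.raise_for_status()
    data = response.json()
    return {
        "field": data.get("loadingExperience", {}),                 # CrUX field data for this URL
        "lab": data.get("lighthouseResult", {}).get("audits", {}),  # Lighthouse diagnostic audits
    }
```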
Report structure that clients understand
A one-page summary at the top: overall status (Pass/Fail/Mixed), trend direction (Improving/Stable/Regressing), and any action items. Clients should understand their site’s performance health in 10 seconds from this summary.
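One way to derive that headline automatically, as a sketch: assume p75 values are already collected per metric, and use the standard "good" thresholds (LCP 2.5 s, INP 200 ms, CLS 0.1). The key names and the 5% tolerance are illustrative choices, not fixed rules.

```python
THRESHOLDS = {"lcp_p75_ms": 2500, "inp_p75_ms": 200, "cls_p75": 0.1}  # "good" thresholds

def overall_status(p75: dict) -> str:
    """Pass / Fail / Mixed based on p75 values vs the good thresholds."""
    passing = [p75[m] <= limit for m, limit in THRESHOLDS.items()]
    if all(passing):
        return "Pass"
    if not any(passing):
        return "Fail"
    return "Mixed"

def trend(this_week: dict, last_week: dict, tolerance: float = 0.05) -> str:
    """Improving / Stable / Regressing from week-over-week p75 changes."""
    changes = [(this_week[m] - last_week[m]) / last_week[m] for m in THRESHOLDS]
    if max(changes) > tolerance:       # any metric meaningfully worse -> regressing
        return "Regressing"
    if min(changes) < -tolerance:      # otherwise, any metric meaningfully better -> improving
        return "Improving"
    return "Stable"

# Example: overall_status({"lcp_p75_ms": 2300, "inp_p75_ms": 180, "cls_p75": 0.06}) -> "Pass"
```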
Metric-by-metric trends showing LCP, INP, and CLS values over the past 4-8 weeks. Line charts with threshold lines make it immediately clear whether metrics are above or below passing thresholds and which direction they’re trending.
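A minimal matplotlib sketch of one such trend line with its threshold marked; it assumes the weekly p75 values are already available as a list, and the labels and filename are placeholders.

```python
import matplotlib.pyplot as plt

def plot_metric_trend(weeks, p75_values, threshold, title, outfile):
    """Plot weekly p75 values with the passing threshold as a reference line."""
    fig, ax = plt.subplots(figsize=(8, 3))
    ax.plot(weeks, p75_values, marker="o", label="p75")
    ax.axhline(threshold, color="red", linestyle="--", label="passing threshold")
    ax.set_title(title)
    ax.set_xlabel("Week")
    ax.legend()
    fig.tight_layout()
    fig.savefig(outfile)
    plt.close(fig)

# Example: plot_metric_trend(["W1", "W2", "W3"], [2.8, 2.6, 2.4], 2.5, "LCP (s), mobile", "lcp.png")
```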
Device-specific breakdown (mobile vs desktop), because the two often diverge. Most clients' sites perform differently across device types, and Google assesses mobile and desktop separately, with mobile typically the harder of the two to pass.
URL group status from Search Console showing which page types (templates) pass and fail. This directs optimisation effort: “product pages pass but blog posts fail INP” is immediately actionable.
Building the data pipeline
Set up a scheduled script (cron job, cloud function, or automation platform) that runs weekly. The script queries CrUX API for origin metrics, queries Search Console API for page experience status, and stores results in a spreadsheet, database, or reporting tool.
CrUX API queries require a Google API key with the CrUX API enabled. The request specifies an origin (or a single URL) plus an optional form factor, and the response contains each metric's distribution (the share of good, needs-improvement, and poor experiences) and its 75th-percentile value. This is the same data that drives Search Console's Core Web Vitals report.
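Continuing the earlier sketch, this pulls the p75 and the distribution out of a CrUX record; the key names follow the API's response format (`metrics`, `histogram`, `percentiles`), and the three metric keys are the Core Web Vitals.

```python
CWV_METRICS = (
    "largest_contentful_paint",
    "interaction_to_next_paint",
    "cumulative_layout_shift",
)

def extract_p75_and_distributions(record: dict) -> dict:
    """Flatten a CrUX record into p75 values and good/NI/poor shares per metric."""
    summary = {}
    for name in CWV_METRICS:
        metric = record["metrics"][name]
        good, needs_improvement, poor = (
            bin_.get("density", 0.0) for bin_ in metric["histogram"]
        )
        summary[name] = {
            "p75": metric["percentiles"]["p75"],
            "good": good,
            "needs_improvement": needs_improvement,
            "poor": poor,
        }
    return summary
```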
Search Console API queries require OAuth authentication for the client's property (or a service account added as a user on it). The Search Analytics endpoint returns clicks, impressions, and queries for the reporting period; because the Core Web Vitals report itself is not available through the API, pair this search data with the CrUX results, or export the report from the UI when group-level pass/fail status is needed.
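A sketch of a Search Analytics query using google-api-python-client, assuming authorised credentials are already available as `creds`; the dates, dimensions, and row limit are placeholders.

```python
from googleapiclient.discovery import build

def weekly_search_performance(creds, site_url: str, start: str, end: str) -> list:
    """Fetch clicks/impressions per page for the reporting week from Search Console."""
    service = build("searchconsole", "v1", credentials=creds)
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            "startDate": start,      # e.g. "2024-06-03"
            "endDate": end,          # e.g. "2024-06-09"
            "dimensions": ["page"],
            "rowLimit": 250,
        },
    ).execute()
    return response.get("rows", [])
```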
Store historical data to build trend charts. Each weekly fetch adds a new data point. Over months, this builds a valuable performance history showing the impact of optimisation work, traffic changes, and external factors.
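A minimal sketch of the storage step using a flat CSV file: one row per origin per week is enough to drive the trend charts. The column names are illustrative, and a database or spreadsheet works just as well.

```python
import csv
from datetime import date
from pathlib import Path

FIELDS = ["week", "origin", "lcp_p75_ms", "inp_p75_ms", "cls_p75"]

def append_weekly_row(path: str, origin: str, lcp: float, inp: float, cls: float) -> None:
    """Append this week's p75 values to the history file, creating it if needed."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "week": date.today().isoformat(),
            "origin": origin,
            "lcp_p75_ms": lcp,
            "inp_p75_ms": inp,
            "cls_p75": cls,
        })
```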
Automating report generation
Use a template (Google Slides, a simple HTML template, or a reporting platform) that’s populated automatically with the week’s data. Charts update from the data store. Summary text generates from comparison logic (this week vs last week, current vs threshold).
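A minimal sketch of the generation step using a plain HTML string template; a slide deck or reporting platform would replace this, but the populate-from-the-data-store idea is the same, and the chart filenames match the plotting sketch above.

```python
REPORT_TEMPLATE = """<html><body>
<h1>Weekly Core Web Vitals report: {origin}</h1>
<p>Status: <strong>{status}</strong> | Trend: <strong>{trend}</strong></p>
<p>{summary_text}</p>
<img src="lcp.png" alt="LCP trend">
<img src="inp.png" alt="INP trend">
<img src="cls.png" alt="CLS trend">
</body></html>"""

def render_report(origin: str, status: str, trend: str, summary_text: str) -> str:
    """Fill the HTML template with this week's computed values."""
    return REPORT_TEMPLATE.format(
        origin=origin, status=status, trend=trend, summary_text=summary_text
    )
```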
Conditional formatting highlights regressions. If LCP worsened by more than 100ms week-over-week, highlight it red. If INP crossed from passing to failing, flag it prominently. These visual cues draw attention to items needing action.
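A sketch of the comparison logic behind those cues, using the illustrative rules named above (a 100 ms week-over-week LCP change, the 200 ms INP and 0.1 CLS passing limits) and the same field names as the storage sketch.

```python
def flag_regressions(this_week: dict, last_week: dict) -> list:
    """Return human-readable flags for week-over-week regressions."""
    flags = []
    lcp_delta = this_week["lcp_p75_ms"] - last_week["lcp_p75_ms"]
    if lcp_delta > 100:
        flags.append(f"LCP worsened by {lcp_delta:.0f} ms week-over-week")
    if last_week["inp_p75_ms"] <= 200 < this_week["inp_p75_ms"]:
        flags.append("INP crossed from passing to failing (over 200 ms)")
    if last_week["cls_p75"] <= 0.1 < this_week["cls_p75"]:
        flags.append("CLS crossed from passing to failing (over 0.1)")
    return flags

# Example: flag_regressions({"lcp_p75_ms": 2600, "inp_p75_ms": 210, "cls_p75": 0.05},
#                           {"lcp_p75_ms": 2400, "inp_p75_ms": 190, "cls_p75": 0.05})
```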
Delivery via email on a consistent day (Monday morning is effective—starts the week with performance context). Include the summary in the email body with a link to the full report. Clients who just need the headline read the email; clients who want detail click through.
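A hedged sketch of the delivery step with smtplib; the sender address, SMTP host, credentials, and report URL are all placeholders for whatever your stack provides (a transactional email API works equally well).

```python
import smtplib
from email.message import EmailMessage

def send_summary_email(summary_text: str, report_url: str, to_addr: str) -> None:
    """Email the one-page summary with a link to the full report."""
    msg = EmailMessage()
    msg["Subject"] = "Weekly Core Web Vitals report"
    msg["From"] = "reports@example-agency.com"   # placeholder sender
    msg["To"] = to_addr
    msg.set_content(f"{summary_text}\n\nFull report: {report_url}")

    # Placeholder SMTP settings; swap in your provider's host and credentials.
    with smtplib.SMTP("smtp.example.com", 587) as smtp:
        smtp.starttls()
        smtp.login("reports@example-agency.com", "APP_PASSWORD")
        smtp.send_message(msg)
```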
Interpreting trends for clients
Stable metrics at passing thresholds are good news. Frame this positively: “Performance remains strong and within Google’s passing thresholds.” No action needed, confidence maintained.
Gradual improvement shows optimisation work is effective. Quantify the improvement: “LCP improved from 2.8s to 2.3s over the past month, now comfortably passing Google’s 2.5s threshold.” This connects effort to measurable results.
Sudden regression requires investigation and communication. Identify when the regression started and correlate with changes: new plugins, content changes, traffic spikes, or external factors. Frame the finding and proposed response: “INP regressed from 180ms to 270ms coinciding with the new product filter implementation. We recommend reviewing the filter’s JavaScript performance.”
Gradual degradation often indicates growing site complexity: more plugins, more content, more third-party scripts accumulating over time. Flag the trend early: “CLS has increased from 0.04 to 0.08 over two months. Still passing but approaching the 0.1 threshold. We recommend investigating before it reaches failing status.”
Handling common data issues
Low-traffic URLs may lack CrUX data. For these URLs, fall back to origin-level data or note that insufficient data exists. Don’t present PageSpeed lab data as equivalent—it’s a different measurement that can mislead.
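A sketch of that fallback, reusing the CrUX request shape from earlier; the API responds with a 404 when no record exists for the requested URL, which is the signal to drop down to origin-level data or report "insufficient data".

```python
import requests

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def crux_with_fallback(page_url: str, origin: str, api_key: str):
    """Try URL-level CrUX data first; fall back to origin-level, then to 'no data'."""
    for level, body in (("url", {"url": page_url}), ("origin", {"origin": origin})):
        response = requests.post(
            CRUX_ENDPOINT, params={"key": api_key}, json=body, timeout=30
        )
        if response.status_code == 404:   # no CrUX record at this level
            continue
        response.raise_for_status()
        return level, response.json()["record"]
    return "none", None   # report "insufficient field data" rather than substituting lab numbers
```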
New URLs won’t have historical data. Start tracking from when they first appear in CrUX, and note that because CrUX aggregates over a rolling 28-day window, new pages need roughly a month of sufficient traffic before reliable field data is available.
Seasonal traffic variations affect metric distributions. More mobile traffic during holidays shifts metric profiles. Year-over-year comparison, when available, provides better context than week-over-week for seasonal sites.
Scaling across clients
Build the reporting system as a reusable pipeline that accepts site URLs as input. Each client gets their own data store and report template, but the collection and generation logic is shared. Adding a new client means adding their URLs and API credentials, not rebuilding the pipeline.
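A sketch of that shared pipeline driven by a per-client configuration; the client names, origins, property identifiers, and recipients here are purely illustrative.

```python
# Per-client configuration; the collection and report logic stays shared.
CLIENTS = [
    {
        "name": "acme-store",
        "origin": "https://www.acme-store.example",
        "gsc_property": "sc-domain:acme-store.example",
        "report_recipients": ["owner@acme-store.example"],
    },
    {
        "name": "blue-widgets",
        "origin": "https://blue-widgets.example",
        "gsc_property": "https://blue-widgets.example/",
        "report_recipients": ["marketing@blue-widgets.example"],
    },
]

def run_all_clients(run_for_client) -> None:
    """Run the same weekly pipeline (collect, store, report, deliver) for every client."""
    for client in CLIENTS:
        run_for_client(client)
```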
Use consistent metrics and formatting across clients. This makes reports comparable and simplifies your team’s interpretation. Standardised reports also train clients to understand the format, reducing explanation time.
Automate as much as possible but review before sending. Automated collection and report generation saves hours. But a quick manual review catches data anomalies, API errors, or unusual results that warrant context before the client sees them.
The practical takeaway
Automated performance reporting transforms Core Web Vitals monitoring from a sporadic check into a continuous service. Clients see consistent, understandable updates. Regressions surface quickly. Improvement trends demonstrate value.
The initial setup takes a few hours: API configuration, data storage, template creation, and scheduling. Once running, the system requires minimal maintenance—just occasional review and adjustment as reporting needs evolve.
For agencies and consultants managing multiple sites, this reporting capability differentiates your service. Proactive performance monitoring builds trust and provides early warning. Combined with a performance optimisation service, weekly reporting creates a continuous improvement cycle where monitoring identifies issues and optimisation resolves them.