🚀 PageSpeed Insights & Core Web Vitals

How Hugo integrates Google PageSpeed Insights to measure Lighthouse performance, Core Web Vitals, and optimization opportunities.

Hugo Team · March 18, 2026

Tags: pagespeed, lighthouse, core web vitals, lcp, fcp, tbt, cls, inp, speed index, field data, crux, diagnostics, account

Hugo integrates directly with Google's PageSpeed Insights API v5 to deliver real Lighthouse performance data. This goes beyond the static HTML checks in the standard Performance category — it actually loads your page in a headless Chrome browser and measures real-world performance metrics.[1] This check adds 5% weight to your overall score.
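As a sketch of what such an integration looks like, the pair below builds a request URL for the public API v5 endpoint and extracts the performance score from its response (the endpoint and response fields follow Google's published API shape; the helper names are illustrative, not Hugo's actual code):

```javascript
// Build a PageSpeed Insights API v5 request URL (real public endpoint).
function psiUrl(pageUrl, apiKey, strategy = "mobile") {
  const base = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";
  const params = new URLSearchParams({ url: pageUrl, key: apiKey, strategy });
  return `${base}?${params}`;
}

// Extract the 0–100 performance score from a v5 response body.
// The API reports it as a 0–1 fraction under lighthouseResult.categories.
function performanceScore(response) {
  const raw = response?.lighthouseResult?.categories?.performance?.score;
  return raw == null ? null : Math.round(raw * 100);
}
```

Fetching `psiUrl(...)` and passing the parsed JSON to `performanceScore` yields the same 0–100 score shown in the report.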

Lighthouse Performance Score

The Lighthouse performance score (0–100) is a weighted combination of five lab metrics: First Contentful Paint, Speed Index, Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift.[2] This is the same score you'd get from Chrome DevTools or web.dev/measure.

Lighthouse Performance (score)

  • Good — 90–100
  • Warning — 50–89
  • Poor — Below 50
Google considers 90+ as a "good" performance score. Sites scoring below 50 likely have significant performance issues affecting user experience and search rankings.[2]
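The aggregation step can be sketched as a weighted average. The weights below are Lighthouse's published v10 weighting (each metric is first scored 0–1 against a log-normal curve, which this sketch takes as given rather than reimplementing):

```javascript
// Lighthouse v10 metric weights for the performance category.
const WEIGHTS = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

// metricScores: per-metric scores on a 0–1 scale, e.g. { fcp: 0.9, si: 0.8, ... }
function lighthouseScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}
```

Note how TBT (30%) dominates: a JavaScript-heavy page can score poorly even when its paint metrics are fast.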

Core Web Vitals

Core Web Vitals are Google's specific metrics for measuring real-world user experience. They've been part of Google's page experience ranking signal since June 2021.[3] Here's what each measures:

Largest Contentful Paint (LCP)

LCP measures how quickly the largest visible content element (image, video, or text block) loads.[4] It directly reflects perceived loading speed.


LCP (seconds)

  • Good — ≤ 2.5 seconds
  • Warning — 2.5–4 seconds
  • Poor — > 4 seconds
Optimize by: preloading critical images, using CDN for static assets, reducing server response time, and avoiding render-blocking resources.[4]
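All of the Good/Warning/Poor bands in this article fit one small classifier. The thresholds below are copied from the tables in this section (including the metrics covered further down); the function itself is an illustrative sketch:

```javascript
// Per-metric thresholds: [good upper bound, warning upper bound].
// Anything above the second bound is "poor". Units match the tables.
const THRESHOLDS = {
  lcp: [2.5, 4],    // seconds
  fcp: [1.8, 3],    // seconds
  tbt: [200, 600],  // milliseconds
  cls: [0.1, 0.25], // unitless score
  si:  [3.4, 5.8],  // seconds
  inp: [200, 500],  // milliseconds
};

function rate(metric, value) {
  const [good, warn] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= warn) return "warning";
  return "poor";
}
```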

First Contentful Paint (FCP)

FCP measures when the browser first renders any content (text, image, SVG, canvas).[5] It indicates when the user first sees something happening.


FCP (seconds)

  • Good — ≤ 1.8 seconds
  • Warning — 1.8–3 seconds
  • Poor — > 3 seconds
Improve by: eliminating render-blocking CSS/JS, inlining critical CSS, using font-display: swap, and reducing server response time.[5]

Total Blocking Time (TBT)

TBT measures the total time the main thread was blocked by long tasks (>50ms) between FCP and Time to Interactive.[6] High TBT means the page feels unresponsive to user input.


TBT (ms)

  • Good — ≤ 200ms
  • Warning — 200–600ms
  • Poor — > 600ms
Reduce by: code-splitting JavaScript bundles, deferring non-critical scripts, minimizing main-thread work, and reducing JavaScript execution time.[6]

Cumulative Layout Shift (CLS)

CLS measures visual stability — how much page content shifts during loading.[7] Unexpected layout shifts frustrate users, especially on mobile.


CLS (score)

  • Good — ≤ 0.1
  • Warning — 0.1–0.25
  • Poor — > 0.25
Fix by: always specifying width/height on images and videos, avoiding content insertion above the fold, using transform animations instead of layout-triggering properties.[7]

Speed Index

Speed Index measures how quickly content is visually populated during page load. A lower value means content appears faster.

Speed Index (seconds)

  • Good — ≤ 3.4 seconds
  • Warning — 3.4–5.8 seconds
  • Poor — > 5.8 seconds
Improve by: optimizing the critical rendering path, deferring off-screen content, and using efficient image formats (WebP, AVIF).[2]

Interaction to Next Paint (INP)

INP measures responsiveness — the time from when a user interacts (clicks, taps, or presses a key) to the next visual update on screen.[8] It replaced First Input Delay (FID) as an official Core Web Vital in March 2024, because INP covers the full interaction lifecycle, not just the first one.


INP (ms)

  • Good — ≤ 200ms
  • Warning — 200–500ms
  • Poor — > 500ms
Improve by: breaking up long JavaScript tasks (>50ms), using scheduler.yield() or setTimeout to yield to the browser, reducing third-party script impact, and avoiding heavy event handlers.[8]
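The "yield to the browser" advice can be sketched as follows. `scheduler.yield()` is a real Chromium API; the `setTimeout` fallback keeps the sketch portable, and the chunk size of 50 items is an arbitrary illustration, not a recommended value:

```javascript
// Yield control back to the event loop so pending input handlers can run.
// Prefers scheduler.yield() where available (Chromium), else setTimeout.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array without blocking the main thread in one long task.
async function processInChunks(items, work, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i++) {
    results.push(work(items[i]));
    if ((i + 1) % chunkSize === 0) await yieldToMain();
  }
  return results;
}
```

Each yield point ends the current task, so no single task exceeds the time needed to process one chunk — which is exactly what keeps INP low.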

Real-World Data (Chrome UX Report)

In addition to lab measurements, Hugo shows field data from the Chrome UX Report (CrUX) when available.[9] This is real performance data collected from Chrome users who visited the URL over the past 28 days — not a simulation.

ℹ️ Lab vs Field Data

Lab data (Lighthouse) simulates a throttled mobile connection in a controlled environment. Field data (CrUX) reflects actual user experiences across real devices and network conditions. Both matter — lab data is reproducible and actionable; field data reflects what your users actually experience.[9]

For each metric, Hugo shows the p75 value (the 75th percentile — meaning 75% of real users experienced this value or better) and a distribution bar split into Fast, Moderate, and Slow segments.
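The p75 statistic itself is straightforward to compute from a list of samples. This is a nearest-rank sketch for intuition; CrUX computes its percentiles server-side over its own dataset:

```javascript
// Nearest-rank 75th percentile: the smallest sample such that at least
// 75% of all samples are less than or equal to it.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length); // 1-based rank
  return sorted[rank - 1];
}
```

Reporting p75 rather than a mean means one outlier on a slow connection cannot drag the headline number; conversely, a metric only looks "good" when the large majority of visits were good.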

⚠️ Data Availability

CrUX data is only available for URLs with sufficient traffic. New pages, low-traffic sites, or URLs behind authentication will show "Insufficient real-world data". This is normal and does not affect your score.

Diagnostics

Hugo also surfaces several diagnostic metrics that help you understand where time is being spent, even if they don't directly affect your Lighthouse score.

  • Time to Interactive (TTI) — Time until the page is fully interactive and responds to user input. Thresholds: ≤3.8s good, ≤7.3s needs work. Note: TTI was removed from Lighthouse scoring in v10 but remains a useful diagnostic.
  • Main Thread Work — Total time the browser main thread spent parsing HTML, executing JavaScript, and rendering. Broken down by category (Script Evaluation, Style & Layout, Rendering, etc.).
  • Resource Breakdown — Total page weight by resource type (scripts, images, stylesheets, fonts). Helps identify which asset categories dominate transfer size.
  • Third-party Impact — External scripts (analytics, ads, chat widgets) that block the main thread. Identifies specific origins and their blocking contribution.
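A resource breakdown like the one above boils down to grouping transfer sizes by type. The record shape here is a hypothetical simplification of what Lighthouse's resource audits report, chosen to keep the sketch self-contained:

```javascript
// Sum transfer sizes (bytes) by resource type.
// resources: e.g. [{ type: "script", transferSize: 12345 }, ...]
function breakdownByType(resources) {
  const totals = {};
  for (const { type, transferSize } of resources) {
    totals[type] = (totals[type] || 0) + transferSize;
  }
  return totals;
}
```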

Optimization Opportunities

Beyond metrics, PageSpeed Insights identifies specific optimization opportunities with estimated savings. Hugo surfaces these as actionable recommendations. Click any opportunity row to see a detailed description.

  • Render-blocking resources — CSS and JS that delay first paint
  • Unused CSS/JavaScript — Code downloaded but never executed
  • Modern image formats — Using WebP or AVIF instead of JPEG/PNG
  • Offscreen images — Images not in the viewport that could be lazy-loaded
  • Text compression — Enabling gzip or Brotli for text assets
  • HTTP/2 — Modern protocol with multiplexing for faster parallel downloads
  • DOM size — Excessively large DOM trees that slow rendering
  • Server response time — Slow TTFB delays all other resources

References

  1. [1] Google Developers — PageSpeed Insights API v5 — developers.google.com
  2. [2] Chrome Developers — Lighthouse performance scoring — developer.chrome.com
  3. [3] Google Search Central — Understanding page experience in Google Search results — developers.google.com
  4. [4] web.dev — Largest Contentful Paint (LCP) — web.dev
  5. [5] web.dev — First Contentful Paint (FCP) — web.dev
  6. [6] web.dev — Optimize Total Blocking Time — web.dev
  7. [7] web.dev — Cumulative Layout Shift (CLS) — web.dev
  8. [8] web.dev — Interaction to Next Paint (INP) — web.dev
  9. [9] web.dev — User-centric performance metrics (Lab vs Field) — web.dev
