Google's Core Web Vitals — LCP, INP, CLS — are an explicit ranking signal in mobile search, and as of the March 2026 algorithm update all three are weighted equally. 43% of measured sites currently fail the 200ms INP threshold, making it the most commonly failed Vital in 2026. This guide is the complete UK playbook for hitting all three on every same-day launch and on every speed-optimisation engagement.
The thresholds, current to 2026
Pass thresholds: LCP under 2.5 seconds, INP under 200 milliseconds, CLS under 0.1. Field data (Chrome User Experience Report / CrUX), 75th percentile, mobile bucket — that is the only measurement Google uses for ranking. Lab tools (Lighthouse, PageSpeed Insights' simulated run) are useful diagnostics, but they will not save you if your real users on mid-tier Android devices over slow 4G consistently get a 3.4-second LCP. Tune to the field data, because that is what gets scored.
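The pass bands above can be expressed as a small classifier for your own monitoring. The band boundaries are Google's published good / needs-improvement / poor cut-offs; the function and constant names are assumptions for illustration, not an official API.

```javascript
// Sketch: classify a 75th-percentile field value into Google's rating bands.
// Boundaries are the published cut-offs; names are illustrative assumptions.
const BANDS = {
  LCP: [2500, 4000], // milliseconds
  INP: [200, 500],   // milliseconds
  CLS: [0.1, 0.25],  // unitless layout-shift score
};

function rateVital(name, p75) {
  const [good, poor] = BANDS[name];
  if (p75 <= good) return "good";
  if (p75 <= poor) return "needs-improvement";
  return "poor";
}
```

Feed it the p75 values from CrUX, not lab numbers, for the reason above: only the field bucket is scored.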
Why this matters more in 2026
When Google replaced FID with INP in March 2024, most agencies treated it as an internal change with no real teeth. The March 2026 update closed that loophole — a poor INP score now carries the same ranking penalty as a poor LCP score, and sites in the "needs improvement" band have seen position drops averaging 0.8 places in the rolling six-month tracking studies. For a site sitting at position 6 for its money keyword, 0.8 places is the difference between 4.7% CTR and 3.2% CTR — roughly a third of your click volume gone.
LCP — image budget and the 200KB rule
Hero images go through `next/image` (or `<picture>` with AVIF + WebP fallbacks) at the actual rendered viewport size. We ban anything heavier than 200KB above the fold. A typical hero ships at 38-60KB. Three steps: (1) source images at the exact width they render — a 1920×1080 hero rendered in a 375px viewport ships more than five times the pixels it needs; (2) generate AVIF first, WebP as fallback, JPEG/PNG as last resort; (3) for photographs accept quality 70-75, for product shots 80-85. Background "texture" images get baked into a CSS gradient where the design tolerates it — zero KB instead of forty.
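The fallback chain from step (2) can be written as a plain `<picture>` element. Filenames and dimensions here are illustrative; the explicit `width`/`height` attributes also reserve the layout box, which pays off in the CLS section later.

```html
<!-- AVIF first, WebP fallback, JPEG last resort; explicit dimensions
     reserve layout space. Filenames and sizes are illustrative. -->
<picture>
  <source srcset="hero-750w.avif" type="image/avif" />
  <source srcset="hero-750w.webp" type="image/webp" />
  <img src="hero-750w.jpg" width="750" height="420"
       alt="Product hero" fetchpriority="high" decoding="async" />
</picture>
```

`fetchpriority="high"` tells the browser this is the LCP candidate and should jump the request queue.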
LCP — font subsetting
Two display fonts max. Both subset to the Latin range, both preloaded, both with `font-display: swap`. Variable fonts where the design tolerates them — one HTTP request instead of four. Most agency stacks ship 4-7 font files (regular, medium, bold, italic for both heading and body) — 200-400KB of WOFF2 before a single byte of content. Pick one variable font for body, one display font for headings, both self-hosted via `next/font` so they bypass third-party domains.
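A minimal sketch of the `next/font` setup described above, assuming Inter as the variable body font — the font choice is an example, not the article's recommendation. `next/font/google` downloads the file at build time and serves it from your own domain.

```jsx
// Sketch: one self-hosted variable font via next/font (real API),
// Latin subset, swap display. Inter is an assumed example.
import { Inter } from "next/font/google";

const inter = Inter({ subsets: ["latin"], display: "swap" });

export default function RootLayout({ children }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}
```

One variable file covers every weight the design uses, which is where the "one request instead of four" saving comes from.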
LCP — hosting choices that quietly help
The host matters more than people think. Static-first hosts that serve from the edge — Vercel, Netlify, Cloudflare Pages — give you TTFB under 100ms from most UK locations because the HTML is cached at a PoP near the user. Shared PHP hosts (the £3/month UK plans) routinely serve a TTFB of 600-1,400ms, which alone can blow your LCP budget on slow connections. For a Next.js build with `force-static`, deploy to Vercel (UK region) or Cloudflare; for plain HTML, deploy to Netlify or to Hetzner behind a Cloudflare front. Either way, the cached HTML sits at a PoP within roughly 25ms of any UK city before the user ever requests it.
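For the Next.js route described above, the static opt-in is a one-line segment config. `force-static` and `revalidate` are real App Router options; the hourly interval is an assumed example.

```javascript
// app/page.js — Next.js App Router segment config (sketch).
export const dynamic = "force-static"; // render at build time, serve from the edge cache
export const revalidate = 3600;        // optional: regenerate at most hourly (assumed interval)
```

With this in place the HTML never touches an origin server on the request path, which is what keeps TTFB in the sub-100ms band.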
INP — the actual offenders
Five common offenders, in order of frequency. (1) Heavy event handlers that block the main thread — usually a third-party script reacting to a click. (2) React state updates that touch the whole tree because someone forgot a `useMemo` on a derived value. (3) Synchronous reads of `offsetTop` or `scrollTop` inside scroll handlers, which force a layout flush. (4) Animations that toggle `display: none` instead of `opacity: 0` — toggling `display` forces a full layout pass, while `opacity` can be animated on the compositor. (5) Hydration of an over-eager component tree on a server-rendered page. The default client-rendered React tree should be small; most pages should have zero client components in the critical path.
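One generic fix for offender (1) — and for any long task — is to yield back to the event loop between chunks of work, so a pending click can be handled mid-task instead of waiting. A sketch with assumed helper names; `scheduler.yield()` is the newer browser API and the `setTimeout` promise is the fallback elsewhere:

```javascript
// Sketch: break a long main-thread task into chunks, yielding between them.
// Helper names are assumptions; scheduler.yield() is used where available.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, handle, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handle(item));
    }
    await yieldToMain(); // pending input events run here instead of waiting
  }
  return results;
}
```

The work still happens; it just stops monopolising the thread in one 500ms block, which is what INP actually measures.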
INP — third-party tax
Every UK site we audit ships at least one of: GA4 (87KB), GTM (47KB), Hotjar (78KB), Meta Pixel (62KB), LinkedIn Insight (39KB), Intercom widget (320KB). Most ship four of those. That is 600+KB of third-party JavaScript executing on the main thread before the user touches anything. The fix: every analytics or ad pixel sits behind `requestIdleCallback` triggered after the page is fully interactive AND consent has been granted. Hero, navigation, CTAs are pure HTML and CSS — the JavaScript bundle for the first paint is under 12KB.
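The consent-plus-idle gate described above can be sketched as follows. Function names are assumptions; `requestIdleCallback` is the real browser API, with a timeout fallback for browsers that lack it.

```javascript
// Sketch: run an analytics loader only after consent AND main-thread idle.
// Names are assumptions; requestIdleCallback is the real browser API.
function deferUntilIdle(task, hasConsent) {
  if (!hasConsent()) return false; // no consent, never schedule
  const idle = globalThis.requestIdleCallback ?? ((cb) => setTimeout(cb, 200));
  idle(task);
  return true;
}

// Usage sketch: inject GA4 once the page is quiet and consent is granted.
// (`consentStore` is a hypothetical consent-management object.)
// deferUntilIdle(() => {
//   const s = document.createElement("script");
//   s.src = "https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX";
//   document.head.appendChild(s);
// }, () => consentStore.analyticsGranted);
```

The point is ordering: the 600+KB of third-party JavaScript executes after first interaction is possible, not before.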
CLS — reserve every box
Three hidden CLS sources that bite people who think they have CLS handled. Web fonts: a 0.04 shift when the swap happens — fix with `size-adjust` and `ascent-override` descriptors on the fallback `@font-face`. Cookie banners that animate in after 800ms and push the hero down by 60px — fix by rendering the banner inside a `position: fixed` overlay so nothing reflows. Third-party embeds (YouTube, TrustBox, Klaviyo signup) that load asynchronously and inflate from zero height — wrap each in an `aspect-ratio` div sized to the embed's final dimensions. Measure CLS once with the Chrome DevTools Performance tab on throttled 4G, then audit every dynamic element with the "Layout Shift Regions" overlay.
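The font and embed reservations above, in CSS. The metric-override percentages are illustrative — derive the real values from your actual font pair (tools exist that compute them per font).

```css
/* Fallback font with metric overrides so the swap doesn't move text.
   Values are illustrative — compute them for your real font pairing. */
@font-face {
  font-family: "BodyFallback";
  src: local("Arial");
  size-adjust: 104%;
  ascent-override: 92%;
  descent-override: 24%;
}

/* Reserve the embed's final box before the third-party script inflates it. */
.embed-slot {
  aspect-ratio: 16 / 9;
  width: 100%;
}
```

Point the `font-family` stack at `"BodyFallback"` after the web font so the pre-swap render already occupies the same vertical space.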
The diagnostic loop when scores regress
Five steps. (1) Confirm the regression in field data, not just lab data — the CrUX API's rolling 28-day window updates daily, and PageSpeed Insights surfaces it the day after the threshold crosses. (2) Identify which page templates are affected — usually one route, not the whole site. (3) Open the offending template in DevTools Performance, throttled, with cache disabled, and record the full load. (4) Look for the new long task on the main thread: who added it, what it does, whether it can be deferred. (5) Ship the fix behind a feature flag where possible so the rollback is one toggle if it makes things worse.
Performance and paid media
Performance bleeds into paid as well as organic. Google Ads ties Quality Score partly to landing-page experience, which is partly Core Web Vitals. Meta Ads weights "page speed" in its ad ranking auction. TikTok Ads has begun publicly weighting LCP in its 2026 ad-quality scoring. A landing page with LCP at 3.5s pays a higher CPM than the same offer at LCP 1.5s, sometimes by 15-25% on competitive auctions. The performance work pays its way through the next quarter's ad invoice on any client running paid media.
The testing protocol before any handover
Three measurements on every site before it ships. PageSpeed Insights mobile and desktop on three different page templates (home, deep page, blog), targeting 95+ on Performance. DevTools Performance recording on a throttled "Slow 4G + 6× CPU" profile with cache disabled, watching main-thread occupancy from cold load to interactive. Field-data check via the CrUX API after 28 days of live traffic, confirming 75th-percentile field LCP/INP/CLS all sit inside the green band. If any of the three fails, the launch is held until it passes.
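The 28-day field check can run against the real CrUX API endpoint. The query-builder below is a sketch — the endpoint and metric names are the actual CrUX API; the API key and origin are assumptions you supply.

```javascript
// Sketch: query the Chrome UX Report API for mobile p75 field data.
// Endpoint and metric names are the real CrUX API; key/origin are assumptions.
const CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

function buildCruxQuery(origin) {
  return {
    origin,               // or pass `url` instead for a single page
    formFactor: "PHONE",  // the mobile bucket Google actually scores
    metrics: [
      "largest_contentful_paint",
      "interaction_to_next_paint",
      "cumulative_layout_shift",
    ],
  };
}

// Usage (uncomment with a real key):
// fetch(`${CRUX_ENDPOINT}?key=${API_KEY}`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildCruxQuery("https://example.co.uk")),
// }).then((r) => r.json()).then(console.log);
```

The response includes p75 values and histogram densities per metric, which is exactly the green-band check the handover protocol calls for.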
Diminishing returns
There is a point at which further performance work stops paying back. Going from a 4-second LCP to a 2-second LCP is transformative; going from 1.8 to 1.4 moves neither rankings nor conversions meaningfully. The sensible target is comfortably inside the green band on field data with 20% headroom; beyond that, the engineering time is better spent on the next page than on shaving milliseconds off an already-fast one. Performance is a means, not the product.