Core Web Vitals: LCP, CLS, INP
The Three Numbers That Decide Your Ranking
Google uses three metrics to judge your site's real-world user experience. Not server response time. Not bundle size. Not your Lighthouse score in an empty Chrome profile on your M3 MacBook. Three metrics, measured on real user devices, in real network conditions, evaluated at the 75th percentile of all page loads.
These are the Core Web Vitals: LCP (loading), CLS (visual stability), and INP (interactivity). Get all three in the "good" range and Google rewards you with a ranking boost. Miss any one of them and you're competing with a handicap.
Here's the thing most tutorials get wrong: they treat these as three independent problems. They're not. LCP, CLS, and INP are interconnected — fixing one carelessly can destroy another. A lazy-loaded hero image fixes LCP but might cause CLS. Inlining all your JavaScript fixes INP but bloats LCP. You need to understand the tradeoffs.
Think of Core Web Vitals like a medical checkup with three vital signs: blood pressure (LCP — how fast does the main content appear?), heart rhythm (CLS — is the page visually stable or jittering around?), and reflex response (INP — when the user taps something, how fast does the page react?). A patient needs all three in healthy range. Excellent blood pressure means nothing if the heart is arrhythmic.
The Thresholds at a Glance
| Metric | Good | Needs Improvement | Poor | What It Measures |
|---|---|---|---|---|
| LCP | ≤ 2.5s | 2.5s – 4.0s | > 4.0s | Time until the largest visible element renders |
| CLS | ≤ 0.1 | 0.1 – 0.25 | > 0.25 | Total unexpected layout shift score |
| INP | ≤ 200ms | 200ms – 500ms | > 500ms | Worst interaction latency (p98) |
Google evaluates each metric at the 75th percentile of your real user data. That means at least 75% of your users must experience "good" values for you to pass. Not the median. Not the average. The 75th percentile — because the users having the worst experience are the ones most likely to leave.
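To make the percentile point concrete, here's a minimal nearest-rank p75 sketch over collected LCP samples (the helper and sample values are illustrative — real analytics pipelines may interpolate differently):

```javascript
// Nearest-rank percentile: sort ascending, take the value at ceil(p% * n).
// Illustrative sketch — not how CrUX computes it internally.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// 100 page loads: 80 fast (2.0s), 20 slow (5.0s)
const samples = [...Array(80).fill(2000), ...Array(20).fill(5000)];
percentile(samples, 75); // → 2000ms: still "good", despite 20% poor loads
```

Note how 20% poor experiences can hide just below the p75 line — which is exactly why the threshold sits there rather than at the median.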
LCP — Largest Contentful Paint
LCP measures the time from when the user starts navigating to when the largest visible content element finishes rendering in the viewport. It answers the question users subconsciously ask: "Has the page loaded yet?"
What Counts as the LCP Element?
Not every element qualifies. The browser considers these candidates:
- <img> elements (including <img> inside <picture>)
- <image> elements inside <svg>
- <video> elements (the poster image is used)
- Elements with a background-image loaded via CSS
- Block-level elements containing text nodes (paragraphs, headings, lists)
The browser picks whichever qualifying element has the largest visible area at the time it finishes rendering. And here's the key: the LCP element can change over time. As more content loads, the browser might promote a different element to LCP. The browser stops reporting new LCP candidates as soon as the user interacts with the page (click, tap, keypress, or scroll) or the page is hidden — the last candidate reported before that point is the final LCP.
The LCP element is determined by rendered size in the viewport, not natural image dimensions. A 4000x3000 hero image displayed at 400x300 via CSS has an LCP area of 400x300. Conversely, a small image stretched to fill the viewport has a large LCP area. Also, elements with opacity: 0 or visibility: hidden do not count — they must actually be visible to the user.
What Slows Down LCP
LCP is a composite of four sub-parts, measured in sequence:

- Time to First Byte (TTFB) — from navigation start until the first byte of HTML arrives
- Resource load delay — from TTFB until the browser starts fetching the LCP resource
- Resource load duration — the time spent downloading the LCP resource
- Element render delay — from when the resource finishes loading until the element actually paints

Each optimization strategy below targets one of these sub-parts.
LCP Optimization Strategies
1. Reduce TTFB — Use a CDN for static assets, enable HTTP/2 or HTTP/3, reduce server-side processing time, avoid redirect chains.
2. Eliminate resource load delay — Make sure the browser discovers the LCP resource as early as possible:
<!-- Preload hero image so the browser fetches it immediately -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">
<!-- For responsive images, preload with srcset -->
<link rel="preload" as="image" href="/hero.webp"
      imagesrcset="/hero-400.webp 400w, /hero-800.webp 800w"
      imagesizes="100vw">
3. Reduce resource load duration — Serve images in modern formats (AVIF, WebP), correctly size images for the viewport, and compress aggressively. A 2MB PNG hero image is never acceptable.
<picture>
  <source srcset="/hero.avif" type="image/avif">
  <source srcset="/hero.webp" type="image/webp">
  <img src="/hero.jpg" alt="Hero" width="1200" height="600"
       fetchpriority="high" decoding="async">
</picture>
4. Minimize render delay — Remove render-blocking JavaScript, inline critical CSS, and keep the DOM size manageable. If your LCP element is text, ensure the web font loads fast or use font-display: optional.
fetchpriority and why it matters for LCP
By default, images in the viewport get "High" fetch priority, but the browser doesn't always know which image is the LCP element until layout completes. Adding fetchpriority="high" to your hero image tells the browser to prioritize it immediately during HTML parsing, before layout. Conversely, use fetchpriority="low" on below-the-fold images to avoid them competing with the LCP resource for bandwidth. This single attribute can improve LCP by 500ms+ on slower connections.
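In markup, the high/low pairing looks something like this (file names and alt text are placeholders):

```html
<!-- Likely LCP element: fetch at high priority during HTML parsing -->
<img src="/hero.webp" alt="Hero" width="1200" height="600"
     fetchpriority="high">

<!-- Below the fold: low priority + lazy, so it never competes with the hero -->
<img src="/gallery-1.webp" alt="Gallery thumbnail" width="600" height="400"
     fetchpriority="low" loading="lazy">
```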
CLS — Cumulative Layout Shift
CLS measures how much the page content shifts around unexpectedly while loading. You know the experience: you're about to tap a link, an ad loads above it, the link moves down, and you tap the wrong thing. That rage-inducing moment is exactly what CLS captures.
How CLS Is Calculated
CLS uses a session window approach. Layout shifts are grouped into "session windows" — bursts of shifts that happen within 1 second of each other, with a maximum window duration of 5 seconds. The CLS score is the largest single session window's total shift score.
Each individual layout shift is scored as:
layout shift score = impact fraction × distance fraction
- Impact fraction: The percentage of the viewport that was affected by the shift
- Distance fraction: The largest distance any shifted element moved, as a fraction of the viewport
So an element that occupies 50% of the viewport and shifts down by 25% of the viewport height produces a shift score of 0.50 × 0.25 = 0.125.
Not all layout shifts are bad. Shifts that happen within 500ms of a user interaction (click, tap, keypress) are excluded from CLS because the user expects the page to respond. A dropdown menu opening after a click does not count. A banner sliding in 3 seconds after page load absolutely counts.
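The session-window logic above can be sketched as a small helper — each shift is a { time, score } pair, and the function (hypothetical, not the browser's internal implementation) returns the largest window total:

```javascript
// Group shifts into session windows: a shift joins the current window if it
// lands within 1s of the previous shift and within 5s of the window start.
// CLS is the largest window's summed score.
function computeCLS(shifts) {
  let cls = 0;
  let windowScore = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const { time, score } of shifts) {
    if (time - prevTime > 1000 || time - windowStart > 5000) {
      windowScore = 0; // start a new session window
      windowStart = time;
    }
    windowScore += score;
    cls = Math.max(cls, windowScore);
    prevTime = time;
  }
  return cls;
}

// Two bursts: 0.05 + 0.05 early, then a lone 0.12 three seconds later.
computeCLS([
  { time: 100, score: 0.05 },
  { time: 600, score: 0.05 },
  { time: 4000, score: 0.12 },
]); // → 0.12 — the later window's single shift outweighs the early burst
```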
The Top CLS Offenders
Almost every CLS problem comes from one of these five causes:
1. Images without dimensions — The browser reserves zero space for an image until it knows the size. When the image loads, everything below it shifts down.
<!-- BAD: No dimensions — causes layout shift when image loads -->
<img src="/photo.jpg" alt="Photo">
<!-- GOOD: Browser reserves space immediately -->
<img src="/photo.jpg" alt="Photo" width="800" height="600">
<!-- ALSO GOOD: CSS aspect-ratio achieves the same thing -->
<img src="/photo.jpg" alt="Photo" style="aspect-ratio: 4/3; width: 100%;">
2. Dynamically injected content — Ads, cookie banners, notification bars, and late-loading embeds that push content down.
3. Web fonts causing FOUT — When a custom font loads and replaces the fallback font, different character widths cause text reflow. Every paragraph shifts.
4. Late-loading JavaScript that modifies the DOM — Client-side rendering that adds content after initial paint.
5. Animations using layout-triggering properties — Animating top, left, width, or height instead of transform.
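For the animation case, the fix is usually a swap to a compositor-friendly property. A sketch (class names are illustrative):

```css
/* BAD: animating height forces layout on every frame and shifts content */
.dropdown-bad {
  transition: height 150ms ease-out;
}

/* GOOD: transform runs on the compositor and never moves surrounding content */
.dropdown {
  transform: scaleY(0);
  transform-origin: top;
  transition: transform 150ms ease-out;
}
.dropdown.open {
  transform: scaleY(1);
}
```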
CLS Prevention Techniques
/* Reserve space for ads/embeds with min-height */
.ad-slot {
  min-height: 250px;
}

/* Prevent font swap from causing layout shift */
@font-face {
  font-family: 'CustomFont';
  src: url('/font.woff2') format('woff2');
  font-display: optional; /* No FOUT, no shift — falls back silently */
}

/* Use contain to limit shift propagation */
.card {
  contain: layout;
}
font-display: swap is commonly recommended but actually causes CLS. When the fallback font renders first and the custom font swaps in later, the different glyph metrics cause text reflow — a layout shift. Use font-display: optional to eliminate font-related CLS entirely. The browser uses the custom font only if it arrives within ~100ms, otherwise sticks with the fallback. For most sites, this is the better tradeoff.
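If you must keep font-display: swap for branding reasons, a metric-matched local fallback can shrink the swap shift to near zero. A sketch — size-adjust, ascent-override, and descent-override are real @font-face descriptors, but the percentages below are placeholders you would derive from your actual font's metrics:

```css
/* Fallback face tuned to occupy roughly the same space as the web font */
@font-face {
  font-family: 'CustomFont-fallback';
  src: local('Arial');
  size-adjust: 105%;     /* placeholder — compute from the real font metrics */
  ascent-override: 92%;  /* placeholder */
  descent-override: 24%; /* placeholder */
}

body {
  font-family: 'CustomFont', 'CustomFont-fallback', sans-serif;
}
```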
INP — Interaction to Next Paint
INP replaced First Input Delay (FID) in March 2024, and it's a much harder metric to pass. While FID only measured the delay before the browser started processing the first interaction, INP measures the full round-trip latency of every interaction throughout the page's lifecycle, then reports the worst one (technically, the p98 interaction).
What INP Measures
An "interaction" is a tap, click, or keypress. INP measures three phases for each one:

- Input delay — from the user's input until the event handlers start running (often inflated by long tasks already occupying the main thread)
- Processing time — the time the event handlers themselves take to run
- Presentation delay — from the end of processing until the browser paints the next frame
The total of all three phases must be under 200ms for "good" INP. And remember — INP reports the worst interaction across the entire page session (technically at p98: one outlier interaction is excluded for every 50 interactions, so a single anomalous spike in a long session doesn't dominate the score).
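The p98 selection can be sketched as a small helper (illustrative — not Chrome's exact implementation):

```javascript
// Report the worst interaction latency after discarding one outlier
// for every 50 interactions in the session.
function reportedINP(latenciesMs) {
  if (latenciesMs.length === 0) return 0;
  const sorted = [...latenciesMs].sort((a, b) => b - a); // worst first
  const excluded = Math.floor(sorted.length / 50);
  return sorted[Math.min(excluded, sorted.length - 1)];
}

// 60 interactions: one 900ms outlier, the rest at 150ms.
reportedINP([900, ...Array(59).fill(150)]); // → 150 — the outlier is dropped
```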
Why INP Is Harder Than FID
FID was easy to pass — it only measured the first interaction's input delay. Most sites loaded their heavy JavaScript before the user interacted, so FID was usually fast. INP is different:
- It measures every interaction, not just the first one
- It measures the full round-trip (input delay + processing + paint), not just input delay
- It uses the worst interaction, not the first or average
- Interactions deep in a session matter — a slow dropdown filter that fires 30 seconds after page load still counts
INP Optimization Strategies
1. Break up long tasks — Any task over 50ms is a "long task" that can block user interactions. Use scheduler.yield() (or setTimeout(0) as a fallback) to break expensive work into smaller chunks:
async function processLargeDataset(items) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);
    // Yield after every 100 items so pending user input can be handled
    if (i % 100 === 99) {
      await scheduler.yield();
    }
  }
}
2. Reduce event handler complexity — Keep event handlers lean. Move heavy computation to web workers, debounce rapid-fire events, and avoid synchronous layout reads in handlers.
3. Minimize DOM size — Large DOMs make style recalculation and layout more expensive, increasing presentation delay. Keep DOM node count under 1,500 where possible.
4. Use content-visibility: auto — This tells the browser to skip rendering for off-screen content, reducing the work needed after an interaction triggers a repaint:
.below-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px;
}
5. Avoid layout thrashing in handlers — Never read layout properties and write styles in the same handler. Batch reads, then batch writes:
// BAD: Forces synchronous layout in every iteration
items.forEach(item => {
  const height = item.offsetHeight;
  item.style.height = height * 2 + 'px';
});

// GOOD: Read all, then write all
const heights = items.map(item => item.offsetHeight);
items.forEach((item, i) => {
  item.style.height = heights[i] * 2 + 'px';
});
scheduler.yield() vs setTimeout(0)
scheduler.yield() is the modern API for yielding to the main thread. Unlike setTimeout(0), which pushes work to the back of the task queue (potentially behind other queued tasks), scheduler.yield() resumes at the front of the queue — preserving your task's priority while still giving the browser a chance to handle pending user input. As of 2024, scheduler.yield() is supported in Chrome and Edge. For other browsers, fall back to setTimeout(0) or use the scheduler-polyfill package.
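A feature-detected wrapper captures the fallback in one place (the name yieldToMain is illustrative):

```javascript
// Use scheduler.yield() where supported; otherwise queue a macrotask.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}
```

Long loops can then simply `await yieldToMain()` every N iterations, regardless of which browser they run in.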
Measuring Core Web Vitals
You need both lab data (synthetic tests you run) and field data (real user measurements) because they tell different stories.
Lab Tools (Synthetic)
- Lighthouse (Chrome DevTools) — Simulates a mid-tier mobile device on throttled 4G. Measures LCP and CLS (but not INP, since there's no real user interaction)
- WebPageTest — Advanced testing with filmstrip view, waterfall analysis, and custom network profiles. Best for diagnosing LCP sub-parts
- Chrome DevTools Performance panel — Records a trace of everything the browser does. Use it to identify long tasks causing poor INP
Field Tools (Real User Monitoring)
- Chrome User Experience Report (CrUX) — Google's public dataset of real Chrome user metrics. This is what Google uses for Search ranking. Available via PageSpeed Insights, BigQuery, and the CrUX API
- web-vitals library — A tiny JavaScript library (under 2KB) that measures all Core Web Vitals in real user sessions and lets you send data to your analytics
import { onLCP, onCLS, onINP } from 'web-vitals';
onLCP(metric => sendToAnalytics('LCP', metric));
onCLS(metric => sendToAnalytics('CLS', metric));
onINP(metric => sendToAnalytics('INP', metric));
function sendToAnalytics(name, metric) {
  const body = JSON.stringify({
    name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
  });
  navigator.sendBeacon('/api/vitals', body);
}
Your Lighthouse score can be 100 while real users have poor vitals. Lighthouse runs on a controlled environment with no extensions, no competing tabs, and no slow third-party scripts. Always validate with CrUX or RUM data. Lab tools find problems. Field data confirms whether your fixes actually helped real users.
Real-World Optimization: An E-Commerce Product Page
Let's walk through a real optimization scenario. A product page has these vitals:
- LCP: 4.8s (Poor) — Hero product image
- CLS: 0.32 (Poor) — Price and reviews loading late
- INP: 340ms (Needs Improvement) — Size selector dropdown
Fixing LCP (4.8s → 1.9s)
The product image was a 1.2MB JPEG loaded via CSS background-image on a <div>. Three changes:
- Switched from background-image to an <img> tag with fetchpriority="high" — the browser now discovers the image during HTML parsing instead of waiting for CSS
- Converted to AVIF with WebP fallback — file size dropped from 1.2MB to 180KB
- Added <link rel="preload"> for the image — download starts before the CSS even loads
Fixing CLS (0.32 → 0.04)
The price, star rating, and review count were fetched client-side from an API after page load. When they arrived, they pushed the "Add to Cart" button down.
- Moved price and rating data to the server response (SSR or RSC) — they render with the initial HTML
- Added min-height to the review section as a fallback for the rare case when client-side data is needed
- Added explicit width and height to all product thumbnail images
Fixing INP (340ms → 120ms)
The size selector triggered a full product variant lookup, re-rendered the price, checked inventory, and updated the image — all synchronously in a single event handler.
- Split the handler: update the selected size immediately (visual feedback in under 50ms), then await scheduler.yield() before running the inventory check and price update
- Moved the inventory lookup to a Web Worker
- Used CSS transform instead of a height animation for the dropdown
The Interplay Between Metrics
This is where most developers trip up. Optimizing one metric in isolation can hurt another.
| What developers do | Why it backfires | What they should do |
|---|---|---|
| Lazy-load the hero image to reduce page weight | Lazy-loading delays the LCP element — only images below the fold should be lazy-loaded | Eager-load the LCP image with fetchpriority="high" and a preload |
| Use font-display: swap to show text faster | The swap causes a layout shift (CLS): the custom font's glyph widths differ from the fallback's, forcing a text reflow | Use font-display: optional, or preload the font with a metric-matched fallback |
| Inline all JavaScript to avoid network requests | Inlined JS blocks parsing (hurts LCP) and creates long tasks (hurts INP) | Code-split and lazy-load non-critical JS, yield in long tasks — ship only what the initial viewport needs |
| Defer all third-party scripts to after page load | Deferring everything can cause a burst of script execution after load, creating long tasks that hurt INP for early interactions | Defer non-critical scripts but preconnect to critical third-party origins |
| Treat Lighthouse 100 as proof your vitals are good | Lighthouse runs in ideal conditions; real users have slow devices, extensions, spotty connections, and competing tabs | Monitor CrUX field data continuously — lab results do not reflect real user experience |
Quick Diagnostic Cheat Sheet
When debugging a specific metric, start here:
LCP is slow?
- Identify the LCP element in DevTools (Performance panel → Timings → LCP)
- Check if the resource is discoverable early (is it in HTML or hidden behind CSS/JS?)
- Check file size and format (AVIF/WebP? Correctly sized for viewport?)
- Check for render-blocking resources delaying paint
CLS is high?
- Open DevTools → Performance → check "Layout Shifts" in the Experience lane
- Look for elements moving after initial paint (images, ads, fonts, injected content)
- Add explicit dimensions to all media elements
- Reserve space for dynamic content with min-height
INP is slow?
- Record a Performance trace while clicking/tapping around the page
- Find the longest interaction in the Interactions track
- Check for long tasks blocking the main thread before the handler runs
- Check if the handler itself is doing too much synchronous work
1. LCP measures loading — optimize by preloading the LCP resource, reducing its size, and eliminating render-blocking resources.
2. CLS measures visual stability — prevent it by always setting image dimensions, reserving space for dynamic content, and using font-display: optional.
3. INP measures interactivity — improve it by breaking long tasks with scheduler.yield(), keeping event handlers lean, and reducing DOM size.
4. Google uses the 75th percentile of real user data (CrUX) for ranking, not Lighthouse lab scores.
5. Never lazy-load the LCP element. Never use font-display: swap without measuring CLS impact. Never inline all JS to fix one metric while breaking another.
6. Measure with both lab tools (Lighthouse, DevTools) for diagnosis and field tools (CrUX, web-vitals library) for ground truth.