
Core Web Vitals: LCP, CLS, INP

Advanced · 18 min read

The Three Numbers That Decide Your Ranking

Google uses three metrics to judge your site's real-world user experience. Not server response time. Not bundle size. Not your Lighthouse score in an empty Chrome profile on your M3 MacBook. Three metrics, measured on real user devices, in real network conditions, across the 75th percentile of all page loads.

These are the Core Web Vitals: LCP (loading), CLS (visual stability), and INP (interactivity). Get all three in the "good" range and Google rewards you with a ranking boost. Miss any one of them and you're competing with a handicap.

Here's the thing most tutorials get wrong: they treat these as three independent problems. They're not. LCP, CLS, and INP are interconnected — fixing one carelessly can break another. Lazy-loading the hero image trims page weight but delays your LCP element. Deferring every script clears the critical path for LCP but can pile long tasks onto early interactions, hurting INP. You need to understand the tradeoffs.

Mental Model

Think of Core Web Vitals like a medical checkup with three vital signs: blood pressure (LCP — how fast does the main content appear?), heart rhythm (CLS — is the page visually stable or jittering around?), and reflex response (INP — when the user taps something, how fast does the page react?). A patient needs all three in healthy range. Excellent blood pressure means nothing if the heart is arrhythmic.

The Thresholds at a Glance

Metric | Good    | Needs Improvement | Poor    | What It Measures
LCP    | ≤ 2.5s  | 2.5s – 4.0s       | > 4.0s  | Time until the largest visible element renders
CLS    | ≤ 0.1   | 0.1 – 0.25        | > 0.25  | Total unexpected layout shift score
INP    | ≤ 200ms | 200ms – 500ms     | > 500ms | Worst interaction latency (p98)

Google evaluates each metric at the 75th percentile of your real user data. That means 75% of your users must experience "good" values for you to pass. Not the median. Not the average. The 75th percentile — because the users having the worst experience are the ones most likely to leave.
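To make the percentile math concrete, here's a minimal sketch of a p75 check over collected LCP samples. percentile() uses the nearest-rank method, which approximates but may not exactly match CrUX's aggregation:

```javascript
// Sketch: classify LCP at the 75th percentile of field samples (ms).
// Nearest-rank percentile — an approximation of CrUX's aggregation.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

function classifyLCP(lcpSamplesMs) {
  const p75 = percentile(lcpSamplesMs, 75);
  if (p75 <= 2500) return 'good';
  if (p75 <= 4000) return 'needs-improvement';
  return 'poor';
}
```

Note the consequence of p75: one slow load among four fast ones still passes — more than a quarter of page loads must be slow before the classification flips.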

Quiz
Your site's LCP is 1.8s at the 50th percentile but 3.2s at the 75th percentile. How does Google classify your LCP?

LCP — Largest Contentful Paint

LCP measures the time from when the user starts navigating to when the largest visible content element finishes rendering in the viewport. It answers the question users subconsciously ask: "Has the page loaded yet?"

What Counts as the LCP Element?

Not every element qualifies. The browser considers these candidates:

  • <img> elements (including <img> inside <picture>)
  • <image> elements inside <svg>
  • <video> elements (the poster image is used)
  • Elements with a background-image loaded via CSS
  • Block-level elements containing text nodes (paragraphs, headings, lists)

The browser picks whichever qualifying element has the largest visible area at the time it finishes rendering. And here's the key: the LCP element can change over time. As more content loads, the browser might promote a different element to LCP. The final LCP value is locked in when the user first interacts with the page (click, tap, or keypress — scrolling also stops LCP measurement) or when the page is hidden.

Common Trap

The LCP element is determined by rendered size in the viewport, not natural image dimensions. A 4000x3000 hero image displayed at 400x300 via CSS has an LCP area of 400x300. Conversely, a small image stretched to fill the viewport has a large LCP area. Also, elements with opacity: 0 or visibility: hidden do not count — they must actually be visible to the user.
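A toy model of the selection rule — elements here are plain { name, width, height, visible } objects (a hypothetical shape, not a DOM API), with width and height as *rendered* dimensions:

```javascript
// Sketch: the LCP candidate is the visible element with the largest
// rendered area. Hidden elements (opacity: 0, visibility: hidden) never win.
function largestContentfulCandidate(elements) {
  let best = null;
  for (const el of elements) {
    if (!el.visible) continue; // invisible elements are not candidates
    if (!best || el.width * el.height > best.width * best.height) best = el;
  }
  return best;
}
```

A 4000x3000 image displayed at 400x300 enters this comparison as 400x300 — its natural dimensions are irrelevant.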

What Slows Down LCP

LCP is a composite of four sub-parts: time to first byte (TTFB), resource load delay, resource load duration, and element render delay.

  • TTFB — from navigation start until the first byte of the HTML response arrives
  • Resource load delay — from TTFB until the browser begins fetching the LCP resource
  • Resource load duration — how long the LCP resource itself takes to download
  • Element render delay — from when the resource finishes loading until the element actually renders

LCP Optimization Strategies

1. Reduce TTFB — Use a CDN for static assets, enable HTTP/2 or HTTP/3, reduce server-side processing time, avoid redirect chains.

2. Eliminate resource load delay — Make sure the browser discovers the LCP resource as early as possible:

<!-- Preload hero image so the browser fetches it immediately -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">

<!-- For responsive images, preload with srcset -->
<link rel="preload" as="image" href="/hero.webp"
      imagesrcset="/hero-400.webp 400w, /hero-800.webp 800w"
      imagesizes="100vw">

3. Reduce resource load duration — Serve images in modern formats (AVIF, WebP), correctly size images for the viewport, and compress aggressively. A 2MB PNG hero image is never acceptable.

<picture>
  <source srcset="/hero.avif" type="image/avif">
  <source srcset="/hero.webp" type="image/webp">
  <img src="/hero.jpg" alt="Hero" width="1200" height="600"
       fetchpriority="high" decoding="async">
</picture>

4. Minimize render delay — Remove render-blocking JavaScript, inline critical CSS, and keep the DOM size manageable. If your LCP element is text, ensure the web font loads fast or use font-display: optional.

fetchpriority and why it matters for LCP

By default, the browser fetches images at low priority and only boosts an image to high priority after layout confirms it sits in the viewport — often too late for the LCP image. Adding fetchpriority="high" to your hero image tells the browser to prioritize it immediately during HTML parsing, before layout. Conversely, use fetchpriority="low" on below-the-fold images so they don't compete with the LCP resource for bandwidth. This single attribute can improve LCP by 500ms+ on slower connections.

Quiz
Your LCP element is a hero image loaded via CSS background-image on a div. Lighthouse shows LCP at 4.1s. The image is 150KB WebP. TTFB is 200ms. What is the most likely bottleneck?

CLS — Cumulative Layout Shift

CLS measures how much the page content shifts around unexpectedly while loading. You know the experience: you're about to tap a link, an ad loads above it, the link moves down, and you tap the wrong thing. That rage-inducing moment is exactly what CLS captures.

How CLS Is Calculated

CLS uses a session window approach. Layout shifts are grouped into "session windows" — bursts of shifts that happen within 1 second of each other, with a maximum window duration of 5 seconds. The CLS score is the largest single session window's total shift score.
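The windowing rule can be sketched as a small function — shift entries here are hypothetical { time, score } pairs with time in milliseconds:

```javascript
// Sketch of CLS session windows: a shift joins the current window if it
// lands within 1s of the previous shift and within 5s of the window start;
// otherwise it opens a new window. CLS is the largest window's sum.
function computeCLS(shifts) {
  let cls = 0;
  let windowSum = 0;
  let windowStart = -Infinity;
  let prevTime = -Infinity;
  for (const { time, score } of shifts) {
    const closeEnough = time - prevTime < 1000;   // 1s gap rule
    const withinWindow = time - windowStart < 5000; // 5s duration cap
    if (closeEnough && withinWindow) {
      windowSum += score;
    } else {
      windowSum = score;    // start a new session window
      windowStart = time;
    }
    prevTime = time;
    cls = Math.max(cls, windowSum);
  }
  return cls;
}
```

So two small shifts in quick succession can outscore one larger shift that happens in isolation later.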

Each individual layout shift is scored as:

layout shift score = impact fraction x distance fraction
  • Impact fraction: The percentage of the viewport that was affected by the shift
  • Distance fraction: The largest distance any shifted element moved, as a fraction of the viewport

So an element that occupies 50% of the viewport and shifts down by 25% of the viewport height produces a shift score of 0.50 x 0.25 = 0.125.

Expected vs unexpected shifts

Not all layout shifts are bad. Shifts that happen within 500ms of a user interaction (click, tap, keypress) are excluded from CLS because the user expects the page to respond. A dropdown menu opening after a click does not count. A banner sliding in 3 seconds after page load absolutely counts.
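In code, this exclusion shows up as the hadRecentInput flag on layout-shift performance entries. A simplified accumulator (ignoring session windows, just to show the filter) might look like:

```javascript
// Sketch: total the shift values a PerformanceObserver for 'layout-shift'
// would report, skipping shifts flagged as input-driven (hadRecentInput).
function clsFromEntries(entries) {
  return entries
    .filter(entry => !entry.hadRecentInput) // within 500ms of input: excluded
    .reduce((sum, entry) => sum + entry.value, 0);
}
```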

The Top CLS Offenders

Almost every CLS problem comes from one of these five causes:

1. Images without dimensions — The browser reserves zero space for an image until it knows the size. When the image loads, everything below it shifts down.

<!-- BAD: No dimensions — causes layout shift when image loads -->
<img src="/photo.jpg" alt="Photo">

<!-- GOOD: Browser reserves space immediately -->
<img src="/photo.jpg" alt="Photo" width="800" height="600">

<!-- ALSO GOOD: CSS aspect-ratio achieves the same thing -->
<img src="/photo.jpg" alt="Photo" style="aspect-ratio: 4/3; width: 100%;">

2. Dynamically injected content — Ads, cookie banners, notification bars, and late-loading embeds that push content down.

3. Web fonts causing FOUT — When a custom font loads and replaces the fallback font, different character widths cause text reflow. Every paragraph shifts.

4. Late-loading JavaScript that modifies the DOM — Client-side rendering that adds content after initial paint.

5. Animations using layout-triggering properties — Animating top, left, width, or height instead of transform.

Quiz
A cookie consent banner appears 2 seconds after page load at the top of the page, pushing all content down by 80px. The viewport is 800px tall. What is the layout shift score for this event?

CLS Prevention Techniques

/* Reserve space for ads/embeds with min-height */
.ad-slot {
  min-height: 250px;
}

/* Prevent font swap from causing layout shift */
@font-face {
  font-family: 'CustomFont';
  src: url('/font.woff2') format('woff2');
  font-display: optional; /* No FOUT, no shift — falls back silently */
}

/* Use contain to limit shift propagation */
.card {
  contain: layout;
}
Common Trap

font-display: swap is commonly recommended but actually causes CLS. When the fallback font renders first and the custom font swaps in later, the different glyph metrics cause text reflow — a layout shift. Use font-display: optional to eliminate font-related CLS entirely. The browser uses the custom font only if it arrives within ~100ms, otherwise sticks with the fallback. For most sites, this is the better tradeoff.


INP — Interaction to Next Paint

INP replaced First Input Delay (FID) in March 2024, and it's a much harder metric to pass. While FID only measured the delay before the browser started processing the first interaction, INP measures the full round-trip latency of every interaction throughout the page's lifecycle, then reports the worst one (technically, the p98 interaction).

What INP Measures

An "interaction" is a tap, click, or keypress. INP measures three phases for each one:

  • Input delay — how long the interaction waits for the main thread before event handlers can start
  • Processing time — how long the event handlers take to run
  • Presentation delay — how long after the handlers finish until the next frame is painted

The total of all three phases must be under 200ms for "good" INP. And remember — INP reports the worst interaction across the entire page session (at p98, so if there are 50+ interactions, the worst 1-2 are excluded to avoid outlier noise).
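A hedged sketch of the reporting rule, with each latency already summed from its three phases; the one-outlier-excluded-per-50-interactions rule below approximates the p98 definition:

```javascript
// Sketch: pick the reported INP from per-interaction latencies (ms).
// Each latency = inputDelay + processingTime + presentationDelay.
function inpFromLatencies(latencies) {
  const worstFirst = [...latencies].sort((a, b) => b - a);
  // Exclude roughly one outlier per 50 interactions (p98 approximation).
  const excluded = Math.min(worstFirst.length - 1, Math.floor(latencies.length / 50));
  return worstFirst[excluded];
}
```

With fewer than 50 interactions nothing is excluded — the single worst interaction is your INP.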

Why INP Is Harder Than FID

FID was easy to pass — it only measured the first interaction's input delay. Most sites loaded their heavy JavaScript before the user interacted, so FID was usually fast. INP is different:

  • It measures every interaction, not just the first one
  • It measures the full round-trip (input delay + processing + paint), not just input delay
  • It uses the worst interaction, not the first or average
  • Interactions deep in a session matter — a slow dropdown filter that fires 30 seconds after page load still counts
Quiz
A user clicks a 'Filter' button on your page. The main thread is busy with a 150ms analytics task. Your event handler takes 80ms to filter and re-render a list. Style/layout/paint takes 30ms. What is the INP for this interaction?

INP Optimization Strategies

1. Break up long tasks — Any task over 50ms is a "long task" that can block user interactions. Use scheduler.yield() (or setTimeout(0) as a fallback) to break expensive work into smaller chunks:

async function processLargeDataset(items) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);

    if (i % 100 === 0) {
      await scheduler.yield();
    }
  }
}

2. Reduce event handler complexity — Keep event handlers lean. Move heavy computation to web workers, debounce rapid-fire events, and avoid synchronous layout reads in handlers.

3. Minimize DOM size — Large DOMs make style recalculation and layout more expensive, increasing presentation delay. Keep DOM node count under 1,500 where possible.

4. Use content-visibility: auto — This tells the browser to skip rendering for off-screen content, reducing the work needed after an interaction triggers a repaint:

.below-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px;
}

5. Avoid layout thrashing in handlers — Never read layout properties and write styles in the same handler. Batch reads, then batch writes:

// BAD: Forces synchronous layout in every iteration
items.forEach(item => {
  const height = item.offsetHeight;
  item.style.height = height * 2 + 'px';
});

// GOOD: Read all, then write all
const heights = items.map(item => item.offsetHeight);
items.forEach((item, i) => {
  item.style.height = heights[i] * 2 + 'px';
});
scheduler.yield() vs setTimeout(0)

scheduler.yield() is the modern API for yielding to the main thread. Unlike setTimeout(0), which pushes work to the back of the task queue (potentially behind other queued tasks), scheduler.yield() resumes at the front of the queue — preserving your task's priority while still giving the browser a chance to handle pending user input. As of 2024, scheduler.yield() is supported in Chrome and Edge. For other browsers, fall back to setTimeout(0) or use the scheduler-polyfill package.
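A small feature-detected helper along those lines — an illustration of the fallback pattern, not a full polyfill:

```javascript
// Prefer scheduler.yield() where it exists; otherwise fall back to
// setTimeout(0), accepting the loss of front-of-queue continuation.
function yieldToMain() {
  if (globalThis.scheduler && typeof globalThis.scheduler.yield === 'function') {
    return globalThis.scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}
```

Call `await yieldToMain()` inside long loops or between the urgent and deferrable phases of an event handler.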


Measuring Core Web Vitals

You need both lab data (synthetic tests you run) and field data (real user measurements) because they tell different stories.

Lab Tools (Synthetic)

  • Lighthouse (Chrome DevTools) — Simulates a mid-tier mobile device on throttled 4G. Measures LCP and CLS (but not INP, since there's no real user interaction)
  • WebPageTest — Advanced testing with filmstrip view, waterfall analysis, and custom network profiles. Best for diagnosing LCP sub-parts
  • Chrome DevTools Performance panel — Records a trace of everything the browser does. Use it to identify long tasks causing poor INP

Field Tools (Real User Monitoring)

  • Chrome User Experience Report (CrUX) — Google's public dataset of real Chrome user metrics. This is what Google uses for Search ranking. Available via PageSpeed Insights, BigQuery, and the CrUX API
  • web-vitals library — A tiny JavaScript library (under 2KB) that measures all Core Web Vitals in real user sessions and lets you send data to your analytics
import { onLCP, onCLS, onINP } from 'web-vitals';

onLCP(metric => sendToAnalytics('LCP', metric));
onCLS(metric => sendToAnalytics('CLS', metric));
onINP(metric => sendToAnalytics('INP', metric));

function sendToAnalytics(name, metric) {
  const body = JSON.stringify({
    name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
  });
  navigator.sendBeacon('/api/vitals', body);
}
Lab vs field discrepancy

Your Lighthouse score can be 100 while real users have poor vitals. Lighthouse runs on a controlled environment with no extensions, no competing tabs, and no slow third-party scripts. Always validate with CrUX or RUM data. Lab tools find problems. Field data confirms whether your fixes actually helped real users.

Quiz
You optimize your site and Lighthouse now shows LCP at 1.2s. But CrUX data shows LCP at 3.8s at p75. Which number does Google use for Search ranking?

Real-World Optimization: An E-Commerce Product Page

Let's walk through a real optimization scenario. A product page has these vitals:

  • LCP: 4.8s (Poor) — Hero product image
  • CLS: 0.32 (Poor) — Price and reviews loading late
  • INP: 340ms (Needs Improvement) — Size selector dropdown

Fixing LCP (4.8s → 1.9s)

The product image was a 1.2MB JPEG loaded via CSS background-image on a <div>. Three changes:

  1. Switched from background-image to an <img> tag with fetchpriority="high" — the browser now discovers the image during HTML parsing instead of waiting for CSS
  2. Converted to AVIF with WebP fallback — file size dropped from 1.2MB to 180KB
  3. Added <link rel="preload"> for the image — download starts before the CSS even loads

Fixing CLS (0.32 → 0.04)

The price, star rating, and review count were fetched client-side from an API after page load. When they arrived, they pushed the "Add to Cart" button down.

  1. Moved price and rating data to the server response (SSR or RSC) — they render with the initial HTML
  2. Added min-height to the review section as a fallback for the rare case when client-side data is needed
  3. Added explicit width and height to all product thumbnail images

Fixing INP (340ms → 120ms)

The size selector triggered a full product variant lookup, re-rendered the price, checked inventory, and updated the image — all synchronously in a single event handler.

  1. Split the handler: update the selected size immediately (visual feedback in under 50ms), then await scheduler.yield() before running the inventory check and price update
  2. Moved the inventory lookup to a Web Worker
  3. Used CSS transform instead of height animation for the dropdown
Execution Trace

Before:
  Click → 340ms synchronous work → paint
  (user sees no response for 340ms)

After, phase 1:
  Click → update selected size (15ms) → paint
  (user sees immediate visual feedback)

After, phase 2:
  yield() → inventory check in Worker (parallel) → update price (20ms) → paint
  (full update completes within 120ms total)
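The split handler sketched in code, with hypothetical injected collaborators (ui, inventoryWorker) standing in for the real page objects, so the urgent paint happens before the slow work:

```javascript
// Yield helper: scheduler.yield() where available, setTimeout(0) otherwise.
function yieldToMain() {
  return globalThis.scheduler && typeof globalThis.scheduler.yield === 'function'
    ? globalThis.scheduler.yield()
    : new Promise(resolve => setTimeout(resolve, 0));
}

async function onSizeSelect(size, ui, inventoryWorker) {
  ui.markSelected(size);   // urgent: cheap visual feedback, runs synchronously
  await yieldToMain();     // let the browser paint that frame first
  const stock = await inventoryWorker.check(size); // heavy lookup off main thread
  ui.updatePrice(stock.price);
  ui.setAvailability(stock.inStock);
}
```

The key property: ui.markSelected runs before the first await, so the user's tap gets feedback in the same frame, while the inventory and price work is deferred behind the yield.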

The Interplay Between Metrics

This is where most developers trip up. Optimizing one metric in isolation can hurt another.

What developers do vs. what they should do:

  • "Lazy-load the hero image to reduce page weight."
    Why it backfires: lazy-loading delays the LCP element; only lazy-load images below the fold.
    Instead: eager-load the LCP image with fetchpriority="high" and a preload.

  • "Use font-display: swap to show text faster."
    Why it backfires: when the custom font replaces the fallback, the differing glyph widths cause text reflow — a layout shift (CLS).
    Instead: use font-display: optional, or preload the font with a fallback that matches its metrics.

  • "Inline all JavaScript to avoid network requests."
    Why it backfires: inlined JS blocks parsing (hurts LCP) and creates long tasks (hurts INP). Ship only what the initial viewport needs.
    Instead: code-split, lazy-load non-critical JS, and yield inside long tasks.

  • "Defer all third-party scripts to after page load."
    Why it backfires: deferring everything can cause a burst of script execution after load, creating long tasks that hurt INP for early interactions.
    Instead: defer non-critical scripts, but preconnect to critical third-party origins.

  • "Treat Lighthouse 100 as proof your vitals are good."
    Why it backfires: Lighthouse runs in ideal conditions; real users have slow devices, extensions, spotty connections, and competing tabs.
    Instead: monitor CrUX field data continuously — lab results do not reflect real user experience.
Quiz
You add loading='lazy' to all images on a page, including the hero image above the fold. What happens to your Core Web Vitals?
Quiz
Which of these changes can improve ALL three Core Web Vitals simultaneously?

Quick Diagnostic Cheat Sheet

When debugging a specific metric, start here:

LCP is slow?

  1. Identify the LCP element in DevTools (Performance panel → Timings → LCP)
  2. Check if the resource is discoverable early (is it in HTML or hidden behind CSS/JS?)
  3. Check file size and format (AVIF/WebP? Correctly sized for viewport?)
  4. Check for render-blocking resources delaying paint

CLS is high?

  1. Open DevTools → Performance → check "Layout Shifts" in the Experience lane
  2. Look for elements moving after initial paint (images, ads, fonts, injected content)
  3. Add explicit dimensions to all media elements
  4. Reserve space for dynamic content with min-height

INP is slow?

  1. Record a Performance trace while clicking/tapping around the page
  2. Find the longest interaction in the Interactions track
  3. Check for long tasks blocking the main thread before the handler runs
  4. Check if the handler itself is doing too much synchronous work
Key Rules
  1. LCP measures loading — optimize by preloading the LCP resource, reducing its size, and eliminating render-blocking resources.
  2. CLS measures visual stability — prevent it by always setting image dimensions, reserving space for dynamic content, and using font-display: optional.
  3. INP measures interactivity — improve it by breaking long tasks with scheduler.yield(), keeping event handlers lean, and reducing DOM size.
  4. Google uses the 75th percentile of real user data (CrUX) for ranking, not Lighthouse lab scores.
  5. Never lazy-load the LCP element. Never use font-display: swap without measuring CLS impact. Never inline all JS to fix one metric while breaking another.
  6. Measure with both lab tools (Lighthouse, DevTools) for diagnosis and field tools (CrUX, web-vitals library) for ground truth.