
Performance Mark, Measure, and Observer

Advanced · 16 min read

Measure What Matters, Automatically

Chrome DevTools is incredible for debugging in development. But your users are not running DevTools. The Performance API gives you the same measurement power in production — mark important moments, measure durations between them, observe browser events, and send it all to your analytics pipeline.

This is how you go from "the app feels slow sometimes" to "P95 LCP is 2.8s on mobile in India, caused by a 1.2s main thread blocking task during hydration."

Mental Model

Think of the Performance API like a stopwatch system built into every browser. performance.mark() is pressing the lap button — it records a named timestamp. performance.measure() calculates the duration between two lap presses. PerformanceObserver is a live scoreboard that automatically displays specific types of events as they happen. Together, they let you measure anything, from "how long did this function take" to "how long until the user saw meaningful content."

User Timing API: Mark and Measure

performance.mark()

Creates a named timestamp in the performance timeline:

performance.mark('component-render-start');
renderDashboard();
performance.mark('component-render-end');

Marks are near-zero-overhead named points in time — cheap enough to leave in production code. They appear in the DevTools Performance panel's Timings lane and are accessible via the Performance API.

performance.measure()

Calculates the duration between two marks:

performance.mark('fetch-start');
const response = await fetch('/api/dashboard');
// fetch() resolves when response headers arrive; reading the body
// ensures the measure covers the full download.
const data = await response.json();
performance.mark('fetch-end');

const measurement = performance.measure(
  'Dashboard API Fetch',
  'fetch-start',
  'fetch-end'
);

console.log(measurement.duration); // e.g., 342.5 (milliseconds)

The measure name ('Dashboard API Fetch') appears as a labeled bar in the DevTools Timings lane — making it easy to correlate your code with the flame chart.
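In real code it is easy to forget the closing mark on an error path. A small wrapper — `measureAsync` is a hypothetical helper, not part of the Performance API — keeps the mark/measure pair together even when the operation throws:

```javascript
// Hypothetical helper (not part of the Performance API): wraps any async
// function in a mark/measure pair, recording the measure even on failure.
async function measureAsync(name, fn) {
  performance.mark(`${name}-start`);
  try {
    return await fn();
  } finally {
    performance.mark(`${name}-end`);
    performance.measure(name, `${name}-start`, `${name}-end`);
  }
}
```

Usage: `const response = await measureAsync('dashboard-fetch', () => fetch('/api/dashboard'));` — the 'dashboard-fetch' measure lands in the timeline whether the request resolves or rejects.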

Measure Without Marks

You can measure from navigation start or pass timestamps directly:

performance.measure('Time to Interactive', {
  start: 0,
  end: performance.now(),
});

performance.measure('Image Processing', {
  start: startTimestamp,
  duration: 500,
});

Quiz
You call performance.mark('A'), run an async operation, then call performance.mark('B') and performance.measure('op', 'A', 'B'). The measure shows 150ms, but the async operation actually took 500ms. What happened?

PerformanceObserver: Real-Time Event Monitoring

PerformanceObserver lets you subscribe to performance events as they happen. It is the foundation for monitoring Core Web Vitals in production.

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.entryType, entry.name, entry.duration);
  }
});

observer.observe({ type: 'measure', buffered: true });

The buffered: true option is critical — it delivers entries that were recorded before the observer was created. Without it, you miss early events.

Entry Types You Can Observe

| Entry Type | What It Captures | Key Properties |
| --- | --- | --- |
| longtask | Main thread tasks over 50ms | duration, startTime, attribution (which iframe/script) |
| largest-contentful-paint | LCP candidate elements and their render time | renderTime, loadTime, size, element, url |
| first-input | First user interaction timing (FID) | processingStart - startTime = input delay |
| layout-shift | Visual stability events (CLS) | value (shift fraction), sources (shifted elements) |
| event | All user interactions with timing (for INP) | duration, processingStart, processingEnd, interactionId |
| resource | Network resource loading | duration, transferSize, decodedBodySize, initiatorType |
| navigation | Page navigation timing | domContentLoadedEventStart, loadEventStart, type |
| mark | User marks created via performance.mark() | name, startTime |
| measure | User measures via performance.measure() | name, startTime, duration |

Monitoring Core Web Vitals

Largest Contentful Paint (LCP)

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];

  console.log('LCP:', {
    renderTime: lastEntry.renderTime,
    loadTime: lastEntry.loadTime,
    size: lastEntry.size,
    element: lastEntry.element?.tagName,
    url: lastEntry.url,
  });
}).observe({ type: 'largest-contentful-paint', buffered: true });

The browser may report multiple LCP candidates (as larger elements render, they replace previous candidates). The last entry before user interaction is the final LCP.

Quiz
A PerformanceObserver for 'largest-contentful-paint' fires three times during page load, with sizes 10000, 50000, and 45000. Which one is the final LCP element?

Cumulative Layout Shift (CLS)

let clsScore = 0;
let sessionEntries = [];

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      sessionEntries.push(entry);
      clsScore += entry.value;
    }
  }

  console.log('Current CLS:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });

Layout shift entries have a value (the fraction of the viewport affected, scaled by how far elements moved) and sources (which elements moved). Shifts that occur within 500ms of user input (hadRecentInput === true) are excluded from CLS.
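Note that the snippet above accumulates every shift for the lifetime of the page, which was the original CLS definition. The current definition groups shifts into session windows (gaps under 1s, capped at 5s per window) and takes the worst window. A sketch of that grouping, using plain objects in place of layout-shift entries:

```javascript
// Sketch of session-window CLS: shifts less than 1s apart (within a 5s
// window) form a session; the page's CLS is the worst session's total.
function computeCLS(entries) {
  let cls = 0;
  let sessionValue = 0;
  let sessionStart = 0;
  let lastTime = 0;
  for (const entry of entries) {
    if (entry.hadRecentInput) continue; // input-driven shifts don't count
    const gapTooLong = entry.startTime - lastTime > 1000;
    const windowTooLong = entry.startTime - sessionStart > 5000;
    if (sessionValue > 0 && (gapTooLong || windowTooLong)) sessionValue = 0;
    if (sessionValue === 0) sessionStart = entry.startTime;
    sessionValue += entry.value;
    lastTime = entry.startTime;
    cls = Math.max(cls, sessionValue);
  }
  return cls;
}
```

The web-vitals library implements this windowing for you; the sketch mainly shows why a long-lived page's CLS does not grow without bound.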

Interaction to Next Paint (INP)

const interactions = new Map();

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.interactionId) {
      const existing = interactions.get(entry.interactionId);
      if (!existing || entry.duration > existing.duration) {
        interactions.set(entry.interactionId, entry);
      }
    }
  }
}).observe({ type: 'event', buffered: true, durationThreshold: 16 });

INP is approximately the 98th-percentile interaction duration — the spec takes the worst interaction after ignoring one outlier per 50 interactions, which works out to roughly P98. Each interaction (click, keypress, tap) gets an interactionId. Multiple event entries can share the same interactionId (e.g., pointerdown, pointerup, click for one tap). The longest duration per interactionId represents the interaction's latency.
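Given the per-interaction maxima collected above, the percentile step can be sketched with a simple nearest-rank estimate — `estimateINP` is a hypothetical helper, and the spec's exact rule (ignore one outlier per 50 interactions) differs slightly:

```javascript
// Hypothetical sketch: nearest-rank 98th percentile over the longest
// duration recorded for each interactionId.
function estimateINP(durations) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor(sorted.length * 0.98));
  return sorted[index];
}
```

Usage against the map built above: `const inp = estimateINP([...interactions.values()].map((e) => e.duration));`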

Common Trap

Computing INP correctly is tricky — you need to track all interactions, group by interactionId, take the longest duration per interaction, then find the P98. Use the web-vitals library instead of rolling your own. It handles all edge cases including page visibility changes, back/forward cache, and prerendering.

Resource Timing API

Every network resource (scripts, images, stylesheets, fonts, API calls) gets a PerformanceResourceTiming entry:

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.initiatorType === 'fetch' || entry.initiatorType === 'xmlhttprequest') {
      console.log('API call:', {
        name: entry.name,
        duration: entry.duration,
        transferSize: entry.transferSize,
        serverTiming: entry.serverTiming,
      });
    }
  }
}).observe({ type: 'resource', buffered: true });

Key properties:

  • duration — total time from request start to response end
  • transferSize — bytes transferred over the network (0 if served from cache, and also 0 for cross-origin resources without a Timing-Allow-Origin header)
  • decodedBodySize — uncompressed response size
  • serverTiming — server-provided timing data (via Server-Timing header)

This is invaluable for monitoring API performance in production.

Quiz
A PerformanceResourceTiming entry shows transferSize: 0 and decodedBodySize: 150000. What does this indicate?
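That distinction can be folded into a small heuristic — `classifyDelivery` is an illustrative sketch, not a spec-defined classification:

```javascript
// Rough heuristic: infer how a resource was served from its size fields.
function classifyDelivery(entry) {
  // Served from HTTP cache: nothing crossed the network, but a body exists.
  if (entry.transferSize === 0 && entry.decodedBodySize > 0) return 'cache';
  // 304 revalidation: only headers crossed the network, so the bytes
  // transferred are smaller than the encoded body itself.
  if (entry.transferSize > 0 && entry.transferSize < entry.encodedBodySize) return 'revalidated';
  return 'network';
}
```

Keep in mind that cross-origin resources without a Timing-Allow-Origin header also report transferSize: 0, so this heuristic is only trustworthy for same-origin and properly instrumented resources.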

Long Tasks Observer

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task detected:', {
      duration: entry.duration,
      startTime: entry.startTime,
      attribution: entry.attribution.map(a => ({
        containerType: a.containerType,
        containerSrc: a.containerSrc,
        containerName: a.containerName,
      })),
    });
  }
}).observe({ type: 'longtask', buffered: true });

The attribution array tells you whether the long task came from your main page, an iframe, or a third-party script. This is critical for identifying third-party performance problems.
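A common use is flagging tasks attributed to cross-origin iframes. `isThirdPartyTask` below is an illustrative sketch:

```javascript
// Illustrative sketch: a long task is "third-party" if any attribution
// points at a container loaded from a different origin than the page.
function isThirdPartyTask(entry, pageOrigin) {
  return (entry.attribution ?? []).some((a) => {
    if (!a.containerSrc) return false;
    try {
      return new URL(a.containerSrc, pageOrigin).origin !== pageOrigin;
    } catch {
      return false; // unparseable src: don't blame a third party
    }
  });
}
```

In the browser you would pass `location.origin` as pageOrigin and feed this from the longtask observer above.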

Sending Metrics to Analytics

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
    page: location.pathname,
    connection: navigator.connection?.effectiveType,
    deviceMemory: navigator.deviceMemory,
  });

  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/metrics', body);
  } else {
    fetch('/api/metrics', { body, method: 'POST', keepalive: true });
  }
}

navigator.sendBeacon() is designed for analytics — it hands the request off to the browser, which queues it and completes the send even while the page unloads (a normal request would simply be cancelled). The keepalive: true option on fetch provides similar behavior; note that both cap the payload at roughly 64KB.
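A common refinement is batching metrics in memory and flushing once when the page is hidden, rather than sending one beacon per metric. A sketch — the /api/metrics endpoint is an assumed example, as above:

```javascript
// Sketch: accumulate metrics in memory, flush them in one beacon when
// the tab is hidden or the page unloads.
const queue = [];

function queueMetric(metric) {
  queue.push(metric);
}

// `send` is injectable so the flush logic can be exercised outside a browser.
function flushQueue(send = (url, body) => navigator.sendBeacon(url, body)) {
  if (queue.length === 0) return false;
  const body = JSON.stringify(queue.splice(0, queue.length));
  return send('/api/metrics', body);
}

// Browser-only wiring: visibilitychange → hidden is the most reliable
// "last chance" signal; pagehide covers back/forward cache navigations.
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flushQueue();
  });
  addEventListener('pagehide', () => flushQueue());
}
```

Batching keeps you under the beacon payload cap in practice and avoids waking the network for every observer callback.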

Enriching Metrics with Context

Raw numbers are useless without context. Always attach:

function enrichMetric(entry) {
  return {
    ...entry.toJSON(),
    url: location.href,
    userAgent: navigator.userAgent,
    connection: navigator.connection?.effectiveType,
    deviceMemory: navigator.deviceMemory,
    hardwareConcurrency: navigator.hardwareConcurrency,
    viewport: `${innerWidth}x${innerHeight}`,
    timestamp: Date.now(),
  };
}

This lets you segment metrics by device type, connection speed, and page — revealing that your "good average LCP" hides terrible P95 scores on budget Android phones over 3G.
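On the server side (or over exported data), the segmentation itself is only a few lines. A sketch — `p95ByConnection` is a hypothetical helper operating on enriched metric objects like the one above:

```javascript
// Hypothetical sketch: bucket metric values by connection type and
// report a nearest-rank P95 per bucket.
function p95ByConnection(metrics) {
  const buckets = new Map();
  for (const m of metrics) {
    const key = m.connection ?? 'unknown';
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(m.value);
  }
  const result = {};
  for (const [key, values] of buckets) {
    const sorted = values.slice().sort((a, b) => a - b);
    result[key] = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  }
  return result;
}
```

The same shape works for any dimension you attached in enrichMetric — deviceMemory, viewport, or page path.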

The web-vitals library

Google's web-vitals library wraps all of the above into a simple API: onLCP(sendToAnalytics), onINP(sendToAnalytics), onCLS(sendToAnalytics). It handles the edge cases: page visibility changes (metrics should only measure visible pages), back/forward cache restoration, and correct aggregation; experimental support for soft navigations in SPAs is also available. Unless you need custom metrics beyond the standard Web Vitals, use this library instead of building your own observers. It is ~2KB gzipped and maintained by the Chrome team.

Custom Metrics

Beyond Web Vitals, define metrics specific to your application:

performance.mark('search-query-start');
const results = await searchAPI(query);
performance.mark('search-results-received');
renderResults(results);
performance.mark('search-results-rendered');

performance.measure('Search API Latency',
  'search-query-start', 'search-results-received');
performance.measure('Search Render Time',
  'search-results-received', 'search-results-rendered');
performance.measure('Search Total Time',
  'search-query-start', 'search-results-rendered');

Custom metrics track what matters to your users. "Time from search keystroke to visible results" is more actionable than generic Web Vitals for a search-heavy app.

Key Rules
  1. Use performance.mark() and performance.measure() to instrument critical user flows in production
  2. Always pass buffered: true to PerformanceObserver — without it you miss events that fired before the observer was created
  3. Use navigator.sendBeacon() for analytics — the browser queues it for delivery even during page unload
  4. Enrich metrics with device context (connection type, memory, viewport) to segment performance by user conditions
  5. Use the web-vitals library for standard Core Web Vitals — it handles edge cases you will miss

| What developers do | Why it's a problem | What they should do |
| --- | --- | --- |
| Forgetting buffered: true on PerformanceObserver | LCP, FCP, and navigation entries often fire before your JavaScript observer is set up | Always include buffered: true to replay entries fired before the observer was registered |
| Computing INP manually without grouping by interactionId | A single click generates multiple event entries (pointerdown, pointerup, click) — counting them separately inflates the same interaction multiple times | Use the web-vitals library, or group entries by interactionId and take the max duration per interaction |
| Using fetch() without keepalive for analytics on page unload | Regular fetch requests are cancelled when the page unloads | Use navigator.sendBeacon() or fetch with keepalive: true — both survive page unload |
| Only tracking average metrics | A P50 LCP of 1.5s looks great, but if P95 is 8s, 5% of your users are having a terrible experience | Track P50, P75, P95, and P99 — averages hide outliers that affect real users |