Allocation Timeline and Sampling
Beyond Snapshots: Watching Memory in Real Time
Heap snapshots are powerful, but they are photographs — frozen moments. They tell you what is in memory, but not when it got there or how fast it is accumulating.
That is where allocation profiling comes in. Chrome DevTools gives you two allocation profiling tools, each with a different tradeoff between accuracy and overhead.
If a heap snapshot is like a census ("count everyone right now"), then the allocation timeline is like a security camera ("record everyone entering and leaving over time"). The allocation sampler is like a bouncer doing random spot-checks ("sample a fraction of people entering"). The camera catches everything but slows down the door. The bouncer is faster but might miss some entrants.
Allocation Instrumentation Timeline
This is the "security camera" tool. It records every allocation as it happens, showing you a timeline of memory activity.
How to Use It
- Open Chrome DevTools → Memory tab
- Select "Allocation instrumentation on timeline"
- Click Start
- Perform the actions you want to profile
- Click Stop
The result is a timeline with blue and grey bars at the top and a snapshot-like view at the bottom.
Reading the Bars
| Bar Color | Meaning | Action |
|---|---|---|
| Blue | Objects allocated at this point in time that are STILL alive when you stopped recording | These are your leak suspects — they were allocated and never freed |
| Grey | Objects allocated at this point in time that WERE garbage collected before you stopped | These are normal — allocated temporarily and properly cleaned up |
A healthy app shows mostly grey bars — objects are created and collected naturally. A leaking app shows persistent blue bars, especially if they accumulate over time.
Clicking into the Timeline
You can click on any blue bar (or drag to select a time range) to see exactly which objects were allocated during that window. The bottom pane shows them grouped by constructor, just like the Summary view in a heap snapshot.
This is incredibly powerful for answering: "what specific action caused these allocations?"
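To make that concrete, here is a minimal sketch of the kind of code that produces persistent blue bars. The names `detailCache` and `onRowClick` are hypothetical, invented for illustration:

```javascript
// Hypothetical leak: every click allocates a sizable array and stores it
// in a module-level cache that is never trimmed. Each click's allocation
// stays reachable, so on the allocation timeline each click produces a
// blue bar that never turns grey.
const detailCache = [];

function onRowClick(rowId) {
  // A fresh allocation per click, retained forever by detailCache
  const rendered = new Array(10000).fill(`row-${rowId}`);
  detailCache.push(rendered);
}

// Simulate a user clicking through 50 rows while recording
for (let i = 0; i < 50; i++) onRowClick(i);
```

Recording the timeline while performing these clicks would likely show one blue bar per click, and selecting that time range would group the retained objects under the Array constructor in the bottom pane.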
Overhead Warning
The allocation instrumentation timeline tracks every allocation. This has significant performance overhead — your app will run noticeably slower during recording. The GC behavior may also change because of the instrumentation.
Use this tool for:
- Short, targeted investigations (30 seconds to 2 minutes)
- Pinpointing exactly when a leak occurs during a user flow
- Identifying which constructor types are being leaked
Do not use this for:
- Long-running sessions (the overhead skews results)
- Performance benchmarking (the tool itself causes slowdowns)
- Production monitoring (too heavy)
Allocation Sampling Profiler
This is the "bouncer doing spot-checks" tool. Instead of tracking every allocation, it records a statistical sample, capturing a call stack roughly once per fixed amount of allocated memory.
How to Use It
- Open Chrome DevTools → Memory tab
- Select "Allocation sampling"
- Click Start
- Perform the actions you want to profile
- Click Stop
The result looks like a flame chart or call tree — it shows you which functions are responsible for the most allocations.
What It Shows
Unlike the instrumentation timeline (which shows what was allocated and when), the sampling profiler shows where allocations happen in your code:
- Self size — memory allocated directly by this function
- Total size — memory allocated by this function and everything it calls
- Function name and location — the exact file and line number
| Feature | Allocation Timeline | Allocation Sampling |
|---|---|---|
| Tracks | Every allocation | Statistical sample of allocations |
| Output | Timeline with blue/grey bars + object list | Call tree showing which functions allocate most |
| Overhead | High — noticeably slows the app | Low — minimal performance impact |
| Best for | Finding WHEN leaks happen | Finding WHERE (which code) allocates most |
| Duration | Short sessions (under 2 minutes) | Longer sessions (minutes to hours) |
| Precision | 100% accurate — sees everything | Statistical — may miss infrequent allocations |
Reading the Sampling Results
The results appear in three views:
Chart view — A flame chart where each bar represents a function. Width corresponds to allocation size. Read bottom to top — the bottom function called the function above it.
Heavy (Bottom Up) — Sorted by which functions directly allocated the most memory. Start here to find the hotspots.
Tree (Top Down) — Shows the call tree from entry points down. Useful for understanding the allocation path — which user action triggers which function chain.
The sampling profiler can miss allocations that happen very quickly and infrequently. If a function allocates a 100KB object once during a 10-minute session, the sampler might not catch it. But if a function allocates 100 bytes every millisecond (totaling 60MB over 10 minutes), the sampler will definitely find it. The profiler is optimized for finding hot allocation paths, not rare one-off allocations. For rare allocations, use heap snapshot comparison instead.
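A back-of-envelope sketch of that math, assuming the roughly 512KB average sampling interval mentioned under the hood below. With Poisson sampling, the probability that an allocation of a given size contains at least one sample point is 1 − e^(−size / interval):

```javascript
// Probability that an allocation of `sizeBytes` is sampled at least once,
// assuming Poisson sampling with a ~512 KB average interval.
const INTERVAL = 512 * 1024;

function sampleProbability(sizeBytes) {
  return 1 - Math.exp(-sizeBytes / INTERVAL);
}

// A one-off 100 KB allocation: easy to miss.
const oneOff = sampleProbability(100 * 1024); // ≈ 0.18

// 100 bytes every millisecond for 10 minutes: ~60 MB total.
// The expected number of samples is ~60 MB / 512 KB, far more than one,
// so the hot path is effectively guaranteed to show up.
const hotPathBytes = 100 * 600000;
const expectedSamples = hotPathBytes / INTERVAL; // ≈ 114
```

This is why the profiler reliably finds sustained allocation pressure but can overlook a single rare allocation.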
The Memory Panel Workflow
Choosing the right tool comes down to the question you are asking: heap snapshot comparison confirms that a leak exists, the allocation timeline shows when it happens during a user flow, and allocation sampling shows which code is allocating the most.
Combining Tools
The most effective workflow combines all three:
- Heap snapshot comparison to confirm a leak exists (three-snapshot technique)
- Allocation timeline to pinpoint when during your user flow the leak occurs
- Allocation sampling to find the exact function and line number responsible
- Heap snapshot retaining path to understand why the leaked object is not being collected
Under the hood: how allocation sampling works
The allocation sampling profiler uses a technique called Poisson sampling. V8 intercepts allocation calls and randomly decides whether to record each one based on a sampling interval (defaulting to about 512KB of allocation between samples). When it decides to record, it captures the full call stack at that moment. Over time, the statistical distribution accurately represents which functions are responsible for the most total allocation. The mathematical guarantee is that functions allocating more memory are proportionally more likely to be sampled. This is the same technique used by Google-wide profiling tools like pprof.
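The mechanism can be sketched as a toy model (deliberately simplified, not V8's actual implementation): distances between sample points are drawn from an exponential distribution, so every allocated byte has the same independent chance of being a sample point.

```javascript
// Toy model of Poisson allocation sampling (not V8's real code).
const SAMPLING_INTERVAL = 512 * 1024; // average bytes between samples

// Exponential distribution with mean SAMPLING_INTERVAL
function nextSampleDistance() {
  return -Math.log(1 - Math.random()) * SAMPLING_INTERVAL;
}

let bytesUntilSample = nextSampleDistance();
const samples = []; // recorded { size, stack } entries

function onAllocation(sizeBytes, stack) {
  bytesUntilSample -= sizeBytes;
  // Large allocations can cross several sample points at once
  while (bytesUntilSample <= 0) {
    samples.push({ size: sizeBytes, stack }); // capture the call stack here
    bytesUntilSample += nextSampleDistance();
  }
}

// Simulate a hot path allocating 1 KB ten thousand times (~10 MB total)
// versus a rare 4 KB one-off allocation.
for (let i = 0; i < 10000; i++) onAllocation(1024, 'hotPath');
onAllocation(4096, 'rareAllocation');
// On average ~20 samples land in the hot path's 10 MB, while the 4 KB
// one-off is sampled with probability under 1%.
```

The weighting falls out naturally: a function that allocates ten times as many bytes crosses ten times as many sample points on average, so its share of the recorded stacks matches its share of total allocation.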
1. Use the allocation timeline for short, targeted leak investigations: it tracks every allocation but has high overhead
2. Use allocation sampling for longer sessions or for finding hot allocation paths: it has low overhead but may miss rare allocations
3. Blue bars in the timeline mean objects still alive; grey means properly collected
4. Combine tools: snapshots to confirm, timeline to locate in time, sampling to find the code, retaining paths to find the reference
| What developers do | What they should do |
|---|---|
| Running the allocation instrumentation timeline for long sessions. The instrumentation overhead skews results and makes the app sluggish, which changes GC behavior | Keep instrumentation sessions under 2 minutes; use sampling for longer runs |
| Using only one memory tool for the entire investigation. Each tool answers a different question: what is leaking, when it leaks, and which code is responsible | Combine heap snapshots, the allocation timeline, and allocation sampling for a complete picture |
| Assuming all blue bars are leaks. Blue means "still alive when recording stopped", not necessarily "should have been collected" | Remember that some objects are intentionally long-lived (caches, app state, singletons) |
| Ignoring the Bottom Up view in the sampling profiler. The Top Down view shows call paths, but Bottom Up immediately surfaces the actual hot spots | Start with Bottom Up to find the functions with the highest direct allocation |