Memory Budgets and Leak Prevention Strategies
Reactive vs Proactive Memory Management
Most developers manage memory reactively: wait until the app gets slow, take a heap snapshot, find the leak, patch it. That works for individual bugs, but it doesn't prevent the next one. You're always playing whack-a-mole.
Proactive memory management is different. You build structural guarantees into your architecture so that entire categories of leaks become impossible. You know how much memory each feature should use, and you have systems that enforce those limits.
The difference? Reactive debugging finds one leak. Proactive architecture prevents a hundred.
Think of memory like a household budget. You don't wait until you're bankrupt to start tracking spending. You allocate budgets per category (rent, food, entertainment), set limits, and review monthly. If any category exceeds its budget, you investigate immediately — not when the bank calls.
Setting Memory Budgets
A memory budget defines the maximum memory a feature or component is allowed to use. It transforms the vague "the app is slow" into specific, testable constraints you can actually act on.
Budget Categories
| Category | Typical Budget | What to Measure |
|---|---|---|
| Initial page load JS heap | < 10MB | Heap snapshot after first paint |
| Single route/page | 5-15MB delta | Heap diff: navigate to page vs navigate away |
| Component instance | 50KB-500KB | Heap diff: mount vs unmount |
| Cache (in-memory) | 5-20MB with cap | Map/Array size tracking |
| Image/media cache | 50-100MB with eviction | Total decoded image bytes |
| WebSocket connection state | 1-5MB | Message buffer size tracking |
| Single modal/dialog lifecycle | 0MB delta after close | Heap diff: open modal, close modal, force GC |
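A budget table like this is most useful when it's machine-checkable. Here's one way to encode it, as a sketch — the feature names and byte values below are illustrative, not prescriptive:

```javascript
// Hypothetical budget manifest: max bytes per feature, enforced in tests/CI
const MEMORY_BUDGETS = {
  'route:dashboard': 15 * 1024 * 1024,   // single route/page
  'component:data-grid': 500 * 1024,     // component instance
  'cache:api-responses': 20 * 1024 * 1024, // in-memory cache
};

function assertBudget(name, measuredBytes) {
  const budget = MEMORY_BUDGETS[name];
  if (budget === undefined) throw new Error(`No budget defined for "${name}"`);
  if (measuredBytes > budget) {
    throw new Error(
      `${name}: ${(measuredBytes / 1024).toFixed(0)}KB exceeds budget of ` +
      `${(budget / 1024).toFixed(0)}KB`
    );
  }
}
```

Calling assertBudget from a test makes a budget violation fail the build instead of surfacing months later as "the app feels slow."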
Measuring Against Budgets
// Simple memory measurement utility
function measureMemory() {
if (globalThis.crossOriginIsolated && performance.measureUserAgentSpecificMemory) {
// Chrome 89+: more accurate, async; only usable in cross-origin
// isolated contexts (COOP/COEP headers set)
return performance.measureUserAgentSpecificMemory();
}
// Fallback: non-standard, Chrome-only, and less accurate
if (performance.memory) {
return {
bytes: performance.memory.usedJSHeapSize,
breakdown: []
};
}
return null;
}
// Check budget after mounting a feature (relies on Chrome's non-standard performance.memory)
async function checkBudget(featureName, budgetBytes) {
// Force GC if available (DevTools must be open, or use --expose-gc) so the
// baseline isn't inflated by garbage left over from earlier work
if (globalThis.gc) globalThis.gc();
const before = performance.memory?.usedJSHeapSize;
// ... mount the feature ...
if (globalThis.gc) globalThis.gc();
const after = performance.memory?.usedJSHeapSize;
if (before === undefined || after === undefined) return; // API not available
const delta = after - before;
if (delta > budgetBytes) {
console.warn(
`[Memory Budget] ${featureName}: ${(delta / 1024).toFixed(0)}KB exceeds budget of ${(budgetBytes / 1024).toFixed(0)}KB`
);
}
}
Use Playwright or Puppeteer to take heap snapshots during e2e tests. Compare heap sizes before and after key user flows (open/close modals, navigate routes, scroll lists). Fail the build if the delta exceeds the budget. This catches leaks before they reach production.
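The pass/fail logic for such a check is simple enough to unit-test on its own. Below is a sketch: the helper is hypothetical, and the commented Playwright/CDP calls are a starting point for the measurement side, not drop-in test code:

```javascript
// Hypothetical helper: compare two heap measurements against a budget
function checkHeapDelta(beforeBytes, afterBytes, budgetBytes) {
  const delta = afterBytes - beforeBytes;
  return { delta, withinBudget: delta <= budgetBytes };
}

// Measurement side, sketched for a Playwright Chromium test:
//   const client = await page.context().newCDPSession(page);
//   await client.send('Performance.enable');
//   const { metrics } = await client.send('Performance.getMetrics');
//   const heap = metrics.find((m) => m.name === 'JSHeapUsedSize').value;
// Take one reading before the flow, one after, then assert
// checkHeapDelta(before, after, budget).withinBudget in the test.
```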
Object Pooling: Reuse Instead of Allocate
Object pooling sounds like premature optimization, and usually it is. But in hot paths (animation frames, game loops, scroll handlers, audio processing), creating and discarding objects at 60fps generates heavy GC pressure. Object pooling pre-allocates a fixed set of objects and reuses them instead.
class ObjectPool {
#pool = [];
#factory;
#reset;
constructor(factory, reset, initialSize = 20) {
this.#factory = factory;
this.#reset = reset;
// Pre-allocate
for (let i = 0; i < initialSize; i++) {
this.#pool.push(factory());
}
}
acquire() {
if (this.#pool.length > 0) {
return this.#pool.pop();
}
// Pool exhausted — create a new one (consider logging this)
return this.#factory();
}
release(obj) {
this.#reset(obj); // clean the object for reuse
this.#pool.push(obj); // return to pool
}
}
// Usage: particle system in an animation
const particlePool = new ObjectPool(
() => ({ x: 0, y: 0, vx: 0, vy: 0, life: 0, active: false }),
(p) => { p.x = 0; p.y = 0; p.vx = 0; p.vy = 0; p.life = 0; p.active = false; },
200
);
function spawnParticle(x, y) {
const p = particlePool.acquire();
p.x = x;
p.y = y;
p.vx = Math.random() * 2 - 1;
p.vy = Math.random() * -3;
p.life = 60;
p.active = true;
return p;
}
function updateParticles(particles) {
for (let i = particles.length - 1; i >= 0; i--) {
const p = particles[i];
if (!p.active) continue;
p.x += p.vx;
p.y += p.vy;
p.life--;
if (p.life <= 0) {
p.active = false;
particlePool.release(p);
// Swap-remove instead of splice: splice allocates an array of the
// removed elements, which would defeat the zero-allocation goal
particles[i] = particles[particles.length - 1];
particles.pop();
}
}
}
// Zero allocations in the steady state — no GC pauses during animation
When object pooling is not worth it
Object pooling adds complexity: pool management, reset logic, pool sizing, potential for use-after-release bugs. Only use it when:
- You're allocating hundreds+ objects per second in a sustained hot loop
- GC pauses from those allocations are measurably affecting frame rate or latency
- The objects have a consistent shape (same properties, same types)
Don't pool objects for normal application code. V8's Young Generation Scavenger is already optimized for short-lived objects. Pool when you've measured a problem, not as a premature optimization.
Subscription Cleanup in React
This is the big one. React components that subscribe to external data sources (event listeners, WebSocket connections, intervals, Observers) are the single biggest source of leaks in modern frontend apps. The pattern is always the same: subscribe on mount, forget to unsubscribe on unmount.
The useEffect Cleanup Pattern
import { useState, useEffect } from 'react';

function useWindowSize() {
const [size, setSize] = useState({ width: 0, height: 0 });
useEffect(() => {
function handleResize() {
setSize({ width: window.innerWidth, height: window.innerHeight });
}
handleResize(); // set initial value
window.addEventListener('resize', handleResize);
// CLEANUP: this function runs on unmount (and before re-running the effect)
return () => {
window.removeEventListener('resize', handleResize);
};
}, []);
return size;
}
AbortController for Multiple Subscriptions
When a component has many subscriptions, AbortController simplifies cleanup to a single call:
function useLiveData(endpoint) {
const [data, setData] = useState(null);
useEffect(() => {
const controller = new AbortController();
const { signal } = controller;
// Event listener — auto-removed on abort
function handleVisibility() {
if (!document.hidden) fetchData(); // refresh when the tab becomes visible again
}
document.addEventListener('visibilitychange', handleVisibility, { signal });
// Fetch — auto-cancelled on abort
async function fetchData() {
try {
const res = await fetch(endpoint, { signal });
const json = await res.json();
setData(json);
} catch (e) {
if (e.name !== 'AbortError') throw e;
}
}
fetchData();
// Interval — manual cleanup but tied to abort signal
const intervalId = setInterval(fetchData, 30_000);
signal.addEventListener('abort', () => clearInterval(intervalId));
// Single cleanup for everything
return () => controller.abort();
}, [endpoint]);
return data;
}
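The setInterval pattern above generalizes: any imperative cleanup can be tied to an AbortSignal. A tiny helper (hypothetical, not a standard API) makes that reusable:

```javascript
// Run `cleanup` when `signal` aborts; run immediately if it already has.
function onAbort(signal, cleanup) {
  if (signal.aborted) {
    cleanup();
    return;
  }
  signal.addEventListener('abort', cleanup, { once: true });
}

// Usage inside an effect:
//   onAbort(signal, () => clearInterval(intervalId));
//   onAbort(signal, () => observer.disconnect());
```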
IntersectionObserver / ResizeObserver / MutationObserver
function useIntersection(ref, options) {
const [isVisible, setIsVisible] = useState(false);
useEffect(() => {
const element = ref.current;
if (!element) return;
const observer = new IntersectionObserver(([entry]) => {
setIsVisible(entry.isIntersecting);
}, options);
observer.observe(element);
return () => {
observer.disconnect(); // MUST disconnect on cleanup
};
}, [ref, options]); // note: pass a stable/memoized options object, or this effect re-runs every render
return isVisible;
}
A subtle React leak: setting state on an unmounted component. Before React 18, this caused a warning. Now React silently drops the update, but the async operation that triggers it is still running and holding its closure alive. Always cancel async work in cleanup — don't just ignore the warning.
// LEAK: fetch continues after unmount, closure stays alive
useEffect(() => {
fetch('/api/data')
.then(r => r.json())
.then(data => setData(data)); // still runs after unmount
}, []);
// FIX: cancel the request
useEffect(() => {
const controller = new AbortController();
fetch('/api/data', { signal: controller.signal })
.then(r => r.json())
.then(data => setData(data))
.catch(e => { if (e.name !== 'AbortError') throw e; });
return () => controller.abort();
}, []);
Architecture-Level Prevention
Beyond individual cleanup patterns, there are architectural decisions that prevent entire categories of leaks. This is where you go from "good developer" to "the person whose code just doesn't leak."
1. Centralize Event Management
// Instead of each component managing its own window listeners:
class EventHub {
#abortController = new AbortController();
on(event, handler) {
window.addEventListener(event, handler, {
signal: this.#abortController.signal
});
}
destroy() {
this.#abortController.abort(); // removes ALL listeners at once
}
}
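The same idea works for any EventTarget, not just window. Here's a variant as a sketch (the class name is mine, not from a library):

```javascript
// Scope all listeners on one target to a single lifetime
class ScopedEvents {
  #controller = new AbortController();
  constructor(target) {
    this.target = target; // any EventTarget: window, a WebSocket, a DOM node
  }
  on(event, handler) {
    this.target.addEventListener(event, handler, {
      signal: this.#controller.signal,
    });
  }
  destroy() {
    this.#controller.abort(); // removes every listener added through on()
  }
}
```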
2. Use Disposal Pattern for Resources
// Every class that holds resources implements a dispose method
class DataStream {
#ws;
#intervalId;
#controller = new AbortController();
connect(url) {
this.#ws = new WebSocket(url);
this.#intervalId = setInterval(() => this.#ping(), 30_000);
document.addEventListener('visibilitychange', this.#onVisibility, {
signal: this.#controller.signal
});
}
#ping() {
this.#ws?.send('ping'); // keepalive; payload is illustrative
}
#onVisibility = () => {
// Illustrative handler: reconnect/backoff logic would go here
console.debug('visibility:', document.visibilityState);
};
dispose() {
this.#controller.abort();
clearInterval(this.#intervalId);
this.#ws?.close();
this.#ws = null;
}
}
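When a class acquires resources in several places, collecting the cleanups as you go is less error-prone than remembering them all in dispose(). A minimal "disposer bag" sketch (the helper name is mine):

```javascript
// Collect cleanup callbacks as resources are acquired; run them all with one call.
class Disposer {
  #cleanups = [];
  add(cleanup) {
    this.#cleanups.push(cleanup);
  }
  dispose() {
    // Tear down in reverse order of acquisition, and keep going
    // even if one cleanup throws
    while (this.#cleanups.length > 0) {
      try {
        this.#cleanups.pop()();
      } catch (e) {
        console.error('cleanup failed', e);
      }
    }
  }
}

// Usage: disposer.add(() => clearInterval(id));
//        disposer.add(() => ws.close());
//        ...later: disposer.dispose();
```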
3. Bound Every Collection
// Rule: No Map, Set, or Array grows without a maximum size
const MAX_CACHE = 500;
const MAX_HISTORY = 100;
const MAX_LOG_ENTRIES = 1000;
class BoundedMap extends Map {
#maxSize;
constructor(maxSize) {
super();
this.#maxSize = maxSize;
}
set(key, value) {
super.set(key, value);
if (this.size > this.#maxSize) {
// FIFO eviction: remove the oldest entry (first key in insertion order).
// True LRU would also refresh a key's position on get().
const oldest = this.keys().next().value;
this.delete(oldest);
}
return this;
}
}
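BoundedMap evicts in insertion (FIFO) order. A least-recently-used variant is a small step further; here's a sketch, relying on the spec-guaranteed insertion-order iteration of Map:

```javascript
// LRU-bounded Map: reading a key refreshes it; eviction drops the
// least recently used entry
class LRUMap extends Map {
  #maxSize;
  constructor(maxSize) {
    super();
    this.#maxSize = maxSize;
  }
  get(key) {
    if (!super.has(key)) return undefined;
    const value = super.get(key);
    super.delete(key);
    super.set(key, value); // move to most-recently-used position
    return value;
  }
  set(key, value) {
    if (super.has(key)) super.delete(key); // refresh position on overwrite
    super.set(key, value);
    if (this.size > this.#maxSize) {
      this.delete(this.keys().next().value); // evict least recently used
    }
    return this;
  }
}
```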
4. Leak Detection in Development
// Track mount/unmount pairs in development.
// Caveat: two simultaneous instances of the same component will also trigger
// this warning, as will React StrictMode's double-mounting in dev builds.
const mountedComponents = new Set();
function useLeakDetector(componentName) {
useEffect(() => {
if (mountedComponents.has(componentName)) {
console.warn(
`[Leak Detector] ${componentName} mounted multiple times without unmount. Possible leak.`
);
}
mountedComponents.add(componentName);
return () => {
mountedComponents.delete(componentName);
};
}, [componentName]);
}
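A counter-based variant avoids the false positives of the Set approach: it tolerates multiple legitimate instances and only warns when the live count keeps growing, which usually means unmount cleanup isn't running. A sketch (the helper and its default threshold are my own, to be tuned per component):

```javascript
// Count live instances per component name; warn past a threshold
const liveCounts = new Map();

function trackMount(componentName, threshold = 20) {
  const next = (liveCounts.get(componentName) ?? 0) + 1;
  liveCounts.set(componentName, next);
  if (next > threshold) {
    console.warn(
      `[Leak Detector] ${componentName}: ${next} live instances (cleanup may not be running)`
    );
  }
  // Return the decrement; call it from the effect's cleanup function
  return () => liveCounts.set(componentName, liveCounts.get(componentName) - 1);
}
```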
| What developers do | What they should do |
|---|---|
| No memory budget; only investigating when the app is visibly slow. Leaks accumulate slowly, so by the time the app is noticeably slow, there are many leaks to find. | Set explicit memory budgets per feature. Measure in CI. Fail the build on budget violations. |
| Premature object pooling everywhere. Pooling adds complexity and maintenance burden; it's only worth it when GC pauses measurably affect UX. | Only pool objects in measured hot paths (60fps loops, audio processing). V8's nursery handles normal allocations efficiently. |
| useEffect without a cleanup return. Subscriptions without cleanup are the most common source of leaks in React apps. | Every useEffect that subscribes to anything must return a cleanup function. |
| Growing caches, histories, and logs without bounds. Unbounded collections grow linearly with usage time and will eventually exhaust memory. | Every in-memory collection must have a maximum size and an eviction strategy. |
1. Set explicit memory budgets per feature. Measure them in automated tests. Fail the build on violations.
2. Every useEffect that creates a subscription must return a cleanup function. No exceptions.
3. Use AbortController to batch-cancel multiple subscriptions (event listeners, fetch requests, intervals) with a single abort() call.
4. Bound every in-memory collection: maximum size + eviction strategy (LRU, FIFO, or WeakMap).
5. Only use object pooling when GC pauses are measured to cause problems in hot paths. Don't prematurely optimize.
6. Centralize resource management: EventHub, disposal pattern, and development-mode leak detectors catch entire categories of bugs.