V8 Ignition and TurboFan Pipeline
Why Is the First Call Always Slow?
Run this benchmark and the numbers will make no sense at first:
```javascript
function add(a, b) { return a + b; }

// First 100 calls: ~0.8ms total
for (let i = 0; i < 100; i++) add(i, i);

// Next 100,000 calls: ~0.3ms total
for (let i = 0; i < 100000; i++) add(i, i);
```
100,000 calls take less time than the first 100. That makes no sense — unless you understand what V8 is doing behind your back. The first calls run through an interpreter. Once V8 decides the function is "hot," it compiles it to native machine code. After that, it runs at near-C++ speed.
This is the compilation pipeline, and once you understand it, you'll never think about JavaScript performance the same way again.
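You can reproduce the effect with a quick timing harness. This is a sketch: absolute numbers vary wildly by machine and V8 version, and only the ratio between the two measurements is meaningful.

```javascript
// Sketch: time cold calls vs. warm calls of the same function.
// Cold calls run in the interpreter while V8 collects type feedback;
// warm calls mostly run tiered-up machine code.
function add(a, b) { return a + b; }

function time(label, iterations) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) add(i, i);
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(3)}ms`);
  return elapsed;
}

const cold = time("first 100 calls", 100);
const warm = time("next 100,000 calls", 100000);
```

Run it a few times: the second measurement covers 1,000x more calls but rarely costs 1,000x the time.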
The Pipeline at 30,000 Feet
Think of V8 like a restaurant kitchen with two stations. Ignition is the fast-prep station — it can start serving food immediately, but the plates are basic. TurboFan is the gourmet station — it takes longer to prepare, but the result is vastly superior. The kitchen watches what diners order most frequently. When a dish becomes popular, it moves production to the gourmet station. If a diner suddenly orders something unexpected (a type change), the gourmet station has to throw out its specialized recipe and send the order back to fast-prep.
Here's the full journey your JavaScript source code takes:
- Parser reads source text, produces an Abstract Syntax Tree (AST)
- Ignition (interpreter) walks the AST, generates compact bytecode, starts executing immediately
- Sparkplug (baseline compiler, V8 9.1+) compiles bytecode to unoptimized machine code with zero analysis — a quick speedup over the interpreter
- Maglev (mid-tier compiler, V8 11.3+) generates mildly optimized code using type feedback, bridging the gap between Sparkplug and TurboFan
- TurboFan (optimizing compiler) uses type feedback collected during interpretation to generate highly optimized machine code with speculative optimizations
This is tiered compilation: each successive tier takes longer to compile but produces faster code.
Parsing: Source Text to AST
Here's something most people don't realize: when V8 first encounters a script, it doesn't parse everything immediately. It uses lazy parsing (also called pre-parsing):
```javascript
function outerFunction() {
  // V8 fully parses this — it's called immediately
  return innerFunction();
}

function innerFunction() {
  // V8 pre-parses this — notes it exists, skips the body
  // Full parse happens only when innerFunction() is actually called
  return "expensive computation";
}

outerFunction(); // Now innerFunction gets fully parsed
```
Pre-parsing is roughly 2x faster than full parsing. For large codebases, this means V8 only pays the full parsing cost for code that actually executes.
Parsing JavaScript on a mid-range mobile device costs approximately 1ms per 10KB of minified code. A 500KB bundle means roughly 50ms of parse time before a single line executes. This is why code splitting and lazy loading matter: they aren't just about network transfer; they directly reduce parse cost.
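A related heuristic worth knowing (historically true of V8, though heuristics change between versions): a function expression that begins with an open parenthesis is treated as "probably invoked immediately" and parsed eagerly, skipping the pre-parse step. This was the basis of the old optimize-js build tool. A minimal sketch, with illustrative function names:

```javascript
// Pre-parsed: V8 notes the function exists and skips the body for now.
const lazyInit = function () {
  return { ready: true };
};

// Parenthesized ("PIFE" pattern): historically hinted for eager full parse.
const eagerInit = (function () {
  return { ready: true };
});

console.log(lazyInit().ready, eagerInit().ready);
```

Both run identically; the difference is only when V8 pays the full parse cost.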
The parser produces an AST. For `function add(a, b) { return a + b; }`, the AST looks roughly like:

```
FunctionDeclaration "add"
  Parameters: [a, b]
  Body:
    ReturnStatement
      BinaryExpression (+)
        Left: Variable "a"
        Right: Variable "b"
```
Ignition: AST to Bytecode
Ignition is V8's interpreter. It walks the AST and generates bytecode: a compact, register-machine instruction set built around an accumulator. For our add function, it looks roughly like this:

```
Ldar a1       // Load parameter 'b' into the accumulator
Add a0, [0]   // Add parameter 'a'; record type feedback in slot [0]
Return        // Return the accumulator value
```
Ignition bytecodes are much smaller than machine code (typically 25-50% the size of equivalent x64 instructions), which matters for memory. A large application might have tens of thousands of functions — keeping them all as machine code would consume gigabytes.
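You can dump Ignition's actual output for a function yourself. `--print-bytecode` and `--print-bytecode-filter` are real Node/V8 flags, though the exact listing varies by V8 version:

```javascript
// add.js — run with:
//   node --print-bytecode --print-bytecode-filter=add add.js
// to see the bytecode Ignition generates for this function.
function add(a, b) { return a + b; }

console.log(add(1, 2));
```

The filter flag matters: without it, V8 prints bytecode for every function in the script, including Node's own internals.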
But here's what most people miss: the critical job Ignition does beyond execution is collecting type feedback. Every operation records what types it has seen into a data structure called a FeedbackVector. This is the intelligence that TurboFan uses to speculate.
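A sketch of how call-site feedback widens over time (the mechanism is internal to V8; this code just shows the calling patterns that produce it):

```javascript
function add(a, b) { return a + b; }

// Monomorphic feedback: the Add site has only ever seen Smi + Smi,
// so TurboFan can specialize it to a native integer add.
for (let i = 0; i < 1000; i++) add(i, 1);

// One stray call widens the feedback to Smi|String. The site is now
// polymorphic, and any optimized code must handle both cases (or deopt).
const s = add("foo", "bar");
```

Keeping call sites monomorphic is one of the few optimization levers you directly control from JavaScript.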
Sparkplug and Maglev: The Middle Tiers
Before TurboFan kicks in, there are two more compilation tiers you should know about:
Sparkplug (V8 9.1+) compiles Ignition bytecode 1:1 into machine code. No optimization, no analysis — it literally walks each bytecode and emits the equivalent native instructions. It runs roughly 2x faster than the interpreter because it eliminates dispatch overhead.
Maglev (V8 11.3+) sits between Sparkplug and TurboFan. It builds a simplified graph from bytecode, uses the type feedback from the FeedbackVector, and applies basic optimizations (common subexpression elimination, register allocation). Maglev compiles 10-20x faster than TurboFan while producing code that's 50-80% as fast.
The tiers work together:
| Tier | Compilation Speed | Execution Speed | When Used |
|---|---|---|---|
| Ignition | Instant | 1x (baseline) | All functions start here |
| Sparkplug | Very fast | ~2x | After first call |
| Maglev | Fast | ~5-10x | After ~100 calls with stable types |
| TurboFan | Slow (1-10ms) | ~20-100x | Hot functions with stable type feedback |
You might think "just optimize everything with TurboFan immediately." TurboFan compilation for a single function can take 1-10ms. If you compiled all 10,000 functions in a medium app with TurboFan at startup, you'd freeze the main thread for 10-100 seconds. Tiered compilation exists because most code only runs once or twice — spending optimization time on it wastes more time than it saves.
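You can watch these tiering decisions as they happen. `--trace-opt` is a real Node/V8 flag (its log format varies by version); the hot loop below is purely illustrative:

```javascript
// tiers.js — run with:
//   node --trace-opt tiers.js
// to see V8 log when it tiers this function up.
function sumTo(n) {
  let s = 0;
  for (let i = 0; i < n; i++) s += i;
  return s;
}

// Enough calls with stable types to make the function "hot".
let total = 0;
for (let i = 0; i < 20000; i++) total += sumTo(100);
console.log(total);
```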
TurboFan: Speculative Optimization
Now we get to the crown jewel. TurboFan takes the type feedback from the FeedbackVector and literally bets on it. If add(a, b) has only ever been called with small integers (SMIs), TurboFan generates machine code that:
- Checks whether both arguments are SMIs (a single bit test)
- Adds them with a native `addl` instruction
- Checks for integer overflow
- Returns the result
No boxing, no type dispatch, no property lookups. This is why it approaches C++ speed.
```javascript
// TurboFan sees: add() always receives integers
// It generates something like this pseudocode:
// OPTIMIZED add(a, b):
//   if (!isSmi(a)) goto DEOPTIMIZE
//   if (!isSmi(b)) goto DEOPTIMIZE
//   result = a + b (native integer add)
//   if (overflow) goto DEOPTIMIZE
//   return result
//
// DEOPTIMIZE:
//   restore interpreter state
//   jump back to Ignition
```
Those DEOPTIMIZE guards are critical. If the speculation is wrong — someone calls add("hello", "world") — V8 bails out of the optimized code and falls back to the interpreter. This is called deoptimization, and it's expensive.
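A minimal way to trigger the bailout yourself. `--trace-deopt` is a real Node/V8 flag whose log format varies by version; the loop counts here are illustrative:

```javascript
// deopt.js — run with:
//   node --trace-deopt deopt.js
// to see V8 report the bailout when the type speculation fails.
function add(a, b) { return a + b; }

// Hot loop with integer-only feedback: TurboFan speculates Smi + Smi.
for (let i = 0; i < 100000; i++) add(i, i);

// Violates the speculation: the strings hit the guard, the optimized
// code bails out, and execution resumes in a lower tier.
const greeting = add("hello", "world");
console.log(greeting);
```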
What TurboFan actually optimizes
Beyond type specialization, TurboFan applies classical compiler optimizations adapted for JavaScript:
- Inlining: Small functions called from hot loops get their body copied directly into the caller, eliminating call overhead
- Escape analysis: Objects that don't leave a function get allocated on the stack instead of the heap, avoiding garbage collection
- Loop-invariant code motion: Computations that don't change across loop iterations get hoisted outside the loop
- Dead code elimination: Branches that can never be reached (based on type feedback) get removed entirely
- Constant folding: Expressions with known compile-time values get pre-computed
- Range analysis: If V8 can prove a number stays within 32-bit integer range, it uses integer arithmetic throughout
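As a concrete sketch of escape analysis: the temporary object below never leaves the function, so it is a candidate for scalar replacement, where the engine keeps `dx` and `dy` in registers and skips the heap allocation entirely. Whether V8 actually elides this particular allocation depends on the version and on inlining decisions.

```javascript
// The object literal never escapes distance(): it isn't returned, stored,
// or passed anywhere. Escape analysis can replace it with two scalar
// values, avoiding allocation and later garbage collection.
function distance(x1, y1, x2, y2) {
  const d = { dx: x2 - x1, dy: y2 - y1 };
  return Math.sqrt(d.dx * d.dx + d.dy * d.dy);
}

console.log(distance(0, 0, 3, 4));
```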
Production Scenario: The 200ms Startup Regression
This is a real one. A team adds a configuration validation function that runs once at startup:
```javascript
function validateConfig(config) {
  // 500 lines of complex validation logic
  // called exactly once during initialization
}
```
They notice startup time increases by 200ms. The cause: the function was wrapped in an IIFE (Immediately Invoked Function Expression), which V8 has historically treated as a signal for eager compilation, assuming IIFEs are about to run and parsing and compiling them up front. Note: V8's heuristics evolve across versions, and modern V8 may handle this differently. You can verify with the `--trace-opt` and `--trace-deopt` flags.
```javascript
// This triggers eager compilation — V8 sees the IIFE pattern
(function validateConfig(config) {
  // 500 lines get parsed and compiled eagerly even though they run once
})();
```

```javascript
// This stays lazy — only pre-parsed until the call
function validateConfig(config) {
  // Parsed and compiled to Ignition bytecode on first call
}
validateConfig(globalConfig);
```
The fix: unwrap the IIFE. Startup drops by 200ms because the one-time code stays lazily parsed until its single call instead of being compiled eagerly at script load.
Common Mistakes
| What developers do | What they should do |
|---|---|
| Assuming all JavaScript runs at the same speed regardless of when it's called. The first few calls of any function are 10-100x slower than optimized calls; micro-benchmarks that don't warm up the JIT are meaningless | Recognize that first calls go through the interpreter and hot code gets JIT-compiled. Benchmark warm code, not cold code |
| Wrapping startup code in IIFEs for 'cleanliness'. V8 may eagerly parse and compile IIFEs, spending compile time on code that only runs once | Use regular functions for one-shot initialization code |
| Thinking 'JavaScript is slow' when a function performs poorly. Optimized JavaScript runs within 2-3x of C++; if it's much slower, something is preventing optimization | Check whether the function is being deoptimized. A deopted function falls back to the interpreter |
| Trying to manually optimize cold code paths. V8 only optimizes code that runs enough times; optimizing code that runs once gives zero benefit — V8 won't even compile it with TurboFan | Focus optimization efforts on hot loops and frequently-called functions |
Key Rules
1. JavaScript goes through Parser -> Ignition -> Sparkplug -> Maglev -> TurboFan. Each successive tier is slower to compile but faster to execute.
2. V8 uses lazy parsing — function bodies are only fully parsed when first called. This makes initial script load faster.
3. Ignition collects type feedback into FeedbackVectors. This is the data TurboFan uses to speculate and generate fast native code.
4. TurboFan generates speculative machine code. If the speculation is wrong (type changes), the code deoptimizes back to the interpreter.
5. Most code should never be manually optimized — V8's tiered compilation handles it. Focus on keeping types stable in hot paths.
6. Avoid IIFEs for cold startup code — V8 may compile them eagerly, wasting compile time on one-shot logic.