
JavaScript Parse and Execution Cost

Advanced · 14 min read

JavaScript Is Not Free

Here's something that surprises even senior engineers: a 200KB JavaScript file is far more expensive than a 200KB JPEG. The image gets decoded off the main thread and painted. Done. The JavaScript file goes through a gauntlet that blocks your user from doing anything.

Download it. Decompress it. Parse it. Compile it. Execute it. Parsing and compilation can happen partially off the main thread, but execution cannot. And every millisecond your main thread is busy, your user is staring at a frozen page.

On a fast MacBook Pro, 200KB of minified JavaScript takes roughly 20ms to parse. On a median Android phone? Closer to 200ms. That's a 10x penalty for the same bytes — and we haven't even started executing yet.

Mental Model

Think of JavaScript like a letter written in a foreign language. An image is a photograph — you just look at it. But a letter? You have to translate it (parse), figure out the instructions (compile), and then actually do what it says (execute). The same number of bytes, wildly different amounts of work.

The True Cost of JavaScript

Every JavaScript file your page loads goes through this pipeline before it does anything useful: download → decompress → parse → compile → execute.

Compare this to a 200KB image:

| Resource         | Steps                                             | Main thread time |
|------------------|---------------------------------------------------|------------------|
| 200KB JPEG       | Download → Decode (off-thread) → Paint            | ~0ms             |
| 200KB JavaScript | Download → Decompress → Parse → Compile → Execute | 50-200ms+        |

That's why the performance community keeps saying "JavaScript is the most expensive resource." It's not about bandwidth — it's about CPU time on the device that matters most: your user's phone.

The V8 Compilation Pipeline

When JavaScript arrives in V8, it doesn't just run. It goes through a multi-tier compilation system where each tier trades startup speed for execution speed.

Tier 1: Ignition (Interpreter)

Every function starts here. The parser turns the source code into an AST, and Ignition's bytecode generator walks that AST to produce compact bytecode. Bytecode is roughly 25-50% the size of equivalent machine code, which matters for memory when you have thousands of functions.

The critical job of Ignition isn't just execution — it's collecting type feedback. Every operation records the types it observes into a FeedbackVector. This data is what the optimizing compilers use later.

Tier 2: Sparkplug (Baseline Compiler)

Sparkplug (V8 9.1+) takes Ignition bytecode and converts it 1:1 to machine code. Zero analysis, zero optimization. It runs roughly 2x faster than interpreted bytecode because it eliminates the interpreter dispatch overhead.

Tier 3: Maglev (Mid-Tier Compiler)

Maglev (V8 11.3+) bridges the gap between Sparkplug and TurboFan. It uses the type feedback from FeedbackVectors and applies basic optimizations like common subexpression elimination and register allocation. Compiles 10-20x faster than TurboFan while producing code that's 50-80% as fast.

Tier 4: TurboFan (Optimizing Compiler)

The crown jewel. TurboFan generates highly optimized machine code using speculative optimization — it literally bets on the types it has observed. If a function always receives integers, TurboFan generates integer-specific machine code that approaches C++ speed. If the bet is wrong (a string shows up where an integer was expected), the code deoptimizes back to Ignition.
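You can sketch the bet in plain JavaScript. Feedback vectors aren't observable from script, so the snippet below only shows the type change that would invalidate the specialized code:

```javascript
// Ignition records the observed operand types of `+` in this function's
// FeedbackVector. Fed only numbers, the feedback stays monomorphic and
// TurboFan can speculate on a numeric add.
function add(a, b) {
  return a + b
}

// Warm-up with a single type: a candidate for speculative optimization.
let total = 0
for (let i = 0; i < 100000; i++) total = add(total, 1)

// A new type breaks the numeric assumption: `+` now also means string
// concatenation, and optimized code for `add` would deoptimize here.
const mixed = add('id-', 42) // 'id-42'
```

Keeping hot functions monomorphic (always called with the same shapes and types) is what lets them stay in optimized code.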

Execution Trace

  1. Source arrives: raw JavaScript bytes from the network (a 200KB minified bundle)
  2. Parse (lazy): only top-level code and called functions are fully parsed; inner functions are pre-parsed (~2x faster than a full parse)
  3. Ignition: bytecode is generated and execution begins immediately (slow, but instant startup)
  4. Sparkplug: hot bytecode is compiled to baseline machine code (~2x speedup, no analysis cost)
  5. Maglev: warm functions are compiled with light optimization (~5-10x speedup, uses type feedback)
  6. TurboFan: hot functions are speculatively optimized to near-native speed (~20-100x speedup, risk of deopt)
Why tiered compilation exists

TurboFan compilation takes 1-10ms per function. With 10,000 functions in a medium app, compiling everything with TurboFan at startup would freeze the main thread for 10-100 seconds. Tiered compilation ensures only the ~5% of functions that are genuinely hot pay the optimization cost.
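The economics can be caricatured in a few lines. This toy model is pure illustration (the threshold and the swap mechanism are invented; V8's real tier-up heuristics are internal): it pays the "optimization" cost only once a function proves itself hot.

```javascript
// Toy tier-up: start with a cheap-to-produce implementation and swap in
// an "optimized" one only after the call count crosses a hot threshold.
const HOT_THRESHOLD = 1000 // invented number, not V8's

function tiered(baseline, optimized) {
  let calls = 0
  let impl = baseline // instant startup, like Ignition bytecode
  return (...args) => {
    if (++calls === HOT_THRESHOLD) impl = optimized // pay once, only if hot
    return impl(...args)
  }
}

// Cold functions never cross the threshold, so they never pay the cost.
const square = tiered((x) => x * x, (x) => x * x)
```

The payoff is the same as in V8: startup cost stays near zero for the ~95% of functions that never get hot.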

Script Evaluation and the Main Thread

Here's where theory meets reality. When the browser encounters a script, it must evaluate it — and that evaluation happens on the main thread. During this time, the page is unresponsive.

Open Chrome DevTools, go to the Performance tab, and record a page load. You'll see blocks labeled "Evaluate Script" in the flame chart. Each one represents a JavaScript file being parsed, compiled, and executed. The wider the block, the longer your users wait.

Main Thread Timeline (simplified):

|-- HTML Parse --|-- Evaluate app.js (150ms) --|-- Evaluate vendor.js (80ms) --|-- Layout --|-- Paint --|
                 ↑                              ↑
                 Main thread blocked             Still blocked
                 User cannot interact            User cannot interact

This is why a single large bundle is so dangerous. A 500KB JavaScript file can block the main thread for 200-500ms on mobile. That's not a theoretical concern — it directly impacts Interaction to Next Paint (INP), which Google uses as a Core Web Vital ranking signal.

Measuring Script Evaluation Cost

Chrome DevTools has a built-in Coverage tool that shows exactly how much of your JavaScript actually executes during page load:

  1. Open DevTools → More Tools → Coverage (or press Ctrl+Shift+P and type "Coverage")
  2. Click the reload button in the Coverage panel
  3. Interact with the page
  4. Each file shows a red/green bar: green is code that ran, red is code that was loaded but never executed

In most production applications, 50-70% of loaded JavaScript is unused during the initial page load. That's parse and compile cost you're paying for code that does nothing. Every red byte in the Coverage tool represents wasted main thread time.
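The same red/green math can be done programmatically: headless tools such as Puppeteer expose JavaScript coverage entries containing a script's text and its executed ranges (the entry shape below is an assumption modeled on that API):

```javascript
// Each coverage entry has the script's full text plus the ranges that
// actually executed; everything outside those ranges is "red" code.
function unusedBytes(entry) {
  const usedBytes = entry.ranges.reduce(
    (sum, range) => sum + (range.end - range.start),
    0
  )
  return entry.text.length - usedBytes
}

// Hypothetical entry: a 1000-character file where only two ranges ran.
const entry = {
  url: 'https://example.com/app.js',
  text: 'x'.repeat(1000),
  ranges: [
    { start: 0, end: 250 },
    { start: 400, end: 550 },
  ],
}
console.log(unusedBytes(entry)) // 600 of 1000 bytes never executed
```

Summing this per file in CI gives you the same "Unused Bytes" signal as the Coverage panel, without opening DevTools.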

Common Trap

Don't confuse "unused during page load" with "unnecessary." Some code is loaded upfront but used later (after user interaction). The goal isn't to eliminate all red — it's to defer red code so it doesn't block the initial render. That's what code splitting does.

Loading Strategies: defer vs async vs module

How you load scripts dramatically affects when they block the main thread.

Default (no attribute)

<script src="app.js"></script>

Parser-blocking. The HTML parser stops, downloads the script, executes it, then resumes parsing. This is the worst option for performance.

async

<script src="analytics.js" async></script>

Downloads in parallel with HTML parsing. Executes as soon as it's downloaded — which means it interrupts parsing at an unpredictable time. The execution order between multiple async scripts is non-deterministic.

Best for: independent scripts that don't depend on DOM state or other scripts (analytics, error tracking).

defer

<script src="app.js" defer></script>

Downloads in parallel with HTML parsing. Executes after HTML parsing is complete, but before the DOMContentLoaded event. Multiple defer scripts execute in document order.

Best for: most application scripts. You get parallel downloading without blocking the parser.

type="module"

<script type="module" src="app.js"></script>

Deferred by default (behaves like defer). Also enables ES module syntax (import/export). Module scripts are never parser-blocking.

modulepreload

<link rel="modulepreload" href="/modules/utils.js">

Here's a trick most developers miss. When you use ES modules, the browser discovers dependencies only as it parses each module. Module A imports Module B, which imports Module C — that's a waterfall of three sequential network requests. modulepreload lets you tell the browser about these dependencies upfront so it can fetch them in parallel.

<link rel="modulepreload" href="/modules/app.js">
<link rel="modulepreload" href="/modules/utils.js">
<link rel="modulepreload" href="/modules/api.js">
<script type="module" src="/modules/app.js"></script>

Without modulepreload, the browser discovers utils.js and api.js only after downloading and parsing app.js. With it, all three start downloading immediately.

Code Splitting: Ship Less, Parse Less

The most effective way to reduce JavaScript cost is to not send it in the first place. Code splitting breaks your application into chunks that load on demand.

Route-based splitting

Every modern bundler supports this. Instead of one giant app.js, you get a chunk per route:

Before: app.js (500KB)
After:  shared.js (80KB) + home.js (60KB) + dashboard.js (120KB) + settings.js (90KB)

The user visiting the home page downloads 140KB instead of 500KB. That's a 72% reduction in parse cost on the initial load.

Component-based splitting

For heavy components that aren't needed immediately:

import dynamic from 'next/dynamic'

const CodeEditor = dynamic(() => import('./CodeEditor'), {
  loading: () => <EditorSkeleton />,
})

const ChartDashboard = dynamic(() => import('./ChartDashboard'), {
  loading: () => <ChartSkeleton />,
})

The code editor and chart library only download when the user navigates to a view that needs them. The loading skeleton keeps the layout stable (preventing CLS) while the chunk loads.

The import() boundary

Every import() call creates a split point. The bundler generates a separate chunk that's loaded on demand:

button.addEventListener('click', async () => {
  const { openModal } = await import('./heavy-modal.js')
  openModal()
})

Until the user clicks that button, heavy-modal.js doesn't download, parse, or execute. Zero cost.

How code splitting reduces parse cost specifically

Parse cost isn't just about the JavaScript you execute — it's about the JavaScript the engine sees. Even if a function is never called, V8 still has to pre-parse its source to find the boundaries (where it starts and ends, its parameters, whether it contains eval or arguments). Pre-parsing is cheaper than full parsing (~2x), but it's not free.

When you code-split, those bytes never arrive on the device at all. The parser never sees them. There's zero pre-parse cost, zero memory allocation for the source string, zero cost period. This is fundamentally different from tree-shaking (which removes dead code from the bundle) — code splitting removes entire modules from the initial load.

Measuring the Real Cost

Theory is nice, but you need to measure your actual application. Here are the tools that matter:

Chrome DevTools Performance Panel

Record a page load, then look for:

  • Evaluate Script blocks in the flame chart — each is a file being parsed + compiled + executed
  • Parse and Compile sub-tasks within those blocks
  • Long Tasks (any task over 50ms) — these directly degrade INP
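The 50ms threshold is easy to encode. The filter below is plain logic; the commented usage shows how you'd feed it from the browser-only PerformanceObserver longtask entry type:

```javascript
// Anything over 50ms on the main thread counts as a long task.
const LONG_TASK_MS = 50

function longTasks(entries) {
  return entries.filter((entry) => entry.duration > LONG_TASK_MS)
}

// Browser usage (PerformanceObserver is not available in Node):
// new PerformanceObserver((list) => {
//   for (const task of longTasks(list.getEntries())) {
//     console.warn(`Main thread blocked for ${task.duration}ms`)
//   }
// }).observe({ type: 'longtask', buffered: true })
```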

Coverage Tool

As mentioned above, the Coverage panel shows unused JavaScript per file. Sort by "Unused Bytes" to find your biggest offenders.

Lighthouse / PageSpeed Insights

The "Reduce unused JavaScript" audit flags files where more than 20KB is unused. It estimates the potential savings in milliseconds.

Bundle Analyzer

npm install --save-dev @next/bundle-analyzer
ANALYZE=true npx next build

With @next/bundle-analyzer wrapped around your config in next.config.js, the build generates a visual treemap of your bundles, showing exactly which libraries consume the most space. A 150KB charting library imported for a single page is an obvious candidate for code splitting.

Performance budget

Set a JavaScript budget: no more than 150KB of compressed JavaScript for the initial load on mobile. Track it in CI. When someone adds a dependency that pushes you over, the build fails. This single practice prevents more performance regressions than any other.

Reducing Parse Cost in Practice

Beyond code splitting, there are concrete techniques to reduce the parse burden:

1. Tree shaking

Modern bundlers eliminate unused exports. But tree shaking only works with ES modules (import/export), not CommonJS (require). Ensure your dependencies ship ESM builds.

// Tree-shakeable — bundler can drop unused exports
import { debounce } from 'lodash-es'

// NOT tree-shakeable — the entire lodash library is included
const { debounce } = require('lodash')

2. Avoid polyfills you don't need

Many applications ship polyfills for features already supported by their target browsers. Check your browserslist config and audit your polyfill bundle.

3. Compress aggressively

Brotli compression reduces JavaScript files 15-20% more than gzip. If your CDN supports Brotli (most do), enable it. A 200KB file compressed to 50KB with Brotli means 50KB to download and 200KB to parse — the parse cost doesn't change, but you save network time.

4. Defer non-critical scripts

Third-party scripts (analytics, chat widgets, A/B testing) should never block the initial render. Load them with async, or better yet, load them after the page becomes interactive:

const loadAnalytics = () => {
  const script = document.createElement('script')
  script.src = 'https://analytics.example.com/tracker.js'
  document.body.appendChild(script)
}

if (typeof requestIdleCallback !== 'undefined') {
  requestIdleCallback(loadAnalytics)
} else {
  // Safari has no requestIdleCallback; fall back to a short delay
  setTimeout(loadAnalytics, 2000)
}

This waits until the browser's main thread is idle before loading the analytics script. Your users get a fast initial experience, and the analytics load when there's nothing more important to do.

Common Mistakes

Mistake: Loading all JavaScript upfront in a single bundle because it is simpler.
Why it hurts: A single large bundle forces the browser to parse and compile everything before anything is interactive.
Instead: Code-split by route and dynamically import heavy components, so users only pay the cost for the code they actually need right now.

Mistake: Using async for scripts that depend on each other or on DOM state.
Why it hurts: Async scripts execute in download-completion order, which is non-deterministic. If script B depends on script A, B can execute first and break your application.
Instead: Use defer for dependent scripts (preserves order, runs after parsing) and async only for independent scripts.

Mistake: Assuming compression reduces parse cost.
Why it hurts: A 200KB file compressed to 50KB still requires parsing 200KB of JavaScript. The browser decompresses before parsing, so compression reduces download time only.
Instead: To reduce parse cost, send fewer bytes of JavaScript.

Mistake: Ignoring third-party script cost because it is not your code.
Why it hurts: Third-party scripts compete for the same main thread. A 100KB analytics library has the same parse and execution cost as 100KB of your own code, and on many production sites third-party scripts account for more main thread time than first-party code.
Instead: Audit and defer all third-party scripts. Load them after the page is interactive.

Key Rules

  1. JavaScript is the most expensive byte-for-byte resource on the web. Unlike images, every byte must be parsed, compiled, and executed on the main thread.
  2. V8 uses tiered compilation: Ignition (interpreter) for instant startup, then Sparkplug, Maglev, and TurboFan for progressively faster execution of hot code.
  3. Use the Coverage tool in DevTools to find unused JavaScript. If more than 30% is unused at page load, code splitting will have a significant impact.
  4. Use defer for application scripts and async for independent third-party scripts. Never use bare script tags in the head.
  5. Use modulepreload to eliminate the waterfall caused by ES module dependency chains.
  6. Set a JavaScript budget (150KB compressed for initial load on mobile) and enforce it in CI.
