WebGL and WebGPU Fundamentals

Advanced · 18 min read

The GPU Is a Supercomputer in Your Browser

Canvas 2D runs on the CPU. It's fine for 2D games and charts, but try rendering a 3D scene with lighting, shadows, and thousands of polygons — the CPU chokes. GPUs are built for exactly this: thousands of cores executing the same operation on different data points simultaneously.

WebGL has been the browser's GPU API since 2011. It works, but it's based on OpenGL ES, whose design descends from the original OpenGL of 1992. The API is stateful, error-prone, and maps poorly to how modern GPUs actually work.

WebGPU is the successor. Designed from scratch for modern GPU architectures, it landed in Chrome 113, and as of early 2026, all major browsers ship it. WebGPU is not just "better WebGL" — it enables compute shaders, which turn the GPU into a general-purpose parallel processor for any data-heavy task.

Mental Model

Think of the GPU as a factory with 1000 identical machines. Each machine does a simple operation, but they all run simultaneously. WebGL gives you a factory manager's handbook from 1992 — it works but is full of implicit state and arcane rituals. WebGPU gives you the modern factory automation system — explicit, composable, and it can do more than just make graphics (compute shaders).

WebGL: The Rendering Pipeline

WebGL renders 3D (and 2D) by running a pipeline on the GPU: vertex data flows through the vertex shader, which positions each vertex in clip space; the GPU assembles primitives and rasterizes them into fragments; and the fragment shader computes a color for each fragment.

You write two programs that run on the GPU: the vertex shader and the fragment shader, both in GLSL (OpenGL Shading Language).

Minimal WebGL Example

const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl2');
if (!gl) throw new Error('WebGL 2 is not supported in this browser');

const vertexShaderSrc = `#version 300 es
  in vec2 a_position;
  void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;

const fragmentShaderSrc = `#version 300 es
  precision mediump float;
  out vec4 fragColor;
  void main() {
    fragColor = vec4(0.2, 0.5, 1.0, 1.0);
  }
`;

function createShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(shader));
    gl.deleteShader(shader);
    return null;
  }
  return shader;
}

const vs = createShader(gl, gl.VERTEX_SHADER, vertexShaderSrc);
const fs = createShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSrc);

const program = gl.createProgram();
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  console.error(gl.getProgramInfoLog(program));
}
gl.useProgram(program);

const positions = new Float32Array([
  -0.5, -0.5,
   0.5, -0.5,
   0.0,  0.5,
]);

const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

const posLoc = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

gl.clearColor(0, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3);

That's a blue triangle. 50+ lines of code. This is why everyone uses abstractions.

Quiz
In WebGL, what does the vertex shader do?

Textures

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
gl.generateMipmap(gl.TEXTURE_2D);

Textures are images mapped onto geometry. The fragment shader samples the texture at each pixel to determine color. Mipmaps are pre-scaled versions for efficient rendering at different distances.
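Each mip level halves the previous one (flooring) down to 1×1, which is the chain generateMipmap builds for you. As a sketch of the arithmetic (the helper names are illustrative, not part of any API):

```javascript
// Number of mip levels for a texture: one per halving, plus the base level.
function mipLevelCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// The full chain of [width, height] pairs, each dimension clamped at 1.
function mipChain(width, height) {
  const levels = [];
  let w = width, h = height;
  for (let i = 0; i < mipLevelCount(width, height); i++) {
    levels.push([w, h]);
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
  }
  return levels;
}

mipChain(8, 4); // [[8, 4], [4, 2], [2, 1], [1, 1]]
```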

3D Transforms with Matrices

3D rendering uses three matrices multiplied together:

  • Model matrix: positions the object in the world (translate, rotate, scale)
  • View matrix: positions the camera
  • Projection matrix: perspective or orthographic projection

In the vertex shader, they compose right-to-left (model first, projection last):

gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);

Libraries like gl-matrix handle the math:

import { mat4 } from 'gl-matrix';

const projection = mat4.create();
mat4.perspective(projection, Math.PI / 4, aspect, 0.1, 100);
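To see what mat4.perspective actually computes, here is the standard perspective formula sketched in plain JS (column-major, matching WebGL conventions; this is a reference sketch, not gl-matrix's actual source):

```javascript
// Standard perspective projection matrix (column-major, right-handed).
// Equivalent in spirit to mat4.perspective(out, fovy, aspect, near, far).
function perspective(fovy, aspect, near, far) {
  const f = 1 / Math.tan(fovy / 2);           // focal length from field of view
  const out = new Float32Array(16);           // all other entries stay 0
  out[0] = f / aspect;                        // x scale
  out[5] = f;                                 // y scale
  out[10] = (far + near) / (near - far);      // z remap into clip space
  out[11] = -1;                               // puts -z into w for the perspective divide
  out[14] = (2 * far * near) / (near - far);  // z translation
  return out;
}

// With a 90-degree field of view and aspect 1, x and y scales are both 1.
const m = perspective(Math.PI / 2, 1, 0.1, 100);
```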

Three.js: WebGL Made Human

Three.js is the de facto abstraction over WebGL. It handles the boilerplate so you can focus on your scene:

import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
camera.position.z = 5;

const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0x3b82f6 });
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);

const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(5, 5, 5);
scene.add(light);

const renderer = new THREE.WebGLRenderer({ canvas });
renderer.setSize(width, height);

function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();

Same blue shape, but now with lighting, perspective, and animation — in a third of the code. Three.js abstracts buffers, shaders, and state management while still giving you access to raw WebGL when needed.

Quiz
Why do most developers use Three.js instead of raw WebGL?

WebGPU: The Modern GPU API

WebGPU is a new API designed around the modern native graphics APIs (Vulkan, Metal, Direct3D 12). As of early 2026, it ships in Chrome, Edge, Firefox, and Safari — making it production-ready for the first time.

Key Differences from WebGL

| Feature | WebGL | WebGPU |
| --- | --- | --- |
| API design | Stateful (global state machine) | Explicit (descriptors, pipelines, bind groups) |
| Shader language | GLSL | WGSL (WebGPU Shading Language) |
| Compute shaders | No (hacks via fragment shaders) | Yes, first-class support |
| Error handling | Silent failures, check gl.getError() | Validation errors at pipeline creation |
| Multi-threaded | No | Can encode commands from workers |
| Performance | Driver does implicit state tracking | Explicit control, less driver overhead |
| Browser support (2026) | Universal | Chrome, Edge, Firefox, Safari (Linux still partial) |

WebGPU Core Concepts

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const context = canvas.getContext('webgpu');
const format = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format });

Adapter: represents the physical GPU. Device: a logical connection to the GPU — all resource creation goes through it.

WGSL: The Shader Language

@vertex
fn vertexMain(@location(0) pos: vec2f) -> @builtin(position) vec4f {
  return vec4f(pos, 0.0, 1.0);
}

@fragment
fn fragmentMain() -> @location(0) vec4f {
  return vec4f(0.2, 0.5, 1.0, 1.0);
}

WGSL (WebGPU Shading Language) replaces GLSL. It's typed, has clear semantics, and catches more errors at compile time.

Render Pipelines

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: device.createShaderModule({ code: shaderCode }),
    entryPoint: 'vertexMain',
    buffers: [{
      arrayStride: 8,
      attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x2' }],
    }],
  },
  fragment: {
    module: device.createShaderModule({ code: shaderCode }),
    entryPoint: 'fragmentMain',
    targets: [{ format }],
  },
});

Pipelines are created upfront with all configuration baked in. This is fundamentally different from WebGL's stateful approach — the GPU driver knows exactly what you need and can optimize aggressively.
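The arrayStride: 8 above is just the byte size of one vertex: two 32-bit floats. A small helper (my own, not part of the WebGPU API) makes the arithmetic explicit:

```javascript
// Byte sizes for a few WebGPU vertex formats (subset; names match the spec).
const FORMAT_BYTES = {
  float32: 4,
  float32x2: 8,
  float32x3: 12,
  float32x4: 16,
};

// Illustrative helper: stride = sum of attribute sizes for one vertex.
function arrayStride(formats) {
  return formats.reduce((sum, f) => sum + FORMAT_BYTES[f], 0);
}

arrayStride(['float32x2']);              // 8, as in the pipeline above
arrayStride(['float32x3', 'float32x2']); // 20: interleaved position + UV
```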

Drawing

const commandEncoder = device.createCommandEncoder();

const pass = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: 'clear',
    storeOp: 'store',
  }],
});

pass.setPipeline(pipeline);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(3);
pass.end();

device.queue.submit([commandEncoder.finish()]);

Commands are recorded into a command buffer, then submitted to the GPU queue. This batching model is how modern GPUs prefer to receive work.

Compute Shaders: The Game Changer

Compute shaders are the reason WebGPU matters beyond graphics. They let you run arbitrary parallel computation on the GPU — no rendering required.

@group(0) @binding(0) var<storage, read> input: array<f32>;
@group(0) @binding(1) var<storage, read_write> output: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3u) {
  let i = id.x;
  output[i] = input[i] * input[i];
}

This shader squares every element in an array — thousands of elements simultaneously. Use cases:

  • Physics simulations: particle systems, fluid dynamics, cloth
  • Machine learning inference: matrix multiplication for neural networks
  • Image processing: parallel filters, histogram computation
  • Data processing: sorting, searching, reduction on large datasets
  • Cryptography: parallel hashing
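A CPU reference for the shader above, plus the dispatch arithmetic: with @workgroup_size(64), each workgroup covers 64 elements, so you dispatch ceil(n / 64) workgroups (helper names below are illustrative):

```javascript
// CPU reference for the WGSL compute shader above: square every element.
function squareAll(input) {
  return input.map((x) => x * x);
}

// Number of workgroups needed to cover n elements at the given workgroup size.
function workgroupCount(n, workgroupSize = 64) {
  return Math.ceil(n / workgroupSize);
}

squareAll([1, 2, 3]); // [1, 4, 9]
workgroupCount(1000); // 16 workgroups to cover 1000 elements
```

On the GPU side this count is what you would pass to dispatchWorkgroups; since the last workgroup can overrun the array, the shader should also guard with a bounds check on the invocation index.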

Quiz
What can WebGPU compute shaders do that WebGL cannot?

When to Use Which

| Scenario | Recommendation |
| --- | --- |
| Simple 3D scenes, product viewers | Three.js (WebGL backend; WebGPU backend available) |
| Complex 3D with custom shaders | Three.js + custom shader materials |
| GPU compute (ML, physics, data) | WebGPU compute shaders |
| Maximum rendering control | Raw WebGPU |
| Legacy browser support needed | WebGL 2 |
| 2D games and charts | Canvas 2D (simpler, sufficient) |

Feature Detection

if (navigator.gpu) {
  const adapter = await navigator.gpu.requestAdapter();
  if (adapter) {
    // WebGPU supported
  }
} else if (document.createElement('canvas').getContext('webgl2')) {
  // Fall back to WebGL 2
}
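The same decision expressed as a pure function (a hypothetical helper, with the capabilities detected beforehand via the checks above), which keeps the fallback logic testable:

```javascript
// Hypothetical helper: pick a backend from already-detected capabilities.
// hasWebGPU would come from navigator.gpu + requestAdapter(),
// hasWebGL2 from canvas.getContext('webgl2').
function chooseBackend({ hasWebGPU, hasWebGL2 }) {
  if (hasWebGPU) return 'webgpu';
  if (hasWebGL2) return 'webgl2';
  return 'canvas2d'; // last-resort CPU fallback
}

chooseBackend({ hasWebGPU: true, hasWebGL2: true });  // 'webgpu'
chooseBackend({ hasWebGPU: false, hasWebGL2: true }); // 'webgl2'
```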

Three.js WebGPU backend

Three.js is migrating from WebGL to WebGPU as its primary renderer. The WebGPURenderer is available now and supports most Three.js features. For new projects, consider starting with the WebGPU backend — you get the same Three.js API but with WebGPU performance benefits and access to compute. Three.js abstracts the difference, so you can fall back to WebGL for browsers without WebGPU support.

What developers do vs. what they should do

  • Using raw WebGL for a typical 3D product viewer or scene. Raw WebGL requires managing shader compilation, buffer layout, attribute binding, matrix math, and state machine transitions; Three.js wraps all of this. Unless you need absolute control or are building a rendering engine, the abstraction is worth it. Instead: use Three.js — it handles the 90% of WebGL boilerplate you do not want to write.
  • Assuming WebGPU replaces WebGL today for all use cases. WebGPU is shipping in all major browsers as of 2026, but Linux support is still partial, and older browser versions lack support. Instead: use WebGPU for new projects where browser support is acceptable, and keep WebGL as a fallback for universal reach.
  • Using fragment shader hacks for GPU compute in WebGL. WebGL fragment shader compute is fragile, limited, and hard to debug; WebGPU compute shaders are purpose-built for general GPU computation, with proper storage buffers, workgroup management, and debugging support. Instead: use WebGPU compute shaders for non-rendering GPU workloads.

Key Rules
  1. WebGL uses vertex shaders (position transforms) and fragment shaders (pixel coloring) in GLSL to render via the GPU
  2. Three.js is the standard abstraction over WebGL — use it unless you need absolute GPU control
  3. WebGPU is the modern successor: explicit API, WGSL shaders, and first-class compute shader support
  4. Compute shaders unlock GPU parallelism for non-rendering tasks: physics, ML, data processing
  5. Feature-detect WebGPU with navigator.gpu and fall back to WebGL 2 for broader support