
WebGPU particle physics with JavaScript

A browser experiment with WebGPU compute shaders, image particles, gravity, and small collision controls.

  • JavaScript
  • TypeScript
  • WebGPU
  • HTML5

Trying WebGPU in the browser

I wanted to try building one of these small particle/image experiments directly in the browser with WebGPU. I have written this kind of thing as a native GPU program before, but I wanted to see how far I could get on a regular web page without pulling in a large engine or hiding the interesting parts behind hundreds of thousands of lines of framework code.

The setup is intentionally small: load an image, create one particle per sampled image point, update the particles in a compute shader, then render them as tiny textured quads. The controls below are mostly there so I can poke the simulation and see where the browser starts to feel like a real playground.
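Before the grid loop shown later can run, the experiment needs to know how many particles the sampled image produces. This is a hypothetical helper, not the project's actual code: the `padding` and `gap` names match the snippet further down, but the rounding choice is my assumption.

```typescript
// Hypothetical helper: given a drawable area, an outer padding, and a gap
// between sampled points, work out how many particles the grid will hold.
function gridDimensions(
  width: number,
  height: number,
  padding: number,
  gap: number
): { columns: number; rows: number; count: number } {
  const innerWidth = width - padding * 2;
  const innerHeight = height - padding * 2;
  const columns = Math.max(1, Math.floor(innerWidth / gap));
  const rows = Math.max(1, Math.floor(innerHeight / gap));
  return { columns, rows, count: columns * rows };
}
```

The count also sizes the GPU storage buffer, so it is worth computing once up front rather than deriving it per frame.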

The idea

The older canvas version reads pixels, creates JavaScript objects, loops over every particle on the CPU, and draws them with a 2D canvas context. That is a good first version because it is very readable, but it is also exactly the kind of thing GPUs are built to do.

For this experiment I wanted the particle state itself to live on the GPU:

particleBuffers.ts
for (let y = 0; y < rows; y += 1) {
  for (let x = 0; x < columns; x += 1) {
    const positionX = padding + (x + 0.5) * gap;
    const positionY = padding + (y + 0.5) * gap;
    data[offset] = positionX;        // current position
    data[offset + 1] = positionY;
    data[offset + 2] = particle + 1; // per-particle seed
    data[offset + 3] = 0;
    data[offset + 4] = 0;            // velocity starts at rest
    data[offset + 5] = 0;
    data[offset + 6] = (positionX - padding) / innerWidth;  // uv into the image
    data[offset + 7] = (positionY - padding) / innerHeight;
    data[offset + 8] = positionX;    // origin, for pulling back into shape
    data[offset + 9] = positionY;
    data[offset + 10] = 0;
    data[offset + 11] = 0;
    offset += 12;
    particle += 1;
  }
}

The first two values are the current position, and the third is a per-particle seed. The UV values point back into the image texture, and the final position pair is the particle origin. Velocity is stored in the same GPU buffer, so gravity and pointer movement can accumulate over time.

Compute first

The part I wanted to test most was the compute pass. Each frame dispatches a workgroup over the particle buffer. The shader reads one particle, updates its velocity and position, applies the pointer field, optionally adds gravity, and writes the result into the next buffer.

simulate.wgsl
@compute @workgroup_size(64)
fn simulate(@builtin(global_invocation_id) globalId: vec3u) {
  let index = globalId.x;
  let count = u32(uniforms.grid.y);
  if (index >= count) {
    return;
  }
  var particle = particlesIn[index];
  var position = particle.position.xy;
  var velocity = particle.velocity.xy;
  let origin = particle.origin.xy;
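With `@workgroup_size(64)`, the frame loop has to dispatch enough workgroups to cover every particle, rounding up. The final workgroup can overshoot the count, which is exactly why the shader opens with the `index >= count` early return. The helper below is a sketch of that math, not the project's code:

```typescript
// Must match @workgroup_size(64) in the compute shader.
const WORKGROUP_SIZE = 64;

// Number of workgroups needed to cover every particle, rounded up.
function workgroupCount(particleCount: number): number {
  return Math.ceil(particleCount / WORKGROUP_SIZE);
}

// In the frame loop this feeds straight into the compute pass, roughly:
//   pass.dispatchWorkgroups(workgroupCount(count));
```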

The implementation uses two particle buffers and swaps between them. That keeps the reads and writes separate, which makes the simulation easier to reason about and avoids each particle fighting with data that is being written at the same time.
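The swap itself is just frame parity: whichever buffer was written last frame is read this frame. A minimal sketch, with generic placeholders standing in for what would really be two pre-built `GPUBindGroup`s:

```typescript
// Pick the read/write roles for this frame. Even frames read the first
// resource and write the second; odd frames reverse the roles.
function pingPong<T>(resources: [T, T], frame: number): { read: T; write: T } {
  const readIndex = frame % 2;
  return { read: resources[readIndex], write: resources[1 - readIndex] };
}
```

Building both bind groups once at startup and only choosing between them per frame keeps the frame loop allocation-free.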

Gravity and collisions

Gravity is just another force in the compute shader. When enabled, the particles stop behaving like an elastic image distortion and start falling into the floor and walls.

simulate.wgsl
let waveX = sin(time * 1.4 + seed * 0.17 + position.y * 0.018);
let waveY = cos(time * 1.2 + seed * 0.11 + position.x * 0.014);
var originStrength = cohesion * 0.035;
var damping = 1.0 - friction;
var gravity = 0.0;
if (gravityEnabled) {
  originStrength = 0.0;
  damping = max(damping, 0.992);
  gravity = 0.18;
}
velocity.x = velocity.x + (origin.x - position.x) * originStrength * delta + waveX * turbulence * 0.11 * delta;
velocity.y = velocity.y + (origin.y - position.y) * originStrength * delta + waveY * turbulence * 0.11 * delta + gravity * delta;
velocity = velocity * damping;
position = position + velocity * delta;
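Read as plain code, the gravity branch amounts to: turning gravity on releases the pull toward the origin and raises the damping floor so momentum survives the fall. This is a CPU mirror for a single particle, with the turbulence waves omitted for brevity; the real work stays in WGSL.

```typescript
// Illustrative CPU port of the velocity update (turbulence left out).
function integrate(
  position: { x: number; y: number },
  velocity: { x: number; y: number },
  origin: { x: number; y: number },
  options: { cohesion: number; friction: number; gravityEnabled: boolean },
  delta: number
) {
  let originStrength = options.cohesion * 0.035;
  let damping = 1.0 - options.friction;
  let gravity = 0.0;
  if (options.gravityEnabled) {
    originStrength = 0.0;               // let go of the image shape
    damping = Math.max(damping, 0.992); // keep most of the momentum
    gravity = 0.18;
  }
  velocity.x = (velocity.x + (origin.x - position.x) * originStrength * delta) * damping;
  velocity.y = (velocity.y + (origin.y - position.y) * originStrength * delta + gravity * delta) * damping;
  position.x += velocity.x * delta;
  position.y += velocity.y * delta;
  return { position, velocity };
}
```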

The collision toggle is intentionally a small local pass. Each frame builds a little spatial grid on the GPU, bins particles by their current position, and then checks the neighbouring cells for overlaps. It is not a complete sand simulator, but it gives the experiment that nice collapsing, settling, sand-game feeling while still staying compact enough to fit in a post.

simulate.wgsl
if (collisionsEnabled) {
  let minDistance = max(1.0, uniforms.physics.z);
  let minDistanceSquared = minDistance * minDistance;
  let columns = max(1u, u32(uniforms.grid.z));
  let rows = max(1u, u32(uniforms.grid.w));
  let viewportWidth = max(1.0, uniforms.viewport.x);
  let viewportHeight = max(1.0, uniforms.viewport.y);
  let cellX = i32(min(columns - 1u, u32(max(0.0, floor(position.x / viewportWidth * f32(columns))))));
  let cellY = i32(min(rows - 1u, u32(max(0.0, floor(position.y / viewportHeight * f32(rows))))));
  var correction = vec2f(0.0);
  var impulseCorrection = vec2f(0.0);
  var collisionCount = 0.0;
  var yOffset = -1;
  loop {
    if (yOffset > 1) {
      break;
    }
    var xOffset = -1;
    loop {
      if (xOffset > 1) {
        break;
      }
      let otherColumn = cellX + xOffset;
      let otherRow = cellY + yOffset;
      if (
        otherColumn >= 0 &&
        otherColumn < i32(columns) &&
        otherRow >= 0 &&
        otherRow < i32(rows)
      ) {
        let cell = u32(otherRow) * columns + u32(otherColumn);
        let bucketCount = min(atomicLoad(&gridCounters[cell]), 96u);
        var slot = 0u;
        loop {
          if (slot >= bucketCount) {
            break;
          }
          let otherIndex = gridIndices[cell * 96u + slot];
          if (otherIndex != index && otherIndex < count) {
            let other = particlesIn[otherIndex];
            let offset = position - other.position.xy;
            let distanceSquared = dot(offset, offset);
            if (distanceSquared > 0.0001 && distanceSquared < minDistanceSquared) {
              let distance = sqrt(distanceSquared);
              let normal = offset / distance;
              let overlap = minDistance - distance;
              let relativeVelocity = velocity - other.velocity.xy;
              let closingSpeed = min(0.0, dot(relativeVelocity, normal));
              correction = correction + normal * overlap;
              impulseCorrection = impulseCorrection + normal * overlap * 0.018 - normal * closingSpeed * 0.12;
              collisionCount = collisionCount + 1.0;
            }
          }
          slot = slot + 1u;
        }
      }
      xOffset = xOffset + 1;
    }
    yOffset = yOffset + 1;
  }
  if (collisionCount > 0.0) {
    var averagedCorrection = correction / collisionCount;
    let correctionLength = length(averagedCorrection);
    let maxCorrection = minDistance * 0.54;
    if (correctionLength > maxCorrection) {
      averagedCorrection = averagedCorrection / correctionLength * maxCorrection;
    }
    position = position + averagedCorrection;
    velocity = velocity + impulseCorrection / collisionCount;
  }
}
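The cell lookup at the top of that pass is the part that is easiest to get wrong, so here it is mirrored in TypeScript: a position slightly outside the viewport clamps into an edge cell, which keeps the 3×3 neighbour search in bounds. The function is an illustration, not the project's code.

```typescript
// Mirror of the WGSL cell lookup: map a position to a grid cell, clamping
// out-of-viewport positions into the nearest edge cell.
function cellIndex(
  x: number,
  y: number,
  viewport: { width: number; height: number },
  grid: { columns: number; rows: number }
): { cellX: number; cellY: number } {
  const clamp = (v: number, max: number) =>
    Math.min(max, Math.max(0, Math.floor(v)));
  const cellX = clamp((x / viewport.width) * grid.columns, grid.columns - 1);
  const cellY = clamp((y / viewport.height) * grid.rows, grid.rows - 1);
  return { cellX, cellY };
}
```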

Rendering

WebGPU does not give me the same gl_PointSize shortcut that the old WebGL-style particle demos often use, so each particle is rendered as a tiny instanced quad. The render shader reads the same particle buffer that the compute shader just wrote, turns the particle position into clip space, and samples the image texture.

renderParticles.wgsl
@vertex
fn vertexMain(
  @builtin(vertex_index) vertexIndex: u32,
  @builtin(instance_index) instanceIndex: u32
) -> VertexOut {
  let particle = particles[instanceIndex];
  let local = quadPoint(vertexIndex);
  let seed = particle.position.z;
  let jitterAmount = clamp(uniforms.viewport.w, 0.0, 1.0);
  let sizeJitter = 1.0 + (hash(seed) - 0.5) * 2.0 * jitterAmount;
  let particleSize = uniforms.viewport.z * sizeJitter;
  let screen = particle.position.xy + local * particleSize;
  let pointerDelta = particle.position.xy - uniforms.pointer.xy;
  let pointerDistance = max(length(pointerDelta), 0.001);
  let pointerInfluence = 1.0 - smoothstep(0.0, uniforms.pointer.z, pointerDistance);
  var output: VertexOut;
  output.position = vec4f(
    screen.x / uniforms.viewport.x * 2.0 - 1.0,
    1.0 - screen.y / uniforms.viewport.y * 2.0,
    0.0,
    1.0
  );
  output.uv = particle.velocity.zw;
  output.local = local;
  output.alpha = 0.7 + pointerInfluence * 0.22;
  return output;
}
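The only coordinate gymnastics in that shader is the screen-to-clip mapping: x maps [0, width] onto [-1, 1], and y is flipped because screen space grows downward while clip space grows upward. In TypeScript terms:

```typescript
// Screen pixels → WebGPU clip space. Top-left of the canvas becomes
// (-1, 1), bottom-right becomes (1, -1).
function toClipSpace(
  screen: { x: number; y: number },
  viewport: { width: number; height: number }
): { x: number; y: number } {
  return {
    x: (screen.x / viewport.width) * 2.0 - 1.0,
    y: 1.0 - (screen.y / viewport.height) * 2.0,
  };
}
```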

The fragment shader uses the particle UV to grab the original image colour:

renderParticles.wgsl
@fragment
fn fragmentMain(input: VertexOut) -> @location(0) vec4f {
  let mask = smoothstep(0.5, 0.36, length(input.local));
  let color = textureSample(imageTexture, imageSampler, input.uv);
  if (color.a < 0.05 || mask <= 0.0) {
    discard;
  }
  return vec4f(color.rgb, color.a * mask * input.alpha);
}
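The `smoothstep(0.5, 0.36, …)` call uses a reversed edge order: the mask is 1 near the particle centre and fades to 0 at the quad's edge. Strictly speaking, `edge0 > edge1` is left undefined by the GLSL/WGSL specs, but the standard Hermite formula handles it gracefully in practice, which is what this little CPU version shows:

```typescript
// The standard smoothstep Hermite interpolation. With edge0 > edge1 the
// ramp simply runs in reverse: 1 at x <= edge1, 0 at x >= edge0.
function smoothstep(edge0: number, edge1: number, x: number): number {
  const t = Math.min(1, Math.max(0, (x - edge0) / (edge1 - edge0)));
  return t * t * (3 - 2 * t);
}
```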

Why I like this

This is the kind of browser feature that makes the web feel exciting. It is not a production physics engine, and I am not pretending it is one. It is a small experiment to see what it feels like to write GPU-shaped code on a normal website and get something playful out of it.

The fun part is that the page still behaves like the rest of the site. It is just an Astro content entry with a canvas component and a TypeScript sketch file. The heavy part is hidden in the GPU where it belongs.