How to write a custom fragment shader in GLSL and use it with three.js

rectangles_cover.png

This is the first of two posts looking at writing a custom fragment shader to create a grid of rotating triangles.

Notice each triangle rotates separately towards the direction of the mouse.

Code for this post


The goal is to use this visualization as the background of a website, and in this post we are going to set up the code using three.js that will allow us to incorporate a custom fragment shader.

In the second post, we will go through the shader code in detail.

But first, let's step back and talk about what all of this even is!

What is a fragment shader?

  • In graphics programming, a fragment shader is a program that runs on the GPU.
  • It is written in a language called GLSL (the OpenGL Shading Language), which is a strongly-typed language similar to C.
  • It is responsible for determining the color and other attributes of each pixel (fragment) making up an object on the screen.

In this project, we will be using a 2-D rectangle object that fills the background of a website, and applying a fragment shader to it.

If you've seen sites such as Shadertoy, this shader will be similar to something you might find on there.

We will be using the three.js library, which provides boilerplate code for setting up WebGL and managing its state.

In this first post we will go over setting up the three.js app.

WebGL Basics

WebGL is a JavaScript API based on the OpenGL graphics library, which allows for the creation of interactive 3D graphics that can run in a web browser without the need for any plug-ins or other third-party software.

You can access the WebGL API through the HTML canvas element.

At the core of a WebGL program are two main components: vertex and fragment shaders.

Vertex Shader

The vertex shader takes coordinates and other attributes that define some kind of geometry as inputs.

In WebGL there are three types of drawing primitives:

  • points
  • lines
  • triangles

Most shapes, such as this 2-D rectangle, are made up of triangles.

rectangle_vertices.png
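To sketch what that means in data, a rectangle can be described by six vertices forming two triangles that share a diagonal (the layout below is purely illustrative; three.js will build our geometry for us later):

```javascript
// A rectangle as two triangles. Each pair of numbers is one (x, y)
// vertex; the two triangles share the diagonal from (-1, -1) to (1, 1).
const rectangleVertices = new Float32Array([
  -1, -1,   1, -1,   1,  1,  // lower-right triangle
  -1, -1,   1,  1,  -1,  1,  // upper-left triangle
]);
// 6 vertices x 2 components each = 12 numbers
```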

The vertex shader converts the input coordinates to clip space, which ranges from -1 to 1 in all dimensions.

For example:

clip_space_comparison.png

Your input data might be 2-D coordinates with ranges:

  • 0 <= x <= 500
  • 0 <= y <= 200

Vertices with coordinate values falling outside of the clip space range will not be rendered, so converting them is important.
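Mapping those ranges into clip space is just a linear rescale. As a sketch (`toClipSpace` is a hypothetical helper for illustration; three.js does this via the camera's projection matrix):

```javascript
// Rescale a coordinate from [0, max] into clip space [-1, 1].
// The y axis is flipped because screen y grows downward,
// while clip-space y grows upward.
function toClipSpace(x, y, maxX, maxY) {
  return {
    x: (x / maxX) * 2 - 1,
    y: -((y / maxY) * 2 - 1),
  };
}

// With the example ranges 0 <= x <= 500 and 0 <= y <= 200:
toClipSpace(0, 0, 500, 200);     // top-left     -> { x: -1, y:  1 }
toClipSpace(500, 200, 500, 200); // bottom-right -> { x:  1, y: -1 }
```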

This part is taken care of by three.js, so we don't have to worry about it.

Fragment Shader

After the vertex shader computes vertex positions for the input data, the fragment shader computes the color for each pixel making up the object(s) on the screen.

  • This is done in parallel for all of these pixels.
  • The shader will output a color for each pixel.
  • Each pixel is processed independently in its own thread and has no knowledge of what is happening in any other thread, which can make writing fragment shaders a bit of a challenge at first.

Shaders written in GLSL can be stored in non-JavaScript script tags, for example like this:

<script id="fragmentShader" type="x-shader/x-fragment"></script>

Or a shader can simply be stored as a string variable, like in the Observable notebook I linked.
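For example, a minimal fragment shader kept in a string might look like this (it just fills every pixel with a solid color; our real shader comes in the next post):

```javascript
// GLSL source stored in a JavaScript template string.
const fragmentShader = `
  void main() {
    // gl_FragColor is the output color: red, green, blue, alpha.
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
  }
`;
```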

three.js

Three.js is a library that provides an API for working with WebGL without needing to know the intricacies of WebGL programming.

There is a lot of boilerplate code to set up the WebGL pipeline that we won't have to worry about when using it.

It does support writing your own shader program, which you can do by attaching it to a material object.

scene_graph.png

Import three.js from a CDN

For this project I've imported three.js from a CDN.

Find the instructions to set that up here.

Next we will go over the three.js structure of the code.

I also highly recommend this great tutorial on the three.js website that goes into detail on the various components that we will touch on in this post.

Scene | Camera | Renderer

The basic objects in three.js that are required to render anything are:

  • scene
  • camera
  • renderer

The scene will be rendered with the camera.

First let's import three.js.

import * as THREE from 'three';

And initialize some variables.

let camera, scene, renderer;

Scene

The scene is like a container that holds the objects you want to render.

In the graphic above, you can see that:

  • The scene will contain a mesh object.
  • The mesh object holds geometry and material objects, which correspond to the rectangle and the fragment shader.

scene = new THREE.Scene();
scene.background = new THREE.Color( 0x000000 );

I gave it a black background color to start off.

Here is a great article on scene graphs.

Camera

The camera simulates the viewpoint of the observer - the user, in this case.

We are using an OrthographicCamera, which works well for 2-D visualizations; you can read more about it if interested.

camera = new THREE.OrthographicCamera( window.innerWidth / - 2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / - 2, 1, 1000 );
camera.position.z = 1;

We are just making a pretty basic 2-D visualization and don't need to worry too much about the camera, aside from the fact that we have to have one to render anything.

The tutorial I mentioned earlier builds a basic 3-D cube animation and goes into more detail on cameras.

Renderer

The renderer takes the scene and camera as input, and generates the 2-D image that is ultimately displayed on the screen.

I mentioned earlier that we can access the WebGL API through the canvas element.

<canvas id="glCanvas"></canvas>

So we will query for that element and use it to initialize a three.js WebGLRenderer object.

const canvas = document.querySelector('#glCanvas');

renderer = new THREE.WebGLRenderer({canvas});
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
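One detail worth keeping in mind: if the window is resized, the renderer size and the camera's frustum bounds both go stale. A minimal sketch of keeping them in sync (`onWindowResize` is a hypothetical helper, not part of three.js; it assumes the renderer and camera objects created above):

```javascript
// Keep the renderer size and the orthographic frustum in sync
// with new window dimensions.
function onWindowResize(renderer, camera, width, height) {
  renderer.setSize(width, height);
  camera.left = width / -2;
  camera.right = width / 2;
  camera.top = height / 2;
  camera.bottom = height / -2;
  camera.updateProjectionMatrix(); // apply the new bounds
}

// Typically wired up as:
// window.addEventListener('resize', () =>
//   onWindowResize(renderer, camera, window.innerWidth, window.innerHeight));
```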

And with this, we have the basic structure set up to render an image on screen.

Now we need to tell three.js what to render by adding some stuff to the scene we've created.

Mesh | Geometry | Material

mesh_geo_mat.png

A material determines the appearance of the geometry.

This visualization will consist of:

  • a rectangle geometry
  • our fragment shader to color in the rectangle

We will create a Mesh object that packages together these two things.

Geometry

We're using the built-in PlaneGeometry object in three.js - this is a 2-D geometry in the x-y plane.

const plane = new THREE.PlaneGeometry(window.innerWidth, window.innerHeight);

We've passed it the window width and height, so it will take up the entire viewport.

Material

We need to create a material to describe how the rectangle should be colored in.

The ShaderMaterial object in three.js allows us to add our custom fragment shader like so:

const material = new THREE.ShaderMaterial( {
    uniforms: {
        u_resolution: new THREE.Uniform( new THREE.Vector2() ),
        u_mouse: new THREE.Uniform( new THREE.Vector2() )
    },
    fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
} );

Uniforms

Somehow we need to be able to access the current mouse coordinates in the shader, so that we can rotate each triangle towards that location.

We can pass in variables from the JavaScript code, called uniforms; they hold values that are the same across all pixels.

We are passing two uniforms, which are both vectors with an x and y component.

  • u_resolution - viewport resolution, this is used in case the window is resized
  • u_mouse - mouse coordinates
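On the GLSL side, each uniform must be declared with the same name and a matching type before it can be used. A sketch of what that looks like, with a placeholder body (the real shader is covered in the next post):

```javascript
// GLSL source as a string; the uniform declarations match the
// names passed in from the ShaderMaterial above.
const fragmentShaderSource = `
  uniform vec2 u_resolution; // viewport width and height in pixels
  uniform vec2 u_mouse;      // current mouse position

  void main() {
    // Placeholder body: color each pixel by its normalized position.
    vec2 st = gl_FragCoord.xy / u_resolution;
    gl_FragColor = vec4(st, 0.0, 1.0);
  }
`;
```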

Shader code

Earlier I mentioned that we can write the actual GLSL shader code inside of a script tag:

<script id="fragmentShader" type="x-shader/x-fragment">
// GLSL code will go here
</script>

Here we are passing its text content to the ShaderMaterial object.

It is currently empty because we haven't written the shader yet!

A Mesh object ties it all together

Now we create the mesh object, which takes our geometry object and applies the material to it.

const mesh = new THREE.Mesh( plane, material );

You can read more about polygon meshes here.

Add the Mesh to the Scene

Finally, we add the mesh to the scene.

scene.add( mesh );

Rendering the visualization

Finally we have two functions that will render and animate the visualization.

Render

function render() {
    const object = scene.children[ 0 ];
    object.material.uniforms.u_resolution.value.x = window.innerWidth;
    object.material.uniforms.u_resolution.value.y = window.innerHeight;
    renderer.render( scene, camera );
}

  • The scene only has one child, which is the mesh object we just added to it.
  • From there we can access and update the uniform values.

Animation loop

This calls the render function on every frame, which lets us see the triangles move as the mouse moves around.

function animate() {
    requestAnimationFrame( animate );
    render();
}

Pointermove event

And finally here is the pointermove event handler, which will update the x and y values of u_mouse whenever the mouse moves around.

function onPointerMove( event ) {
    event.preventDefault();
    // Normalize to the -1 to 1 range, flipping y so that up is positive.
    const mouseX = ( event.clientX / window.innerWidth ) * 2 - 1;
    const mouseY = - ( event.clientY / window.innerHeight ) * 2 + 1;
    const object = scene.children[ 0 ];
    object.material.uniforms.u_mouse.value.x = mouseX;
    object.material.uniforms.u_mouse.value.y = mouseY;
}
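For reference, mapping client coordinates into the -1 to 1 normalized-device-coordinate range (with y flipped, since client y grows downward) can be written as a small pure function (`toNDC` is a hypothetical helper name, not part of three.js):

```javascript
// Map a client-space mouse position into normalized device
// coordinates, where (-1, -1) is bottom-left and (1, 1) is top-right.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1, // flip y: client y grows downward
  };
}

// A pointer in the exact center of an 800x600 window maps to the origin:
toNDC(400, 300, 800, 600); // -> { x: 0, y: 0 }
```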

Thanks for reading!

In the next post, I will go over the fragment shader in more detail, but for now you can check it out on Github or Observable.

