Contents of this article: JS and WebGL related knowledge, 2-pass shadow algorithm, BIAS to alleviate self-occlusion, PCF algorithm, PCSS, object movement.
Project source code:
GitHub – Remyuu/GAMES202-Homework: GAMES202-Homework
The picture above is fun to draw.
Preface
Since I knew nothing about JS and WebGL, I could only fall back on console.log() whenever I ran into problems.
In addition to the content required by the assignment, I also ran into some questions while coding, and I hope someone can answer them QAQ.
- How to achieve dynamic point light shadow effect? We need to use point light shadow technology to achieve omnidirectional shadow maps. How to do it specifically?
- The poissonDiskSamples function does not actually produce a Poisson disk distribution, does it?
Framework Modification
Before starting, please apply a few corrections to the assignment framework. Original post describing the changes: https://games-cn.org/forums/topic/zuoyeziliao-daimakanwu/
- The unpack function provided by the framework is not implemented accurately. With no bias added, it causes severe banding (the ground becomes half white and half black, rather than the typical z-fighting pattern), which affects debugging of the assignment to some extent.
// homework1/src/shaders/shadowShader/shadowFragment.glsl
vec4 pack (float depth) {
// Use RGBA 4 bytes, 32 bits in total, to store the z value, and the precision of 1 byte is 1/255
const vec4 bitShift = vec4(1.0, 255.0, 255.0 * 255.0, 255.0 * 255.0 * 255.0);
const vec4 bitMask = vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
// gl_FragCoord: the coordinates of the fragment, fract(): returns the decimal part of the value
vec4 rgbaDepth = fract(depth * bitShift); // Calculate the z value of each point
rgbaDepth -= rgbaDepth.gba * bitMask; // Cut off the value which do not fit in 8 bits
return rgbaDepth;
}
// homework1/src/shaders/phongShader/phongFragment.glsl
float unpack(vec4 rgbaDepth) {
const vec4 bitShift = vec4(1.0, 1.0/255.0, 1.0/(255.0*255.0), 1.0/(255.0*255.0*255.0));
return dot(rgbaDepth, bitShift);
}
- To clear the screen, you also need to add a glClear.
// homework1/src/renderers/WebGLRenderer.js
gl.clearColor(0.0, 0.0, 0.0,1.0);// Clear to black, fully opaque
gl.clearDepth(1.0);// Clear everything
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
The most basic knowledge of JS
variable
- In JavaScript, we mainly use three keywords: var, let and const to declare variables/constants.
- var declares a variable whose scope is the entire enclosing function (function scope).
- let behaves like var and also declares a variable, but its scope is limited to the enclosing block (block scope), such as the body of a for loop or an if statement.
- const is used to declare a constant; its scope is also block level.
- It is recommended to use let and const instead of var to declare variables, because they follow block-level scope, match the scoping rules of most programming languages, and are easier to understand and predict. A small example follows this list.
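To make the difference concrete, here is a minimal sketch (plain JS, runnable in a browser console or Node):
// Scope demo: var is function-scoped, let/const are block-scoped
function scopeDemo() {
    if (true) {
        var a = 1;   // function-scoped: visible throughout scopeDemo()
        let b = 2;   // block-scoped: only visible inside this if-block
        const c = 3; // block-scoped constant
    }
    console.log(a);        // 1
    console.log(typeof b); // "undefined" — b does not exist out here
    // console.log(b);     // would throw: ReferenceError: b is not defined
}
scopeDemo();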
Classes
A basic JavaScript class structure is as follows:
class MyClass {
constructor(parameter1, parameter2) {
this.property1 = parameter1;
this.property2 = parameter2;
}
method1() {
// method body
}
static sayHello() {
console.log('Hello!');
}
}
Create an instance:
let myInstance = new MyClass('value1', 'value2');
myInstance.method1(); //Calling class method
You can also call static methods directly (without creating an instance):
MyClass.sayHello(); // "Hello!"
Brief description of project process
The program entry is engine.js, and the main function is GAMES202Main. First, initialize WebGL related content, including the camera, camera interaction, renderer, light source, object loading, user GUI interface and the most important main loop.
During object loading, loadOBJ.js is called. It first loads the corresponding GLSL from file, then builds the Phong material and the shadow material used for the Phong-related shadows.
// loadOBJ.js
case 'PhongMaterial':
Material = buildPhongMaterial(colorMap, mat.specular.toArray(), light, Translation, Scale, "./src/shaders/phongShader/phongVertex.glsl", "./src/shaders/phongShader/phongFragment.glsl");
shadowMaterial = buildShadowMaterial(light, Translation, Scale, "./src/shaders/shadowShader/shadowVertex.glsl", "./src/shaders/shadowShader/shadowFragment.glsl");
break;
}
Then, the 2-pass shadow map and conventional Phong material are directly generated through MeshRender. The specific code is as follows:
// loadOBJ.js
Material.then((data) => {
// console.log("Now making surface material")
let meshRender = new MeshRender(Renderer.gl, mesh, data);
Renderer.addMeshRender(meshRender);
});
shadowMaterial.then((data) => {
// console.log("Now making shadow material")
let shadowMeshRender = new MeshRender(Renderer.gl, mesh, data);
Renderer.addShadowMeshRender(shadowMeshRender);
});
Note that MeshRender is fairly generic: it accepts any type of material as its parameter. How are the two kinds distinguished? By checking whether the incoming material.frameBuffer is null: if it is null, the surface material is loaded; otherwise the shadow map is loaded. In the draw() function of MeshRender.js you can see the following code:
// MeshRender.js
if (this.Material.frameBuffer != null) {
// Shadow map
gl.viewport(0.0, 0.0, resolution, resolution);
} else {
gl.viewport(0.0, 0.0, window.screen.width, window.screen.height);
}
After the shadow is generated by MeshRender, it is pushed into the renderer. The corresponding implementation can be found in WebGLRenderer.js:
addShadowMeshRender(mesh) { this.shadowMeshes.push(mesh); }
Finally, enter the mainLoop() main loop to update the screen frame by frame.
Detailed explanation of the project process
This chapter starts from a small question and explores how the fragment shader is constructed. This question threads through almost the entire project, and it is also the order in which I find it most comfortable to read the code.
Where does the GLSL run? — Tracing the code flow starting from the fragment shader
Above, we did not explain in detail how the GLSL files are actually used; here we go through it carefully.
First, in loadOBJ.js, the .glsl files are referenced for the first time by their file paths:
// loadOBJ.js - function loadOBJ()
Material = buildPhongMaterial(colorMap, mat.specular.toArray(), light, Translation, Scale, "./src/shaders/phongShader/phongVertex.glsl", "./src/shaders/phongShader/phongFragment.glsl");
shadowMaterial = buildShadowMaterial(light, Translation, Scale, "./src/shaders/shadowShader/shadowVertex.glsl", "./src/shaders/shadowShader/shadowFragment.glsl");
Here we take phongFragment.glsl as an example. The buildPhongMaterial function in PhongMaterial.js loads the GLSL code from disk through the getShaderString method, and the GLSL code is then passed as a constructor argument to build a PhongMaterial object. During construction, PhongMaterial calls super() to invoke the constructor of its parent class Material.js, that is, to hand the GLSL code over to Material.js:
// PhongMaterial.js
super({...}, [], ..., fragmentShader);
Unlike C++, where the subclass would have to match the parent constructor's parameter list, here the parent class constructor declares five parameters while only four arguments are actually passed, which is completely fine in JavaScript.
In Material.js, the GLSL code passed by the subclass arrives as the fourth constructor parameter and is stored in #fsSrc. At this point the GLSL code has reached the end of its journey; the next function waiting for it is compile().
// Material.js
this.#fsSrc = fsSrc;
...
compile(gl) {
return new Shader(..., ..., this.#fsSrc,{...});
}
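As a small illustration of the point about constructor parameters above (the exact parameter list of Material.js is assumed here for the sketch), trailing arguments that the caller omits simply arrive as undefined in JS:
// Sketch only — parameter names are assumptions, not copied from Material.js.
class MaterialSketch {
    constructor(uniforms, attribs, vsSrc, fsSrc, frameBuffer) {
        // A parameter the caller did not supply is just `undefined`.
        this.frameBuffer = frameBuffer; // undefined here -> later treated as "no shadow FBO"
    }
}
class PhongMaterialSketch extends MaterialSketch {
    constructor(fsSrc) {
        super({}, [], '/* vertex shader source */', fsSrc); // only 4 of the 5 parameters are passed
    }
}
const m = new PhongMaterialSketch('/* fragment shader source */');
console.log(m.frameBuffer); // undefined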
So when is compile() called? Back to the flow of loadOBJ.js: now that buildPhongMaterial() has fully executed, the next step is the then() part mentioned in the previous section.
Note that loadOBJ() is just a function, not an object!
// loadOBJ.js
Material.then((data) => {
let meshRender = new MeshRender(Renderer.gl, mesh, data);
Renderer.addMeshRender(meshRender);
Renderer.ObjectID[ObjectID][0].push(Renderer.meshes.length - 1);
});
When constructing a MeshRender object, compile() is called:
// MeshRender.js
constructor(gl, mesh, Material) {
...
this.shader = this.Material.compile(gl);
}
// Material.js
compile(gl) {
return new Shader(..., ..., this.#fsSrc,{...});
}
Next, let's take a closer look at the structure of shader.js. Material supplies all four constructor parameters when constructing the Shader object. Here we focus on fsSrc, that is, we continue following the fate of the GLSL code.
// shader.js
constructor(gl, vsSrc, fsSrc, shaderLocations) {
...
const fs = this.compileShader(fsSrc, ...);
...
}
When the Shader object is constructed, the fragment shader fs is compiled with the compileShader() function. compileShader() creates a WebGLShader object in the local variable shader; the code is as follows:
// shader.js
compileShader(shaderSource, shaderType) {
const gl = this.gl;
var shader = gl.createShader(shaderType);
gl.shaderSource(shader, shaderSource);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error(shaderSource);
console.error('shader compiler error:\n' + gl.getShaderInfoLog(shader));
}
return shader;
};
What is this gl? It is passed to shader.js as the parameter renderer.gl when loadOBJ() constructs the MeshRender object. And renderer is the first parameter of loadOBJ(), which is passed in engine.js.
Actually, renderer in loadOBJ.js is a WebGLRenderer object. And the gl of renderer.gl is created in engine.js:
// engine.js
const gl = canvas.getContext('webgl');
gl can be understood as the WebGL rendering context obtained from the canvas in index.html. In effect, gl is the interface through which developers talk to the WebGL API.
<!-- index.html -->
<canvas id="glcanvas">
WebGL recommended references:
- https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API
- https://webglfundamentals.org
- https://www.w3cschool.cn/webgl/vjxu1jt0.html
Tips: The website has a corresponding Chinese version, but it is recommended to read the English version if you are capable~ WebGL API:
- https://developer.mozilla.org/en-US/docs/Web/API
- https://webglfundamentals.org/docs/
After knowing what gl is, it is natural to find out where and how the project framework is connected with WebGL.
// Shader.js
compileShader(shaderSource, shaderType) {
const gl = this.gl;
var shader = gl.createShader(shaderType);
gl.shaderSource(shader, shaderSource);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error(shaderSource);
console.error('shader compiler error:\n' + gl.getShaderInfoLog(shader));
}
return shader;
};
That is to say, all gl methods are called through the WebGL API. gl.createShader is the first WebGL API we come into contact with.
For now we just need to know that createShader() returns a WebGLShader object. We will explain this in more detail later; let's focus on where shaderSource goes.
- gl.shaderSource: sets the GLSL source code (a string) on the shader object.
In other words, the GLSL source code we have been tracking is attached to the WebGLShader through the gl.shaderSource function.
The WebGLShader is then compiled by gl.compileShader() into a form that can be used by a WebGLProgram object.
Simply put, a WebGLProgram is a GLSL program containing compiled WebGL shaders; it must contain at least a vertex shader and a fragment shader. In WebGL, you create one or more WebGLProgram objects, each containing a specific set of rendering instructions. By switching between different WebGLPrograms, you can produce a variety of rendered results.
The if statement is the part that checks whether the shader was compiled successfully. If the compilation fails, the code inside the brackets is executed. Finally, the shader object shader is returned after compilation (or attempted compilation).
At this point, we have completed the work of taking the GLSL file from the hard disk and compiling it into a shader object.
But the rendering process is not over yet. Let's go back to the construction of the Shader object:
// Shader.js
class Shader {
constructor(gl, vsSrc, fsSrc, shaderLocations) {
this.gl = gl;
const vs = this.compileShader(vsSrc, gl.VERTEX_SHADER);
const fs = this.compileShader(fsSrc, gl.FRAGMENT_SHADER);
this.program = this.addShaderLocations({
glShaderProgram: this.linkShader(vs, fs),
}, shaderLocations);
}
...
Although we just explained the GLSL compilation process of the fragment shader, the vertex shader is quite similar, so it is omitted here.
Here we introduce the process of linking shaders using linkShader(). The code is below the text.
- First create a program object, a WebGLProgram, with gl.createProgram().
- Attach the compiled vertex shader and fragment shader, vs and fs, to the program. This step is called attaching shaders; concretely they are attached to the WebGLProgram with gl.attachShader().
- Link the WebGLProgram using gl.linkProgram(). This produces an executable program that combines the previously attached shaders. This step is called linking.
- Finally, check the link status and return the WebGLProgram object.
// Shader.js
linkShader(vs, fs) {
const gl = this.gl;
var prog = gl.createProgram();
gl.attachShader(prog, vs);
gl.attachShader(prog, fs);
gl.linkProgram(prog);
if (!gl.getProgramParameter(prog, gl.LINK_STATUS)) {
abort('shader linker error:\n' + gl.getProgramInfoLog(prog));
}
return prog;
};
A WebGLProgram can be thought of as a container for shaders, which contains all the information and instructions needed to transform 3D data into 2D pixels on the screen.
After obtaining the linked program glShaderProgram, it is processed together with the shaderLocations object.
Simply put, the shaderLocations object contains two kinds of properties:
- Attributes are "individual" data (such as information about each vertex)
- Uniforms are "overall" data (such as information about a light)
The framework packages the loading process into addShaderLocations(). Simply put, after this step, when you need to assign values to these uniforms and attributes, you can directly operate through the acquired locations without having to query the locations every time.
addShaderLocations(Result, shaderLocations) {
const gl = this.gl;
Result.uniforms = {};
Result.attribs = {};
if (shaderLocations && shaderLocations.uniforms && shaderLocations.uniforms.length) {
for (let i = 0; i < shaderLocations.uniforms.length; ++i) {
Result.uniforms = Object.assign(Result.uniforms, {
[shaderLocations.uniforms[i]]: gl.getUniformLocation(Result.glShaderProgram, shaderLocations.uniforms[i]),
});
}
}
if (shaderLocations && shaderLocations.attribs && shaderLocations.attribs.length) {
for (let i = 0; i < shaderLocations.attribs.length; ++i) {
Result.attribs = Object.assign(Result.attribs, {
[shaderLocations.attribs[i]]: gl.getAttribLocation(Result.glShaderProgram, shaderLocations.attribs[i]),
});
}
}
return Result;
}
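As a usage note: once the locations are cached in program.uniforms and program.attribs, later code can set values through them directly. The camera pass in WebGLRenderer.js (shown further below) does exactly this; in slightly simplified form, where shader and lightPos abbreviate this.meshes[i].shader and this.lights[0].entity.lightPos:
// Simplified from the camera pass in WebGLRenderer.js
gl.useProgram(shader.program.glShaderProgram);
gl.uniform3fv(shader.program.uniforms.uLightPos, lightPos); // no getUniformLocation() call needed per frame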
Let's review what has been done so far: we have successfully constructed a compiled (or at least compilation-attempted) Shader object for MeshRender:
// MeshRender.js - construct()
this.shader = this.Material.compile(gl);
At this point, the task of loadOBJ has been successfully completed. In engine.js, such loading needs to be done three times:
// loadOBJ(renderer, path, name, objMaterial, transform, meshID);
loadOBJ(Renderer, 'assets/mary/', 'Marry', 'PhongMaterial', obj1Transform);
loadOBJ(Renderer, 'assets/mary/', 'Marry', 'PhongMaterial', obj2Transform);
loadOBJ(Renderer, 'assets/floor/', 'floor', 'PhongMaterial', floorTransform);
Next, we come to the main loop of the program. That is, one loop represents one frame:
// engine.js
loadOBJ(...);
...
function mainLoop() {...}
...
Main program loop — mainLoop()
In fact, when mainLoop is executed, the function will call itself again, forming an infinite loop. This is the basic mechanism of the so-called game loop or animation loop.
// engine.js
function mainLoop() {
    cameraControls.update();
    renderer.render();
    requestAnimationFrame(mainLoop);
};
requestAnimationFrame(mainLoop);
cameraControls.update(); Updates the camera's position or orientation, for example in response to user input.
renderer.render(); The scene is rendered or drawn to the screen. The specific content and method of rendering depends on the implementation of the renderer object.
The benefit of requestAnimationFrame is that it will try to synchronize with the screen refresh rate, which can provide smoother animations and higher performance because it will not execute code unnecessarily between screen refreshes.
For more information about the requestAnimationFrame() function, refer to the following article: https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame
Next, focus on the operation of the render() function.
render()
This is a typical process of light source rendering, shadow rendering and final camera perspective rendering. I will not go into details here and will move on to the multi-light source section later.
// WebGLRenderer.js - render()
const gl = this.gl;
gl.clearColor(0.0, 0.0, 0.0, 1.0); // The default value of shadowmap is white (no occlusion), which solves the problem of shadows on the ground edge (because the ground cannot be sampled, the default value of 0 will be considered as occluded)
gl.clearDepth(1.0);// Clear everything
gl.enable(gl.DEPTH_TEST); // Enable depth testing
gl.depthFunc(gl.LEQUAL); // Near things obscure far things
console.assert(this.lights.length != 0, "No light");
console.assert(this.lights.length == 1, "Multiple lights");
for (let l = 0; l < this.lights.length; l++) {
gl.bindFramebuffer(gl.FRAMEBUFFER, this.lights[l].entity.fb);
gl.clear(gl.DEPTH_BUFFER_BIT);
// Draw light
// TODO: Support all kinds of transform
this.lights[l].meshRender.mesh.transform.translate = this.lights[l].entity.lightPos;
this.lights[l].meshRender.draw(this.camera);
// Shadow pass
if (this.lights[l].entity.hasShadowMap == true) {
for (let i = 0; i < this.shadowMeshes.length; i++) {
this.shadowMeshes[i].draw(this.camera);
}
}
}
// Camera pass
for (let i = 0; i < this.meshes.length; i++) {
this.gl.useProgram(this.meshes[i].shader.program.glShaderProgram);
this.gl.uniform3fv(this.meshes[i].shader.program.uniforms.uLightPos, this.lights[0].entity.lightPos);
this.meshes[i].draw(this.camera);
}
GLSL Quick Start - Analyzing the Fragment Shader FragmentShader.glsl
Above we discussed how to load GLSL. This section introduces the concept and practical usage of GLSL.
When rendering in WebGL, we need at least one vertex shader and one fragment shader. In the previous section, we took the fragment shader as an example to show how the framework reads a GLSL file from disk into the renderer. Next, again using the fragment shader (i.e. phongFragment.glsl) as the example, we introduce how GLSL itself is written.
What is the use of FragmentShader.glsl?
The role of the fragment shader is to render the correct color for the current pixel when rasterizing. The following is the simplest form of a fragment shader, which contains a main() function, in which the color of the current pixel gl_FragColor is specified.
void main(void){
...
gl_FragColor = vec4(Color, 1.0);
}
What data does the Fragment Shader accept?
The fragment shader needs input data, which is provided in three main ways. For concrete usage, see Appendix 1.6:
- Uniforms (global variables): values that remain constant for all vertices and fragments within a single draw call. Common examples include transformation matrices (translation, rotation, etc.), light parameters, and material properties. Because they are uniform across the whole draw call, they are called "uniforms".
- Textures: Textures are arrays of image data that can be sampled by the fragment shader to get color, normal, or other types of information for each fragment.
- Varyings: These are the values output by the vertex shader, which are interpolated between the vertices of a graphics primitive (such as a triangle) and passed to the fragment shader. This allows us to calculate values (such as transformed positions or vertex colors) in the vertex shader and interpolate between fragments for use in the fragment shader.
Uniforms and Varyings were used in the project.
GLSL Basic Syntax
I won't go over the basic usage here, because that would be too boring. Let's just look at the project:
// phongFragment.glsl - PCF pass
void main(void) {
// Declare variables
float visibility; // Visibility (for shadows)
vec3 shadingPoint; // Viewpoint coordinates from the light source
vec3 phongColor; // Calculated Phong lighting color
// Normalize the coordinate value of vPositionFromLight to the range [0,1]
shadingPoint = vPositionFromLight.xyz / vPositionFromLight.w;
shadingPoint = shadingPoint * 0.5 + 0.5; //Convert the coordinates to the range [0,1]
// Calculate visibility (shadows).
visibility = PCF(uShadowMap, vec4(shadingPoint, 1.0)); // Use PCF (Percentage Closer Filtering) technology
// Use the blinnPhong() function to calculate the Phong lighting color
phongColor = blinnPhong();
// Calculate the final fragment color, multiply the Phong lighting color by the visibility to get the fragment color that takes shadows into account
gl_FragColor = vec4(phongColor * visibility, 1.0);
}
Like C language, GLSL is a strongly typed language. You cannot assign a value like this: float visibility = 1;, because 1 is of type int.
vector or matrix
In addition, glsl has many special built-in types, such as floating-point type vectors vec2, vec3 and vec4, and matrix types mat2, mat3 and mat4.
The access method of the above data is also quite interesting.
- .xyzw: Usually used to represent points or vectors in three-dimensional or four-dimensional space.
- .rgba: Used when the vector represents a color, where r represents red, g represents green, b represents blue, and a represents transparency.
- .stpq: Used when vectors are used as texture coordinates.
Therefore,
- v.x, v[0], v.r and v.s all refer to the first component of the vector.
- v.y, v[1], v.g and v.t all refer to the second component of the vector.
- For vec3 and vec4, v.z, v[2], v.b and v.p all refer to the third component of the vector.
- For vec4, v.w, v[3], v.a and v.q all refer to the fourth component of the vector.
You can even access these types of data using a technique called "component reassembly" or "component selection":
- Repeating components:
- v.yyyy produces a new vec4 in which every component is the y component of the original v. This has the same effect as vec4(v.y, v.y, v.y, v.y).
- Swapping components:
- v.bgra produces a new vec4 whose components are taken from v in the order b, g, r, a. This is the same as vec4(v.b, v.g, v.r, v.a).
When constructing a vector or matrix you can provide several components at once, for example:
- vec4(v.rgb, 1) is equivalent to vec4(v.r, v.g, v.b, 1)
- vec4(1) is equivalent to vec4(1, 1, 1, 1)
Reference: GLSL language specification https://www.khronos.org/files/opengles_shading_language.pdf
Matrix storage method
These tips can be found in the glMatrix docs: https://glmatrix.net/docs/mat4.js.html. In addition, looking closely, you will find that this library also uses column-major storage; matrices in WebGL and GLSL are likewise stored column by column. It is shown below:
To move an object to a new position, use the mat4.translate() function, which takes three parameters: a 4×4 output matrix out, an input 4×4 matrix a, and a 3-component translation vector v.
Plain matrix multiplication is done with mat4.multiply(), scaling with mat4.scale(), the view ("look-at") matrix with mat4.lookAt(), and the orthographic projection matrix with mat4.ortho().
Implementing matrix transformation of light source camera
If we used perspective projection, we would need to squash the frustum below into an orthographic volume, as shown below:
But by using orthographic projection we keep the depth values linear, which preserves as much shadow map precision as possible.
// DirectionalLight.js - CalcLightMVP()
let lightMVP = mat4.create();
let modelMatrix = mat4.create();
let viewMatrix = mat4.create();
let projectionMatrix = mat4.create();
// Model transform
mat4.translate(modelMatrix, modelMatrix, translate);
mat4.scale(modelMatrix, modelMatrix, scale);
// View transform
mat4.lookAt(viewMatrix, this.lightPos, this.focalPoint, this.lightUp);
// Projection transform
let left = -100.0, right = -left, bottom = -100.0, top = -bottom,
near = 0.1, far = 1024.0;
// Set these values as per your requirement
mat4.ortho(projectionMatrix, left, right, bottom, top, near, far);
mat4.multiply(lightMVP, projectionMatrix, viewMatrix);
mat4.multiply(lightMVP, lightMVP, modelMatrix);
return lightMVP;
2-Pass Shadow Algorithm
Before implementing the two-pass algorithm, let’s take a look at how the main() function is called.
// phongFragment.glsl
void main(void){
vec3 shadingPoint = vPositionFromLight.xyz / vPositionFromLight.w;
shadingPoint = shadingPoint*0.5+0.5;// Normalize to [0,1]
float visibility = 1.0;
visibility = useShadowMap(uShadowMap, vec4(shadingPoint, 1.0));
vec3 phongColor = blinnPhong();
gl_FragColor=vec4(phongColor * visibility,1.0);
}
So the question is: where does vPositionFromLight come from? It is calculated in the vertex shader.
Unified space coordinates
In layman's terms, the world coordinates of the scene's vertices are converted to new coordinates corresponding to the NDC space of the light camera. The purpose is to retrieve the required depth value in the light camera's space when rendering the shadow of a shading point of the main camera.
vPositionFromLight is the homogeneous coordinate of a point as seen from the light source. It lives in the light's orthographic clip space, with components in the range [-w, w], and it is computed in phongVertex.glsl. The job of phongVertex.glsl is to process the input vertex data and transform each vertex into clip space using the MVP matrix computed in the previous chapter. We then convert vPositionFromLight into NDC to obtain shadingPoint, and pass the shading point that needs a shadow test into the useShadowMap function. The relevant vertex-transform code is attached:
// phongVertex.glsl - main()
vFragPos = (uModelMatrix * vec4(aVertexPosition, 1.0)).xyz;
vNormal = (uModelMatrix * vec4(aNormalPosition, 0.0)).xyz;
gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix *
vec4(aVertexPosition, 1.0);
vTextureCoord = aTextureCoord;
vPositionFromLight = uLightMVP * vec4(aVertexPosition, 1.0);
phongVertex.glsl is loaded in loadOBJ.js together with phongFragment.glsl.
Compare depth values
Next, implement the useShadowMap() function. The purpose of this function is to determine whether a fragment (pixel) is in the shadow.
texture2D() is a GLSL built-in function used to sample a 2D texture.
The unpack() and pack() functions in the code framework are set to increase numerical precision. The reasons are as follows:
- Depth information is a continuous floating point number, and its range and precision may exceed what an 8-bit channel can provide. Storing such a depth value directly in an 8-bit channel will result in a lot of precision loss, resulting in incorrect shadow effects. Therefore, we can make full use of the other three channels, that is, encode the depth value into multiple channels. By allocating different parts of the depth value to the four channels of R, G, B, and A, we can store the depth value with higher precision. When we need to use the depth value, we can decode it from these four channels.
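Concretely, with the bitShift vector shown in unpack() earlier, the depth is reconstructed as $\text{depth} = R + \frac{G}{255} + \frac{B}{255^2} + \frac{A}{255^3}$; pack() writes the matching per-channel values, so the depth survives the round trip up to the 8-bit quantization applied to each channel when the RGBA texture is written.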
closestDepthVec is the depth information of the blocker.
Finally, closestDepth is compared with the current depth. If the blocker depth (closestDepth) is greater than the depth of the fragment being rendered by the main camera (shadingPoint.z), the current shading point is not blocked and visibility returns 1.0. In addition, to reduce shadow acne and self-occlusion, the recorded blocker depth can be pushed slightly further away, i.e. an EPS can be added.
// phongFragment.glsl
float useShadowMap(sampler2D shadowMap, vec4 shadingPoint){
// Retrieve the closest depth value from the light's perspective using the fragment's position in light space.
float closestDepth = unpack(texture2D(shadowMap, shadingPoint.xy));
// Compare the fragment's depth with the closest depth to determine if it's in shadow.
return (closestDepth + EPS + getBias(.4)> shadingPoint.z) ? 1.0 : 0.0;
}
Actually, there is still a problem. Our current light camera is not omnidirectional, which means that its illumination range is only a small part. If the model is within the range of the lightCam, then the picture is completely correct.
But when the model is outside the range of lightCam, it should not participate in the useShadowMap calculation. We have not yet implemented that logic; in other words, for positions outside the range covered by lightCam's MVP transformation, the calculation may produce unexpected errors. Take another look at my crude hand-drawn diagram:
In the previous section, we defined zFar, zNear and other information in the directional light source script. The following code is shown:
// DirectionalLight.js - CalcLightMVP()
let left = -100.0, right = -left, bottom = -100.0, top = -bottom, near = 0.1, far = 1024.0;
Therefore, in order to solve the problem that the model is outside the lightCam range, we add the following logic to useShadowMap or in the code before useShadowMap to remove the sampling points that are not in the lightCam range:
// phongFragment.glsl - main()
...
if(shadingPoint.x<0.||shadingPoint.x>1.||
shadingPoint.y<0.||shadingPoint.y>1.){
visibility=1.;// The light source cannot see the area, so it will not be covered by the shadow
}else{
visibility=useShadowMap(uShadowMap,vec4(shadingPoint,1.));
}
...
The effect is shown in the figure below: the left side has the culling logic, the right side does not. When 202 moves to the edge of the lightCam's frustum, her limbs get cut off abruptly, which looks quite scary:
Of course, it is okay not to complete this step. In fact, in development, we will use a universal light source, that is, the lightCam is 360 degrees omnidirectional, and we only need to remove those points outside the zFar plane.
Add bias to improve self-occlusion problem
When we render the depth map from the light source's point of view, errors may occur due to the limitations of floating point precision. Therefore, when we use the depth map in the main rendering process, we may see the object's own shadow, which is called self-occlusion or shadow distortion.
After completing the 2-pass rendering, we found shadow acne in many places such as 202's hair, which is very unsightly. As shown in the following figure:
In theory, we can alleviate the self-occlusion problem by adding bias. Here I provide a method to dynamically adjust the bias:
// phongFragment.glsl
// Use bias offset value to optimize self-occlusion
float getBias(float ctrl) {
vec3 lightDir = normalize(uLightPos);
vec3 normal = normalize(vNormal);
float m = 200.0 / 2048.0 / 2.0; // orthographic frustum width (200) / shadow map resolution (2048) / 2
float bias = max(m, m * (1.0 - dot(normal, lightDir))) * ctrl;
return bias;
}
First, when the light direction and the normal are nearly perpendicular, self-occlusion is very likely to occur, for example on the back of 202-chan's head. So we need both the light direction and the normal direction. Here, m represents the size in scene space covered by each shadow-map pixel under the light source's view.
Finally, change useShadowMap() in phongFragment.glsl to the following:
// phongFragment.glsl
float useShadowMap(sampler2D shadowMap, vec4 shadingPoint){
...
return (closestDepth + EPS + getBias(.3)> shadingPoint.z) ? 1.0 : 0.0;
}
The effect is as follows:
It should be noted that a larger bias value may lead to over-correction and shadow loss, while a smaller value may not improve acne, so multiple attempts are required.
PCF
However, the resolution of the shadow map is limited. In actual games it is usually much lower than the screen resolution (shadow maps are expensive), so we need a way to soften the jagged shadow edges. PCF does this by averaging the shadow-test results of several shadow-map texels around each shading point.
Initially PCF was meant just to anti-alias shadow edges, but people later found that it can also be used to produce soft shadows.
Before using the PCF algorithm to estimate the shadow ratio, we need to prepare a set of sampling points. For PCF shadows, we only use 4-8 sampling points on mobile devices, while high-quality images use 16-32. In this section, we use 8 sampling points, and on this basis, we adjust the parameters of the generated samples to improve the image, reduce noise, etc.
However, the above different sampling methods do not have a particularly large impact on the final image. The most important factor affecting the image is the size of the shadow map when doing PCF. Specifically, it is the textureSize in the code, but generally speaking, this item is a fixed value in the project.
So our next idea is to implement PCF first and then fine-tune the sampling method.
After all, premature optimization is a taboo.
Implementing PCF
In main(), modify the shading algorithm used.
// phongFragment.glsl
void main(void){
...
visibility = PCF(uShadowMap, vec4(shadingPoint, 1.0));
...
}
shadingPoint.xy is the texture coordinate used to sample the shadow map, and shadingPoint.z is the depth of the current fragment.
The sampling function takes a vec2 as a random seed and fills the global poissonDisk[] array with points inside a circle of radius 1.
Then divide the uv coordinates of $[0, 1]^2$ into textureSize parts. After setting the filter window, sample multiple times near the current shadingPoint position and finally count:
// phongFragment.glsl
float PCF(sampler2D shadowMap,vec4 shadingPoint){
// The sampling result will be returned to the global variable - poissonDisk[]
poissonDiskSamples(shadingPoint.xy);
float textureSize=256.; // The size of the shadow map, the larger the size, the smaller the filtering range
float filterStride=1.; // Filter step size
float filterRange=1./textureSize*filterStride; // The range of the filter window
int noShadowCount=0; // How many points are not in the shadow
for(int i=0;i<NUM_SAMPLES;i++){
vec2 sampleCoord=poissonDisk[i]*filterRange+shadingPoint.xy;
vec4 closestDepthVec=texture2D(shadowMap,sampleCoord);
float closestDepth=unpack(closestDepthVec);
float currentDepth=shadingPoint.z;
if(currentDepth<closestDepth+EPS){
noShadowCount+=1;
}
}
return float(noShadowCount)/float(NUM_SAMPLES);
}
The effect is as follows:
poissonDisk sampling parameter settings
In the homework framework, I found that this poissonDiskSamples function is not really a Poisson disk distribution? A bit strange. Personally, it looks more like points spread evenly along a spiral. I hope readers can enlighten me. Let me first analyze the code in the framework.
Mathematical formulas related to poissonDiskSamples in the framework:
// phongFragment.glsl
float ANGLE_STEP = PI2 * float( NUM_RINGS ) / float( NUM_SAMPLES );
float INV_NUM_SAMPLES = 1.0 / float( NUM_SAMPLES );
float angle = rand_2to1( randomSeed ) * PI2;
float radius = INV_NUM_SAMPLES;
float radiusStep = radius;
Convert polar coordinates to Cartesian coordinates: $\text{poissonDisk}[i] = (\cos\theta_i, \sin\theta_i)\cdot r_i^{0.75}$. Update rule: $\theta_{i+1} = \theta_i + \Delta\theta$ with $\Delta\theta = 2\pi\cdot\text{NUM\_RINGS}/\text{NUM\_SAMPLES}$. Radius change: $r_{i+1} = r_i + 1/\text{NUM\_SAMPLES}$, starting from $r_0 = 1/\text{NUM\_SAMPLES}$.
The specific code is as follows:
// phongFragment.glsl
vec2 poissonDisk[NUM_SAMPLES];
void poissonDiskSamples( const in vec2 randomSeed ) {
float ANGLE_STEP = PI2 * float( NUM_RINGS ) / float( NUM_SAMPLES );
float INV_NUM_SAMPLES = 1.0 / float( NUM_SAMPLES );//Put the sample in a circle with a radius of 1
float angle = rand_2to1( randomSeed ) * PI2;
float radius = INV_NUM_SAMPLES;
float radiusStep = radius;
for( int i = 0; i < NUM_SAMPLES; i ++ ) {
poissonDisk[i] = vec2( cos( angle ), sin( angle ) ) * pow( radius, 0.75 );
radius += radiusStep;
angle += ANGLE_STEP;
}
}
That is, we can adjust the following parameters:
- The exponent applied to the radius
As for why the exponent 0.75 is used in the homework framework, I made an animation showing how the distance (radius) of each sample from the circle center changes as the exponent varies from 0.2 to 1.1 during the sampling. Roughly speaking, once the exponent is above 0.75, the mass of the samples leans noticeably toward the center of the circle. The animation code is in Appendix 1.2; readers can run and tweak it themselves.
(The above is a video; if you are reading the PDF version, please view it on the website.)
- The number of rings, NUM_RINGS
NUM_RINGS is used together with NUM_SAMPLES to calculate the angle difference ANGLE_STEP between each sample point.
At this point, the following analysis can be made:
If NUM_RINGS is equal to NUM_SAMPLES, then ANGLE_STEP will be equal to $2π$, which means that the angle increment in each iteration is a full circle, which obviously does not make sense. If NUM_RINGS is less than NUM_SAMPLES, then ANGLE_STEP will be less than $2π$, which means that the angle increment in each iteration is a portion of a circle. If NUM_RINGS is greater than NUM_SAMPLES, then ANGLE_STEP will be greater than $2π$, which means that the angle increment in each iteration exceeds a circle, which may cause coverage and overlap.
So in this framework, with the sample count fixed (8 here), we can choose NUM_RINGS so that the sample points are distributed as evenly as possible.
Therefore, in theory, NUM_RINGS can be set directly to 1 here.
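For example, with NUM_SAMPLES = 8: NUM_RINGS = 1 gives ANGLE_STEP $= 2\pi \cdot 1/8 = \pi/4$, i.e. eight directions spaced 45° apart over exactly one turn, while NUM_RINGS = 8 gives ANGLE_STEP $= 2\pi$, so every sample lands at the same angle modulo $2\pi$ and only the radius grows, leaving all eight points on a single ray — exactly the uneven case described below.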
(The above is a video; if you are reading the PDF version, please view it on the website.)
When the sampling points are evenly distributed, the effect is quite good:
If the sampling is very uneven, for example when NUM_RINGS equals NUM_SAMPLES, the result looks dirty and noisy:
After getting these sampling points, we can also perform weight distribution on the sampling points. For example, in the 202 course, Professor Yan mentioned that different weights can be set according to the distance of the original pixel, and farther sampling points may be assigned lower weights, but this part of the code is not involved in the project.
PCSS
First, find the average blocker depth around a given uv coordinate in the shadow map.
float findBlocker(sampler2D shadowMap,vec2 uv,float z_shadingPoint){
float count=0., depth_sum=0., depthOnShadowMap, is_block;
vec2 nCoords;
for(int i=0;i<BLOCKER_SEARCH_NUM_SAMPLES;i++){
nCoords=uv+BLOKER_SIZE*poissonDisk[i];
depthOnShadowMap=unpack(texture2D(shadowMap,nCoords));
if(abs(depthOnShadowMap) < EPS)depthOnShadowMap=1.;
// The step function is used to compare two values.
is_block=step(depthOnShadowMap,z_shadingPoint-EPS);
count+=is_block;
depth_sum+=is_block*depthOnShadowMap;
}
if(count<EPS)
return z_shadingPoint;
return depth_sum/count;
}
PCSS itself is three steps, which I will not belabor here; it is not hard to follow the theoretical formula, the key part of which is recalled below.
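For reference, the penumbra-width formula from the course is $w_{\text{penumbra}} = \frac{(d_{\text{receiver}} - d_{\text{blocker}})\cdot w_{\text{light}}}{d_{\text{blocker}}}$. In the code below, dReceiver already stores the difference $d_{\text{receiver}} - d_{\text{blocker}}$, LWIDTH plays the role of the light size $w_{\text{light}}$, and the result is clamped by MAX_PENUMBRA.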
float PCSS(sampler2D shadowMap,vec4 shadingPoint){
poissonDiskSamples(shadingPoint.xy);
float z_shadingPoint=shadingPoint.z;
// STEP 1: avgblocker depth
float avgblockerdep=findBlocker(shadowMap,shadingPoint.xy,z_shadingPoint);
if(abs(avgblockerdep - z_shadingPoint) <= EPS) // No Blocker
return 1.;
// STEP 2: penumbra size
float dBlocker=avgblockerdep,dReceiver=z_shadingPoint-avgblockerdep;
float wPenumbra=min(LWIDTH*dReceiver/dBlocker,MAX_PENUMBRA);
// STEP 3: filtering
float _sum=0.,depthOnShadowMap,vis;
vec2 nCoords;
for(int i=0;i<NUM_SAMPLES;i++){
nCoords=shadingPoint.xy+wPenumbra*poissonDisk[i];
depthOnShadowMap=unpack(texture2D(shadowMap,nCoords));
if(abs(depthOnShadowMap)<1e-5)depthOnShadowMap=1.;
vis=step(z_shadingPoint-EPS,depthOnShadowMap);
_sum+=vis;
}
return _sum/float(NUM_SAMPLES);
}
Framework Part Analysis
This part is the comments I wrote when I was casually browsing the code, and I have organized them here a little bit.
loadShader.js
Although both functions in this file load GLSL files, the latter, getShaderString(filename), is the more concise, higher-level one. The main difference is that the former returns a Promise object for the caller to handle, while the latter uses async/await so it can be written as if it returned the file content directly. For more about Promise, see Appendix 1.3 – Simple usage of JS Promise. For more about async/await, see Appendix 1.4 – Introduction to async/await. For the usage of .then(), see Appendix 1.5 – About .then().
To put it more professionally, these two functions provide different levels of abstraction. The former provides the atomic level capability of directly loading files and has finer-grained control, while the latter is more concise and convenient.
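To make the contrast concrete, here is a minimal sketch of the two styles. This is not the framework's actual loadShader.js (which may fetch the file differently); it only illustrates how the two calling styles read:
// Style 1: return the Promise and let the caller deal with it via .then()
function loadShaderFileSketch(filename) {
    return fetch(filename).then(response => response.text());
}
// Style 2: async/await — the body reads as if the file content were returned directly,
// although the caller still receives a Promise
async function getShaderStringSketch(filename) {
    const content = await loadShaderFileSketch(filename);
    return content;
}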
Add object translation effect
Adding controllers to the GUI
Recomputing shadows every frame is expensive, so I added a light controller to the GUI and use it to decide whether shadows should be recalculated each frame. In addition, when Light Moveable is unchecked, the user is not allowed to change the light position:
After checking Light Moveable, the lightPos option box appears:
Specific code implementation:
// engine.js
// Add lights
// light - is open shadow map == true
let lightPos = [0, 80, 80];
let focalPoint = [0, 0, 0]; // Directional light focusing direction (starting point is lightPos)
let lightUp = [0, 1, 0]
const lightGUI = {// Light source movement controller. If not checked, shadows will not be recalculated.
LightMoveable: false,
lightPos: lightPos
};
...
function createGUI() {
const gui = new dat.gui.GUI();
const panelModel = gui.addFolder('Light properties');
const panelCamera = gui.addFolder("OBJ properties");
const lightMoveableController = panelModel.add(lightGUI, 'LightMoveable').name("Light Moveable");
const arrayFolder = panelModel.addFolder('lightPos');
arrayFolder.add(lightGUI.lightPos, '0').min(-10).max( 10).step(1).name("light Pos X");
arrayFolder.add(lightGUI.lightPos, '1').min( 70).max( 90).step(1).name("light Pos Y");
arrayFolder.add(lightGUI.lightPos, '2').min( 70).max( 90).step(1).name("light Pos Z");
arrayFolder.domElement.style.display = lightGUI.LightMoveable ? '' : 'none';
lightMoveableController.onChange(function(value) {
arrayFolder.domElement.style.display = value ? '' : 'none';
});
}
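The part that actually skips the shadow recomputation is not shown above. Below is a minimal sketch of the idea, assuming lightGUI is reachable from the renderer (for example exposed as a global from engine.js); the shadow-pass block in WebGLRenderer.js - render() would then be gated like this:
// WebGLRenderer.js - render(), sketch only (assumes lightGUI is accessible here)
if (this.lights[l].entity.hasShadowMap == true && lightGUI.LightMoveable) {
    // Re-render the shadow map only while the light is allowed to move; when the box is
    // unchecked, last frame's shadow map is reused. Note that the gl.clear(gl.DEPTH_BUFFER_BIT)
    // on the light's framebuffer must then also be skipped, otherwise the old map is wiped.
    for (let i = 0; i < this.shadowMeshes.length; i++) {
        this.shadowMeshes[i].draw(this.camera);
    }
}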
Appendix 1.1 – Poisson sampling point plotting code
import numpy as np
import matplotlib.pyplot as plt

def simulate_poisson_disk_samples(random_seed, num_samples=100, num_rings=2):
    PI2 = 2 * np.pi
    ANGLE_STEP = PI2 * num_rings / num_samples
    INV_NUM_SAMPLES = 1.0 / num_samples
    # Initial angle and radius
    angle = random_seed * PI2
    radius = INV_NUM_SAMPLES
    radius_step = radius
    x_vals = []
    y_vals = []
    for _ in range(num_samples):
        x = np.cos(angle) * pow(radius, 0.1)
        y = np.sin(angle) * pow(radius, 0.1)
        x_vals.append(x)
        y_vals.append(y)
        radius += radius_step
        angle += ANGLE_STEP
    return x_vals, y_vals

plt.figure(figsize=(8, 8))
# Generate and plot the spiral 50 times with different random seeds
for _ in range(50):
    random_seed = np.random.rand()
    x_vals, y_vals = simulate_poisson_disk_samples(random_seed)
    plt.plot(x_vals, y_vals, '-o', markersize=5, linewidth=2)
plt.title("Poisson Disk Samples")
plt.axis('on')
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
Appendix 1.2 – Poisson sampling point post-processing animation code
Note: the code in Appendix 1.2 is directly modified from Appendix 1.1.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def simulate_poisson_disk_samples_with_exponent(random_seed, exponent, num_samples=100, num_rings=2):
    PI2 = 2 * np.pi
    ANGLE_STEP = PI2 * num_rings / num_samples
    INV_NUM_SAMPLES = 1.0 / num_samples
    angle = random_seed * PI2
    radius = INV_NUM_SAMPLES
    radius_step = radius
    x_vals = []
    y_vals = []
    for _ in range(num_samples):
        x = np.cos(angle) * pow(radius, exponent)
        y = np.sin(angle) * pow(radius, exponent)
        x_vals.append(x)
        y_vals.append(y)
        radius += radius_step
        angle += ANGLE_STEP
    return x_vals, y_vals

fig, ax = plt.subplots(figsize=(8, 8))
ax.axis('on')
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_aspect('equal', adjustable='box')
lines = [ax.plot([], [], '-o', markersize=5, linewidth=2)[0] for _ in range(50)]
exponent = 0.2

def init():
    for line in lines:
        line.set_data([], [])
    return lines

def update(frame):
    global exponent
    exponent += 0.005  # Increment to adjust the exponent
    for line in lines:
        random_seed = np.random.rand()
        x_vals, y_vals = simulate_poisson_disk_samples_with_exponent(random_seed, exponent)
        line.set_data(x_vals, y_vals)
    plt.title(f"{exponent:.3f} Poisson Disk Samples")
    return lines

ani = FuncAnimation(fig, update, frames=180, init_func=init, blit=False)
ani.save('animation.mp4', writer='ffmpeg', fps=12)
# plt.show()
Appendix 1.3 – Simple usage of JS Promise
Here is an example of how to use Promise:
function delay(milliseconds) {
    return new Promise(function(resolve, reject) {
        if (milliseconds < 0) {
            reject('Delay time cannot be negative!');
        } else {
            setTimeout(function() {
                resolve('Waited for ' + milliseconds + ' milliseconds!');
            }, milliseconds);
        }
    });
}
// Example
delay(2000).then(function(message) {
    console.log(message); // Output after two seconds: "Waited for 2000 milliseconds!"
}).catch(function(error) {
    console.log('Error: ' + error);
});
// Error example
delay(-1000).then(function(message) {
    console.log(message);
}).catch(function(error) {
    console.log('Error: ' + error); // Immediately output: "Error: Delay time cannot be negative!"
});
The standard pattern for using Promise is to write a Promise constructor whose executor takes two parameters (both of which are functions): resolve and reject. This lets you build an error-handling branch. In this example, if the input does not meet the requirements, reject is called and the Promise enters its rejected branch.
For example, when the rejected branch is taken, the value passed to reject(XXX) becomes the XXX received by the following catch(function(XXX)).
To sum up, Promise is a JS object whose core value is that it provides a very elegant and unified way to handle asynchronous operations and chained operations, together with error handling.
- With Promise’s .then() method, you can ensure that one asynchronous operation completes before executing another asynchronous operation.
- The .catch() method can be used to handle errors, without having to set up error handling for each asynchronous callback.
Appendix 1.4 – async/await
Async/await is a feature introduced in ES8, which aims to simplify the steps of using Promise.
Let’s look at the example directly:
async function asyncFunction() {
    return "Hello from async function!";
}
asyncFunction().then(result => console.log(result)); // Output: Hello from async function!
After adding async to the function, a Promise object will be implicitly returned.
The await keyword can only be used inside an async function. It "pauses" the execution of the function until the Promise is completed (resolved or rejected). Alternatively, you can also use try/catch to capture the reject.
async function handleAsyncOperation() {
    try {
        const result = await maybeFails();
        console.log(result); // If the Promise is resolved, this will output "Success!"
    } catch (error) {
        console.error('An error occurred:', error); // If the Promise is rejected, this will output "An error occurred: Failure!"
    }
}
The "pause" here means pausing theSpecific asynchronous functions, not the entire application or JavaScript event loop.
Here is a simplified explanation of how await works:
- When the await keyword is reached, execution of that async function is suspended.
- Control is returned to the event loop, allowing other code (such as other functions, event callbacks, etc.) to run immediately after the current asynchronous function.
- Once the Promise following the await is fulfilled or rejected, the previously paused asynchronous function continues to execute, resumes from the paused position, and processes the result of the Promise.
That is, although your specific async function is logically "paused", the main thread of JavaScript is not blocked. Other events and functions can still be executed in the background.
Here is an example:
console.log('Start');
async function demo() {
    console.log('Before await');
    await new Promise(resolve => setTimeout(resolve, 2000));
    console.log('After await');
}
demo();
console.log('End');
The output will be:
Start
Before await
End
(wait for 2 seconds)
After await
I hope the above explanation can help you understand the asynchronous mechanism of JS. Welcome to discuss in the comment area, I will try my best to reply to you immediately.
Appendix 1.5 About .then()
.then() is defined on the Promise object and is used to handle the result of the Promise. When you call .then(), it will not be executed immediately, but after the Promise is resolved (fulfilled) or rejected (rejected).
Key points about .then() :
- Non-blocking: When you call .then(), the code does not pause to wait for the Promise to complete. Instead, it returns immediately and executes the callback in then when the Promise is completed.
- Returns a new Promise: .then() always returns a new Promise. This allows you to chain calls, i.e. a series of .then() calls, each one handling the result of the previous Promise.
- Asynchronous callbacks: The callbacks in .then() are executed asynchronously when the original Promise is resolved or rejected. This means that they are queued in the event loop's microtask queue instead of being executed immediately.
For example:
console.log('Start');
const promise = new Promise((resolve, reject) => {
    setTimeout(() => {
        resolve('Promise resolved');
    }, 2000);
});
promise.then(result => {
    console.log(result);
});
console.log('End');
The output will be:
Start
End
(wait for 2 seconds)
Promise resolved
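The chaining mentioned in point 2 above looks like this, reusing the delay() function from Appendix 1.3:
// Chaining: each .then() returns a new Promise, so results flow down the chain.
delay(1000)
    .then(message => {
        console.log(message); // "Waited for 1000 milliseconds!"
        return delay(500);    // returning a Promise makes the next .then() wait for it
    })
    .then(message => {
        console.log(message); // "Waited for 500 milliseconds!", printed 0.5 s later
    })
    .catch(error => console.log('Error: ' + error));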
Appendix 1.6 - Fragment Shaders: Uniforms/Textures
https://webglfundamentals.org/webgl/lessons/zh_cn/webgl-fundamentals.html
Uniforms Global Variables
A uniform (global variable) keeps the same value throughout a single draw call. In the simple example below, an offset is added in the vertex shader via a uniform:
attribute vec4 a_position;
uniform vec4 u_offset;

void main() {
    gl_Position = a_position + u_offset;
}
Now we can offset all vertices by a fixed value. First, we find the address of the global variable during initialization.
var offsetLoc = gl.getUniformLocation(someProgram, "u_offset");
Then set the global variable before drawing
gl.uniform4fv(offsetLoc, [1, 0, 0, 0]); // Offset to the right by half the screen width
It is important to note that global variables belong to a single shader program. If multiple shaders have global variables with the same name, you need to find each global variable and set its own value.
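For example (programA and programB here are two hypothetical, already-linked WebGLPrograms that both declare u_offset):
// The "same" uniform must be located and set separately for each program.
const offsetLocA = gl.getUniformLocation(programA, 'u_offset');
const offsetLocB = gl.getUniformLocation(programB, 'u_offset');

gl.useProgram(programA);
gl.uniform4fv(offsetLocA, [1, 0, 0, 0]); // gl.uniform* always targets the currently used program

gl.useProgram(programB);
gl.uniform4fv(offsetLocB, [0, 1, 0, 0]);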
Textures
To get texture information in the shader, you can first create a sampler2D type global variable, and then use the GLSL method texture2D to extract information from the texture.
precision mediump float;
uniform sampler2D u_texture;
void main() {
vec2 texcoord = vec2(0.5, 0.5); // Get the value of the texture center
gl_FragColor = texture2D(u_texture, texcoord);
}
The data obtained from the texture depends on many settings. At a minimum, you need to create a texture and fill it with data, for example:
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
var level = 0;
var width = 2;
var height = 1;
var data = new Uint8Array([
255, 0, 0, 255, // A red pixel
0, 255, 0, 255, // A green pixel
]);
gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, data);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
Find the address of a global variable at initialization time
var someSamplerLoc = gl.getUniformLocation(someProgram, "u_texture");
WebGL requires that textures must be bound to a texture unit when rendering.
var unit = 5; // Pick a texture unit
gl.activeTexture(gl.TEXTURE0 + unit);
gl.bindTexture(gl.TEXTURE_2D, tex);
Then tell the shader which texture unit you want to use.
gl.uniform1i(someSamplerLoc, unit);
References
- GAMES202
- Real-Time Rendering 4th Edition
- https://webglfundamentals.org/webgl/lessons/webgl-shaders-and-glsl.html