Since I am also a newcomer to this, I can't guarantee that everything here is correct; corrections from more experienced readers are welcome.
Zhihu's formula rendering is a bit ugly; you can read this on GitHub instead.
Project source code:
https://github.com/Remyuu/GAMES202-Homework
Precomputed spherical harmonic coefficients
The spherical harmonic coefficients are precomputed using the nori framework.
Ambient lighting: Calculate the spherical harmonic coefficients for each pixel of the cubemap
ProjEnv::PrecomputeCubemapSH(images, width, height, channel); uses the Riemann-sum method to compute the spherical harmonic coefficients of the ambient lighting.
Complete code
// TODO: here you need to compute light sh of each face of cubemap of each pixel
Eigen::Vector3f dir = cubemapDirs[i * width * height + y * width + x];
int index = (y * width + x) * channel;
Eigen::Array3f Le(images[i][index + 0], images[i][index + 1],
                  images[i][index + 2]);
// Describe the current direction in spherical coordinates
double theta = acos(dir.z());
double phi = atan2(dir.y(), dir.x());
// Traverse each basis function of the spherical harmonics
for (int l = 0; l <= SHOrder; l++) {
    for (int m = -l; m <= l; m++) {
        float sh = sh::EvalSH(l, m, phi, theta);
        float delta = CalcArea((float)x, (float)y, width, height);
        SHCoeffiecents[l * (l + 1) + m] += Le * sh * delta;
    }
}
Analysis
Spherical harmonic coefficients are the projections of a spherical function onto the SH basis, and can be used to represent the function's distribution over the sphere. Since we have three RGB channels, each spherical harmonic coefficient is stored as a three-dimensional vector. The part we need to complete:
/// prt.cpp - PrecomputeCubemapSH()
// TODO: here you need to compute light sh of each face of cubemap of each pixel
Eigen::Vector3f dir = cubemapDirs[i * width * height + y * width + x];
int index = (y * width + x) * channel;
Eigen::Array3f Le(images[i][index + 0], images[i][index + 1],
                  images[i][index + 2]);
First, for each pixel of the six cubemap faces (the images array), we take the direction from the center to the pixel (a 3D vector) and convert it to spherical coordinates (theta and phi).
Then each pair of spherical coordinates is passed to sh::EvalSH() to evaluate the real value sh of each spherical harmonic basis function, and CalcArea() computes the proportion delta of the sphere's solid angle occupied by that pixel.
Finally, we accumulate the spherical harmonic coefficients. Summing over all pixels of the cubemap approximates the integral that defines the coefficients.
$$
c_{lm} = \int_{\phi=0}^{2\pi} \int_{\theta=0}^{\pi} f(\theta, \phi)\, Y_{lm}(\theta, \phi)\, \sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi
$$
where:
- $\theta$ is the zenith angle, ranging from 0 to $\pi$; $\phi$ is the azimuth angle, ranging from 0 to $2\pi$.
- $f(\theta, \phi)$ is the value of the function at a point on the sphere.
- $Y_{lm}$ is a spherical harmonic basis function, built from the associated Legendre polynomials $P_{lm}$ and trigonometric functions.
- $l$ is the band (order) of the spherical harmonics; $m$ is the index within a band, ranging from $-l$ to $l$.
To make this concrete, here is the discrete form that the code estimates, i.e., the Riemann-sum approximation:
$$
c_{lm} \approx \sum_{i=1}^{N} f(\theta_i, \phi_i)\, Y_{lm}(\theta_i, \phi_i)\, \Delta\omega_i
$$
where:
- $f(\theta_i, \phi_i)$ is the value of the function at a point on the sphere.
- $Y_{lm}(\theta_i, \phi_i)$ is the value of the spherical harmonic at that point.
- $\Delta\omega_i$ is the small solid angle (weight) of that point on the sphere; a per-texel formula is sketched right after this list.
- $N$ is the total number of discrete points.
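In the code, CalcArea supplies this $\Delta\omega_i$ for each cubemap texel. As a sketch (this is the standard differential solid angle of a cube-face texel; the framework's CalcArea may integrate it more exactly), for a face parameterized by $(u, v) \in [-1, 1]^2$ at unit distance from the center:
$$
\Delta\omega \approx \frac{\Delta u\, \Delta v}{\left(u^2 + v^2 + 1\right)^{3/2}}, \qquad \Delta u = \frac{2}{\mathrm{width}}, \quad \Delta v = \frac{2}{\mathrm{height}}
$$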
Code Details
- Get RGB lighting information from cubemap
Eigen::Array3f Le(images[i][index + 0], images[i][index + 1],
images[i][index + 2]);
The value of channel is 3, corresponding to the three RGB channels. Therefore index points to a pixel's red channel, index + 1 to its green channel, and index + 2 to its blue channel.
- Convert direction vector to spherical coordinates
double theta = acos(dir.z());
double phi = atan2(dir.y(), dir.x());
theta is the angle between dir and the positive z-axis, and phi is the angle from the positive x-axis to the projection of dir onto the xy plane.
- Traversing the basis functions of spherical harmonics
for (int l = 0; l <= SHOrder; l++) {
    for (int m = -l; m <= l; m++) {
        float sh = sh::EvalSH(l, m, phi, theta);
        float delta = CalcArea((float)x, (float)y, width, height);
        SHCoeffiecents[l * (l + 1) + m] += Le * sh * delta;
    }
}
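The flat array index used by SHCoeffiecents packs the $(l, m)$ pairs of all bands contiguously:
$$
\mathrm{index}(l, m) = l(l+1) + m = l^2 + l + m
$$
so band $l$ occupies slots $l^2$ through $l^2 + 2l$; for example, $(l, m) = (2, -1)$ lands at index 5.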
Unshadowed diffuse term
scene->getIntegrator()->preprocess(scene); computes the Diffuse Unshadowed case. We simplify the rendering equation and substitute in the spherical harmonics from the previous section to compute the SH projection coefficients of the transfer term. The key function is ProjectFunction; we need to write a lambda expression for it that evaluates the transfer function term.
Analysis
For the diffuse transfer term there are three cases to consider: unshadowed, shadowed, and inter-reflected.
Let's first consider the simplest case, without shadows. We have the rendering equation (reflection only):
$$
L_o(x, \omega_o) = \int_{\Omega} L_i(x, \omega_i)\, f_r(x, \omega_i, \omega_o)\, \max(0, \mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
$$
where:
- $L_i$ is the incident radiance.
- $f_r$ is the BRDF, which describes how the surface's microscopic properties respond to the incident light direction.
- $\omega_i$ is the incident light direction.
For a diffuse surface that reflects equally in all directions, $f_r = \rho / \pi$ is a constant, and we can simplify to the Unshadowed Lighting equation:
$$
L_{DU}(x) = \frac{\rho}{\pi} \int_{\Omega} L_i(x, \omega_i)\, \max(0, \mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
$$
where:
- $L_{DU}$ is the diffuse unshadowed outgoing radiance of the point, and $\rho$ is the albedo.
- $\mathbf{n}$ is the surface normal.
The incident radiance and transfer function terms are independent of each other, as the former represents the contribution of the light sources in the scene, and the latter represents how the surface responds to the incident light. Therefore, these two components are treated independently.
Specifically, when using the spherical harmonic approximation we expand the two terms separately: the input of the former is the incident light direction, the input of the latter is the reflection (outgoing) direction, and each expansion is a series of coefficients, so the precomputed results are effectively a look-up table (LUT).
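The payoff of this double expansion (a standard PRT identity, stated here for clarity) is that the integral of the product of the two functions collapses to a dot product of their SH coefficient vectors:
$$
L_o = \int_{\Omega} L(\omega_i)\, T(\omega_i)\, \mathrm{d}\omega_i \approx \sum_{l=0}^{n} \sum_{m=-l}^{l} l_{lm}\, t_{lm}
$$
where $l_{lm}$ are the lighting coefficients and $t_{lm}$ the transfer coefficients; this is exactly the dot product the vertex shader computes later.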
auto shCoeff = sh::ProjectFunction(SHOrder, shFunc, m_SampleCount);
The most important piece is the ProjectFunction call above. We need to write a lambda expression (shFunc) and pass it to this function; the lambda evaluates the transfer function term.
ProjectFunction's parameters:
- the spherical harmonic order;
- the function to be projected onto the basis functions (which we write);
- the number of samples.
The function takes the value returned by the lambda at each sample, projects it onto the basis functions to get per-sample coefficients, then sums them and multiplies by the Monte Carlo weight to obtain the vertex's final coefficients.
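For intuition, here is a minimal sketch of what a Monte Carlo SH projection of this kind does internally. This is not the actual sh::ProjectFunction from the library the framework uses; the names (ProjectFunctionSketch, EvalSHBasis), the uniform-sphere sampling scheme, and the restriction to the first two bands are illustrative assumptions.

#include <cmath>
#include <functional>
#include <random>
#include <vector>

// Hypothetical stand-in for sh::EvalSH: the real SH basis for the
// first two bands (l <= 1), which is enough for this sketch.
double EvalSHBasis(int l, int m, double phi, double theta) {
    const double x = std::sin(theta) * std::cos(phi);
    const double y = std::sin(theta) * std::sin(phi);
    const double z = std::cos(theta);
    if (l == 0)            return 0.282095;     // Y_0^0
    if (l == 1 && m == -1) return 0.488603 * y; // Y_1^{-1}
    if (l == 1 && m == 0)  return 0.488603 * z; // Y_1^0
    if (l == 1 && m == 1)  return 0.488603 * x; // Y_1^1
    return 0.0; // higher bands omitted in this sketch
}

std::vector<double> ProjectFunctionSketch(
    int order,                                      // SH order (order 1 -> 4 coeffs)
    const std::function<double(double, double)>& f, // f(phi, theta) to project
    int sampleCount) {
    std::vector<double> coeffs((order + 1) * (order + 1), 0.0);
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> rng(0.0, 1.0);
    for (int s = 0; s < sampleCount; ++s) {
        // Uniform sampling on the sphere.
        double phi = 2.0 * M_PI * rng(gen);
        double theta = std::acos(2.0 * rng(gen) - 1.0);
        double value = f(phi, theta);
        // Project the sample onto every basis function.
        for (int l = 0; l <= order; ++l)
            for (int m = -l; m <= l; ++m)
                coeffs[l * (l + 1) + m] += value * EvalSHBasis(l, m, phi, theta);
    }
    // Monte Carlo weight: sphere area (4*pi) divided by the sample count.
    for (double& c : coeffs) c *= 4.0 * M_PI / sampleCount;
    return coeffs;
}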
Complete code
Compute the geometric terms, i.e. the transfer function terms.
//prt.cpp
...
double H = wi.normalized().dot(n.normalized()) / M_PI;
if (m_Type == Type::Unshadowed) {
    // TODO: here you need to calculate unshadowed transport term of a given direction
    return (H > 0.0) ? H : 0.0;
}
In short, remember the division by $\pi$ (folded into H above via M_PI), and pass the result to m_TransportSHCoeffs.
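Written out, the quantity the lambda returns is the integrand of the unshadowed transfer term, so the coefficients being accumulated are:
$$
t_{lm} = \int_{\Omega} \frac{\max(0, \mathbf{n} \cdot \omega_i)}{\pi}\, Y_{lm}(\omega_i)\, \mathrm{d}\omega_i
$$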
Shadowed Diffuse Term
scene->getIntegrator()->preprocess(scene); also computes the Diffuse Shadowed case. This term has an additional visibility factor.
Analysis
The visibility term $V(\omega_i)$ is either 1 or 0. The bool rayIntersect(const Ray3f &ray) function casts a ray from the vertex position in the sampled direction: if it hits an object, the direction is considered blocked (in shadow) and the term is 0; if the ray hits nothing, the term is 1.
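In equation form, the only change from the unshadowed case is the visibility factor inside the integral:
$$
t_{lm} = \int_{\Omega} \frac{V(\omega_i)\, \max(0, \mathbf{n} \cdot \omega_i)}{\pi}\, Y_{lm}(\omega_i)\, \mathrm{d}\omega_i
$$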
Complete code
//prt.cpp
...
double H = wi.normalized().dot(n.normalized()) / M_PI;
...
else {
    // TODO: here you need to calculate shadowed transport term of a given direction
    if (H > 0.0 && !scene->rayIntersect(Ray3f(v, wi.normalized())))
        return H;
    return 0.0;
}
Again, remember the division by $\pi$, then pass the result to m_TransportSHCoeffs.
Export calculation results
The nori framework will generate two pre-calculated result files.
Add run parameters:
./scenes/prt.xml
In prt.xml you need to make the following modifications; here you can choose which ambient-light cubemap to render. The model, camera parameters, and so on can also be changed as you like.
//prt.xml
<!-- Render the visible surface normals -->
<integrator type="prt">
<string name="type" value="unshadowed" />
<integer name="bounce" value="1" />
<integer name="PRTSampleCount" value="100" />
<!-- <string name="cubemap" value="cubemap/GraceCathedral" />-->
<!-- <string name="cubemap" value="cubemap/Indoor" />-->
<!-- <string name="cubemap" value="cubemap/Skybox" />-->
<string name="cubemap" value="cubemap/CornellBox" />
</integrator>
The available tag values are:
- type: unshadowed, shadowed, interreflection
- bounce: The number of light bounces under the interreflection type (not yet implemented)
- PRTSampleCount: The number of samples per vertex of the transmission item
- cubemap: cubemap/GraceCathedral, cubemap/Indoor, cubemap/Skybox, cubemap/CornellBox
The pictures above show the unshadowed rendering results of GraceCathedral, Indoor, Skybox, and CornellBox, with a sample count of 1.
Shading with spherical harmonics
Manually move the files generated by nori into the real-time rendering framework, then make a few changes to that framework.
After the calculation in the previous chapter is completed, copy the light.txt and transport.txt in the corresponding cubemap path to the cubemap folder of the real-time rendering framework.
Parsing the precomputed data
Uncomment lines 88-114 in engine.js; this code parses the txt files we just added.
// engine.js
// file parsing
... // Uncomment this code
Import the model / create and use the PRT material shader
Create the file PRTMaterial.js in the materials folder.
//PRTMaterial.js
class PRTMaterial extends Material {
    constructor(vertexShader, fragmentShader) {
        super({
            'uPrecomputeL[0]': { type: 'precomputeL', value: null },
            'uPrecomputeL[1]': { type: 'precomputeL', value: null },
            'uPrecomputeL[2]': { type: 'precomputeL', value: null },
        },
        ['aPrecomputeLT'],
        vertexShader, fragmentShader, null);
    }
}

async function buildPRTMaterial(vertexPath, fragmentPath) {
    let vertexShader = await getShaderString(vertexPath);
    let fragmentShader = await getShaderString(fragmentPath);
    return new PRTMaterial(vertexShader, fragmentShader);
}
Then import it in index.html.
// index.html
<script src="src/materials/Material.js" defer></script>
<script src="src/materials/ShadowMaterial.js" defer></script>
<script src="src/materials/PhongMaterial.js" defer></script>
<!-- Edit Start --><script src="src/materials/PRTMaterial.js" defer></script><!-- Edit End -->
<script src="src/materials/SkyBoxMaterial.js" defer></script>
Load the new material in loadOBJ.js.
// loadOBJ.js
switch (objMaterial) {
    case 'PhongMaterial':
        material = buildPhongMaterial(colorMap, mat.specular.toArray(), light, Translation, Scale, "./src/shaders/phongShader/phongVertex.glsl", "./src/shaders/phongShader/phongFragment.glsl");
        shadowMaterial = buildShadowMaterial(light, Translation, Scale, "./src/shaders/shadowShader/shadowVertex.glsl", "./src/shaders/shadowShader/shadowFragment.glsl");
        break;
    // TODO: Add your PRTmaterial here
    // Edit Start
    case 'PRTMaterial':
        material = buildPRTMaterial("./src/shaders/prtShader/prtVertex.glsl", "./src/shaders/prtShader/prtFragment.glsl");
        break;
    // Edit End
    // ...
}
Add the Mary model to the scene, set its position and scale, and use the material we just created.
//engine.js
// Add shapes
...
// Edit Start
let maryTransform = setTransform(0, -35, 0, 20, 20, 20);
// Edit End
...
// TODO: load model - Add your Material here
...
// Edit Start
loadOBJ(Renderer, 'assets/mary/', 'mary', 'PRTMaterial', maryTransform);
// Edit End
Compute shading
Load precomputed data into the GPU.
In the camera pass of the render loop, set the material's precomputeL value in real time, i.e., upload the precomputed data to the shader. The code below runs for every uniform of every mesh, in every camera pass, every frame. The framework has already parsed the precomputed data and stored it in the uniforms. precomputeL holds 9×3 values: the nine coefficients of the first three SH bands (3² = 9) for each of the three RGB channels, written in code as an array of length 9. For convenience, we convert precomputeL into three 3×3 matrices (one per channel) with the getMat3ValueFromRGB function in tools.js.
Through the uniformMatrix3fv function we upload the values stored in the material to the GPU; the function takes three parameters (see the WebGL documentation for uniformMatrix). The uniforms declared in our PRTMaterial are uPrecomputeL[0], uPrecomputeL[1], and uPrecomputeL[2]. We don't need to worry about the GPU side: holding the uniform location on the CPU is enough to address the corresponding storage on the GPU through the API. In other words, when you get the location of a uniform or attribute, what you actually get is a CPU-side reference that is mapped, underneath, to a specific location on the GPU. Linking the uniforms happens in this.program = this.addShaderLocations() in Shader.js (readable, if a bit involved; I also walked through it in my HW1 article). shader.program has three attributes: glShaderProgram, uniforms, and attribs. The actual declarations live in the .glsl shader files, which we write in the next step.
To summarize, the following code mainly provides pre-processed data to the fragment shader.
// WebGLRenderer.js
if (k == 'uMoveWithCamera') { // the rotation of the skybox
    gl.uniformMatrix4fv(
        this.meshes[i].shader.program.uniforms[k],
        false,
        cameraModelMatrix);
}

// Bonus - Fast Spherical Harmonic Rotation
// let precomputeL_RGBMat3 = getRotationPrecomputeL(precomputeL[guiParams.envmapId], cameraModelMatrix);

// Edit Start
let Mat3Value = getMat3ValueFromRGB(precomputeL[guiParams.envmapId]);
if (/^uPrecomputeL\[\d\]$/.test(k)) {
    let index = parseInt(k.split('[')[1].split(']')[0]);
    if (index >= 0 && index < 3) {
        gl.uniformMatrix3fv(
            this.meshes[i].shader.program.uniforms[k],
            false,
            Mat3Value[index]);
    }
}
// Edit End
You can also move the computation of Mat3Value outside the loop over i to reduce redundant work.
Writing the vertex shader
After understanding the code above, the next task is clear. In the previous step we passed the spherical harmonic lighting coefficients to uPrecomputeL[] on the GPU. Next we program the GPU to compute the dot product of the lighting coefficients and the transfer coefficients, i.e., light_coefficient * transport_matrix.
The framework has already reduced the Light_Transport data to per-vertex coefficient vectors, so we only need to dot the length-9 vectors for the three color channels. It is worth mentioning that PrecomputeL and PrecomputeLT could be consumed in either the vertex shader or the fragment shader. Doing the work in the vertex shader and letting the rasterizer interpolate the resulting color is faster but less accurate; where to compute depends on your needs.
//prtVertex.glsl
attribute vec3 aVertexPosition;
attribute vec3 aNormalPosition;
attribute mat3 aPrecomputeLT; // Precomputed Light Transfer matrix for the vertex

uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat3 uPrecomputeL[3]; // Precomputed Lighting matrices

varying highp vec3 vNormal;
varying highp vec3 vColor; // Outgoing color after the dot product calculations

float L_dot_LT(const mat3 PrecomputeL, const mat3 PrecomputeLT) {
    return dot(PrecomputeL[0], PrecomputeLT[0])
         + dot(PrecomputeL[1], PrecomputeLT[1])
         + dot(PrecomputeL[2], PrecomputeLT[2]);
}

void main(void) {
    // Prevent errors due to browser optimization; no practical effect
    aNormalPosition;
    for (int i = 0; i < 3; i++) {
        vColor[i] = L_dot_LT(aPrecomputeLT, uPrecomputeL[i]);
    }
    gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix * vec4(aVertexPosition, 1.0);
}
Also worth mentioning: the rendering framework sets a value for the aNormalPosition attribute. If it is never used in the shader, WebGL optimizes it away and the browser keeps reporting errors, hence the dummy reference above.
Writing the fragment shader
After the vertex shader computes the color of each vertex, the rasterizer interpolates it: the per-vertex vColor is automatically interpolated across the triangle before reaching the fragment shader, so it can be used directly.
//prtFragment.glsl
#ifdef GL_ES
precision mediump float;
#endif
varying highp vec3 vColor;
void main() {
    gl_FragColor = vec4(vColor, 1.0);
}
Exposure and color correction
Although the framework author mentioned that the results saved by the PRT precomputation are in linear space and need no gamma correction, the final result is clearly off. If you do not divide by $\pi$ when computing the coefficients, then taking the Skybox scene as an example, the image is overexposed. If you do divide by $\pi$ but skip color correction, the real-time result is too dark.
So: first divide by $\pi$ when computing the coefficients, then apply color correction. How? We can refer to the toSRGB() function used during export in the nori framework:
// common.cpp
Color3f Color3f::toSRGB() const {
    Color3f result;
    for (int i = 0; i < 3; ++i) {
        float value = coeff(i);
        if (value <= 0.0031308f)
            result[i] = 12.92f * value;
        else
            result[i] = (1.0f + 0.055f)
                * std::pow(value, 1.0f / 2.4f) - 0.055f;
    }
    return result;
}
We can imitate this and do the color correction in the fragment shader.
//prtFragment.glsl
#ifdef GL_ES
precision mediump float;
#endif
varying highp vec3 vColor;
vec3 toneMapping(vec3 color) {
    vec3 result;
    for (int i = 0; i < 3; ++i) {
        if (color[i] <= 0.0031308)
            result[i] = 12.92 * color[i];
        else
            result[i] = (1.0 + 0.055) * pow(color[i], 1.0 / 2.4) - 0.055;
    }
    return result;
}

void main() {
    vec3 color = toneMapping(vColor);
    gl_FragColor = vec4(color, 1.0);
}
This makes the real-time framework's rendering consistent with the screenshots from the nori framework.
We can also apply other corrections. Below are several common tone mapping curves for compressing the HDR range into LDR.
// Note: as written, linearToneMapping and reinhardToneMapping are identical.
vec3 linearToneMapping(vec3 color) {
    return color / (color + vec3(1.0));
}

vec3 reinhardToneMapping(vec3 color) {
    return color / (vec3(1.0) + color);
}

vec3 exposureToneMapping(vec3 color, float exposure) {
    return vec3(1.0) - exp(-color * exposure);
}

vec3 filmicToneMapping(vec3 color) {
    color = max(vec3(0.0), color - vec3(0.004));
    color = (color * (6.2 * color + 0.5)) / (color * (6.2 * color + 1.7) + 0.06);
    return color;
}
With that, the basic part of the assignment is complete.
Add CornellBox scene
There is no CornellBox in the default framework code, but there is one in the resource file, so we need to add it ourselves:
// engine.js
var envmap = [
'assets/cubemap/GraceCathedral',
'assets/cubemap/Indoor',
'assets/cubemap/Skybox',
// Edit Start
'assets/cubemap/CornellBox',
// Edit End
];
// engine.js
function createGUI() {
const gui = new dat.gui.GUI();
const panelModel = gui.addFolder('Switch Environment Map');
// Edit Start
panelModel.add(guiParams, 'envmapId', { 'GraceGathedral': 0, 'Indoor': 1, 'Skybox': 2, 'CornellBox': 3}).name('Envmap Name');
// Edit End
panelModel.open();
}
Results of the basic part
The four scenes are shown, each rendered with and without shadows.
Consider multiple bounces of the transport ray (bonus 1)
This is the first bonus. Computing light transport over multiple bounces is similar to ray tracing, and we can combine ray tracing with the spherical harmonic (SH) lighting approximation to account for these bounces.
Complete code
// TODO: leave for bonus
Eigen::MatrixXf m_IndirectCoeffs = Eigen::MatrixXf::Zero(SHCoeffLength, mesh->getVertexCount());
int sample_side = static_cast<int>(floor(sqrt(m_SampleCount)));
std::random_device rd;
std::mt19937 gen(rd());
std::uniform_real_distribution<> rng(0.0, 1.0);
const double twoPi = 2.0 * M_PI;
for (int bo = 0; bo < m_Bounce; bo++)
{
    for (int i = 0; i < mesh->getVertexCount(); i++)
    {
        const Point3f &v = mesh->getVertexPositions().col(i);
        const Normal3f &n = mesh->getVertexNormals().col(i);
        std::vector<float> coeff(SHCoeffLength, 0.0f);
        for (int t = 0; t < sample_side; t++) {
            for (int p = 0; p < sample_side; p++) {
                double alpha = (t + rng(gen)) / sample_side;
                double beta = (p + rng(gen)) / sample_side;
                double phi = twoPi * beta;
                double theta = acos(2.0 * alpha - 1.0);
                Eigen::Array3d d = sh::ToVector(phi, theta);
                const Vector3f wi(d[0], d[1], d[2]);
                double H = wi.dot(n);
                if (H > 0.0) {
                    const auto ray = Ray3f(v, wi);
                    Intersection intersect;
                    bool is_inter = scene->rayIntersect(ray, intersect);
                    if (is_inter) {
                        for (int j = 0; j < SHCoeffLength; j++) {
                            const Vector3f coef3(
                                m_TransportSHCoeffs.col((int)intersect.tri_index[0]).coeffRef(j),
                                m_TransportSHCoeffs.col((int)intersect.tri_index[1]).coeffRef(j),
                                m_TransportSHCoeffs.col((int)intersect.tri_index[2]).coeffRef(j));
                            coeff[j] += intersect.bary.dot(coef3) / m_SampleCount;
                        }
                    }
                }
            }
        }
        for (int j = 0; j < SHCoeffLength; j++)
        {
            m_IndirectCoeffs.col(i).coeffRef(j) = coeff[j] - m_IndirectCoeffs.col(i).coeffRef(j);
        }
    }
    m_TransportSHCoeffs += m_IndirectCoeffs;
}
Analysis
On top of the shadowed direct lighting computation, we add the indirectly reflected light; the same steps apply to it. For the indirect lighting, spherical harmonics are again used to approximate the illumination carried by the reflected rays. If multiple bounces are considered, the computation recurses; the termination condition can be a recursion depth or the light intensity falling below a threshold. The formula is given after this paragraph.
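As a sketch of the quantity being accumulated (following the standard PRT interreflection formulation; the notation here is mine, not the framework's):
$$
L_{DI}(x) = \frac{\rho}{\pi} \int_{\Omega} \bar{L}(x', \omega_i)\, \big(1 - V(\omega_i)\big)\, \max(0, \mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
$$
where $x'$ is the point hit by the ray leaving $x$ in direction $\omega_i$, $\bar{L}(x', \omega_i)$ is the radiance already computed there in the previous bounce, and $(1 - V(\omega_i))$ selects exactly the blocked directions.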
The brief code and comments are as follows:
// TODO: leave for bonus
// First initialize the indirect spherical harmonic coefficients
Eigen::MatrixXf m_IndirectCoeffs = Eigen::MatrixXf::Zero(SHCoeffLength, mesh->getVertexCount());
// Side length of the sample grid = square root of the sample count,
// so that we can do two-dimensional sampling below
int sample_side = static_cast<int>(floor(sqrt(m_SampleCount)));
// Generate random numbers in the range [0, 1]
...
std::uniform_real_distribution<> rng(0.0, 1.0);
// Define the constant 2*pi
...
// Loop over the bounces (m_Bounce times)
for (int bo = 0; bo < m_Bounce; bo++) {
    // For each vertex:
    // - get its position v and normal n
    // - rng() draws a random 2D sample (alpha, beta) that is mapped to a direction wi
    // - if wi is on the same side as the vertex normal, then:
    //   - shoot a ray from the vertex and check whether it intersects the scene
    //   - if it hits something, use the intersection and the existing SH coefficients
    //     to accumulate the indirectly reflected light at this vertex
    for (int i = 0; i < mesh->getVertexCount(); i++) {
        const Point3f &v = mesh->getVertexPositions().col(i);
        const Normal3f &n = mesh->getVertexNormals().col(i);
        ...
        for (int t = 0; t < sample_side; t++) {
            for (int p = 0; p < sample_side; p++) {
                ...
                double H = wi.dot(n);
                if (H > 0.0) {
                    // This is the (1 - V(wi)) factor: if the ray hits nothing,
                    // this iteration accumulates nothing.
                    bool is_inter = scene->rayIntersect(ray, intersect);
                    if (is_inter) {
                        for (int j = 0; j < SHCoeffLength; j++) {
                            ...
                            coeff[j] += intersect.bary.dot(coef3) / m_SampleCount;
                        }
                    }
                }
            }
        }
        // Update the vertex's indirect SH coefficients from the accumulated reflections
        for (int j = 0; j < SHCoeffLength; j++) {
            m_IndirectCoeffs.col(i).coeffRef(j) = coeff[j] - m_IndirectCoeffs.col(i).coeffRef(j);
        }
    }
    m_TransportSHCoeffs += m_IndirectCoeffs;
}
In the previous steps we only computed spherical harmonic coefficients at each vertex, never inside triangles. With multiple bounces, however, a ray leaving a vertex toward the upper hemisphere generally hits a point in a triangle's interior, so we obtain the hit point's transfer coefficients by barycentric interpolation of the triangle's vertex coefficients; that is what intersect.bary is for.
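Concretely, if the hit triangle has vertices $v_0, v_1, v_2$ and intersect.bary holds the barycentric coordinates $(\alpha, \beta, \gamma)$ of the hit point $x'$, the interpolated transfer coefficient is:
$$
t_{lm}(x') \approx \alpha\, t_{lm}(v_0) + \beta\, t_{lm}(v_1) + \gamma\, t_{lm}(v_2), \qquad \alpha + \beta + \gamma = 1
$$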
Result
Looking closely, there is not much difference overall, except that the shadowed areas are brighter.
Ambient lighting spherical harmonics rotation (bonus 2)
This is the second bonus: rotating low-order SH environment lighting using the fast low-order SH rotation method.
Code
First, make the skybox rotate; [0, 1, 0] means rotation around the y-axis. Then compute the rotated spherical harmonic coefficients with the getRotationPrecomputeL function, and finally apply the result to Mat3Value.
// WebGLRenderer.js
let cameraModelMatrix = mat4.create();
// Edit Start
mat4.fromRotation(cameraModelMatrix, timer, [0, 1, 0]);
// Edit End
if (k == 'uMoveWithCamera') { // the rotation of the skybox
    gl.uniformMatrix4fv(
        this.meshes[i].shader.program.uniforms[k],
        false,
        cameraModelMatrix);
}

// Bonus - Fast Spherical Harmonic Rotation
// Edit Start
let precomputeL_RGBMat3 = getRotationPrecomputeL(precomputeL[guiParams.envmapId], cameraModelMatrix);
Mat3Value = getMat3ValueFromRGB(precomputeL_RGBMat3);
// Edit End
Next, jump to tools.js and write the getRotationPrecomputeL function.
// tools.js
function getRotationPrecomputeL(precompute_L, rotationMatrix){
    let rotationMatrix_inverse = mat4.create()
    mat4.invert(rotationMatrix_inverse, rotationMatrix)
    let r = mat4Matrix2mathMatrix(rotationMatrix_inverse)

    let shRotateMatrix3x3 = computeSquareMatrix_3by3(r);
    let shRotateMatrix5x5 = computeSquareMatrix_5by5(r);

    let Result = [];
    for(let i = 0; i < 9; i++){
        Result[i] = [];
    }
    for(let i = 0; i < 3; i++){
        let L_SH_R_3 = math.multiply([precompute_L[1][i], precompute_L[2][i], precompute_L[3][i]], shRotateMatrix3x3);
        let L_SH_R_5 = math.multiply([precompute_L[4][i], precompute_L[5][i], precompute_L[6][i], precompute_L[7][i], precompute_L[8][i]], shRotateMatrix5x5);

        Result[0][i] = precompute_L[0][i];
        Result[1][i] = L_SH_R_3._data[0];
        Result[2][i] = L_SH_R_3._data[1];
        Result[3][i] = L_SH_R_3._data[2];
        Result[4][i] = L_SH_R_5._data[0];
        Result[5][i] = L_SH_R_5._data[1];
        Result[6][i] = L_SH_R_5._data[2];
        Result[7][i] = L_SH_R_5._data[3];
        Result[8][i] = L_SH_R_5._data[4];
    }
    return Result;
}
function computeSquareMatrix_3by3(rotationMatrix){ // Compute the square matrix S·A^(-1), 3x3
    // 1. pick ni - {ni}
    let n1 = [1, 0, 0, 0]; let n2 = [0, 0, 1, 0]; let n3 = [0, 1, 0, 0];

    // 2. {P(ni)} - A, A_inverse
    let n1_sh = SHEval(n1[0], n1[1], n1[2], 3)
    let n2_sh = SHEval(n2[0], n2[1], n2[2], 3)
    let n3_sh = SHEval(n3[0], n3[1], n3[2], 3)
    let A = math.matrix(
        [
            [n1_sh[1], n2_sh[1], n3_sh[1]],
            [n1_sh[2], n2_sh[2], n3_sh[2]],
            [n1_sh[3], n2_sh[3], n3_sh[3]],
        ]);
    let A_inverse = math.inv(A);

    // 3. Use R to rotate ni - {R(ni)}
    let n1_r = math.multiply(rotationMatrix, n1);
    let n2_r = math.multiply(rotationMatrix, n2);
    let n3_r = math.multiply(rotationMatrix, n3);

    // 4. R(ni) SH projection - S
    let n1_r_sh = SHEval(n1_r[0], n1_r[1], n1_r[2], 3)
    let n2_r_sh = SHEval(n2_r[0], n2_r[1], n2_r[2], 3)
    let n3_r_sh = SHEval(n3_r[0], n3_r[1], n3_r[2], 3)
    let S = math.matrix(
        [
            [n1_r_sh[1], n2_r_sh[1], n3_r_sh[1]],
            [n1_r_sh[2], n2_r_sh[2], n3_r_sh[2]],
            [n1_r_sh[3], n2_r_sh[3], n3_r_sh[3]],
        ]);

    // 5. S * A_inverse
    return math.multiply(S, A_inverse)
}
function computeSquareMatrix_5by5(rotationMatrix){ // Compute the square matrix S·A^(-1), 5x5
    // 1. pick ni - {ni}
    let k = 1 / math.sqrt(2);
    let n1 = [1, 0, 0, 0]; let n2 = [0, 0, 1, 0]; let n3 = [k, k, 0, 0];
    let n4 = [k, 0, k, 0]; let n5 = [0, k, k, 0];

    // 2. {P(ni)} - A, A_inverse
    let n1_sh = SHEval(n1[0], n1[1], n1[2], 3)
    let n2_sh = SHEval(n2[0], n2[1], n2[2], 3)
    let n3_sh = SHEval(n3[0], n3[1], n3[2], 3)
    let n4_sh = SHEval(n4[0], n4[1], n4[2], 3)
    let n5_sh = SHEval(n5[0], n5[1], n5[2], 3)
    let A = math.matrix(
        [
            [n1_sh[4], n2_sh[4], n3_sh[4], n4_sh[4], n5_sh[4]],
            [n1_sh[5], n2_sh[5], n3_sh[5], n4_sh[5], n5_sh[5]],
            [n1_sh[6], n2_sh[6], n3_sh[6], n4_sh[6], n5_sh[6]],
            [n1_sh[7], n2_sh[7], n3_sh[7], n4_sh[7], n5_sh[7]],
            [n1_sh[8], n2_sh[8], n3_sh[8], n4_sh[8], n5_sh[8]],
        ]);
    let A_inverse = math.inv(A);

    // 3. Use R to rotate ni - {R(ni)}
    let n1_r = math.multiply(rotationMatrix, n1);
    let n2_r = math.multiply(rotationMatrix, n2);
    let n3_r = math.multiply(rotationMatrix, n3);
    let n4_r = math.multiply(rotationMatrix, n4);
    let n5_r = math.multiply(rotationMatrix, n5);

    // 4. R(ni) SH projection - S
    let n1_r_sh = SHEval(n1_r[0], n1_r[1], n1_r[2], 3)
    let n2_r_sh = SHEval(n2_r[0], n2_r[1], n2_r[2], 3)
    let n3_r_sh = SHEval(n3_r[0], n3_r[1], n3_r[2], 3)
    let n4_r_sh = SHEval(n4_r[0], n4_r[1], n4_r[2], 3)
    let n5_r_sh = SHEval(n5_r[0], n5_r[1], n5_r[2], 3)
    let S = math.matrix(
        [
            [n1_r_sh[4], n2_r_sh[4], n3_r_sh[4], n4_r_sh[4], n5_r_sh[4]],
            [n1_r_sh[5], n2_r_sh[5], n3_r_sh[5], n4_r_sh[5], n5_r_sh[5]],
            [n1_r_sh[6], n2_r_sh[6], n3_r_sh[6], n4_r_sh[6], n5_r_sh[6]],
            [n1_r_sh[7], n2_r_sh[7], n3_r_sh[7], n4_r_sh[7], n5_r_sh[7]],
            [n1_r_sh[8], n2_r_sh[8], n3_r_sh[8], n4_r_sh[8], n5_r_sh[8]],
        ]);

    // 5. S * A_inverse
    return math.multiply(S, A_inverse)
}
function mat4Matrix2mathMatrix(rotationMatrix){
    let mathMatrix = [];
    for(let i = 0; i < 4; i++){
        let r = [];
        for(let j = 0; j < 4; j++){
            r.push(rotationMatrix[i*4+j]);
        }
        mathMatrix.push(r);
    }
    // Edit Start
    //return math.matrix(mathMatrix)
    return math.transpose(mathMatrix)
    // Edit End
}
function getMat3ValueFromRGB(precomputeL){
    let colorMat3 = [];
    for(var i = 0; i < 3; i++){
        colorMat3[i] = mat3.fromValues(
            precomputeL[0][i], precomputeL[1][i], precomputeL[2][i],
            precomputeL[3][i], precomputeL[4][i], precomputeL[5][i],
            precomputeL[6][i], precomputeL[7][i], precomputeL[8][i]);
    }
    return colorMat3;
}
Result
An animated GIF of the result can be found here.
Principle
Two key properties
First, a brief explanation of the principle. Two properties of spherical harmonics are used here.
- Rotational invariance
If you rotate a function in 3D space, projecting the rotated function onto the spherical harmonics gives the same result as rotating the SH projection of the original function; nothing is lost under rotation, and coefficients do not leak between bands.
- Linearity of rotation
For each "layer" or "band" of spherical harmonics (that is, all spherical harmonics of a given order l), its SH coefficients can be rotated, and this rotation is linear. That is, the coefficients of a spherical harmonic expansion can be rotated by a matrix multiplication.
Overview of Wigner D Matrix Rotation Method
The rotation of spherical harmonics is a deep topic, so this is only a brief overview without the mathematical proofs. The method given in the homework framework is projection-based; this article first introduces the more exact method, the Wigner D-matrix. For more details, see: Notes on Spherical Harmonic Lighting (Rotation) – an article by NetEase Games' Leihuo studio on Zhihu. Anyway, I didn't fully understand it QAQ.
Since we currently use only the first three bands of spherical harmonics and band 0 has a single coefficient, we only need the rotation matrices for band 1 and band 2.
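Because rotation never mixes coefficients across bands, the full rotation of a 9-coefficient (third-order) SH vector is block diagonal; in the usual band ordering it has the shape:
$$
M(R) = \begin{pmatrix} 1 & & \\ & R_{3\times 3} & \\ & & R_{5\times 5} \end{pmatrix}
$$
where $R_{3\times 3}$ acts on the band-1 coefficients and $R_{5\times 5}$ on the band-2 coefficients; this is exactly why the code builds computeSquareMatrix_3by3 and computeSquareMatrix_5by5.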
The rotation of a spherical harmonic can be expressed as:
$$
Y_l^m(R\,\omega) = \sum_{m'=-l}^{l} D^l_{m m'}(R)\, Y_l^{m'}(\omega)
$$
where $D^l_{m m'}(R)$ is the rotation (Wigner D-) matrix element that tells us how to rotate the spherical harmonic coefficients from their original orientation to the new one.
Suppose there is a function $f$ that can be expanded into a linear combination of spherical harmonics:
$$
f(\omega) = \sum_{l} \sum_{m=-l}^{l} c_l^m\, Y_l^m(\omega)
$$
If we want to rotate this function, we do not rotate each spherical harmonic directly; instead, we rotate their coefficients. The new expansion coefficients are obtained from the original ones through the rotation matrix:
$$
\tilde{c}_l^m = \sum_{m'=-l}^{l} D^l_{m m'}(R)\, c_l^{m'}
$$
Now comes the crucial step: how do we calculate the rotation matrix $D^l(R)$?
In the homework framework we saw that band 1 requires a 3×3 matrix and band 2 a 5×5 matrix. In other words, a band of order $l$ has $2l+1$ valid values of $m$, each corresponding to one basis function in that band; this is a property of the associated Legendre equation.
Now, let's consider the effect of rotation.
When we rotate an environment light, we don't rotate the basis functions; we "rotate" all the coefficients, and rotating the coefficients of a specific band uses the Wigner D-matrix. When we talk about rotations, we usually mean rotations about some axes, defined by Euler angles. For each band of order $l$, we compute a square matrix with side length $2l+1$.
Once we have the rotation matrix for each band, the new coefficients after "rotation" are easy to compute:
$$
\tilde{c}_l = D^l(R)\, c_l
$$
where $c_l$ stacks the $2l+1$ coefficients of band $l$.
However, computing the entries of the Wigner D-matrix is somewhat involved, especially at higher orders. The assignment therefore suggests a projection-based method. Let's see how the two code snippets above implement it.
Projection approximation
First, for band $l$, choose $2l+1$ normal (direction) vectors $n_i$. The choice must guarantee linear independence, i.e., cover the sphere as evenly as possible (Fibonacci sphere sampling may be a good choice); otherwise the matrix built below is near-singular. Make sure the resulting matrix is full rank.
For each normal vector, project it onto the band's spherical harmonics (the SHEval function), which evaluates the SH basis in that direction. Each projection yields a $(2l+1)$-dimensional vector whose components are spherical harmonic coefficients.
Stacking the projection vectors $P(n_i)$ as columns gives the matrix $A$, from which we take the inverse $A^{-1}$:
$$
A = \begin{pmatrix} P(n_1) & P(n_2) & \cdots & P(n_{2l+1}) \end{pmatrix}
$$
Next, apply the rotation to each normal vector (pre-multiplication), giving $R(n_i)$.
Then project these rotated normal vectors onto the spherical harmonics again to obtain $P(R(n_i))$.
Stacking the rotated projections as columns gives the matrix $S$. The rotation matrix in SH coefficient space is then
$$
M(R) = S\, A^{-1},
$$
which rotates the spherical harmonic coefficients by a simple matrix multiplication.
Multiplying a band's original SH coefficient vector by $M(R)$ gives the rotated coefficients. Repeat for each band: to obtain the complete rotated SH vector, apply the corresponding matrix to every band.
Reference
- Games 202
- https://github.com/DrFlower/GAMES_101_202_Homework/tree/main/Homework_202/Assignment2