Wave Optics Rendering: Full Wave Reference Simulation - Study Notes - 4

Today, let's take a quick look at the SIGGRAPH 2023 (Sig23) paper on a full-wave reference simulator for computing surface reflectance.

Keywords: Introduction to graphics, wave optics rendering, BEM, AIM, BRDF
Wave Optics Rendering, Full-Wave Reference Simulator

Original: https://zhuanlan.zhihu.com/p/1471147574

Preface

Let me briefly summarize my current understanding.

Computer graphics based on wave optics is undoubtedly a huge challenge for graphics researchers. It requires not only computational treatment of electromagnetic waves but also touches on some quantum-mechanical theory, and bridging the gap between physics and graphics is not easy. In wave-optics-based graphics, light no longer simply travels in straight lines through a medium; depending on wavelength, it exhibits behaviors such as polarization rotation, deflection, and diffraction. From the iridescence of soap bubbles and oil films, polarizers, and the optical rotation of sugar solutions, to how radio radiation propagates between city buildings, none of these can be explained without wave optics.

There are also many experts on Zhihu who have shared very detailed basic teaching on wave optics rendering theory.

However, the reading threshold is high and the material is still far from practical application. Wave-optics rendering involves a great deal of mathematics and physics; even reading the papers of Professor Yan and his PhD students line by line, it is hard to make much progress.

The holy grail of graphics is ray tracing, and global wave optics rendering is the holy grail of graphics holy grails.

At SIGGRAPH 2021, Shlomi proposed combining path tracing with physical optics to achieve realistic simulation of electromagnetic radiation propagation. There are many methods for computing electromagnetic wave propagation, ranging from high-precision but computationally expensive wave solvers to fast but inaccurate geometric-optics methods. The figure below shows the current families of methods: the leftmost is the most accurate, the rightmost the fastest.

Wave solvers aim for exact solutions of Maxwell's equations, usually via FDTD, BEM, or FEM, but they are not practical for large scenes.

PO (physical optics) is based on a high-frequency approximation of the electromagnetic field, which is just barely adequate at visible-light frequencies. PS: the black dog hair of [Xia 2023] belongs to the physical-optics family. The full-wave reference in this article should still count as a wave solver, but it borrows some PO ideas (equivalent currents, etc.) on top of BEM.

Besides PO, there is another family between wave solvers and geometrical optics, called hybrid GO-PO; I personally think of it as a hybrid of geometric optics and physical optics. The Uniform Theory of Diffraction (UTD) incorporates diffraction into geometric optics to compute electromagnetic wave propagation under high-frequency conditions. In my view, UTD compensates for the shortcomings of geometric-optics rays at boundaries by computing diffraction coefficients, which effectively lets geometric rays bend. This is very practical in radar detection and antenna design. Besides UTD, hybrid GO-PO also involves a technique called Shooting and Bouncing Rays (SBR), which simulates multiple reflections of rays on an object's surface and is likewise based on geometric optics.

I personally think that understanding light as a plane sine wave is somewhat limited. For example, the 3b1b optics series video considers the electric field of light as a plane sine wave.

Although it can explain most phenomena and is very suitable for introductory learning, for the study of wave optics rendering, simply describing the electric field of light as a plane sine wave cannot further explain, for example, the Gaussian beam mentioned in this paper.

But I still strongly recommend readers who are not familiar with wave optics to watch this series of videos first.

The video shows the special wave optical phenomenon that light "rotates" when passing through a right-handed chiral medium, and popularizes a series of interesting phenomena in wave optics and the principles behind them.

Very thought-provoking.

Finally, it even talks about how to use matter waves to construct holographic images.

In classical electromagnetic field theory, we usually expand the electromagnetic field in plane waves. The photons of each mode are then spatially nonlocal, with infinite spatial extension: a photon in such a mode has a well-defined frequency and wave vector and therefore "fills" the entire region of the plane wave in space. This is perfectly natural for field modes in free space, but plane waves are not suitable for describing localization.

Another way to expand the field is in terms of local wave packets. This picture lets us arrange the electromagnetic field in space as a series of wave packets of finite width.

Simply treating light as a sine wave actually conflicts with the uncertainty principle: if light were a pure sine wave it would be perfectly monochromatic, and perfectly monochromatic light does not exist in reality. Optical signals are essentially analyzed in the frequency domain, which requires a Fourier transform from the time (or spatial) domain. The bandwidth-time uncertainty relation says that the narrower the bandwidth, the longer the signal lasts in time; conversely, the wider the bandwidth, the shorter the signal.
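As a quick numerical illustration of this time-bandwidth tradeoff (a minimal numpy sketch of my own, not related to the paper's code), the spectrum of a Gaussian pulse narrows as its duration grows:

```python
import numpy as np

# Minimal sketch: a Gaussian pulse of duration ~sigma_t has a spectral width ~1/sigma_t.
# All parameter names and values here are illustrative choices.
t = np.linspace(-50.0, 50.0, 4096)            # time axis (arbitrary units)
dt = t[1] - t[0]
freqs = np.fft.fftfreq(t.size, d=dt)

def spectral_width(sigma_t):
    pulse = np.exp(-t**2 / (2 * sigma_t**2))  # Gaussian envelope with duration ~sigma_t
    spectrum = np.abs(np.fft.fft(pulse))
    spectrum /= spectrum.sum()
    mean = np.sum(spectrum * freqs)
    return np.sqrt(np.sum(spectrum * freqs**2) - mean**2)   # RMS bandwidth

for sigma_t in [1.0, 2.0, 4.0, 8.0]:
    print(f"duration ~{sigma_t:4.1f}  ->  bandwidth ~{spectral_width(sigma_t):.4f}")
# Doubling the duration roughly halves the bandwidth.
```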

In classical electromagnetic theory, a wave packet can correspond to a region where energy is concentrated.

But note that a photon cannot simply be identified with a single wave packet; it is described by a probability wave. The probability wave function gives the probability of finding the photon, and the more concentrated the wave packet, the more pronounced the particle-like behavior. This is the wave-particle duality as explained by quantum electrodynamics. A photon is not necessarily confined to one wave packet; it can also be described as a superposition of multiple wave packets, because a photon state is essentially an excitation of the quantum field, which allows superpositions of wave packets at different locations.

That is to say, a photon can "span" multiple wave packets: its wave function can exist in space in the form of several wave packets rather than being confined to a specific location. When a wave-packet mode contains one photon, that mode is in its lowest excited state; if it contains several photons (higher-order excited states), it carries correspondingly more energy.

The Gaussian beam mentioned in this paper is a specific electromagnetic-wave solution and can be regarded as a wave packet: a single wave packet with stable amplitude and phase whose cross section has a Gaussian profile. This is different from the plane-wave solutions we usually use in classical electromagnetic theory, because the wavefront of a Gaussian beam has varying curvature and is not an infinitely extended plane.

An ensemble of waves is a collection of many waves that may differ in frequency, phase, or propagation direction. If we consider a system in which multiple independent wave packets (such as several pulsed laser beams) are superimposed, these wave packets can be understood as an ensemble of waves. In other words, if many mutually uncorrelated wave packets, especially ones with random phases, are superimposed, they statistically constitute a wave ensemble.

When we expand the electromagnetic field into a series of wave packets, each wave packet can be regarded as a random event whose arrival time and phase are random variables. For a collection of many wave packets we can observe their behavior at different times and locations and, in a statistical sense, use the ensemble mean to analyze the energy distribution and wave behavior of the light.


$$
\langle U(\vec{r}, t) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} U(\vec{r}, t) \, dt
$$

(Not being a physicist, I am not a fan of the convention of writing the differential before the integrand.)

As mentioned above, light can be viewed as a collection of wave packets. Since different wave packets may differ in phase and arrival time, we introduce the cross-correlation function and the interference (coherence) function to describe the coherence and relative phase between wave packets at two different locations, i.e., their similarity in time and space. If the ensemble mean describes the average behavior of a signal in a statistical sense, then the cross-correlation function describes, as a time average, the correlation of the fluctuations between different locations and different times.

Specifically, the cross-correlation function describes the coherence of the waves at positions $\vec{r}_1$ and $\vec{r}_2$. If the two waves are coherent, then the Gamma value will be large:


$$
\Gamma(\vec{r}_1, \vec{r}_2, \tau) = \langle U(\vec{r}_1, t) U^*(\vec{r}_2, t + \tau) \rangle
$$


The cross-spectral density (CSD) matrix $W(\vec{r}_1, \vec{r}_2, \omega)$ is the Fourier transform of the cross-correlation function; it expresses the correlation between the two positions in the frequency domain.

Remember the autocorrelation function we discussed in the article on [Xia 2023]'s black dog hair? The cross-correlation function involves two signals, while the autocorrelation function involves only one. Put differently, the autocorrelation function (ACF) is a special case of the cross-correlation function: the ACF describes the correlation of a signal with itself at different times or locations, while the cross-correlation function describes the correlation between two different signals (or the same signal at different locations). The cross-correlation function and the cross-spectral density form a Fourier-transform pair, and so do the autocorrelation function and the power spectral density.
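A tiny numerical check of that last statement (a hedged numpy sketch of my own, unrelated to the paper's code): for a discrete signal, the FFT of its circular autocorrelation equals its power spectrum (the Wiener-Khinchin theorem).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
# A noisy complex signal standing in for a field measured at one point over time.
x = (np.exp(1j * 2 * np.pi * 0.05 * np.arange(N))
     + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

# Circular autocorrelation r[k] = sum_n conj(x[n]) * x[n + k]  (indices mod N).
acf = np.array([np.vdot(x, np.roll(x, -k)) for k in range(N)])

psd_from_acf = np.fft.fft(acf)              # FFT of the autocorrelation ...
psd_direct = np.abs(np.fft.fft(x))**2       # ... equals the power spectrum.

print(np.allclose(psd_from_acf, psd_direct))   # True (up to floating-point error)
```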

The cross-spectral density (CSD) and the mutual coherence function describe the coherence of the fluctuations between different locations. The radiance cross-spectral density (RCSD) is a generalization of the CSD that describes the correlation between the light radiation (i.e., energy density) at different positions and directions; you can think of RCSD as providing a CSD-like coherence description in radiometric terms. In the formulas, $L(\vec{r}_1, \vec{r}_2, \omega)$ represents the radiance coherence between positions $\vec{r}_1$ and $\vec{r}_2$ at frequency $\omega$.

The radiance cross-spectral density transport equation (SDTE) is similar to the classic light transport equation (LTE), but is better suited to wave optics. The LTE describes the transport of radiance from point to point, while the SDTE describes the propagation of optical radiation in terms of the RCSD function, effectively treating transport as coherent transport between regions.

The RCSD in the SDTE expresses the propagation of radiation as integrals between regions, which can be understood as replacing traditional reflection and scattering with the RCSD matrix and diffraction operators. Note that the SDTE is formulated in terms of the RCSD function rather than specific intensity values, which is a major difference from the LTE.

Before discussing the full-wave reference simulator accelerated with the boundary element method (BEM) and the adaptive integral method (AIM), let's briefly review the previous article, which introduced the PO method and the SDTE/RCSD theory. These methods target different scattering-calculation needs and differ in their underlying theory and range of applicability. This article discusses a method that provides high-precision surface-scattering simulation by combining BEM and AIM.

To sum up in the original author's words: wave optics is a very young branch of physically based rendering. Although wave-optics phenomena can be seen everywhere in daily life, their impact on the rendered image is not huge, and there is still a lot of room for work in this direction.

Other related references:


A Full-Wave Reference Simulator for Computing Surface Reflectance

Paper Homepage:

https://blaire9989.github.io/assets/1_BEMsim3D/project.html

ACM Home Page:

https://dl.acm.org/doi/10.1145/3592414

ACM Citations:

Yunchen Yu, Mengqi Xia, Bruce Walter, Eric Michielssen, and Steve Marschner. 2023. A Full-Wave Reference Simulator for Computing Surface Reflectance. ACM Trans. Graph. 42, 4, Article 109 (August 2023), 17 pages. https://doi.org/10.1145/3592414

Talk Summary

Compared with the more commonly used ray-tracing techniques, wave-optics simulations are more time-consuming but also more accurate.

For example, microscopic scratches on metal surfaces and the fiber structure of hair cannot be captured by traditional ray-optics models, so the colorful iridescent effects we observe in real life cannot be rendered.

Wave optics-based rendering is a difficult problem because solving Maxwell's equations requires a lot of complex calculations. Existing wave-based appearance models usually adopt some approximation methods.

One approximation is the scalar field approximation.

In the wave scattering problem, the electric field and the magnetic field are different vector field quantities, and light with different polarizations consists of field quantities pointing in different directions.

Some approximate models replace these two vector field quantities with a single scalar function, thus being able to calculate the intensity of the light energy but giving up modeling polarization.

Another approximation is the first-order approximation. This assumes that the light reflects only once from each part of the model structure, ignoring multiple reflections. However, there are many cases where these approximations are not valid.

For example, Yu et al., in collaboration with Dr. Lawrence’s group at Penn State University, created surfaces with cylindrical cross-sections that cause multiple light reflections and produce structural colors, phenomena that cannot be well understood or predicted using approximate models.

The authors wanted to characterize surface scattering as accurately as possible by calculating the bidirectional reflectance distribution function (BRDF).

Existing models all use various approximations, such as ray-based, scalar, or first-order approximation models. Without a reference quality BRDF, it is difficult to see what each reflection model is missing or what scenarios it is suitable for.

The authors' solution is to build a 3D full-wave simulation to compute the BRDF of surfaces with well-defined microgeometry.

It claims to compute a reference-quality BRDF for surface samples without using any approximations.

As for speed... those who know, know.

The authors next describe how their simulation works and how it relates to the BRDF.

First, in the left picture, we input a surface sample (defined as a height field) and an incident direction (expressed on the projected hemisphere). We define an incident field propagating toward the surface and calculate a scattered field from the surface.

In the middle picture, the beam is the incident field, and the scattered field in the background is also shown in this picture.

The output is the BRDF model for a given incident direction. Each point in the hemisphere represents an outgoing direction, and the color represents the color of the reflected light in that direction. The BRDF is expressed in RGB colors, which are converted from spectral data.

For many surfaces with low roughness, the reflection pattern is symmetric around the specular direction and changes as the incident light direction moves.

The following is a discussion on how to use the boundary element method to solve the problem of signal scattering only on the surface, thereby reducing the dimensionality of the problem.

The surface signal is the surface current solved from Maxwell's equations. After discretization, the problem is constructed as a linear system, and the surface current and scattered field are solved.

To make the computation feasible, the authors symmetrize the linear system and use a minimum residual solver suitable for symmetric matrices.

In addition, the matrix-vector products are accelerated using the adaptive integral method (AIM), an FFT-based acceleration technique originally used in radar computations.

Most of the code is accelerated with CUDA C++.

Next, some results are shown, illustrating how the BRDF it computes compares to the BRDF derived from previous methods.

[Yan 2018] uses a scalar-field approximation BRDF model that considers only a single reflection.

[Xia 2023] uses vector fields but also considers only a single reflection.

The most accurate method is ours, which not only uses vector field quantities, but also takes into account reflections of all orders.

In the above figure, each incident direction corresponds to five BRDFs, representing different calculation methods.

A relatively smooth material covered with a bunch of isotropic bumps.

The first row shows normal incidence, and the reflection pattern is roughly centered.

In the second row, the pattern shifts to the left because the incident light comes from a certain oblique direction.

Since the surface is not too rough, the results of the five methods are very similar.

Another material has some corners (corner cubes). The three faces of each corner cube will reflect the light multiple times, making the reflected light return in the direction of incidence. This is called retroreflection.

Our simulator can also simulate this phenomenon. The four methods on the left all failed.

The reason is that if only one reflection is considered, when the light hits one face of the corner cubes, it will be predicted to go down into the lower hemisphere.

The final example is a surface covered with spherical pits.

Multiple reflections occur due to the high slopes of the surface at the edges of the pits.

Different methods show obvious differences.

You can see the extra predicted lobe (the part slightly right of center).

In addition, it is brighter overall.

These differences arise from interference between reflections of different orders.

Finally, a technique for efficiently computing the BRDF of a very large number of densely sampled directions is briefly introduced.

If the surface to be simulated is large, this will be slow and require a lot of GPU memory.

But the computation is linear, so a large surface can be decomposed into multiple smaller sub-areas.

The incident field is projected onto each sub-area, the smaller sub-area simulation is performed first, and then the scattered fields are integrated to obtain the BRDF of the entire large-area surface.

By applying different complex value scale factors to the scattered fields of different sub-areas, the BRDF of a large surface corresponding to different incident directions can be synthesized.

This is because applying an appropriate phase shift to the local incident field in each sub-region produces a total incident field with a different net direction on the surface. In this figure, the incident field propagates vertically. If the same field with five foci at different spatial locations is superimposed, a spatially wider field is obtained, still propagating in the vertical direction. If these fields are linearly combined and an appropriate complex-valued scaling factor is applied to the field in each sub-region, the overall field can be made to propagate in a slightly tilted direction.

Here we explain why different complex value scale factors can produce different incident directions.

  • This factor adjusts the amplitude and phase of each wave. For example, if two stones are thrown into the water at the same time, the ripples reinforce each other; if one stone is thrown slightly later, the ripples may cancel (destructive interference). The complex factor controls "when the stone is thrown". By adjusting phases to control how the waves superpose, the combined wave can be "steered" to propagate in different directions; look up "beamforming" for details. It is widely used in radar, wireless communication, sonar, and other fields (a small numerical sketch follows below).
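Here is a minimal, textbook phased-array sketch of that idea in numpy (my own illustration of beamforming, not the paper's field-synthesis code): each emitter is multiplied by a complex factor whose phase ramps linearly across the array, and the far-field peak moves to the requested direction.

```python
import numpy as np

# Phased-array (beamforming) sketch: N point emitters on a line, spacing d = lambda/2.
# Multiplying emitter n by exp(-j*k*x_n*sin(theta0)) steers the far-field peak to theta0.
wavelength = 1.0
k = 2 * np.pi / wavelength
x_n = 0.5 * wavelength * np.arange(16)        # emitter positions

def steered_peak(theta0_deg):
    weights = np.exp(-1j * k * x_n * np.sin(np.radians(theta0_deg)))   # complex scale factors
    thetas = np.radians(np.arange(-90.0, 90.01, 0.1))
    # Far-field array factor: sum the emitter contributions toward each direction.
    af = np.abs(np.exp(1j * k * np.outer(np.sin(thetas), x_n)) @ weights)
    return np.degrees(thetas[np.argmax(af)])

for theta0 in [0.0, 10.0, 20.0, 30.0]:
    print(f"phase ramp for {theta0:5.1f} deg  ->  beam peak at {steered_peak(theta0):5.1f} deg")
```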

These three pictures represent the wavefront of the light wave, i.e. the wave crest, which can be understood as the waveform cross section of the light during propagation.

In the upper figure, the incident field distribution is concentrated in the center when the light is incident perpendicular to the surface. The light field is concentrated in the center and propagates in the vertical direction (i.e. the direction of the yellow line in the middle).

At the bottom left, the effect of superposition of multiple identical incident fields, but with the same phase (i.e. no phase difference) when superimposed. The addition of multiple incident fields makes the entire field wider in space, but the propagation direction remains vertical.

In the lower right corner, the superposition effect of multiple incident fields is shown, but a phase difference is added during the superposition, which is equivalent to "offsetting" the direction of the incident field, showing an inclined direction.

In the demo, two videos show the movement of BRDF patterns. (I'm too lazy to make GIFs)

Finally, the big guys took a photo with their hands making a heart shape.

paper

My first article briefly introduced the content and results of this work, so here we move directly to the theoretical derivation (Sections 3-5).

3 FULL-WAVE SIMULATION

The holistic approach starts with a surface model, described by a height field and its material properties (such as the complex refractive index), and specifies a target point.

To compute the BRDF for a given incident direction, an incident field is defined that propagates from that specific direction to the target point.

This incident field is used as input and processed through a surface scattering simulation to solve for the corresponding scattered electromagnetic field.

In this section (FULL-WAVE SIMULATION), we will focus on the principles of BEM in application scenarios. The next section will explain how to efficiently implement the BEM algorithm, and the last section will explain how to combine multiple simulation results to synthesize a BRDF that is densely sampled in the incident and scattering directions.

Let me start with a symbol table to scare you.

3.1 Boundary Element Method: The Basics

The boundary element method (BEM) mainly solves single-frequency scattering problems, that is, how electromagnetic waves (both electric and magnetic fields) of a specific frequency are reflected and scattered at the boundary between different media. The boundary divides space into two homogeneous regions whose material properties are written $(\epsilon_1, \mu_1)$ and $(\epsilon_2, \mu_2)$, where $\epsilon$ is the permittivity (dielectric constant) and $\mu$ is the magnetic permeability.

In this approach, we deal with complex-valued fields that contain both amplitude and phase information (i.e., the propagation state of the wave). To simplify the formula, we assume that all waves are "time-harmonic" - that is, the waves change with a specific period over time. Throughout the text, the $e^{j\omega t}$ term is omitted to simplify the presentation.
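To make the convention explicit (a standard fact I am adding, not a statement from the paper): "time-harmonic" means every field quantity is the real part of a complex amplitude multiplied by $e^{j\omega t}$, so time derivatives become multiplications by $j\omega$:

$$
\mathbf{E}(\mathbf{r}, t) = \operatorname{Re}\left\{ \mathbf{E}(\mathbf{r}) \, e^{j \omega t} \right\},
\qquad
\frac{\partial}{\partial t} \mathbf{E}(\mathbf{r}, t) \;\longrightarrow\; j \omega \, \mathbf{E}(\mathbf{r}).
$$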

3.1.1 Maxwell's Equations and Surface Currents

First, Maxwell's equations describe how electric fields (E) and magnetic fields (H) interact with each other, determining how light waves propagate and scatter between different materials. For simplicity, this is expressed in a "time-harmonic" form:

$$
\begin{aligned}
\nabla \times \mathbf{E} &= -\mathbf{M} - j \omega \mu \mathbf{H} \\
\nabla \times \mathbf{H} &= \mathbf{J} + j \omega \epsilon \mathbf{E}
\end{aligned}
\tag{1}
$$


The left-hand sides describe how strongly the electric and magnetic fields "curl" in space. $\mathbf{M}$ and $\mathbf{J}$ are (fictitious) surface current densities, the magnetic and electric current densities respectively. The formula can be read as: when the electric field "curls" near the boundary, it is accompanied by magnetic current and a changing magnetic field; likewise, the curl of the magnetic field is accompanied by electric current and a changing electric field.

The core idea of the boundary element method is to introduce surface currents on the boundary and use these currents to describe the field distribution indirectly, without having to compute every point inside each region. The three-dimensional problem is thereby reduced to a two-dimensional problem on the boundary.

3.1.2 Source-Field Relationships

How does an imaginary electric current on the surface (the "source" of the electromagnetic waves) produce the scattered electromagnetic field (the "field")?

As shown in Fig. 2, in region $R_1$ the total fields (incident plus scattered) are written $\mathbf{E}_1$ and $\mathbf{H}_1$:

$$ \begin{align} &\mathbf{E}_1 = \mathbf{E}_i + \mathbf{E}_s \\ &\mathbf{H}_1 = \mathbf{H}_i + \mathbf{H}_s \tag{2} \end{align} $$


In the region $R_2$ , the total field is denoted by $E_2$ and $H_2$ . The upper scattered field is generated by the upper electromagnetic current; the lower scattered field is generated by the lower electromagnetic current.

In a homogeneous medium, Maxwell's equations can be written in integral form to describe how electric and magnetic fields are generated.

$$
\begin{aligned}
\mathbf{E}(\mathbf{r}) &= -j \omega \mu (\mathcal{L} \mathbf{J})(\mathbf{r}) - (\mathcal{K} \mathbf{M})(\mathbf{r}) \\
\mathbf{H}(\mathbf{r}) &= -j \omega \epsilon (\mathcal{L} \mathbf{M})(\mathbf{r}) + (\mathcal{K} \mathbf{J})(\mathbf{r})
\end{aligned}
\tag{3}
$$


The left side of the equal sign represents the electromagnetic field strength at $r$, that is, it describes the "effect" of the field. $\mathcal{L}$ and $\mathcal{K}$ are integral operators that represent how the field is generated from the surface current and magnetization. These two operators are defined as:

$$
\begin{aligned}
& (\mathcal{L} \mathbf{X})(\mathbf{r}) = \left[ 1 + \frac{1}{k^2} \nabla \nabla \cdot \right] \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \, \mathbf{X}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \\
& (\mathcal{K} \mathbf{X})(\mathbf{r}) = \nabla \times \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \, \mathbf{X}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime}
\end{aligned}
\tag{4}
$$


$G(\mathbf{r}, \mathbf{r}^{\prime})$ is the three-dimensional Green's function of the scalar Helmholtz equation, defined as:

$$
G(\mathbf{r}, \mathbf{r}^{\prime}) = \frac{e^{-jkr}}{4 \pi r} \quad \text{where } r = |\mathbf{r} - \mathbf{r}^{\prime}|
\tag{5}
$$


This function converts the source field of the scattering surface into the distribution of the electromagnetic field in the scattering area.

The formula (11) in this paper is actually the same as that in [Xia 2023], but this paper implicitly incorporates the Green's function in the operator and the integration domain is wider. In essence, both describe how to generate the scattered electric field $E(r)$ from the current density $\mathbf{J}$ and the magnetic flux density $\mathbf{M}$.

When solving Maxwell's equations, the Green's function is used to integrate the influence of the various sources (currents and charges) on the electromagnetic field throughout space. Assuming the field varies in time as $e^{j\omega t}$ (a single frequency), one obtains a Helmholtz-like equation, $(\nabla^2 + k^2) \mathbf{E} = -j \omega \mu \mathbf{J}$, which is essentially a "frequency-domain" form of Maxwell's equations.

The Green's function is introduced to establish the source-field relationship: the current $\mathbf{J}$ and the charge $\rho$ act as "sources" from which the distributions of $\mathbf{E}$ and $\mathbf{H}$ are computed. The Green's function satisfies $(\nabla^2 + k^2) G(\mathbf{r}, \mathbf{r}^{\prime}) = -\delta(\mathbf{r} - \mathbf{r}^{\prime})$, where $\delta$ is the Dirac delta function; $G$ describes the standard wave field excited by a "point source" in space.

Through the Green's function, the influence of an arbitrary current distribution $\mathbf{J}$ on a field point $\mathbf{r}$ can be written down: the "current" or "charge" at each source point spreads through space via the Green's function and accumulates at each field point. For example, $\mathbf{E}(\mathbf{r}) = \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{J}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime}$ expresses the field as an integral superposition of source currents.

Finally, combining the integral form of Maxwell's equations with the Green's-function idea, the electric field can be written as $\mathbf{E}(\mathbf{r}) = -j \omega \mu \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{J}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} - \nabla \times \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{M}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime}$. In other words, the Green's function acts as a convolution kernel: the integral propagates the current at every source point to the target field point and accumulates its effect over all of space.
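As a toy illustration of this source-field superposition, here is a hedged numpy sketch (my own; it keeps only the leading $G \mathbf{J}$ term and drops the $\nabla \nabla \cdot$ part of the $\mathcal{L}$ operator, so it is qualitative only):

```python
import numpy as np

# Toy source-field superposition: E(r) ~ -j*omega*mu * sum_q G(r, r_q) * J_q * dV.
# Only the leading G*J term is kept; the grad-div term of the L operator is dropped.
wavelength = 0.5e-6                  # 500 nm (visible light)
k = 2 * np.pi / wavelength
omega = 2 * np.pi * 3e8 / wavelength
mu = 4e-7 * np.pi                    # vacuum permeability

def G(r, r_src):
    """Scalar Helmholtz Green's function G(r, r') = exp(-j*k*R) / (4*pi*R), Eq. (5)."""
    R = np.linalg.norm(r - r_src)
    return np.exp(-1j * k * R) / (4 * np.pi * R)

# A short line of y-directed current samples (a made-up source distribution).
src_pts = np.stack([np.linspace(-1e-6, 1e-6, 41), np.zeros(41), np.zeros(41)], axis=1)
J_y = np.ones(41)                    # uniform current amplitude along the segment
dV = 2e-6 / 40                       # length element standing in for the volume element

def E_y(r_obs):
    """y-component of the radiated field: superpose the point-source contributions."""
    return -1j * omega * mu * sum(G(r_obs, p) * jy * dV for p, jy in zip(src_pts, J_y))

for z in [2e-6, 4e-6, 8e-6]:
    print(f"|E_y| at z = {z*1e6:.0f} um : {abs(E_y(np.array([0.0, 0.0, z]))):.3e}")
```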

The electric field in the dielectric regions $R_1$ and $R_2$ is expressed as formula (6) and (7) respectively, which include two operators.

In region $R_1$:

$$
\begin{aligned}
\mathbf{E}_1(\mathbf{r}) &= -j \omega \mu_1 (\mathcal{L}_1 \mathbf{J}_1)(\mathbf{r}) - (\mathcal{K}_1 \mathbf{M}_1)(\mathbf{r}) \\
\mathbf{H}_1(\mathbf{r}) &= -j \omega \epsilon_1 (\mathcal{L}_1 \mathbf{M}_1)(\mathbf{r}) + (\mathcal{K}_1 \mathbf{J}_1)(\mathbf{r})
\end{aligned}
\tag{6}
$$


In area $R_2$:

$$
\begin{aligned}
\mathbf{E}_2(\mathbf{r}) &= -j \omega \mu_2 (\mathcal{L}_2 \mathbf{J}_2)(\mathbf{r}) - (\mathcal{K}_2 \mathbf{M}_2)(\mathbf{r}) \\
\mathbf{H}_2(\mathbf{r}) &= -j \omega \epsilon_2 (\mathcal{L}_2 \mathbf{M}_2)(\mathbf{r}) + (\mathcal{K}_2 \mathbf{J}_2)(\mathbf{r})
\end{aligned}
\tag{7}
$$


This gives the manifestations of electric and magnetic fields in different dielectric regions.

In summary, this section transforms the Maxwell equations into integral expressions by generating scattered electromagnetic fields through hypothetical surface currents. Green's function transforms current and charge distribution into integral superposition of electromagnetic fields, showing the specific implementation of the source-field relationship. Finally, the expressions of the electric and magnetic fields in the regions $R_1$ and $R_2$ are given, showing the influence of different medium parameters on the field.

3.1.3 Boundary Conditions

When electromagnetic waves propagate at the boundary of two different media, reflection and refraction will occur. At this time, the energy of the wave cannot disappear out of thin air, but transitions smoothly at the interface. If the electric field or magnetic field is discontinuous at the boundary, an unrealistic energy jump will occur (that is, the energy suddenly disappears or increases), which violates the law of conservation of energy.

Specifically, you can search: "Interface conditions for electromagnetic fields".

Therefore, certain boundary conditions must be met at the boundary of the medium to ensure the continuity of the electromagnetic field and the conservation of energy. Specifically, when an electromagnetic wave propagates at the interface of two different media, the tangential components of the electric and magnetic fields need to maintain continuity at the boundary:


$$
\begin{aligned}
& \mathbf{n} \times (\mathbf{E}_1 - \mathbf{E}_2) = 0 \\
& \mathbf{n} \times (\mathbf{H}_1 - \mathbf{H}_2) = 0
\end{aligned}
\tag{8}
$$


And the net electromagnetic current density on the boundary is zero, that is, the electromagnetic current densities on both sides of the boundary are opposite in direction and equal in magnitude.


$$
\begin{aligned}
& \mathbf{J} = \mathbf{J}_1 = -\mathbf{J}_2 \\
& \mathbf{M} = \mathbf{M}_1 = -\mathbf{M}_2
\end{aligned}
\tag{9}
$$


These two conditions must be met at the same time to avoid violating the laws of physical conservation.

3.1.4 Integral Equations

Combining the above formulas (6), (7), (8) and (9), we get the integral equations for the electric and magnetic fields, which are called the PMCHWT (Poggio-Miller-Chang-Harrington-Wu-Tsai) equations:

$$
\begin{aligned}
& \left[ j \omega \mu_1 \left(\mathcal{L}_1 \mathbf{J}\right)(\mathbf{r}) + j \omega \mu_2 \left(\mathcal{L}_2 \mathbf{J}\right)(\mathbf{r}) + \left(\mathcal{K}_1 \mathbf{M}\right)(\mathbf{r}) + \left(\mathcal{K}_2 \mathbf{M}\right)(\mathbf{r}) \right]_{\tan} \\
& \qquad = \left[ \mathbf{E}_i(\mathbf{r}) \right]_{\tan} \\
& \left[ \left(\mathcal{K}_1 \mathbf{J}\right)(\mathbf{r}) + \left(\mathcal{K}_2 \mathbf{J}\right)(\mathbf{r}) - j \omega \varepsilon_1 \left(\mathcal{L}_1 \mathbf{M}\right)(\mathbf{r}) - j \omega \varepsilon_2 \left(\mathcal{L}_2 \mathbf{M}\right)(\mathbf{r}) \right]_{\tan} \\
& \qquad = -\left[ \mathbf{H}_i(\mathbf{r}) \right]_{\tan}
\end{aligned}
\tag{10}
$$


These two equations also have names:

  • EFIE (Electric Field Integral Equation). $j \omega \mu_1 (\mathcal{L}_1 \mathbf{J})(\mathbf{r})$ and $j \omega \mu_2 (\mathcal{L}_2 \mathbf{J})(\mathbf{r})$ represent the contribution of the current density $\mathbf{J}$ to the electric field in medium 1 and medium 2, while $\mathcal{K}_1 \mathbf{M}$ and $\mathcal{K}_2 \mathbf{M}$ represent the contribution of the magnetic current density $\mathbf{M}$ to the electric field in medium 1 and medium 2.
  • MFIE (Magnetic Field Integral Equation).

In general, this is a boundary integral equation specifically used to solve electromagnetic scattering problems caused by dielectric objects. With this PMCHWT equation, the distribution of electromagnetic fields can be accurately calculated.

3.1.5 Solving for Current Densities

In this section, we need to use the PMCHWT equation above to calculate the distribution of "electric current" and "magnetic current" on the surface of the object. These distributions determine how the electromagnetic wave will be "reflected" or "refracted" when it hits the object.

The surface current density $\mathbf{J}$ and magnetic current density $\mathbf{M}$ are solved for by discretizing the boundary into elements. Basis functions $f_m(\mathbf{r})$ are defined on the discrete elements, and the two current densities are expressed as basis-function expansions.


$$
\mathbf{J}(\mathbf{r}) = \sum_{m=1}^{N} I_{J_m} f_m(\mathbf{r}); \quad \mathbf{M}(\mathbf{r}) = \sum_{n=1}^{N} I_{M_n} f_n(\mathbf{r})
\tag{11}
$$


$N$ is the total number of basis functions; $I_{J_m}$ and $I_{M_n}$ are the unknown coefficients of the corresponding basis functions, representing the electric and magnetic current strengths on each element.

Through this basis function expansion, the continuous surface current density and magnetic current density are decomposed into a linear combination of a series of basis functions.

In order to solve the unknown coefficients of the corresponding basis functions, the electric field integral equation (EFIE) and the magnetic field integral equation (MFIE) are transformed into a linear equation system. This is done using the Galerkin Method. The basic idea of this method is to apply the integral equation to each basis function and perform weighted averaging so that the integral equation holds true in the projection direction of each basis function. Simply put, this method involves discretization, finding the basis, and calculating the coefficients. A high-dimensional linear equation system can be simplified using linear algebra methods.
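To make the Galerkin procedure concrete, here is a toy numpy sketch (a 1D scalar integral equation with piecewise-constant basis functions and midpoint quadrature, entirely my own example; the paper's BEM uses vector basis functions on 3D surface elements):

```python
import numpy as np

# Toy Galerkin discretization of the 1D integral equation
#     integral_0^1 K(x, x') u(x') dx' = f(x)
# with N piecewise-constant basis functions and midpoint quadrature.
N = 64
edges = np.linspace(0.0, 1.0, N + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
h = 1.0 / N

K = lambda x, xp: np.exp(-np.abs(x - xp))    # a smooth, well-behaved kernel (illustrative)
u_true = lambda x: np.sin(np.pi * x)         # manufactured "true" solution

# Right-hand side f(x) = integral K(x, x') u_true(x') dx', by the same quadrature.
f = (K(mids[:, None], mids[None, :]) * u_true(mids)[None, :] * h).sum(axis=1)

# Galerkin system: A_mn = iint phi_m(x) K(x, x') phi_n(x') dx' dx,  V_m = int phi_m(x) f(x) dx.
A = K(mids[:, None], mids[None, :]) * h * h
V = f * h

coeffs = np.linalg.solve(A, V)               # expansion coefficients (analogous to I_J, I_M)
# Recovery is essentially exact here because f was built with the same quadrature.
print("max error vs true solution:", np.max(np.abs(coeffs - u_true(mids))))
```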

In this way, the EFIE part of the original continuous form of the PMCHWT equation can be converted into a finite number of linear equations, and the problem can be transformed into solving the following matrix equation.


$$
\begin{bmatrix} A_{EJ} & A_{EM} \\ A_{HJ} & A_{HM} \end{bmatrix}
\begin{bmatrix} I_J \\ I_M \end{bmatrix}
=
\begin{bmatrix} V_E \\ V_H \end{bmatrix}
\tag{12}
$$


where


$$
A_{\mathrm{EJ}}^{mn} = \int_S \mathbf{f}_m(\mathbf{r}) \cdot \left[ j \omega \mu_1 \left(\mathcal{L}_1 \mathbf{f}_n\right)(\mathbf{r}) + j \omega \mu_2 \left(\mathcal{L}_2 \mathbf{f}_n\right)(\mathbf{r}) \right] d\mathbf{r} \tag{13}
$$

$$
A_{\mathrm{EM}}^{mn} = \int_S \mathbf{f}_m(\mathbf{r}) \cdot \left[ \left(\mathcal{K}_1 \mathbf{f}_n\right)(\mathbf{r}) + \left(\mathcal{K}_2 \mathbf{f}_n\right)(\mathbf{r}) \right] d\mathbf{r} \tag{14}
$$

$$
A_{\mathrm{HJ}}^{mn} = \int_S \mathbf{f}_m(\mathbf{r}) \cdot \left[ \left(\mathcal{K}_1 \mathbf{f}_n\right)(\mathbf{r}) + \left(\mathcal{K}_2 \mathbf{f}_n\right)(\mathbf{r}) \right] d\mathbf{r} \tag{15}
$$

$$
A_{\mathrm{HM}}^{mn} = -\int_S \mathbf{f}_m(\mathbf{r}) \cdot \left[ j \omega \varepsilon_1 \left(\mathcal{L}_1 \mathbf{f}_n\right)(\mathbf{r}) + j \omega \varepsilon_2 \left(\mathcal{L}_2 \mathbf{f}_n\right)(\mathbf{r}) \right] d\mathbf{r} \tag{16}
$$

and


$$
V_{\mathrm{E}}^m = \int_S \mathbf{f}_m(\mathbf{r}) \cdot \mathbf{E}_i(\mathbf{r}) \, d\mathbf{r} \tag{17}
$$

$$
V_{\mathrm{H}}^m = -\int_S \mathbf{f}_m(\mathbf{r}) \cdot \mathbf{H}_i(\mathbf{r}) \, d\mathbf{r} \tag{18}
$$

In formula (12), $I_J$ and $I_M$ need to be calculated.

Loosely speaking, formulas (13)-(16) represent, respectively, the contribution of each small element's electric current to the electric field, of its magnetic current to the electric field, of its electric current to the magnetic field, and of its magnetic current to the magnetic field. Formulas (17) and (18) represent the "driving force" of the external incident electric field on the element currents and of the incident magnetic field on the element magnetic currents. Note that each matrix element, such as $A_{EJ}^{mn}$, is actually a double integral; since the integration over the source point $\mathbf{r}^{\prime}$ is already contained in $\mathcal{L}_1$ and $\mathcal{L}_2$, it looks like a single integral in the original paper.

Although it is not written out in the original paper, I recommend that interested readers derive it themselves: try expanding formula (13) using formula (4) above. I have attempted the derivation here; please correct me if there are any mistakes. First, substitute the two operators, and note where the gradient operator sits:


$$
\begin{aligned}
A_{\mathrm{EJ}}^{mn} &= j \omega \mu_1 \int_S \mathbf{f}_m(\mathbf{r}) \cdot \left\{ \int_V G_1(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} + \frac{1}{k_1^2} \nabla \left[ \nabla \cdot \int_V G_1(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \right] \right\} d\mathbf{r} \\
&\quad + j \omega \mu_2 \int_S \mathbf{f}_m(\mathbf{r}) \cdot \left\{ \int_V G_2(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} + \frac{1}{k_2^2} \nabla \left[ \nabla \cdot \int_V G_2(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \right] \right\} d\mathbf{r}
\end{aligned}
$$

Consider first one of the gradient terms:

$$
\int_S \mathbf{f}_m(\mathbf{r}) \cdot \nabla \left[ \nabla \cdot \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \right] d\mathbf{r}
$$

Expand this using integration by parts (the vector identity below):

$$
\int_S \mathbf{f}_m \cdot \nabla B \, d\mathbf{r} = -\int_S B \, (\nabla \cdot \mathbf{f}_m) \, d\mathbf{r} + \int_{\partial S} B \, (\mathbf{f}_m \cdot \mathbf{n}) \, d\mathbf{r}
$$

where the divergence term $B$ is:

$$
B = \nabla \cdot \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime}
$$

But sorry, this is physics: under the boundary conditions the boundary term simply vanishes, and we get:

$$
\int_S \mathbf{f}_m \cdot \nabla B \, dS = -\int_S B (\nabla \cdot \mathbf{f}_m) \, dS
$$

For the divergence term $B$ , the divergence can be directly expanded:

$$
B = \nabla \cdot \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} = \int_V \nabla \cdot \left( G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \right) d\mathbf{r}^{\prime}
$$

One factor is a scalar function and the other is a vector function, so by the product rule for divergence:

$$
\nabla \cdot (G \mathbf{f}_n) = (\nabla G) \cdot \mathbf{f}_n + G (\nabla \cdot \mathbf{f}_n)
$$

Here $\mathbf{f}_n(\mathbf{r}^{\prime})$ depends on $\mathbf{r}^{\prime}$, not on $\mathbf{r}$, so its divergence with respect to $\mathbf{r}$ vanishes and the product rule reduces to:

$$
\nabla \cdot (G \mathbf{f}_n) = (\nabla G) \cdot \mathbf{f}_n
$$

At the same time, we note the symmetry of the Green's function, $\nabla G(\mathbf{r}, \mathbf{r}^{\prime}) = -\nabla^{\prime} G(\mathbf{r}, \mathbf{r}^{\prime})$, so:

$$
B = \nabla \cdot \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} = -\int_V \nabla^{\prime} G(\mathbf{r}, \mathbf{r}^{\prime}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime}
$$

Integrate this term by parts as well:

$$
\int_V \nabla^{\prime} G(\mathbf{r}, \mathbf{r}^{\prime}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} = \int_V \nabla^{\prime} \cdot \left( G \mathbf{f}_n \right) d\mathbf{r}^{\prime} - \int_V G \, (\nabla^{\prime} \cdot \mathbf{f}_n) \, d\mathbf{r}^{\prime}
$$

But sorry, this is physics. The boundary terms are simplified again:

$$
\int_V \nabla^{\prime} G(\mathbf{r}, \mathbf{r}^{\prime}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} = -\int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \, (\nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime})) \, d\mathbf{r}^{\prime}
$$

Finally, we get:

$$
B = \nabla \cdot \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} = \int_V G(\mathbf{r}, \mathbf{r}^{\prime}) \, (\nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime})) \, d\mathbf{r}^{\prime}
$$

The relationship between wave number $k_i$ and medium parameters:

$$
k_i^2 = \omega^2 \mu_i \varepsilon_i \quad \Rightarrow \quad \frac{1}{k_i^2} = \frac{1}{\omega^2 \mu_i \varepsilon_i}
$$

The same goes for the other gradient term, and finally we get the final expression $A_{\mathrm{EJ}}^{mn}$ :

$$
\begin{aligned}
A_{\mathrm{EJ}}^{mn} &= j \omega \mu_1 \int_S \int_{V_1} \mathbf{f}_m(\mathbf{r}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, G_1(\mathbf{r}, \mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - \frac{j}{\omega \varepsilon_1} \int_S \int_{V_1} (\nabla \cdot \mathbf{f}_m(\mathbf{r})) \, G_1(\mathbf{r}, \mathbf{r}^{\prime}) \, (\nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime})) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + j \omega \mu_2 \int_S \int_{V_2} \mathbf{f}_m(\mathbf{r}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, G_2(\mathbf{r}, \mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - \frac{j}{\omega \varepsilon_2} \int_S \int_{V_2} (\nabla \cdot \mathbf{f}_m(\mathbf{r})) \, G_2(\mathbf{r}, \mathbf{r}^{\prime}) \, (\nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime})) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
\end{aligned}
$$

The same operation is used to obtain the remaining matrix elements. Here we directly copy the content of the additional materials of the paper:

$$
\begin{aligned}
A_{\mathrm{EM}}^{mn} = A_{\mathrm{HJ}}^{mn} &= \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_m(\mathbf{r}) \cdot \left[ \nabla G_1(\mathbf{r}, \mathbf{r}^{\prime}) \times \mathbf{f}_n(\mathbf{r}^{\prime}) \right] d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_m(\mathbf{r}) \cdot \left[ \nabla G_2(\mathbf{r}, \mathbf{r}^{\prime}) \times \mathbf{f}_n(\mathbf{r}^{\prime}) \right] d\mathbf{r}^{\prime} \, d\mathbf{r} \\
A_{\mathrm{HM}}^{mn} &= -j \omega \varepsilon_1 \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_m(\mathbf{r}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, G_1(\mathbf{r}, \mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + \frac{j}{\omega \mu_1} \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \nabla \cdot \mathbf{f}_m(\mathbf{r}) \, G_1(\mathbf{r}, \mathbf{r}^{\prime}) \, \nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - j \omega \varepsilon_2 \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_m(\mathbf{r}) \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, G_2(\mathbf{r}, \mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + \frac{j}{\omega \mu_2} \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \nabla \cdot \mathbf{f}_m(\mathbf{r}) \, G_2(\mathbf{r}, \mathbf{r}^{\prime}) \, \nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
\end{aligned}
$$

After obtaining each element of the matrix, we introduce translation-invariant functions that express the Green's function and its gradient purely as functions of the offset $\mathbf{r} - \mathbf{r}^{\prime}$:

$$
\begin{aligned}
& g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) = G_i(\mathbf{r}, \mathbf{r}^{\prime}) = \frac{e^{-j k_i r}}{4 \pi r} \\
& g_{2, i}(\mathbf{r}-\mathbf{r}^{\prime}) = \hat{\mathbf{x}} \cdot \nabla G_i(\mathbf{r}, \mathbf{r}^{\prime}) = -(x - x^{\prime}) \left( \frac{1+j k_i r}{4 \pi r^3} \right) e^{-j k_i r} \\
& g_{3, i}(\mathbf{r}-\mathbf{r}^{\prime}) = \hat{\mathbf{y}} \cdot \nabla G_i(\mathbf{r}, \mathbf{r}^{\prime}) = -(y - y^{\prime}) \left( \frac{1+j k_i r}{4 \pi r^3} \right) e^{-j k_i r} \\
& g_{4, i}(\mathbf{r}-\mathbf{r}^{\prime}) = \hat{\mathbf{z}} \cdot \nabla G_i(\mathbf{r}, \mathbf{r}^{\prime}) = -(z - z^{\prime}) \left( \frac{1+j k_i r}{4 \pi r^3} \right) e^{-j k_i r} \quad \text{where } r = |\mathbf{r}-\mathbf{r}^{\prime}|
\end{aligned}
$$

Then the matrix elements are expanded into a combination of different components of the basis functions, which act on the translation invariant functions. All elements of the final matrix have the following form. This form can speed up the construction and solution of the boundary element matrix.

$$
\int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \psi_m(\mathbf{r}) \, g(\mathbf{r}-\mathbf{r}^{\prime}) \, \xi_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
$$

Here, $x, y, z, x^{\prime}, y^{\prime}, z^{\prime}$ are the Cartesian components of $\mathbf{r}$ and $\mathbf{r}^{\prime}$. We then have, for $i = 1, 2$:

$$
\begin{aligned}
A_{\mathrm{EJ}, i}^{mn} &= j \omega \mu_i \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mx}(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nx}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + j \omega \mu_i \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{my}(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{ny}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + j \omega \mu_i \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mz}(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nz}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - \frac{j}{\omega \varepsilon_i} \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \nabla \cdot \mathbf{f}_m(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
\end{aligned}
\tag{S.18}
$$

where $\mathbf{f}_{mx}, \mathbf{f}_{my}, \mathbf{f}_{mz}$ are the $x, y, z$ components of the vector basis function $\mathbf{f}_m$. Similarly, we have:

$$
\begin{aligned}
A_{\mathrm{EM}, i}^{mn} = A_{\mathrm{HJ}, i}^{mn} &= \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mz}(\mathbf{r}) \, g_{2, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{ny}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{my}(\mathbf{r}) \, g_{2, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nz}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mx}(\mathbf{r}) \, g_{3, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nz}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mz}(\mathbf{r}) \, g_{3, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nx}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{my}(\mathbf{r}) \, g_{4, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nx}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mx}(\mathbf{r}) \, g_{4, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{ny}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
\end{aligned}
\tag{S.19}
$$

Lastly, we also have:

$$
\begin{aligned}
A_{\mathrm{HM}, i}^{mn} &= -j \omega \varepsilon_i \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mx}(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nx}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - j \omega \varepsilon_i \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{my}(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{ny}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad - j \omega \varepsilon_i \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \mathbf{f}_{mz}(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \mathbf{f}_{nz}(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r} \\
&\quad + \frac{j}{\omega \mu_i} \int_{\mathbf{f}_m} \int_{\mathbf{f}_n} \nabla \cdot \mathbf{f}_m(\mathbf{r}) \, g_{1, i}(\mathbf{r}-\mathbf{r}^{\prime}) \, \nabla^{\prime} \cdot \mathbf{f}_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
\end{aligned}
\tag{S.20}
$$


This part corresponds to the MVProd class in the code. It doesn't matter if you don't follow all of it; I mostly just copied it down, and the details go somewhat beyond the scope of graphics research.

In addition, the authors also discuss the symmetry of the EFIE and MFIE. It is precisely this symmetry that improves computational efficiency and reduces memory usage. Note:


$$
A_{EJ} = A_{EJ}^T, \quad A_{HM} = A_{HM}^T \tag{19}
$$

$$
A_{EM} = A_{EM}^T, \quad A_{HJ} = A_{HJ}^T, \quad A_{EM} = A_{HJ}\tag{20}
$$

Since the matrix is symmetric, we do not need to calculate all the matrix elements, nor do we need to store all the matrix elements.
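A tiny numpy sketch of how this symmetry can be exploited (random complex symmetric blocks stand in for the actual BEM blocks; this is not the paper's data layout):

```python
import numpy as np

# Only A_EJ, A_HM and A_EM need to be formed (and only their upper triangles stored);
# A_HJ is simply A_EM again, per Eqs. (19)-(20).
rng = np.random.default_rng(1)
n = 4

def random_symmetric(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return 0.5 * (B + B.T)                   # complex symmetric (not Hermitian)

A_EJ, A_HM, A_EM = random_symmetric(n), random_symmetric(n), random_symmetric(n)
A_HJ = A_EM                                  # Eq. (20)

A = np.block([[A_EJ, A_EM], [A_HJ, A_HM]])
print("full system matrix is symmetric:", np.allclose(A, A.T))
# Symmetry means roughly half of the entries never need to be computed or stored.
```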

After solving for the surface current density, equation (6) can be used to calculate the scattered field propagating outward from the scattering surface.

3.2 Rough Surface Scattering: The Specifics

When simulating the interaction between electromagnetic waves and rough surfaces, the irregular geometric structure of the surface will have a significant impact on the scattering characteristics. The authors discretized the rough surface and divided the continuous surface into multiple units, each of which can be numerically calculated using the PMCHWT equation.

3.2.1 Rough Surface Samples

The rough surface is represented as a two-dimensional height field, which is then discretized into a grid of rectangular cells.

Each simulation considers only a surface sample of size $L_x \times L_y$, and a step size $d$ is chosen to define the discretization grid.


$$
\begin{aligned}
x_s &= s \cdot d, \quad s = 0, 1, \ldots, N_x \\
y_t &= t \cdot d, \quad t = 0, 1, \ldots, N_y
\end{aligned}
\tag{21}
$$


Here $N_x = L_x / d$ and $N_y = L_y / d$ are the numbers of cells in the $x$ and $y$ directions, and each discrete point $(x_s, y_t)$ carries a height value $h(x_s, y_t)$.

The authors concluded that the height variations of rough surfaces are very small, typically only a few micrometers, which is comparable to the wavelength of visible electromagnetic waves.
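As a hedged illustration of the setup in Eq. (21) (all parameter values below are my own, not the paper's), this is how one might generate such a discretized surface sample as a correlated Gaussian random height field:

```python
import numpy as np

# Hypothetical surface sample: Lx x Ly = 24 um x 24 um, grid spacing d ~ lambda/16.
wavelength = 0.5e-6
d = wavelength / 16.0
Lx = Ly = 24e-6
Nx, Ny = int(Lx / d), int(Ly / d)
xs = np.arange(Nx + 1) * d                   # x_s = s*d,  Eq. (21)
ys = np.arange(Ny + 1) * d                   # y_t = t*d

# Gaussian random height field: white noise blurred by a Gaussian spectral filter,
# then rescaled to an RMS roughness of 0.2 um (illustrative numbers only).
rng = np.random.default_rng(42)
noise = rng.standard_normal((Nx + 1, Ny + 1))
kx = np.fft.fftfreq(Nx + 1, d=d)
ky = np.fft.fftfreq(Ny + 1, d=d)
corr_len = 2e-6
filt = np.exp(-0.5 * (2 * np.pi * corr_len)**2 * (kx[:, None]**2 + ky[None, :]**2))
h = np.fft.ifft2(np.fft.fft2(noise) * filt).real
h *= 0.2e-6 / h.std()

print(f"grid: {Nx+1} x {Ny+1} samples, d = {d*1e9:.1f} nm, rms height = {h.std()*1e6:.2f} um")
```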

3.2.2 Basis Elements and Functions

Each primitive has four corners, each corner has a different height, and each corner contributes differently to the current and magnetic flux. Therefore, four basis functions are defined on each small square to approximate the situation on each small square.

In most simulations, the step size $d$ is chosen to be about one sixteenth of the wavelength ($d \approx \lambda / 16$) to ensure accuracy.

Each primitive is parameterized by two parameters u and v, both in the range [-1, 1].

The shape of the primitive is represented by a bilinear function $\mathbf{r}(u, v)$, where $(s, t)$ represents the index of the current primitive, and the primitive is determined by the coordinates of the four vertices:

$$
\begin{aligned}
\mathbf{r}(u, v) = \; &\frac{(1 - u)(1 - v)}{4} \mathbf{p}_{s-1, t-1} + \\
&\frac{(1 - u)(1 + v)}{4} \mathbf{p}_{s-1, t} + \\
&\frac{(1 + u)(1 - v)}{4} \mathbf{p}_{s, t-1} + \\
&\frac{(1 + u)(1 + v)}{4} \mathbf{p}_{s, t}
\end{aligned}
\tag{22}
$$

where $\mathbf{p}_{s, t} = (x_s, y_t, z_{s, t})$ and its three neighbors are the coordinates of the four vertices of the cell.

Four basis functions $f_1(u, v), f_2(u, v), f_3(u, v), f_4(u, v)$ are defined on each rectangular primitive, and their forms are:


$$
\begin{aligned}
& f_1(u, v) = \frac{(1 - u)}{J(u, v)} \frac{\partial \mathbf{r}(u, v)}{\partial u}, \quad f_2(u, v) = \frac{(1 + u)}{J(u, v)} \frac{\partial \mathbf{r}(u, v)}{\partial u} \\
& f_3(u, v) = \frac{(1 - v)}{J(u, v)} \frac{\partial \mathbf{r}(u, v)}{\partial v}, \quad f_4(u, v) = \frac{(1 + v)}{J(u, v)} \frac{\partial \mathbf{r}(u, v)}{\partial v}
\end{aligned}
\tag{23}
$$


Here the Jacobian $J(u, v)$ is represented as follows:


$$
J(u, v) = \left| \frac{\partial \mathbf{r}(u, v)}{\partial u} \times \frac{\partial \mathbf{r}(u, v)}{ \partial v} \right|
\tag{24}
$$


The Jacobian accounts for the change of variables and ensures that the basis functions are properly scaled along the $u$ and $v$ directions.
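Here is a small numpy sketch of Eqs. (22)-(24) (the helper names are mine): evaluate the bilinear patch $\mathbf{r}(u, v)$ from its four corners, its partial derivatives, the Jacobian $J(u, v)$, and one of the four basis functions of Eq. (23).

```python
import numpy as np

def bilinear_patch(p00, p01, p10, p11):
    """p00..p11 are the corner points p_{s-1,t-1}, p_{s-1,t}, p_{s,t-1}, p_{s,t} of Eq. (22)."""
    def r(u, v):
        return ((1 - u) * (1 - v) * p00 + (1 - u) * (1 + v) * p01 +
                (1 + u) * (1 - v) * p10 + (1 + u) * (1 + v) * p11) / 4.0
    def r_u(u, v):          # partial derivative with respect to u
        return (-(1 - v) * p00 - (1 + v) * p01 + (1 - v) * p10 + (1 + v) * p11) / 4.0
    def r_v(u, v):          # partial derivative with respect to v
        return (-(1 - u) * p00 + (1 - u) * p01 - (1 + u) * p10 + (1 + u) * p11) / 4.0
    def jacobian(u, v):     # J(u, v) = |dr/du x dr/dv|, Eq. (24)
        return np.linalg.norm(np.cross(r_u(u, v), r_v(u, v)))
    return r, r_u, jacobian

# One cell of a height field with spacing d = 30 nm and four made-up corner heights.
d = 30e-9
p00 = np.array([0.0, 0.0, 2e-9]); p01 = np.array([0.0, d, 5e-9])
p10 = np.array([d, 0.0, 1e-9]);   p11 = np.array([d, d, 4e-9])
r, r_u, J = bilinear_patch(p00, p01, p10, p11)

f1 = lambda u, v: (1 - u) / J(u, v) * r_u(u, v)   # basis function f_1 of Eq. (23)
print("r(0,0)  =", r(0.0, 0.0))
print("J(0,0)  =", J(0.0, 0.0))
print("f1(0,0) =", f1(0.0, 0.0))
```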

3.2.3 Gaussian Beam Incidence

Since I don't know much about optics, the following is my personal understanding. A Gaussian beam is what a laser emits: the envelope of the traveling wave field pinches inward in the middle (at the beam waist). In other words, a Gaussian beam describes how the light's energy is distributed over the cross section, whereas plane waves and spherical waves focus on describing the direction in which energy propagates. During propagation, the wavefront of a Gaussian beam is approximately spherical.

The Gouy phase is a phase-delay effect in Gaussian-beam propagation: the beam picks up an extra phase shift as it passes through the focus. Also, the Gaussian beam is a solution of Maxwell's equations under the paraxial approximation and can be viewed approximately as a non-uniform spherical wave. I don't think it needs to be studied too deeply for now.

Back to the original paper: the advantage of a Gaussian beam is that the extent of the incident field can be controlled, so the induced surface current density is non-negligible only over a region slightly larger than the illuminated spot.

A Gaussian beam is an electromagnetic wave whose amplitude is distributed in a two-dimensional Gaussian pattern in a plane perpendicular to the propagation direction [Paschotta 2008]. Its energy is mainly concentrated near the center of the beam. Looking at the figure (a) above, a Gaussian beam can be described by the focal plane $P$ , the center point $o$ , and the beam waist $w$ . The field intensity decays with position as $e^{-r^2 / w^2}$ . When the distance from the center exceeds $2.5w$ , the field intensity decays to a very small value and can be considered to be almost zero.

However, Gaussian beams also have a certain degree of divergence. The divergence angle $\theta$ is approximately proportional to the wavelength $\lambda$ and inversely proportional to the beam waist $w$:


$$
\theta = \frac{\lambda}{\pi \eta w} \tag{25}
$$


When the beam is incident on the surface obliquely, the Gaussian beam has an elliptical cross section in the focal plane, as shown in the figure below. There are different beam waist sizes in two perpendicular directions: one parallel to the plane of the incident direction and the surface normal, and the other perpendicular to it.

To ensure that the area illuminated by the Gaussian beam on the surface is the same for every incident direction, two lateral beam waist widths are introduced: the waist perpendicular to the plane of incidence stays at $w$, while the waist within the plane of incidence is shrunk by $\cos \theta_i$ so that its projection onto the surface remains $w$:


$$
w_1 = w, \quad w_2 = w \cos \theta_i\tag{26}
$$


The illuminated area on the surface remains consistent even at different angles of incidence. This is very important for deriving the BRDF.
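A small sketch of the quantities above (Eqs. 25-26), with illustrative values of my own choosing; `eta` here is the refractive index of the surrounding medium, as I read Eq. (25):

```python
import numpy as np

wavelength = 500e-9     # 500 nm, illustrative
eta = 1.0               # refractive index of the surrounding medium (assumed)
w = 2e-6                # beam waist, 2 um (assumed)

# Divergence angle, Eq. (25)
theta = wavelength / (np.pi * eta * w)        # ~0.08 rad

# Transverse amplitude profile ~ exp(-r^2 / w^2); negligible beyond ~2.5 w
r = 2.5 * w
print(np.exp(-r**2 / w**2))                   # ~1.9e-3 of the peak amplitude

# Oblique incidence at theta_i: two waists, Eq. (26), so the footprint
# projected onto the surface matches the one at normal incidence.
theta_i = np.deg2rad(60.0)
w1, w2 = w, w * np.cos(theta_i)
print(w2 / np.cos(theta_i))                   # back-projects to w on the surface
```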

4. IMPLEMENTATION AND ACCELERATION

However, computing with the method above as-is would never finish in practice, so some acceleration methods are needed.

4.1 The Adaptive Integral Method

If we try to solve the system of equations in formula (12) above directly, the amount of computation is unacceptable.

$$ \begin{bmatrix} A_{EJ} & A_{EM} \\ A_{HJ} & A_{HM} \end{bmatrix} \begin{bmatrix} I_J \\ I_M \end{bmatrix} = \begin{bmatrix} V_E \\ V_H \end{bmatrix} \tag{12} $$


According to the original idea, $N$ basis functions are used to represent the current and magnetic flux densities, and the size of the matrix is $2N \times 2N$. If the matrix is solved directly (LU decomposition, Cholesky decomposition, etc.), the total complexity is $\mathcal{O}(N^3)$. Even with the conjugate gradient method, each iteration still costs $\mathcal{O}(N^2)$. In some small-scale simulations the number of basis functions is about $960 \times 960$, and the storage requirement is about 29.4 GB; with the Adaptive Integral Method (AIM), the total storage drops to about 76.8 MB at 8 bytes per entry. What is so magical about this method? Let's take a look.

4.1.1 Approximating Matrix Elements

AIM was first proposed by Bleszynski et al. [1996]. The core idea of AIM is to approximate the effect of each basis function as the effect of a set of point sources, avoiding the direct calculation of the exact interaction between each pair of basis functions, and at the same time spread the influence of each basis function through FFT to improve the calculation efficiency.

In AIM, each matrix element is approximated by a linear combination of a small number of grid-point interactions. This is why the paper asks the reader to keep deriving after formulas (13)-(16) above, so that the final result matches the form AIM expects:

$$
\int_{f_m} \int_{f_n} \psi_m(\mathbf{r}) \, g(\mathbf{r} - \mathbf{r}^{\prime}) \, \xi_n(\mathbf{r}^{\prime}) \, d\mathbf{r}^{\prime} \, d\mathbf{r}
\tag{27}
$$


AIM first creates a global 3D Cartesian grid in the space containing the electromagnetic fields and field sources, as shown in Figure (6) below.

In order to further simplify formula (27), the AIM algorithm approximates the original basis function as a point source on a set of grid points in this three-dimensional Cartesian coordinate system. In other words, it is a continuous-to-discrete transformation, which is convenient for subsequent FFT.

$$ \psi_m(\mathbf{r}) \approx \tilde{\psi}_m(\mathbf{r}) := \sum_{p \in S_m} \Lambda_{mp} \, \delta^3(\mathbf{r} - \mathbf{p}), \qquad \xi_n(\mathbf{r}^{\prime}) \approx \tilde{\xi}_n(\mathbf{r}^{\prime}) := \sum_{q \in S_n} \Lambda^{\prime}_{nq} \, \delta^3(\mathbf{r}^{\prime} - \mathbf{q}) \tag{28} $$

Substituting formula (28) into formula (27) converts the double integral into a double summation:

$$
\sum_{p \in S_m} \sum_{q \in S_n} \Lambda_{mp} \, g(\mathbf{p} - \mathbf{q}) \, \Lambda^{\prime}_{nq}
\tag{29}
$$
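A toy illustration of Eqs. (28)-(29): replace a basis function by weighted point sources on the global grid and evaluate the interaction as a double sum. The 1D setting, the scalar Green's function, and all values are mine, purely to show the structure:

```python
import numpy as np

# Toy 1D version of the AIM approximation: each basis function is replaced
# by weighted delta sources on a regular grid, Eq. (28), so the interaction
# integral of Eq. (27) collapses to the double sum of Eq. (29).
grid = np.linspace(0.0, 1.0, 33)               # global Cartesian grid points
k = 2 * np.pi / 0.05                           # wavenumber (illustrative)

def g(dist):
    """Scalar Green's function exp(-j k |r - r'|) / |r - r'| (illustrative)."""
    return np.exp(-1j * k * dist) / np.maximum(dist, 1e-12)

# Point-source weights Lambda_{mp}, Lambda'_{nq} on small supports S_m, S_n
S_m, Lam_m = grid[2:6],   np.array([0.1, 0.4, 0.4, 0.1])
S_n, Lam_n = grid[20:24], np.array([0.2, 0.3, 0.3, 0.2])

# Eq. (29): double sum over the two sets of grid sources
A_mn = sum(Lam_m[i] * g(abs(p - q)) * Lam_n[j]
           for i, p in enumerate(S_m)
           for j, q in enumerate(S_n))
print(A_mn)
```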

 

For details of the method, see:

Kai Yang and Ali E Yilmaz. 2011. Comparison of precorrected FFT/adaptive integral method matching schemes. Microwave and Optical Technology Letters 53, 6 (2011), 1368–1372.

4.1.2 Base and Correction Matrices

Based on formula (29), a set of base approximation matrices $B_{EJ}, B_{EM}, B_{HJ}, B_{HM}$ is defined to handle pairs of basis functions that are far apart. These matrices simplify the calculation by introducing the $\Lambda$ matrices and convolution operations. At the same time, for pairs of basis functions that are close together (within $d_{near}$), correction matrices $C_{EJ}, C_{EM}, C_{HJ}, C_{HM}$ are introduced to reduce the error; they are sparse matrices defined as follows:

$$ C_{\mathrm{X}}^{mn}=\left\{\begin{array}{ll} A_{\mathrm{X}}^{mn}-B_{\mathrm{X}}^{mn} & d_{mn} \leq d_{\text{near}} \\ 0 & \text{otherwise} \end{array}\right. \quad \mathrm{X} \in\{\mathrm{EJ}, \mathrm{EM}, \mathrm{HJ}, \mathrm{HM}\} \tag{30} $$

$A_X^{mn}$ is the exact value of the original matrix element, while $B_X^{mn}$ is its base-matrix approximation. Subtracting the approximation yields a correction term that compensates for the error on nearby basis function pairs.

In summary, the final approximate form of each matrix in the AIM method can be written as follows:

$$ \begin{aligned} A_{\mathrm{EJ}} \approx B_{\mathrm{EJ}}+C_{\mathrm{EJ}}; & \quad A_{\mathrm{EM}} \approx B_{\mathrm{EM}}+C_{\mathrm{EM}}; \\ A_{\mathrm{HJ}} \approx B_{\mathrm{HJ}}+C_{\mathrm{HJ}}; & \quad A_{\mathrm{HM}} \approx B_{\mathrm{HM}}+C_{\mathrm{HM}} \end{aligned} \tag{31} $$


In other words, each original matrix can be approximated by the sum of its base matrix and its correction matrix.
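A schematic of Eqs. (30)-(31) in SciPy sparse form; `A_exact` and `B_approx` are hypothetical callbacks returning single matrix entries, used only to show how the sparse correction is assembled:

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_correction(A_exact, B_approx, centers, d_near):
    """Sparse correction matrix C of Eq. (30): exact-minus-approximate for
    basis-function pairs closer than d_near, zero elsewhere."""
    n = len(centers)
    C = lil_matrix((n, n), dtype=complex)
    for m in range(n):
        for p in range(n):
            if np.linalg.norm(centers[m] - centers[p]) <= d_near:
                C[m, p] = A_exact(m, p) - B_approx(m, p)
    return C.tocsr()

# Usage sketch (placeholder names, not the paper's code):
# C_EJ = build_correction(exact_EJ_entry, base_EJ_entry, basis_centers, d_near)
# A_EJ @ x is then approximated by  B_EJ @ x + C_EJ @ x   (Eq. 31).
```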

4.1.3 Fast Matrix-Vector Multiplication

Fast Matrix-Vector Multiplication is the core of AIM.

Since the correction matrix $C$ obtained above is sparse, with non-zero entries only for nearby basis function pairs, multiplying by $C$ is very fast.

Using the convolution structure of the base approximation matrix $B$, the product of $B$ with a vector is computed in three steps:


$$
y_1 = \Lambda_2^T x, \quad y_2 = G y_1, \quad y_3 = \Lambda_1 y_2
\tag{32}
$$


The first step projects the vector onto the grid points through the sparse $\Lambda$ matrix.

The second step is the core: the data at the grid points is propagated across the whole grid, i.e., the influence of each point on every other point is computed. The closer two points are, the larger the corresponding entry of the propagation matrix $G$. An FFT is used to speed up this convolution.


$$
y_2 = \mathcal{F}^{-1}\left\{ \mathcal{F}(g) \, \mathcal{F}(y_1) \right\}
\tag{33}
$$


The third step maps the result back to the original basis function space.
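Putting Eqs. (32)-(33) together, a minimal NumPy sketch of the FFT-accelerated matrix-vector product (1D for readability; `Lambda1`, `Lambda2`, `G_hat`, and `C` are assumed to have been built beforehand, and the circulant embedding of $g$ is an assumption of this sketch):

```python
import numpy as np

def aim_matvec(x, Lambda1, Lambda2, G_hat, C):
    """Sketch of Eqs. (32)-(33): y ~ Lambda1 * F^{-1}{ F(g) * F(Lambda2^T x) } + C x.
    Lambda1, Lambda2, C are sparse matrices; G_hat is the precomputed FFT of the
    propagation function g, circulant-embedded on a zero-padded grid."""
    y1 = Lambda2.T @ x                          # step 1: project onto the grid
    M = y1.shape[0]
    Y1 = np.fft.fft(y1, n=G_hat.shape[0])       # zero-pad to the embedded grid size
    y2 = np.fft.ifft(G_hat * Y1)[:M]            # step 2: grid-to-grid convolution
    y3 = Lambda1 @ y2                           # step 3: back to basis-function space
    return y3 + C @ x                           # add the sparse near-field correction
```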

4.2 GPU-Accelerated Iterative Solving

The computational heavy lifting of the AIM method is shifted to fast Fourier transforms (FFT) and sparse matrix operations on the GPU. To summarize, the large matrix has been split into the base matrix $B$ and the correction matrix $C$, which handle the long-range and short-range basis function pairs respectively.

  • cuFFT: performs the base-matrix multiplication as a convolution in the frequency domain
  • cuSPARSE: accelerates the sparse matrix-vector products with the correction matrix $C$

The computation strategy is also adapted to the problem size:

  • For small-scale simulation tasks, only one GPU is needed.
  • For large-scale simulation tasks, the work is distributed across 4 GPUs.

4.2.1 Small-Scale Simulations

For small-scale simulation tasks (e.g. $12 \mu m \times 12 \mu m$), the Fourier transform of the propagation function (i.e. the Fourier transform of the matrix $G$) must be calculated and stored in advance. In small-scale tasks, the sparse correction matrix C occupies less than 5GB of video memory and can be handled by a single GPU.

4.2.2 Large-Scale Simulations

For large-scale simulation tasks (e.g. $24 \mu m \times 24 \mu m$), the memory of a single GPU is not enough to hold all the data, so the author distributes the computation across 4 GPUs. At this scale the number of basis functions reaches 960 × 960, and storing all non-zero elements of the correction matrix (row and column indices plus complex floating-point values) requires about 20 GB of memory. The strategy is otherwise the same as in the small-scale case: each GPU stores roughly 5 GB of the correction matrix $C$.

The MINRES solver runs on the host CPU, while the matrix-vector product $y = Ax$ is computed on the GPUs. The transfer time is not a concern: the vector is only about 30 MB.
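A sketch of this CPU-solver / GPU-matvec split using SciPy's LinearOperator; `gpu_matvec` is a hypothetical callback standing in for the cuFFT/cuSPARSE product described above, and GMRES stands in for the paper's MINRES because SciPy's `minres` only supports real symmetric systems:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def make_operator(n, gpu_matvec):
    """Wrap a (hypothetical) GPU matrix-vector product so a CPU-side Krylov
    solver can drive it. gpu_matvec(x) should return A @ x computed on the GPU."""
    return LinearOperator((n, n), matvec=gpu_matvec, dtype=complex)

# Usage sketch:
# n = 2 * num_basis_functions
# A = make_operator(n, gpu_matvec)
# I_sol, info = gmres(A, rhs)   # only the ~30 MB vectors cross the PCIe bus
```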

4.3 FFT-Accelerated Scattered Field Evaluation

The FFT is used to accelerate the calculation of the scattered field. The field scattered from the surface is evaluated in the far field region to finally find the surface BRDF.

After solving the BEM, the current density $\mathbf{J}$ and the magnetic flux density $\mathbf{M}$ on the surface are obtained. These distributions define the electromagnetic sources on the surface and can be used to evaluate the scattered field in the far-field region. The formula is simple: the field falls off with distance and accumulates a phase:


$$
\mathbf{E_s}(r) \approx \mathbf{E}(\hat{r}) \frac{e^{-jkr}}{r}; \quad \mathbf{H_s}(r) \approx \mathbf {H}(\hat{r}) \frac{e^{-jkr}}{r}
\tag{36}
$$


On the right-hand side, $\mathbf{E}(\hat{r})$ and $\mathbf{H}(\hat{r})$ are the far-field amplitudes in the direction $\hat{r}$; the strength of the scattered field can differ from direction to direction.


$$
\begin{aligned}
F_1(\hat{\mathbf{r}})=\int_V J_x\left(\mathbf{r}^{\prime}\right) e^{jk \mathbf{r}^{\prime} \cdot \hat {\mathbf{r}}} d \mathbf{r}^{\prime} ; & F_2(\hat{\mathbf{r}})=\int_V J_y\left(\mathbf{r}^{\prime}\right) e^{jk \mathbf{r}^{\prime} \cdot \hat {\mathbf{r}}} d \mathbf{r}^{\prime} \\
F_3(\hat{\mathbf{r}})=\int_V J_z\left(\mathbf{r}^{\prime}\right) e^{jk \mathbf{r}^{\prime} \cdot \hat {\mathbf{r}}} d \mathbf{r}^{\prime} ; & F_4(\hat{\mathbf{r}})=\int_V M_x\left(\mathbf{r}^{\prime}\right) e^{jk \mathbf{r}^{\prime} \cdot \hat {\mathbf{r}}} d \mathbf{r}^{\prime} \\
F_5(\hat{\mathbf{r}})=\int_V M_y\left(\mathbf{r}^{\prime}\right) e^{jk \mathbf{r}^{\prime} \cdot \hat {\mathbf{r}}} d \mathbf{r}^{\prime} ; & F_6(\hat{\mathbf{r}})=\int_V M_z\left(\mathbf{r}^{\prime}\right) e^{jk \mathbf{r}^{\prime} \cdot \hat {\mathbf{r}}} d \mathbf{r}^{\prime}
\end{aligned}
\tag{37}
$$


To avoid solving these integrals directly, the author uses the point source approximation ($\Lambda$ matrix) of $\mathbf{J}$ and $\mathbf{M}$ previously (4.1.2) to discretize each integral term $F_i(\hat{r})$ and rewrite it in Fourier transform form, as shown in formula (38):


$$
F_i(\hat{r}) = \sum_{p \in S} h_i(p) e^{jp \cdot k \hat{r}}
\tag{38}
$$


Convert the continuous field strength calculation into a discrete sum so that it can be quickly calculated using FFT. And from the above formula, it can be observed that $F_i(\hat{r})$ is actually the Fourier component of $h_i(p)$ at the spatial frequency $-k\hat{r}$.

The required spatial frequencies are not on the FFT grid but can be interpolated; we add zero padding prior to the FFT step, to ensure enough resolution in the frequency domain for the trilinear interpolation to be sufficiently accurate.
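A rough NumPy sketch of Eq. (38) with zero padding and interpolation, for a 1D row of point sources (purely illustrative; the actual method works on the 3D grid with trilinear interpolation, and all values here are mine):

```python
import numpy as np

# Point sources h(p) on a regular 1D grid with spacing d (illustrative values).
d = 25e-9
h = np.random.default_rng(0).standard_normal(256) + 0j   # source weights h_i(p)

# Zero-pad before the FFT so the frequency grid is fine enough to interpolate.
pad = 8
H = np.fft.fft(h, n=pad * len(h))
kappa = 2 * np.pi * np.fft.fftfreq(pad * len(h), d)       # angular spatial frequencies

# Eq. (38): F(r_hat) = sum_p h(p) exp(+j k p . r_hat) is the spectrum of h
# evaluated at spatial frequency -k * r_hat_x; interpolate the FFT there.
wavelength = 500e-9
k = 2 * np.pi / wavelength
rhat_x = np.sin(np.deg2rad(30.0))                         # x-component of the outgoing direction
order = np.argsort(kappa)
F = np.interp(-k * rhat_x, kappa[order], H[order])
```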

5 HIGH RESOLUTION BRDF GENERATION

After a lot of work, I finally got back to the familiar BRDF calculation. The key here is to use the linear superposition of small-scale simulations to reconstruct the far-field scattering of the large-scale incident field, rather than trying to do it all at once. A grid of $N^2$ small-scale Gaussian beams is linearly combined to approximate the large-scale incident field.

This is done with a technique called beam steering, which significantly reduces the computational cost because not every incident direction has to be simulated separately.

5.1 Basic and Derived Incident Directions

First, $N^2$ Gaussian beams propagating in a certain direction $\mathbf{u}$ form a grid of $N \times N$ points in the receiving plane. These beams are combined to generate a large total field.

Then, a complex scaling factor is introduced for each Gaussian beam to adjust its phase, and thus the propagation direction of the combined field. These steered directions are called derived directions:


$$
a_{st} = e^{jk \mathbf{p}_{st} \cdot \omega_i} \tag{39}
$$

When the angle between the target incident direction $-\omega_i$ and the basic direction $\mathbf{u}$ is close to the divergence of the small beams, aliasing artifacts begin to appear. An example is shown in Fig. 8 (d).

In our framework, we decide on a primary waist w and choose a collection of basic incident directions. In general, a smaller waist width means a larger divergence angle, so that more incident directions can be derived from each basic direction, thereby reducing the number of basic directions required. A larger waist width will reduce the divergence angle of each Gaussian beam, making the total field divergence after combination smaller, thus producing a more accurate incident direction.

The center of each hexagon corresponds to a basic incident direction. The incident directions over the whole hemisphere are divided into cells, and each cell belongs to one basic incident direction. In the hemispherical projection, this proportional relationship and the cosine factor cancel each other out, so the cells can be drawn as hexagons of equal size.

For each cardinal direction, several derived directions that deviate from the cardinal direction can be generated through "beam steering".

When the incident direction is close to the surface normal (small incident angle), the range of derived directions around the basic direction is small, because the beams are more concentrated and do not spread much. Conversely, the larger the incident angle (the closer to grazing), the larger the coverage of the derived directions. The paper expresses this with a $1/\cos \theta_i$ factor.

5.2 Individual Simulations and Synthesized Results

In order to calculate the BRDF, we need to know the scattering of this large area of incident light. However, directly simulating such a large area of light will require a high computational cost. Therefore, we:

The large-scale incident field is simulated using a superposition of small-scale simulations.

Think of it as using many small flashlights (Gaussian beams) to cover an area, rather than one giant searchlight.

First, determine the size of the flashlights (the size of the beam, i.e. the waist width $w$) and distribute them evenly on the surface. Written as a formula, this arrangement of flashlights is the set of grid points $\{x_s\}, \{y_t\}$, giving the center position of each Gaussian beam. The grid spacing is generally equal to the waist width, to cover the surface uniformly while keeping the divergence angle low.

Let each Gaussian beam produce the same electromagnetic field in its central region, just repeating the effect at different locations.

Next, to obtain the total scattered field of the large beam, each small beam's contribution needs to be adjusted by a phase factor and then superposed; see formula (39) for details.

Finally, combining with Eq. 39, the scattered fields in the far-field region for the pair of directions $(\omega_i, \omega_o)$ are given by:

$$ \begin{aligned} \mathbf{E}\left(\omega_i, \omega_o\right) & =\sum_{s=1}^n \sum_{t=1}^n e^{jk \mathbf{p}_{st} \cdot\left(\omega_i+\omega_o\right)} \mathbf{E}_{st}\left(\omega_o\right) \\ \mathbf{H}\left(\omega_i, \omega_o\right) & =\sum_{s=1}^n \sum_{t=1}^n e^{jk \mathbf{p}_{st} \cdot\left(\omega_i+\omega_o\right)} \mathbf{H}_{st}\left(\omega_o\right) \end{aligned} \tag{41} $$

where $\mathbf{E}, \mathbf{H}$ refer to the far field quantities only associated with directions (without the $e^{-jkr} / r$ term).
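A compact sketch of Eq. (41): steer and sum the per-beam far fields with the phase factors of Eq. (39). `E_st` and `H_st` are hypothetical arrays of per-beam far-field amplitudes for the chosen $\omega_o$:

```python
import numpy as np

def synthesize_far_field(E_st, H_st, p_st, omega_i, omega_o, k):
    """Eq. (41): combine per-beam far fields E_st, H_st (shape n x n x 3)
    using the steering phases exp(jk p_st . (omega_i + omega_o)),
    where p_st (n x n x 3) holds the beam centers on the surface."""
    phase = np.exp(1j * k * np.tensordot(p_st, omega_i + omega_o, axes=([2], [0])))
    E = np.sum(phase[..., None] * E_st, axis=(0, 1))
    H = np.sum(phase[..., None] * H_st, axis=(0, 1))
    return E, H
```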

Finally, we can compute the surface BRDF value as

$$
f_r\left(\omega_i, \omega_o\right)=\frac{\frac{1}{2}\left|\mathbf{E}\left(\omega_i, \omega_o\right) \times \mathbf{H} \left(\omega_i, \omega_o\right)^*\right|}{\Phi_i \cos \theta_r}
\tag{42}
$$

where the incident power $\Phi_i$ is computed by integrating the incident irradiance over the surface:

$$
\Phi_i=\frac{1}{2} \int_S\left|\left[\mathbf{E}_i\left(\mathbf{r}^{\prime}\right) \times \mathbf{H}_i\left(\mathbf{r}^{\prime}\right)^*\right] \cdot \mathbf{n}\right| d \mathbf{r}^{\prime}
\tag{43}
$$

where $\mathbf{n}$ is the surface normal at the macro scale ( $+\mathbf{z}$ ). Note that Eq. 42 and Eq. 43 can also be applied in single simulations, where $\Phi_i$ is computed from a single Gaussian beam.
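A final sketch of Eqs. (42)-(43), assuming the far-field $\mathbf{E}, \mathbf{H}$ from Eq. (41) and the incident fields sampled on the surface are available (all array names are mine):

```python
import numpy as np

def brdf_value(E_far, H_far, Phi_i, theta_r):
    """Eq. (42): BRDF from the direction-only far-field amplitudes."""
    scattered_intensity = 0.5 * np.linalg.norm(np.cross(E_far, np.conj(H_far)))
    return scattered_intensity / (Phi_i * np.cos(theta_r))

def incident_power(E_i, H_i, normal, dA):
    """Eq. (43): integrate the incident irradiance over the surface samples.
    E_i, H_i: arrays of shape (num_samples, 3); dA: area per sample."""
    S = 0.5 * np.cross(E_i, np.conj(H_i))              # complex Poynting vector
    return np.sum(np.abs(S @ normal)) * dA
```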
