Diary of a Path Tracer – Wrapping Up — September 27, 2020

Index: https://cranberryking.com/2020/08/22/diary-of-a-path-tracer-the-beginning/

Intro

In our last post, we took a look at texture sampling and bump mapping. In this post, we’ll be talking about the last few additions that were made to cranberray before it was wrapped up: converting from a recursive path tracer to an iterative path tracer and implementing tone mapping using the ACES tone mapping curves.

Iterative Path Tracing

Converting cranberray from a recursive path tracer to an iterative path tracer was surprisingly simple. Before this change, cranberray would shoot a ray from the camera and run the appropriate shader at the intersection point. This shader would in turn shoot a ray in some arbitrary direction to determine how much light should illuminate that location. This would continue recursively until the ray either reached its maximum recursion depth or terminated by exiting the scene completely and sampling our skybox.

This approach makes sense if you think of every point as reflecting light that it has received from another location. As a result, you need to know how much light you’ve received before you can shade your point.

However, you can instead consider the opposite view: that every point is continuously absorbing light. We can see that the amount of light that the viewer receives is modeled by

light = lightSource * absorb_0 * absorb_1 * absorb_2 ... * absorb_n

With that in mind, we can see that we can simply calculate how much light we absorb at each point without needing to know the quality or the quantity of the light that we are receiving.

Instead of thinking of it as the light source being recursively reflected to the viewer:

light recursive(i)
{
    return emissionAtPoint(i) + absorptionAtPoint(i) * recursive(i + 1);
}

We instead think of it as the light being absorbed at every stage

light iterative()
{
    light = 0;
    absorption = 1;

    for(i = 0; i < maxDepth; i++)
    {
        light += absorption * emissionAtPoint(i);
        absorption *= absorptionAtPoint(i);
    }
    return light;
}

This has some interesting properties!

  • If we notice that our absorption has reached 0, we can stop iterating.
  • We can multi-thread this much more easily than the recursive approach.
  • It has the desirable property of not easily introducing stack overflows.

I also find this version easier to think about, which allows me more flexibility in the solutions I can imagine.
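Fleshing that pseudocode out a little, here is a minimal C sketch of the iterative loop with the early-out mentioned above. The scene-query helpers (scene_intersect, emission_at, absorption_at, next_ray) and the component-wise cv3_mul are hypothetical stand-ins rather than cranberray’s actual API.

// Hypothetical sketch of the iterative loop; scene_intersect, emission_at,
// absorption_at and next_ray stand in for cranberray's real machinery, and
// cv3_mul is assumed to be a component-wise multiply.
cv3 trace_iterative(ray r, uint32_t maxDepth)
{
	cv3 light = (cv3) { 0.0f, 0.0f, 0.0f };
	cv3 absorption = (cv3) { 1.0f, 1.0f, 1.0f };

	for (uint32_t i = 0; i < maxDepth; i++)
	{
		scene_hit hit = scene_intersect(r);
		light = cv3_add(light, cv3_mul(absorption, emission_at(hit)));
		absorption = cv3_mul(absorption, absorption_at(hit));

		// Early out: once our accumulated absorption is ~0, no later bounce
		// can contribute any visible light.
		if (absorption.x + absorption.y + absorption.z < 0.0001f)
			break;

		r = next_ray(hit);
	}
	return light;
}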

Tone Mapping

I will only touch briefly on tone mapping; I find that [1], [2] and [3] already provide excellent views on this topic.

Cranberray makes use of the histogram-based average luminance calculation alongside the ACES tone mapping operator. You can find more information about these in the referenced articles.
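For reference, one widely quoted single-function approximation of the ACES curve (Krzysztof Narkowicz’s fit, which [3] covers) looks like the following; cranberray’s exact operator may differ in detail, this is just the commonly used fit applied per channel after exposure.

#include <math.h>

// Narkowicz's ACES filmic approximation; maps scene-referred values into a
// displayable [0, 1] range with a filmic shoulder and toe.
static float aces_approx(float x)
{
	float mapped = (x * (2.51f * x + 0.03f)) / (x * (2.43f * x + 0.59f) + 0.14f);
	return fminf(fmaxf(mapped, 0.0f), 1.0f); // clamp to [0, 1]
}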

However, one portion that I would like to talk about is why we are using the geometric mean to calculate the average luminance and hopefully provide some intuitions about the geometric mean.

As you can see in [1], the luminance histogram stores the logarithms of our luminance values instead of their raw values. We sum up the logarithms, divide by the total number of values and convert the result back to a luminance.

This is the definition of the geometric mean. You can calculate the geometric mean either through repeated multiplication:

\sqrt[n]{i_0*i_1*...*i_n}

Or by using the logarithmic identities:

e^{\frac{1}{n}(ln(i_0)+ln(i_1)+...+ln(i_n))}
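As a quick sketch (not cranberray’s histogram code), the log form is the one that falls out of a histogram naturally, since it only ever has to accumulate a sum:

#include <math.h>
#include <stdint.h>

// Geometric mean via the logarithmic identity: average the logs, then
// exponentiate. Assumes all values are positive.
static float geometric_mean(float const* values, uint32_t count)
{
	float logSum = 0.0f;
	for (uint32_t i = 0; i < count; i++)
	{
		logSum += logf(values[i]);
	}
	return expf(logSum / (float)count);
}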

It took me quite some time to understand why the geometric mean would be a desirable value when doing tone mapping. Then, I realized that it is desirable when you are speaking in terms of multiplicative properties.

If I’m speaking of light, I’m thinking of the intensity either being twice as bright, or half as bright. In most scenarios, I don’t necessarily care if something is 2 units brighter versus 2 units dimmer. The center value of something that is twice as bright and half as bright is the geometric mean.

You can derive this by imagining that we have our center value x.

Imagine that we have a set of numbers, 0.5x, 1x, 2x and I would like to determine the value of x. If we repeatedly multiply our values, we get 0.5*1*2*x^3=x^3 and we can then retrieve our value by taking the cube root.

Generalizing, given 3 variables

y,z,w

where

y=ax, z=bx, w=cx

and

a*b*c = 1 (1)

Then we can find a value for x by

\sqrt[3]{y*z*w}=m

since

\sqrt[3]{ax*bx*cx}=m

\sqrt[3]{a*b*c*x^3}=m

given (1), then

\sqrt[3]{x^3}=m

x=m

As an example, say we have the values 0.2, 0.7, 1.3. If we take the geometric mean we get \sqrt[3]{0.2*0.7*1.3} which is 0.56670511081. Now if we divide our original values by that value we get 0.35291723364, 1.23521031776 and 2.2939620187 which when multiplied together give 1!

In terms of desirable properties, let’s take a look at a more visual example.

These images show the difference between using the arithmetic mean to normalize our values and using the geometric mean to normalize our values. The top row is our arithmetic mean and the bottom row is our geometric mean.

Our colors are simply generated by starting with 2, 4, 8, 16, 32, … 2^n and calculating their arithmetic and geometric means, then dividing each value by the respective mean.

In our first example we have 2, 4 and 8. We divide by 4.6666 with the arithmetic mean and divide by 4 with the geometric mean.

You’ll notice that as our numbers get larger and larger, the arithmetic mean slowly loses all detail in the low range of our values and only maintains the brightest values at the edges.

Notice also that the geometric mean remains centered despite the ever growing magnitude of our values. This is desirable since most of our image would become very dark, very quickly.

Conclusion

That’s it for this series! Our next series will likely focus on a completely different topic, possibly in the world of real-time rendering. Cranberray was an excellent experience in various areas of path tracing. I would love to come back to it someday. Until next time, happy coding!

Resources

[1] http://alextardif.com/HistogramLuminance.html

[2] https://bruop.github.io/exposure/

[3] https://64.github.io/tonemapping/

Diary of a Path Tracer – Texture Sampling And Bump Mapping — September 20, 2020

Index: https://cranberryking.com/2020/08/22/diary-of-a-path-tracer-the-beginning/

Intro

In our last post, we looked at the various utilities used by cranberray. In this post, we’re going to return to the world of graphics and take a look at cranberray’s texture sampling and bump mapping strategy.

Texture Sampling

Cranberray takes inspiration from graphics APIs and splits texture sampling into 2 parts: the sampler and the texture.

The sampler in cranberray stores information such as the sample type (nearest, bilinear) and some other useful flags such as gamma_to_linear to convert textures stored in sRGBA to linear RGBA. Textures store pixel data and simply support sampling a single point.

Sampling a texture in cranberray is quite simple: you take your UV coordinates and convert them to an array index.

static cv4 sample_r_u8(cv2 uv, uint8_t* cran_restrict image, uint32_t width, uint32_t height, uint32_t offsetX, uint32_t offsetY)
{
	float readY = uv.y * (float)height;
	float readX = uv.x * (float)width;

	uint32_t y = (uint32_t)floorf(readY) + offsetY;
	y = y >= height ? height - 1 : y;
	uint32_t x = (uint32_t)floorf(readX) + offsetX;
	x = x >= width ? width - 1 : x;
	uint32_t readIndex = y * width + x;

	float f = (float)image[readIndex] / 255.0f;
	return (cv4) { f, f, f, f };
}

Once you can retrieve distinct pixel values, you can either sample nearest or use bilinear interpolation between 4 different samples.

if (sampleType == sample_type_nearest)
{
	color = samplers[texture->format](uv, texture->data, texture->width, texture->height, 0, 0);
}
else if (sampleType == sample_type_bilinear)
{
	cv4 s00 = samplers[texture->format](uv, texture->data, texture->width, texture->height, 0, 0);
	cv4 s01 = samplers[texture->format](uv, texture->data, texture->width, texture->height, 0, 1);
	cv4 s10 = samplers[texture->format](uv, texture->data, texture->width, texture->height, 1, 0);
	cv4 s11 = samplers[texture->format](uv, texture->data, texture->width, texture->height, 1, 1);

	float wf = cf_frac((float)texture->width * uv.x);
	float hf = cf_frac((float)texture->height * uv.y);
	wf = wf < 0.0f ? 1.0f + wf : wf;
	hf = hf < 0.0f ? 1.0f + hf : hf;
	color = (cv4)
	{
		cf_bilinear(s00.x, s01.x, s10.x, s11.x, wf, hf),
		cf_bilinear(s00.y, s01.y, s10.y, s11.y, wf, hf),
		cf_bilinear(s00.z, s01.z, s10.z, s11.z, wf, hf),
		cf_bilinear(s00.w, s01.w, s10.w, s11.w, wf, hf)
	};
}
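The snippet above leans on two small helpers that aren’t shown; here are plausible implementations (cranberray’s own cf_frac and cf_bilinear may differ in detail):

#include <math.h>

// Truncation-based fractional part; note it goes negative for negative
// inputs, which is why the sampling code above wraps negative results.
static float cf_frac(float f)
{
	return f - truncf(f);
}

// Standard bilinear blend: lerp across x for both rows, then lerp across y.
static float cf_bilinear(float s00, float s01, float s10, float s11, float wf, float hf)
{
	float r0 = s00 + (s10 - s00) * wf; // row at y
	float r1 = s01 + (s11 - s01) * wf; // row at y + 1
	return r0 + (r1 - r0) * hf;
}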

And that’s pretty much it for how cranberray samples its textures!

Bump Mapping

Bump mapping is a bit more fun than texture sampling.

A very important and interesting point about the tangent frame is that it is the set of vectors that represent the basis for our texture coordinates. Originally, I believed that any tangent and bitangent could be selected as long as they were orthonormal. In the context of bump mapping, this is not correct. We actually want to select our tangent and bitangent so as to have them represent the flow of the U and V coordinates in space. ([2] has an excellent visualization for this)

To construct your tangent and bitangent, you can imagine that your triangle edge is a construction of the tangent and bitangent vectors.

Once you know you can construct your edges from some contribution of the tangent and bitangent, you can solve for your tangent and bitangent vectors with a little bit of algebra.

// e0=du0T+dv0B (1)
// e1=du1T+dv1B (2)
// solve for B
// (e0-du0T)/dv0
// plug into (2)
// e1=du1T+dv1(e0-du0T)/dv0
// solve for T
// e1=du1dv0T/dv0+dv1e0/dv0-dv1du0T/dv0
// dv0e1=du1dv0T+dv1e0-dv1du0T
// dv0e1-dv1e0=du1dv0T-dv1du0T
// dv0e1-dv1e0=T(du1dv0-dv1du0)
// T = (dv0e1-dv1e0)/(dv0du1-dv1du0)

Calculating tangent frames caused a surprising amount of issues with degenerate triangles and NaNs. Be careful here.
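As a concrete, hedged sketch of that solve, with the degenerate-UV guard those NaN issues call for (the names here are illustrative rather than cranberray’s actual routine):

#include <math.h>

typedef struct { float x, y, z; } vec3; // stand-in for cranberray's cv3
typedef struct { float u, v; } vec2;

static vec3 vec3_sub(vec3 a, vec3 b) { return (vec3) { a.x - b.x, a.y - b.y, a.z - b.z }; }
static vec3 vec3_scale(vec3 a, float s) { return (vec3) { a.x * s, a.y * s, a.z * s }; }

// Solves e0 = du0*T + dv0*B, e1 = du1*T + dv1*B for T, exactly as derived in
// the comments above. Returns 0 when the UV mapping is degenerate.
static int triangle_tangent(vec3 p0, vec3 p1, vec3 p2, vec2 uv0, vec2 uv1, vec2 uv2, vec3* tangent)
{
	vec3 e0 = vec3_sub(p1, p0);
	vec3 e1 = vec3_sub(p2, p0);
	float du0 = uv1.u - uv0.u, dv0 = uv1.v - uv0.v;
	float du1 = uv2.u - uv0.u, dv1 = uv2.v - uv0.v;

	// T = (dv0*e1 - dv1*e0) / (dv0*du1 - dv1*du0)
	float det = dv0 * du1 - dv1 * du0;
	if (fabsf(det) < 1e-8f)
		return 0; // degenerate triangle/UVs; this is where the NaNs come from

	*tangent = vec3_scale(vec3_sub(vec3_scale(e1, dv0), vec3_scale(e0, dv1)), 1.0f / det);
	return 1;
}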

And now that you have your tangent and bitangent vectors, you can “bump” them using the partial derivatives of a height map. To bump your tangent and bitangent vectors, you add a scaled normal to the tangent and bitangent vectors and recalculate the normal using the cross product of said vectors.

cv3 normal = cv3_normalize(inputs.normal);
{
	cv4 partialDerivative = sampler_sample(&scene->textureStore, microfacetData.bumpSampler, microfacetData.bumpTexture, inputs.uv);
	normal = cv3_cross(cv3_add(inputs.tangent, cv3_mulf(normal, partialDerivative.x)), cv3_add(inputs.bitangent, cv3_mulf(normal, partialDerivative.y)));
	normal = cv3_normalize(normal);
	cran_assert(cv3_dot(normal, inputs.normal) >= 0.0f);
}

Conclusion

And that’s it! This part of cranberray is quite simple but was fun to write! Next time we’ll look at the iterative path tracing of cranberray. Until then, happy coding!

Future Work

I would like to add a few things to the texture sampling system as it’s quite barebones: mip map selection, trilinear interpolation, as well as a texture caching system. (Currently cranberray keeps all textures resident in memory.)

References

[1] http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

[2] https://learnopengl.com/Advanced-Lighting/Normal-Mapping

Diary of a Path Tracer – GGX-Smith and Multiple Importance Sampling — September 13, 2020

Index: https://cranberryking.com/2020/08/22/diary-of-a-path-tracer-the-beginning/

Intro

In our last post, we looked at cranberray’s sampling strategy and its implementation of N-Rooks sampling. In this post, we’re going to look at cranberray’s BRDF implementations and multiple importance sampling.

Shading Pipeline

Cranberray’s shading pipeline is very simple. When loading a mesh, ranges of triangles (or submeshes) are tagged to use a specific material. When intersecting the geometry, cranberray will use the triangle’s index to look up which material it should use for said triangle. Each material has an associated shader and appropriate data object. Once we know which material to use, we call the function associated with that shader and light our pixel. The heavy lifting happens inside the shader.

The primary lighting shader in cranberray is shader_microfacet. This function uses Lambert as its diffuse BRDF and GGX-Smith as its specular BRDF. It selects and blends both using multiple importance sampling.

In this post we’ll be taking a look at the magic behind the shader starting with our Lambert BRDF!

Understanding The Lambertian BRDF

For this post, we’ll assume that there is some familiarity with the fact that the Lambert BRDF is simply the Albedo of the surface divided by \pi.

However, one bit I’d like to touch on is that a Lambertian reflector emits energy at a constant rate in every direction.

At least, that was how I understood a Lambertian reflector. This is incorrect. A Lambertian reflector emits energy at a constant rate in every direction per unit area.

The difference here is that the first definition describes radiant intensity and the second defines radiance. [3]

What that means is that the amount of energy emitted is proportional to the angle at which the surface is viewed. Wikipedia was quite insightful here [1].

Imagine a rectangular Lambertian reflector of 1 meter on each side that reflects 1 unit of radiant intensity.

Notice that in that case, our Lambertian reflector is reflecting L = \frac{1}{1} = 1 units of radiance.

Now, imagine that we’re looking at the reflector at an angle of 60 degrees.

Notice that the perceived size of our reflector is now 0.5 meters squared. If our reflector were to emit 1 unit of radiant intensity when viewed at this angle, it would emit L = \frac{1}{0.5} = 2 units of radiance. Twice as much energy per unit area! Meaning that if we looked at it at an angle, it would emit the same amount of energy for less area in our field of view making it brighter.

As a result, you can imagine that this is problematic when we’re looking to render a surface that appears uniformly bright when viewed from any direction.

A Lambertian reflector should reflect constant radiance. As a result, the energy reflected per direction by our material should be proportional to the change in size of our reflector when viewed at an angle.

I’ll save you the trigonometry and tell you that this factor is cos(\theta) where \theta is the zenith angle of our viewer.

As a result, you can see that to emit constant radiance, our reflector’s radiant intensity should be reduced by cos(\theta) and if we do that, our reflector would emit 0.5 units of radiant intensity and as a result would emit L = \frac{0.5}{0.5} = 1 unit of radiance.

Now, like me, you might be thinking “If we need a cosine here to make our radiant intensity proportional to the foreshortening of our area, why don’t we multiply our emitted light by the cosine factor when rendering?”

The reason is that this cosine factor is relevant when we’re trying to calculate radiance from radiant intensity and the area being shaded. However, when we’re rendering, the result of our shading math is radiance not radiant intensity. If we wanted to instead calculate the radiant intensity, we would have to integrate across our surface.

I = \int_A L dA

Taking our previous surface as an example, when we’re shading it in our renderer, we’re asking “Given this irradiance, what is its radiance?”. In our previous example, our surface was being shaded with an irradiance of \pi (conveniently skipping over why here) and as a result, our radiance was 1. If we were to ask what the radiant intensity was in any given direction, we would have to multiply the radiance by the projected area of our surface. At an angle of 60 degrees, this would be 0.5, making our radiant intensity I = L * A = 1 * 0.5 = 0.5.

This was a long winded way to attempt to explain a neat and surprising (to me) property of Lambertian reflectors.

Smith-GGX and microfacet BRDFs

Aside from our Lambert BRDF for diffuse, cranberray makes use of the Smith-GGX specular formulation. This formulation is very common and well documented. I refer the interested reader to [6] for an overview of the topic.

I won’t explain microfacet BRDFs here, but I highly recommend [6] and [9] for a derivation of the specular microfacet BRDF formula, and [8] for more info about the origins of the formula.

As presented in the heading, we’re making use of the Smith Uncorrelated function for our shadowing function and GGX for our NDF. [9]

There’s nothing particularly novel about this portion of the renderer. I recommend the interested reader see some of the linked references for information on implementing a similar shading model.

Importance Sampling

At this point in our rendering, we have a specular BRDF and a diffuse BRDF however we now need to decide on where to sample our ray.

The simplest method for selecting a ray is to uniformly select a point on the hemisphere centered about the normal of our surface.

This works well to get started but can converge very slowly, especially in scenarios with sharp highlights, which often happen with smooth surfaces.

If we sample only a few times, we might miss all the important details.

As a result, we want to make use of something called importance sampling. Simply put, importance sampling is the idea of trying to sample the most “important” parts of a function.

I assume the reader has some familiarity with the Monte Carlo estimation method. If not, I suggest going to [15]. Their treatment of importance sampling and Monte Carlo estimators is excellent.

To sample the most important parts of our scene, we want to take samples where we expect the most contributions to our image would live. We unfortunately don’t know exactly where that is, but we can guess. And to guess, we can use an approximate function to choose what direction we want to send our ray.

Let’s imagine our basic Lambert BRDF, we know that rays that are closest to our normal will contribute the most light to our illumination. As a result, we can try to send more rays towards our normal’s direction with some falloff. [16]

To select a ray, we want to convert our PDF to a CDF and randomly select a point on our CDF using a uniform random number. [13] This allows us to select a random number for the specified distribution using a random number with a uniform distribution.
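As a worked instance of that inversion (it’s the Lambert case from [16], and it lands exactly on the sampling code below): the cosine lobe’s PDF over the zenith angle is

p(\theta) = sin(2\theta)

its CDF is

F(\theta) = \int_0^{\theta} sin(2t) dt = \frac{1-cos(2\theta)}{2}

and setting F(\theta) equal to a uniform random number \xi and solving for \theta gives

\theta = \frac{1}{2}acos(1-2\xi)

which is the acosf(1.0f - 2.0f*r1) * 0.5f you’ll see in hemisphere_surface_random_lambert.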

Once we’ve selected our value, we can calculate the PDF and plug it into our Monte Carlo estimator.

A potential downside of selecting an approximate function is that it might not accurately reflect the illumination in the scene, causing us to increase our variance instead of reducing it. [15]

Cranberray uses the BRDF functions as the approximate representation of the illumination in the scene. This is easy to implement and also quite effective since there tends to be quite a few rays that would effectively contribute nothing at all to the image if we sampled uniformly.

For Lambert, I use the derivation from a previous post [16].

And for GGX I use the derivation found here [13].

static cv3 hemisphere_surface_random_lambert(float r1, float r2)
{
	float theta = acosf(1.0f - 2.0f*r1) * 0.5f;
	float cosTheta = cosf(theta);
	float sinTheta = sinf(theta);
	float phi = cran_tao * r2;

	return (cv3) { sinTheta*cosf(phi), sinTheta*sinf(phi), cosTheta };
}

static float lambert_pdf(cv3 d, cv3 n)
{
	return cv3_dot(d, n) * cran_rpi;
}

static cv3 hemisphere_surface_random_ggx_h(float r1, float r2, float a)
{
	float cosTheta = sqrtf((1.0f-r1)*cf_rcp(r1*(a*a-1.0f)+1.0f));
	float sinTheta = sqrtf(1.0f - cosTheta*cosTheta);
	float phi = cran_tao * r2;
	return (cv3) { sinTheta*cosf(phi), sinTheta*sinf(phi), cosTheta };
}

static float ggx_pdf(float roughness, float hdotn, float vdoth)
{
	float t = hdotn*hdotn*roughness*roughness - (hdotn*hdotn - 1.0f);
	float D = (roughness*roughness)*cf_rcp(t*t)*cran_rpi;
	return D*hdotn*cf_rcp(4.0f*fabsf(vdoth));
}

Multiple Importance Sampling and blending our BRDFs

With these importance functions, we need a way to select between our diffuse and our specular BRDF and a way to blend between them. This is where Multiple Importance Sampling comes in.

Multiple Importance Sampling allows us to blend our functions using weights assigned to each PDF. [14]

In cranberray, we use a simple strategy for determining if we select from either our specular BRDF or our diffuse BRDF. We use the Fresnel factor for our geometric normal and select our specular BRDF when our random number is less than the Fresnel factor and our diffuse BRDF in the other case.

float geometricFresnel = cmi_fresnel_schlick(1.0f, microfacetData.refractiveIndex, normal, viewDir);
geometricFresnel = fmaxf(geometricFresnel, microfacetData.specularTint.r * gloss); // Force how much we can reflect at a minimum
float weights[distribution_count] =
{
	[distribution_lambert] = 1.0f - geometricFresnel,
	[distribution_ggx] = geometricFresnel
};

bool reflected = random01f(&context->randomSeed) < weights[distribution_ggx];

Once we’ve selected our BRDF and sampled from it, we want to blend our results with our previous sampling results. This is where multiple importance sampling comes in. (See [17] for an excellent treatment on the topic)

To see why, let’s imagine our scene as a simplified function with an area of 1.2 units, and imagine that this function is a product of 2 simpler functions.

f(X) = g(X)h(X)

Let’s imagine that we select, uniformly at random, one of those functions as the basis for our importance function, and we select g(x).

To get our value using importance sampling, we need to divide by our PDF and the chance of selecting that PDF as our distribution (Which is 0.5 in this example).

If we were to use this single sample as our estimate using p_g as our PDF and the values g(x) = 0.1, h(x) = 2 and p_g(x) = 0.1 we would get y = \frac{g(x)h(x)}{0.5*p_g(x)} = \frac{0.1*2}{0.5*0.1} = \frac{0.2}{0.05} = 4 which is a poor estimate of our area of 1.2.

If we were to use p_h(x) and where p_h(x) = 1 as our importance function we would get y = \frac{g(x)h(x)}{0.5*p_h(x)} = \frac{0.1*2}{0.5*1} = \frac{0.2}{0.5} = 0.4 which is a closer estimate of our area of 1.2.(Admittedly, this example is a bit contrived)

The issue presented here (and in [17]) is that because our sample location had a low likelihood of being selected (0.1) but our second function h(x) had a relatively high value, our estimate becomes very large as well since 2/0.1 is quite large. It’s important to note that if we were to continue taking samples, our estimator would still converge to 1.2.

Multiple Importance Sampling suggests a method to reduce the overall impact of these spikes to reduce the variance of our estimates by using a set of appropriate weighting functions instead of simply using our PDF directly as we just did.

The one sample model presented in [14] is what we’ll be using for this example, as it’s simpler to think about.

Instead of only dividing by our PDF, we want to apply a weighting function to our calculation.

F = \frac{w_I(X)f(X)}{c_Ip_I(X)}

Where I represents the index of our selected PDF, w_I represents the weight assigned to our PDF, c_I represents the probability that we select that PDF and p_I is our PDF function.

In our previous example, we did not have w_I(X). This function does all the magic.

[14] presents the balance heuristic as an option for a weighting function which has the form

w_i(x) = \frac{c_i p_i(x)}{\sum_k c_k p_k(x)}

Notice that this is simply a weighted average of our probability density functions.

If we add this function to our single sample model we get

F = \frac{c_i p_i(x)}{\sum_k c_k p_k(x)} \frac{f(x)}{c_ip_i(x)}

F = \frac{f(x)}{\sum_k c_k p_k(x)}

If we look at our previous example using p_g(x) as our PDF, we can make use of the same values here (notice that our final formulation no longer divides by p_g(x) individually).

c_g = 0.5, c_h = 0.5, p_g(x) = 0.1, g(x) = 0.1, p_h(x) = 1 and h(x) = 2

y = \frac{g(x)h(x)}{0.5*p_g(x)+0.5*p_h(x)}

y = \frac{0.1*2}{0.5*0.1+0.5*1}

y = \frac{0.2}{0.55}

y = 0.36

Although this is not as good as our estimate of 0.4 from earlier, it is much better than our estimate of 4 when using p_g(x) alone.

We can see that this is unbiased by noticing that

F(x) = \frac{g(x)h(x)}{c_g p_g(x) + c_h p_h(x)}

E[F(x)] = c_g p_g(x) \frac{g(x)h(x)}{c_g p_g(x) + c_h p_h(x)} + c_h p_h(x) \frac{g(x)h(x)}{c_g p_g(x) + c_h p_h(x)}

E[F(x)] = (c_g p_g(x) + c_h p_h(x)) \frac{g(x)h(x)}{c_g p_g(x) + c_h p_h(x)}

E[F(x)] = g(x)h(x)

(I took some shortcuts here, I recommend reading [14] for proofs)

An interesting property of the balance heuristic is that our values are bounded.

Notice that since

\frac{g(x)}{c_g p_g(x) + c_h p_h(x)} <= \frac{g(x)}{c_g p_g(x)}

then

\frac{g(x)h(x)}{c_g p_g(x) + c_h p_h(x)} <= \frac{g(x)h(x)}{c_g p_g(x)}

as well as

\frac{g(x)h(x)}{c_g p_g(x) + c_h p_h(x)} <= \frac{g(x)h(x)}{c_h p_h(x)}

Where both equations on the right were the formulas that we used originally when sampling either with p_g(x) and p_h(x) (Where c_g=0.5 and c_h = 0.5).

Meaning that our balanced sampling is bounded by the smaller of the 2 values generated by both of our PDFs. Neat! (Note that with uniform sampling, our value would simply be 0.2)

All that to say, this is our weighting code:

float sum = 0.00001f;
for (uint32_t i = 0; i < distribution_count; i++)
{
	sum += weights[i] * PDFs[i];
}
weight = cf_rcp(sum);

And that’s it! That was a lot of words for a very little chunk of code.

Conclusion

This post is definitely a large one. But it was a lot of fun to learn enough about multiple importance sampling and the Lambertian BRDF to at least convey some sense of it in this blog post. I hope you enjoyed it! Next time we’ll be taking a look at the much simpler Obj loading code in cranberray. Until next time, happy coding!

Future Work

I’d like to make use of a better importance sampling function for Smith-GGX as seen in [10]. I also wonder if it’s possible to only select half vectors that reflect into the upper half of the hemisphere instead of requiring us to discard rays generated in the lower half of our sphere.

I would also like to make more effective use of multiple importance sampling in the path tracer as a whole. At the moment, it is only used to combine our Lambertian importance function and our GGX-Smith importance function.

References

[1] https://en.wikipedia.org/wiki/Lambert%27s_cosine_law

[2] https://www.usna.edu/Users/physics/mungan/_files/documents/Publications/BRDFreview.pdf

[3] https://en.wikipedia.org/wiki/Irradiance#SI_radiometry_units

[4] http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Radiometry.html

[5] http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Radiometry.html#x1-IrradianceandRadiantExitance

[6] https://www.gdcvault.com/play/1024478/PBR-Diffuse-Lighting-for-GGX

[7] http://www.pbr-book.org/3ed-2018/Reflection_Models/Microfacet_Models.html

[8] https://inst.eecs.berkeley.edu/~cs283/sp13/lectures/cookpaper.pdf

[9] https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf

[10] https://hal.archives-ouvertes.fr/hal-01509746/document

[11] https://schuttejoe.github.io/post/ggximportancesamplingpart1/

[12] https://twitter.com/KostasAAA/status/1246936564556537865?s=20

[13] https://agraphicsguy.wordpress.com/2015/11/01/sampling-microfacet-brdf/

[14] https://graphics.stanford.edu/courses/cs348b-03/papers/veach-chapter9.pdf

[15] https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/monte-carlo-methods-in-practice/variance-reduction-methods

[16] https://cranberryking.com/2020/06/07/derivation-importance-sampling-the-cosine-lobe/

[17] http://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/Importance_Sampling.html

Diary of a Path Tracer – SSE optimized BVH Traversal — September 3, 2020

Index: https://cranberryking.com/2020/08/22/diary-of-a-path-tracer-the-beginning/

Intro

In our last post, we looked at building our BVH using SAH as our guiding heuristic. Now it’s time to look at how we intersect that BVH.

High Level

Querying the BVH is actually quite simple. We test our ray against our parent node; if it intersects, we test the ray against both of its children. If the ray intersects either of its children, we test that child’s children. Once we reach a leaf node, we add it to our list of intersected nodes.

This sounds perfect for recursion, but recursion never scales particularly well. Instead, a queue or stack is a more effective way to track the nodes to be tested.

There are three pieces that bring the BVH traversal together: the math library, the AABB intersection code and the (roughly) branchless intersection response code.

We’ve already seen the math library in a previous post; as a result, let’s take a dive into the intersection routine!

Ray/AABB Intersection Routine

There are quite a few excellent sources on the ray/AABB intersection routine [1][2][3]. We’ll do a small dive into how we get to the final implementation.

The intersection routine is based on the slab intersection method presented in [3]. We want to define our AABB as a set of 3 slabs, one per axis. A slab is simply represented by a pair of planes, the slab being the space in between them.

In order to determine if we’re intersecting the AABB, we want to determine if our ray passes through the shared intersection of each slab.


In order to determine that, we want to determine if our ray enters each slab before it exits any of the other slabs.

A simple approach to this could be:

slabEntryX = intersect(ray, startPlaneX)
slabExitX = intersect(ray, endPlaneX)
slabEntryY = intersect(ray, startPlaneY)
slabExitY = intersect(ray, endPlaneY)

if(slabEntryX is greater than slabExitY or slabEntryY is greater than slabExitX)
    return missed collision

return hit collision

As a result, our core approach is to find whether any entry value is greater than our exit values; if so, we’re not intersecting the slab.

To intersect a ray and a plane, we need to define our ray as:

\vec{p} = \vec{a}*t+\vec{b}

and our plane as

0 = \vec{n}.\vec{p}-d

We can then plug our ray function into our plane function to get:

0 = \vec{n}.(\vec{a}*t+\vec{b})-d

And solving for t

d = \vec{n}.\vec{a}*t+\vec{n}.\vec{b}

d-\vec{n}.\vec{b} = \vec{n}.\vec{a}*t

\frac{d-\vec{n}.\vec{b}}{\vec{n}.\vec{a}} = t

Now that we have t, we can calculate it for each plane. Since our planes are axis aligned, the intersection math gets quite simple. Here is an example for our x axis.

\frac{d-\vec{n}.\vec{b}}{\vec{n}.\vec{a}} = t

where n = (1, 0, 0)

\frac{d_x-b_x}{a_x} = t

And repeat for each axis.
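Before looking at the 4-wide version, here’s a scalar single-ray/single-AABB version of the same slab test (illustrative, in the spirit of [1]; this is not cranberray’s actual scalar code):

#include <math.h>
#include <stdbool.h>

// Slab test: track the latest slab entry and the earliest slab exit across
// all three axes; the ray misses as soon as an entry passes an exit.
static bool ray_aabb_intersect(float const rayO[3], float const rayD[3], float rayMin, float rayMax,
	float const boundMin[3], float const boundMax[3])
{
	for (int axis = 0; axis < 3; axis++)
	{
		float invD = 1.0f / rayD[axis];
		float t0 = (boundMin[axis] - rayO[axis]) * invD;
		float t1 = (boundMax[axis] - rayO[axis]) * invD;
		if (invD < 0.0f) // a negative direction flips our entry and exit
		{
			float temp = t0; t0 = t1; t1 = temp;
		}
		rayMin = t0 > rayMin ? t0 : rayMin; // latest entry so far
		rayMax = t1 < rayMax ? t1 : rayMax; // earliest exit so far
		if (rayMin > rayMax)
			return false;
	}
	return true;
}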

Now let’s look at our intersection code:

cv3l rayOLanes = cv3l_replicate(rayO);
cv3l invD = cv3l_replicate(cv3_rcp(rayD));
cv3l t0s = cv3l_mul(cv3l_sub(aabbMin, rayOLanes), invD);
cv3l t1s = cv3l_mul(cv3l_sub(aabbMax, rayOLanes), invD);

cv3l tsmaller = cv3l_min(t0s, t1s);
cv3l tbigger  = cv3l_max(t0s, t1s);

cfl rayMinLane = cfl_replicate(rayMin);
cfl rayMaxLane = cfl_replicate(rayMax);
cfl tmin = cfl_max(rayMinLane, cfl_max(tsmaller.x, cfl_max(tsmaller.y, tsmaller.z)));
cfl tmax = cfl_min(rayMaxLane, cfl_min(tbigger.x, cfl_min(tbigger.y, tbigger.z)));
cfl result = cfl_less(tmin, tmax);
return cfl_mask(result);

This code looks different from our math, but its essence is the same.

In this code we’re testing 4 AABBs against a single ray instead of one ray against one AABB.

Notice that in this line:

cv3l t0s = cv3l_mul(cv3l_sub(aabbMin, rayOLanes), invD);

We’re implementing \frac{d_x-b_x}{a_x} = t where rayOLanes is b_x, b_y, b_z, invD is \frac{1}{a_x}, \frac{1}{a_y}, \frac{1}{a_z} and aabbMin is d_x, d_y, d_z.

The next few lines:

cv3l tsmaller = cv3l_min(t0s, t1s);
cv3l tbigger = cv3l_max(t0s, t1s);

Are there to account for the situation where our ray direction is negative. If our ray direction is negative on one axis, t0 and t1 will actually be flipped. As a result, we want to simply make sure that t0 is the smaller one and t1 is the larger one. [1]

And finally,

cfl tmin = cfl_max(rayMinLane, cfl_max(tsmaller.x, cfl_max(tsmaller.y, tsmaller.z)));
cfl tmax = cfl_min(rayMaxLane, cfl_min(tbigger.x, cfl_min(tbigger.y, tbigger.z)));
cfl result = cfl_less(tmin, tmax);
return cfl_mask(result);

These lines check whether our latest entrance is earlier than our earliest exit for each AABB. If so, cfl_mask will set the appropriate bit for that AABB.

That’s it for our intersection routine! Now we can take a look at what we do with these results.

Intersection Response

The previous intersection code is nicely packaged in the caabb_does_ray_intersect_lanes routine.

In our bvh traversal, the call to it looks like this:

uint32_t intersections = caabb_does_ray_intersect_lanes(rayO, rayD, rayMin, rayMax, boundMins, boundMaxs);

Where intersections holds the bits of the AABBs that intersect with our ray.

One approach to handling these bits is to simply loop through each AABB, determine if its bit is set and add its children volumes to the test queue or to the final list of leaf nodes.

However, that’s no fun.

Instead it’s a lot more fun to make this code branchless using some SSE intrinsics!

Fair warning, this code can be complicated if you aren’t used to reading SSE intrinsics.

#define _MM_SHUFFLE_EPI8(i3,i2,i1,i0) _mm_set_epi8(i3*4+3,i3*4+2,i3*4+1,i3*4,i2*4+3,i2*4+2,i2*4+1,i2*4,i1*4+3,i1*4+2,i1*4+1,i1*4,i0*4+3,i0*4+2,i0*4+1,i0*4)
__m128i shuffles[16] =
{
	_MM_SHUFFLE_EPI8(0,0,0,0), _MM_SHUFFLE_EPI8(0,0,0,0), _MM_SHUFFLE_EPI8(0,0,0,1), _MM_SHUFFLE_EPI8(0,0,1,0), // 0000, 0001, 0010, 0011
	_MM_SHUFFLE_EPI8(0,0,0,2), _MM_SHUFFLE_EPI8(0,0,2,0), _MM_SHUFFLE_EPI8(0,0,2,1), _MM_SHUFFLE_EPI8(0,2,1,0), // 0100, 0101, 0110, 0111
	_MM_SHUFFLE_EPI8(0,0,0,3), _MM_SHUFFLE_EPI8(0,0,3,0), _MM_SHUFFLE_EPI8(0,0,3,1), _MM_SHUFFLE_EPI8(0,3,1,0), // 1000, 1001, 1010, 1011
	_MM_SHUFFLE_EPI8(0,0,3,2), _MM_SHUFFLE_EPI8(0,3,2,0), _MM_SHUFFLE_EPI8(0,3,2,1), _MM_SHUFFLE_EPI8(3,2,1,0)  // 1100, 1101, 1110, 1111
};

__m128i queueIndices = _mm_load_si128((__m128i*)testQueueIter);
uint32_t leafLine = bvh->count - bvh->leafCount;
uint32_t childIndexMask;
uint32_t parentIndexMask;
{
	__m128i isParent = _mm_cmplt_epi32(queueIndices, _mm_set_epi32(leafLine, leafLine, leafLine, leafLine));
	parentIndexMask = _mm_movemask_ps(_mm_castsi128_ps(isParent));
	childIndexMask = ~parentIndexMask & 0x0F;

	parentIndexMask = parentIndexMask & intersections;
	childIndexMask = childIndexMask & intersections;
}

uint32_t leafCount = __popcnt(childIndexMask);
uint32_t parentCount = __popcnt(parentIndexMask);
__m128i childIndices = _mm_shuffle_epi8(queueIndices, shuffles[childIndexMask]);
__m128i parentIndices = _mm_shuffle_epi8(queueIndices, shuffles[parentIndexMask]);

union
{
	uint32_t i[4];
	__m128i v;
} indexUnion;

indexUnion.v = childIndices;
for (uint32_t i = 0; i < leafCount; i++)
{
	uint32_t nodeIndex = indexUnion.i[i];
	candidateIter[i] = bvh->jumps[nodeIndex].leaf.index;
}
candidateIter+=leafCount;

indexUnion.v = parentIndices;
for (uint32_t i = 0; i < parentCount; i++)
{
	uint32_t nodeIndex = indexUnion.i[i];
	cran_assert(nodeIndex < bvh->count);
	testQueueEnd[i*2] = bvh->jumps[nodeIndex].jumpIndices.left;
	testQueueEnd[i*2 + 1] = bvh->jumps[nodeIndex].jumpIndices.right;
}
testQueueEnd += parentCount * 2;

Now this looks like a lot, so let’s take a look at it piece by piece. As we presented in the previous post, our leaf nodes are stored at the end of our array. This allows us to determine which nodes are leaf nodes simply by comparing their indices against that boundary. The following bit of code does exactly that:

__m128i isParent = _mm_cmplt_epi32(queueIndices, _mm_set_epi32(leafLine, leafLine, leafLine, leafLine));
parentIndexMask = _mm_movemask_ps(_mm_castsi128_ps(isParent));
childIndexMask = ~parentIndexMask & 0x0F;
parentIndexMask = parentIndexMask & intersections;
childIndexMask = childIndexMask & intersections;

The first line here simply compares our indices against our “leaf boundary”. The result of this tells us if our index is a parent or a leaf node.

Once we know which ones are our leaf nodes, we convert this to a bitmask representing the elements of the vector.

And finally, we mask out the indices that have no intersections associated with them.

The result of this operation is that we have two bitmasks that represents the parents that have intersected the ray and the leaves that have intersected the ray.

Next we want to turn this bit mask into a series of indices.

uint32_t leafCount = __popcnt(childIndexMask);
uint32_t parentCount = __popcnt(parentIndexMask);
__m128i childIndices = _mm_shuffle_epi8(queueIndices, shuffles[childIndexMask]);
__m128i parentIndices = _mm_shuffle_epi8(queueIndices, shuffles[parentIndexMask]);

The first two lines are very simple: they tell us how many leaf nodes have intersections and how many parent nodes do. The next pair of lines requires a little more explaining.

Right now, our queueIndices vector represents the indices to the AABBs that we’ve tested against our rays.

One approach we could implement is to simply loop through every element, test their bits and read the values if those bits are set.

However, what we can do instead is pack all of our active indices to the front of our vector, allowing us to read only the ones that have reported an intersection, reducing our branching at the cost of shuffling our data. That’s what _mm_shuffle_epi8 is doing: it takes a lookup table (shuffles[]) and determines where our values should be moved in the vector to pack them to the front.

Since our masks only contain 4 bits, they have only 16 possible values, making for a very manageable lookup table.

Once we have our indices packed, we simply have to read them.

union
{
    uint32_t i[4];
    __m128i v;
} indexUnion;

indexUnion.v = childIndices;
for (uint32_t i = 0; i < leafCount; i++)
{
    uint32_t nodeIndex = indexUnion.i[i];
    candidateIter[i] = bvh->jumps[nodeIndex].leaf.index;
}
candidateIter+=leafCount;

indexUnion.v = parentIndices;
for (uint32_t i = 0; i < parentCount; i++)
{
    uint32_t nodeIndex = indexUnion.i[i];
    cran_assert(nodeIndex < bvh->count);
    testQueueEnd[i*2] = bvh->jumps[nodeIndex].jumpIndices.left;
    testQueueEnd[i*2 + 1] = bvh->jumps[nodeIndex].jumpIndices.right;
}
testQueueEnd += parentCount * 2;

These parts are relatively self explanatory. We’re reading our indices and doing the appropriate work if our values are leaves or parent nodes. To understand why we’re using a union to convert our SSE vector, I recommend looking into type punning. [5]

Conclusion

That’s it! I don’t really have any future work for this part of cranberray; I’m actually quite happy with the end result.

You can find the code here: https://github.com/AlexSabourinDev/cranberries/blob/5fe9c25e1df23d558b7ef8b5475717d9e67a19fc/cranberray.c#L1233

Next we’ll be looking at the ray tracer’s sampling strategy. Until then, happy coding!

References

[1] https://medium.com/@bromanz/another-view-on-the-classic-ray-aabb-intersection-algorithm-for-bvh-traversal-41125138b525

[2] http://psgraphics.blogspot.com/2016/02/new-simple-ray-box-test-from-andrew.html

[3] http://papers.cumincad.org/data/works/att/67d2.content.pdf

[4] https://realtimecollisiondetection.net/

[5] https://www.cocoawithlove.com/2008/04/using-pointers-to-recast-in-c-is-bad.html

 

Diary of a Path Tracer – BVH Construction Using SAH — August 31, 2020

Index: https://cranberryking.com/2020/08/22/diary-of-a-path-tracer-the-beginning/

Intro

In our last post, we looked at our SIMD-friendly math library. In this post, we’ll be taking a look at how cranberray builds its BVH. For a primer on BVHs, I recommend PBRT’s treatment of the topic [2].

Cranberray’s BVH is built recursively as a binary tree. It starts with the top level node containing all nodes and recursively splits it into 2 children at each level. Originally, the BVH was split simply by selecting the left and right halves of the children nodes. The next approach was to split according to the median of a particular axis (typically the largest one), and now it makes use of the surface area heuristic.

SAH

The Surface Area Heuristic or SAH is a well known strategy to build an efficient BVH that tries to minimize the number of intersection tests applied on a tree.

PBRT has an excellent treatment on the topic [2], and I’ve linked one of the original papers referencing the idea [1]. I will try to instead provide an intuitive picture of why and how SAH works.

The idea behind SAH is to discover an efficient partition of a collection that will try to minimize the cost of traversing the tree for any ray.

The SAH function is presented as:

C_o = C_{trav} + P_A\sum C_{isect}(a_i) + P_B\sum C_{isect}(b_i)

This cost model is based on probabilities. C_{trav} is the cost of traversing the parent node to determine if any of the two children nodes intersect the ray. In cranberray, this would be the cost of testing the ray against both children AABBs and determining if it intersects either. C_{isect} is the cost of testing the shape itself against the ray. In cranberray, this would be the cost of testing the triangles against the ray. Finally, P_A and P_B are the probabilities that the bounding volumes containing the children nodes intersect the ray.

Now, we’ll make a slight modification and reformulate P_A and P_B as P(A|C) and P(B|C) where C is our parent node. This notation states that P(A|C) represents the probability of the event A happening given that the event C has already happened. In this context, it means that P(A|C) represents the probability that we intersect our volume A given that we’ve already intersected our parent volume C.

With this change, our function is now:

C_o = C_{trav} + P(A|C)\sum C_{isect}(a_i) + P(B|C)\sum C_{isect}(b_i)

Now, let’s imagine that we’re a lonesome ray traveling through our BVH. We arrive at a node and we test both of its children to see if we intersect them. This takes us 10 units of time (C_{trav}). At this point we would intersect A with a probability of P(A|C) and B with a probability of P(B|C). In this example, we only intersect with A, and so we test our ray against its contained shapes. Let’s say that A has 10 shapes; as a result, we have to pay the cost of testing the intersection with all 10 shapes. This is \sum_{i=1}^{10} C_{isect}(a_i). Finally, let’s say that the cost of C_{isect} is 15 for every child shape.

This gives us the final actual cost of:

C_o = 10+\sum_{i=1}^{10} 15 = 10+150 = 160

What our original cost function is doing is calculating the expected cost of this operation for any possible ray.

As a result, if we were to build with these values:

C_{trav}=10

C_{isect}=15

P(A|C)=1

P(B|C)=0

A_n = 10

B_n = 0

Then our expected cost for this node would be:

C_o = 10 + 1*\sum_{i=1}^{10} 15 + 0*\sum_{i=1}^{0} 15 = 10+150 = 160

Which is exactly the cost of our example above because if our A node has a probability of 1, we would always experience the scenario above.

Of course, if we were to take some more likely value such as:

P(A|C)=0.5

P(B|C)=0.6

A_n = 10

B_n = 12

C_o = 10 + 0.5*\sum_{i=1}^{10} 15 + 0.6*\sum_{i=1}^{12} 15 = 10+0.5*150+0.6*180 = 193

Which would be the expected value of our node. Notice that our probabilities can add up to more than or less than 1 because it is reasonable that we might be able to intersect with both volumes or none at all.

Now we need to determine how we want to define P(A|C). This probability is the probability of a random ray intersecting with our volume given that we’ve intersected with our parent volume.

P(A|C) is given as the surface area of our volume A divided by the surface area of our parent volume C. As a result, we can define P(A|C) = \frac{S_A}{S_C} as the probability of intersecting A given that we’ve intersected C.

Derivation In 2D

In 2 dimensions, this makes sense, as we can imagine a rectangle embedded in another rectangle. The probability of a random point being placed in the child rectangle given that it is contained in the parent rectangle is equal to the area of the embedded rectangle divided by the area of the parent rectangle.


Now, we’ll instead imagine a uniformly random ray rather than a random point. To select a random ray, we can select a random direction and then select a random ray along the plane defined by that direction.

[Figure: The plane defines our ray direction and our rays are generated across that line.]

With this in mind, we want to calculate the probability that our ray will intersect the children volume given that it has intersected the parent volume.

To do this, we want to calculate the average projected area of our shape.

Imagine that we’ve selected our random direction for our ray; we can see that our ray will have intersected our shape if its projection onto the plane intersects our random ray.

[Figure: Notice that if a ray intersects our volume, it also intersects its projection.]

Now if we add a second volume embedded in the first volume, we can see that the probability of intersecting the embedded volume would be given by the projected area of the child volume divided by the projected area of the parent volume.

[Figure: Notice how our child’s projection is a fraction of our parent’s projection.]

This example only works for a single direction. However, since we’re looking for a probability given any ray direction, we need to calculate the average of this projection across the sphere of all directions.

In 2 dimensions, the projected area of a circle is always its diameter. We will use this fact to test our work.

To calculate the average projected area of an arbitrary convex shape[4][5], we first want to split that shape into a series of small discrete patches.

[Figure: These patches get smaller and smaller, approaching the shape of our circle.]

This will be our integration domain.

Once we’ve split our shape into discrete patches, we want to calculate the average projected area of a single patch.

Now we’ll integrate across the circle of directions containing the patch and divide it by the area of our integration. This will effectively give us the average projected area for a single patch of our shape. This works because we will have calculated the projection for every single direction.

[Figure: As we test different directions, we get different planes giving us different projections for the same line.]

dA_p = \frac{1}{2\pi}\int_{\Omega} |cos\theta| dA d\theta

Integrating with an absolute value is tricky. Instead, due to the symmetry of the problem, we can look at only a quarter of our circle, multiply by 4, and remove the absolute value.

dA_p = \frac{2}{\pi} dA \int_{0}^{\frac{\pi}{2}} cos\theta d\theta

Since \int_{0}^{\frac{\pi}{2}} cos\theta d\theta = 1 our expression reduces to

dA_p = \frac{2}{\pi} dA

Finally, we want to divide our value by 2, because we’ve now counted our projected area twice.

[Figure: Notice how there are 2 valid projections: the projection given the left side of the line and the projection given the right side of the line.]

dA_p = \frac{1}{\pi} dA

Using this formulation, we can calculate the average projected area of our shape by adding all of our average projected areas together.

A_p = \int_A dA_p

A_p = \int_A \frac{1}{\pi} dA

A_p = \frac{1}{\pi} A

And there you have it, the average projected area of a 2 dimensional convex object is \frac{1}{\pi} the area of said object.

Notice that if we apply this to the perimeter of a circle, 2\pi r (the 2D analogue of surface area), we get A_p = \frac{2\pi r}{\pi} = 2r, which is our diameter and matches our expectation.

Derivation In 3D

We can take the same steps in 3D.

Calculate the average projection over our sphere of directions:

dA_p = \frac{1}{4\pi} \int_0^{2\pi} \int_0^{\pi} |cos\theta| sin\theta dA d\theta d\phi

Integrating across the positive hemisphere and multiplying by 2

dA_p = \frac{1}{2\pi} dA \int_0^{2\pi} \int_0^{\frac{\pi}{2}} cos\theta sin\theta d\theta d\phi

Since \int_0^{\frac{\pi}{2}} cos\theta sin\theta d\theta = \frac{1}{2}

dA_p = \frac{1}{2\pi} dA \int_0^{2\pi} \frac{1}{2} d\phi

dA_p = \frac{1}{4\pi} dA \int_0^{2\pi} d\phi

dA_p = \frac{1}{2} dA

Finally, dividing by 2 for our double projection

dA_p = \frac{1}{4} dA

And plugging into our surface area calculation

A_p = \int_A dA_p

A_p = \frac{1}{4} \int_A dA

A_p = \frac{1}{4} A

Putting It Together

Finally, we can see that our average projected area in 3D is \frac{1}{4} of the surface area.

To calculate our probability, we simply divide our child’s projected area (A) by our parent’s projected area (C).

P(A|C) = \frac{A_p}{C_p}

P(A|C) = \frac{0.25*A}{0.25*C}

P(A|C) = \frac{A}{C}

Where A and C are the surface areas of our volumes. And voila, that’s how we got that original formula.

In Depth View

Now that we have all the pieces, we can take a look at the construction of our BVH.

Cranberray builds its BVH from the top down, starting from a containing bounding volume and splitting it in 2 recursively.

Cranberray keeps a queue of bounding volumes to process as a ring buffer. This makes management of front popping and back pushing very simple but makes resizing the queue trickier. As a result, Cranberray simply allocates a large buffer. We could likely allocate the maximum possible number of elements in the ring buffer instead (something along the lines of 2n-1 where n is the next largest power of 2).
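A minimal sketch of such a queue (illustrative only; cranberray’s real queue stores more per element than a bare index):

#include <stdint.h>

// Fixed-capacity ring buffer of node indices. Pushing to the back and
// popping from the front are both O(1); resizing is what's awkward.
typedef struct
{
	uint32_t* items;
	uint32_t capacity;
	uint32_t head; // next element to pop
	uint32_t tail; // next free slot
} node_queue_t;

static void queue_push(node_queue_t* queue, uint32_t node)
{
	queue->items[queue->tail % queue->capacity] = node;
	queue->tail++;
}

static uint32_t queue_pop(node_queue_t* queue)
{
	return queue->items[queue->head++ % queue->capacity];
}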

Cranberray then selects the axis with the widest breadth of centroids. The code for this looks like so:

cv2 axisSpan[3];
for (uint32_t axis = 0; axis < 3; axis++)
{
	axisSpan[axis] = (cv2) { FLT_MAX, -FLT_MAX }; // start with an empty span (FLT_MAX from <float.h>)
	for (uint32_t i = 0; i < count; i++)
	{
		axisSpan[axis].x = fminf(axisSpan[axis].x, caabb_centroid(start[i].bound, axis));
		axisSpan[axis].y = fmaxf(axisSpan[axis].y, caabb_centroid(start[i].bound, axis));
	}
}

uint32_t axis;
if (axisSpan[0].y - axisSpan[0].x > axisSpan[1].y - axisSpan[1].x && axisSpan[0].y - axisSpan[0].x > axisSpan[2].y - axisSpan[2].x)
{
	axis = 0;
}
else if (axisSpan[1].y - axisSpan[1].x > axisSpan[2].y - axisSpan[2].x)
{
	axis = 1;
}
else
{
	axis = 2;
}

Once we’ve selected our axis, we split it into 12 distinct buckets. (See PBRT [2] for more info on this approach.)

We then calculate the cost of each separation by adding up all the volumes on the left of the separation and all the volumes on the right of the separation.

[Figure: Here we’ve selected the first bucket as our splitting bucket and calculated the cost of this split.]

We then store the cost of each separation and select the separation with the minimal cost as our split.
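A sketch of that cost evaluation under the SAH formula from earlier (the inputs are hypothetical: the per-split counts and merged-bound surface areas would be accumulated while bucketing, and the cost constants are made up):

#include <float.h>
#include <stdint.h>

// Evaluate C_o = C_trav + (S_L/S_C)*N_L*C_isect + (S_R/S_C)*N_R*C_isect for
// every candidate split and keep the cheapest one.
static uint32_t cheapest_split(float const* leftArea, float const* rightArea,
	uint32_t const* leftCount, uint32_t const* rightCount,
	float parentArea, uint32_t splitCount)
{
	float const traversalCost = 1.0f;
	float const intersectCost = 2.0f; // relative cost of a triangle test

	uint32_t bestSplit = 0;
	float bestCost = FLT_MAX;
	for (uint32_t i = 0; i < splitCount; i++)
	{
		float cost = traversalCost
			+ (leftArea[i] / parentArea) * (float)leftCount[i] * intersectCost
			+ (rightArea[i] / parentArea) * (float)rightCount[i] * intersectCost;
		if (cost < bestCost)
		{
			bestCost = cost;
			bestSplit = i;
		}
	}
	return bestSplit;
}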

We then continue this recursion until we’ve run out of items in our queue (when we’ve partitioned all of our child volumes into leaf nodes).

Finally, our BVH is restructured somewhat for an improvement in memory usage.

Our BVH is stored in 2 arrays, a “jump index” array and a bounds array. This allows us to load the bounds array without having to load the jump indices into memory until we absolutely need them.

We read from our bounds memory much more frequently than our jump memory, as a result, splitting them allows us to make more effective use of our caches.

Our BVH structure looks like this:


typedef struct
{
	union
	{
		struct
		{
			uint32_t left;
			uint32_t right;
		} jumpIndices;

		struct
		{
			uint32_t index;
		} leaf;
	};
} bvh_jump_t;

typedef struct
{
	caabb* bounds;
	bvh_jump_t* jumps;
	uint32_t count;
	uint32_t leafCount;
} bvh_t;

The final special property of our BVH is that all the leaf nodes in our tree are stored at the end of our array. This allows us to test whether a node is a leaf by simply comparing its index against the size of our array minus the number of leaves contained in the tree. This lets us use data that we’ve already loaded into memory instead of requiring us to load extra data for our branch, which would introduce a data dependency.
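In code, that test is a single compare against data we already have on hand (an illustrative helper, using the bvh_t above):

#include <stdbool.h>

// A node is a leaf exactly when its index falls within the trailing block of
// leaf nodes.
static bool bvh_is_leaf(bvh_t const* bvh, uint32_t nodeIndex)
{
	return nodeIndex >= bvh->count - bvh->leafCount;
}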

You can find the source for BVH construction here: https://github.com/AlexSabourinDev/cranberries/blob/5fe9c25e1df23d558b7ef8b5475717d9e67a19fc/cranberray.c#L963

We’ll be taking a look at the BVH traversal next. Until then, happy coding!

Future Work

The primary addition to this BVH construction algorithm would likely be to parallelize its construction. [6]

References

[1] https://authors.library.caltech.edu/79167/1/04057175.pdf

[2] http://www.pbr-book.org/3ed-2018/Primitives_and_Intersection_Acceleration/Bounding_Volume_Hierarchies.html

[3] http://mathforum.org/library/drmath/view/62924.html

[4] https://arxiv.org/pdf/1109.0595.pdf

[5] https://math.stackexchange.com/questions/3222317/average-area-of-the-shadow-of-a-convex-shape

[6] https://meistdan.github.io/publications/phr/paper.pdf

Diary of a Path Tracer – Math Library — August 22, 2020

Index: https://cranberryking.com/2020/08/22/diary-of-a-path-tracer-the-beginning/

Intro

This is the first in a series of posts detailing the functionality of my hobby path tracer. We will be looking at the underpinnings of it all, the math library cranberry_math.

The goals of cranberry_math are simple: making SIMD accelerated mathematics easier and fun to use. There is already a huge swath of math libraries out there that do things very well (DirectXMath and GLM to name a few), however I decided to write my own for fun!

Ideology

The basis of cranberry_math is the concept of lanes. If you’re not familiar with SIMD, the concept is relatively simple. You can imagine that a SIMD register (or vector register) is a set of 4 “lanes”; these lanes are generally independent of each other but allow you to perform operations on all of them using another SIMD register. For the purpose of demonstration, we will write a SIMD register as v_n (v_0, v_1, v_2) and an element of one of those registers as v_ne_m (v_0e_0), where v_0e_0 would be element 0 of SIMD register 0.

I.e. if I take v_0 and add it to v_1 the result is the equivalent of v_2 = (v_0e_0+v_1e_0, v_0e_1+v_1e_1, v_0e_2+v_1e_2, v_0e_3+v_1e_3)


On its own, this might not be particularly useful. But vector register operations have a number of desirable properties.

Let’s imagine we have a fantasy processor, and this processor takes 1 cycle for every addition. This means that to add 4 pairs of numbers together it would take us roughly 4 cycles, or 1 cycle per add. Now if our vector addition takes 4 cycles to complete, that’s still 1 cycle per add. However, consider what happens if by some magic we only take 3 cycles for our vector addition. That means overall, we only spend 3/4=0.75 cycles per add! In our fantasy processor, we can make this as low as 1 cycle for 4 adds or 1/4=0.25 cycles per add. You may ask how we can make our instruction run in fewer than 4 cycles even though we’re still doing 4 adds. In our fantasy processor, we can do this by simply running 4 adders at once with our data. This means that although every adder takes 1 cycle to complete, they run in parallel, making the overall throughput 1 cycle/4 adds.

This is a highly simplified version of how your processor might implement vector adds.

Now, with this added information, we can see why we would want to use these operations efficiently, we could reduce our program latency by at most 4x! (In reality this is unlikely to happen, but we can get fairly close).

Cranberry_math takes this concept and tries to make the best use of it while still making it approachable to the user.

An intuitive and simple use of these registers would be to store our X, Y, Z components in each lane, such as v_0 = (x, y, z, unused), and work with them in that manner. In some common operations such as adding vectors, this can give you a nice hypothetical improvement of 3x over your original single element addition. However, we’re still leaving 25% of our optimal performance on the table! This approach also causes some issues with common operations such as the dot product.

Let’s say in our fantasy processor, our dot product operation takes 2 cycles to complete when applied to 1 vector. That means we get a throughput of 2 cycles/dot product. I propose (as has been proposed elsewhere as well) that you instead use each register to store a single component of a vector, such as v_x = (x_0, x_1, x_2, x_3), v_y = (y_0, y_1, y_2, y_3), etc. With this in mind, we can achieve the dot product by doing v_x*v_x+v_y*v_y+v_z*v_z, which is a total of 5 operations that all take 1 cycle in our fantasy processor for 4 vectors, or a throughput of 1.25 cycles/dot product.
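In cranberry_math terms, that wide dot product is a handful of lane operations. Here’s a sketch; cfl_mul and cfl_add are assumed names, as this post only shows their cv3l counterparts:

// 4 dot products at once: one multiply per component plus two adds, with
// every result landing in its own lane of the returned cfl.
cran_forceinline cfl cv3l_dot(cv3l l, cv3l r)
{
	return cfl_add(cfl_add(cfl_mul(l.x, r.x), cfl_mul(l.y, r.y)), cfl_mul(l.z, r.z));
}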

For SSE, according to the Intel intrinsics guide, a multiply has a latency of 4 cycles on Skylake and an add also has a latency of 4 cycles. As a result, in a fantasy processor with no out-of-order execution or multiple issue, our wide dot product would take roughly 20 cycles to complete, or 20/4 = 5 cycles per dot product, while the dedicated dot product instruction has a latency of 11 cycles per dot product. (See [1] and [2] for info on reading the Intel intrinsics guide values.) These numbers are not indicative of real performance; modern processors are significantly more complex than what I’m implying here.

Another benefit of storing vectors this way is that it scales trivially to SIMD registers of width 8 or 16! You only need to throw more data at it!
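For instance, the same dot product written against AVX’s 8-wide registers is nearly identical; a hedged sketch (cranberry_math itself targets 4-wide SSE):

#include <immintrin.h>

// Same shape as the 4-wide version, now producing 8 dot products per call.
__m256 dot8(__m256 ax, __m256 ay, __m256 az, __m256 bx, __m256 by, __m256 bz)
{
	return _mm256_add_ps(_mm256_mul_ps(ax, bx),
		_mm256_add_ps(_mm256_mul_ps(ay, by), _mm256_mul_ps(az, bz)));
}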

Now, this approach to storing vectors in SIMD registers is not foolproof. It introduces difficulty when you want to combine components within one vector (x with x, y with y). However, I haven’t found this to be a problem while writing my path tracer.

This isn’t a silver bullet, but I recommend considering adding a path for this type of storage if you can afford it.

In Depth View

With that out of the way, here are the primary types for our library:

#define cran_lane_count 4
cran_align(16) typedef union
{
	float f[cran_lane_count]; // scalar view of each lane
	__m128 sse; // the full SSE register
} cfl;

typedef union
{
	struct
	{
		float x, y, z;
	};

} cv3;

typedef struct
{
	cfl x; // the x components of up to 4 vectors
	cfl y; // the y components of up to 4 vectors
	cfl z; // the z components of up to 4 vectors
} cv3l;

The theme of this library is that SIMD registers are referred to as lanes. Every element is expected to stay within its respective lane. cfl stands for “cranberry float lane”, cv3 for “cranberry vector 3” and cv3l for “cranberry vector 3 lanes”. Notice how every component in cv3l has its own set of lanes? That is a direct implementation of the layout presented above.

cran_forceinline cv3l cv3l_add(cv3l l, cv3l r);
cran_forceinline cv3l cv3l_sub(cv3l l, cv3l r);
cran_forceinline cv3l cv3l_mul(cv3l l, cv3l r);
cran_forceinline cv3l cv3l_min(cv3l l, cv3l r);
cran_forceinline cv3l cv3l_max(cv3l l, cv3l r);

The API for cv3l is simple: it looks exactly the same as its cv3 counterpart. This makes it relatively easy to switch from one to the other simply by loading the data appropriately.
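Under the hood, each of these presumably maps to one SSE instruction per component; here is a sketch of what cv3l_add plausibly looks like (the library’s actual implementation may differ):

cran_forceinline cv3l cv3l_add(cv3l l, cv3l r)
{
	// One add per component register: 4 vector additions in 3 instructions.
	return (cv3l)
	{
		.x = {.sse = _mm_add_ps(l.x.sse, r.x.sse)},
		.y = {.sse = _mm_add_ps(l.y.sse, r.y.sse)},
		.z = {.sse = _mm_add_ps(l.z.sse, r.z.sse)}
	};
}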

cran_forceinline cv3l cv3l_indexed_load(void const* vectors, uint32_t stride, uint32_t offset, uint32_t* indices, uint32_t indexCount);

One function we should take a particular look at is cv3l_indexed_load. It’s likely you don’t want to store your data in cv3l format; it doesn’t lend itself particularly well to general processing and works much better for batch processing. As a result, your data needs to be transformed from x,y,z,x,y,z to x,x,x,y,y,y,z,z,z (from array-of-structures to structure-of-arrays) to facilitate maximum throughput.

One way to shuffle our data would be to load each element individually and store it in the appropriate location. However, this is surprisingly slow (I forget the exact reasons; a reference for that would be awesome).
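For comparison, that naive gather might look something like the following sketch (cv3l_indexed_load_slow is a hypothetical name, not part of the library). Each component is read with its own scalar load, which tends to compile to a long chain of scalar loads and inserts rather than a few wide loads and shuffles:

cv3l cv3l_indexed_load_slow(void const* vectors, uint32_t stride, uint32_t offset, uint32_t* indices, uint32_t indexCount)
{
	cv3l result = { 0 };
	for (uint32_t i = 0; i < indexCount && i < cran_lane_count; i++)
	{
		// One scalar load per component per lane.
		float const* v = (float const*)((uint8_t const*)vectors + indices[i] * stride + offset);
		result.x.f[i] = v[0];
		result.y.f[i] = v[1];
		result.z.f[i] = v[2];
	}
	return result;
}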

Instead, what you can do is load your vectors as a set of vectors:

v_0 = (x_0, y_0, z_0, u), v_1 = (x_1, y_1, z_1, u), v_2 = (x_2, y_2, z_2, u), v_3 = (x_3, y_3, z_3, u)

and then shuffle the values into a set of temporary registers

v_{xy0} = (x_0, y_0, x_1, y_1), v_{xy1} = (x_2,y_2,x_3,y_3), v_{z0} = (z_0, u, z_1, u), v_{z1} = (z_2, u, z_3, u)

and finally shuffle them into the final registers

v_x = (x_0, x_1, x_2, x_3), v_y = (y_0, y_1, y_2, y_3), v_z =(z_0, z_1, z_2, z_3)

[Figure: shuffling four loaded vectors into x, y and z lane registers]

(There’s a mistake with the arrows for Z_2 and Z_3: they should point to the fourth register.)

Here’s the source for this operation:

cran_forceinline cv3l cv3l_indexed_load(void const* vectors, uint32_t stride, uint32_t offset, uint32_t* indices, uint32_t indexCount)
{
	// Gather up to 4 vectors of the form (x, y, z, u) from the indexed locations.
	// (_mm_load_ps expects the source addresses to be 16-byte aligned.)
	__m128 loadedVectors[cran_lane_count];
	for (uint32_t i = 0; i < indexCount && i < cran_lane_count; i++)
	{
		uint8_t const* vectorData = (uint8_t*)vectors;
		loadedVectors[i] = _mm_load_ps((float const*)(vectorData + indices[i] * stride + offset));
	}

	// First pass: XY0 = (x_0, y_0, x_1, y_1), XY1 = (x_2, y_2, x_3, y_3),
	// Z0 = (z_0, u, z_1, u), Z1 = (z_2, u, z_3, u).
	__m128 XY0 = _mm_shuffle_ps(loadedVectors[0], loadedVectors[1], _MM_SHUFFLE(1, 0, 1, 0));
	__m128 XY1 = _mm_shuffle_ps(loadedVectors[2], loadedVectors[3], _MM_SHUFFLE(1, 0, 1, 0));
	__m128 Z0 = _mm_shuffle_ps(loadedVectors[0], loadedVectors[1], _MM_SHUFFLE(3, 2, 3, 2));
	__m128 Z1 = _mm_shuffle_ps(loadedVectors[2], loadedVectors[3], _MM_SHUFFLE(3, 2, 3, 2));

	// Second pass: gather each component into its own lane register.
	return (cv3l)
	{
		.x = {.sse = _mm_shuffle_ps(XY0, XY1, _MM_SHUFFLE(2, 0, 2, 0))},
		.y = {.sse = _mm_shuffle_ps(XY0, XY1, _MM_SHUFFLE(3, 1, 3, 1))},
		.z = {.sse = _mm_shuffle_ps(Z0, Z1, _MM_SHUFFLE(2, 0, 2, 0))}
	};
}

With all this in place, we can write a vectorized ray/AABB intersection using this new API:

cran_forceinline uint32_t caabb_does_ray_intersect_lanes(cv3 rayO, cv3 rayD, float rayMin, float rayMax, cv3l aabbMin, cv3l aabbMax)
{
	// Slab test against 4 AABBs at once.
	cv3l rayOLanes = cv3l_replicate(rayO);
	cv3l invD = cv3l_replicate(cv3_rcp(rayD));
	cv3l t0s = cv3l_mul(cv3l_sub(aabbMin, rayOLanes), invD);
	cv3l t1s = cv3l_mul(cv3l_sub(aabbMax, rayOLanes), invD);

	cv3l tsmaller = cv3l_min(t0s, t1s);
	cv3l tbigger  = cv3l_max(t0s, t1s);

	// A hit requires the latest slab entry to precede the earliest slab exit,
	// clamped to the valid ray interval [rayMin, rayMax].
	cfl rayMinLane = cfl_replicate(rayMin);
	cfl rayMaxLane = cfl_replicate(rayMax);
	cfl tmin = cfl_max(rayMinLane, cfl_max(tsmaller.x, cfl_max(tsmaller.y, tsmaller.z)));
	cfl tmax = cfl_min(rayMaxLane, cfl_min(tbigger.x, cfl_min(tbigger.y, tbigger.z)));
	cfl result = cfl_less(tmin, tmax);
	return cfl_mask(result); // one bit per lane, set if that AABB was hit
}

That’s the gist of it! Writing vectorized BVH traversal using this API has been a breeze. You can find the rest of the library here: https://github.com/AlexSabourinDev/cranberries/blob/cranberray/cranberry_math.h

Here is the library in use for BVH AABB testing:

uint32_t activeLaneCount = min((uint32_t)(testQueueEnd - testQueueIter), cran_lane_count);
cv3l boundMins = cv3l_indexed_load(bvh->bounds, sizeof(caabb), offsetof(caabb, min), testQueueIter, activeLaneCount);
cv3l boundMaxs = cv3l_indexed_load(bvh->bounds, sizeof(caabb), offsetof(caabb, max), testQueueIter, activeLaneCount);
uint32_t intersections = caabb_does_ray_intersect_lanes(rayO, rayD, rayMin, rayMax, boundMins, boundMaxs);
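The returned mask packs one bit per lane (presumably via _mm_movemask_ps inside cfl_mask), so consuming the hits might look like the following sketch. Here candidateQueueEnd is a hypothetical output queue for illustration, not cranberray’s actual traversal code:

// Push the indices of the AABBs that were hit; bit 'lane' of intersections corresponds to lane 'lane'.
for (uint32_t lane = 0; lane < activeLaneCount; lane++)
{
	if (intersections & (1u << lane))
	{
		*candidateQueueEnd++ = testQueueIter[lane];
	}
}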

We’ll be taking a deeper look at the SSE optimized BVH traversal in the future. For now, happy coding!

References:

[1] https://stackoverflow.com/questions/40203254/intel-intrinsics-guide-latency-and-throughput

[2] https://stackoverflow.com/questions/35859449/why-are-some-haswell-avx-latencies-advertised-by-intel-as-3x-slower-than-sandy-b

[3] https://users.ece.cmu.edu/~franzf/teaching/slides-18-645-simd.pdf

[4] https://deplinenoise.files.wordpress.com/2015/03/gdc2015_afredriksson_simd.pdf

Derivation – Importance Sampling The Cosine Lobe — June 7, 2020

Derivation – Importance Sampling The Cosine Lobe

Introduction

I’ve recently been diving into the world of importance sampling and I decided to share my derivation for importance sampling the cosine lobe.

Shade

When we’re shading a point in path tracing, we typically shoot a ray from our surface in a uniformly random direction on a hemisphere centered about our normal. This has the downside of introducing quite a bit of variance into our renders.

Imagine that we have a very bright light that only occupies a very small projected area from our shaded point.

[Figure: a small, bright light source seen from the shaded point]

We would be very likely to miss this light with most of our samples and our render could turn out much darker than we would expect.

This is where importance sampling comes in.

Imagine that your illumination is a polar function:

[Figure: illumination plotted as a polar function]

If we were to sample a random variable with a distribution that matches this function, we would be much more likely to hit the important points of our function. (Hence importance sampling)

I won’t dive deeply into this, as there are a variety of excellent resources detailing the topic. [1]

The essence of it, however, is that you want to find a Probability Density Function (PDF) that matches the shape of your illumination function. Once you’ve defined this PDF, you can sample it using its Cumulative Distribution Function (CDF).
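Concretely, this is the inverse transform sampling method: draw a uniform random value y in [0, 1) and map it through the inverse of the CDF,

y = CDF(x) \implies x = CDF^{-1}(y)

which is exactly the inversion we’ll perform at the end of the derivation below.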

Derivation

Our cosine lobe illumination will look like this:

[Figure: the cosine lobe centered about the surface normal]

We will use this as the basis to derive our distribution, since we receive the most light from directions arriving parallel to our normal.

Thankfully, our cosine lobe has an analytical formula that we can use as our PDF.

PDF(\omega) = C*cos(\theta) (1)

Our PDF must integrate to 1, so we integrate it across our hemisphere:

\int_{\Omega}PDF(\omega)d\omega

\int_0^{2\pi}\int_0^{\frac{\pi}{2}}PDF(\omega)sin\theta d\theta d\phi

Plug in (1)

\int_0^{2\pi}\int_0^{\frac{\pi}{2}}C*cos\theta sin\theta d\theta d\phi

C*\int_0^{2\pi}\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta d\phi

\int cos\theta sin\theta d\theta = -\frac{1}{4}cos2\theta

\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta

-\frac{1}{4}cos\pi+ \frac{1}{4}cos0

\frac{1}{4}+\frac{1}{4}

\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta=\frac{1}{2} (2)

Plug in (2)

C*\int_0^{2\pi}\frac{1}{2} d\phi

C*\frac{1}{2}*2\pi

C*\int_0^{2\pi}\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta d\phi=C*\pi (3)

Since our PDF has to integrate to 1,

\int_0^{2\pi}\int_0^{\frac{\pi}{2}}PDF(\omega)sin\theta d\theta d\phi = 1

Plug in (3),

C*\pi=1

C=\frac{1}{\pi} (4)

Finally, plug in (4) into our PDF,

PDF(\omega) = \frac{cos(\theta)}{\pi} (5)

Now that we have our PDF in terms of solid angle, we can express it in terms of \theta and \phi.

PDF(\theta,\phi)d\theta d\phi = PDF(\omega)d\omega

PDF(\theta,\phi)d\theta d\phi = PDF(\omega)sin\theta d\theta d\phi

PDF(\theta,\phi)=PDF(\omega)sin\theta

PDF(\theta,\phi)=\frac{cos\theta sin\theta}{\pi} (6)

Now we integrate with respect to \phi to get PDF(\theta)

\int_0^{2\pi}\frac{cos\theta sin\theta}{\pi}d\phi = 2cos\theta sin\theta

PDF(\theta)=2cos\theta sin\theta
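As a quick sanity check, this marginal PDF integrates to 1 over the hemisphere’s polar range, reusing (2):

\int_0^{\frac{\pi}{2}}2cos\theta sin\theta d\theta=2*\frac{1}{2}=1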

And then to get PDF(\phi),

\frac{PDF(\theta,\phi)}{PDF(\theta)}=PDF(\phi)

\frac{cos\theta sin\theta}{2cos\theta sin\theta \pi}=\frac{1}{2\pi}

PDF(\phi)=\frac{1}{2\pi}

Now we want to calculate the CDF of each function,

CDF(\theta)=\int_0^\theta PDF(\theta) d\theta

CDF(\theta)=\int_0^\theta 2cos(\theta)sin(\theta) d\theta

CDF(\theta)=\int_0^\theta sin(2\theta) d\theta

CDF(\theta)=\frac{1}{2}-\frac{cos(2\theta)}{2}

CDF(\phi)=\int_0^\phi PDF(\phi) d\phi

CDF(\phi)=\int_0^\phi\frac{1}{2\pi} d\phi

CDF(\phi)=\frac{\phi}{2\pi}

Now we want to invert our CDF to sample it using our random variable y,

y=CDF(\theta)

y=\frac{1}{2}-\frac{cos(2\theta)}{2}

\frac{1}{2}-y=\frac{cos(2\theta)}{2}

1-2y=cos(2\theta)

\frac{cos^{-1}(1-2y)}{2}=\theta (7)

For \phi,

y=CDF(\phi)

y=\frac{\phi}{2\pi}

y*2\pi = \phi (8)

Now that we have our CDFs and our PDFs, we can finally calculate our direction.

In pseudocode, you can simply do:

\theta=\frac{cos^{-1}(1-2rand01())}{2}

\phi=rand01()*2\pi
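In C, the sampling routine might look like the following sketch. Here rand01 (assumed to return a uniform float in [0, 1)) and the rotation of the result into the surface’s normal frame are left out, cv3 is the vector type from cranberry_math, and cranberray’s actual sampling code may differ:

#include <math.h>

// Sample a direction about the +Z axis with PDF cos(theta)/pi.
static cv3 sample_cosine_lobe(void)
{
	float theta = 0.5f * acosf(1.0f - 2.0f * rand01()); // equation (7)
	float phi = 2.0f * 3.14159265358979f * rand01(); // equation (8)
	// Spherical to cartesian; rotate into the shading normal's frame afterwards.
	return (cv3) { .x = sinf(theta) * cosf(phi), .y = sinf(theta) * sinf(phi), .z = cosf(theta) };
}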

With these directions, you can now sample your scene:

\frac{SampleScene(SphericalTo3D(\theta, \phi))}{PDF(\omega)}

Plug in (5)

\frac{SampleScene(SphericalTo3D(\theta, \phi))\pi}{cos\theta}
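One pleasant property of this choice of PDF: whenever the sampled quantity carries the rendering equation’s cos\theta factor (as the light arriving at our shaded point does), dividing by the PDF cancels it, leaving each sample scaled by a constant,

\frac{L(\omega)cos\theta}{PDF(\omega)}=L(\omega)\pi

where L(\omega) is the sampled radiance.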

Conclusion

That’s it! These formulas preferentially sample the parts of the hemisphere that receive more light, as defined by the cosine lobe. The results are pretty awesome.

Bibliography

[1] https://www.scratchapixel.com/lessons/3d-basic-rendering/global-illumination-path-tracing/global-illumination-path-tracing-practical-implementation

A Quest Towards Intuition – Why is depth interpolated as 1/z? — August 27, 2019

A Quest Towards Intuition – Why is depth interpolated as 1/z?

Premise

I have recently been attempting to improve my understanding of perspective projection. This included a variety of topics such as deriving a perspective projection matrix and understanding interpolation of vertex shader outputs in perspective. However, one topic that evaded me was the surprising result that depth is interpolated as 1/z instead of z.
