Cranberry Lair

Walking the programming parameter space

Diary of a Path Tracer – The Beginning — August 22, 2020


Within the past few months I’ve been implementing a path tracer during my weekends and evenings. Having reached what I consider to be an acceptable stopping point, I thought it might be interesting to detail my current implementation alongside the references that inspired me. Let’s jump in!

This series will touch on a variety of features that were implemented in the path tracer.

These posts will try to explain every aspect of the path tracer as it stands, but they might not be as comprehensive as the sources used to create it. As a result, I will strive to include all relevant sources for each feature.

You can find the path tracer here: https://github.com/AlexSabourinDev/cranberries/tree/cranberray

Next we'll talk about the underpinnings of the path tracer, the cranberry_math library. See you then!

Derivation – Importance Sampling The Cosine Lobe — June 7, 2020


Introduction

I’ve recently been diving into the world of importance sampling and I decided to share my derivation for importance sampling the cosine lobe.

Shade

When we're shading a point in path tracing, we typically shoot a ray from our surface in a uniformly random direction contained on a hemisphere centered about our normal. This has the downside of introducing quite a bit of variance into our renders.

Imagine that we have a very bright light that only occupies a very small projected area from our shaded point.

[Figure: a small, bright light as seen from the shaded point]

We would be very likely to miss this light with most of our samples and our render could turn out much darker than we would expect.

This is where importance sampling comes in.

Imagine that your illumination is a polar function:

[Figure: illumination as a polar function]

If we were to sample a random variable with a distribution that matches this function, we would be much more likely to hit the important points of our function. (Hence importance sampling)

I won't dive deeply into this topic, as there are a variety of excellent resources detailing it. [1]

The essence of it, however, is that you want to find a Probability Density Function (PDF) that matches the shape of your illumination function. Once you've defined this PDF, you can sample it by inverting its Cumulative Distribution Function (CDF).

Derivation

Since our cosine lobe illumination will look like this:

[Figure: the cosine lobe]

We will use this as the basis for our distribution, since we're most likely to receive light from directions arriving parallel to our normal.

Thankfully, our cosine lobe has an analytical formula that we can use as our PDF.

PDF(\omega) = C*cos(\theta) (1)

Our PDF must integrate to 1, so we integrate it across our hemisphere:

\int_{\Omega}PDF(\omega)d\omega

\int_0^{2\pi}\int_0^{\frac{\pi}{2}}PDF(\omega)sin\theta d\theta d\phi

Plug in (1)

\int_0^{2\pi}\int_0^{\frac{\pi}{2}}C*cos\theta sin\theta d\theta d\phi

C*\int_0^{2\pi}\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta d\phi

\int cos\theta sin\theta d\theta = -\frac{1}{4}cos2\theta

\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta

-\frac{1}{4}cos\pi+ \frac{1}{4}cos0

\frac{1}{4}+\frac{1}{4}

\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta=\frac{1}{2} (2)

Plug in (2)

C*\int_0^{2\pi}\frac{1}{2} d\phi

C*\frac{1}{2}*2\pi

C*\int_0^{2\pi}\int_0^{\frac{\pi}{2}}cos\theta sin\theta d\theta d\phi=C*\pi (3)

Since our PDF has to integrate to 1,

\int_0^{2\pi}\int_0^{\frac{\pi}{2}}PDF(\omega)sin\theta d\theta d\phi = 1

Plug in (3),

C*\pi=1

C=\frac{1}{\pi} (4)

Finally, plug (4) back into our PDF,

PDF(\omega) = \frac{cos(\theta)}{\pi} (5)

Now that we have our PDF, we can express it in terms of \theta and \phi.

PDF(\theta,\phi)d\theta d\phi = PDF(\omega)d\omega

PDF(\theta,\phi)d\theta d\phi = PDF(\omega)sin\theta d\theta d\phi

PDF(\theta,\phi)=PDF(\omega)sin\theta

PDF(\theta,\phi)=\frac{cos\theta sin\theta}{\pi} (6)

Now we integrate with respect to \phi to get PDF(\theta)

\int_0^{2\pi}\frac{cos\theta sin\theta}{\pi}d\phi = 2cos\theta sin\theta

PDF(\theta)=2cos\theta sin\theta

And then to get PDF(\phi),

\frac{PDF(\theta,\phi)}{PDF(\theta)}=PDF(\phi)

\frac{cos\theta sin\theta}{2cos\theta sin\theta \pi}=\frac{1}{2\pi}

PDF(\phi)=\frac{1}{2\pi}

Now we want to calculate the CDF of each function,

CDF(\theta)=\int_0^\theta PDF(\theta) d\theta

CDF(\theta)=\int_0^\theta 2cos(\theta)sin(\theta) d\theta

CDF(\theta)=\int_0^\theta sin(2\theta) d\theta

CDF(\theta)=\frac{1}{2}-\frac{cos(2\theta)}{2}
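As a quick sanity check, CDF(\frac{\pi}{2})=\frac{1}{2}-\frac{cos(\pi)}{2}=1, exactly what we expect at the upper edge of the hemisphere.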

CDF(\phi)=\int_0^\phi PDF(\phi) d\phi

CDF(\phi)=\int_0^\phi\frac{1}{2\pi} d\phi

CDF(\phi)=\frac{\phi}{2\pi}

Now we want to invert our CDF to sample it using our random variable y,

y=CDF(\theta)

y=\frac{1}{2}-\frac{cos(2\theta)}{2}

\frac{1}{2}-y=\frac{cos(2\theta)}{2}

1-2y=cos(2\theta)

\frac{cos^{-1}(1-2y)}{2}=\theta (7)

For \phi,

y=CDF(\phi)

y=\frac{\phi}{2\pi}

y*2\pi = \phi (8)

Now that we have our CDFs and our PDFs, we can finally calculate our direction.

In pseudo code you can simply do:

\theta=\frac{cos^{-1}(1-2rand01())}{2}

\phi=rand01()*2\pi

With these directions, you can now sample your scene:

\frac{SampleScene(SphericalTo3D(\theta, \phi))}{PDF(\omega)}

Plug in (5)

\frac{SampleScene(SphericalTo3D(\theta, \phi))\pi}{cos\theta}
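As a concrete example, here is a minimal C sketch of the full sampling routine. rand01() follows the pseudo code above, but the vec3 type and the function name are stand-ins rather than names from the repo, and the normal is assumed to lie along +Z:

#include <math.h>

typedef struct { float x, y, z; } vec3;

// Hypothetical: returns a uniform random float in [0, 1).
extern float rand01(void);

// Cosine-weighted direction about +Z using equations (7) and (8).
// The matching PDF for this sample is cos(theta)/pi, equation (5).
vec3 sample_cosine_lobe(void)
{
    float theta = acosf(1.0f - 2.0f * rand01()) * 0.5f;
    float phi = rand01() * 2.0f * 3.14159265f;

    // Spherical to Cartesian, with the shading normal along +Z.
    float sinTheta = sinf(theta);
    vec3 dir = { sinTheta * cosf(phi), sinTheta * sinf(phi), cosf(theta) };
    return dir;
}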

Conclusion

That's it! These formulas sample the hemisphere more densely where it receives more light, as defined by the cosine lobe. The results are pretty awesome.

Bibliography

[1] https://www.scratchapixel.com/lessons/3d-basic-rendering/global-illumination-path-tracing/global-illumination-path-tracing-practical-implementation

Attempting Triangular Area Lights — May 16, 2020
A quest towards intuition – Where does that cosine come from? — April 1, 2020


I've recently been learning more about BRDFs and their properties and I stumbled across the concept of energy conservation. Energy conservation simply states that your BRDF should not emit more energy than it receives. To determine if a BRDF is energy conserving, you accumulate all the light emitted by a surface across a hemisphere centered at the normal of that surface and ensure that the accumulated light is less than or equal to the amount of received light.

The mathematical equation for this is:

\int_{\Omega}BRDF*cos(i)*Li*d\omega<=Li

Where Li is the incoming light, i is the angle from our normal to our current accumulation direction.

Since Li is a constant, we can remove it and we’re left with:

\int_{\Omega}BRDF*cos(i)*d\omega<=1

Faced with this equation, I was struck. Where did that cosine come from?

Where did you come from? Where did you go? Where did you come from Cosine I Joe.

My first instinct was to brush it off. “It’s just the Lambert cosine factor”. False! The Lambert cosine factor was applied in Li which we removed.

I then began my slow decline into madness.


I was very confused. With a lot of stumbling I came up with various answers, each not as satisfying as I would have liked.

Radiant exitance, I yelped! Is it related to this "physics" and this "Poynting vector" that you find when researching radiant flux? It might be, but I'm not proficient in the dark magic of physics. And so I moved on.

Can we instead think of it using the Helmholtz reciprocity principle that says that we can reverse our view and light vectors? This would allow us to flip the meaning of cos(i) from our viewing direction to our light direction and then simply say it’s the Lambertian cosine factor. But this didn’t feel right. We’re not talking about Lambertian materials, we’re talking about accumulating light from a generalized material. I decided to move on from this theory.

Finally, it struck me. The circle had done it! It’s always the circle.

Let’s forget everything we talked about. Forget about incoming radiance. Forget about 3D. Forget about Helmholtz. Forget about physics.

It’s just geometry

Let’s take a look at what we’re doing when we’re calculating how much light is being emitted by a surface in 2D.

If we start with the integration of the BRDF across a 2D circle without the cosine factor, we get:

\int_{\Omega}BRDF*dS

Where dS is our integration segment.

What this means is that we're splitting the circle into equal segments and multiplying our emitted light by the width of each viewing segment.


Let's take a look at each integration part in succession. Our integration method implies that we want to multiply our current light value by the width of our integration; as a result, we'll draw 2 parallel lines for each line segment to represent the shape that results from our multiplication.

When viewed top down, everything is great!

[Figure: top-down integration]
What about at an angle?

[Figure: angled integration]
Oops! That doesn't line up with our observed area; our perceived area contains more than our desired dA. What do we do about it?

Let's look at it from the perspective of dA projecting onto a line at different angles:

[Figure: dA projected onto a line at increasing angles]

Notice how our area is getting smaller as our angle gets steeper? That’s what our integration segment should be doing to remove the extra area that we observed.

Let’s figure out the proportion of that shrinking area.

Notice that we can calculate the length of our projection using a right angle triangle (It always comes back to triangles).


[Figure: the right angle triangle formed by dA, dP and the angle i]

Our hypotenuse is dA, our angle is i and our projected area is dP.
Remembering our identities (SOH CAH TOA), the sine of our angle is equal to the opposite over the hypotenuse. This gives us dP/dA = sin(i), which we can reformulate as dP = sin(i)*dA.

dP here is the area of our integration segment; it is the value by which we want to multiply our accumulated light. Now that we know how to get dP, we want to formulate dA in terms of our integration segment dS.

Since we know that dS from the top down (when i is \pi/2) matches dA, we can say that dS = sin(\pi/2)*dA, which simplifies to dS = dA. As a result, we can replace dA in the above to get dP = sin(i)*dS.

We can therefore modify our integral to be \int_{\Omega}BRDF*dP, and substituting dP we get \int_{\Omega}BRDF*sin(i)*dS. Close! But why is it a sine and not a cosine? This is because our angle i is measured from the surface plane up to our viewing direction, but the information that we have is the angle from our normal to our viewing direction (using N dot V). Let's call the angle from the normal to the viewing direction \theta. We know that \theta=\pi/2-i. Solving for i gives i=\pi/2-\theta, and substituting, we get \int_{\Omega}BRDF*sin(\pi/2-\theta)*dS, which finally becomes \int_{\Omega}BRDF*cos(\theta)*dS!
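Summarizing the substitution in one place:

dP = sin(i)*dS

i = \frac{\pi}{2}-\theta

dP = sin(\frac{\pi}{2}-\theta)*dS = cos(\theta)*dS

\int_{\Omega}BRDF*dP = \int_{\Omega}BRDF*cos(\theta)*dS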

[Figure: the integration corrected by the cosine factor]

We’ve reached the end. This is where our cosine comes from!

Final thoughts

I spent quite some time trying to unravel this mystery, but it’s always exciting to get just that little bit of extra intuition. Hopefully this helps you understand it just a little bit more!

Side notes

Q: Why don’t we always do this cosine when integrating over a hemisphere?

A: Because we're not always integrating a differential area (dA). It would be wrong to apply that rule if we're instead integrating a smaller hemisphere or some other geometric concept.

Q: Why don’t we multiply our light by this cosine factor when we’re rendering?

A: Because it’s implicit in our perspective projection. When we’re viewing the surface it already covers less of our viewing surface when viewed at an angle. If we applied the cosine when rendering we would be applying it twice. We apply it here because we don’t have any perspective projection applied when doing our integration.
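As one last worked example, plugging the Lambertian BRDF \frac{\rho}{\pi} (with albedo \rho) into the energy conservation test from the start of the post:

\int_{\Omega}\frac{\rho}{\pi}*cos(i)*d\omega = \frac{\rho}{\pi}\int_0^{2\pi}\int_0^{\frac{\pi}{2}}cos(i)sin(i)di d\phi = \frac{\rho}{\pi}*\pi = \rho

which satisfies the inequality whenever the albedo \rho<=1.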


A Quest Towards Intuition – Why is depth interpolated as 1/z? — August 27, 2019


Premise

I have recently been attempting to improve my understanding of perspective projection. This included a variety of topics such as deriving a perspective projection matrix and understanding interpolation of vertex shader outputs in perspective. However, one topic that evaded me was the surprising result that depth is interpolated as 1/z instead of z.


Think Piece – Expecting Failure — June 3, 2019
Creating an Optimized Transform Hierarchy — April 14, 2019
Sowing the seeds of speed – Part 5 – The Finale (For now?) — March 16, 2019


Recap of part 4

In part 4, we looked at SIMD for computations, more quantization, software prefetching and a bug fix. This time, we’re going to be looking at some more bug fixing, more SIMD and some surprising performance impacts!

Bug fixes

While I was debugging a new optimization for part 5, I ran into a few bugs in the program…

Farmer and crop removal had a bug in it. When we were removing these elements, we would loop from the start of the removed indices and copy the last element from the back of the array onto the current index we wanted removed. The problem with this solution is that if our last element was itself scheduled for removal, we would copy it onto the current index even though it isn't a valid object anymore! Instead, if we loop from the back of the removal list, we will only ever replace our current index with an element that is guaranteed to be valid, because the indices are ordered.
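As a simplified sketch of the fixed removal, with stand-in names and assuming the removal indices are stored in ascending order:

// Walk the removal list from the back. The element pulled from the end of the
// array is then either the removed index itself or an element that is not
// pending removal, since every remaining removal index is smaller.
for (int32_t i = (int32_t)removalCount - 1; i >= 0; i--)
{
    uint32_t r = removalIndices[i];
    farmers[r] = farmers[farmerCount - 1];
    farmerCount--;
}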

Farmer and crop removal also had a bug in the second call to simd_moveMaskToIndexMask. Before the fix, the call would mask out the lower 8 bits from the indexMask and pass the upper bits through unshifted. This is not correct, because the function itself only works with the lower 8 bits! As a result, the indices would all be the same. To fix this, I simply shifted the indexMask by 8 bits to the right.

Before:

simd_moveMaskToIndexMask(indexMask & 0xFF00);

After:

simd_moveMaskToIndexMask((indexMask & 0xFF00) >> 8);

It would probably have been best to take a uint8_t as an argument instead of an unsigned int but I had misunderstood the functionality of the _pdep_u64 intrinsic used in simd_moveMaskToIndexMask.

Finally, there was also a bug in converting the result of a 16 bit compare to a move mask. Because there is no movemask intrinsic for 16 bit integers, I had used _mm256_packs_epi16(a, b), which I believed converted the 16 bit integers to 8 bit integers at their respective elements. However, I had not realized that it actually works within 128 bit lanes! The first 8 elements were from a, the next 8 from b, then the next 8 from a and the last 8 from b. I expected the format to be a, a, b, b. As a result, the movemask would end up incorrect!

Instead, I modified the code to execute the movemask on the 16 bit results and then converted the mask to an 8 bit movemask.

__m256i cmpRes = _mm256_cmpgt_epi16(zeroI, lifetime);
uint32_t moveMask = _mm256_movemask_epi8(cmpRes);
int indexMask = _pext_u32(moveMask, 0x55555555) & bitMask;

The magic happens in _pext_u32(moveMask, 0x55555555). Because we're creating a movemask from 16 bit integers and not 8 bit integers, each comparison result contributes two bits! If the result of our compare was 0xFFFF, 0x0000, 0xFFFF, 0xFFFF then our movemask would be 11001111, which is not correct! We want our movemask to look like 1011. As a result, I used _pext_u32. _pext_u32 takes the bits corresponding to the set bits in the mask (0x55555555) and packs them from least significant bit to most significant bit. This means that we're taking all our even bits and packing them. Because 0x55555555 in binary is 01010101..., we're taking x1x0x1x1 from our movemask and packing it to 1011! (More cool bit tricks can be found here)

As a result of all these changes, the performance of the program degraded. I believe this is because we previously weren't adding all the indices to our removal list, and the indices we did add weren't all correct! Now that we actually loop through all the correct indices, chrome://tracing says that our average ai_tick performance degraded from 1.67ms to 1.87ms. Our average tick is now 2.75ms instead of 2.6ms.

Bucket farming

At this point we’ve made our program very efficient by improving the speed at which we access and modify our data. But now, it’s gotten quite a bit harder to improve the performance of our algorithm this way.

Instead, we’re going to modify our algorithm. According to VTune, most of our time is spent decrementing timers and moving farmers. We’re going to be tackling the timers.

The timers are currently all decremented at a rate of 16ms per tick; that means at times we can be decrementing 1 million timers per tick! Instead of doing this, we can do something much better: we can group our timers into buckets and only decrement the global bucket timers instead of the timers themselves. Then, in order to retain our fine grained timing, we keep track of which bucket requires fine grained decrementation and we only decrement that bucket!

Looking at our farm state, we can see that it decrements a timer in the range of 3 seconds to 5 seconds. In order to split this up, I decided to use 6 buckets, where each bucket holds the timers of a specific time range.

We start off with an index indicating which bucket needs to be finely decremented. We then increment our global bucket timer and decrement the timers in the bucket referenced by our index. Once our timer reaches 1s, we reset the timer, advance our index by 1 and modulo by the number of buckets in order to get it to wrap around.

Say our current fine bucket index is 5 and our timer reaches 1s; we advance our index by 1 to 6, and since 6 % 6 is 0, bucket 0 becomes our next fine decrementation bucket. We can guarantee this property because we place the timers in the buckets based on their number of seconds.

int16_t farmerTimer = rand_range(AI_FarmerFarmSpeedMin, AI_FarmerFarmSpeedMax);
uint32_t bucketSecond = (farmerTimer + AI_FarmersFarmBucketTransitionTimer) / AI_TimePrecision;
// Assumed for this excerpt: the target bucket is the current fine bucket
// advanced by the timer's whole seconds, wrapped around the bucket count.
uint32_t bucketIndex = (AI_FarmersFarmFineBucketIndex + bucketSecond) % AI_FarmerFarmBucketCount;
uint32_t bucketFarmerCount = AI_FarmersFarmHotBucketCounts[bucketIndex];

AI_FarmersFarmHotBuckets[bucketIndex][bucketFarmerCount] = farmerTimer - bucketSecond * AI_TimePrecision;
AI_FarmersFarmHotBucketCounts[bucketIndex]++;

Looking at chrome://tracing indicates that ai_tick now runs at an average of 1.7ms, down from our original 1.8ms. Interestingly, game_gen_instance_buffer now runs 0.1ms slower. I wonder if this is a result of ai_tick not allowing game_gen_instance_buffer to complete its work in the background.

[VTune profile screenshot]

Looking at VTune indicates that we're now 75% back-end bound in ai_tick and we're now retiring almost 30% of our instructions; very good results from bucketing the farm timers.

Doing the same modification to the search timers doesn’t produce exceptional results, but I will keep this modification in order to keep things consistent.

More quantization!

Now that our timers are only in the range of 0s to 1s, we can quantize our data even more! This time I'm going to quantize our values to a precision of 10ms. This drops our precision significantly, but since these timers drive AI, I think the cost is appropriate.

As a result, I changed all the farmer timers to int8_t and changed AI_TimePrecision to 10ms. With this change, chrome://tracing notes that ai_tick now runs at an average of 1.25ms, down from 1.7ms! However, once again, game_gen_instance_buffer slowed down from 0.9ms to 1.17ms…

Big thanks to Zack Dawson (twitter) for the idea!

Streaming again?

When we left off, game_gen_instance_buffer was mostly using memcpy to copy all of our data from one position buffer to another. At the time, I had decided to use memcpy because memcpy is very fast and the location being written to was not guaranteed to be aligned with the location being read from, which kept me from using _mm256_stream_si256.

As a result, I decided to write the last element of an array multiple times in order to align the write index to a 64 byte boundary. This gave me the opportunity to use the stream intrinsics for our homegrown memcpy:

void simd_streamMemCpy(__m256i* dstWrite, __m256i* srcRead, size_t size)
{
    if (size == 0)
    {
        return;
    }

    __m256i* dstEnd = dstWrite + (size >> 5);
    for (; dstWrite <= dstEnd; dstWrite += 2, srcRead += 2)
    {
        __m256i src256 = _mm256_stream_load_si256(srcRead);
        _mm256_stream_si256(dstWrite, src256);
        __m256i src256_2 = _mm256_stream_load_si256(srcRead + 1);
        _mm256_stream_si256(dstWrite + 1, src256_2);
    }
}

Which compiles to:

    test    rdx, rdx
    je      .LBB0_3
    and     rdx, -32
    add     rdx, rdi
    xor     eax, eax
.LBB0_2:
    vmovaps ymm0, ymmword ptr [rsi + rax]
    vmovntps        ymmword ptr [rdi + rax], ymm0
    vmovaps ymm0, ymmword ptr [rsi + rax + 32]
    vmovntps        ymmword ptr [rdi + rax + 32], ymm0
    lea     rcx, [rdi + rax]
    add     rcx, 64
    add     rax, 64
    cmp     rcx, rdx
    jbe     .LBB0_2
.LBB0_3:
    vzeroupper
    ret
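For reference, the "write the last element multiple times" padding mentioned above could look something like this sketch; writeIndex and the scales array are stand-ins, not the repo's exact names:

// Repeat the last value until the write cursor lands on a 64 byte boundary
// (32 uint16_t elements), so the streaming copy can rely on aligned stores.
while ((writeIndex % 32) != 0)
{
    buffer->scales[writeIndex] = buffer->scales[writeIndex - 1];
    writeIndex++;
}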

Taking a look at chrome://tracing shows us that gen_instance_buffer now runs at an average of 1ms instead of 1.1ms, saving us an additional 0.1ms.

And VTune:

[VTune profile screenshot]

Both our slowest functions are now taking less than 1s!

Can we avoid reading the tile stage?

At first, in order to avoid reading the tile stage and to be able to store the index without having to access the memory, I thought storing a collection of the unplanted indices might be effective. However, I had missed the obvious fact that in order to get an index from this collection, I would have to read that memory instead. This added complexity simply slowed down ai_tick by 0.2ms.

As a result, I simply changed the tile from a 32 bit integer to an 8 bit integer, giving speedups in ai_tick but slowing down game_gen_instance_buffer again.

The return of game_gen_instance_buffer

At this point, game_gen_instance_buffer is running at an average of 1ms per tick. Still slower than the original 0.9ms from part 4. As a result, we’re going to tackle it some more with quite a few modifications.

First, we changed the type of the sprite indices from uint16_t to uint8_t. Since the range of values for these indices is 0 to 11, a uint8_t has plenty of room for them.

The next modification was quite a bit more complex than the uint8_t modification. This change takes into account that the state of the farmers will dictate what sprite index will be used to render the farmers and that all farmers share the same scale.

In order to keep the blog post from becoming all about this optimization, I’m only going to go through the explanation for the scales.

The buffer generation is very simple: I write every scale into the array in the order of tiles, crops, farmers. The number of tiles is constant, so we don't need to worry about it. The number of crops, however, is variable. This means that where our farmer scales start and end is determined by the number of crops rendered that tick.

While rendering, I keep track of a writing index. This index determines where the next sprite instance will be rendered.

Like this:

int writeIndex = 0;

__m256i farmerScale = _mm256_set1_epi16(AI_FarmerScale);

if (Gen_PreviousFarmerScaleStart == 0)
{
    Gen_PreviousFarmerScaleEnd = writeIndex + AI_FarmerCount;
    Gen_PreviousFarmerScaleEnd += 64 - (AI_FarmerSearchCount % 64);
    Gen_PreviousFarmerScaleEnd += 64 - (AI_FarmerMoveCount % 64);

    simd_memSetToValue((__m256i*)(buffer->scales + writeIndex), farmerScale, (Gen_PreviousFarmerScaleEnd - writeIndex) * sizeof(uint16_t));
    Gen_PreviousFarmerScaleStart = writeIndex;
}
else
{
    uint32_t newEnd = writeIndex + AI_FarmerCount;
    newEnd += 64 - (AI_FarmerSearchCount % 64);
    newEnd += 64 - (AI_FarmerMoveCount % 64);

    if (newEnd > Gen_PreviousFarmerScaleEnd)
    {
        uint32_t extra = Gen_PreviousFarmerScaleEnd % 64;
        uint32_t writeLoc = Gen_PreviousFarmerScaleEnd - extra;
        simd_memSetToValue((__m256i*)(buffer->scales + writeLoc), farmerScale, (newEnd - Gen_PreviousFarmerScaleEnd + extra) * sizeof(uint16_t));
    }

    if (writeIndex < Gen_PreviousFarmerScaleStart)
    {
        simd_memSetToValue((__m256i*)(buffer->scales + writeIndex), farmerScale, (Gen_PreviousFarmerScaleStart - writeIndex) * sizeof(uint16_t));
    }

    Gen_PreviousFarmerScaleEnd = newEnd;
    Gen_PreviousFarmerScaleStart = writeIndex;
}

And that’s it! This change adds quite a bit of complexity to our generation code but the results are worth it!

chrome://tracing indicates that our average game_gen_instance_buffer tick is now 0.72ms, faster than our original performance! ai_tick also runs at an average of 1.2ms and our average tick is now 1.9ms.

[VTune profile screenshot]

VTune confirms the improvement to game_gen_instance_buffer.

More bug fixes…

At this point, I realized that I hadn't run the game outside of profile mode in a while… I had gotten too enthralled in the performance side of things. And when I ran it… nothing rendered…

It turns out I had made a mistake very early on when I changed the format of Game_InstanceBuffer.

As a refresher, our buffer looks like this:

typedef struct
{
    uint8_t spriteIndices[GAME_MAX_INSTANCE_COUNT];
    uint16_t scales[GAME_MAX_INSTANCE_COUNT];
    uint16_t positionX[GAME_MAX_INSTANCE_COUNT];
    uint16_t positionY[GAME_MAX_INSTANCE_COUNT];
} Game_InstanceBuffer;

The code was set up in such a way that I would copy the data needed for all the rendered instances with this call:

sg_update_buffer(Render_DrawState.vertex_buffers[0], Render_InstanceBuffer, (sizeof(uint16_t) * 3 + sizeof(uint8_t)) * renderInstances);

This doesn't work! If our renderInstances is 1, we're uploading the first 7 bytes of the buffer, all of them from the spriteIndices array, rather than one element each from spriteIndices, scales, positionX and positionY! This caused the GPU to only get a little bit of the data that it needed and the rest was completely missing…
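The post doesn't show the eventual fix, but a minimal sketch of one option is to upload the entire struct so that every array's used range reaches the GPU:

sg_update_buffer(Render_DrawState.vertex_buffers[0], Render_InstanceBuffer, sizeof(Game_InstanceBuffer));

This uploads more data than strictly needed, but it is simple and correct; the actual fix in the repo may be more precise.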

This could be seen as a lesson in testing all aspects of your code…

One last hoorah

At this point, it was very clear to me that I had to address the movement code. The problem is that we have to access every target position and velocity in order to move the farmers.

One approach that I attempted was to store the future positions of the farmers into N buffers, read from those, and only process a fraction of the active move farmers each tick while using the buffered positions for the rest. A rough prototype of this approach, however, showed that it was slower than simply processing all the farmers, due to needing to touch quite a bit of memory to store the future positions.

Another consideration was to bucket the farmer movements by cardinal direction. If a farmer was close enough to a cardinal direction, it would use that direction as an approximation of its velocity. This attempt only managed to provide speedups when a large portion of the farmers were bound to predefined velocities, and it quickly introduced visual artifacts. As a result, although potentially viable with a large number of predefined velocities, this solution didn't seem viable for this project.

After these two attempts, I’m going to call it for this project. We made some excellent progress and I learned a ton in the process. I hope you did as well!

Where are we now?

Looking at chrome://tracing tells us that we're now at an average tick of around 2ms from our original 42ms: 21 times faster than our original performance! We tackled a lot, from memory access patterns, quantization, SIMD and software prefetching to non-temporal stores. As a result, however, our program went from ~400 lines to almost 1k lines of code. Our program is now harder to change and harder to read, but code cleanliness was not one of our concerns here.

I had great fun and I hope you did too!

Until next time!

Find the repo on github here!

Sowing the seeds of speed – Part 4 – Achievement unlocked — March 6, 2019


Part 3 Recap

In part 3 we looked at quantization, field_crop memory access, dirty draw commands, SIMD for memory and non-temporal stores. In part 4, we’re going to be looking at SIMD for computation, more quantization and software prefetching.

Back at it again

As always, let’s take a look at our VTune profile.

[VTune profile screenshot]

It seems that we're fairly back-end bound, though not nearly as much as we used to be. At this point, it's not entirely clear where we could be improving the performance of ai_tick; VTune tells us that most of our operations are fairly efficient.

As a result, I considered that SIMD might be the best approach to improve the performance of this function. It does a lot of computations and if we can make use of the larger SIMD registers for loading as well, we could be more efficient.

To introduce SIMD, we have quite a lot of setup to do.

Currently, most code that requires positions looks like this:

typedef struct
{
    Vec2 tilePos;
} AI_FarmerMoveStateHot;

What we need to do, is to split up these structures into an X and Y component array.

Like this:

static float* AI_FarmersMoveHotX = NULL;
static float* AI_FarmersMoveHotY = NULL;

There's a very simple reason for doing this. We could load our registers with 2 vec2s ([x1, y1, x2, y2]), but then all our following operations would only be working on 2 positions. By splitting the positions into 2 arrays, we can instead load 4 positions at once and process those elements at the cost of a few more registers ([x1, x2, x3, x4], [y1, y2, y3, y4]).

It also takes away the need to do horizontal adds. (Horizontal adds will add the elements of a vector across a register. For a vector of [x1, y1, x2, y2] it would be x1 + y1 + x2 + y2). We want to avoid doing horizontal adds because their performance is inferior to our usual vector operations. (See here)

Another approach to processing 4 positions at once is to shuffle 4 of our vectors into the X only, Y only ([x1, x2, x3, x4] [y1, y2, y3, y4]) format but this is additional code that we can avoid because we have complete control over the layout of our data.

A few of the changes we had to make were:

Changing the instance buffer to have a positionX and positionY array to facilitate our memcpy in the instance buffer generation.

typedef struct
{
    int16_t spriteIndicesAndScales[GAME_MAX_INSTANCE_COUNT * 2];
    float positionX[GAME_MAX_INSTANCE_COUNT];
    float positionY[GAME_MAX_INSTANCE_COUNT];
} Game_InstanceBuffer;

Removal of the vec2 structure: everything is an array of floats now. Our usage of vec2s is completely erased, and our movement code is now purely floating point math.

float farmerX = AI_FarmersMoveHotX[i];
float farmerY = AI_FarmersMoveHotY[i];
float genFarmerX = AI_FarmersMoveGenX[i];
float genFarmerY = AI_FarmersMoveGenY[i];

float dirVecX = farmerX - genFarmerX;
float dirVecY = farmerY - genFarmerY;
float mag = sqrtf(dirVecX * dirVecX + dirVecY * dirVecY);

float velX = dirVecX * velMag / mag;
float velY = dirVecY * velMag / mag;
AI_FarmersMoveGenX[i] = genFarmerX + velX;
AI_FarmersMoveGenY[i] = genFarmerY + velY;

Finally, another change was the removal of the position stored in the tiles and the crops. This was done because we can determine the position of a tile or crop from its tile index. This removes 32 bytes from these structures at the cost of a little more computation.

typedef struct
{
    float grown;
    float lifetime;
    uint32_t cropType;
    uint32_t tileIndex;
} Field_Crop;

typedef struct
{
    Field_Stage stage;
} Field_Tile;
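Since the field is a regular grid, recovering a position from a tile index is just a division and a modulo. A quick sketch, where Field_Width (the field's width in tiles) is an assumed name, not the repo's actual constant:

// Sketch: recover a 2D tile coordinate from a flat tile index.
uint32_t tileX = tileIndex % Field_Width;
uint32_t tileY = tileIndex / Field_Width;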

These changes had a better performance impact than I expected.

[VTune profile screenshot]

ai_tick improved by one second from 5.3s to 4.1s and field_tick doubled in performance from 1.5s to 0.7s.

Looking at chrome://tracing, ai_tick improved from 4.4ms to 4.1ms, field tick improved from 0.7ms to 0.5ms and our overall tick improved from 6.6ms to 6.1ms.

Link to this change is here.

The SIMD-fication

Here is how our movement code started after we split our positions into X and Y arrays:

float farmerX = AI_FarmersMoveHotX[i];
float farmerY = AI_FarmersMoveHotY[i];
float genFarmerX = AI_FarmersMoveGenX[i];
float genFarmerY = AI_FarmersMoveGenY[i];

float dirVecX = farmerX - genFarmerX;
float dirVecY = farmerY - genFarmerY;
float mag = sqrtf(dirVecX * dirVecX + dirVecY * dirVecY);

float velX = dirVecX * velMag / mag;
float velY = dirVecY * velMag / mag;
AI_FarmersMoveGenX[i] = genFarmerX + velX;
AI_FarmersMoveGenY[i] = genFarmerY + velY;

if (velMag > mag)
{
    // Move is complete
}

The first step to SIMD-ing this code is the linear algebra:

__m256 farmerX = _mm256_load_ps(AI_FarmersMoveHotX + i);
__m256 farmerY = _mm256_load_ps(AI_FarmersMoveHotY + i);
__m256 genFarmerX = _mm256_load_ps(AI_FarmersMoveGenX + i);
__m256 genFarmerY = _mm256_load_ps(AI_FarmersMoveGenY + i);

__m256 dirVecX = _mm256_sub_ps(farmerX, genFarmerX);
__m256 dirVecY = _mm256_sub_ps(farmerY, genFarmerY);
__m256 rmag = _mm256_rsqrt_ps(_mm256_add_ps(_mm256_mul_ps(dirVecX, dirVecX), _mm256_mul_ps(dirVecY, dirVecY)));

__m256 velX = _mm256_mul_ps(dirVecX, _mm256_mul_ps(velMag, rmag));
__m256 velY = _mm256_mul_ps(dirVecY, _mm256_mul_ps(velMag, rmag));
_mm256_store_ps(AI_FarmersMoveGenX + i, _mm256_add_ps(genFarmerX, velX));
_mm256_store_ps(AI_FarmersMoveGenY + i, _mm256_add_ps(genFarmerY, velY));

This looks like a lot, but it maps almost exactly 1 to 1 to our original code. You might notice that instead of using sqrt, we're using rsqrt. The reason for this is simple: the reciprocal of a square root has a fast approximation that reduces the precision of our operation but also improves our performance. rsqrt has a latency of around 5 cycles, whilst sqrt has a latency of 13. (See the Intel Intrinsics guide and Wikipedia)

The next step is more challenging. We now have 8 magnitudes to compare. Before SIMD we could have a simple branch, but now this has become more difficult.

One approach would be to loop through each element and do a single comparison… This is alright, but it’s not very “SIMD”, we’re not processing multiple elements at once anymore.

INSTEAD

We’ll be doing our comparison and storing the indices of the magnitudes that passed our comparison. (There’s an excellent collection of slides on doing this from Andreas Fredriksson detailing how to do this here.)

We're currently using AVX for our movement code; this allows us to process 8 elements at once instead of 4, which means we also get access to additional intrinsics that we would not typically have access to with SSE.

Let’s take a look at the code and break it down line by line:

// Thanks to https://stackoverflow.com/questions/36932240/avx2-what-is-the-most-efficient-way-to-pack-left-based-on-a-mask for this great algorithm
// Uses 64bit pdep / pext to save a step in unpacking.
__m256i simd_moveMaskToIndexMask(unsigned int mask /* from movmskps */)
{
    uint64_t expanded_mask = _pdep_u64(mask, 0x0101010101010101);  // unpack each bit to a byte
    expanded_mask *= 0xFF;    // mask |= mask<<1 | mask<<2 | ... | mask<<7; ABC -> AAAAAAAABBBBBBBBCCCCCCCC...: replicate each bit to fill its byte

    const uint64_t identity_indices = 0x0706050403020100;    // the identity shuffle for vpermps, packed to one index per byte
    uint64_t wanted_indices = _pext_u64(identity_indices, expanded_mask);

    __m128i bytevec = _mm_cvtsi64_si128(wanted_indices);
    __m256i shufmask = _mm256_cvtepu8_epi32(bytevec);

    return shufmask;
}

int bitMask = (1 << math_min(previousFarmerMoveCount - i, 8)) - 1;

__m256 cmpRes = _mm256_cmp_ps(rvelMag, rmag, _CMP_LT_OQ);
int indexMask = _mm256_movemask_ps(cmpRes) & bitMask;

__m256i indices = _mm256_set_epi32(i, i, i, i, i, i, i, i);
__m256i indexAdd = simd_moveMaskToIndexMask(indexMask);
indices = _mm256_add_epi32(indices, indexAdd);

_mm256_storeu_si256((__m256i*)(AI_FarmerRemovalIndices + removedFarmerCount), indices);
removedFarmerCount += _mm_popcnt_u32(indexMask);

First we need to mask out the comparison of elements that aren’t valid farmers. We can create a bit mask of valid farmers like so:

int bitMask = (1 << math_min(previousFarmerMoveCount - i, 8)) - 1;

This line simply determines how many farmers we're processing in this iteration. (If we were at index 8 but only had 14 farmers, we would get 14 - 8 = 6, meaning we're processing 6 farmers, so we create a mask with the first 6 bits set.)

Then, we do our comparison:

__m256 cmpRes = _mm256_cmp_ps(rvelMag, rmag, _CMP_LT_OQ);
int indexMask = _mm256_movemask_ps(cmpRes) & bitMask;

This compares our velocity against our magnitude, similarly to how we did it before. When a velocity's magnitude is higher than our remaining distance, the corresponding element in the result register has all of its bits set. (If the magnitudes are [0, 2, 3, 4] and the velocities are [1, 1, 1, 1], the resulting vector would be [0xFF, 0, 0, 0].) This isn't of much use to us yet; we need to convert this register into a mask where the bits that passed the comparison are set. (For the previous example, the value would be 1000.)

At this point, we want to be able to take this mask and convert it to a collection of packed indices.

We want to pack our indices to the left of a register because we want to be able to store only the indices that we want to remove from the collection of farmers. Currently our mask has a value, maybe 01001100, but what we want is a register with the values i + 3, i + 4, i + 7.

To do this, we call simd_moveMaskToIndexMask which is a modified version of the code found here. This function will return a collection of packed integers as a result of the mask provided. (for the mask 01001100 we would get 3, 4, 7)

Then we can take these values and add them to i to get our removed farmer indices.

__m256i indices = _mm256_set_epi32(i, i, i, i, i, i, i, i);
__m256i indexAdd = simd_moveMaskToIndexMask(indexMask);
indices = _mm256_add_epi32(indices, indexAdd);

_mm256_storeu_si256((__m256i*)(AI_FarmerRemovalIndices + removedFarmerCount), indices);

Using the previous example, our resulting vector would be [i + 3, i + 4, i + 7, don’t care, don’t care, don’t care…] We don’t care about the following elements because they didn’t pass the comparison.

And then we move our write index forward by the number of farmers removed:

removedFarmerCount += _mm_popcnt_u32(indexMask); // _mm_popcnt_u32 counts the number of bits set in a u32

Finally we can just loop through our removed farmer indices:

for(uint32_t i = 0; i < removedFarmerCount; i++)
{
    int r = AI_FarmerRemovalIndices[i];
}

That’s it! We SIMD-ed our movement.

Let’s take a look at VTune!

[VTune profile screenshot]

We've shaved off another second from ai_tick, from 4.1s to 3.2s. chrome://tracing says that our ai_tick is now 3.3ms from 4.1ms. The average tick has improved from 6.1ms to 5.2ms.

You might notice that we're more back-end bound and our CPI has increased. This is to be expected: SIMD instructions have a higher CPI, but that's fine since we're now processing 8 elements at a time instead of 1. As for being more back-end bound, since we're now processing data so quickly, I suspect we're not doing enough computation to hide the latency of the memory loads. I'm not quite sure how to address this.

Link to this change is here.

Field_tick is back?

Before I SIMD everything else, there’s one change to be made to field_tick.

The Field_Crop structure looks like this:

typedef struct
{
    float grown;
    float lifetime;
    uint32_t cropType;
    uint32_t tileIndex;
} Field_Crop;

Looking at this structure, we don't actually need both lifetime and grown! We can use a single float representing how long the crop has left to live; instead of incrementing and comparing against grown, we decrement and compare against zero. (We will also split the Field_Crop structure; it will simplify our SIMD code.)

typedef struct
{
    uint32_t cropType;
    uint32_t tileIndex;
} Field_Crop;

float* Field_CropLifetimes;

With this simple change, field_tick has improved from 0.49ms to 0.052ms… I doubt we will be looking at field_tick anytime soon.

Link to this change is here.

The Great SIMD-fication

Following the field_tick improvement, I SIMD-ed the rest of the game code. chrome://tracing states that ai_tick improved from 3.3ms to 2.8ms. VTune notes that the total time slightly improved from 3.2s to 2.8s.

[VTune profile screenshot]

Link to this change is here.

The Useless Optimization

I then went on to try using the streaming instructions _mm_stream_ps and _mm_stream_load_si128 in an attempt to speed things up by not evicting anything I might need from the cache.

…This had no effect…

My reasoning behind the performance not improving was that I had no data to evict that I might need in the first place. And since I still had to read the data, I couldn’t have the benefit of being able to write without loading.

Quantization Again? That’s half the truth

At this point, I couldn't just "SIMD" everything and get more performance (we already did that!). But I ran into something interesting: AVX has a series of float to half conversion intrinsics that can be used to store our data in half the size of our previous data. (For the curious, half is a floating point format that takes up half the size of a typical float (16 bits); more information can be found here.)

This is very similar to the quantization I implemented in part 3 where we take a range of values and map them to an integer range.

The modifications for this were simple.

I added 2 new macros that did all the work for me on load and store:

#define SIMD_FLOAT_TO_HALF(f) _mm_extract_epi16(_mm_cvtps_ph(_mm_set_ss(f), _MM_FROUND_NO_EXC), 0)
#define SIMD_LOAD_PH_TO_PS(a) _mm256_cvtph_ps(_mm_load_si128((__m128i*)(a)))

And then simply changed the data types from floats to 16 bit integers for storage.
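As a hypothetical usage sketch (reusing the earlier array names, now assumed to hold 16 bit halves):

// Store one float as a 16 bit half.
AI_FarmersMoveHotX[i] = (uint16_t)SIMD_FLOAT_TO_HALF(farmerX);
// Load 8 halves back, expanded to packed single-precision floats.
__m256 farmerX8 = SIMD_LOAD_PH_TO_PS(AI_FarmersMoveHotX + i);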

This change had some beneficial effects on all of the gameplay code. According to chrome://tracing ai_tick went from 2.8ms to 2.6ms, field_tick went from 0.052ms to 0.022ms and game_gen_instance_buffer went from 1.5ms to 0.8ms. On average, the tick improved from 4.4ms to 3.4ms!

Link to this change is here.

More failures

At this point, I tried a variety of other things to improve performance:

  • I attempted to store field tiles in 4 bits instead of 8 bits. This was a detriment to the performance overall. I assume that the load needed to write to half a byte might have affected the CPU's ability to send off the write without having to load anything into registers.
  • I attempted to store the crop type in the tile index uint8_t instead of storing it as another uint32_t, but this also reduced performance.

The prefetching chronicles

At this point, it got harder to make things faster. I was out of “tricks” for improving the performance. And so I turned to VTune. According to VTune I was still highly back-end bound, especially in the farmer removal code where farmers are transitioned from state to state.

A potential reason why removal is causing our code to be back-end bound is that we have a layer of indirection that keeps track of the indices that we want to remove from the farmer list. After processing all the farmers, we loop through this collection of removed indices and remove the farmers associated with those indices.

The problem with this indirection is that our memory access patterns are slightly irregular and our hardware prefetcher can't predict where our next farmer will be. As a result, I decided to turn to software prefetching.

What is software prefetching?

Software prefetching is a mechanism built into modern CPUs that allows you to hint that you would like some memory at some address to start being prefetched for you. It's similar to hardware prefetching in the sense that it will go gather memory for you, but it differs by allowing you to ask for memory with a more irregular access pattern.

A potential downside of asking for memory to be prefetched is that it might evict something from the cache that is currently in use or our cache line might be evicted before we even get to it. (A few sources on software prefetching can be found here and here)

Software prefetching is very tricky to get right, and I don’t know all of the intricacies of using it, but it made sense for the removal of farmers.

The change was simple. The code for transitioning a moving farmer to a farming farmer looked like this:

for(uint32_t i = 0; i < removedFarmerCount; i++)
{
    int r = SIMD_FarmerRemovalIndices[i];

    AI_FarmersFarmHot[AI_FarmerFarmCount + i] = rand_range(AI_FarmerFarmSpeedMin, AI_FarmerFarmSpeedMax);
    AI_FarmersFarmCold[AI_FarmerFarmCount + i] = AI_FarmersMoveCold[r];
    AI_FarmersFarmGenX[AI_FarmerFarmCount + i] = AI_FarmersMoveGenX[r];
    AI_FarmersFarmGenY[AI_FarmerFarmCount + i] = AI_FarmersMoveGenY[r];

    AI_FarmersMoveHotX[r] = AI_FarmersMoveHotX[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveHotY[r] = AI_FarmersMoveHotY[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveCold[r] = AI_FarmersMoveCold[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveGenX[r] = AI_FarmersMoveGenX[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveGenY[r] = AI_FarmersMoveGenY[AI_FarmerMoveCount - 1 - i];
}

And now the code looks like this:

for(uint32_t i = 0; i < removedFarmerCount; i++)
{
    int r = SIMD_FarmerRemovalIndices[i];

    int nextRemoval = SIMD_FarmerRemovalIndices[i + 2];
    _mm_prefetch((const char*)(AI_FarmersMoveCold + nextRemoval), _MM_HINT_T0);
    _mm_prefetch((const char*)(AI_FarmersMoveGenX + nextRemoval), _MM_HINT_T0);
    _mm_prefetch((const char*)(AI_FarmersMoveGenY + nextRemoval), _MM_HINT_T0);

    AI_FarmersFarmHot[AI_FarmerFarmCount + i] = rand_range(AI_FarmerFarmSpeedMin, AI_FarmerFarmSpeedMax);
    AI_FarmersFarmCold[AI_FarmerFarmCount + i] = AI_FarmersMoveCold[r];
    AI_FarmersFarmGenX[AI_FarmerFarmCount + i] = AI_FarmersMoveGenX[r];
    AI_FarmersFarmGenY[AI_FarmerFarmCount + i] = AI_FarmersMoveGenY[r];

    AI_FarmersMoveHotX[r] = AI_FarmersMoveHotX[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveHotY[r] = AI_FarmersMoveHotY[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveCold[r] = AI_FarmersMoveCold[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveGenX[r] = AI_FarmersMoveGenX[AI_FarmerMoveCount - 1 - i];
    AI_FarmersMoveGenY[r] = AI_FarmersMoveGenY[AI_FarmerMoveCount - 1 - i];
}

Notice the addition of _mm_prefetch? Those are the SSE prefetch intrinsics. They take a hint that notes which cache level you would prefer the requested memory to land in. I decided that it should go to L1 because I'm going to use it very soon.

You might notice that the nextRemoval index is not the next one, but the second one after. The reason for this is that asking for the very next one would most likely not complete the prefetch before the next iteration. Through some experimentation, I found that two ahead was an effective distance for the prefetch.

This change was coupled with the next change. As a result, we won’t take a look at chrome://tracing just yet.

More quantization?

While looking for some ways to improve the performance of the farmers, I came upon an idea.

If I convert the timers from 16 bit halves to 16 bit integer values, I can load 16 timers at a time instead of 8. Currently we're limited to loading 8 at a time because we have to expand the halves to floats before we can operate on them. If instead we load and store them as 16 bit integers, we can use the full width of the 256 bit registers.

We store these values as integers by multiplying the floating point value by 1000 and storing the result in an int16_t. Instead of decrementing floating point values, we reduce the precision of our numbers and treat our 16 bit integers as the number of milliseconds left. This limits us to a range of about 32s at 1ms precision, which is enough precision for a farmer brain.

With this simple change, our timer code now looks like this:

__m256i farmerSearchTimer = _mm256_load_si256((__m256i*)(AI_FarmersSearchHot + i));
farmerSearchTimer = _mm256_sub_epi16(farmerSearchTimer, delta256);
_mm256_store_si256((__m256i*)(AI_FarmersSearchHot + i), farmerSearchTimer);

And that’s it!

With these changes in hand, chrome://tracing says that ai_tick is now 1.78ms from 2.7ms, and crops are now 0.013ms from 0.021ms. This leaves us with an average tick of 2.7ms, achieving our goal of reaching less than 3ms.

Link to this change is here.

The bug fix that saved the world

As I was thinking about the program during the week, and trying to think of different ways to improve the performance, I came upon a bug!!

In the index packing code, we were only packing the first 8 indices and forgetting the next 8. This caused our removal code to access uninitialized indices and (spoiler) severely affected the performance of our ai_tick code!

int indexMask = _mm256_movemask_ps(cmpRes) & bitMask;

__m256i indices = _mm256_set_epi32(i, i, i, i, i, i, i, i);
__m256i indexAdd = simd_moveMaskToIndexMask(indexMask);
indices = _mm256_add_epi32(indices, indexAdd);

_mm256_storeu_si256((__m256i*)(AI_FarmerRemovalIndices + removedFarmerCount), indices);
removedFarmerCount += _mm_popcnt_u32(indexMask);

Looking at this code, you'll notice that a __m256i can only hold 8 32 bit integers. However, we're processing 16 timers, and since we're telling our removal count that we're removing up to 16 timers, this causes us to loop over too much memory.

The quick fix for this problem was simply to handle an additional 8 indices:

__m256i indices = _mm256_set_epi32(i, i, i, i, i, i, i, i);
__m256i indexAdd = simd_moveMaskToIndexMask(indexMask & 0x00FF);
indices = _mm256_add_epi32(indices, indexAdd);

_mm256_storeu_si256((__m256i*)(SIMD_FarmerRemovalIndices + removedFarmerCount), indices);
removedFarmerCount += _mm_popcnt_u32(indexMask & 0x00FF);

__m256i next8Indices = _mm256_set_epi32(i + 8, i + 8, i + 8, i + 8, i + 8, i + 8, i + 8, i + 8);
__m256i next8IndexAdd = simd_moveMaskToIndexMask(indexMask & 0xFF00);
next8Indices = _mm256_add_epi32(next8Indices, next8IndexAdd);

_mm256_storeu_si256((__m256i*)(SIMD_FarmerRemovalIndices + removedFarmerCount), next8Indices);
removedFarmerCount += _mm_popcnt_u32(indexMask & 0xFF00);

And the results are a slight improvement in performance. chrome://tracing says that ai_tick now runs at an average of 1.67ms instead of 1.7ms.

Link to this change is here.

That’s it for part 4!

In this part, we looked at:

  • SIMD computations
  • Half floating point
  • More quantization
  • Software prefetching

Our program now runs at an average of 2.6ms per tick, achieving our goal of less than 3ms per tick!

In part 5, we'll wrap up the series: we're going to tackle game_gen_instance_buffer's alignment issues, attempt to use the AVX gather instructions to potentially vectorize our removal loop, and reduce the number of timers that we need to decrement every tick!

Follow the repo here!

Until next time!