The last two weeks haven’t had as many revelations code-wise as some previous weeks, but there’s been plenty of development on the script, trailer and logo fronts. The code has moved along too, and there are a couple of possibly interesting bits below, along with over-explanatory descriptions and code to back them up.
The bees in BeeBeeQ can fly/crawl all over the environment, and they can sometimes get into small, barely lit areas. When this happens, the crushing darkness makes it pretty tricky for the player to navigate their way out, so I decided to get in some eye adaptation / dynamic exposure control, which would boost the exposure in dark areas, making them appear brighter and easier to read. Starting off with some research on HDR and exposure techniques, I found it’s all interesting stuff and thankfully not the black magic I was expecting. Joey de Vries’ page about the math and theory of HDR was very informative, as was John Hable’s post about the tonemapping curve used in Uncharted 2.
The modern standard way to do this is:
1. Render the scene to an HDR buffer.
2. Average the luminosity of all pixels to approximate the amount of light entering the camera.
3. Derive an exposure from that average and divide (in some way) each pixel’s original value by it.
4. Use a tonemapping curve to bring the HDR values into LDR range.
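To make the steps concrete, here’s a rough sketch of steps 2–4 as a CG fragment shader, assuming the HDR scene has already been downsampled on the GPU to a 1x1 average-luminosity texture (`_AvgLum` and the 0.5 key value are my own placeholder names/numbers, and I’ve used a simple Reinhard curve for brevity):

```cg
sampler2D _MainTex;   // step 1's HDR scene buffer
sampler2D _AvgLum;    // 1x1 texture holding the average luminosity

float4 frag(v2f i) : SV_Target
{
    float3 hdr = tex2D(_MainTex, i.uv).rgb;             // HDR scene colour
    float avgLum = tex2D(_AvgLum, float2(0.5, 0.5)).r;  // step 2's average
    float exposure = 0.5 / max(avgLum, 0.001);          // key value over average
    float3 exposed = hdr * exposure;                    // step 3
    return float4(exposed / (1.0 + exposed), 1.0);      // step 4: Reinhard tonemap
}
```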
Intuitively this is an expensive operation, performed individually for each camera, and not the right approach for a 90 fps, 5-player split screen; I need it to be faster.
Getting Fast Approximate Exposure
I really liked Shadow of the Colossus’s exposure zones (and Valve did something similar for Half-Life 2: Lost Coast). These areas were set up ahead of runtime with exposure values, then at runtime, when the camera entered a zone, the exposure would animate to the new values. That sounds like a fast-performing solution, but most implementations seem to only take position into account, and I imagine they took a long time to set up; in BeeBeeQ I’m trying to avoid as much time-consuming setup as possible in favour of automation.
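For reference, the zone approach could be sketched in Unity as a hand-authored trigger volume carrying an exposure value; everything here (the class, `_Exposure`, the blend speed) is a hypothetical illustration, not any of those games’ actual code:

```csharp
using UnityEngine;

// One of these sits on each hand-placed trigger volume, authored ahead of runtime.
public class ExposureZone : MonoBehaviour
{
    public float exposure = 2f;    // exposure value authored for this zone
    public float blendSpeed = 1f;  // how quickly exposure animates to the new value

    void OnTriggerStay(Collider other)
    {
        if (other.GetComponent<Camera>() == null) return;
        // Animate the global exposure toward this zone's value while the camera is inside.
        float current = Shader.GetGlobalFloat("_Exposure");
        Shader.SetGlobalFloat("_Exposure",
            Mathf.MoveTowards(current, exposure, blendSpeed * Time.deltaTime));
    }
}
```

Note this only takes position into account, which is exactly the limitation mentioned above.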
I realized Unity already provides approximations of incoming light in any direction, in the form of light probes: these store indirect (and optionally direct) light for specific positions and any direction in the scene, in a way that can be interpolated. Unity provides a function to get an interpolated probe but neglected to add a function to decode it, and though the shader side is accessible by downloading the built-in shaders or looking in the .cginc files, the process of converting a SphericalHarmonicsL2 into the seven float4 values they use in the shader is not documented. Luckily for me, Bas-Smit figured it out and shared it on the Unity forums.
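Since that conversion is undocumented, here’s a sketch of the commonly circulated mapping from a SphericalHarmonicsL2 to the seven float4 constants the built-in shaders expect (unity_SHAr/g/b, unity_SHBr/g/b, unity_SHC); the exact swizzle is my reading of forum posts like Bas-Smit’s and should be verified against the built-in shader includes:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class SHUtil
{
    // Returns { SHAr, SHAg, SHAb, SHBr, SHBg, SHBb, SHC } for a single probe.
    public static Vector4[] ToShaderConstants(SphericalHarmonicsL2 sh)
    {
        var c = new Vector4[7];
        for (int i = 0; i < 3; i++) // i = red, green, blue channel
        {
            // Constant + linear terms (unity_SHA*)
            c[i] = new Vector4(sh[i, 3], sh[i, 1], sh[i, 2], sh[i, 0] - sh[i, 6]);
            // Quadratic terms (unity_SHB*)
            c[i + 3] = new Vector4(sh[i, 4], sh[i, 5], sh[i, 6] * 3f, sh[i, 7]);
        }
        // Final quadratic term across all three channels (unity_SHC)
        c[6] = new Vector4(sh[0, 8], sh[1, 8], sh[2, 8], 1f);
        return c;
    }
}
```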
So in OnPreRender I sample a probe’s luminosity and set a global shader value for exposure. This is then used at the end of our uber shader (we’re using a modified version of Valve’s The Lab Renderer), where exposure and tonemapping are applied; this allows us to perform full HDR and approximated exposure without requiring any HDR buffers or image effects. This could work fully on its own, since it’s possible for light probes to contain direct light, but in most setups probes only contain indirect light. In those cases the exposure to the direct light has to be added, so I decided to raycast from the camera to the light and add its contribution if the ray passed.
Here’s the code:
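(The original snippet isn’t reproduced here; below is a minimal sketch of the approach just described, where `_Exposure`, `adaptSpeed`, the 0.5 key value and the single-direction sampling are all my own assumptions rather than the shipped code.)

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class ApproximateExposure : MonoBehaviour
{
    public Light sun;             // the main directional light
    public float adaptSpeed = 1f; // eye-adaptation speed

    float exposure = 1f;
    readonly Vector3[] dirs = new Vector3[1];   // cached to avoid per-frame allocations
    readonly Color[] results = new Color[1];

    void OnPreRender()
    {
        // Ask Unity for the interpolated probe at the camera's position.
        SphericalHarmonicsL2 sh;
        LightProbes.GetInterpolatedProbe(transform.position, null, out sh);

        // Evaluate the probe along the view direction and take its luminosity.
        dirs[0] = transform.forward;
        sh.Evaluate(dirs, results);
        float lum = results[0].r * 0.2126f
                  + results[0].g * 0.7152f
                  + results[0].b * 0.0722f;

        // Probes usually hold only indirect light, so add the sun's
        // contribution if an unobstructed ray reaches it.
        if (sun != null && !Physics.Raycast(transform.position, -sun.transform.forward))
            lum += sun.intensity;

        // Adapt smoothly, then make the value available to every shader.
        float target = 0.5f / Mathf.Max(lum, 0.01f); // darker scene -> higher exposure
        exposure = Mathf.Lerp(exposure, target, Time.deltaTime * adaptSpeed);
        Shader.SetGlobalFloat("_Exposure", exposure);
    }
}
```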
And the modification to the shader; these are the last lines of the fragment program:
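(Again, the original shader lines aren’t reproduced here; this is a hedged sketch of applying the globally-set exposure followed by Hable’s Uncharted 2 curve. The curve constants are from Hable’s post cited above, while `_Exposure`, `finalColor` and the white point value are my own assumptions.)

```cg
float _Exposure; // set per frame from C# via Shader.SetGlobalFloat

static const float W = 11.2; // linear white point, per Hable's post

float3 Uncharted2Tonemap(float3 x)
{
    const float A = 0.15, B = 0.50, C = 0.10, D = 0.20, E = 0.02, F = 0.30;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - D / F;
}

// ...the last lines of the fragment program:
finalColor.rgb *= _Exposure;                         // approximated exposure
finalColor.rgb = Uncharted2Tonemap(finalColor.rgb)
               / Uncharted2Tonemap(float3(W, W, W)); // normalise so W maps to white
return finalColor;
```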
This whole exercise was very much a reminder of how awesome the gamedev community is: people share information and techniques so freely, and everyone moves the industry forward. If I revisit this code, I’d get rid of that GetComponent call.
Smoother Bee Pickup
I switched the bee pick-up mechanic back to a fixed joint from a spring joint. I had been working out how much stamina the bees should lose based on the stretch in the spring; this was supposed to encourage teamwork, since bees trying to move the same object in different directions would lose stamina at double the rate.
The problem I was having was with mass: if I picked up a piece of meat on a spring, I would expect the meat to bounce on the spring and my hand to stay pretty still. In BeeBeeQ the bees have a much lower mass than the meat, so they ended up bouncing back onto the meat when they tried to lift it. Now it’s a fixed joint, and stamina loss is based on the user’s input, the mass of the carried object and the number of bees carrying it. The result should be more predictable stamina loss, but more importantly smoother, more predictable movement.
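The new stamina rule as described could be sketched like this; the function name and the tuning constant are hypothetical:

```csharp
// Stamina drain scales with the player's input and the carried object's mass,
// and is shared across however many bees are carrying it.
float CarryStaminaDrain(float inputMagnitude, float carriedMass, int beeCount)
{
    const float drainRate = 0.5f; // hypothetical tuning constant
    if (beeCount <= 0) return 0f;
    return drainRate * inputMagnitude * carriedMass / beeCount;
}
```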
Bits and Bobs
BeeBeeQ was running slowly, so I dug into the profiler and found some OnWillRenderObject calls that were having an impact; moving their work into the update loop helped performance. I also removed any of my own code that was causing allocations, and for now it’s all back to 90 fps or more. And now some tools make velocity-based “swoosh” noises when the chef swings them around.
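The OnWillRenderObject fix is worth spelling out: in a 5-player split screen that callback fires once per visible camera, so any work in it can run five times per object per frame. A sketch of the change (names hypothetical):

```csharp
using UnityEngine;

public class PerFrameWork : MonoBehaviour
{
    Vector3 cachedValue;

    // Before: recomputed once per camera that renders this object,
    // up to five times a frame in split screen.
    // void OnWillRenderObject() { cachedValue = ExpensiveCalculation(); }

    // After: computed once per frame; rendering reads the cached result.
    void Update() { cachedValue = ExpensiveCalculation(); }

    Vector3 ExpensiveCalculation() { return transform.position; } // placeholder
}
```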
So that’s another couple of weeks, and I’ve written too much again. Thanks for reading, and I’ll try to make it a bit briefer next time.