Environment Lighting & SIGGRAPH

4-minute read. Updated July 8, 2024.

[Embedded demo: <model-viewer> rendering HDR environments on a variety of PBR models.]

UPDATE: Second time's the charm! After my polite rejection from SIGGRAPH a few years ago (the original post on that is below), someone recommended that I submit a talk on the work I just finished with the Khronos standards group on color and tone mapping for 3D product rendering. This time it was accepted, so I'll be presenting in Denver at SIGGRAPH 2024. My talk is in a session called "Love Me Some Color", which is appropriate, since I've spent the last year-plus going very deep on color theory. You can find my talk's abstract here. Apparently the abstract of a 20-minute SIGGRAPH talk is, I kid you not, a two-page, two-column, single-spaced paper, complete with its own abstract! If you happen to be attending, please stop by and ask me hard questions at the end of my talk.

Original post:

I've been working on Google's open-source web component, <model-viewer>, for about a year and a half now, which is also how long I've been working on (or had even heard the term) physically-based rendering (PBR). I have to admit I was rather baffled by the term when I first heard it: how else would you color the pixels of a screen to show a 3D object than by using the physics of light? In case it weren't already obvious, this post is going to be a bit more technical than average; you have been warned.

Anyway, I was certainly no expert when I set about improving the accuracy of our rendering in <model-viewer> and three.js, the renderer upon which we built. As I reviewed the literature, I realized there was still room for improvement: most of the research had gone toward games and movies, which have very different requirements than e-commerce does. For us, moving lights and complex scenes are not the norm, but accurate renditions of materials are key, and the shape of reflections is a huge part of what keys our senses to the difference between, for instance, top-grain, suede, and faux leather.

I decided that since we wanted a simple interface for non-technical developers to set up a 3D scene, we should focus on a single type of lighting, the one most suited to showing realistic reflections: environmental, image-based lighting. Traditionally this has been used in combination with point or directional lights, as these allow simpler mathematics for creating sharp shadows, both from an object's own shape and from occlusion by other objects. Environment lights tended to need slow pre-processing, or else gave poor results when bright areas were included to represent spotlights. And don't get me started on spherical harmonics, which sound fancy but are basically equivalent to trying to represent a whole sphere with an 8x8-pixel JPEG.
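To put a rough number on that analogy: the spherical-harmonic representations commonly used for environment lighting keep only the lowest-frequency bands. With bands l ≤ 2 (the usual choice in the irradiance-map literature; a general statement, not anything specific to our renderer), the entire sphere of incoming light collapses to nine coefficients per color channel:

    % Low-order SH approximation of environment radiance:
    % bands l <= 2 give (2+1)^2 = 9 coefficients c_l^m per color channel.
    L(\omega) \approx \sum_{l=0}^{2} \sum_{m=-l}^{l} c_l^m \, Y_l^m(\omega)

Anything sharper than nine numbers can describe, like a small, bright spotlight, simply disappears.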

One of the first things I did after joining the <model-viewer> team was to invent and build a system that could render accurately even with extremely high dynamic range (HDR) environments (like those that include the sun) without needing offline preprocessing of the environment. This is key to our compatibility going forward, as it frees us to make any changes we need to the internal processing without changing the input environment format our users rely on. I wrote a paper on this, if anyone wants to know the details. You can find my Python analysis in this colab (please click the "Open with Google Colaboratory" button). [UPDATE: that colab now also includes my related analysis for applying environment lighting to materials with sheen.] Also, my code can be perused in three.js, here. See our comparisons to other real-time renderers that use slower offline environment preprocessing.
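If you'd like to poke at this from three.js directly (my changes live in its PMREMGenerator), here's a minimal sketch; the file name environment.hdr is a placeholder, and the import path assumes a recent three.js build:

    import * as THREE from 'three';
    import { RGBELoader } from 'three/addons/loaders/RGBELoader.js';

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    const scene = new THREE.Scene();

    // PMREMGenerator pre-filters the environment on the GPU at load time,
    // so the HDR file needs no offline preprocessing step.
    const pmremGenerator = new THREE.PMREMGenerator(renderer);

    new RGBELoader().load('environment.hdr', (hdrTexture) => {
      // Convert the equirectangular HDR into a pre-filtered mip chain.
      const envMap = pmremGenerator.fromEquirectangular(hdrTexture).texture;
      scene.environment = envMap; // picked up by all PBR materials
      hdrTexture.dispose();
      pmremGenerator.dispose();
    });

Because the pre-filtering runs at load time, any HDR environment the user supplies works as-is.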

Why did I write a paper? I suppose I wanted to prove that I still could, plus I figured that form might help certain experts take my ideas more seriously. Still, I hadn't published a paper in eight years, and certainly nowhere near this subject area. I decided to use an upcoming SIGGRAPH deadline as an excuse to get it finished. My hopes weren't particularly high for actually getting in, given that SIGGRAPH is pretty much the pinnacle of graphics publications, and sure enough, it was rejected.

Still, I have to admit that I was very impressed by SIGGRAPH's review process; it was certainly both faster and more thorough than anything I'd experienced in controls publications. I got feedback from half a dozen experts who had clearly read and understood my paper and who gave constructive criticism. Their primary gripe was that I hadn't done enough quantitative comparison to other techniques, which is absolutely true. Unfortunately, it's quite time-consuming to set up proper tests against other people's code, and I wasn't being paid for this kind of thing. Secondly, they weren't very impressed by my use case, as getting decent rendering on a cheap mobile phone doesn't really have the same pizzazz as making blockbuster stunts look real on the big screen.

They kindly recommended some other journals I might submit to next, but I was already remembering how thankless academic publishing always felt, especially when my employer couldn't care less. And then I remembered how, during my literature review, I'd found that the blogs of graphics experts often had better information than the published papers. So I've decided to follow that tradition here and post my paper on my blog in case anyone is interested. Honestly, it's intended for a vanishingly small audience anyway.

If you want to see the results, at the top of the page is <model-viewer> demonstrating some extremely high dynamic range environments on a variety of PBR models. My technique is rather useless if you want dynamically moving lights with their own shadows, but while those are common in games and movies, I think there's a wide swath of 3D rendering that doesn't call for that complexity. The nice thing about this method is that all lighting is handled consistently and in a single pass; any shape or number of lights can be painted into the environment without special treatment.
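To give a sense of how little setup that implies, here's a sketch of the markup; the model and environment file names are placeholders, and the CDN URL is just one common way to load the component:

    <script type="module"
            src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

    <!-- One HDR image supplies every light in the scene; no separate
         punctual lights or shadow passes are configured. -->
    <model-viewer src="chair.glb"
                  environment-image="studio.hdr"
                  skybox-image="studio.hdr"
                  camera-controls
                  alt="A chair lit by an HDR studio environment">
    </model-viewer>

Setting skybox-image as well makes the same environment visible as the background; omit it to light the model against a plain backdrop.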


Models from Khronos glTF samples repo, under individual CC licenses.
Environments from HDRI Haven, licensed under CC0.

Comments

  1. Your improvements to the Three.js PMREM, which originally relied on a hacky hierarchical GGX importance-sampling technique where it was always hard to figure out how many samples to take (e.g. https://github.com/mrdoob/three.js/pull/8237), to this more implicit approach using blurs were a great improvement! Faster and higher-quality results!

    I wonder if more performance numbers would have helped, as well as comparisons in terms of quality, especially against the GGX importance-sampling technique that is sort of the industry standard everywhere.

  2. Could you add studio lighting? Say, this one? https://polyhaven.com/a/studio_small_09

    Replies
    1. And perhaps the glTF sample models from 3D Commerce? (https://github.khronos.org/3DC-Sample-Viewer/)

    2. Better late than never? I added model-viewer's default lighting, which is a flavor of studio lighting. And I put a few more nice-looking sample models in, along with tone-mapping options.


