Fundamentals of Radiance Cascades

(m4xc.dev)

115 points | by ibobev 4 days ago

9 comments

  • Negitivefrags 4 days ago
    This approach was originally developed by Alex Sannikov for Path of Exile 2.

    Of course in PoE2 it's used in full 3D.

    The benefit of this approach is that you get global illumination at a constant cost for the entire scene, and because it doesn't use any temporal accumulation, it has zero latency as well.

    This means you can rely on it as the lighting for fast effects. For example: https://www.youtube.com/watch?v=p1SvodIDz6E

    There is no traditional "lighting" added to these two effects. The light on the nearby surfaces is indirect light from the GI solution that you get for free by just spawning the particles. That means all effects just naturally emit light with no extra work from artists.

    On my GPU (which is admittedly a 4090) the GI solution runs in 0.8ms for a 4k scene in "High" quality. This is exactly what it will always cost, no matter what scene you choose.

    • midnightclubbed 4 days ago
      One caveat: working in screen space means anything offscreen doesn't contribute to lighting. For a fixed-perspective dungeon or 2D game it works great, but a light source behind a column or in another room will cast no light on the scene.
      • jasonjmcghee 4 days ago
        There are multiple ways to solve this issue. The most naive would be rendering to a larger texture and cropping, but with radiance cascades you can do better: only render what's necessary based on per-cascade interval lengths. Depending on the accuracy you need, you could handle it similarly to ambient light, calculating it only for the largest cascade. It wouldn't be perfect, but it could feel pretty natural.

        Definitely an area that could use more research!

      • Log_out_ 1 day ago
        Not entirely true; you can handle the column with a second depth map rendered from a twin camera at max depth, with the light sources.
    • isaacimagine 4 days ago
      From what I understand, PoE2 has a fixed-perspective camera, so radiance is calculated in screenspace (2D) as an optimization (and not using a 3D texture). It would be interesting to see how the performance of this technique scales to full 3D, as that would be significantly more expensive / require a lower-resolution voxelization of the scene.
      • Negitivefrags 4 days ago
        The source data is indeed from screen space, but this generates a full 3D data structure by ray marching the depth buffer. It's not using the 2D algorithm. In PoE2's case, it doesn't need to go further, since light sources are not usually going to be behind the camera.

        You can gather data in other ways.
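
        For anyone curious, the general shape of a depth buffer ray march looks something like the Python sketch below (illustration only, not PoE2's actual implementation):

          import numpy as np

          def march_depth_buffer(depth, origin, direction, steps=64):
              """Step a screen-space ray across a depth buffer and report
              the first texel whose stored depth is closer than the ray,
              i.e. where the ray has passed behind visible geometry."""
              pos = np.asarray(origin, dtype=float)    # (x, y, z) screen space
              step = np.asarray(direction, dtype=float) / steps
              h, w = depth.shape
              for _ in range(steps):
                  pos += step
                  x, y = int(pos[0]), int(pos[1])
                  if not (0 <= x < w and 0 <= y < h):
                      return None          # ray left the screen: no data
                  if depth[y, x] < pos[2]:
                      return (x, y)        # hit: geometry occludes the ray
              return None                  # no hit within this interval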

      • corysama 4 days ago
        From offhand comments I've read, you are right. It's not practical for 3D.
    • TinkersW 3 days ago
      It looks fine, but I have yet to see any examples scaled up to an outdoor scene where you can see for miles (I don't even know if this is possible?).

      Also, the article this post links to says this is diffuse only... which is kinda less impressive, as specular is also very important.

      I assume this means they are using a diffuse model that is view-direction independent, i.e. Lambert... which is a rather crummy diffuse model; the better diffuse models are view-dependent.

    • stonethrowaway 4 days ago
      In the age of PoE is Diablo even relevant anymore?
  • ozarker 4 days ago
    The guy who developed the technique works on Path of Exile. They're using it for PoE2; he gave an awesome talk about it here:

    https://youtu.be/TrHHTQqmAaM?si=xrW0XT2lsGHqUYY_

  • arijo 4 days ago
    I'm trying to understand the basic idea of Radiance Cascades (I don't know much about game development and ray tracing).

    Is the key idea the fact that light intensity and shadowing require more resolution near the light source and lower resolution far from it?

    So you have higher probe density near the light source, then relax it as distance increases, minimising the number of radiance collection points?

    Also using interpolation eliminates a lot of the calculations.

    Does this make any sense? I'm sure there's a lot more detail, but I was looking for a bird's eye understanding that I can keep in the back of my mind.

    • pornel 4 days ago
      Essentially yes.

      There's ambient occlusion, which computes light intensity with high spatial resolution but completely handwaves the direction the light is coming from. OTOH there are environment maps that are rendered from a single location, so they have no spatial resolution, but have precise light intensity for every angle. Radiance Cascades observes that these two techniques are two extremes of a spatial vs angular resolution trade-off, and that it's possible to render any spatial vs angular trade-off in between.

      Getting information about light from all angles at all points would cost (all sample points × all angles), but Radiance Cascades computes and combines (very few sample points × all angles) + (some sample points × some angles) + (all sample points × very few angles), which works out to be much cheaper, and is still sufficient to render shadows accurately if the light sources are not too small.
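
      As a rough back-of-the-envelope (a minimal Python sketch with made-up resolutions, not any engine's real settings):

        # Illustrative cost comparison; the numbers are invented.
        # Naive: every sample point stores radiance for every direction.
        probes = 256 * 256          # spatial sample points
        angles = 1024               # angular samples per point
        naive_rays = probes * angles

        # Radiance Cascades: each level halves spatial resolution per axis
        # and quadruples angular resolution, so rays-per-level is constant.
        cascade_rays = 0
        w, h, dirs = 256, 256, 4    # finest cascade: many probes, few angles
        for level in range(6):
            cascade_rays += w * h * dirs
            w, h, dirs = w // 2, h // 2, dirs * 4

        print(naive_rays)    # 67108864
        print(cascade_rays)  # 1572864 -- each level costs the same 262144 rays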

      • arijo 3 days ago
        So I've been reading

        https://graphicscodex.com/app/app.html

        and

        https://mini.gmshaders.com/p/radiance-cascades

        so I could have a basic grasp of classical rendering theory.

        I made some assumptions:

        1. There's an isometric top-down virtual camera just above the player

        2. The Radiance Cascades stack on top of each other, increasing probe density as they get closer to the objects and players

        I suspect part of the increased algorithm efficiency results from:

        1. The downsampling of radiance measuring at some of the levels

        2. At higher probe density levels, ray tracing to collect radiance measurements involves less computation than classic long path ray tracing

        But I'm still confused about what exactly in the "virtual 3D world" is being downsampled and what the penumbra theory has to do with all this.

        I've gained a huge respect for game developers though - this is not easy stuff to grasp.

        • pornel 3 days ago
          Path tracing techniques usually focus on finding the most useful rays to trace, concentrating on the rays that hit a light (importance sampling).

          RC is different, at least in 2D and screen-space 3D. It brute-force traces fixed sets of rays in regular grids, regardless of what is in the scene. There is no attempt to be clever about picking the best locations and best rays. It just traces the exact same set of rays every frame.

          Full 3D RC is still too expensive beyond voxels with Minecraft's chunkiness. There's SPWI RC that is more like other real-time raytracing techniques: traces rays in the 3D world, but not exhaustively, only from positions visible on screen (known as Froxels and Surfels elsewhere).

          The penumbra hypothesis is the observation that hard shadows require high resolution to avoid looking pixelated, but soft shadows can be approximated with bilinear interpolation of low-res data.

          RC adjusts its sampling resolution to be the worst resolution it can get away with, so that edges of soft shadows that are going from dark to light are all done by interpolation of just two samples.
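
          To make that scaling concrete (my own illustration, not from the article): a light of nonzero size casts a penumbra that widens roughly linearly with distance from the occluder, so the farther out a cascade's rays start, the coarser its probe grid can be:

            # Illustrative penumbra-hypothesis scaling (invented numbers).
            # A light of relative size s casts a penumbra of width ~ s * d
            # at distance d, so samples spaced ~ s * d apart still suffice
            # to reconstruct the shadow edge with bilinear interpolation.
            light_size = 0.05
            for level in range(5):
                interval_start = 4 * 2 ** level   # rays start farther out
                spacing = light_size * interval_start
                print(f"cascade {level}: rays start at {interval_start}px, "
                      f"probe spacing can be ~{spacing:.1f}px")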

          • arijo 2 days ago
            Thanks for taking the time to provide more details on how radiance cascades work.
    • rendaw 4 days ago
      I didn't get it either, but I found this, which seems to be a much better introduction to the fundamentals of radiance cascades: https://mini.gmshaders.com/p/radiance-cascades

      IIUC basically you have a quad/oct-tree of probes throughout the area of screen space (or volume of view frustum?). The fine level uses faster measurements, and the broad level uses more intensive measurements. The number of levels/fineness determines resolution.

      I guess for comparison:

      - Radiance cascades: complexity based on resolution + view volume; can have leakage and other artifacts

      - Ray tracing: complexity based on number of light sources, screen resolution, and noise reduction; has noise

      - RTX: ??

      - Radiosity: complexity based on surface area of scene

      Also not sure, but I guess ray tracing + radiosity are harder to do on the GPU?

      • jasonjmcghee 1 day ago
        No octree/quadtree. It's just a stacked grid of textures, halving resolution (or otherwise reducing it) each level. Low-resolution layers capture many rays (say 4096 at the lowest resolution) over longer distances at low spatial resolution; high-resolution layers capture fewer rays (as few as 4) over shorter distances at high spatial resolution. When you merge them all together, you get cheap, soft shadows and a globally illuminated scene. In the post I wrote, I calculated it's similar in cost to firing 16 rays with a classic path tracer, but in theory it should look similar to 4096 rays (or whatever the highest cascade layer uses), with softer shadows.

        Depending on your approach, the geometry of the scene is completely irrelevant. (With fixed-step / DDA marching it truly is; JFA + distance fields have some dependence due to circle marching, but are largely independent.)
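
        If it helps, the merge step boils down to something like the Python sketch below (deliberately simplified: a single probe and no bilinear interpolation between probes, which a real implementation needs):

          def merge(near, far_levels):
              """near: per-direction (radiance, transmittance) for this
              cascade's ray interval. far_levels: coarser cascades, each
              with 4x the directions of the one before it."""
              if not far_levels:
                  return [radiance for radiance, _ in near]
              # Resolve the coarser cascades first, farthest inward.
              far = merge(far_levels[0], far_levels[1:])
              merged = []
              for i, (radiance, transmittance) in enumerate(near):
                  # Each near direction covers 4 far directions; average them.
                  far_avg = sum(far[4 * i : 4 * i + 4]) / 4
                  # Far light is attenuated by whatever the near ray hit.
                  merged.append(radiance + transmittance * far_avg)
              return merged

          # Two tiny cascades: 2 directions near, 8 directions far.
          cascade0 = [(0.0, 1.0), (0.2, 0.0)]   # second ray hit an occluder
          cascade1 = [(1.0, 1.0)] * 8           # uniform far-field light
          print(merge(cascade0, [cascade1]))    # [1.0, 0.2]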

      • arijo 3 days ago
        Thanks for the analysis and the insights - I guess I'll have to parse all of this a bit at a time :)
  • ezcrypt 3 days ago
    I also recommend the video by SimonDev: Exploring a New Approach to Realistic Lighting: Radiance Cascades <https://www.youtube.com/watch?v=3so7xdZHKxw>

    (I recently discovered his channel, and I like his calm down-to-earth explanations of game dev techniques).

  • cubefox 4 days ago
    This article unfortunately presupposes understanding of (ir)radiance probes, a topic on which there isn't even a Wikipedia article...
    • stonethrowaway 4 days ago
      This is true for quite a few graphic algorithms today. Certain terms in the equations used are pre-computed, and you can generalize all of them in a sense to be “probes”, “queries”, “function inversions”, “look up tables” or whatever. The only real way to know what can be computed on the fly vs what can be stored is to go ahead, try to implement it, think really fucking hard, and then see what you can do in real time. That’s basically how the field has advanced the formulas used into something that can run in real time, or can run in screen space complexity rather than scene complexity, and so on.

      An equation given to you will usually come in one of two forms: the raw equation, perhaps straight from optics, in which case expect an ass load of calculus and differential equations with arcane terms not seen outside of Maxwell's equations; or, in the more likely case, the presenter is a PhD ubermensch nerd who has already pulled the terms out, rewritten them, and is presenting them, and you need to pay really close attention to every single word of their talk to figure it out. It's at your discretion to determine which of the two forms is easier for you to deal with.

  • abetusk 3 days ago
    Am I correct in thinking that this approach effectively makes real time GI a solved problem?

    Can someone more knowledgeable than me explain why it took so long to discover a method like this? I remember hearing about real-time GI over 25 years ago.

    • pixelpoet 3 days ago
      It does not make realtime GI a solved problem.
      • abetusk 13 hours ago
        Could you go into more detail?
  • wowxserr 4 days ago
    There's a cover on my webcam, but it doesn't fully cover the lens. Would this technology be able to infer something from the radiance of the light seeping in around the edges?
    • pornel 4 days ago
      This is a rendering technique designed for real-time graphics, and it's not applicable to that kind of image analysis. It does what has already been possible with ray tracing, but with an approximation cheap enough to run in real time.

      However, the technique has been used to speed up astrophysics calculations:

      https://arxiv.org/pdf/2408.14425

  • justin66 4 days ago
    They're waiting for you, Gordon. In the test chamber.
    • rustcleaner 4 days ago
      >it's saturday morning 1998 and you are in your coziest sleepwear, on the computer in the living room.

      I WANT TO GO BAAACK!

    • DaiPlusPlus 4 days ago
      > In the test chamber.

      Flashbacks to sitting the GRE

    • davesque 4 days ago
      Exactly what I thought of :).
  • gclawes 4 days ago
    Resonance cascade?
    • speed_spread 4 days ago
      A gamble, but we needed the extra resolution.