Nvidia Says Real-Time Path Tracing Is On the Horizon, But What Is It?

1 year 11 months ago

Real-time path tracing, the next stage of game rendering, is on the horizon, Nvidia says. But wait--didn't RTX just get here? So what is path tracing, how is it different from ray tracing, and why does it matter? It's all about the way light works--both the way developers use it and the way gamers interact with it--and about turning a complex magic trick into simple physics with the quickly growing graphical horsepower we have in our PCs and consoles.


Whether they're in two or three dimensions, games are an illusion. When you're solving puzzles in Outer Wilds or dodging Margit's giant hammer in Elden Ring, there's a complicated tangle of mathematical magic tricks running in the background to make you think you're looking at a natural and organic thing. Behind the scenes, your GPU is doing billions of calculations per second to make it all move and function.

While games are more graphically impressive than ever, it would be easy to look at the last decade or so of games and say that graphical advancements have slowed down--we're not seeing the easily visible jumps in fidelity that came with the transition from 2D to 3D, or from basic 3D to more advanced rendering techniques. But the truth is that big stuff is happening behind the scenes that will change the way games are made and maybe even how we play them.

First, let's talk about the main rendering methods used to put games on our screens.

Rasterization, Tracing, and Light

Rasterization is the way games are rendered right now, and the way they've been rendered for decades--it'll most likely never go away completely. Rasterization is the act of rendering 3D models as 2D images. As explained by Nvidia, "objects on the screen are created from a mesh of virtual triangles… computers then convert the triangles of the 3D models into pixels on a 2D screen." Other processing, like anti-aliasing, is then applied to those pixels to show you the final product. On a 4K display, your GPU is calculating and displaying the color information for 8 million pixels, and then refreshing that data 30, 60, or even 144 times per second.
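To make the triangles-to-pixels idea concrete, here's a minimal sketch of rasterizing a single triangle onto a small pixel grid. It uses the edge-function test common to GPU rasterizers, but everything here (the function names, the tiny 10x10 grid) is illustrative, not how any particular GPU is implemented.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: which side of the edge A->B the point P falls on."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centers the triangle covers."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if the point lies on the same side of all three edges.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

# One triangle, a 10x10 "screen": the covered pixels are the 2D image.
pixels = rasterize_triangle((1, 1), (8, 2), (4, 8), 10, 10)
```

A real GPU runs this same kind of coverage test for millions of triangles in parallel, then shades each covered pixel.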

This is computationally intensive, and so developers use shortcuts to help speed things up so that our graphics cards don't choke on the pixels and just give up. For example, many games are rendered at a lower resolution and then upscaled, or rendered in a checkerboard pattern on your screen to cut down on the number of pixels the GPU has to worry about.
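The checkerboard shortcut is easy to see in miniature. This toy sketch assumes the common scheme of shading only half the pixels each frame, alternating the pattern frame to frame; real implementations also reproject the missing half from the previous frame using motion vectors, which is omitted here.

```python
def checkerboard_pixels(width, height, frame):
    """Pixels to actually shade this frame: alternating squares,
    with the pattern flipped on every other frame."""
    parity = frame % 2
    return [(x, y) for y in range(height) for x in range(width)
            if (x + y) % 2 == parity]

# Each frame shades only half the pixels; two consecutive frames
# together cover the whole 4x4 screen.
even = checkerboard_pixels(4, 4, 0)
odd = checkerboard_pixels(4, 4, 1)
```

The payoff is that the GPU shades half as many pixels per frame, at the cost of some artifacts when the reused half no longer matches what's on screen.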

As far as light goes, ray tracing and path tracing both simulate the way light bounces around a scene, while rasterization shows you what the game world would look like if, instead of bouncing, light simply stopped at the first thing it hit. The object is illuminated, but that illumination doesn't affect any of the other objects on the screen in front of you.

Stupid Computer Tricks

When I talk about games being an illusion, lighting is one of the biggest examples. In the real world, light is complicated. If you put a bright red apple on a white table, light bouncing off the apple will cast a red hue on the table below, while light bouncing off of the table will cast a white hue up onto the apple. If there are two lights in the room, the apple might cast two slightly different shadows on the table. Every light source and object can emit, reflect, scatter, or absorb light.
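The apple-on-a-table example comes down to simple multiplication: light that bounces off a surface gets tinted by that surface's color. This back-of-the-envelope sketch uses made-up RGB values in the 0..1 range and an arbitrary falloff factor; it's the idea behind color bleeding, not any engine's actual math.

```python
def bounce(light, surface_albedo, falloff=0.5):
    """Light reflected off a surface: tinted by the surface's color
    (albedo), dimmed by a simple falloff factor."""
    return tuple(l * a * falloff for l, a in zip(light, surface_albedo))

white_light = (1.0, 1.0, 1.0)
red_apple = (0.9, 0.1, 0.1)    # reflects mostly red
white_table = (0.9, 0.9, 0.9)  # reflects all channels about equally

# Light bouncing off the apple onto the table is red-tinted...
onto_table = bounce(white_light, red_apple)
# ...while light bouncing off the table up at the apple stays white-ish.
onto_apple = bounce(white_light, white_table)
```

Path tracing does this kind of calculation for many bounces per pixel, every frame; the pre-baked approach computes it once, offline, and stores the result in the scene.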

Simulating all of that has historically been out of the range of real-time graphics processing. So instead, much of this information is pre-baked into a scene. Game developers have gotten incredibly good at faking natural-looking lighting effects. If you've ever seen those videos from Japanese television of people pretending to play ping pong while other people in black suits move the players around, game lighting is kind of like that--there are a lot of manually crafted tricks going on in the background.

For example, to make sure your GPU has enough time to render everything on your display, stuff that you can't see is dropped or "culled." In real life, reflections reflect whether you're looking at them or not (though we can definitely get into some philosophical discussions about that if you want to). When rendering a game, though, those elements are ignored until they're on-screen again. That means that a dynamic light source or a reflective surface that's just off-screen might be skipped over--not rendered--until it's on-screen. You'll see that manifest as reflections suddenly appearing when you turn your game camera just slightly. If you look at a reflective water surface in just about any modern game, that's often the easiest place to spot this effect.

Additional effects are then added on after the fact. The shadows that your character casts are calculated separately from your character, and many PC games let you adjust this setting independently, from a blocky mess that's barely discernible as a shadow to a high-fidelity one that looks believable. This shadow isn't being calculated based on the exact placement of the light source and your character or object, though; it's more like an estimation of what the shadow would look like, rendered as a two-dimensional image on the ground below that object.
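One common way games estimate shadows is a shadow map: record, from the light's point of view, the nearest thing the light can "see," then shade any point that something else sits in front of. This is a deliberately tiny 1D sketch (an overhead light, columns instead of a full depth buffer) to show the idea, not a real renderer's shadow-mapping code.

```python
def build_shadow_map(occluders, width):
    """Overhead light: for each column, record the highest occluder top
    (the surface nearest to the light). Columns with no occluder stay open."""
    shadow_map = [float("-inf")] * width
    for x, top in occluders:
        shadow_map[x] = max(shadow_map[x], top)
    return shadow_map

def in_shadow(shadow_map, x, y):
    """A point is shadowed if something sits between it and the light above."""
    return y < shadow_map[x]

# One box occupying columns 2 and 3, with its top at height 5.0.
smap = build_shadow_map([(2, 5.0), (3, 5.0)], width=8)
```

Note what's missing: the shadow's shape depends only on this pre-recorded map, not on a fresh calculation of light against geometry each time, which is why shadow quality is a resolution setting you can dial up and down.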

Reflecting on Reflections

Reflections, meanwhile, are a separate thing altogether and have caused gamers plenty of confusion and consternation, sometimes even inspiring conspiracy theories. One example of this is in Marvel's Spider-Man for PlayStation 4 from developer Insomniac, in which gamers mistook what was just a reflection technique for a tribute to 9/11. Spider-Man's New York City is chock full of tall buildings covered in glass panels, and gamers expect to see reflections when they get close to a sheet of glass. With current rendering techniques, though, these reflections aren't actually being calculated. Instead, they're something called cube maps--literally, a cube-shaped image that simulates a reflection--and they're created by an artist. For a given area of the game, the artist might create cube maps for street-level windows and high-up windows, for day and night, and things like that.

Because most buildings are just tall boxes, this works most of the time, but it has its limits. In Spider-Man, if you crawl along certain buildings near "Ground Zero," the site of the 9/11 World Trade Center attacks, you can see the hazy image of two buildings. Some gamers initially believed this to be a quiet tribute to the Twin Towers. The truth, though, is that it was a cube map simulating reflections--a static, generic image embedded in the reflective object, rather than a genuine reflection of the surroundings.
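The "cube" part of a cube map is literal: six images form the faces of a box around the scene, and a reflection direction picks one face to sample. This sketch shows the standard face-selection rule (the axis with the largest absolute component wins); everything else about a real cube-map lookup, like computing coordinates within the face, is omitted.

```python
def cube_map_face(direction):
    """Pick which of the six cube faces a direction vector looks through:
    the axis with the largest absolute component decides."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

# A reflection pointing mostly upward samples the cube's top face.
face = cube_map_face((0.1, 0.9, 0.2))
```

The key point for Spider-Man's windows: whatever direction you look from, the lookup lands somewhere on the same six pre-made images, so the "reflection" never changes with the actual surroundings.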

These are the places where the seams in the illusion are visible: reflections, lighting, the edges of your visible play space. The more you play games, the more you can see how these illusions are good enough to make games look realistic, but they're hardly calculations done in real time.

Light, Simulated

That's where ray tracing comes in.

Author
Eric Frederiksen
