This week's ReSpec is a little different. I spent my week in sunny San Francisco attending the Game Developers Conference (GDC), running from session to session and trying to find some time to write.
In lieu of a regular column, I decided to post a sample of entries from the newly published ReSpec newsletter covering what I saw at GDC this week. If you would like this same newsletter delivered to your inbox every week, sign up now for exclusive content.
Path tracing is a lie
"Lie" may be too strong a word, but getting path tracing into a game is quite tricky. At GDC, I sat in on the path tracing sessions for both Cyberpunk 2077 and Alan Wake 2, and both discussed a common thread for leveraging path tracing in games meant to run in real time at playable frame rates. It's called ReSTIR Direct Illumination.
First, let's explain how path tracing works. Take a pixel and trace a ray from it out into the scene, away from the camera. The ray hits something and bounces off, and it keeps bouncing around the scene until it either disappears into the ether or ends up at a light source. Developers want paths that end at a light source, especially for calculating shadows.
The problem in any kind of real-time context is that this process is very expensive. Calculating all of these rays and all of their bounces eats up a lot of resources, even though only a small portion of the paths end up being useful. This is why path tracing has long been an offline technique: you need to calculate a huge number of possible paths and average them together.
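If you like seeing the idea in code, here's a toy sketch of that loop. The scene queries are stubbed out with made-up probabilities, so it only illustrates the shape of the algorithm: follow a path bounce by bounce until it finds a light or escapes, then average a pile of those random samples for a single pixel.

```cpp
// Toy sketch of the path tracing loop described above. The scene functions
// are hypothetical stand-ins (random probabilities), so this compiles and
// runs but only shows the control flow, not a real renderer.
#include <iostream>
#include <random>

struct Ray { float ox, oy, oz, dx, dy, dz; };

std::mt19937 rng{42};
std::uniform_real_distribution<float> uni(0.0f, 1.0f);

bool hitsLight(const Ray&)    { return uni(rng) < 0.1f; }  // some bounces land on a light
bool escapesScene(const Ray&) { return uni(rng) < 0.2f; }  // some paths fly off into the ether
Ray  bounce(const Ray& r)     { return r; }                 // pick a new random direction (stub)

// Follow one path from the camera and return the light it carries back.
float tracePath(Ray ray, int maxBounces = 8) {
    float throughput = 1.0f;
    for (int i = 0; i < maxBounces; ++i) {
        if (hitsLight(ray))    return throughput;  // path ended at a light source
        if (escapesScene(ray)) return 0.0f;        // path found nothing
        throughput *= 0.7f;                        // each bounce absorbs some energy
        ray = bounce(ray);
    }
    return 0.0f;
}

int main() {
    // One pixel's value is the average over many random paths.
    const int samples = 1024;
    float sum = 0.0f;
    for (int s = 0; s < samples; ++s)
        sum += tracePath(Ray{0, 0, 0, 0, 0, 1});
    std::cout << "pixel brightness ~ " << sum / samples << "\n";
}
```

The expense comes from that last loop: you need a lot of samples per pixel before the average stops looking like noise.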
That's not how Alan Wake 2 and Cyberpunk 2077 do it. For direct lighting, ReSTIR works by weighting the light sources in the scene and sampling only a select few of them. Those samples are then shared temporally (between frames) and spatially (with nearby pixels). In a game like Alan Wake 2, certain lights are weighted more heavily, such as the blue and red "cinematic" lights you see at the train station.
As a result, the image comes together much faster, at least fast enough that you can play the game at a reasonable frame rate with the right upscaling and frame generation.
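Here's a rough sketch of the weighted sampling idea ReSTIR builds on, known as reservoir sampling. The lights and weights below are invented for illustration, and real ReSTIR Direct Illumination goes further by merging these reservoirs with the previous frame and with neighboring pixels.

```cpp
// Minimal weighted reservoir sampling sketch: keep one light out of many,
// with brighter (more heavily weighted) lights more likely to be kept.
// The lights and weights are hypothetical; this is not the full ReSTIR DI
// algorithm, just the sampling idea it is built on.
#include <iostream>
#include <random>
#include <vector>

struct Light { float importance; };  // e.g. a bright "cinematic" light gets a higher weight

struct Reservoir {
    int   chosen    = -1;   // index of the light currently kept
    float weightSum = 0.f;  // running sum of all weights seen so far

    void consider(int index, float w, std::mt19937& rng) {
        weightSum += w;
        std::uniform_real_distribution<float> u(0.f, 1.f);
        // Keep the new candidate with probability w / weightSum.
        if (u(rng) * weightSum < w) chosen = index;
    }
};

int main() {
    std::mt19937 rng{7};
    // Hypothetical scene: three dim lights and one bright one.
    std::vector<Light> lights = {{1.f}, {1.f}, {8.f}, {1.f}};

    Reservoir r;
    for (int i = 0; i < static_cast<int>(lights.size()); ++i)
        r.consider(i, lights[i].importance, rng);

    // Only the single kept light gets shaded for this pixel this frame;
    // neighbors and previous frames would then merge their reservoirs with it.
    std::cout << "sampled light index: " << r.chosen << "\n";
}
```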
It's an interesting bit of tech that will hopefully become more common as developers of big-budget titles like Alan Wake 2 and Cyberpunk 2077 share their work.
Microsoft is focusing on upscaling
At GDC, Microsoft finally talked more about DirectSR, and it managed to get developers from AMD and Nvidia to sit on the same panel. Together, even! DirectSR isn't the end of the upscaling wars we originally thought it might be, but it does provide a unified framework for developers to add multiple upscaling options to their games.
Most of that comes down to inputs. When interfacing with DirectSR, developers provide a standardized set of inputs to the application programming interface (API). Those inputs can then be passed to a built-in upscaler such as AMD's FSR 2, or to variants that require specific hardware, such as Nvidia's DLSS.
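To make that concrete, here's a hypothetical sketch of what a unified upscaler interface could look like. None of these names come from the actual DirectSR API, which Microsoft hasn't published in detail yet; the point is simply that one standardized set of inputs can feed whichever upscaler happens to be available.

```cpp
// Hypothetical sketch of a unified upscaler interface: the game fills one
// standardized set of inputs and the framework routes them to a backend.
// All type and function names here are invented for illustration and are
// NOT the real DirectSR API.
#include <cstdint>
#include <iostream>

struct UpscaleInputs {                 // the per-frame data modern upscalers typically want
    const void* colorBuffer;           // rendered frame at lower resolution
    const void* depthBuffer;           // scene depth
    const void* motionVectors;         // per-pixel motion between frames
    float       jitterX, jitterY;      // sub-pixel camera jitter used this frame
    uint32_t    renderWidth, renderHeight;
    uint32_t    outputWidth, outputHeight;
};

enum class Backend { FSR, DLSS, XeSS };

// The game calls one function; the framework dispatches to the chosen backend.
void upscale(const UpscaleInputs& in, Backend b) {
    switch (b) {
        case Backend::FSR:  std::cout << "dispatching to FSR\n";  break;
        case Backend::DLSS: std::cout << "dispatching to DLSS\n"; break;
        case Backend::XeSS: std::cout << "dispatching to XeSS\n"; break;
    }
    std::cout << in.renderWidth << "x" << in.renderHeight
              << " -> " << in.outputWidth << "x" << in.outputHeight << "\n";
}

int main() {
    UpscaleInputs frame{nullptr, nullptr, nullptr, 0.25f, -0.25f, 1280, 720, 2560, 1440};
    upscale(frame, Backend::FSR);  // swap the backend without changing the inputs
}
```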
This is similar to Nvidia's own Streamline framework, which was built to accomplish the same thing before AMD decided not to participate. It seems Microsoft, a neutral third party in this fight, was able to bring everyone together.
We don't know yet what this will actually look like in games; DirectSR isn't even available to developers yet. Nothing may change for the end user, with graphics menus still showing multiple upscaling options. Or perhaps Microsoft will update Windows with a universal upscaling option that adapts to the hardware you're using. It's not clear, but DirectSR should make it easier for developers to implement upscaling of any kind in their games, whether that's DLSS, FSR, or Intel's XeSS.
One benefit that didn't occur to me at first is how this system handles updates. Nvidia, AMD, and Intel are constantly releasing new versions of their upscaling tech that slightly improve image quality or tweak how the upscaling works. With DirectSR, developers don't have to add each of these updates to their games; they just work across the API.
This is all a win in my book. Upscaling has become a point of contention, especially with big releases like Starfield and Resident Evil 4 supporting only a single upscaler at launch. The one drawback is frame generation, which doesn't appear to be part of DirectSR's vision at this point, so there will likely be plenty of back and forth between the major graphics brands going forward.
Death to the CPU? Not exactly
One of the most exciting announcements at GDC this year was Work Graphs. I talked about this in last week's newsletter, and I got a closer look at Work Graphs during Microsoft's DirectX State of the Union address. The idea is to reduce the burden on the CPU by having the GPU direct its own work.
There's a bit more nuance to it than that. Work Graphs give the GPU far more power to decide what to do, similar to when programmable shaders were first introduced on graphics cards. A work graph is made up of nodes, and those nodes can generate more nodes for the GPU to work on rather than waiting on work from the CPU. Microsoft described it as a compute shader that can launch other compute shaders.
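Here's a plain CPU-side analogy of that idea, not the actual Direct3D 12 Work Graphs API: the CPU seeds the graph once, and a root node then queues up records for other nodes on its own instead of waiting for the CPU to issue more work.

```cpp
// Conceptual analogy of Work Graphs in ordinary C++: nodes that generate
// work for other nodes. This is not the real D3D12/HLSL Work Graphs API,
// just an illustration of "a compute shader that can launch other
// compute shaders."
#include <deque>
#include <iostream>
#include <string>

struct Record { std::string node; int payload; };

int main() {
    std::deque<Record> queue;
    queue.push_back({"root", 3});  // the CPU only seeds the graph once

    while (!queue.empty()) {
        Record r = queue.front();
        queue.pop_front();

        if (r.node == "root") {
            // The root node fans out work to a child node on its own,
            // without another round trip to the CPU.
            for (int i = 0; i < r.payload; ++i)
                queue.push_back({"shade", i});
            std::cout << "root launched " << r.payload << " shade records\n";
        } else {
            std::cout << "shade record " << r.payload << " processed\n";
        }
    }
}
```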
The obvious benefit, and the one PC gamers will pick up on right away, is GPU utilization. Microsoft explained that the current model requires a global synchronization point between the GPU and CPU. Because GPUs are massively parallel devices, that often means short stretches where no work gets done while everything waits for that synchronization to happen.
What I didn't expect was the impact on memory. Microsoft explained that programming for DirectX 12 today involves the ExecuteIndirect command, which requires keeping multiple buffers around. Because Work Graphs let the GPU kick off its own work and keep generating more, those buffers don't need to be maintained.
AMD's Robert Martin demonstrated just how big a deal this is with a scene that required 3.3 GB of memory. Using Work Graphs, memory usage dropped to just 113 MB, and performance was slightly better. As the presentation put it, "memory usage scales with the size of the GPU, not with the size of the workload."
Work Graphs are invisible to end users, but they really are the next frontier in graphics programming, and according to AMD, Nvidia, and Microsoft, something developers have been asking about for years. Less memory usage and better performance sounds like a good thing to me, but we'll have to wait and see how Work Graphs hold up once they show up in real games.