
NVIDIA Unveils DLSS 2.0 with Significant Changes to How Tech Works


Guest_Jim_*


When NVIDIA announced its GeForce RTX 20-series of graphics cards, their ray tracing capabilities were the focus of much attention, but there was another feature limited to these GPUs also revealed: DLSS, or Deep Learning Super Sampling. This technology was to provide improved performance by running a game at a lower, internal resolution and then upscaling the image with a trained AI network to enhance it to near the quality a full-resolution image would have. The list of games that employ DLSS has not grown very long since its launch, and likely contributing to this scarcity is that the AI network needed to be trained for each game on supercomputers that would render at 64x super-sampling.

Yesterday, NVIDIA unveiled DLSS 2.0, which makes some significant improvements over the original implementation and may help it see use in more games. Instead of needing training data for each individual game, DLSS 2.0 utilizes a generalized AI network that will work across all games. This should speed up integration, and along with support for the technology in Unreal Engine 4, should help it come to more games. It does still require information from the game engine, though, so integration is necessary and you will not be able to just toggle it on to enhance all of your games' visuals.

The DLSS 2.0 network takes two main inputs to generate the final image, with one being the lower-resolution image with aliasing and jittered pixels, and the other being motion vectors for these images. The use of motion vectors requires integration with the game engine. By analyzing the pixels from the low-resolution images and applying the information of the motion vectors, the DLSS 2.0 AI network, called a convolutional autoencoder, will attempt to reconstruct the higher resolution final image you would see on your screen. During the training process, this output is compared against a 16K reference image with anti-aliasing so the network can improve its results.
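The core inputs described above, a jittered low-resolution frame plus motion vectors for warping history into the current frame, are the same ingredients used by hand-tuned temporal accumulation. As a rough illustration only (this is the classic amortized super-sampling update, not NVIDIA's learned convolutional autoencoder, and the function names and blend factor are my own invention), the reprojection step might look like:

```python
import numpy as np

def reproject(prev_frame, motion):
    """Warp the previous frame into the current frame's coordinates
    using per-pixel motion vectors (nearest-neighbor sampling, with
    source coordinates clamped to the image bounds)."""
    h, w = prev_frame.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

def temporal_accumulate(history, current, motion, alpha=0.1):
    """Blend the reprojected history with the current jittered sample.
    DLSS 2.0 replaces this fixed blend with a trained network that
    learns, from 16K anti-aliased references, how to combine the
    inputs into a higher-resolution output."""
    warped = reproject(history, motion)
    return alpha * current + (1.0 - alpha) * warped
```

For a static scene (zero motion vectors), each frame nudges the accumulated image a little closer to the current samples, which is how sub-pixel jitter gathers extra detail over time.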

Another important improvement coming with DLSS 2.0 is that it places fewer limits on how users can configure it and on the hardware it can work with. It does still require Turing's Tensor Cores, but the original had some restrictions, such as the RTX 2080 not being able to use it at 1920x1080 because the time to apply DLSS (1.0) would have been greater than the performance improvement it might allow. Now even the RTX 2080 Ti will be able to enable DLSS 2.0 at 1920x1080, giving players more options. If it seems unlikely an RTX 2080 Ti would need a performance boost at 1080p, one use case would be to enable DLSS 2.0 alongside ray tracing, which can significantly bring down performance even for that GPU.

NVIDIA also shared some specific examples of the improvements coming with DLSS 2.0, such as in Control, which will have it patched in on March 26. With the original implementation of DLSS, certain objects such as spinning fans behind a grate would show significant artifacting because of the constantly changing details. The temporal information provided by the motion vectors prevents these artifacts with DLSS 2.0, and it can also do a better job sharpening more subtle details.

It will not be long before Control gets its update, but if you do not want to wait, MechWarrior 5: Mercenaries has already received its patch, though you will need to make sure you have the latest drivers (445.75). I remember noting in that game how well its graphics held up to reduced resolution scaling even without sharpening enabled, so I can definitely believe that DLSS 2.0 can have a very positive impact on performance by letting that scaling go still lower.

The two other games that round out the first four to receive DLSS 2.0 are Wolfenstein: Youngblood and Deliver Us The Moon.

I am a bit busy now but will hopefully find the time to look into DLSS 2.0 in MechWarrior 5: Mercenaries, outside of the comparisons NVIDIA is providing. One thing that does occur to me is that this new implementation of DLSS sounds similar to amortized super-sampling, the basis of TAA in many games, which I covered in Serious Statistics: The Aliasing Adventure, and of SMAA T2x. Both involve collecting shader results from past frames and reprojecting that information into the current frame, with filtering used to avoid artifacts like ghosting from fast movement. The temporal component of SMAA T2x also applies its morphological engine to motion vectors to do an even better job avoiding artifacts. DLSS 2.0 likely applies better methods of reprojecting the information and working with motion vectors, which makes it an interesting technology, but it also differs in working with a lower internal resolution that is up-scaled, while both TAA and SMAA T2x work on full-resolution images. Definitely should be an interesting technology to look into.
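The ghosting filtering mentioned above is often implemented in TAA as neighborhood clamping: the reprojected history sample is constrained to the range of colors in the current frame around that pixel, so stale history from fast movement cannot linger. As a small sketch of that idea (a common TAA heuristic, not anything specific to DLSS 2.0 or SMAA T2x, and the function name is my own):

```python
import numpy as np

def neighborhood_clamp(history_sample, current_window):
    """Reject stale history by clamping the reprojected sample to the
    min/max of the current frame's local neighborhood (e.g. the 3x3
    pixels around the target). If the history color falls outside
    what the scene currently shows there, it is pulled back into
    range, suppressing ghost trails behind moving objects."""
    return float(np.clip(history_sample,
                         current_window.min(),
                         current_window.max()))
```

A learned approach like DLSS 2.0 can presumably make this accept/reject decision in a softer, data-driven way instead of a hard clamp, which is one plausible reason it handles cases like spinning fans better.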

Source: NVIDIA




