NVIDIA Reveals Next Generation RTX 30 Series GPUs

The time many have been waiting for has finally arrived as NVIDIA held its special GeForce event to reveal the upcoming RTX 30 Series GPUs powered by the new Ampere architecture. The event showed off some new technologies and announced three upcoming GPUs: the RTX 3070, RTX 3080, and RTX 3090. It might be worth noting upfront that the RTX 3090 was introduced as the successor to the Titan series, while the RTX 3080 was described as the flagship of the line. The GPUs will be manufactured on a Samsung 8 nm NVIDIA custom process.

Starting with the RTX 3070, NVIDIA is claiming this GPU will beat the RTX 2080 Ti with its 20 shader TFLOPS, 40 ray tracing TFLOPS, and 163 tensor TFLOPS. The GPU is paired with 8 GB of GDDR6 and it does appear to use the special 12-pin power connector that had been leaked previously. Most importantly, the RTX 3070 is to be available in October and priced at $499. Hopefully it does live up to the claim of beating the RTX 2080 Ti, which has been priced at over one thousand dollars since it launched.

The RTX 3080 was introduced as the flagship of the line and NVIDIA claims it will offer double the performance of the current RTX 2080. It is a little hard to tell if this means the RTX 2080 or the RTX 2080 Super though, as the text on a graph states "UP TO 2X 2080" while the point shown is for the RTX 2080 Super. Regardless, doubling either version of the RTX 2080 is an impressive leap and, very importantly, it will be priced the same at $699. Availability is September 17, so you do not have long to wait before you can try to get one. It offers 30 shader TFLOPS, 58 ray tracing TFLOPS, and 238 tensor TFLOPS paired with 10 GB of GDDR6X memory. One of the advancements of GDDR6X is the use of PAM4 signaling: instead of being limited to two binary states, each transfer can be one of four signal levels, allowing twice as many bits to be sent per clock.
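
For a rough sense of how PAM4 works, here is a minimal illustration only, not the actual GDDR6X signaling scheme (which has its own encoding details). The sketch maps pairs of bits to four signal levels, halving the number of symbols needed compared to one-bit-per-symbol NRZ:

```python
# Minimal sketch of NRZ vs PAM4 symbol encoding (illustrative only,
# not the actual GDDR6X implementation).

# NRZ: two voltage levels, one bit per symbol.
NRZ_LEVELS = {0b0: 0.0, 0b1: 1.0}

# PAM4: four voltage levels, so each symbol carries two bits.
PAM4_LEVELS = {0b00: 0.0, 0b01: 1/3, 0b10: 2/3, 0b11: 1.0}

def encode_nrz(bits):
    """One NRZ symbol per bit."""
    return [NRZ_LEVELS[int(b)] for b in bits]

def encode_pam4(bits):
    """Pack a bit string into PAM4 symbols, two bits at a time."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

data = "11010010"
print(len(encode_nrz(data)))   # 8 symbols for 8 bits
print(len(encode_pam4(data)))  # 4 symbols for the same 8 bits
```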

After the introduction of the two graphics cards above, a specific point was made that Ampere is something those on Pascal-based GTX 10-series GPUs can upgrade to, apparently acknowledging that many have not upgraded to the RTX 20 series. Considering these RTX 30 series GPUs are priced better than the RTX 20 series graphics cards they will be replacing, it will likely make more sense to invest in an upgrade to Ampere.

The RTX 3090 was shown off and described as a "BFGPU," which is hardly surprising as it is a triple-slot card meant to succeed the current Titan RTX. Like the previous Titan graphics cards, it commands a higher price than the gaming-focused graphics cards, at $1499. It offers 36 shader TFLOPS, 69 ray tracing TFLOPS, and 285 tensor TFLOPS and is paired with 24 GB of GDDR6X, which NVIDIA claims is enough for 8K gaming at 30 FPS. It will be available on September 24, one week after the RTX 3080. Among its features besides raw performance are HDMI 2.1 and AV1 decode.

A number of technologies contributing to the performance of these graphics cards were shown off, such as a redesign of the shader cores. This new design doubles the number of shader operations per clock compared to Turing. The second generation RT Cores offer 1.7 times the performance of the first generation, and the new third generation Tensor Cores are 2.7 times faster. The result of these redesigns is a 1.9x improvement to performance per watt, at least in Control when running at 4K with an i9 CPU.
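
As a rough sanity check on the headline shader TFLOPS figures (using CUDA core counts and boost clocks from NVIDIA's published specifications, which were not part of the presentation itself), the math works out to cores × 2 FLOPs per clock (one fused multiply-add) × boost clock:

```python
# Back-of-the-envelope check of the shader TFLOPS figures.
# CUDA core counts and boost clocks are from NVIDIA's published spec pages,
# not from the keynote slides themselves.
specs = {
    "RTX 3070": (5888, 1.73e9),   # (FP32 CUDA cores, boost clock in Hz)
    "RTX 3080": (8704, 1.71e9),
    "RTX 3090": (10496, 1.70e9),
}

for name, (cores, clock) in specs.items():
    # One fused multiply-add counts as two floating point operations.
    tflops = cores * 2 * clock / 1e12
    print(f"{name}: ~{tflops:.1f} shader TFLOPS")

# RTX 3070: ~20.4, RTX 3080: ~29.8, RTX 3090: ~35.7, in line with the
# 20/30/36 TFLOPS figures given at the event.
```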

The slide shown for the 2nd Gen RT Cores states they are capable of doubling the ray/triangle intersection rate of Turing, and in the graph this is shown being achieved with Blender Cycles and Chaos V-Ray, while the games listed stay closer to the 1.7x mark. The footnote says the games and applications were run at 4K with an i9 CPU. The slide also states this design supports concurrent ray tracing and graphics, and concurrent ray tracing and compute.

Other technologies shown off include RTX IO, which may prove interesting depending on the impact the Xbox Series X and PlayStation 5 have on the loading of video game assets. By working with Microsoft and that company's DirectStorage API on Windows, this technology will allow assets to be decompressed on the GPU, instead of needing the CPU to first decompress the asset and hold it in system memory before sending it to the GPU's VRAM. If the next-generation consoles push computers to need storage bandwidth comparable to the PCIe 4.0 SSDs combined with data compression in those consoles, this could prove quite important.
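
To put rough numbers on why GPU-side decompression matters (these figures are assumptions for illustration, not from the presentation), consider a fast PCIe 4.0 SSD feeding compressed assets:

```python
# Illustrative arithmetic for why GPU-side decompression matters.
# All numbers here are assumptions for the sketch, not figures from the event.
ssd_read_gbps = 7.0              # rough PCIe 4.0 NVMe sequential read, GB/s
compression_ratio = 2.0          # assumed average asset compression ratio
per_core_decompress_gbps = 1.0   # assumed CPU decompression rate per core, GB/s

# Data rate the CPU would have to decompress to keep the SSD saturated.
effective_gbps = ssd_read_gbps * compression_ratio
cores_needed = effective_gbps / per_core_decompress_gbps

print(f"Effective asset stream: {effective_gbps:.0f} GB/s")
print(f"CPU cores needed at {per_core_decompress_gbps} GB/s per core: {cores_needed:.0f}")
# Offloading this work to the GPU frees those cores and skips the extra
# trip through system memory before the data reaches VRAM.
```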

Other technology announcements include NVIDIA Reflex, NVIDIA Broadcast, and NVIDIA Omniverse Machinima. Unfortunately I did not fully catch the purpose of Omniverse Machinima, though it appears to be able to import various assets used in video games and then apply technologies, such as mapping characters to motion captured in a video, to make machinima creation easier.

The NVIDIA Reflex technology is supposed to reduce the system latency between the CPU and GPU when enabled, and is to arrive in the drivers this month. Curiously, the graphs shown for its impact in Valorant, Destiny 2, and Fortnite list the GTX 1050, GTX 1060, and GTX 1070 GPUs.

NVIDIA Broadcast is meant to be NVIDIA's one-stop solution for various broadcast-related tools. Some of these are already available from other applications, but NVIDIA has put its audio noise removal, background removal/replacement, and webcam auto-frame tracking into the one application, and all of these features apply AI. The noise removal was demonstrated by having a hair dryer running in the same room removed from the audio, though the hair dryer did not sound that loud or annoying to me to begin with. The background effects include removing and replacing the background, as well as just blurring it out, if that is your preference. The auto-frame feature works in combination with cropping the webcam frame; it looks to me like it will move the cropping frame around the image to track your movements, in case you move around while on video. NVIDIA Broadcast is coming this month and all RTX cards will be able to support it.
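
To show what the auto-frame feature appears to do conceptually, here is a minimal sketch using OpenCV face detection to keep a fixed-size crop centered on the subject; this is purely illustrative and not how NVIDIA Broadcast's AI models actually work:

```python
# Minimal sketch of webcam auto-framing: detect a face and keep a fixed-size
# crop centered on it. Illustrative only; NVIDIA Broadcast uses its own AI
# models rather than this OpenCV cascade.
import cv2

CROP_W, CROP_H = 640, 360  # output crop size (assumes the camera is larger)

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    fh, fw = frame.shape[:2]
    if len(faces) > 0 and fw >= CROP_W and fh >= CROP_H:
        # Center the crop window on the first detected face, clamped to the frame.
        x, y, w, h = faces[0]
        cx, cy = x + w // 2, y + h // 2
        left = min(max(cx - CROP_W // 2, 0), fw - CROP_W)
        top = min(max(cy - CROP_H // 2, 0), fh - CROP_H)
        frame = frame[top:top + CROP_H, left:left + CROP_W]
    cv2.imshow("auto-frame sketch", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```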

Also announced at the event is that Fortnite is gaining support for RTX ray tracing, for shadows and reflections (I believe that is what was stated), and that Call of Duty: Black Ops Cold War will also make use of NVIDIA RTX technologies.

Source: NVIDIA and Anandtech (event live-blog with slides)


