
Vega 56 !!! Upgrade or wait



From my understanding after talking to NVIDIA about DLSS 2.0 (don't quote me on the record, please), DLSS 1.0 was AI driven in the sense that the image was rendered at a lower resolution and the "important" parts of the screen, the ones deemed high action, would get the best-looking upscale.

So it's kind of like shining a flashlight at a wall: the middle is very well lit while the outside is more of a vignette. That means things like the HUD, or anything outside that ring, would be of noticeably lower quality. To get this AI to work, every game had to have sample footage sent in to be analyzed, and someone had to point out to the AI what is important and what is not. When you play the game, the Tensor cores perform the pre-determined DLSS algorithm for that game. This is why so few games support it and why not all DLSS 1.0 looks the same.

DLSS 2.0 mostly did away with this, and I believe it is now just a half-resolution image that gets upscaled. It doesn't require sample footage or a per-game algorithm; it just has to be added to the game engine. I don't know enough about the subject to say more, but that was my understanding, and it makes sense. When working with digital photos, my rule of thumb for clients is that I can double the native size with little visible impact.

So my understanding is that DLSS 1.0 looks like crap to some people, usually in the shadow details, because of how it renders the image, while DLSS 2.0 is just a generic doubler with an image sharpener applied.
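To make the "generic doubler with a sharpener" idea concrete, here is a minimal sketch of that kind of naive pipeline: a plain bilinear 2x upscale followed by an unsharp mask. This is purely illustrative of the generic approach described above, not NVIDIA's actual DLSS pipeline (which runs a neural network on Tensor cores); the kernel sizes and sharpening amount are made up.

```python
# Naive "upscale then sharpen" sketch -- NOT NVIDIA's actual DLSS
# pipeline, just the generic approach described in the post above.
import numpy as np

def upscale_2x_bilinear(img):
    """Double a grayscale image's size with bilinear interpolation."""
    h, w = img.shape
    out = np.zeros((h * 2, w * 2), dtype=np.float32)
    ys = np.linspace(0, h - 1, h * 2)
    xs = np.linspace(0, w - 1, w * 2)
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, h - 1); fy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
            bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
            out[i, j] = top * (1 - fy) + bot * fy
    return out

def unsharp_mask(img, amount=0.5):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    blur = np.copy(img)  # borders keep their original values
    blur[1:-1, 1:-1] = sum(
        img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low_res = np.random.rand(64, 64).astype(np.float32)  # stand-in frame
final = unsharp_mask(upscale_2x_bilinear(low_res))    # 128x128 output
```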


5 hours ago, ir_cow said:


With my 2070 Super, RTX on or off, I honestly could never see a difference besides the massive frame-rate hit. It was like having a slower 1080 Ti with a feature that really didn't matter to me. So I sold it, and I am using a Vega 56 Pulse I picked up for $150.


That's probably one of the best descriptions of DLSS; both the flashlight at the wall and the "looks like crap to some folks" are spot on.

I haven't done very much research or gaming with DLSS 1.0, and now we have DLSS 2.0. Guess I need to do some testing/benchmarking, as I have lots of crazy questions.

Not to change the subject, but as for the OP's question on upgrading the Vega 56: if you want to upgrade now, the Radeon VII would be a nice performance boost at 1440p over your current Vega 56. Best Buy has the AMD Radeon VII on sale now for $499.99.

AMD Radeon VII vs RTX 2080: https://www.digitaltrends.com/computing/amd-radeon-vii-vs-nvidia-rtx-2080/



Braegnok, I'm reading the NVIDIA press data sheet right now. If you have specific questions, I might be able to ask an NVIDIA rep for you if I don't know the answer myself.

Edit: But back to the topic. The only way you are going to get 4K at 144 FPS is with DLSS. Maybe on low settings with DLSS in certain games, but it will be a struggle. I can still see DLSS 2.0's quality impact when I am looking for it. The fuzzy shadows have been fixed, but it is still not perfect.

I don't mind playing with it on because ray tracing looks so dang nice. Some games like Call of Duty force DLSS on; it is baked in, so if you want DXR, you have to have DLSS enabled. I never notice it running around online. That game is way too fast paced to stand around and look at the edges of things.


4 minutes ago, ir_cow said:


I have a question, actually: in the announcement on March 23 there is a graphic discussing how DLSS 2.0 works, and it states "1080p Aliased, Jittered Pixels," but nothing in the announcement explains the purpose of the jittered pixels. I am wondering if jittering is being used like in amortized super-sampling, where placing the sample positions in different spots within a single pixel area each frame is used to better reconstruct sub-pixel detail? It would make sense to do this, but as it was not stated in the announcement post, I am not certain.
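For what it's worth, temporal techniques like TAA typically generate that per-frame jitter from a low-discrepancy sequence such as Halton, so that successive frames sample different sub-pixel positions and cover the pixel area evenly over a short cycle. Here is a rough sketch of that idea; whether DLSS 2.0 does exactly this is my assumption, since the announcement does not spell it out.

```python
# Sketch of per-frame sub-pixel jitter as used in TAA-style amortized
# super-sampling. Assumption: DLSS 2.0 does something similar; the
# March 23 announcement does not say so explicitly.
def halton(index, base):
    """Low-discrepancy Halton sequence value in [0, 1)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame_index, cycle=8):
    """Sub-pixel (x, y) offset in [-0.5, 0.5) for a given frame.

    Offsets cycle through different positions inside one pixel, so
    successive frames sample different sub-pixel locations and an
    accumulating upscaler can recover real sub-pixel detail.
    """
    i = (frame_index % cycle) + 1
    return halton(i, 2) - 0.5, halton(i, 3) - 0.5

for frame in range(4):
    print(frame, jitter_offset(frame))
```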


9 minutes ago, Guest_Jim_* said:


The press reviewer guide doesn't explain the jitter either, but I think you're right about how it works. Much like any upscaler, it is grabbing the surrounding pixels and constructing new pixels according to whatever algorithm is used. There is also vague information about feedback frames, which are sent back into the AI renderer and used in constructing the following frame.


Not sure about the feedback frames, at least not by that name. The way that announcement described DLSS 2.0 made it sound like amortized super-sampling is used to get additional spatial and temporal information, which is fed into the neural network to infer new details in a following frame, kind of like how the morphological engine in SMAA T2x works (though I guess with amortized super-sampling it might be more like SMAA 4x). Of course, the neural network would be more advanced and potentially more accurate than the SMAA engine.

The feedback frames might be shader results from sample positions that align with what was constructed (thanks to the jittering), as a way to test the accuracy. That would explain why it is called a "feedback" frame.
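A toy version of that accumulate-and-feed-back loop, using the exponential history blending generic TAA implementations rely on, might look like the sketch below. This is purely illustrative of the temporal-accumulation concept we're discussing; DLSS 2.0's network presumably does something far more sophisticated with its feedback frames.

```python
import numpy as np

# Toy temporal accumulation loop: each new jittered low-res frame is
# blended into a persistent history buffer, and that history is "fed
# back" as input for the next frame.
H, W = 64, 64
history = np.zeros((H, W), dtype=np.float32)  # the "feedback frame"
alpha = 0.1  # weight given to the newest sample

def render_jittered_frame(frame_index):
    """Stand-in for the engine rendering one jittered frame."""
    rng = np.random.default_rng(frame_index)
    return rng.random((H, W), dtype=np.float32)

for frame in range(16):
    current = render_jittered_frame(frame)
    # Exponential moving average: old detail persists while new
    # samples refine it. As the jitter pattern covers more sub-pixel
    # positions, the accumulated buffer approaches a super-sampled
    # image -- the "amortized" part of amortized super-sampling.
    history = (1.0 - alpha) * history + alpha * current
```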


Okay, so I just looked at MechWarrior 5: Mercenaries with DLSS and was definitely impressed. Unfortunately, I do not have any performance data on it, but for a while I really was not able to spot any visual differences with it on and off. Eventually I did notice the beams looked different (lasers normally have a speckled appearance but are just solid beams with DLSS, and the PPC beams also lost some bloom/blur compared to DLSS off), but when staring at mech feet in the hangar and being in some levels, I could not tell you there was a difference.

Of course I would say MW5 responds well to resolution scaling anyway, but still, DLSS 2.0 is looking pretty good to me as an option here.


I know NVIDIA spins it like it's DLSS 1.0 with an "updated" algorithm, and no disrespect to the coding team if this is untrue, but my thought on DLSS 2.0 is that it is more of a glorified upscaler than the real AI that was used for 1.0.


I have no reason to doubt it is using an actual AI system, but all versions of DLSS have been a glorified upscaler. The AI is not necessarily running on your system (nor was it necessarily ever); rather, the AI is how the information its engine uses to do the upscaling and enhancing was generated. If my analogy to SMAA T2x or 4x is accurate, what runs on the system is akin to the base SMAA morphological engine, but what it does when it finds a pattern is determined by what the neural network figured out while being trained. (The patterns it is looking for could also have come from the training.) The SMAA engine works this way too, as the blending of pixels is pre-computed and stored in a texture the engine can just look up, instead of doing the math itself.
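To make that precompute-then-look-up pattern concrete, here is a minimal sketch: expensive blend weights are computed once offline (or, by analogy, learned during training) and baked into a table, and the runtime engine only does a cheap indexed fetch. The pattern IDs and weight values here are hypothetical placeholders, not SMAA's actual area texture data.

```python
import numpy as np

# Minimal sketch of the precompute/lookup pattern described above.
# The "offline" step bakes expensive results into a table; the
# runtime only classifies a neighborhood and fetches the answer.

def expensive_blend_weight(pattern_id):
    """Stand-in for the costly offline computation (or for what a
    trained network has learned to do for this edge pattern)."""
    return (pattern_id * 37 % 100) / 100.0

N_PATTERNS = 16
lookup_table = np.array(
    [expensive_blend_weight(p) for p in range(N_PATTERNS)],
    dtype=np.float32,
)  # in SMAA this lives in a texture the shader samples

def runtime_blend(pattern_id):
    """All the runtime does: map the local pixel neighborhood to a
    pattern ID, then fetch the precomputed blending weight."""
    return lookup_table[pattern_id % N_PATTERNS]

print(runtime_blend(5))
```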

Hopefully we will eventually see similar semi-open-source methods developed, using something like DirectML, as those could be better documented and not limited to specific hardware.

