
High res or AA?


Desktop-pro


I thought it was a case of your graphics hardware rendering frames at the maximum size it possibly can, with your resolution being the bottleneck for the size of the rendered models, so the hardware has to scale the models down, causing pixelation on the edges. At higher resolutions there's less scaling, and thus little to no need for AA?

not exactly correct.

 

consider that at any given time, the number of pixels onscreen is equal to the resolution. for example, at 1080p the 2D rendered scene is constructed from 1920x1080 pixels, which comes to 2,073,600 or about 2.07 megapixels. this means that the image you see is made up of around 2 million coloured squares.
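
a quick sketch in Python, just to put numbers on that (plain arithmetic, nothing GPU-specific assumed):

```python
# Total pixel counts at some common resolutions.
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
}

for name, (w, h) in resolutions.items():
    total = w * h
    print(f"{name}: {w}x{h} = {total:,} pixels ({total / 1e6:.2f} MP)")

# 1080p: 1920x1080 = 2,073,600 pixels (2.07 MP)
```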

 

the GPU will NOT render the image using any more than this total number of pixels. it's like making a picture from lego bricks: it would be better defined than the same image, at the same total size, made from duplo bricks.

 

what you are describing sounds like supersampling, which is a method of antialiasing where the image is rendered at a larger size (by doubling one or both of its dimensions) and then downscaled before it is sent to the display. it's probably the best quality AA available, but it comes at a massive performance hit.
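
a minimal sketch of the render-big-then-average idea, assuming the rendered frame is just an RGB array (real GPUs do this in hardware, often with better filters than a plain box average):

```python
import numpy as np

def downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one output pixel."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# Render at 2x the target resolution (3840x2160 for a 1080p output),
# then average down: every final pixel blends four samples, which is
# what smooths the jagged edges.
hires = np.random.rand(2160, 3840, 3)   # stand-in for a rendered frame
final = downsample_2x(hires)            # shape (1080, 1920, 3)
```

the performance hit follows directly: rendering 4x the pixels means roughly 4x the fill-rate and shading work per frame.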

 

one thing i don't understand is why GPUs will only double the x and/or y res using this method. as i mentioned above, because my display is "HD ready" it accepts resolutions higher than its native res, and so downscales a 1080p image to 720p, receiving a huge quality boost. surely a modern GPU is capable of doing this downscaling before outputting the image? using 1920x1200 on a 1680x1050 screen wouldn't incur the massive performance drop that comes with regular SS, but would still provide good AA on the whole scene.
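
in raw pixel counts the idea is indeed much cheaper than full supersampling; a rough back-of-envelope using the numbers from the post above (indicative only, since render cost doesn't scale perfectly linearly with pixel count):

```python
# Relative render cost, measured purely in pixels shaded per frame.
native  = 1680 * 1050               # 1,764,000 px (native res)
modest  = 1920 * 1200               # 2,304,000 px, ~1.31x the work
full_ss = (1680 * 2) * (1050 * 2)   # 7,056,000 px, 4.00x the work

print(f"{modest / native:.2f}x")    # 1.31x
print(f"{full_ss / native:.2f}x")   # 4.00x
```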





Sowwy? :P


  • 2 weeks later...
When you game, do you prefer to have a higher resolution and no AA, or a lower resolution and AA? I am sure some of you just do both. :P

 

If I have to choose I will usually go with the higher resolution and no AA, because I find some games look weird when not run at my monitor's native resolution.

