Should I upgrade to AMD FX? Thoughts?


Recommended Posts

Forgive me if I am being a little thick here, but is that because it has 8 cores?

 

The Bulldozer CPUs aren't truly 8 cores. AMD calls its BD series a 4-module CPU. Each module has 2 cores, and they share some resources, including the L2 cache. In a sense this was AMD's response to Intel's Hyper-Threading. The problem with the whole design is that each core in the module has HORRID single-thread performance. AMD expected that to be compensated for by the other core in the module; however, many Windows programs do not take advantage of that second core in the module. So when you have an Intel CPU with Hyper-Threading and you open a program that does not take advantage of Hyper-Threading, you are essentially left with 4 cores that still perform really well.

 

When you have a program that does not take advantage of AMD's modules, you are left with 4 weaker cores. Most games on the market do not use the modules, so the performance you get is equivalent to an X4 Phenom.
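If you want to see the logical-vs-physical core distinction being argued about here on your own machine, a rough sketch is below. It is Linux-only, parses /proc/cpuinfo, and the field names ("physical id", "core id") plus the blank-line-separated layout are assumptions about that file's format, not guarantees:

```python
# Rough sketch: count logical CPUs vs. distinct physical cores on Linux by
# parsing /proc/cpuinfo. The field names and layout are assumptions about
# that file's usual format; falls back to the logical count elsewhere.
import os

def core_counts():
    logical = os.cpu_count() or 1
    pairs = set()          # a (package id, core id) pair names a physical core
    phys = core = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":", 1)[1].strip()
                elif not line.strip() and core is not None:
                    pairs.add((phys, core))   # one block per logical CPU
                    phys = core = None
        if core is not None:                  # file may not end with a blank line
            pairs.add((phys, core))
    except OSError:
        pass                                  # not Linux: no /proc/cpuinfo
    return logical, len(pairs) or logical

logical, physical = core_counts()
print(f"{logical} logical CPUs, ~{physical} physical cores")
```

On an SMT or CMT chip the two numbers differ; on a plain quad-core they match.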

This is all wrong. First and foremost, programs do not have to be aware of the CPU's architecture (with the exception of being compiled to use particular instruction set extensions, which is beyond the scope of this discussion). That duty belongs to the OS. This is why Microsoft released a patch for its thread scheduler after Bulldozer launched, just as it did when the Pentium 4's Hyper-Threading came out.

 

Second, for modules vs. Hyper-Threading, it's quite the opposite. With an OS that is not aware of HT, two threads can be scheduled on the same physical core while other cores are left idling. Both of those threads then get only approximately 50% of full performance, because there is still only one set of execution units; it's only the thread context registers that are duplicated. With a Bulldozer module, the two threads are actually executed at the same time, since each core has its own integer execution units, and that gets you more like 90% of full performance.
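Thread placement is something you can experiment with from user space, too. A minimal sketch of the mechanism an SMT-aware scheduler relies on, using the CPU affinity mask; this assumes Linux (`os.sched_setaffinity` does not exist on Windows or macOS, so it's guarded to be a no-op there):

```python
# Sketch: restricting a process to a single logical CPU via its affinity
# mask; an SMT-aware scheduler uses the same mechanism to keep two busy
# threads off the same physical core. Linux-only, guarded elsewhere.
import os

if hasattr(os, "sched_setaffinity"):
    original = os.sched_getaffinity(0)   # logical CPUs we may run on now
    target = min(original)               # pick one CPU from the allowed set
    os.sched_setaffinity(0, {target})    # pin this process to that CPU only
    assert os.sched_getaffinity(0) == {target}
    os.sched_setaffinity(0, original)    # restore the original mask
    print(f"restored affinity to {len(original)} CPUs")
```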

 

Also, AMD actually designed the Bulldozer module with the goal of much more predictable behavior/performance than Hyper-Threading. Many supercomputers/workstations are delivered to customers with HT disabled by default, because applications that spawn a thread for each logical core can actually see reduced performance in some cases. With Bulldozer, every program will see HUGE gains from spawning a thread for each core rather than one for each module.
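That "thread per core, not per logical sibling" advice translates directly into how you size a worker pool. A hedged sketch: the stdlib only reports *logical* CPUs, so the halving divisor for SMT/CMT siblings below is a heuristic, not something Python can confirm:

```python
# Sketch: size a process pool to roughly one worker per physical core.
# os.cpu_count() counts logical CPUs, so on an SMT/CMT chip we halve it;
# that divisor is an assumption about the machine, not a detected fact.
import os
from multiprocessing import Pool

def work(n):
    return sum(i * i for i in range(n))   # a small CPU-bound task

if __name__ == "__main__":
    workers = max(1, (os.cpu_count() or 2) // 2)
    with Pool(processes=workers) as pool:
        results = pool.map(work, [10_000] * workers)
    print(f"{workers} workers, first result {results[0]}")
```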

 

Hope that helps :)


I must have read something wrong somewhere about games and threading. So there is no game that uses multithreading then, still only one thread?


No, they use multiple threads, but since games tend to be limited by per-core performance (which the FX kinda sucks at), they tend not to run as well as on a Phenom II, even with the FX at higher clocks.


Have you overclocked your X4? What clock is it running at right now?

 

One odd thing I noticed is that your sig says you have 12 GB of RAM... that's 3 DIMMs, isn't it? Since the 990FX board is dual-channel and you have 3 DIMMs, the RAM should be running in single-channel mode, I think. :P

I have not overclocked my CPU; I just don't feel that I have to, even though it is designed for it. I believe the board has 4 DIMM slots.


I find it weird to have 12 GB over 4 DIMMs of RAM. Normally 12 GB of RAM comes as 3 DIMMs, which is good for Intel's Nehalem, since it has triple-channel memory. :P

 

Before thinking of buying a new CPU, try overclocking the one you have. It's a good way to get some more power without buying a new CPU. I've had my X2 overclocked for 3 years, and with another core unlocked for the last 3 months, without any problems. It ran as an X2 at 3.6 GHz and 1.38 V, and it's now at 3 cores (like an X3) and 3.7 GHz at 1.42 V, still without a problem. I could go another step, but I can't get it stable enough for more. :P



That is not entirely true either. Individual programs do have a lot to do with how well a CPU performs. For instance, when you test the 8150 in benchmarks that are optimized for it, you see the 8150 perform better than the 2500K; a prime example is Cinebench. However, when you run a benchmark like PCMark, the i5 is better than the 8150 even after the OS patch. Some programs love cores/threads, and some programs don't need them as much.

 

Basically, you could have your OS optimized all you want for the BD chips, but there are program-level limitations as well that the OS will NOT supersede. The difference between Cinebench and other benchmarks is just one of many examples.

 

The module vs. Hyper-Threading thing I will have to look into. That could have been faulty results on my end, because that is my experience from my own testing almost a year ago. When I limited the i7 2600K to one core and one thread for testing, I got vastly better results than I did by limiting the 8150 to one core. The results got a bit closer to one another when I turned on a full module on the 8150. I figured that the i7 was still using most of the single core's capacity and that the 8150 needed both cores in a module to perform as well as a single Intel core, due to the architecture. I will look back into it, but from what I remember my performance numbers weren't far off from what other people were getting. If someone wants to explain why one Intel core on one thread was better than one AMD 8150 module (if only by 9%), then I will be happy to retract what I said.
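A repeatable version of that "limit the CPU to one core" test can be scripted: pin the process to a single logical CPU, then time a fixed workload so runs under different affinity masks are comparable. A sketch, assuming Linux for the affinity call (guarded so the timing still runs elsewhere):

```python
# Sketch: a pinned single-thread micro-benchmark. Pinning makes runs on
# different CPUs or affinity masks comparable; the workload is arbitrary.
import os
import timeit

if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})   # emulate "limit the CPU to 1 core" (Linux)

def workload():
    return sum(i * i for i in range(50_000))

# Take the best of several repeats to reduce scheduling noise.
best = min(timeit.repeat(workload, number=20, repeat=5))
print(f"best of 5: {best:.4f}s for 20 iterations")
```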

 

Lastly, why on earth would AMD make an architecture designed for supercomputers and sell it at the consumer level? And even if that were true, why does the 2600K do better than the 8150 in workstation-style applications?


If I am going to overclock, I definitely need a new case; the H2 I have seems very restrictive. I love the new Corsair C70 case, but it seems too pricey, and I want to get a few more months out of my H2 since it is pretty recent and quiet.


I'm not sure if you realize this...but you're arguing the complete opposite of what he just said. :lol:

 

FX chips aren't very strong in single-threaded tasks. This is known. They do, however, scale rather well with multiple threads. That's why you saw such a difference when you compared a single BD thread to a single SB thread.

