ixtapalapaquetl Posted August 11, 2005

I was wondering if anyone - maybe an EE? - could explain to me some of the theory behind overclocking a CPU. Here's what I was thinking: any given chip runs comfortably at stock frequency and voltage. The frequency can be increased to a certain degree, but at some point it hits a wall because of lack of voltage. Why? That is, how does more voltage allow for higher error-free frequencies?

Then the cycle repeats - more volts, more hertz, more volts, more hertz... - until another wall is hit, this one ultimately set by temperature. But if the temperature can be brought down (better fan, water cooling, liquid nitrogen, the depths of outer space...), the voltage-frequency cycle can run a few more iterations. But why does high temperature cause errors in calculations?

According to the model above, it seems there might not be a hard ceiling on any given chip's speed. Could one, in theory, hold a chip at absolute zero and then crank it infinitely high? That can't be right, though. So what are the physical limitations that dictate the ultimate maximum one might reach?

I would love to know the answer to any of the above questions, and whether or not the model I described is accurate. Thanks. Nerds rule.
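The volts-for-hertz trade described above can be sketched with two textbook CMOS approximations (illustrative assumptions, not from the post): gate delay roughly follows the alpha-power law, delay ∝ V / (V − Vt)^α, so raising the supply voltage V shrinks delay and allows a higher stable clock; meanwhile dynamic switching power is P ≈ C·V²·f, so power (and heat) grows faster than the frequency gain. All constants below (Vt, α, C, the scale factor k) are hypothetical round numbers chosen only to show the shape of the curve.

```python
# Sketch of the voltage/frequency/power trade-off in a simple CMOS model.
# Assumptions (hypothetical values, for illustration only):
#   - alpha-power-law gate delay: t_delay ∝ V / (V - V_T)**ALPHA
#   - dynamic switching power:    P = C_EFF * V**2 * f

V_T = 0.4      # threshold voltage in volts (hypothetical)
ALPHA = 1.3    # velocity-saturation exponent (hypothetical)
C_EFF = 1e-9   # effective switched capacitance in farads (hypothetical)

def max_frequency(v, k=1e9):
    """Max stable clock ~ 1 / gate delay; k is an arbitrary scale constant."""
    return k * (v - V_T) ** ALPHA / v

def dynamic_power(v, f):
    """Dynamic (switching) power; grows with V squared times frequency."""
    return C_EFF * v * v * f

for v in (1.2, 1.3, 1.4):
    f = max_frequency(v)
    print(f"V={v:.1f} V  f_max ~ {f / 1e9:.2f} GHz  P ~ {dynamic_power(v, f):.2f} W")
```

Running the loop shows the pattern the post describes: each voltage bump buys a modest frequency headroom increase, but along that curve power rises roughly with V³, which is why temperature, not voltage, becomes the next wall.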