Posts posted by ir_cow

  1. I really want AMD to come out with something big to bring the prices down on NVIDIA's side. At least this time around NVIDIA is "affordable": 2080 Ti performance for $499 is really good. Anyone playing at 1080p or 1440p will be set for years to come. Anyone who must play at 4K max settings can get a 3080, and those who want to dabble in 8K, or maybe play Control (RTX On) without DLSS, will be happy. That game brings the 2080 SUPER down to 14 FPS. Unplayable. I can only imagine what Cyberpunk 2077 will do at 4K (RTX On). It's the new Crysis benchmark for the next 5 years.

  2. Well, AMD's rumored Big Navi is only 80 CUs, so double the 5700 XT. It will probably end up slightly above the 3070, but it had better be the same price. Unfortunately, it seems that will be AMD's top dog. It might have been scrapped in favor of RDNA2, or it is the same design but with RDNA2's improved IPC. A 10% increase will still put it well below the 3080.

    As for the reference PCB, I think this time around all AIB partners will use their own designs, because the workaround for years has been to slap a blower fan on the OEM and cheap units to sell them for $100-200 less. Now with this mini PCB, it is unlikely any current air cooler will fit without a different PCB or a drastic rework. So EVGA, who often uses the reference PCB for the Ultra and SC lineups, will be forced to either use the FTW PCB for everything or keep the NVIDIA cooler.

    It costs a lot of money to re-tool a cooler. It is cheaper to slap a GPU on a mostly empty PCB like MSI does: the same cooler across multiple series. Technically it's a different PCB for each, but the size is usually about the same within a sub-set, meaning the same cooler and fans can be used across a lot of products. It also keeps the branding consistent.

  3. It looks like Sept 17th for the 3080 and the 24th for the 3090. Did anyone catch the 12-pin connector in the middle of the card when the 3070 was shown (31:10)? I didn't think about it until now, but the PCB is really small, so that connector is actually almost dead center. It's going to drive people crazy for cable management.

    Edit: NVIDIA also says the 3070 is faster than the 2080 Ti for only $499. Wow. It looks like anyone with an RTX Titan or 2080 Ti isn't going to be able to sell it used for very much here shortly.


  4. I don't know much about the inner workings of the SoC, but it is well documented that if you raise the SoC voltage to 1.2 V (or above), PCIe 4.0 is disabled. My ASUS X570 Hero tells me this when I do, which is nice, as not all vendors do.

    I think the only reason to set it that high is for world records, if you are trying to run DDR4-5000 and above. This is because the VDDG CCD voltage, VDDG IOD voltage, and CLDO VDDG voltage are all part of the memory controller, which is tied to the SoC voltage. So you cannot have, say, VDDG higher than the SoC. Getting DDR4-6000 is probably impossible without 1.2 V SoC.

    The only reason to raise the SoC on its own that high is if you are trying to get a higher FCLK, though without LN2 cooling the limit is pretty much 1900 MHz, and that can be achieved with 1.125 V.
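    The relationships described above (PCIe 4.0 disabled at 1.2 V SoC and up, VDDG rails unable to exceed the SoC voltage) can be sketched as a simple sanity check. This is illustrative only; the cutoff and rail names follow the post, not any official AMD specification:

```python
# Illustrative sanity check for the SoC/VDDG voltage relationships described above.
# The 1.2 V cutoff and rail names follow the forum post, not an official AMD spec.
PCIE4_CUTOFF_V = 1.20  # per the post, PCIe 4.0 is disabled at/above this SoC voltage

def check_soc_plan(soc_v, vddg_ccd_v, vddg_iod_v):
    """Return a list of warnings for a planned SoC/VDDG voltage combination."""
    warnings = []
    if soc_v >= PCIE4_CUTOFF_V:
        warnings.append("PCIe 4.0 will likely be disabled (SoC >= 1.2 V)")
    # The VDDG rails are derived from the SoC rail, so they cannot sit above it.
    for name, v in (("VDDG CCD", vddg_ccd_v), ("VDDG IOD", vddg_iod_v)):
        if v > soc_v:
            warnings.append(f"{name} ({v} V) cannot exceed SoC ({soc_v} V)")
    return warnings

print(check_soc_plan(1.125, 1.05, 1.05))  # a typical 1900 MHz FCLK plan: no warnings
print(check_soc_plan(1.25, 1.30, 1.05))   # flags both PCIe 4.0 loss and an invalid VDDG
```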

  5. Well, I can tell you that the problem is simply the stock cooler. The coolers Intel bundles with its CPUs are garbage, and this is why Intel has largely stopped including them for mid-range parts and up.

    Realistically though, Prime95 is an extreme stress test meant only for stability. It will reach 100°C easily with the stock cooler every time. If you are playing games and it's below 90°C, you are fine; for such an old CPU it's not worth spending the money. Though if you really want to, a Cooler Master Hyper 212 EVO for $30 will do the trick.

    Personally I would have just re-pasted it like you have done and called it a day. Prime95 is only for stability testing due to the stress it puts on the CPU.

    Edit: I made a video on this subject a few years ago


  6. Hmm. Well, HWiNFO is going to be the most accurate without probing it yourself, especially if you are looking at the VRM controller readout and not the BIOS one.

    I must have misunderstood you. I thought you had it set to Extreme (LLC 6) and were still getting a 0.01 V uplift. Most OC guides will say LLC 2 is safe, which I mostly agree with. However, what is never explained, or is misunderstood, is that the voltage is automatically raised with LLC so the vdroop is less impactful. The way Buildzoid seems to explain it is that higher LLC gives a more "stable" OC. This is true, but not the way I think about it: it is only more stable because you are pushing more voltage to accommodate the vdroop.

    I was actually trying to follow his Z490 OC guide for this i9-10900K before I played around with it myself. Using his LLC settings and override mode, I ended up with 1.55 V. Good thing I was looking, and it wasn't running that high for long.

  7. 6 hours ago, Nyt said:

    This is interesting, did a quick test.

    On my 9900K with my Strix Z390-E at 1.3v core with LLC on the lowest (which is level 1), I can't even boot into Windows without a BSOD.
    At 1.285v with LLC 6 (max is 7), I'm 100% stable with Cinebench, Realbench, hours of gaming, basically everything. HWiNFO shows load voltage around 1.279V which is close to what I have in BIOS. 

    Without LLC I'd probably have to increase the voltage a lot.

    What sensor are you reading off of?

    I also guess it depends on how you set the voltage. Adaptive, override, and fixed do different things. I use fixed because I don't have to dick around with stuff.

    When I set Extreme LLC, it puts an extra 0.2 V+ into the CPU. So say 1.35 V is the max safe limit: you'll be running 1.55 V. This is how I killed that 3930K. I remember needing 1.4 V for 4.6 GHz. Trying for 4.7 GHz didn't work, and I read to raise the LLC for better stability. Turns out it jammed 1.65 V into it.

    So from my view, instead of using a low OC voltage with a high LLC, just set the voltage appropriately in the first place.
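    The point being argued here comes down to simple arithmetic: the voltage the CPU actually sees is the set voltage, plus whatever uplift the LLC level applies, minus vdroop under load. A toy model of that, using the 0.2 V figure from the post (the per-scenario uplift and droop numbers are illustrative, not from any board's datasheet):

```python
# Toy model of load-line calibration:
#   effective voltage = set voltage + LLC uplift - vdroop under load.
# The uplift and droop figures below are illustrative, not measured values.
def effective_voltage(vcore_set, llc_uplift, vdroop_under_load):
    return round(vcore_set + llc_uplift - vdroop_under_load, 3)

# Low LLC: the board lets the voltage droop under load.
print(effective_voltage(1.35, 0.00, 0.05))   # 1.3 V under load

# Extreme LLC at idle: roughly the post's scenario, ~0.2 V added on top.
print(effective_voltage(1.35, 0.20, 0.00))   # 1.55 V, well past a 1.35 V "safe limit"
```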

  8. 37 minutes ago, Gremlin said:

    Thank you ir-cow..

    He is getting I9-10900 k LGA 1200 cpu.. 

    Sooooooo. As much as I respect my fellow YouTubers and reviewers, that CPU runs hot. A lot of videos and articles say it never gets above 80°C. Honestly, it is a matter of use and voltage. If you are just gaming, then it will probably never get above 70°C. The second you do any sort of video encoding or 3D rendering, it will hit 100°C with an air cooler and downclock itself.

    I was using a Noctua NH-D14 (I know it's old, but it's about the same as the NH-D15). Under anything but gaming, it would throttle in under a minute. Usually it would just barely pass Cinebench R20. I highly recommend a 360mm AIO. Even with a 480mm custom loop I am still getting 92°C in Prime95. Video renders get up to about 83°C. This is only at 5 GHz (all-core) @ 1.25 V, but that still draws 250 watts. If I push it to 5.1 GHz at 1.35 V, it will pull 332 watts and cannot run Prime95 for more than 5 minutes before it reaches 99°C again.

    So in short, for cooling: gaming doesn't matter much; for 3D or video encoding, an AIO is a must at stock (1-core turbo 5.3 GHz). If you want all-core 5 GHz and do anything CPU-heavy, you will need a custom water loop. You need 1.25 V to make 5 GHz all-core stable, no way around it, and that draws about 220-250 watts.
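    As a rough sanity check on those power numbers: dynamic CPU power scales roughly with frequency times voltage squared (P ∝ f·V²). Scaling from the 5.0 GHz / 1.25 V / 250 W point is a back-of-envelope estimate only; real chips add leakage current, which grows with voltage and temperature, so the measured 332 W at 5.1 GHz / 1.35 V comes out higher than this simple model predicts:

```python
# Back-of-envelope dynamic power scaling: P2 ≈ P1 * (f2/f1) * (V2/V1)^2.
# This ignores leakage current, which rises with voltage and temperature,
# so the real measured draw (332 W in the post) exceeds this estimate.
def scale_power(p1_watts, f1_ghz, v1, f2_ghz, v2):
    return p1_watts * (f2_ghz / f1_ghz) * (v2 / v1) ** 2

estimate = scale_power(250, 5.0, 1.25, 5.1, 1.35)
print(round(estimate))  # ~297 W from dynamic scaling alone
```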

  9. 1 hour ago, Braegnok said:

    Was that you back in 2016 ROG thread?  That's why I posted the old thread,.. the OP was iamacow. ;) 

    Oh no way! That was me, hahaa. I killed my i7-3930K with LLC set to Extreme. No one ever discusses the fact that LLC raises the voltage. It doesn't make it more "stable"; it just adds more voltage on top of what you set, so that when vdroop happens, the voltage isn't lower than what is required under load. I almost killed my i9-10900K this exact same way last week by following some OC guide. Stupid LLC. Nothing has changed on that front. DO NOT use LLC unless you are on LN2.

    Edit: A lot of people swear by using LLC for overclocking, but I think LLC is only good either for LN2 or if you have a really crappy motherboard whose VRMs suck, like the Gigabyte Z490 Vision G I'm using. What a pile of shit for overclocking.

  10. I was afraid of this size... When I was hearing about 20 chokes and dual fan / dual PCB from the leaks, the only thing I could think of was PowerColor's massive Red Devil cards, namely the Vega 64 at 12.4" (316mm), which is the largest card I have ever used. The only other comparison is the EVGA 2080 Ti FTW3 at 11.83" (301mm).

    I think NVIDIA is betting that anyone willing to plop down $1500+ for a video card has an equally large case. Just eyeballing this beast, it looks to be E-ATX length, aka 12" (305mm).

    Here is a photo of both cards together I just took. The Red Devil is 3 slots even if the bracket is two, lol.


  11. The CPU notches are in different positions from LGA 1151 / the i9-9900K, so if you place the CPU in with the arrow in the correct corner, it will not fit. Not to mention the different number of pins. Hopefully you didn't force an i9-9900K into the Z490 board's LGA 1200 socket.

    The 00 Q-code in the manual says it's not used. That is a generic way of saying the motherboard is on but not doing anything. 01-06 is the CPU, 07-22 is the memory, and so forth.

    As Braegnok said, I would re-seat the video card. But if you're stuck on 00, it fails before any of that. First, I would make sure you have the right CPU in the socket and that it is seated correctly.


  12. You messed up and didn't do research. That vBIOS flash is only for the desktop cards. With laptop GPUs, while the newer ones (1000 series and on) have the same GPU die as their desktop counterparts, the VRM, memory, and everything else that makes a working video card is different. It's meant for a laptop, with different voltage requirements, cooling, and frequencies. This is why you will never find a tutorial, or anyone successfully doing this. NVIDIA makes the mobile GPUs, so there are no other brands to flash it to besides the desktop cards, which, as you have found out the hard way, don't work.

    If you did manage to somehow get it to work, I doubt it would have been stable; it would probably instantly overheat under load or burn out the VRMs.

    As for flashing it back... well, you could blindly try with a DOS flash drive, but knowing whether it booted to the flash drive instead of Windows will be impossible.

    If you can somehow reset the BIOS so the iGPU is re-enabled, you can then proceed to use a DOS flash drive and re-flash it back (assuming you backed up the ROM!). Usually removing the battery will do the trick, but finding where the battery is located might be the hard part.

    If you do get into Windows, I don't know if the iGPU will be the default or not.
