How Hot is Too Hot for PC Hardware?

I believe these temperatures do indeed reduce the lifespan of CPUs, but not enough for you to notice. Plus, since neither AMD nor Intel wants to give up profit margins, both launch products running right at the limit in order to extract the maximum possible performance from the silicon.

Fortunately, undervolting exists, and it's possible to run modern CPUs 20-30°C cooler than stock while giving up 5% performance or less.
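The reason undervolting pays off so well is that dynamic power scales roughly with the square of voltage (P ≈ C·V²·f), so a modest voltage cut yields a disproportionate drop in power and heat. A minimal sketch in Python, with purely illustrative numbers:

def dynamic_power(c_eff, volts, freq_ghz):
    # Simplified CMOS dynamic power model: P = C * V^2 * f
    return c_eff * volts**2 * freq_ghz

stock = dynamic_power(25, 1.30, 5.0)        # hypothetical chip: ~211 W
undervolted = dynamic_power(25, 1.17, 4.9)  # 10% less voltage, 2% less clock: ~168 W
print(f"Power reduction: {1 - undervolted / stock:.0%}")  # ~21%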
 
I wonder.
If you fit an identical cooler onto an AMD and an Nvidia GPU, run the same load at the same fan speed using the same amount of wall power, just how different would the reported temperatures be? I have my suspicion that Nvidia reports lower than AMD does; because no one tests this, they get away with lying to everyone.
Test it so we know, please. I'm interested to know the truth, whatever it is.
 
It's worth noting that AMD's GPUs typically have a higher transistor density than Nvidia's. Here are some examples (figures are in millions of transistors per square millimetre of die area):

AMD Navi 31 GCD = 152.33
AMD Navi 21 = 51.54
AMD Navi 10 = 41.04

Nvidia AD102 = 125.49
Nvidia GA102 = 45.06
Nvidia TU102 = 24.67

If one packs more components into the same amount of area, the temperature is naturally going to be higher.
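As a quick sanity check on those figures, here's a minimal sketch in Python that recomputes them from the commonly published transistor counts and die sizes (the counts and areas below are approximate, database-style figures, so treat them accordingly):

# Approximate published transistor counts (in millions) and die areas (mm^2)
gpus = {
    "AMD Navi 31 GCD": (45_700, 300.0),
    "AMD Navi 21":     (26_800, 520.0),
    "AMD Navi 10":     (10_300, 251.0),
    "Nvidia AD102":    (76_300, 608.0),
    "Nvidia GA102":    (28_300, 628.0),
    "Nvidia TU102":    (18_600, 754.0),
}

for name, (transistors_m, area_mm2) in gpus.items():
    density = transistors_m / area_mm2  # millions of transistors per mm^2
    print(f"{name}: {density:.2f} M/mm^2")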
 
This is probably because Nvidia was previously manufactured on an inferior Samsung process, and now because the cache, which is less dense, has been split off into separate chiplets in the RDNA 3 GPUs.
 
It's not just about Samsung and the GCD/MCD split, though, because all of the Turing-based GPUs (using TSMC 12FFN) are less dense than the RDNA and GCN 5.1 (Vega 20) series of chips (all TSMC N7).

Even going back to the Pascal and Maxwell era, and comparing them to GCN 5.0 and 4.0, Nvidia's chips are either the same or lower in die density (though not by much in a number of cases).

There are no grand conspiracies or marketing shenanigans here -- AMD's GPUs are just generally more tightly packed and run with higher temperatures, that's all.
 
Interesting article, thank you. I was thinking about this recently as I was reading some articles about 3dfx cards for retro builds. Once upon a time I had a Voodoo 3 3000, which was passively cooled, and of course CPU coolers in those days were pretty conservative too, even on high-end models. Fast forward 20 years and I have a 360 AIO, a GPU with a 13-inch block of aluminium strapped to it, and a perforated airflow case with 4x140mm fans.

I think the point I'm trying to make here is that modern PC component engineering prioritises that last 5-10% of performance over temperature control, often for no good reason except to hit the top of the charts in YouTube videos. The difficulty I have is that it places the significant burden of cooling very firmly on the user, all for a subjective performance delta that we'd probably never really notice. I'm all for progress, but this race to the top of the performance charts isn't always rational and is costing us more than we should tolerate.
 
I went with the poor man's method: the open-case option to keep my old hardware cool. No need for intake/exhaust fans; I just dust off the CPU/GPU fans every three months using an electric blower. So far so good.

 
Once upon a time I had a Voodoo 3 3000, which was passively cooled, and of course CPU coolers in those days were pretty conservative too, even on high-end models. Fast forward 20 years and I have a 360 AIO, a GPU with a 13-inch block of aluminium strapped to it, and a perforated airflow case with 4x140mm fans.

I think the point I'm trying to make here is that modern PC component engineering prioritises that last 5-10% of performance over temperature control
Temperature control has been pretty much the same -- it's power, and therefore heat, that has risen considerably. That Voodoo 3 would have used around 20W at the very most, whereas something like a Radeon RX 6700 XT is 235W. Both GPUs will have much the same temperature limits, but the old one just generates way less heat.

Of course, for nearly 12 times more heat, one is getting a processor with over 2000 times more transistors and a clock speed up to 18 times higher. It's a pretty good trade-off.
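The rough arithmetic behind those ratios, as a minimal sketch (the Voodoo 3 and RX 6700 XT figures are commonly published specs, with the top clock assuming an overclocked card, so treat them as approximate):

# Approximate specs: board power in W, transistors in millions, clock in MHz
voodoo3_w, voodoo3_transistors_m, voodoo3_mhz = 20, 8.2, 166
rx6700xt_w, rx6700xt_transistors_m, rx6700xt_mhz = 235, 17_200, 2_950

print(f"Power/heat ratio: {rx6700xt_w / voodoo3_w:.1f}x")                          # ~11.8x
print(f"Transistor ratio: {rx6700xt_transistors_m / voodoo3_transistors_m:.0f}x")  # ~2098x
print(f"Clock ratio:      {rx6700xt_mhz / voodoo3_mhz:.1f}x")                      # ~17.8x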
 
I remember pumping 3.5 volts through an overclocked Pentium, and they were still only 30-watt processors at that. It's pretty incredible that today you look at power consumption and 150 watts is mundane.

Back then I was tuning for performance; these days I tune for the lowest voltage I can get away with. It especially makes a difference in notebooks: if you can carefully undervolt, the gains are superb.
 
I remember when the datacenter where we had co-located our servers raised the temperature inside the LAN room from 21 to 24°C during a chiller replacement that lasted two months. Suddenly we started to see more failures of SAS drives (SSD and HDD), ECC RAM, 10Gb HBAs, and PSUs.

You can say what you want, but I like my electronics as cold as possible; above the dew point, of course.
In my current home rig, temps are: CPU max 65°C, GPU max 62°C, and storage 36-40°C.
 
People don't understand that those temperature limits come from the engineering of those chips. Engineers know more about semiconductors than a bunch of hardware enthusiasts do.

The worst are people confusing heat generation with temperature. A CPU at 95°C while consuming 200W will generate less heat than a CPU running at 85°C at 350W.
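That distinction follows from basic thermals: the heat a CPU dumps into the room is simply its power draw, while the die temperature also depends on the cooler, roughly T_die = T_ambient + P × R_th. A minimal sketch with hypothetical thermal resistance values (not measured figures):

def die_temp(ambient_c, power_w, r_th_c_per_w):
    # Steady-state approximation: temperature rise = power x thermal resistance
    return ambient_c + power_w * r_th_c_per_w

print(die_temp(25, 200, 0.35))  # 95.0 C: 200 W on a modest cooler
print(die_temp(25, 350, 0.17))  # 84.5 C: 350 W dumped into the room, yet a cooler die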

Temperatures are important for maintaining the integrity of the hardware. If it operates within the defined engineering limits, the hardware will outlast your upgrade cycle.

People need to stop SPECULATING on things they don't have a clue about. Thanks, YouTube, for being part of this problem.
 
Problem? What problem? The only problem is that you believe everything an employee of a large corporation says.

Engineers are just employees. If the CEO of Intel or AMD says that a CPU has to run at X temperature to achieve Y performance in order to reach Z profit margin, instead of making a bigger CPU that is more balanced in consumption and temperature, engineers are expected to comply with that directive. They don't hold the ultimate decision-making power in the company.

The higher the operating temperature of a silicon-based CPU, the more rapidly the silicon degrades. This degradation is driven by impurity diffusion and by defects introduced during manufacturing steps such as thermal oxidation and doping. Although degradation is faster at higher temperatures, it can still occur at temperatures below 100°C; there are studies showing this.

There is a direct relationship between degradation and operating temperature. That said, CPUs are known to have lifespans of decades, so even if the lifespan is reduced by 50%, you still wouldn't complain, because the product would have already become obsolete or failed for some other reason.
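For what it's worth, the standard way to model that relationship is the Arrhenius equation, where the degradation rate scales with exp(-Ea/kT). A minimal sketch; the 0.7 eV activation energy is a commonly used assumption, not a figure from any specific study:

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def acceleration_factor(t1_c, t2_c, ea_ev=0.7):
    # Arrhenius acceleration factor: how much faster degradation runs at t2 vs t1
    t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t1_k - 1 / t2_k))

print(acceleration_factor(85, 95))  # ~1.9x faster degradation at 95 C than at 85 C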
 
90°C should always be the top temperature you allow on anything inside your PC (laptops excluded, of course). It's not hard to achieve, either.
 
Problem? What problem? The only problem is that you believe everything an employee of a large corporation says.

Engineers are just employees. If the CEO of Intel or AMD says that a CPU has to run at X temperature to achieve Y performance in order to reach Z profit margin, instead of making a bigger CPU that is more balanced in consumption and temperature, engineers are expected to comply with that directive. They don't hold the ultimate decision-making power in the company.

The higher the operating temperature of a silicon-based CPU, the more rapidly the silicon degrades. This degradation is driven by impurity diffusion and by defects introduced during manufacturing steps such as thermal oxidation and doping. Although degradation is faster at higher temperatures, it can still occur at temperatures below 100°C; there are studies showing this.

There is a direct relationship between degradation and operating temperature. That said, CPUs are known to have lifespans of decades, so even if the lifespan is reduced by 50%, you still wouldn't complain, because the product would have already become obsolete or failed for some other reason.
You're all over the place here...

As you said, "the product would have already become obsolete or failed for some other reason". So your first three paragraphs about heat and degradation are moot.

Yes, engineers are "just employees". But that doesn't mean their well-educated opinions are no more influential than, say, the janitor's, who is also "just" an employee. And I doubt that any CEO is micromanaging a product's specs. That would fall to the lead engineer.
 
The real problem is operating close to Tjmax. The issue is expansion and contraction during heating and cooling cycles: the bigger the die, the larger the expansion, and the bigger the delta from full-power to idle temperatures. No material is immune to thermal cycling.
And since the silicon die is attached to the substrate with micro solder balls, imagine the mechanical stress.
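To put a number on that: linear expansion is ΔL = α·L·ΔT, and the stress comes from the mismatch between silicon (α ≈ 2.6 ppm/K) and a typical organic substrate (α ≈ 17 ppm/K). A minimal sketch with illustrative values:

def expansion_um(alpha_ppm_per_k, length_mm, delta_t_k):
    # Linear thermal expansion, returned in micrometres
    return alpha_ppm_per_k * 1e-6 * (length_mm * 1e3) * delta_t_k

die_mm, delta_t = 20, 60  # hypothetical 20 mm die edge, 60 K idle-to-load swing
silicon = expansion_um(2.6, die_mm, delta_t)     # ~3.1 um
substrate = expansion_um(17.0, die_mm, delta_t)  # ~20.4 um
print(f"Mismatch the solder balls must absorb: {substrate - silicon:.1f} um")  # ~17.3 um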

Laptops are more prone to this issue because of their operating temperatures and cooling constraints. I have seen many laptops with BGA array issues, affecting both the CPU and GPU, from long exposure to heat followed by being moved outside the building. Even small variances in the environment, like temperature or air moisture, can amplify all this.


The worst are people confusing heat generation with temperature. A CPU at 95°C while consuming 200W will generate less heat than a CPU running at 85°C at 350W.

How many BTU per hour are released is not being discussed here, though.
 
My personal limit is 90°C, and only for short periods. HEAT = BAD; I don't care what the engineered "normal" or limits are. There's no way I want to be even close to those. The colder you keep your system, the longer it will last and the fewer problems it will have.
 
Interesting article. I have a 4090 FE that I lock at 30% fan speed and a 90% power limit because I'm a noise freak.

Temps usually hit 75-81°C during gaming, never higher, as the cooler on the 4090 is so overengineered that even at 30% fan speed in a mini-tower case there's still enough air movement to dissipate the heat and prevent throttling.
 
Great article! Bookmarked it for easy reference. Now, according to that chart, the Tcase for a 13700K is only 72°C?? Mine runs up to 75°C while gaming but averages around 65°C. Is that something to worry about long term? Raptor Lake just runs hotter than Alder Lake, or so I thought was generally accepted. But they both have the same Tcase at 72°C?

Also, the chart has a typo; it shows the 13700K as having 8P+8E & 16T, but it has 24T.
 
Well, I try not to OC my 3600X with more than 1.35V and keep my 3070 Ti GPU at around an 80% power limit, and so far so good. Of course, I did try cranking both of them to maximum for some 3DMark points, and that's all. xD
 
I have a factory-overclocked RTX 3070. The noise the fans made at full load was unbearable. My opinion is that GPU makers should not even offer cards for sale with such specs and fan noise.
I dropped the power limit by 30%; now the fans don't go above 2000 RPM, are fairly quiet, and the GPU stays under 74°C. I mean, I am OK with the heat such a GPU would make at full load, but there is no way to bear those tiny fans spinning at 3000 RPM! This is not right. They are selling fake specs; products that operate with such noise are not acceptable for home use.
 
I'm a noise freak also. I de-shrouded the GPU and made a custom shroud to fit 92mm fans.
The original 75mm fans spinning at 3000 RPM lasted only a day for me.
The 92mm fans now top out at 1200-1300 RPM. Same approach on the power limit, reduced to 73%.
The rest of the fans are also slowed according to size:
front: 2x200mm, max 600 RPM
inside and back: 140mm, max 800 RPM
CPU heatsink: 120mm, max 1000 RPM
I have set custom fan curves for all fans, both in the BIOS and in Windows software.
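For anyone curious, here's a minimal sketch of the kind of temperature-to-RPM interpolation that fan curve software typically applies; the curve points below are hypothetical, not the actual settings described above:

def fan_rpm(temp_c, curve):
    # Linear interpolation between (temperature, RPM) points, clamped at both ends
    curve = sorted(curve)
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t1, r1), (t2, r2) in zip(curve, curve[1:]):
        if temp_c <= t2:
            return r1 + (r2 - r1) * (temp_c - t1) / (t2 - t1)
    return curve[-1][1]

gpu_curve = [(40, 0), (55, 700), (70, 1100), (80, 1300)]  # capped at 1300 RPM
print(fan_rpm(62, gpu_curve))  # ~887 RPM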
 
No temperature is considered too hot nowadays for PC hardware, so why stop at the boiling point of water? Strive to reach 220°C; that's a good, roasty temperature, even for pizza.
 