Amazon's New World beta is reportedly destroying RTX 3090 cards (update)

midian182

In brief: Reports are flooding in of Amazon's New World beta killing off EVGA RTX 3090 graphics cards. A number of users report that the card is being destroyed by the MMORPG, and many commenters say other Nvidia models are showing 100% loads and reaching high temperatures in the main menu and on loading screens.

Update: Amazon has released a statement regarding the RTX 3090 issues related to New World:

Hundreds of thousands of people played in the New World Closed Beta yesterday, with millions of total hours played. We’ve received a few reports of players using high-performance graphics cards experiencing hardware failure when playing New World.

New World makes standard DirectX calls as provided by the Windows API. We have seen no indication of widespread issues with 3090s, either in the beta or during our many months of alpha testing.

The New World Closed Beta is safe to play. In order to further reassure players, we will implement a patch today that caps frames per second on our menu screen. We’re grateful for the support New World is receiving from players around the world, and will keep listening to their feedback throughout Beta and beyond.

"I just bricked a 3090 in the main menu after setting my graphics quality to medium and hitting save. Cant believe it. Anyone else had a catastrophic failure like this? I've been playing cyberpunk 3090 on Ultra for months, so this really doesn't make sense," writes the thread creator. "Anybody else joining the RMA club?"

It seems the EVGA RTX 3090 was the only card to experience catastrophic failures. Other RTX 30-series users and some owners of pre-Ampere cards say they have experienced incredibly high temperatures and 100% loads.

"The highest temperatures I've ever seen in any game – Cyberpunk, Control, Metro Exodus, BF V, Witcher 3, F1 etc – are about 65-70°C for the GPU and about 70-75°C for the CPU after hours and hours of gaming," wrote one EVGA 3080 user. "In New World, not only both GPU/CPU go nuts in the menu but in game it's even worse. My GPU hit over 80°C (after I forced all 3 fans to 100%, first time I've ever done this since undervolting) and the CPU over 85°C."

It's been suggested that capping the framerate at 60 fps helps bring the temperatures down. There have also been questions over whether this could somehow be related to the early RTX 3080/3090 crashing problems, which EVGA initially said was a capacitor issue but which turned out to be factory-overclocked models pushing the cards a little too hard, forcing Nvidia to release a GeForce driver that lowered boost clocks by 1% to 1.5%. It could be that New World somehow conflicts with these drivers. Either way, EVGA RTX 3090 owners might want to avoid the game until Amazon puts out a statement (which it now has).


 
EVGA cheaped out on the components this series; the only reason to pay their premium is the warranty service. Nvidia also did a poor job with the FE memory cooling on the back of the card.

I use HWiNFO64 to monitor memory temps and EVGA Precision X1 to power-limit my 3090 so they do not exceed 100°C under full load.
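
For anyone curious what that kind of monitoring looks like programmatically, here is a rough sketch using the NVML Python bindings (pynvml); it is only an illustration, not what the commenter runs, and the GDDR6X memory-junction sensor that HWiNFO64 reports is not exposed through NVML on consumer cards, so this only sees the core temperature.

```python
# Rough sketch: poll GPU core temperature, power draw and utilization with
# pynvml (pip install pynvml). The memory-junction sensor HWiNFO64 shows is
# not available via NVML on consumer GeForce cards.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)   # core, °C
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0                       # watts
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu                       # percent
        print(f"core {temp} °C | {power:.0f} W | {util}% load")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```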
 
Not sure why a game would be able to bypass driver-level restrictions. I am sure, however, that I don't want to find out until there's been some testing and a reasonable explanation: I don't want a game that's effectively malware.
 
Maybe the game is doing some shadowy crypto-mining in the background? :)

I guess I now see why EVGA cards dropped in price the most. They generally suffer from overheating; so much for EVGA quality. And it's not the first time. That's why I prefer MSI instead.
 
Sounds like a driver issue - I'm sure Nvidia and Amazon will get it sorted.

I remember when I had my GTX 570s in SLI. Usually the cards didn't break 80°C when gaming, but Sniper Elite 3 ran them hard. They handled the game well on max settings, but the GPUs were pushing over 90°C. I had to turn the settings down to a medium/high mix to keep the cards running in the low 80s. Some games just work cards harder than others.

Beta tester of what? Nvidia cards?
Aren't there, like... thermal sensors that should prevent a component from frying up?

Even with a successful RMA, they are in quite a pickle, as they will most likely end up either a) getting back the money they paid or b) waiting months for a new card.

All these people can do is RMA the card and wait for a replacement. The manufacturers will sit on some stock for just this reason, so they can handle and fulfill RMA requests. Of course, I could be wrong, but not keeping any overhead for RMAs would be a really $hitty way to handle your business.
 
I have to say, all these websites keep acting like a subdivision of Nvidia's marketing team.

I mean, a 6900 XT is as fast as or faster than a 3090 on certain tasks, but no, only the 3090 exists, according to these sites.

Anyway, it is an interesting issue that a GPU can be physically destroyed by a game.
 
This game could fry any graphics card with bad cooling; I've seen streamers with 3090s have their fps drop below 30 on "very high" settings.
You literally have to drop the settings manually and enable the option to lock your fps under 60 if you don't want to hear your fans roaring when all that's displayed is a loading screen.
 
Why is the game being blamed here?

Because the devs didn't think to put a frame cap on the menu, which is usually a static screen, so GPU usage skyrockets as the card spits out thousands of frames a second... which creates *a lot* of heat.

This is not the first time this has happened with game menus, either. Rocket League had this issue, and one of the shooty-man games did as well.
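
As an illustration of that point, here is a minimal, hypothetical frame-limiter sketch; render_menu() and the 60 fps target are placeholders, not anything taken from Amazon's code. Without the sleep at the end, the loop submits frames as fast as the GPU can draw them, which is exactly the uncapped-menu behaviour described above.

```python
# Minimal sketch of a menu frame cap. Without the sleep, a trivial render
# loop draws a static screen as fast as the GPU allows, pinning it at
# 100% load; with it, the loop idles away the unused frame budget.
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS

def render_menu():
    pass  # placeholder for the actual draw/present calls

while True:
    start = time.perf_counter()
    render_menu()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)  # cap: sleep off the remaining budget
```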
 
Played the game yesterday; it still looks and feels like the first beta last year. At some points the graphics are amazing. Running everything on Very High on an RTX 3070.
 
Because the devs didn't think to put a frame cap on the menu...
There's actually a frame rate cap option within the settings menu.
 
Yeah, target the company publishing the game when any piece of software could produce the same result by loading a GPU to 100%... I need a new tech feed, given that TechSpot has sunk into clickbait over the last few years.

It's called a MANUFACTURER design failure, not "game blows up RTX card."
 
EVGA cheaped out on the components this series
EVGA has been cheaping out for the last three series. I remember that during the exploding GTX 1070 / 1080 issue there was little to no VRM cooling on top-end cards. The "fix" was first a BIOS update that ran the fans much higher (and louder), and later a thermal pad mod. Meanwhile, brands regarded as cheap and budget, like KFA2 / Palit, had full heatsink coverage for all memory and VRMs even on £99 GTX 1050 (75 W) cards. That certainly made me reassess the justification for the price premium, plus the fact that the best customer service of all is the kind you never need in the first place...
 
Ummmmm... unless your CPU is bottlenecking your GPU, your GPU is basically at ~100% usage the entire time if your details are maxed, or if vsync is off with an uncapped framerate, or both. What I'm saying is that GPUs basically run at 90-100% load constantly during gameplay unless you're capping your frames or running a game that doesn't utilize the card's full potential. In other words, it's not the game doing this. If someone is playing Cyberpunk on max settings just fine and then suddenly switches to New World and their GPU "explodes," it's not the game. Cyberpunk is maxing the card out just as much as New World... I can pretty much guarantee it.

The problem is the EVGA card itself. EVGA is a garbage company; they cheaped out on the capacitors in the 30 series. JayzTwoCents did a video back when the 3090s were crashing because their clocks were "too high" and Nvidia released a driver downclocking some cards. Further investigation revealed that the crashing cards mixed a few of the cheap capacitors in with the new ones that didn't suck, while the cards that weren't having issues used the new capacitors and NONE of the cheap old ones. If I remember correctly, the ONLY company that used ALL of the new capacitors in its cards was ASUS; every other company cheaped out on the parts. ASUS earned a huge amount of respect from me that day because they actually did their job and gave their customers the best possible parts.

Enough "ranting." It's not the game; it's the graphics card, or user error from poor airflow in your case or not setting the card's fan speeds high enough. You're not going to fire up Cyberpunk, max out the settings, and have your card working just fine at 100% usage, then fire up New World and have it explode while ALSO maxing the GPU out at 100%. It's the pile-of-garbage card made by the pile-of-garbage company... or user error.
 
Sounds like people have disabled downclocking... or are running factory-OC cards with downclocking disabled.

At high GPU temperatures, Nvidia's own reference cards directly lower clock speeds if they can't keep temps under XX °C.
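
A quick way to sanity-check that on your own card is to compare the current graphics clock against the rated maximum while watching the core temperature. The sketch below uses pynvml; the 83 °C figure is an assumed, typical throttle point, not a documented value for any specific model.

```python
# Sketch: is the driver pulling clocks back at high temperature?
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)       # core, °C
current = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)       # current MHz
maximum = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)    # rated max MHz

print(f"{temp} °C, graphics clock {current}/{maximum} MHz")
if current < maximum and temp >= 83:  # 83 °C is an assumed, typical throttle target
    print("Clocks below rated max at high temperature: likely thermal throttling")

pynvml.nvmlShutdown()
```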
 
As others have noted, this seems more like a failure in hardware design and/or driver management. If there is the potential that a card can be driven too hard (and it's happened in the past), there should be protocols in place to mitigate potential damage. A piece of software (like the game) using your driver should not ever be able to destroy your hardware - if it can, something was missed in the coding of that driver. Or, alternatively, if you are allowing a situation because it falls under a "normal use case" scenario, and your hardware is melting down, you must have a poor design or inferior components. The fact that it's only happening on a single model from a single manufacturer makes me think it's primarily the hardware at fault.
 