Amazon says New World won't brick your GPU, despite reports of hardware failure

In context: Amazon's New World MMO just launched a few days ago, and it has been met with a mixed reception from players. Users enjoy the crafting system, combat, and weapon progression but aren't as keen on the story or the lack of mounts. Or the fact that, in some cases, their GPUs are being bricked merely from playing the game.

Or so gamers claim. Back in July, some users who owned EVGA's high-end RTX 3090 GPU found that the card would reach such high loads and temperatures on the menu screen -- sometimes before they've even had a chance to adjust the game's settings -- that their cards would outright die. Amazon later stated that the game had nothing to do with the hardware failures but nonetheless chose to institute an FPS cap on the main menu.
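An FPS cap on a static menu is conceptually simple. Below is a rough, illustrative sketch in Python of how such a limiter works -- to be clear, this is not Amazon's actual code, just the general frame-pacing technique:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allotted per frame at the cap

def render_menu_frame():
    pass  # placeholder for the real menu draw call

while True:
    frame_start = time.perf_counter()
    render_menu_frame()
    # Sleep away whatever is left of the frame budget, so a static menu
    # isn't redrawn thousands of times per second at full GPU load.
    elapsed = time.perf_counter() - frame_start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```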

At the time, EVGA admitted that some RTX 3090s suffered from QA issues that could lead to catastrophic hardware failure under certain circumstances. The company swiftly sent replacement cards to affected customers -- before even receiving the dead cards back at HQ.

Unfortunately, despite these early issues occurring three months ago, it seems New World's alleged correlation to bricked GPUs is still an ongoing problem. While not all players with New World-related hardware problems are experiencing card death -- some just wind up with GPU crashes -- there are certainly those who are. For example, one player's GPU allegedly began to smoke upon rebooting his machine after a system crash occurred during a New World session. PC hardware YouTuber JayzTwoCents explains the situation and even shows a clip of it happening live in the video above.

But don't just take our word for it. You'll find countless users discussing hardware problems, both minor and major, that they've encountered in-game over on New World's English Support forums. This thread, in particular, is an absolute goldmine for such reports. Even lower-end EVGA cards have been affected to some extent.

Amazon, for its part, still maintains that New World isn't the cause. In a statement to PC Gamer, the company claimed that there is "no unusual behavior" on the game's side that would be causing these issues, and that it has only received a "small number" of reports from affected players.

Perhaps that is true. Maybe New World is just putting a level of strain on already-flawed GPUs that they can't handle and don't typically face. Then again, maybe there is some strange component-destroying software bug in the game that hasn't been found yet.

Whether the fault lies with Amazon or GPU manufacturers, one thing is clear: there is a problem here and it needs to be solved sooner rather than later. No matter who is to blame, consumers shouldn't be left with dead or otherwise gimped hardware -- especially top-end hardware -- just for booting up a flashy new MMO.

We'll be reaching out to Amazon, as well as EVGA and other affected card makers for comment. We will update this article if we receive a response from any of them.

 
Users are reporting that it just totally ignores user-defined *and* manufacturer-defined power limits on the GPU.

One user says that even with the power limit capped at 90% (verified to hold at that cap via GPU-Z during both Furmark, which can and will kill your hardware if you're not careful, and OCCT tests), New World will still push the GPU to a 110% power load.

So now that it's verified that the game is indeed causing issues, and since Amazon's official stance is that it's 100% safe to play without damaging hardware because everything is "functioning correctly", all liability is on them.
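If you want to check this on your own card without GPU-Z, here's a minimal sketch using Nvidia's NVML bindings (assumes an Nvidia GPU and the pynvml package installed):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Both values come back in milliwatts.
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0
draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0

print(f"enforced limit: {limit_w:.0f} W, current draw: {draw_w:.0f} W "
      f"({100.0 * draw_w / limit_w:.0f}% of limit)")

pynvml.nvmlShutdown()
```

Run that in a loop while the game is up and you can see for yourself whether the draw ever blows past the cap.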
 
Jay released another video yesterday on this. It's just doing weird ****, like even if you have a 1440p monitor attached and that's the desktop resolution, the game lets you choose 4K anyway! The power draw goes up drastically, and the power % goes up accordingly.
 
Haven't checked that out yet, but I shall.
While I don't follow Jay, I do like to look him up every couple of weeks and check out some of his stuff; it's just one of those 'hit or miss' channels for me.

Die-hard GN and HWUB watcher.

/LTT can pound sand...into a CPU he can then drop.
 
I'm not sure exactly what's going on with New World, but it's concerning. I don't have a 3090, but I do have an EVGA card (2080 Ti), and I was considering giving New World a shot -- I'd certainly hope I could do so without putting my card at risk.

The frustrating thing here is that, even if Amazon isn't at fault (and I'm totally willing to believe that's the case; there is obviously something up with EVGA's QA process), they seem pretty keen on washing their hands of any responsibility.

I think, regardless of which party is causing the issue, you have a moral -- if not, perhaps, legal -- obligation to ensure your game isn't leading to bricked or damaged hardware, even indirectly. Amazon should, IMO, be working with hardware makers here if they aren't already.

Keeping an eye on the situation. Hoping for some kind of patch to mitigate the problem, though that won't do any good for those who have already run into issues.
 
Another example is Tom Clancy's The Division 1. It pushes my R9 280X GPU to 90°C or higher, and even after I tweaked most of the settings and capped the FPS, it still sits around 75°C. I think certain graphics features are responsible for that.
 
Mmmm...

Amazon: " Why do I have to fix an issue that shortens the lifespan of customer video cards? After all, I'm one of the largest video card vendors in the world. Let it be."

I'm kidding of course, but I wouldn't be surprised if it was partially true
 
This is the second game I've ever had where I had to drop the overclock on the old 1080 Ti a tiny bit (20 MHz).

I put everything else up: full power limits, pushing memory as hard as I can. Only the core needed dropping, and that was only due to it crashing after 4(ish) hours of gameplay.

Personally, I think this is all rubbish. The game clearly pushes GPUs hard; you can see it everywhere (my 1080 Ti is pulling basically 300 watts the entire time, which is quite a lot for it). I'd place money on it simply being that certain GPUs are badly made, and this game pushes those cards to breaking point.

I also think it's fairly well optimised: I get 80+ fps out and about, and only busy towns drop below 60 (and appear to be CPU bound), all running on the highest settings at 1440p. The lighting is very well done. Considering the engine they're using and 2,000+ players on a single server (some of the towns legit have hundreds of players all running around onscreen), I'd say the engine copes very well, all things considered.

To put people's minds at ease though, if I were Amazon, I'd get Nvidia to push out a driver update that limits power specifically for this game. Let people override it (like me), of course, but lower it slightly by default to put people's minds at rest.
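In the meantime, you don't have to wait for a driver profile: NVML lets you lower the card's power limit yourself. A minimal sketch (Nvidia card, pynvml installed, needs admin/root; the 90% figure is just my arbitrary example, and the setting should reset on reboot):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Cap the card at 90% of its default power limit (values in milliwatts).
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
pynvml.nvmlDeviceSetPowerManagementLimit(handle, int(default_mw * 0.90))

print(f"power limit set to {default_mw * 0.90 / 1000.0:.0f} W")
pynvml.nvmlShutdown()
```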
 
I love this game. My son and I both have RTX 3070 cards, and we're running everything on Very High with no performance issues whatsoever. Marauders!!!
 
JayzTwoCents just did another video and found some weird anomalies; there are clear indications of bad programming going on with the game.

Following a discussion about it in the GN Discord, the code for New World is already being picked apart, and, big surprise, it's apparently absolute trash: it makes repeated, inefficient calls that cause the load to spike and then plummet over and over.
Do that fast enough, in a short enough time frame, and it'd be easy to see overshooting happen.

As Jay said in his most recent video, it's a bit of a perfect-storm situation.
That said, this is certainly more an Amazon issue than a GPU issue, though the affected hardware itself does come into play.

Crysis did the same back in the day with 8800-series GPUs (and New World uses a heavily modified -- apparently poorly coded at that -- version of the Lumberyard engine), Furmark nuked a bunch of hardware, and even just launching Ryzen Master throws up a warning that it can and will damage hardware.
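For what it's worth, here's a toy Python illustration (purely hypothetical, nothing to do with New World's actual code) of the burst-then-stall pattern described above versus the same work evenly paced:

```python
import time

def heavy_work(n):
    sum(i * i for i in range(n))  # stand-in for a burst of expensive calls

def bursty_frame():
    heavy_work(2_000_000)  # unpaced burst: load spikes toward the limit...
    time.sleep(0.05)       # ...then plummets while the loop stalls

def paced_frame():
    for _ in range(10):    # same total work, spread evenly across the frame
        heavy_work(200_000)
        time.sleep(0.005)  # load (and power draw) stays near a steady level
```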
 
I've put 12-14 hours into the game on my 3060 Ti so far -- no issues. The game runs around 100 fps on my system with settings maxed, but I turn down the shadows, water effects, and lighting details to medium to help keep fps closer to the 90+ range at all times.

The game feels a bit...dull at the moment. But I'm just creeping into level 20, and of course, by the time I was able to start playing, the server a couple of friends are on had been locked to new character creation because it was full (that's how Amazon handles character creation once a server fills up). So I'm on a different server, going through the grind of everything.

Hopefully server merges do come out in the next few days so we can all jump to one server.
 
Even so, it's still a hardware problem to design space heaters that are pushed to the limit and literally have no overhead whatsoever. You want a 350 W GPU? Then design it for 400 W and put the 100% power cap at 350 W. That way, an accidental 110% load that breaks the power cap won't break the card. PC PSUs have exactly that level of tolerance built in (e.g., 12 V should be between 11.4 and 12.6 V, 240 V appliances work from 220 to 250 V, etc.); there's no reason why GPU VRMs shouldn't have some tolerance versus the "electronic glass cannons" they're currently designed to be. The higher the wattage and the more expensive the card, the more important it becomes.

This isn't EVGA's first rodeo in being disproportionately affected compared to other brands due to skimping on VRMs (link 1, link 2).

And New World isn't the only game causing 3090s issues (link 1, link 2, link 3); it's just the one getting all the media attention at the moment. Nor is New World remotely the only game with "uncapped fps in the main menu". I have 20+ year old DirectX 7-8 games that do that (run in the thousands of fps on the main menu) because VSync no longer works on pre-DirectX 9 games in W10, due to changes made in DWM versus W7. No one's going to patch those, so it's on the hardware manufacturer to ensure their card's "can handle without exploding" wattage is actually 10% higher than its "designed for" peak wattage. Anything else is just bad design.
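To put that headroom argument in concrete numbers (using the illustrative figures above):

```python
rated_cap_w = 350                  # the "100%" power cap exposed to software
design_w = 400                     # what the VRMs should be built to survive
transient_w = rated_cap_w * 1.10   # a reported-style 110% excursion = 385 W

print(f"110% transient: {transient_w:.0f} W; "
      f"headroom on a {design_w} W design: {design_w - transient_w:.0f} W")
```

Even that worst-case 385 W transient still leaves 15 W of margin on a 400 W design, which is the whole point.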
 
Hoping for the same! We have 3 friends on a different server because of this... Sucks but I understand the reason.
 
If you ask me, it's the voltage curve settings on some of these cards, as set up by the manufacturers. I've noticed that the newer RTX cards pull much more power than they actually need; I lowered my curve, got the same performance, and my temps dropped by 10°C.
 
Out of curiosity -- given this issue being back in the news, what's suspected to be going on, and Jay's video -- I decided to enable power-load monitoring in Afterburner and see how some other games act, as I've got an FTW3 Ultra 3080.
With the power limit set to 100%, it does indeed jump to 105% very briefly (well, in Wreckfest at least; I didn't get to test anything else).

105% is also the max power limit Afterburner will let you push through the GPU for OCing (which, IIRC, is a limit set by the vBIOS? I welcome correction if wrong).

So, from my one game (so far) and one GPU sample, it seems the EVGA cards do play a little loose.

Thing is, people with Gigabyte and other brands' cards are reporting issues now too. With that in mind, as a matter of curiosity, I'm going to ask my brother (he has a Gigabyte 3090) to try the same test in a few games we both have and see if those cards do the same spiking. I suspect they do, and this is just the nature of Ampere.

And yeah, uncapped fps at the menu screen isn't the reason. Hell, Doom Eternal runs at like 1,000 fps during its loading screens, and old games are just fine in the hundreds of fps.

I want to hear a bit more about what gets dug out of the game's code.

As an aside, and quasi-related: even with a rather high OC set for some games (2055 core, 9961 mem), I've yet to exceed 60°C on the GPU or 72°C on the hot spot or memory -- on air, with a highly aggressive fan curve.
I've yet to test that for any overshooting, as it runs at the defined 105% limit in Afterburner, but I shall do that today.

Edit:
And oh crap, I should tell said brother to set an undervolt profile for HL Alyx. He's been playing through that, and he has the same model 3090 as in those user reports.
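For the overshoot test, rather than eyeballing an overlay, I'll probably just log the peak with a little script. A sketch (pynvml again; note NVML's power reading is itself averaged over a short window, so the very briefest spikes can still slip through):

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0

peak_w = 0.0
try:
    while True:  # leave this running for the length of the game session
        draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
        if draw_w > peak_w:
            peak_w = draw_w
            print(f"new peak: {peak_w:.1f} W "
                  f"({100.0 * peak_w / limit_w:.0f}% of limit)")
        time.sleep(0.05)  # ~20 samples per second
finally:
    pynvml.nvmlShutdown()
```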
 
I love HWUB and GN as well, but GN aren't necessarily gamers either (and they'll tell you that up front), so I doubt they will cover this.
 
I'm running a Strix 2060 OC at stock settings. I play New World at 1080p with the FPS locked at 60 in-game. Using the Nvidia overlay, I haven't seen my GPU pull anything higher than about 130 W while playing.
 
Yeah, as they're more of an in-depth, hardware-focused channel, I don't expect GN to cover this at all -- if anything, no more than a brief mention in a weekly news roundup.

There are enough outlets covering it already anyway, and Tech Jesus and company no doubt have something else they're working on, on top of new building renovations to manage.
 
Because Turing and Ampere are different architectures and have different power requirements.

Aside from a supposed handful of AMD units, it's the Ampere line having the issues.
 
I have an MSI GTX 1060 with an i5-8400, and after running the game for a while, some games reboot my PC. I was looking for a workaround and tried disabling the Turbo Boost option; games worked fine for a time, but now they're rebooting my PC again. I'll try a new PSU when I have the money to buy a certified one. I hope changing the PSU will solve the problem.
 