Recent high-end Intel CPUs are crashing Unreal Engine games

midian182

Staff member
In a nutshell: Are you using one of Intel's top-end 13th-gen or 14th-gen processors and noticing that your games are crashing a lot? It's a problem that primarily affects Unreal Engine titles, and a division of Epic Games, along with Nvidia and gaming studios, is pointing the finger squarely at Team Blue's hot and power-hungry hardware.

There have been several reports of Core i9-13900K and Core i9-14900K processor users experiencing crashes in games that show an 'out of video memory' error. The issue is also being experienced by those using the Core i7-13700 and Core i7-14700.

A lengthy post from Epic-owned RAD, the company behind the Bink video codec and Oodle data compression technology, explains that the problem is a combination of BIOS settings and the high clock speeds and power usage of Intel's processors, a combo that results in system instability and unpredictable behavior under heavy load.

RAD emphasizes that there are no software bugs in Oodle or Unreal causing this issue. It says "overly optimistic BIOS settings" are causing a small percentage of processors to operate outside their functional range of clock rate and power draw under high load and to execute instructions incorrectly.

The crashing appears to affect Unreal Engine games more than others because Oodle Data decompression performs extra integrity checks, which surface the faults as error messages, RAD states. The error has also been appearing in software such as Cinebench, Prime95, and HandBrake.
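To illustrate why a misbehaving CPU shows up as a decompression error rather than silent data corruption, here is a minimal Python sketch of the decompress-and-verify pattern RAD describes. It uses zlib and CRC32 purely as stand-ins for Oodle's proprietary codec and checks, so the names and structure are illustrative, not RAD's actual code:

    import zlib

    def decompress_and_verify(compressed: bytes, expected_crc: int) -> bytes:
        """Decompress a block and check its integrity. A CPU that executes
        instructions incorrectly under load produces a payload whose checksum
        no longer matches, so the fault is caught here as an error instead of
        silently corrupting game data."""
        payload = zlib.decompress(compressed)
        if zlib.crc32(payload) != expected_crc:
            raise RuntimeError("decompressed data failed integrity check "
                               "(possible hardware instability)")
        return payload

    # On a healthy system the round trip passes.
    original = b"shader bytecode or other game asset" * 100
    block = zlib.compress(original)
    assert decompress_and_verify(block, zlib.crc32(original)) == original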

The problem isn't a new one. Fatshark, developer of Vermintide 2 and Warhammer 40,000: Darktide, noted two months ago that players with Intel i9-13900K/i7-13700K CPUs are prone to these crashes, and that a workaround was to underclock the Performance-core multiplier from x55 to x53 using Intel Extreme Tuning Utility (XTU). Gearbox, meanwhile, identified the 'out of video memory' crash on some 13th-gen CPUs in Remnant 2 last August. Again, the solution was to remove any overclocks or use XTU, though making the changes in the BIOS will let them stick after a reset, unlike XTU.
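For a sense of scale, the multiplier change Fatshark suggests is a modest one; a quick sketch of the arithmetic, assuming the standard 100 MHz base clock:

    BCLK_MHZ = 100  # standard base clock assumed for illustration

    for multiplier in (55, 53):
        print(f"x{multiplier} -> {multiplier * BCLK_MHZ / 1000:.1f} GHz peak P-core clock")
    # x55 -> 5.5 GHz peak P-core clock
    # x53 -> 5.3 GHz peak P-core clock, roughly a 3.6% reduction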

A lot of people are seeing crashes during the shader compilation process. It's something this writer has also experienced; I found Star Wars Jedi: Survivor and Lies of P wouldn't get past this stage until I dialed back the overclocking to stock settings. Disabling specific motherboard OC features such as Asus' MultiCore Enhancement can also help, though the problem has also been affecting people running games at stock.

Tom's Hardware carried out its own investigation and found the issue is related to the high default power and current limits that some motherboards may use.

Downclocking, lowering power/current limits, and undervolting a CPU aren't things most people are going to be happy about, partly due to the potential performance impact. But it does appear that only a relatively small number of 13th/14th-gen Core i9/i7 users are experiencing the problem.


 
Not gonna lie.

Whoever games on a CPU that guzzles up to 400 watts of electricity just totally deserves this.

Intel continues to manufacture horribly inefficient CPUs, like the 260 W QX9650 from way back in A.D. 2008.

They have not really learned anything, nor do they care to learn. AMD is vastly superior in this regard.

Intel would still gladly sell you a 300 W single-core Pentium 4 (1C/2T with HT) if they could get away with it.

Consider that my cheap $100 Chinese Xiaomi phone has an octa-core CPU, whereas my 2016 Intel laptop only has a 2C/4T CPU.
 
The worst part is Intel shills going "oh, but you can set PL2 to 280 W and it's not as bad, and it idles OK!", then going on to compare it to a bone-stock 7950X with PBO on and left to let rip, as if that is a good comparison. Let's face it, Intel has work to do. I couldn't particularly care what brand my CPU says, just so long as it runs well, isn't a nuclear reactor, and doesn't cost the world.
 
As a 13700K owner, I can tell you it is hot. It is the hottest CPU I have ever owned.
And it is extremely disappointing, as an Intel customer, when there is no other option but to lower the speed of this near top-tier CPU in the 13th-gen lineup.
Imagine hoping to overclock it a bit. No, no. You are not overclocking it; in fact, you lower the speed a bit to avoid crashes.
But it is not just Intel. It is CPUs and GPUs. Look how big GPU heatsinks have become. They are gigantic!
We do not have many ways left to keep them getting faster without inventing special cables to feed these monstrous chips.
 
So Epic confirms what we already knew: Intel is pushing their CPUs too far trying to maintain the performance crown.

They did the same thing with Skylake before this, with the 10900K being a neat idea but useless in practice. These ice cove or whatever cores were neat when they came out, but pushing amp-gulping "efficiency" cores just isn't working.
Not gonna lie.

Whoever games on a CPU that guzzles up to 400 watts of electricity just totally deserves this.

Intel continues to manufacture horribly inefficient CPUs, like the 260 W QX9650 from way back in A.D. 2008.

They have not really learned anything, nor do they care to learn. AMD is vastly superior in this regard.

Intel would still gladly sell you a 300 W single-core Pentium 4 (1C/2T with HT) if they could get away with it.

Consider that my cheap $100 Chinese Xiaomi phone has an octa-core CPU, whereas my 2016 Intel laptop only has a 2C/4T CPU.
All that whataboutism, and you seem to have lost the plot. That cheap $100 Xiaomi octa-core pales in comparison to the performance of a Pentium, let alone a Core i3 from 2016. The QX9650 was a performance BEAST in 2008, easily top of the charts. Of course, you leave out that it didn't actually draw 280 watts, nor did you mention that 35 W was considered "low power" back then.

None of this has anything to do with the power-chugging modern Intel chips.
As a 13700K owner, I can tell you it is hot. It is the hottest CPU I have ever owned.
And it is extremely disappointing, as an Intel customer, when there is no other option but to lower the speed of this near top-tier CPU in the 13th-gen lineup.
Imagine hoping to overclock it a bit. No, no. You are not overclocking it; in fact, you lower the speed a bit to avoid crashes.
But it is not just Intel. It is CPUs and GPUs. Look how big GPU heatsinks have become. They are gigantic!
We do not have many ways left to keep them getting faster without inventing special cables to feed these monstrous chips.
GPU heatsinks are a bit different. There DOES exist a dual-slot cooler on a 4090; I can only imagine how ear-splittingly noisy it is.

Modern big GPU heatsinks largely come from a desire for low noise. The 4090 and 7900 XTX coolers all function well enough to keep sound around 35 dBA. It wasn't even a decade ago that the 290X released with a cooler that hit 79 dBA. Couple that with the popularity of ATX designs and an utter lack of expansion cards, and why not use that space?

I agree that modern GPU boosting is a little broken. Every release seems to miss the efficiency curve by barely a smidge. Seems silly, but so long as undervolting and underclocking are allowed, we can still benefit.
 
Lol, Unreal Engine has always been a little bit unstable. As the game engine I've had the most crashes from due to stupid instabilities, I wouldn't be surprised if some bad game-engine coding is exacerbating the problem...
 
I wonder if a frame cap via a frames-per-second limit or vsync would help mitigate the issue. And the issue isn't limited to Unreal Engine: I play both Darktide (since launch) and Vermintide 2 (for the past four years), both mentioned in this article, and I can say the 7800X3D is the most stable gaming experience I've had, especially in long gaming sessions. The added ambient temperature also doesn't cause throttling or an uncomfortable gaming environment in my experience.
Micro Center has the original 13900K for $499, the decoy version called the i9-14900K for $519.99, and the i7-14700K for $369.99.
Seems like Intel CPUs are in freefall, although the 7800X3D is cheaper than all of those.
My old 9900KS (which used less power than current flagships), while a great gaming CPU, definitely crashed and made the ambient room temperature uncomfortable in long gaming sessions.
Lol, Unreal Engine has always been a little bit unstable. As the game engine I've had the most crashes from due to stupid instabilities, I wouldn't be surprised if some bad game-engine coding is exacerbating the problem...
True, but we should attempt to mitigate any form of instability to rule out the root cause in any unstable system. From bad hardware to bad drivers and unstable game engines, there is a lot that can go wrong. While game engines can be patched up eventually, hardware instability has to be tweaked on an individual basis. Two years ago, EVGA 3090s were killing themselves in New World. Hey, at least the CPU has the ability to save itself, unlike those poor 3090s.
 
I wonder if a frame cap via a frames-per-second limit or vsync would help mitigate the issue.

Probably not, as at least part of the issue occurs at launch:
[Screenshot: crash error during game launch]
 
Many of these Intel systems are not stable by default and the problem only shows up during specific workloads. It's absolutely terrible, especially when you're using a last gen Intel system for work and get these rare inexplicable errors or data corruption.

AM5 is also not exempt from woes, but there the problem lies more in memory configuration and speed.

Friggin current gen systems are terrible. They are unstable out of the box and you end up underclocking an expensive processor or tuning down the memory timings because it can't run at the advertised speed.

This is not an issue with Unreal. The systems themselves are not stable. Run any stress test for a long enough time and you'll find problems.
 
Thank you for this article! I have a 13700K and it gets insanely hot even with my massive Noctua air cooler. I found one way around it is limiting the package TDP to 150 watts, because by default there's no TDP limit and the cores hit 90-100C before throttling back down.
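For anyone on Linux wanting to try the same thing, a rough equivalent of that 150 W package limit can be written through the RAPL powercap interface. This is a hedged sketch assuming the common single-socket zone layout; the sysfs path can vary by platform, and it needs root:

    import os

    LIMIT_WATTS = 150  # the package limit settled on above

    # Package power-limit file exposed by the intel_rapl powercap driver on a
    # typical single-socket system; the zone name can differ on other machines.
    PL1_PATH = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

    if not os.path.exists(PL1_PATH):
        raise SystemExit("RAPL powercap interface not found at the expected path")

    with open(PL1_PATH, "w") as f:             # needs root privileges
        f.write(str(LIMIT_WATTS * 1_000_000))  # the file takes microwatts

    print(f"Package PL1 set to {LIMIT_WATTS} W")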

I wanted to undervolt it, but the Z690 mobos have a ridiculous amount of options under CPU tweaking. :(
 
There's a reason why I went AMD in my latest build, despite never having purchased anything other than Intel before.
Me too. My 7700X-based build was the first AMD system that I've built in years. Amazing performance and none of the heat issues that plague Intel.
AM5 is also not exempt from woes, but there the problem lies more in memory configuration and speed.
If you stick with memory on your motherboard's QVL list, you should be fine. I myself have the exact same kit of RAM that's featured in many of Hardware Unboxed's AMD Ryzen 7000 videos. I figured that if it's good enough for Steve of Hardware Unboxed, then who am I to ask questions?
 
Dear EA ... GF. Sounds a lot like UNREAL is inefficient AF and breaks under load. Don't see any other engine vendors with this issue. Oh, and you know the entire internet running Prime95 + Cinebench + compiling stress tests has not had this issue.

Oh, and the CPU arch is used on SERVER CPUs that run at 100% load for their entire lives. I'm calling BS.
 
The QX9650 was a performance BEAST in 2008, easily top of the charts. Of course, you leave out that it didn't actually draw 280 watts, nor did you mention that 35 W was considered "low power" back then.

Wrong.

[Chart: QX9650 power consumption at stock and overclocked speeds]


The QX9650 is just another example of Intel being Intel: horribly inefficient, defective (a termination voltage over 1.36 V would kill the chip, as AnandTech found out), and superseded in performance by the Q9650, released just a few months later, which overclocked better at lower voltage and cost $500 less.

This scam is what is referred to as "pulling an Intel". The QX9650 originally cost around $900 with a price range reaching up to $1050.

You are welcome.

P.S.: My 2016 laptop does not come with an i3. The CPU is a Core i5-6200U.
 
It seems that the setting, with over 4,000 W of "allowed" power consumption, hits a power wall at the CPU itself. Ryzens do the same, to protect the silicon once it goes beyond a certain wattage; the system just crashes.

It's the protection built into the Intel CPU that's registering peak power above what's considered the maximum or peak.

 
Wrong.

[Chart: QX9650 power consumption at stock and overclocked speeds]


The QX9650 is just another example of Intel being Intel: horribly inefficient, defective (a termination voltage over 1.36 V would kill the chip, as AnandTech found out), and superseded in performance by the Q9650, released just a few months later, which overclocked better at lower voltage and cost $500 less.

This scam is what is referred to as "pulling an Intel". The QX9650 originally cost around $900 with a price range reaching up to $1050.

You are welcome.

P.S.: My 2016 laptop does not come with an i3. The CPU is a Core i5-6200U.

Are you implying that a 47% overclock uses a lot of power? I'm shocked, SHOCKED to hear that. Yet when you run the QX9650 at its rated speed, it uses (checks your chart)...

54W. Wow. Much powers. So Care.
 
I wonder if a frame cap via a frames-per-second limit or vsync would help mitigate the issue.

Probably, but in my case, and I've seen many others have this problem, the crash already happens during shader compilation, which for some inexplicable reason both Hogwarts Legacy and Jedi Survivor feel they need to do every time the game starts.
 
Probably, but in my case, and I've seen many others have this problem, the crash already happens during shader compilation, which for some inexplicable reason both Hogwarts Legacy and Jedi Survivor feel they need to do every time the game starts.

Yeah, that's straight CPU load. Not sure about Jedi Survivor, but I've watched Hogwarts Legacy shader comps a lot on different machines (it's a fave CPU-broke..., ehhh, demanding game to test).

The first time you launch HL on a new install or after a new GPU install, it will recompile all the shaders, use up to 100% of your CPU, and take a while even if you have an 8-core chip. All subsequent boots with unchanged hardware have a shorter "shader compile" which uses maybe two cores? I'm usually seeing not much more than 25% CPU on 6- and 8-core machines.

As that is dual- or maybe even single-threaded, this could be pushing Raptor Lake CPUs to max Turbo Boost on 1-2 cores and triggering the crashes. The very first shader compile uses all cores, so those run at (slightly) slower all-core Turbo clocks; even though it's using more power overall there, the higher clocks of the subsequent "easy" shader compiles could be tripping this CPU instability.

This is similar to overclocking a power-limited GPU where you only get modest clocks on heavy loads and it's stable, but on very light loads it can put 400 more MHz into the clock without hitting power limits and then crash out, because those clocks are unstable even with the higher voltage they're getting.
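If anyone wants to watch that behaviour on their own machine, here's a crude Python sketch that reproduces the two load patterns described above (an all-core load versus a single boosting core). It's a hypothetical stand-in for a shader compile, not a real stress test; keep a clock/power monitor open while it runs:

    import multiprocessing as mp
    import time

    def busy(seconds: float) -> None:
        # Simple FPU busy loop to keep one core fully loaded.
        end = time.time() + seconds
        x = 1.0
        while time.time() < end:
            x = x * 1.0000001 + 1e-9

    def run(workers: int, seconds: float = 60.0) -> None:
        procs = [mp.Process(target=busy, args=(seconds,)) for _ in range(workers)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

    if __name__ == "__main__":
        run(mp.cpu_count())  # "first launch": all cores at lower all-core turbo
        run(1)               # "later launches": one core at peak boost clocks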
 
The very first shader compile uses all cores, so those run at (slightly) slower all-core Turbo clocks; even though it's using more power overall there, the higher clocks of the subsequent "easy" shader compiles could be tripping this CPU instability.
That makes a lot of sense. I have gotten Hogwarts Legacy to start more easily by deleting the compiled shaders. I also wonder if my instability is related to a couple of my cores running slightly hotter than the others. I may have been sloppy with the thermal paste, but I can't be bothered to fix it before I can get my hands on the next gen NH-D15.
 