Nvidia advises crash-prone gamers to look to Intel for assistance

Intel CPUs are clearly a must-buy right now... if you live in a vacuum, with amnesia.
There's only one AMD CPU I would recommend, the 7800X3D, which is pretty much only good for gaming. And that's not even their own tech; that's TSMC's. Still only 8 cores.
If you do anything heavy besides gaming but also want to game, then you get Intel. You don't get a 7950X3D or any dual-CCD AMD; those aren't good for gaming.
 
There's only one AMD CPU I would recommend, the 7800X3D, which is pretty much only good for gaming. And that's not even their own tech; that's TSMC's. Still only 8 cores.
If you do anything heavy besides gaming but also want to game, then you get Intel. You don't get a 7950X3D or any dual-CCD AMD; those aren't good for gaming.
No. Intel is not good for any real use. For gaming, AMD is better. For gaming plus something else, AMD is still better because of Intel's Thread Director: putting background tasks on slower cores is about the stupidest thing ever seen on desktop PCs. That's also the reason Intel only puts its hybrid crap on desktop; on servers something like that would be laughed out. Intel's hybrid crap only exists because it looks good in benchmarks.
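(Side note for anyone who'd rather test than argue: you can override the scheduler's core placement yourself and see whether it actually matters. Below is a minimal Python sketch using the third-party psutil package; the idea that logical CPUs 0-15 are the P-cores is an assumption matching a typical 14900K-style layout, so verify it against your own machine's topology.)

```
# Minimal sketch: pin a process to chosen cores, overriding OS scheduling.
# ASSUMPTION: logical CPUs 0-15 are the P-cores (typical when 8 P-cores with
# Hyper-Threading enumerate first); check your own topology before relying on it.
# Requires the third-party psutil package (pip install psutil).
import sys

import psutil

P_CORES = list(range(16))  # assumed P-core logical CPU IDs

def pin_to_p_cores(pid: int) -> None:
    """Restrict the given process to the assumed P-cores."""
    proc = psutil.Process(pid)
    print("before:", proc.cpu_affinity())  # whatever the scheduler allowed
    proc.cpu_affinity(P_CORES)             # now only these CPUs may run it
    print("after: ", proc.cpu_affinity())

if __name__ == "__main__":
    # Pin the PID given on the command line, or this script itself as a demo.
    pin_to_p_cores(int(sys.argv[1]) if len(sys.argv) > 1 else psutil.Process().pid)
```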
 
There's only one AMD CPU I would recommend, the 7800X3D, which is pretty much only good for gaming. And that's not even their own tech; that's TSMC's. Still only 8 cores.
If you do anything heavy besides gaming but also want to game, then you get Intel. You don't get a 7950X3D or any dual-CCD AMD; those aren't good for gaming.
You can't be serious? This very site has performance benchmarks in its review of 14th Gen that paint a different picture:


For gaming, the absolute fastest is undeniably the 7800X3D right now. The fact it's 8 cores is a weak argument; just LOOK at how it benches. If you happen to have the dual-CCD 7950X3D, it loses about 5-7% on peak FPS, but not all the time: in some games and apps its perf is identical to the 7800X3D. Now just look at how it loses ZERO ground to Intel's i9-14900K in non-gaming workloads.

All while using a lot less power, running cooler, and costing less to buy.

If I needed my PC to double as a workstation and gaming rig, the 7950X3D is a better pick than the i9 any day of the week right now. It loses to the i9 in 2 of the 7 productivity benches, and absolutely trounces it in all but one of the gaming ones.

Lastly, on a side note, before anyone else comes at me for shilling AMD: the longest-serving CPU I have ever owned was my trusty i7-4790K. I ran that CPU all-core overclocked to 4.8 GHz 24/7, at only 0.025 V above stock voltage, from 2014 until 2022, when I moved to the Ryzen 7 5800X3D. The 4790K was a great CPU of its time, and was only retired because I swapped from a Radeon RX 5700 XT to an RX 6950 XT and couldn't get anywhere close to utilising the GPU at 100%. I buy whatever's best for me at the time. It was Intel in 2014, and it was AMD in 2022.
 
No. Intel is not good for any real use. For gaming, AMD is better. For gaming plus something else, AMD is still better because of Intel's Thread Director: putting background tasks on slower cores is about the stupidest thing ever seen on desktop PCs. That's also the reason Intel only puts its hybrid crap on desktop; on servers something like that would be laughed out. Intel's hybrid crap only exists because it looks good in benchmarks.
Yes. AMD's dual-CCD is worse than hybrid cores for gaming. A higher average FPS doesn't mean a smoother experience. You have to hunt down very specific, much slower RAM kits, because everyone wants the same ones for 1:1 Infinity Fabric. X670E is very, very expensive. If Intel were truly good for nothing, market prices would put them way under AMD, and they don't.
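(That point about averages is easy to demonstrate with numbers. Here's a minimal Python sketch with made-up frame times, not measurements of any real CPU: two runs with identical average FPS but very different 1% lows.)

```
# Minimal sketch: same average FPS, very different smoothness.
# Frame times below are synthetic, invented purely for illustration.
from statistics import mean

def one_percent_low_fps(frametimes_ms):
    """FPS over the worst 1% of frames, a common smoothness metric."""
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)
    return 1000.0 / mean(worst[:n])

smooth = [10.0] * 1000             # steady 10 ms frames -> 100 FPS
spiky = [9.5] * 990 + [59.5] * 10  # same 10 ms average, but periodic hitches

for name, run in (("smooth", smooth), ("spiky", spiky)):
    print(f"{name}: avg {1000.0 / mean(run):.1f} FPS, "
          f"1% low {one_percent_low_fps(run):.1f} FPS")
# smooth: avg 100.0 FPS, 1% low 100.0 FPS
# spiky:  avg 100.0 FPS, 1% low  16.8 FPS
```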
I don't really like the current AMD lineup; the 7800X3D needs to be a 10- or 12-core part. It's hard to recommend at 8 cores for anything other than 100% current-gen gaming. It's not better for workstation use than, say, a 14700K.
I'd recommend a 14600K over a 7700X any day of the week.

That's crazy about servers. It's almost like servers are very task-specific while home desktops aren't.
 
You can't be serious? This very site has performance benchmarks in its review of 14th Gen that paint a different picture:


For gaming, the absolute fastest is undeniably the 7800X3D right now. The fact it's 8 cores is a weak argument; just LOOK at how it benches. If you happen to have the dual-CCD 7950X3D, it loses about 5-7% on peak FPS, but not all the time: in some games and apps its perf is identical to the 7800X3D. Now just look at how it loses ZERO ground to Intel's i9-14900K in non-gaming workloads.

All while using a lot less power, running cooler, and costing less to buy.

If I needed my PC to double as a workstation and gaming rig, the 7950X3D is a better pick than the i9 any day of the week right now. It loses to the i9 in 2 of the 7 productivity benches, and absolutely trounces it in all but one of the gaming ones.

Lastly, on a side note, before anyone else comes at me for shilling AMD: the longest-serving CPU I have ever owned was my trusty i7-4790K. I ran that CPU all-core overclocked to 4.8 GHz 24/7, at only 0.025 V above stock voltage, from 2014 until 2022, when I moved to the Ryzen 7 5800X3D. The 4790K was a great CPU of its time, and was only retired because I swapped from a Radeon RX 5700 XT to an RX 6950 XT and couldn't get anywhere close to utilising the GPU at 100%. I buy whatever's best for me at the time. It was Intel in 2014, and it was AMD in 2022.
This is one of those silly perspectives about power draw. You had a 6950XT and you're talking about what's more efficient. Lol, it's a power beast.

I already said that if you're gaming and gaming only, then a 7800X3D is probably the best choice, though 8 cores is kinda weak; 10-12 would be so much more appealing. They don't really run that cool either, given the lower power draw they have.
 
Yes. AMD's dual-CCD is worse than hybrid cores for gaming. A higher average FPS doesn't mean a smoother experience. You have to hunt down very specific, much slower RAM kits, because everyone wants the same ones for 1:1 Infinity Fabric. X670E is very, very expensive. If Intel were truly good for nothing, market prices would put them way under AMD, and they don't.
I don't really like the current AMD lineup; the 7800X3D needs to be a 10- or 12-core part. It's hard to recommend at 8 cores for anything other than 100% current-gen gaming. It's not better for workstation use than, say, a 14700K.
I'd recommend a 14600K over a 7700X any day of the week.

That's crazy about servers. It's almost like servers are very task-specific while home desktops aren't.
Um, Intel has 8 P-cores; AMD has 8 cores with 3D cache. How is AMD worse? Intel's crap cores are useless for games anyway.

X670E is expensive because PCIe 5.0 is expensive. Not to mention Intel hasn't got anything to put up against X670E; it's just too far ahead.

The 7800X3D has only 8 cores for very obvious reasons. Of course you could want something that makes no sense, but again, the AM5 lineup is basically cut-down server chips, for obvious reasons.

Hybrid crap does not exist on servers because it sucks, and not a single intelligent server buyer would want that crap for server use. On desktop, Intel just releases CPUs that are only good for benchmarks, and fanboys are too stupid to realize they suck.
 
This is one of those silly perspectives about power draw. You had a 6950XT and you're talking about what's more efficient. Lol, it's a power beast.
I still have the 6950XT; I don't upgrade often and it's still my current GPU. I like it, and there's nothing wrong with its performance almost 2 years later. I realise I'm in a very small minority using this card, but its power use is no different from the competing RTX 3090/Ti of the same release year.

Coming after my graphics card does nothing to distract from the fact that, for the here and now, an Intel CPU is in most cases the second choice.
 
Some people here seem to be blaming Unreal or Oodle, but Intel's CPUs crashing has nothing to do with that.
It is fundamentally a hardware issue that produces wrong results in heavy workloads (this is mentioned in the linked RAD Game Tools article). It shows up in Oodle decompression because that code actually verifies its results, and terminates when it detects errors.
Not every incorrectly executed instruction has to lead to an app crash.
That's why in games that don't use Oodle you might not even notice any issues, but they are still there.
It's absolutely terrible. If you're doing critical production work, these CPUs are complete garbage at default settings, because they produce wrong results every now and then, which means they are unstable out of the box. Often this doesn't even lead to app crashes; it can be just corrupted data or other errors that go undetected.
An absolute nightmare, because errors can slowly stack over time, and when it all finally goes to hell you realize you've been working with corrupted data the whole time.

All the fixes so far amount to reducing clock speeds and power consumption, so go figure. I lost 10% of performance getting our workstation PCs stable.
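(For anyone curious what "verifies results and terminates" looks like in practice, here's a minimal Python sketch of the same verify-and-bail pattern. It's the general idea behind such integrity checks, not Oodle's actual code.)

```
# Minimal sketch of verify-and-terminate: do work whose correct answer is
# known in advance, and stop the moment the machine returns a wrong result.
# Same idea as a decompressor's integrity checks, not Oodle's actual code.
import hashlib
import os
import zlib

def stress_iteration(blob: bytes, expected: str) -> None:
    restored = zlib.decompress(zlib.compress(blob))  # CPU-heavy round trip
    if hashlib.sha256(restored).hexdigest() != expected:
        raise SystemExit("silent corruption detected: result hash mismatch")

if __name__ == "__main__":
    blob = os.urandom(4 * 1024 * 1024)           # 4 MiB of random input
    expected = hashlib.sha256(blob).hexdigest()  # known-good reference hash
    for i in range(10_000):
        stress_iteration(blob, expected)
        if i % 500 == 0:
            print(f"iteration {i}: OK")
```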
 
Um, Intel has 8 P-cores; AMD has 8 cores with 3D cache. How is AMD worse? Intel's crap cores are useless for games anyway.

X670E is expensive because PCIe 5.0 is expensive. Not to mention Intel hasn't got anything to put up against X670E; it's just too far ahead.

The 7800X3D has only 8 cores for very obvious reasons. Of course you could want something that makes no sense, but again, the AM5 lineup is basically cut-down server chips, for obvious reasons.

Hybrid crap does not exist on servers because it sucks, and not a single intelligent server buyer would want that crap for server use. On desktop, Intel just releases CPUs that are only good for benchmarks, and fanboys are too stupid to realize they suck.
Because AMD was first to market with DDR5, PCIe 5.0, and Wi-Fi 7, right? What are you even talking about with X670E? Too far ahead? What are you even trying to say?

Wanting more cores makes no sense? How does that make sense?

You keep saying "obvious reasons"; what's the reason? Because the main one is R&D budget. X3D is TSMC tech, not AMD tech.

Hybrid doesn't exist in servers because servers are task-specific, while hybrid isn't. And they are going to be making E-core-only servers anyway.
I still have the 6950XT; I don't upgrade often and it's still my current GPU. I like it, and there's nothing wrong with its performance almost 2 years later. I realise I'm in a very small minority using this card, but its power use is no different from the competing RTX 3090/Ti of the same release year.

Coming after my graphics card does nothing to distract from the fact that, for the here and now, an Intel CPU is in most cases the second choice.
What I'm getting at is that your perspective on why power draw matters is silly. You say you got a Ryzen CPU partly for its power draw/efficiency, but then you run an extremely power-hungry GPU. It's contradictory. If you actually cared about that aspect of your system, you would've gone with a smaller GPU.
Gamers make a bigger deal of the Intel vs. AMD CPU power war than they need to when they shove 350 W+ cards in their systems.
 
Some people here seem to be blaming Unreal or Oodle, but Intel's CPUs crashing has nothing to do with that.
It is fundamentally a hardware issue that produces wrong results in heavy workloads (this is mentioned in the linked RAD Game Tools article). It shows up in Oodle decompression because that code actually verifies its results, and terminates when it detects errors.
Not every incorrectly executed instruction has to lead to an app crash.
That's why in games that don't use Oodle you might not even notice any issues, but they are still there.
It's absolutely terrible. If you're doing critical production work, these CPUs are complete garbage at default settings, because they produce wrong results every now and then, which means they are unstable out of the box. Often this doesn't even lead to app crashes; it can be just corrupted data or other errors that go undetected.
An absolute nightmare, because errors can slowly stack over time, and when it all finally goes to hell you realize you've been working with corrupted data the whole time.

All the fixes so far amount to reducing clock speeds and power consumption, so go figure. I lost 10% of performance getting our workstation PCs stable.
The fixes literally amount to running it within Intel's spec, not outside the spec like motherboard makers default to.
You shouldn't have to lower any ratios.
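(If you want to see what limits your machine is actually enforcing rather than trust the board's defaults, here's a minimal Linux-only Python sketch reading the kernel's intel-rapl powercap interface. For reference, Intel's published figures for the i9-14900K are 125 W base power and 253 W max turbo power; note the OS-visible RAPL limits aren't guaranteed to match what the BIOS programmed.)

```
# Minimal sketch (Linux only): read the package power limits exposed through
# the intel-rapl powercap sysfs interface, to compare against Intel's spec.
# Values seen here may differ from the BIOS-programmed PL1/PL2.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 power domain

def read_watts(constraint: int) -> float:
    """Return one power-limit constraint, converted from microwatts."""
    path = RAPL / f"constraint_{constraint}_power_limit_uw"
    return int(path.read_text()) / 1_000_000

if __name__ == "__main__":
    print("domain:", (RAPL / "name").read_text().strip())
    print(f"long-term limit (PL1-like):  {read_watts(0):.0f} W")
    print(f"short-term limit (PL2-like): {read_watts(1):.0f} W")
```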
 
Because AMD was first to market with DDR5, PCIe 5.0, and Wi-Fi 7, right? What are you even talking about with X670E? Too far ahead? What are you even trying to say?
What are you even asking? PCIe configuration, AMD vs. Intel:

Z790 (Intel):

x16 PCIe 5.0 (CPU)
x4 PCIe 4.0 NVMe (CPU)
(various PCIe 4.0 from chipset)

X670E (AMD):

x16 PCIe 5.0 (CPU)
x4 PCIe 5.0 NVMe (CPU)
x4 PCIe 5.0 NVMe (CPU)
(various PCIe 4.0 from chipset)

While AMD offers TWO PCIe 5.0 NVMe slots from the CPU, Intel offers ZERO, unless you sacrifice lanes from the video card, of course. That means the AMD platform is still very future-proof, whereas Z790 was DOA. Intel is not even close to AMD when comparing I/O capabilities.
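(If you want to verify what your own board actually negotiated rather than read the spec sheet, here's a minimal Linux-only Python sketch using the kernel's standard PCIe link-speed attributes in sysfs; a Gen5 link reports 32.0 GT/s.)

```
# Minimal sketch (Linux only): print each PCI device's negotiated link speed
# next to what it's capable of, using standard kernel sysfs attributes.
# A PCIe 5.0 NVMe drive should report "32.0 GT/s PCIe" when linked at Gen5.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur = (dev / "current_link_speed").read_text().strip()
        cap = (dev / "max_link_speed").read_text().strip()
    except OSError:
        continue  # device exposes no link-speed attributes
    print(f"{dev.name}: running at {cur}, capable of {cap}")
```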
Wanting more cores makes no sense? How does that make sense?

You keep saying "obvious reasons"; what's the reason? Because the main one is R&D budget. X3D is TSMC tech, not AMD tech.

Hybrid doesn't exist in servers because servers are task-specific, while hybrid isn't. And they are going to be making E-core-only servers anyway.
It makes sense to want more cores, but again, even Intel does not offer more than 8 "big" cores in one cluster, and AMD's chiplets have an 8-core CCX, so hoping for more is pretty much futile right now. Again, X3D tech has nothing to do with the number of cores in a CCX.

That makes no sense. Hybrid cores are good only IF you run a single piece of software at a time, which is exactly what should happen on servers if, as you say, they are task-specific. So on servers they should make much more sense, yet they don't exist there. The hybrid architecture is one of the biggest scams in CPU history, and it just proves how stupid most reviewers are when they actually praise it. AMD's "hybrid" is much better: it's faster, has no compatibility issues with software, and carries a much lower performance penalty.
 
What are you even asking? PCIe configuration, AMD vs. Intel:

Z790 (Intel):

x16 PCIe 5.0 (CPU)
x4 PCIe 4.0 NVMe (CPU)
(various PCIe 4.0 from chipset)

X670E (AMD):

x16 PCIe 5.0 (CPU)
x4 PCIe 5.0 NVMe (CPU)
x4 PCIe 5.0 NVMe (CPU)
(various PCIe 4.0 from chipset)

While AMD offers TWO PCIe 5.0 NVMe slots from the CPU, Intel offers ZERO, unless you sacrifice lanes from the video card, of course. That means the AMD platform is still very future-proof, whereas Z790 was DOA. Intel is not even close to AMD when comparing I/O capabilities.

It makes sense to want more cores, but again, even Intel does not offer more than 8 "big" cores in one cluster, and AMD's chiplets have an 8-core CCX, so hoping for more is pretty much futile right now. Again, X3D tech has nothing to do with the number of cores in a CCX.

That makes no sense. Hybrid cores are good only IF you run a single piece of software at a time, which is exactly what should happen on servers if, as you say, they are task-specific. So on servers they should make much more sense, yet they don't exist there. The hybrid architecture is one of the biggest scams in CPU history, and it just proves how stupid most reviewers are when they actually praise it. AMD's "hybrid" is much better: it's faster, has no compatibility issues with software, and carries a much lower performance penalty.
I don't know about DOA... as if PCIe 5.0 NVMe drives are extremely necessary right now, or will be within the next 2 years.
Intel has offered more than 8 cores on a single consumer die in the past: the 10900K.
I never said anything about core count being related to X3D; that was about R&D. AMD has gone stagnant like Intel did, and the only thing that saved AMD in the consumer space was TSMC's R&D into X3D.
I never said AMD was a bad choice, just that dual-CCD isn't good for gaming; the FPS might be high, but it isn't a great experience.

My biggest gripe with Intel reviews is that everyone wants to throw the same AMD-optimized RAM into the Intel system and claim that's a level playing field.
 
It's absolutely terrible. If you're doing critical production work, these CPUs are complete garbage at default settings, because they produce wrong results every now and then, which means they are unstable out of the box. Often this doesn't even lead to app crashes; it can be just corrupted data or other errors that go undetected.

An absolute nightmare, because errors can slowly stack over time, and when it all finally goes to hell you realize you've been working with corrupted data the whole time.
Proof that these chips are being pushed to their limits and straight off the cliff.
 
I don't know about DOA... as if PCIe 5.0 NVMe drives are extremely necessary right now, or will be within the next 2 years.
Intel has offered more than 8 cores on a single consumer die in the past: the 10900K.
I never said anything about core count being related to X3D; that was about R&D. AMD has gone stagnant like Intel did, and the only thing that saved AMD in the consumer space was TSMC's R&D into X3D.
I never said AMD was a bad choice, just that dual-CCD isn't good for gaming; the FPS might be high, but it isn't a great experience.

My biggest gripe with Intel reviews is that everyone wants to throw the same AMD-optimized RAM into the Intel system and claim that's a level playing field.
The Intel LGA1700 platform cannot take advantage of PCIe 5.0 NVMe drives and won't be able to in the future. That means LGA1700 is both DOA and not future-proof. Intel's next platform will likely support PCIe 5.0 NVMe. As usual, Intel is way behind AMD on the I/O side.

The 10900K is pretty much the only example of more than 8 "big" cores in a single group, outside HEDT of course. Intel and AMD both agree that 8 cores per group is the maximum.

Without stacked 3D cache, AMD could have designed different CPUs, like we see with RDNA2's Infinity Cache, which uses "traditional" cache.

Dual-CCD is not ideal for gaming, but it's a very cost-effective solution; that's basically the only reason it's even used. However, Intel isn't really using anything better.

Do they really claim it's a level playing field? Benchmarks rarely use setups that are "equal" across all systems when there are major differences. By that logic, it's also unfair to AMD to use water cooling on everything, since an air cooler would make AMD look much better.
 