Intel Core Ultra 9 285K Review: Arrow Lake is a Mess

This is actually just déjà vu. Back in 2012-2013 I built an AMD FX-4170 system, as I had been with AMD for many years. I don't regret it, since it was inheritance money and my mother still uses the PC. Anyway, it draws 125 W at 1.425 V, which was hectic back then, and I later paired it with a GTX 1050 Ti. Now for my work PC I use an i5-3340, which is generally faster than the FX-4170 while drawing 77 W with a Vcore just under 1 V.

Now, further down the line, the situation is reversed. I kind of feel like manufacturers run into a brick wall generation after generation. Obviously the X3D parts set a new benchmark, but how long will 3D V-Cache stay relevant as games evolve? If AMD's new X3D part doesn't deliver at least a 10% performance increase then it isn't worth it, as it will most probably be priced well above the current parts. Both this CPU and, most probably, the new X3D will only really make sense for new builders, although I would hold off until the next Intel generation to let them iron out the problems.
 
Possibly an efficiency focus for the datacenter in their architecture. AMD is eating their lunch in the datacenter space.

I also assume the X3D cache is patented by AMD so Intel can't go that route?

Without TSMC, AMD could not do X3D chips.

Intel is free to use it, but it would require a complete redesign of their chips, and they probably don't want to become locked into TSMC's fabs.

Intel's Foveros is basically the same thing, yeah?
 
"ditched hyper-threading" - I always saw this as their "power move" to win in gaming since disabling HT often improved FPS with intel CPUs. And this isn't Intel's first tile based consumer CPU, that would be Meteor Lake.

To put things into perspective: if Intel doesn't deliver a 20%-or-more gaming performance improvement with their next-gen CPUs, they won't beat the 7000-series X3D chips (let's say 7% gains from software/BIOS updates and another 15% gen-on-gen on top of that, so ~23% in total). I think even with that they won't be close to the 9800X3D.
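For what it's worth, that ~23% comes from compounding the two assumed gains rather than adding them: $1.07 \times 1.15 \approx 1.23$, i.e. roughly a 23% combined uplift.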

Forget about Intel competing with Zen 6. If the Zen 5 Turin server results are any indication, AMD is leaving a LOT of performance on the table with its desktop CPUs because they didn't change the I/O chiplet like they did on the server side. They're also using an older process node. I can see AMD just releasing a Zen 5+ with the new I/O chiplet as a mid-gen update and taking their time with Zen 6.

AMD left pretty much nothing on the table. The I/O die means very little for total performance; the CCDs are way more important.

AMD went with TSMC 4N because it was cheaper, which made perfect sense. There is nothing old about TSMC 4N. TSMC 3 nm won't be good for big chips until N3P or N3X, or better yet 2 nm. Just look at Arrow Lake.

Disabling HT works in some games on Intel, just like disabling SMT works in some games on AMD. It isn't worth doing unless you play the same game all the time and it happens to run better with HT/SMT off; in other games you will see a performance decrease from disabling it.
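For anyone who wants to verify this on their own games rather than argue about it, here is a minimal sketch, assuming a Linux box, for checking and toggling SMT between benchmark runs via the kernel's standard sysfs interface (on Windows the equivalent switch lives in the BIOS):

```python
# Minimal sketch (Linux only): check and optionally toggle SMT system-wide
# using the kernel's stock sysfs controls. Writing to "control" requires root.
from pathlib import Path

SMT_ACTIVE = Path("/sys/devices/system/cpu/smt/active")    # "1" while sibling threads are online
SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")  # accepts "on" / "off"

def smt_active() -> bool:
    """Return True if SMT/HT sibling threads are currently enabled."""
    return SMT_ACTIVE.read_text().strip() == "1"

def set_smt(enabled: bool) -> None:
    """Enable or disable SMT; takes effect immediately, no reboot needed."""
    SMT_CONTROL.write_text("on" if enabled else "off")

if __name__ == "__main__":
    print("SMT currently active:", smt_active())
    # set_smt(False)   # uncomment and run as root to benchmark with SMT off
```

Run the same benchmark pass with SMT on and off and compare; in most titles the delta is small either way, which is the point being made above.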


This is what disabling SMT and the second CCD (CCD1) can do in some games. It's not specific to Intel at all. SMT is worth using for most programs and games.

Nothing new.

The 9800X3D will be the king of gaming CPUs until Zen 6 3D launches, and it will also do very well in applications, as MT performance is increased by 30% over the 7800X3D.
 
In general, disabling HT has always worked better on Intel than disabling SMT does on AMD, because AMD's SMT is regarded as the more efficient implementation.

TSMC N3B works just fine. 5.7 GHz is not a low number; it's the same as the 9950X. And AMD is using it for servers.

As for the I/O die, it can absolutely improve performance. Zen 5 has a wider pipeline that could benefit a lot from an improved memory controller (higher throughput, lower latency, fewer memory/cache misses, higher memory-speed compatibility in 1:1 mode, lower power draw). When Zen 4 launched, the new I/O die was a major upgrade: improved Infinity Fabric, iGPU, USB, PCIe lanes, etc.
 
Nah, disabling HT on Intel doesn't yield more or less performance than disabling SMT on AMD; it depends 100% on the game and the software. Some games don't like HT/SMT, others do. You can easily gain performance on AMD chips too by disabling SMT, in games that don't like it.

TSMC N3 is far more expensive than 4 nm. Epyc on 3 nm has barely launched, and those are chips that cost 10x what AMD's desktop chips do anyway. The I/O die is far more important on enterprise-tier CPUs with tons of CCDs; it makes little to no difference on single-CCD chips like the 5800X3D, 7800X3D and 9800X3D.

The I/O die means close to nothing for desktop-class chips. The 9800X3D will be the best gaming CPU until Zen 6 3D launches. The cache is the important part here, not the I/O die or the production process.

The 7800X3D beats the entire Intel 13th, 14th and 200 series in gaming with a sub-5 GHz clock speed. Let's not act like clock speed is the magic behind good gaming performance.
 
Even a 2% improvement to the Infinity Fabric and another 2% from faster memory support (something like DDR5-6800 to 7000 in 1:1 mode) would have made the 9000 series much better.
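As a concrete illustration of what running DDR5-6800 to 7000 in 1:1 mode would mean for the actual clocks, here is a tiny sketch; the MT/s-to-MCLK and UCLK = MCLK relationships are standard for DDR5 on AM5, while the FCLK value is only an assumed placeholder since the fabric clock runs asynchronously:

```python
# Illustrative only: what a given DDR5 data rate implies for MCLK/UCLK in
# 1:1 mode on AMD's desktop platform. The FCLK figure is an assumption.

def one_to_one_clocks(ddr5_mts: int, assumed_fclk_mhz: int = 2000) -> dict:
    mclk = ddr5_mts / 2      # DDR transfers data twice per memory clock
    uclk = mclk              # 1:1 mode: memory controller clock matches MCLK
    return {
        "DDR5 (MT/s)": ddr5_mts,
        "MCLK (MHz)": mclk,
        "UCLK (MHz)": uclk,  # would fall to MCLK / 2 in 1:2 ("auto") mode
        "FCLK (MHz, assumed)": assumed_fclk_mhz,
    }

for rate in (6000, 6800, 7000):
    print(one_to_one_clocks(rate))
```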
 
For what they test here, it is a massive flop, and the sales reflect that. And you're forgetting the context: launch prices were higher than they are now, while Zen 4 was really cheap.

The fact that it improved value-wise means AMD understood the problem and adjusted their strategy... somewhat. They need to save Zen 5 with a good X3D launch.
What is tested here is old software that doesn't support AVX-512. There are very few excuses for current software not supporting AVX-512; heck, it was introduced in 2013 and the first CPUs supporting it came out in 2016. Also, adding support for it takes, as said countless times, a few seconds.

The problem is not Zen 5 but outdated software.
Possibly an efficiency focus for the datacenter in their architecture. AMD is eating their lunch in the datacenter space.
Efficiency on server CPUs can be a very different story. The main reason the 13th and 14th gen K chips were so inefficient was the ultra-high power limits and clock speeds; just lowering those would have made them much more efficient, and slower.
 
Nah, disabling HT on Intel doesn't yield more or less performance than disabling SMT on AMD; it depends 100% on the game and the software. Some games don't like HT/SMT, others do. You can easily gain performance on AMD chips too by disabling SMT, in games that don't like it.
AMD's SMT implementation is better than Intel's, so disabling SMT hurts AMD more than disabling HT hurts Intel. Of course, some software doesn't benefit from SMT at all, but that's another matter.
 
AMD's SMT implementation is better than Intel's, so disabling SMT hurts AMD more than disabling HT hurts Intel. Of course, some software doesn't benefit from SMT at all, but that's another matter.
Why do you think that? Because if you compare them, they do pretty much the same thing.
 
Even a 2% improvement to the Infinity Fabric and another 2% from faster memory support (something like DDR5-6800 to 7000 in 1:1 mode) would have made the 9000 series much better.
Not really, since low timings are just as important. Also, memory speed doesn't really affect the 3D chips much, far less than the regular chips. I only care about the 9000X3D, just like I only cared about the 5000X3D and 7000X3D.

The I/O die means close to nothing on chips with only a single CCD; even on dual-CCD parts it makes little difference. AMD upgraded the I/O die on its enterprise chips because they have a lot more CCDs, where it becomes a chokepoint.
 
Why do you think that? Because if you compare them, they do pretty much the same thing.
Because it has been proven countless times in benchmarks that AMD's SMT implementation is indeed better. Also, Intel has confirmed that they don't use SMT thread priority, i.e. deciding which thread gets higher priority for the execution units; because there is no priority, it's pretty much randomized. I can't remember exactly what AMD has said, but as far as I know AMD has NOT said they lack any SMT thread priority. If AMD has any sort of priority, that alone could easily make a difference.
 
If you're spending that kind of money on a GPU then, on average, you will not be gaming at 1080p anyway. I'd be more interested in the 1440p and 2160p results; does the gap become indistinguishable?
 