AMD launches Ryzen 6000 series for laptops: What's new with the Zen 3+ architecture?

M1 efficiency has nothing to do with ARM. There are two major factors: first, the node advantage (TSMC 5 nm); second, the die-space "waste" of SoC-integrated memory. Current x86 chips have neither of those, making the x86-vs-ARM comparison totally pointless. This is something almost every ARM promoter "forgets" to mention.
It is true that dark silicon is a challenge. Nodes are shrinking faster than power consumption, so a lot of heat is concentrated on the die. On 7 nm, around half of the die has to stay unpowered, assuming a 125 W limit. 3D stacking brings even more heat.
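The dark-silicon point above can be illustrated with a back-of-the-envelope calculation. The transistor count and per-transistor power figures below are purely hypothetical, chosen only to reproduce the "about half the die" outcome under a 125 W budget:

```python
# Back-of-the-envelope dark-silicon estimate (illustrative numbers only).
# If per-transistor power falls more slowly than area per transistor, power
# density at full utilization rises each node, so part of the die must idle.

def powered_fraction(power_limit_w, transistors_m, watts_per_mtransistor):
    """Fraction of the die that can be active within the power budget."""
    full_power = transistors_m * watts_per_mtransistor  # every transistor on
    return min(1.0, power_limit_w / full_power)

# Hypothetical 7 nm-class chip: 10,000 Mtransistors at 0.025 W per Mtransistor.
frac = powered_fraction(power_limit_w=125, transistors_m=10_000,
                        watts_per_mtransistor=0.025)
print(f"{frac:.0%} of the die can be powered at once")  # → 50%
```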
 
The real possibility of TB4 integration is huge for me. Finally no more platform advantage for Intel.
A curious thing about USB4 on Rembrandt is the use of PCIe 4.0 to wire the port, on a certified laptop that chooses to add PCIe data to its USB4 port, as shown on their presentation slide. TB4 uses PCIe 3.0 x4, whereas AMD will, I suppose, use PCIe 4.0 x2 lanes here to deliver 32 Gbps.
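The arithmetic behind that lane equivalence is easy to check: both PCIe 3.0 and 4.0 use 128b/130b encoding, so doubling the per-lane rate while halving the lane count gives the same effective bandwidth. A quick sketch (lane counts taken from the comparison above):

```python
# Why PCIe 4.0 x2 matches PCIe 3.0 x4: double the rate, half the lanes.
# Both generations use 128b/130b encoding, so effective = raw * 128/130.

def pcie_gbps(gt_per_s, lanes):
    """Effective one-direction bandwidth in Gbps after 128b/130b encoding."""
    return gt_per_s * lanes * 128 / 130

gen3_x4 = pcie_gbps(8, 4)    # TB4-style PCIe 3.0 x4
gen4_x2 = pcie_gbps(16, 2)   # Rembrandt-style PCIe 4.0 x2
print(f"3.0 x4: {gen3_x4:.1f} Gbps, 4.0 x2: {gen4_x2:.1f} Gbps")  # both 31.5
```

So the "32 Gbps" figure is the raw rate; after encoding overhead, both configurations land at roughly 31.5 Gbps.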

It's going to be messy marketing with USB4 implementation by different vendors. The spec is loose and not as tight as TB4. Brace yourselves for BS in places!

It is not clear what "DisplayPort 2.0 ready" really means. I am always sceptical about the word "ready". Remember "HD ready" a decade ago? What a joke that was... There is still some unknown about DP: over USB4, a DP 2.0 signal can be tunnelled at 40 Gbps.

In terms of the CPU, does it natively support, on die, UHBR10 lanes for DP 2.0 and FRL lanes for HDMI 2.1? Or does the CPU rely on separate level-shifter chips to translate DP 1.4 into HDMI 2.1 or into UHBR10? Parade Technologies launched a chip for exactly that purpose, to help CPUs: the PS196.

Many questions, indeed. "DP 2.0 ready" suggests to me that the CPU does not have on-die UHBR10 lanes, and that AMD leaves it to vendors to decide whether to install level-shifter chips, which would also do DSC pass-through for the first time.
 
Most of you are wrong and talk as if making improvements to highly optimized tech were an easy task.

The M1 is excellent because Apple went all-in: 5 nm plus heavily optimized codecs, memory controllers and so on. Add years of optimizing code for ARM on iOS, and it is understandable that it is one of the best chips around.

The only thing AMD didn't change with this update is the architecture itself for unprecedented IPC. Everything else involved A LOT of work, so this chip is a big new upgrade.
 
The only thing AMD didn't change with this update is the architecture itself for unprecedented IPC. Everything else involved A LOT of work, so this chip is a big new upgrade.
Absolutely, especially the RDNA2 graphics. Next year, on Zen 4, integrated graphics will deliver full 1080p gaming for the first time, and dGPUs will be needed only for 1440p/4K or other graphical workloads.
 
You are incorrect. The battery life of an M1-equipped Mac laughs at any x86 machine. It also has no fan, and the devices can be extremely thin and light. It's a completely different beast.

It is incredibly delusional to believe that the 6000 series is a "massive step up from the 5000 series" in any regard; this is a refresh. I'm actually laughing at how someone can read up on the 6000 series and see it as anything more than a slight refresh.

You clearly don’t understand what the M1 is or how it works. I certainly don’t believe you own one. ARM technology is massively disruptive. It is so much more efficient that I can’t see laptops using x86 Intel or AMD CPUs once ARM becomes readily available on Windows with better compatibility than it currently has.

A "slight refresh"... and you're calling them delusional? Completely new I/O, a new memory spec, a new GPU and a new node (even if it is just enhanced 7nm) is MUCH more than a "slight refresh". The ARM arch WAS massively disruptive; it isn't anymore, because it isn't making any more headway into markets outside of Apple devices, which are their own closed ecosystems. ARM will never be as efficient or powerful as x86 for Windows applications, at least not as long as Intel and AMD are the primary market players. As for actually disruptive archs, RISC-V is already showing itself to be more efficient than ARM, and that is where all of the VC cash that isn't going into AI/ML silicon is going right now.
 
ARM will never be as efficient or powerful as x86 for Windows applications, at least not as long as Intel and AMD are the primary market players.
Both Intel and AMD have plans to adopt ARM solutions in their designs. It will be fun in 2-3 years. ARM is officially part of Intel's IDM 2.0 strategy.
 
ARM will never be as efficient or powerful as x86 for Windows applications.
The world is moving toward translating any code to the chip's architecture and then running it, as the Apple M1 does with x86 code. The first run may be slower, but afterwards it runs at very good speed. So not only is Microsoft writing apps generically and then compiling them for x86/ARM; many others do the same (even free apps sometimes offer x86 and ARM builds).

ARM has a much better chance of winning over x86 than the opposite: ARM is in phones and tablets, in Samsung devices and iPads, and even Apple laptops already run on it. Almost all devices worldwide use ARM, so for a programmer it makes as much sense to optimize for ARM as for x86.

The only exception: games. High-end consoles use x86, and so do gaming PCs, so AAA titles will still be optimized for x86. But as GPUs on ARM SoCs are getting very powerful, it makes sense that studios will slowly start shipping both x86 and ARM executables.

I have an M1 Mac Mini and it's astonishing what that small chip and Apple's macOS achieve. Excellence. Apple: please focus on games now too.
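The translate-once-then-run-fast idea described above can be sketched in a few lines. This is a toy illustration of caching translated code (the way Rosetta-style translators amortize the first-run cost); all names here are hypothetical:

```python
# Toy sketch of translate-once-then-cache binary translation:
# the first run pays the translation cost, later runs reuse the
# cached native version. All names are hypothetical.

translation_cache = {}

def translate(foreign_code):
    """Stand-in for an expensive foreign-to-native translation step."""
    return f"native({foreign_code})"

def run(foreign_code):
    if foreign_code not in translation_cache:       # first run: slow path
        translation_cache[foreign_code] = translate(foreign_code)
    return translation_cache[foreign_code]          # later runs: fast path

run("x86_blob")   # translates and caches
run("x86_blob")   # served straight from the cache
```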
 
The world is moving toward translating any code to the chip's architecture and then running it, as the Apple M1 does with x86 code. The first run may be slower, but afterwards it runs at very good speed. So not only is Microsoft writing apps generically and then compiling them for x86/ARM; many others do the same (even free apps sometimes offer x86 and ARM builds).

ARM has a much better chance of winning over x86 than the opposite: ARM is in phones and tablets, in Samsung devices and iPads, and even Apple laptops already run on it. Almost all devices worldwide use ARM, so for a programmer it makes as much sense to optimize for ARM as for x86.

The only exception: games. High-end consoles use x86, and so do gaming PCs, so AAA titles will still be optimized for x86. But as GPUs on ARM SoCs are getting very powerful, it makes sense that studios will slowly start shipping both x86 and ARM executables.

I have an M1 Mac Mini and it's astonishing what that small chip and Apple's macOS achieve. Excellence. Apple: please focus on games now too.
Apple have achieved such good x86 emulation (if you can really call it that) by essentially having some x86 components on the SoC. It remains to be seen if, or when, they decide to axe those and use the die space for something else.
 
Apple have achieved such good x86 emulation (if you can really call it that) by essentially having some x86 components on the SoC. It remains to be seen if, or when, they decide to axe those and use the die space for something else.
x86 components are not allowed; they are exclusive to Intel and AMD. They may add general acceleration for translation tasks, though, but no x86...
 
The world is moving toward translating any code to the chip's architecture and then running it, as the Apple M1 does with x86 code. The first run may be slower, but afterwards it runs at very good speed. So not only is Microsoft writing apps generically and then compiling them for x86/ARM; many others do the same (even free apps sometimes offer x86 and ARM builds).

ARM has a much better chance of winning over x86 than the opposite: ARM is in phones and tablets, in Samsung devices and iPads, and even Apple laptops already run on it. Almost all devices worldwide use ARM, so for a programmer it makes as much sense to optimize for ARM as for x86.

The only exception: games. High-end consoles use x86, and so do gaming PCs, so AAA titles will still be optimized for x86. But as GPUs on ARM SoCs are getting very powerful, it makes sense that studios will slowly start shipping both x86 and ARM executables.

I have an M1 Mac Mini and it's astonishing what that small chip and Apple's macOS achieve. Excellence. Apple: please focus on games now too.
2013 called and wants its predictions back.

How long have we heard the ARM dream? "Oh, ARM is the future!"

What we have not seen is ARM scaling up past mobile. It took nearly a decade for Apple to make a competitive ARM chip, using their exclusive right to the 5nm node, in a closed ecosystem, with levels of integration not present anywhere else in the market.

Wow, what an achievement. Just being ARM doesn't make a chip a magical supercomputer-tier product. AMD's 6000 series is closing much of the gap and still isn't on 5nm.
This has nothing to do with my hatred of Radeon. I don’t hate AMD CPUs. Either way, the battery life of an M1 device is vastly superior to x86 laptops. Anyone who claims they are matched, or even close, is lying through their teeth and directly contradicting what all the reviewers tell us. I also haven’t mentioned that x86 throttles itself on battery, while the M1 performs identically on battery or plugged in.

Apple isn’t perfect, the M1 isn’t perfect. But it’s massively more efficient than any x86 CPU.

It seems a lot of people are letting their love for AMD or hatred of Apple cloud their vision. Apple have done a revolutionary thing with the M1, and that is probably really annoying for their haters. The amount of ignorance surrounding the M1 is epic.
Funny, my 4800H doesn't throttle itself on battery. Must be one of those ARM-based Ryzens!
 
Let's just acknowledge that Rembrandt is going to be a brilliant, well-rounded platform with thoroughly modernised I/O.
 
2013 called and wants its predictions back.

How long have we heard the ARM dream? "Oh, ARM is the future!"

What we have not seen is ARM scaling up past mobile. It took nearly a decade for Apple to make a competitive ARM chip, using their exclusive right to the 5nm node, in a closed ecosystem, with levels of integration not present anywhere else in the market.

Wow, what an achievement. Just being ARM doesn't make a chip a magical supercomputer-tier product. AMD's 6000 series is closing much of the gap and still isn't on 5nm.
Funny, my 4800H doesn't throttle itself on battery. Must be one of those ARM-based Ryzens!
Lmao, your 4800H definitely does "throttle" itself on battery; all x86 laptop chips do, as they are power-restricted on battery. In fact, your 4800H is already limited by the cooling and power setup of your laptop even when it's plugged in. Otherwise it would match a 3800's clocks and performance. I'm quite shocked that you seem to be completely unaware of how laptops work.

Oh, and my predictions aren't predictions, they are facts. The fact is ARM is considerably more efficient than any x86 CPU, and one little Ryzen refresh isn't going to change that.

You don’t know the subject you are waffling about…
 
Oh, and my predictions aren't predictions, they are facts. The fact is ARM is considerably more efficient than any x86 CPU, and one little Ryzen refresh isn't going to change that.
No, it's not. Show me a 5nm x86 CPU that's designed for 3.2 GHz and has DRAM integrated on the package. Oh, there isn't one?

There goes your comparison...
 
Oh, and my predictions aren't predictions, they are facts. The fact is ARM is considerably more efficient than any x86 CPU, and one little Ryzen refresh isn't going to change that.
Please... watch Tim's review of Apple's new laptop and you will find your answers. It's much, much more complex than this simplistic, meaningless statement.
 
A "slight refresh"... and you're calling them delusional? Completely new I/O, a new memory spec, a new GPU and a new node (even if it is just enhanced 7nm) is MUCH more than a "slight refresh". The ARM arch WAS massively disruptive; it isn't anymore, because it isn't making any more headway into markets outside of Apple devices, which are their own closed ecosystems. ARM will never be as efficient or powerful as x86 for Windows applications, at least not as long as Intel and AMD are the primary market players. As for actually disruptive archs, RISC-V is already showing itself to be more efficient than ARM, and that is where all of the VC cash that isn't going into AI/ML silicon is going right now.
You are deeply misinformed. MS and Google are scrambling to develop their own ARM designs right now, so your claim that ARM isn't making headway is pure bullshit. You also laughably claim that ARM will never be as efficient as x86. This is actually backwards: x86 will never be as efficient as ARM, hence why Google and MS are moving to ARM. ARM was designed to be power-efficient from day one. When x86 was designed, nobody imagined it would be powered by a battery (it's that old).

Also, the 6000 series is closer to a refresh than an overhaul. The resulting performance differences of this product amount to a "slight" change. This is nothing compared to Zen 3 vs Zen 2, for example.

 
ARM was designed to be power efficient from day one.
Yes, this is correct, and ARM will have its important niches in many sectors: mobile, tablets, laptops, other SoCs, etc.

For desktops this is not needed at the moment, and x86 is fine. I do not see what the fuss is about. Each architecture has its own niche.

AMD's 6000 APUs are efficient enough for x86 designs, which is fine too.
 
You see, they will be using stock ARM cores there. There's no need for any CPU development; they can just use the existing architecture.
 
You are deeply misinformed. MS and Google are scrambling to develop their own ARM designs right now, so your claim that ARM isn't making headway is pure bullshit. You also laughably claim that ARM will never be as efficient as x86. This is actually backwards: x86 will never be as efficient as ARM, hence why Google and MS are moving to ARM. ARM was designed to be power-efficient from day one. When x86 was designed, nobody imagined it would be powered by a battery (it's that old).

Also, the 6000 series is closer to a refresh than an overhaul. The resulting performance differences of this product amount to a "slight" change. This is nothing compared to Zen 3 vs Zen 2, for example.


LOL, deeply misinformed... I work directly in the industry, with one of those two companies specifically. "Scrambling" would be a LARGE stretch. Google is essentially outsourcing its designs to Samsung and adding some custom AI/ML blocks for image processing; the rest is essentially stock ARM arch. If MS felt the urgent need to "scramble" to make an ARM design, they would have done it already instead of just using QC chips. Again, ARM owns mobile and embedded, but outside of those it isn't making a whole lot of waves.

I didn't state that ARM would never be as efficient as x86 in general; I stated that it would never be as efficient for WINDOWS, which is why it's only used in super-lightweight notebooks that aren't meant for any kind of raw compute power. The two architectures have completely different targets. And yes, I know exactly how old x86 is.

The 6000 series is closer to a refresh than a complete overhaul; you are correct in that statement, because this isn't a complete arch update. However, that's not what you said before. You stated that it was a "slight refresh", which was absurd. A "slight refresh" would be bumping the max clocks by 0.1 GHz and slapping a new name on the chip, like Nvidia used to do with their GPUs.
 
The 6000 series is closer to a refresh than a complete overhaul; you are correct in that statement, because this isn't a complete arch update. However, that's not what you said before. You stated that it was a "slight refresh", which was absurd.
Indeed, but there is more to it.
It's a CPU refresh and a complete platform overhaul. FP7 is an entirely new platform, with better I/O support than Alder Lake, including USB4 at 40 Gbps, DP 2.0 at 40 Gbps and HDMI 2.1 at up to 48 Gbps on die. No other platform in the world has such an advanced video and data pipeline together at the moment.
 
Indeed, but there is more to it.
It's a CPU refresh and a complete platform overhaul. FP7 is an entirely new platform, with better I/O support than Alder Lake, including USB4 at 40 Gbps, DP 2.0 at 40 Gbps and HDMI 2.1 at up to 48 Gbps on die. No other platform in the world has such an advanced video and data pipeline together at the moment.
Oh yes, I know. You and I are on the same page here. That's what I've been trying to point out to the other poster.
 