AMD's upcoming AM5 socket for Raphael processors appears to lack PCIe 5.0 support

fps4ever

Posts: 758   +1,003
Please read about 5.0 before commenting on it; look up CXL and what it does and is capable of before making foolish claims. Not everything is about raw speed or bandwidth.
And no, the M.2 SSDs are more than fast enough; the problem lies in the software not utilizing that speed. Outside of Windows itself, only productivity software really manages to use it. Blame the software engineers.

I understand PCIe 5.0 perfectly well. Exactly what is foolish about talking about current CPUs? You are talking about future benefits and the hardware that can take advantage of them, and mostly specialized software and AI applications at that, not consumer entry-level CPUs. Are you one of those future paper experts calling me foolish, LOL?
 

Shadowboxer

Posts: 1,847   +1,459
Fun fact:
When AMD was touting PCIe 4 on X470, the chipset was PCIe 2.

Meanwhile on Intel's platform, four full-speed PCIe 4 M.2 drives can run simultaneously on Z690.

AMD didn't capitalize while Intel was still on 14nm. I saw this coming.
I wouldn't rule out AMD yet. Their 6000 series parts could perform better than Alder Lake, and then nobody will care about PCIe 5. I don't think many people even care about PCIe 4, but I personally believe it's always best to seek out the better spec, even if you don't need it yet. You never know, AMD might drop another card with only half the lanes on PCIe 5 (highly unlikely) and then you'd regret getting PCIe 4.

Although personally I do look forward to the butthurt comments from the AMD fans about how "this PCIe thing is good enough". Go look at the articles promising PCIe 4 on Zen 2. The same fans are all there reiterating how important PCIe bandwidth is. Now, as Zoidberg would say, the rubber band is on the other claw!
 

hahahanoobs

Posts: 3,850   +1,911
I wouldn't rule out AMD yet. Their 6000 series parts could perform better than Alder Lake, and then nobody will care about PCIe 5. I don't think many people even care about PCIe 4, but I personally believe it's always best to seek out the better spec, even if you don't need it yet. You never know, AMD might drop another card with only half the lanes on PCIe 5 (highly unlikely) and then you'd regret getting PCIe 4.

Although personally I do look forward to the butthurt comments from the AMD fans about how "this PCIe thing is good enough". Go look at the articles promising PCIe 4 on Zen 2. The same fans are all there reiterating how important PCIe bandwidth is. Now, as Zoidberg would say, the rubber band is on the other claw!
AMD can def stay in the game and I want them to, but I think we'll start seeing them struggle. Especially if Intel's GPUs are any good. If they are, AMD will struggle against Intel's massive software team.

I also dismissed PCIe 4 and went with X470 and a PCIe 3.0 drive to save money and avoid buying what I won't need. PCIe 3 is overkill for 90%+ of gamers since SATA load times are on par. This might change with DirectStorage, but I'm still not convinced.
 

Shadowboxer

Posts: 1,847   +1,459
AMD can def stay in the game and I want them to, but I think we'll start seeing them struggle. Especially if Intel's GPUs are any good. If they are, AMD will struggle against Intel's massive software team.

I also dismissed PCIe 4 and went with X470 and a PCIe 3.0 drive to save money and avoid buying what I won't need. PCIe 3 is overkill for 90%+ of gamers since SATA load times are on par.
Intel's GPUs look very interesting; some are saying they could offer 3070-like performance. They are supposedly good at ray tracing and use hardware acceleration for XeSS, their FSR/DLSS equivalent. It's the most interesting thing to happen for PC gaming hardware since Zen 1 launched a few years back.

Also, I hope for your sake that this 8-lane GPU trend doesn't continue. I chose X570 for PCIe 4 because I was concerned that if I bought a GPU in a few years it might be limited by PCIe 3. But AMD accelerated that by cheaping out on the 6600 XT and giving it only 8 lanes. Now today's midrange GPUs could be limited by PCIe 3.
 

hahahanoobs

Posts: 3,850   +1,911
I knew I read this somewhere...
"The PCIe 5.0 x16 can be split into an x8 (for graphics) and 2x x4 (for storage)"
Techpowerup


I didn't originally see it on TPU, but it's what I found when I searched it just now.
@Irata
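For what it's worth, the split in that quote is simple lane arithmetic. Here is a rough sketch of the one-way bandwidth numbers involved (assuming PCIe 5.0's 32 GT/s per lane and 128b/130b encoding; the helper function name is just for illustration):

```python
# One-way payload bandwidth of a PCIe 5.0 lane group:
# 32 GT/s per lane, 128b/130b line code, 8 transfers per byte.
def pcie5_gbps(lanes: int) -> float:
    return 32 * (128 / 130) * lanes / 8

# x16 slot split as x8 (graphics) + x4 + x4 (two NVMe drives):
print(f"x8 GPU link:  {pcie5_gbps(8):.1f} GB/s")   # same ballpark as a PCIe 4.0 x16 slot
print(f"x4 NVMe link: {pcie5_gbps(4):.1f} GB/s")   # roughly double a PCIe 4.0 x4 drive

# The split spends exactly the 16 lanes of the slot.
assert pcie5_gbps(8) + 2 * pcie5_gbps(4) == pcie5_gbps(16)
```

So an x8 PCIe 5.0 graphics link carries about the same bandwidth as a full PCIe 4.0 x16 slot, which is why the split costs so little in practice.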
 

Kosmoz

Posts: 461   +812
Do any of you know for sure, 100%, that these are the final release specs of AM5?

Why do you keep debating things that could actually be different a year from now?

It's so stupid and makes no sense to already declare a winner on specs based on a hack when we don't know how old the actual info is. It's like trusting leaks about something that will come out a year from now, and trusting them 100%.
 

Shadowboxer

Posts: 1,847   +1,459
Do any of you know for sure, 100%, that these are the final release specs of AM5?

Why do you keep debating things that could actually be different a year from now?

It's so stupid and makes no sense to already declare a winner on specs based on a hack when we don't know how old the actual info is. It's like trusting leaks about something a year from now 100%.
Because Techspot created an article about it and us armchair experts can't resist speculating. Also, I see nobody declaring a "winner".

Obviously we should take this all with a pinch of salt.
 

NeoMorpheus

Posts: 883   +1,679
When Intel was still on PCIe 3, AMD fanboys said "Intel is terrible, mired in stagnation, can't innovate anymore, Intel is dead."

Now that Intel has forged ahead, leading the industry with PCIe 5 while AMD is stuck on PCIe 4 for years, it's "well, no one needs PCIe 5, it's a gimmick, there's no discernible difference between PCIe 3.0 and 4.0 so who even needs PCIe 5, blah blah blah."

Kinda amusing and sad simultaneously.
Lol! Just as @Adi6293 called it!
 

NeoMorpheus

Posts: 883   +1,679
All of the E18-controller drives and newer models are already hitting max bandwidth on PCIe 4.0.
The interesting part is, only benchmarks seem to take advantage of that bandwidth.

Real usage shows a completely different story. One quick search I did:

 

Theinsanegamer

Posts: 2,843   +4,493
Please read about 5.0 before commenting on it; look up CXL and what it does and is capable of before making foolish claims. Not everything is about raw speed or bandwidth.
And no, the M.2 SSDs are more than fast enough; the problem lies in the software not utilizing that speed. Outside of Windows itself, only productivity software really manages to use it. Blame the software engineers.
It kind of is a big deal because of the feature sets 5.0 brings, like CXL support, which is a game changer.
Well actually, for the *"ignorant"* who clearly think it's just a bandwidth increase: there is more than raw bandwidth for GPUs to be concerned with. There are feature sets like CXL, which is a complete game changer for the way devices and co-processors (yes, that includes your GPU) in the PCIe pipeline are handled.

*"* I am calling ignorant because of a lack of knowledge while making judgements about a tech commenters apparently are not informed about which is the proper term and not to be meant in an inherently demeaning way like most people perceive it thanks to bastardized English, it's just a lack of knowledge on the subject. So try not to perceive it in the wrong way please and own up if you are uninformed.
It's always great when someone in the peanut gallery stands up with a big target on their back, starts talking about some technology being a "game changer", then switches gears to "do your own research!!!". If you can't back up your point, then all you are doing is shitting up the thread with CXL shilling.

The CXL protocol *MAY* be a game changer with HSA implementations, but not only does that not immediately mean it will matter to consumers: since the majority of hardware out there is still 3.0, with plenty of 4.0 around, it will be a LONG time before CXL has the market penetration for any software devs to actually care, and then it will take them years to actually take advantage of it. By then PCIe 6.0 will be out. We still see only a small difference between PCIe 2.0, 3.0, and 4.0 unless you bring in multi-GPU or cards like AMD's 5500 XT that are gimped with x8 connections.
It's not the brand agenda that's the problem for me, it's the censorship.
Censorship has been the death of many a tech site. WCCFtech's whitelist has cut traffic to a fraction of what it once was.
Really strange how this site seems to post fluff pro-Nvidia articles on a daily basis, while the few AMD articles they post are only negative.

And don't get me started on the draconian censorship that is the barrage of removed comments that don't fit their narrative.
So Techspot shouldn't report on news if it looks bad for AMD? Funny, most people would call that "bias".
 

Shadowboxer

Posts: 1,847   +1,459
The interesting part is, only benchmarks seem to take advantage of that bandwidth.

Real usage shows a completely different story. One quick search I did:

Everybody has known for years that game loading times barely benefit from a faster PCIe link on storage. I believe Nvidia are working on technology to take advantage of it. I think the consoles have already shown benefits to using PCIe storage.

But that doesn't mean PCIe 5 couldn't have an advantage for gamers. 8 lanes of PCIe 5 carry the same bandwidth as 16 lanes of PCIe 4, so it could free up lanes on your board.

Or if Nvidia follow the route of AMD and take a dump on consumers by removing half the lanes from their cards, we may potentially need the extra bandwidth.
 

Lionvibez

Posts: 2,482   +2,120
The interesting part is, only benchmarks seem to take advantage of that bandwidth.

Real usage shows a completely different story. One quick search I did:

Yup, not too many consumer workloads are bottlenecked by I/O, and remember this testing only shows gaming. Most games are designed for hard drives; only more recent games will benefit going forward.

There are other workloads that would benefit from it; it just depends on what you do on your computer.
 

Rdmetz

Posts: 335   +162
Not exactly a big deal, since even 4.0 isn't really needed by 99% of PC users. I'm using 3.0 and I'm fine. But I can see Intel fanboys use this as a weapon against AMD when they were the ones who said 4.0 didn't matter... 😅

You can spin it both ways: PLENTY of AMD users also dogged Intel users because they didn't have PCIe 4.0 with their 10-series.

Just saying what goes around comes around.

Me personally? I knew it wouldn't matter with the 10 series (and the RTX 30 series proved that), and it won't matter here again either.

But still, maybe if you don't want to get dogged on about it, don't dog on others first to begin with.
 

Rdmetz

Posts: 335   +162
This is at least 1 year away and the leaked/hacked data may be old/older, we don't know.

I think it's still possible that the real AM5, when it's officially announced before launch, will actually have PCIe 5.0.

Intel fanbois should not rejoice yet, it's not over.

P.S. As a side note: the majority of gamers are still on PCIe 3.0 (myself included) and it's fine, so there is a lot of room for 4.0 to grow before going to 5.0.
Only the smug elitists need 5.0 for bragging rights (with the exception of pro-sumers).
The only reason there are so many here defending this is that many of the same people were bashing Intel a year and a half to two years ago for not having PCIe 4.0, and now they're trying to get out in front of the bashing they expect now that the tides have turned. I knew back then it didn't matter, and I know now it still won't for some time.

But it won't stop either side from taking digs at each other, though the fact is AMD fanboys started it.
 

Rdmetz

Posts: 335   +162
Do any of you know for sure, 100%, that these are the final release specs of AM5?

Why do you keep debating things that could actually be different a year from now?

It's so stupid and makes no sense to already declare a winner on specs based on a hack when we don't know how old the actual info is. It's like trusting leaks about something that will come out a year from now, and trusting them 100%.

Well, part of the reason people can debate something that hasn't happened yet is that so much of this stuff is set in stone years ahead.

The roadmaps tell us they work on these things for years and years, and something this big, coming in a year or so, is almost always set in stone by this point.

Not saying that's true for everything but we've seen in the past that it usually is.
 

HardReset

Posts: 1,311   +983
Even if the rumours are true, that's not a problem for AMD. About the only useful scenario I can think of for those 16 PCIe 5.0 lanes is:

- Put a PCIe 5.0 x16 video card (none available yet) in the x16 slot, running at x8 speed
- Put a PCIe 4.0 x8 SSD (that's a PCIe card, not M.2) into another x8 slot

That kind of scenario is very niche. Video cards won't need PCIe 5.0 for a long time.

Another problem with PCIe 5.0 support is that motherboards supporting it are much more expensive to make than PCIe 4.0 motherboards. I expect we will see Intel motherboards with "only" PCIe 4.0 support even when an Alder Lake processor is used. Because PCIe 5.0 adds cost, cheap motherboards are pretty much impossible. That's why Intel limited 5.0 to just two slots, but it also means it's useless for 99.9% of users.

Also, we don't really need PCIe 5.0 for anything. It's just too far ahead of needs. While 3.0 was outdated, 4.0 is not. To illustrate:

1.0: (Came out) 2003
2.0: 2007 (4 years after previous one)
3.0: 2010 (3 years)
4.0: 2017 (7 years)
5.0: 2019 (2 years)

It took 7 years to double bandwidth from 3.0, and now we are supposed to need double the bandwidth again after just two years 🤔
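To put numbers on that doubling, here is a quick sketch (the per-lane transfer rates and line codes are the published ones; the dictionary and function names are mine):

```python
# Per-lane transfer rate (GT/s) and line-code efficiency per PCIe generation.
GENS = {
    "1.0": (2.5, 8 / 10),      # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),   # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def x16_bandwidth_gbps(gen: str) -> float:
    """Approximate one-way payload bandwidth (GB/s) of a full x16 slot."""
    rate, eff = GENS[gen]
    return rate * eff * 16 / 8  # 8 transfers per byte

for gen in GENS:
    print(f"PCIe {gen} x16: {x16_bandwidth_gbps(gen):.1f} GB/s")
```

Per x16 slot that works out to roughly 4, 8, 15.8, 31.5, and 63 GB/s one-way, i.e. each generation doubles the previous one.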
 
Well actually, for the *"ignorant"* who clearly think it's just a bandwidth increase: there is more than raw bandwidth for GPUs to be concerned with. There are feature sets like CXL, which is a complete game changer for the way devices and co-processors (yes, that includes your GPU) in the PCIe pipeline are handled.

*"* I am calling ignorant because of a lack of knowledge while making judgements about a tech commenters apparently are not informed about which is the proper term and not to be meant in an inherently demeaning way like most people perceive it thanks to bastardized English, it's just a lack of knowledge on the subject. So try not to perceive it in the wrong way please and own up if you are uninformed.
Not to mention, when they go full PCIe 5 it will allow cheaper chipsets to split the lanes between devices without losing performance, thanks to the doubled bandwidth of 5.0.
 

kiwigraeme

Posts: 659   +503
As stated, to really need to take advantage of it you'd want a serious server, or lots of video/photo work: loading lots of huge files to work on and process.

I wonder if this is also a "let's make sure it works" decision. Remember some X570 boards, mainly Gigabyte's, were having issues. You are requiring immaculate timings/communications between multiple parts, so why get a bad rep if one motherboard maker stuffs up, for gains that only matter to specific customers?
Plus AMD makes commercial servers etc., so who knows what they will implement there.

At least HDMI 2.1 with DRM works OK now, but it used to be a pain, needing a repower to get a link. Not saying it's like this here, but it's a choice between annoying 5% of users versus a cheaper product that disappoints the 5% of power users and the "I only want the best" crowd.
 

Rayneofpayne

Posts: 480   +410
It's always great when someone in the peanut gallery stands up with a big target on their back, starts talking about some technology being a "game changer", then switches gears to "do your own research!!!". If you can't back up your point, then all you are doing is shitting up the thread with CXL shilling.

The CXL protocol *MAY* be a game changer with HSA implementations, but not only does that not immediately mean it will matter to consumers: since the majority of hardware out there is still 3.0, with plenty of 4.0 around, it will be a LONG time before CXL has the market penetration for any software devs to actually care, and then it will take them years to actually take advantage of it. By then PCIe 6.0 will be out. We still see only a small difference between PCIe 2.0, 3.0, and 4.0 unless you bring in multi-GPU or cards like AMD's 5500 XT that are gimped with x8 connections.
Censorship has been the death of many a tech site. WCCFtech's whitelist has cut traffic to a fraction of what it once was.
So Techspot shouldn't report on news if it looks bad for AMD? Funny, most people would call that "bias".
It will, but it's a waste of time to walk through an entire white paper here; hence, you can do your own research.
 

Rayneofpayne

Posts: 480   +410
Not to mention, when they go full PCIe 5 it will allow cheaper chipsets to split the lanes between devices without losing performance, thanks to the doubled bandwidth of 5.0.
Yes, that is another very welcome by-product. Not to mention, depending on the overall increase, it allows more features to be added at a much lower cost.
 

Rayneofpayne

Posts: 480   +410
I understand PCIe 5.0 perfectly well. Exactly what is foolish about talking about current CPUs? You are talking about future benefits and the hardware that can take advantage of them, and mostly specialized software and AI applications at that, not consumer entry-level CPUs. Are you one of those future paper experts calling me foolish, LOL?
Yeah, no. Read a bit more, like why SLI/CrossFire was trash and underperformed, versus what CXL actually does for co-processing, and how Intel open-sourced it with AMD and Nvidia on the consortium for the standard and its use with GPUs.
 

Rayneofpayne

Posts: 480   +410
All I have said here is you should buy the more modern spec. I can assure you this is not a foolish claim.

Unless you have a reason outside of "buy only what you need" for choosing PCIe 4 over PCIe 5, then you are nothing but a worthless troll…
You're looking at it superficially: buying something just because it's new is a bad scenario if you will never actually get to use its features. On prosumer versus consumer you are mostly correct; in this case there are more reasons to buy into PCIe 5 than it just being the shiny brand-new thing, but typically, unless you like beta testing, it's never wise to jump on the first iteration of something.
 

Rayneofpayne

Posts: 480   +410
It's not just for consumer parts, people.
Intel will benefit IMMENSELY from this, because server and HPC are king, not games.
While yes, the gaming industry is still a billion-dollar industry to capitalize on, why throw that money away by being stupid?