AMD's upcoming AM5 socket for Raphael processors appears to lack PCIe 5.0 support

It's always great when someone from the peanut gallery stands up with a big target on their back, starts talking about some technology being a "game changer", then switches gears to "do your own research!!!". If you can't back up your point, all you are doing is shitting up the thread with CXL shilling.

The CXL protocol *MAY* be a game changer with HSA implementations, but not only does that not immediately mean it will matter to consumers; since the majority of hardware out there is still 3.0, with plenty of 4.0 around, it will be a LONG time before CXL has the market penetration for any software devs to actually care, and then it will take them years to actually take advantage of it. By then PCIe 6.0 will be out. We still see only a small difference between PCIe 2.0, 3.0, and 4.0 unless you bring in multi-GPU or cards like AMD's 5500 XT that are gimped with x8 connections.
Censorship has been the death of many a tech site. WCCFtech's whitelist has cut traffic to a fraction of what it once was.
So Techspot shouldn't report on news if it looks bad for AMD? Funny, most people would call that "bias".
Even if the rumours are true, that's not a problem for AMD. About the only useful usage scenario I can think of for those 16 PCIe 5.0 lanes is:

- Put a PCIe 5.0 x16 video card (none available yet) in the x16 slot running at x8 speed
- Put a PCIe 4.0 x8 SSD (that's a PCIe card, not M.2) into the other x8 slot

That kind of scenario is very niche. Video cards won't need PCIe 5.0 for a long time.

Another problem with PCIe 5.0 support is that motherboards supporting it are much more expensive to make than PCIe 4.0 motherboards. I expect we will see Intel motherboards with "only" PCIe 4.0 support even when an Alder Lake processor is used. Because PCIe 5.0 adds cost, cheap PCIe 5.0 motherboards are pretty much impossible. That's why Intel limited 5.0 to just two slots, but it also means it's useless for 99.9% of users.

Also, we don't really need PCIe 5.0 for anything. It's just too far ahead of needs. While 3.0 was outdated, 4.0 is not. To illustrate:

1.0: 2003
2.0: 2007 (4 years after the previous one)
3.0: 2010 (3 years)
4.0: 2017 (7 years)
5.0: 2019 (2 years)

It took 7 years to double the bandwidth after 3.0, and now we're supposed to need it doubled again after just two years 🤔
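To put rough numbers on that timeline, here's a quick sketch of the usable x16 bandwidth per generation (transfer rates from the published specs; 8b/10b encoding for 1.0/2.0, 128b/130b from 3.0 onward; figures are approximate, one direction):

```python
# Approximate usable PCIe bandwidth per generation, one direction.
# Each entry: (transfer rate in GT/s per lane, encoding efficiency).
GENS = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding from 3.0 onward
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def bandwidth_gbs(gen: str, lanes: int = 16) -> float:
    """Usable one-direction bandwidth in GB/s for a given generation."""
    rate_gt, efficiency = GENS[gen]
    return rate_gt * efficiency / 8 * lanes  # bits -> bytes, times lanes

for gen in GENS:
    print(f"PCIe {gen} x16: {bandwidth_gbs(gen):5.1f} GB/s")
```

That prints roughly 4, 8, 15.8, 31.5, and 63 GB/s: each step doubles, which is exactly why a 4.0 link that nothing saturates makes 5.0 a hard sell for consumers.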
It was finalized 2 years ago because PCIe 4 took too long to certify; look it up. Look, do you guys want multi-GPU rigs back without the horrible issues SLI and Crossfire endured? CXL and PCIe 5 will be able to fix all of that. So yes, the doubling is actually needed, even if a single GPU doesn't bang off the rev limiter of the total bandwidth.
 
Yeah, no. Read a bit more, like about why SLI/Crossfire was trash and underperformed, versus what CXL actually does for co-processing, and how Intel open-sourced it, with AMD and Nvidia on the consortium for the standard and its use with GPUs.

WTF are you going on about? Seriously, you are arguing something nobody else is. Did you even read the article topic? You need to read a bit less, because nobody is ripping on PCIe 5 in the long run. Calm down, Francis.
 
You're looking at it superficially; buying something just because it's new is a worst-case scenario if you never actually get to use its features. On prosumer versus consumer you are for the most part correct. In this case there are more reasons to buy into PCIe 5 than just it being the shiny brand-new thing, but typically, unless you like beta testing, it's never a wise idea to jump on the first iteration of something.
PCIe 5 is the 5th iteration of PCIe. It’s not the “first iteration” at all. And it works with cards with older specifications of PCIe. It’s sensible to buy a platform with the most up to date versions of the technologies employed.

Last year I would have said that PCIe4 was unlikely to be a benefit to gamers for years. However I would have been wrong. This year AMD released a “midrange” card in the 6600XT with only 8 lanes, that requires PCIe4, punishing users who opted for PCIe3. This may end up being a trend from the GPU manufacturers. We simply do not know what our future requirements will be so it’s always prudent to choose a more up to date platform.
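For anyone wondering what that x8 limitation actually costs, here's a rough sketch of the link math (approximate usable per-lane rates, one direction, assumed from the 3.0/4.0 specs):

```python
# Approximate usable per-lane PCIe rates, one direction, in GB/s.
PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969}

# An x8-only card gets full-fat bandwidth on a 4.0 board...
print(f"x8  on PCIe 4.0: {8 * PER_LANE_GBS['4.0']:.1f} GB/s")   # ~15.8
# ...but only half of that on a 3.0 board, where x16 would have matched it.
print(f"x8  on PCIe 3.0: {8 * PER_LANE_GBS['3.0']:.1f} GB/s")   # ~7.9
print(f"x16 on PCIe 3.0: {16 * PER_LANE_GBS['3.0']:.1f} GB/s")  # ~15.8
```

An x8 card on PCIe4 matches an x16 card on PCIe3; drop it into a PCIe3 board and it runs with half the link it was designed around.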
 
It's always great when someone from the peanut gallery stands up with a big target on their back, starts talking about some technology being a "game changer", then switches gears to "do your own research!!!". If you can't back up your point, all you are doing is shitting up the thread with CXL shilling.

The CXL protocol *MAY* be a game changer with HSA implementations, but not only does that not immediately mean it will matter to consumers; since the majority of hardware out there is still 3.0, with plenty of 4.0 around, it will be a LONG time before CXL has the market penetration for any software devs to actually care, and then it will take them years to actually take advantage of it. By then PCIe 6.0 will be out. We still see only a small difference between PCIe 2.0, 3.0, and 4.0 unless you bring in multi-GPU or cards like AMD's 5500 XT that are gimped with x8 connections.
Censorship has been the death of many a tech site. WCCFtech's whitelist has cut traffic to a fraction of what it once was.
So Techspot shouldn't report on news if it looks bad for AMD? Funny, most people would call that "bias".
Ok, because you asked for it.
Do you know why SLI and Crossfire were garbage? Here, let me spell it out for you...
PCIe has a problem: latency, on top of bandwidth that fluctuates, caused in the past by microcontrollers in the northbridge and now by the controller on the CPU die. That isn't a problem for a single card, but when you add a second card or more into the equation it causes issues; unless a single GPU is really banging off the bandwidth cap of the PCIe spec, your performance metrics will be all over the place to some degree.
That is why Nvidia band-aided the problem with the bridge, and AMD just ended up with frame-timing issues even worse than Nvidia's. This basically has several consequences:
1 - Every game needs specific driver profiles for both the game and the card that allocate bandwidth to stay below the fluctuation, so the link isn't maxed out and dropping frames left and right. That is a lot of work: on GPU manufacturers for DX11 and earlier, and on software developers for DX12 and later with explicit multi-adapter, and we saw how that went.
2 - GPU scaling: whereas 60-class products and below scaled at or near 100%, the larger and faster GPUs degraded in the percentage they could utilize, and it decreased further with every GPU added to the PCIe lanes. Tri/quad SLI ended up a waste of money.
3 - VRAM is mirrored, as there is no way with the limited bandwidth to copy VRAM and share it across the cards.
Nvidia basically created their own connector (NVLink) for IBM POWER workstations to get around the problem for workstation GPUs. We never saw it at the consumer level, for several reasons, other than as a high-bandwidth bridge that was never really used, thanks to multi-adapter never really being utilized and Nvidia's marketing director at the time leaving for Intel.
Intel created a new protocol on top of PCIe called CXL. The basis of it:
Co-processors are given a super-low-latency connection with stable bandwidth, plus bandwidth for communication between the co-processors.
Simple, right? That's pretty much it, and as you can see it's pitched at machine learning, HSA, yada yada yada.
But, but, you made a serious, fatal mistake.
Intel needed CXL baked into PCIe; to do that they had to open-source it, as per the PCIe consortium, and they did. Now Nvidia and AMD sit on the CXL consortium with other corporations.
See, the major advantage of CXL is that it is baked into the PCIe hardware and the lowest-level software.
Meaning it needs no software-level support beyond the BIOS/UEFI level and compliant motherboards and CPUs/GPUs. That is it. Well, you will need OS support, obviously, but seriously, that is it. Because it is baked into the PCIe 5 standard, it operates at the lowest possible level of hardware interaction, WHICH IS WHY IT'S SUCH A BIG DEAL.
I'll get back to this, and to why you're essentially being a dumb-dumb who didn't read about it, let alone is properly informed.
Let's get back to GPUs.
So we identified what is currently wrong with multi-GPU configs:
Bandwidth and latency... huh, what was it CXL does again?
Exactly that, and it creates high-bandwidth communication pathways for a co-processing unit to send info to all relevant units.
It provides enough bandwidth at a low enough latency to fix the PCIe hardware issues. Meaning no GPU scaling losses, no bridges, no per-game driver or software support needed.
So what does a GPU look like on CXL... specifically in Windows?
One giant GPU. CXL is additive across the bus rather than seeing the cards in parallel.
Hey, that is correct: it basically merges two GPUs into one and is additive across the board, so two 8 GB VRAM cards would appear in Windows as one 16 GB GPU, and it does this at a lower level than the OS... Why, exactly?
SO THERE IS NO NEED FOR SOFTWARE SUPPORT, which is why Intel needed it baked into the PCIe standard in the first place.
The OS and the API only see one GPU the whole time, while the CPU communicates with two GPUs and they talk to each other directly, without bridges or half-assed software patches cobbled together to make it work. The OS/API and the game only ever see one GPU; everything is again done at the lowest level of integration.
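If it helps, here's a toy illustration of the difference being described; the function names are made up for illustration and are not any real driver or CXL API:

```python
# Toy model only: contrasts mirrored VRAM (SLI/Crossfire) with the pooled,
# additive model attributed to CXL here. Hypothetical helpers, not a real API.
def usable_vram_mirrored(cards_gb):
    # SLI/Crossfire: every card holds a full copy of the working set,
    # so the smallest card caps the usable pool.
    return min(cards_gb)

def usable_vram_pooled(cards_gb):
    # Pooled/additive model: capacities simply add up.
    return sum(cards_gb)

cards = [8, 8]  # two hypothetical 8 GB cards
print(usable_vram_mirrored(cards))  # 8  -> what SLI/Crossfire gave you
print(usable_vram_pooled(cards))    # 16 -> what's claimed for CXL here
```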
Why is this important to people like you?
Monolithic dies, even with EUV/DUV lithography, are only getting larger and more expensive to produce. Obviously MCM/chiplet design is the next step forward, but there are ridiculous challenges that need to be tackled (Lisa Su has said as much). A stable interconnect means that in the meantime they can continue to sell GPUs scaling towards higher-end goals while profiting off multi-GPU sales. Let's face it: in reality you need two 3090-class GPUs to run real-time ray tracing without any kind of FidelityFX/DLSS at a reasonable frame rate, and for graphics to keep getting better while frame rates stay decent, they are going to have to bring back multi-GPU configs in some form, unless you want to keep paying $700+ for GPUs that can barely tackle 4K at a reasonable rate.
Also, Intel has theoretically stated there is the possibility that GPUs with the same architecture and structure will scale even when different cards are present, meaning you could theoretically stack a 60-class card with an 80-class card. The only question not yet technically answered is what happens if you exceed the bandwidth limits, but I believe that is why it's releasing with 5.0, since it doubles the bandwidth, and they may very well cap configs to two GPUs.
If you really want, I can go into the white paper at a more technical level. The problem is, and don't take this the wrong way, it really may be over your head, as it is for many others who try reading it, partially because of all the math and terms people aren't familiar with.
 
Oh no? It's nearly impossible to max out a PCIe slot right now so I feel like this somehow isn't gonna be an issue. Like, at all. For anybody.
 
PCIe 5 is the 5th iteration of PCIe. It’s not the “first iteration” at all. And it works with cards with older specifications of PCIe. It’s sensible to buy a platform with the most up to date versions of the technologies employed.

Last year I would have said that PCIe4 was unlikely to be a benefit to gamers for years. However I would have been wrong. This year AMD released a “midrange” card in the 6600XT with only 8 lanes, that requires PCIe4, punishing users who opted for PCIe3. This may end up being a trend from the GPU manufacturers. We simply do not know what our future requirements will be so it’s always prudent to choose a more up to date platform.
That wasn't punishing; it was probably something they had to do, as PCIe 4 has a bit more than just raw bandwidth to contend with versus PCIe 3. You kind of proved my point about ignorance. Don't take it in a bad way, please, but you just don't have, or understand, the deep dive on the iterations.

There is no such thing as future-proofing, and there are more than enough instances where things were not backwards compatible. Look at X370: same socket, but not supported with newer CPUs, all because of a BIOS/UEFI chip being too small and causing several other issues.
At some point you really need to decide: are you a prosumer or a consumer? If you are a consumer, there is no future-proofing; things are going to start moving very fast again. It's a trend you see every once in a while; this isn't Intel milking 10+ years out of the same quad core. The industry is picking up steam again, and we are on the cusp of several new breakthrough technologies; when they drop, what you have will become antiquated very fast.

Just buying the shiny and new doesn't future-proof you. Part of being in that cutting-edge prosumer crowd is understanding that you paid money to beta test something for the others later down the line; Nvidia's 2000-series GPUs are a perfect example.
 
That wasn't punishing; it was probably something they had to do, as PCIe 4 has a bit more than just raw bandwidth to contend with versus PCIe 3. You kind of proved my point about ignorance. Don't take it in a bad way, please, but you just don't have, or understand, the deep dive on the iterations.

There is no such thing as future-proofing, and there are more than enough instances where things were not backwards compatible. Look at X370: same socket, but not supported with newer CPUs, all because of a BIOS/UEFI chip being too small and causing several other issues.
At some point you really need to decide: are you a prosumer or a consumer? If you are a consumer, there is no future-proofing; things are going to start moving very fast again. It's a trend you see every once in a while; this isn't Intel milking 10+ years out of the same quad core. The industry is picking up steam again, and we are on the cusp of several new breakthrough technologies; when they drop, what you have will become antiquated very fast.

Just buying the shiny and new doesn't future-proof you. Part of being in that cutting-edge prosumer crowd is understanding that you paid money to beta test something for the others later down the line; Nvidia's 2000-series GPUs are a perfect example.
I’m not ignorant in the slightest.

Also you are incorrect. Buying PCIe5 actually does “future proof” you more than buying PCIe4. More devices will be compatible with a PCIe5 slot than a PCIe4 slot. This is a fact. You’re being wilfully “ignorant” to believe that it doesn’t. You do not know what our future requirements will be.

It makes no sense to oppose the progression of technology. There is no downside to buying PCIe5 over PCIe4 as a decision on its own. Why are you trying to persuade users not to buy a more up-to-date PCI Express technology? The only reason I can think of is that you're just upset that it might not be on AMD's next platform.
 
Oh no? It's nearly impossible to max out a PCIe slot right now so I feel like this somehow isn't gonna be an issue. Like, at all. For anybody.
Uhm, really? My workstation actually does it all the time. Granted, it's not typical, but it does saturate the PCIe bus; there is more to computing than just gaming.
As for gaming: if multi-GPU were still a thing, then yes, running 2x 3090s would oversaturate the PCIe 4 bus.
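Worth spelling out the lane math behind that claim: consumer CPUs expose 16 lanes for graphics, so a second card typically forces an x8/x8 split and halves each card's headroom. A quick sketch (approximate usable PCIe 4.0 per-lane rate, one direction):

```python
# Approximate usable rate per PCIe 4.0 lane, one direction, in GB/s.
LANE_GBS_4_0 = 1.969

# One card gets all 16 lanes; two cards usually split into x8/x8.
for cards, lanes_each in [(1, 16), (2, 8)]:
    per_card = lanes_each * LANE_GBS_4_0
    print(f"{cards} card(s): x{lanes_each} each -> {per_card:.1f} GB/s per card")
```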
 
I’m not ignorant in the slightest.

Also you are incorrect. Buying PCIe5 actually does “future proof” you more than buying PCIe4. More devices will be compatible with a PCIe5 slot than a PCIe4 slot. This is a fact. You’re being wilfully “ignorant” to believe that it doesn’t. You do not know what our future requirements will be.

It makes no sense to oppose the progression of technology. There is no downside to buying PCIe5 over PCIe4 as a decision on its own. Why are you trying to persuade users not to buy a more up-to-date PCI Express technology? The only reason I can think of is that you're just upset that it might not be on AMD's next platform.

Yes you are, and the fact you cannot admit it tells me you are not an adult at all.

Buying PCIe 5 does not future-proof you when you are CPU-limited, so I am not incorrect, and you cannot make assumptions; your system is only as fast as its slowest or most outdated component. According to Steam, most people still use 1080p panels, which means the majority are CPU-limited, and your GPU and PCIe spec have no real relevance beyond the specified voltages on the power-delivery side and other possible features.

Look, you cannot have it both ways.
You state you are not ignorant because AMD is a big bad meanie that made you go PCIe 4 for x8 lanes rather than x16 PCIe 3 lanes, but then you rant on about buying the newest PCIe spec without knowing what it actually does, or why you'd need to buy it other than it being shiny and new and/or "future-proof", which doesn't exist. The days of a system lasting a decade are over; in fact, you are going to be lucky to get 3-5 years of use with the way things are picking up.

It's from Gigabyte; I am not worried in the slightest. They are the most *** backwards company right now: exploding power supplies, terrible cases, and a host of other issues.
In fact, on the earliest AM5 boards I didn't expect PCIe 5, DDR5, and USB 4 to be solely native to the new specs: for USB I expected chipset additions on the mobo, plus DDR4/5 support, and PCIe 4 native on probably the first set of CPUs. That is exactly why I don't want to be a beta tester, and why I maxed out my current build to tide myself over until the 2nd or 3rd gen of AM5 processors.

With CXL and what it will bring to the table with PCIe 5, yes, the sooner the better. I want to run multi-GPU configs again that aren't complete **** and don't have scaling problems with high frame times. Sorry, I couldn't care less about budget plebs and the concerns of people spending less than $700 on a computer; that is like budget arcade computing to me. I also don't like fanboyism. AMD, Nvidia, Intel: all the same to me, guilty of the same crap, and the people loyal to them are *****s, because those companies share no loyalty with you, nor do they actually care. They are businesses there to make money, that is all; as long as I pay and get what I want out of any of them, they get the money and I am content. I just don't like putting up with the fantards.

Upset? No, lol. I have been around the tech sector too long to care that much. I just don't want TechSpot turning into the WCCFtech forums: a toxic mess of loyal fanboyism with no actual technical data or understanding, posting really dumb crap. Best to nip it in the bud here.
 
This arguing brings back the old days when Rambus looked good on paper but wasn't worth the performance/price in reality.

I don't have a crystal ball, but with GPUs and mining, in the coming years the GPU will be the weakest/slowest link in demand. It's sad. To be honest, I bet in the next few years the issue won't be PCIe 5, since no one will be able to afford, or want to spend, top $$$$$ on a GPU, and the majority of people will be rocking the power of the likes of the RX 590 or 3060 Ti when PCIe 5 comes out. In other words, adoption will be slow.
 
Not to mention, when they go full PCIe 5 it will allow cheaper chipsets to split the lanes between devices without losing performance, because of the doubled bandwidth of 5.0.
No. PCIe 5.0 x1 is, and always will be, more expensive than PCIe 4.0 x2, which is, and always will be, more expensive than PCIe 3.0 x4. Total cost, that is, not only the chipset.
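For reference, those three configurations land at roughly the same usable throughput, which is exactly the trade-off being argued about. A quick sketch (approximate usable per-lane rates, one direction):

```python
# Each PCIe generation doubles the per-lane rate, so halving the lane
# count keeps throughput roughly constant. Rates in GB/s, one direction.
PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

for gen, lanes in [("3.0", 4), ("4.0", 2), ("5.0", 1)]:
    print(f"PCIe {gen} x{lanes}: {PER_LANE_GBS[gen] * lanes:.2f} GB/s")
```

All three print ~3.9 GB/s; the argument is purely about which one is cheaper to build.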
 
Well, part of the reason people can debate something that hasn't happened yet is that so much of this stuff is set in stone years ahead.

The roadmaps tell us they work on these things for years and years, and something this big, coming in a year or so, is almost always set in stone by this point.

Not saying that's true for everything but we've seen in the past that it usually is.
Roadmaps always change, especially a year-old roadmap. It almost never reaches 100% accuracy against the initial roadmap; some things change or get delayed, even big ones (although rarely), but usually it's the small details.

No, it's not set in stone yet whether it's PCIe 4.0 or 5.0. We assume this hacked info is 2021 info because it was leaked now, but it could be older (we don't know), and there is still time for AMD to go PCIe 5.0 if they think it's worth it, because launch is still a year away and they have time to change their mind...

The major architecture points may be set in stone years in advance, but something like PCIe 4.0 or 5.0 can be modified with 1 year still to go until launch...

If the leaks still say PCIe 4.0 in Q1 2022, then I'll believe them, until then it's a useless debate.
 
When Intel was still on PCIe 3, AMD fanboys said “Intel is terrible, mired in stagnation, can’t innovate anymore, Intel is dead.”

Now that Intel has forged ahead, leading the industry with PCIe 5 while AMD is stuck on PCIe 4 for years, it’s “well, no one needs PCIe 5, it’s a gimmick, there’s no discernible difference between PCIe 3.0 and PCIe 4.0, so who even needs PCIe 5, blah blah blah.”

Kinda amusing and sad simultaneously.

PCIe 4.0 was announced in 2011, but it has only been widely implemented in products for a couple of years now, and devices actually using PCIe 4.0 bandwidth only really in the last year.

Try harder, fanboy. Or just stop being one.
 
Well, actually, *"ignorant"* fits anyone who clearly thinks it's just a bandwidth increase. There is more than raw bandwidth for GPUs to be concerned with, like feature sets, like CXL, which is a complete game changer for the way devices or co-processors (yeah, that is your GPU) in the PCIe pipeline are handled.

*"* I am calling ignorant because of a lack of knowledge while making judgements about a tech commenters apparently are not informed about which is the proper term and not to be meant in an inherently demeaning way like most people perceive it thanks to bastardized English, it's just a lack of knowledge on the subject. So try not to perceive it in the wrong way please and own up if you are uninformed.
Does Alder Lake support CXL, though? Sapphire Rapids Xeon apparently does, but the indications I found are against ADL supporting it.

 
Does Alder Lake support CXL, though? Sapphire Rapids Xeon apparently does, but the indications I found are against ADL supporting it.

It's not here yet; it's a year away from release. In other words, it's too early to tell. I wouldn't put it past Intel to lock it down to the HEDT releases on their platforms, as they have in the past, to segregate their marketed lines; it can be blocked through microcode, but it is baked into the PCIe 5 standard. Intel may also wait until it benefits them with their own line of GPUs.
 
This arguing brings back the old days when Rambus looked good on paper but wasn't worth the performance/price in reality.

I don't have a crystal ball, but with GPUs and mining, in the coming years the GPU will be the weakest/slowest link in demand. It's sad. To be honest, I bet in the next few years the issue won't be PCIe 5, since no one will be able to afford, or want to spend, top $$$$$ on a GPU, and the majority of people will be rocking the power of the likes of the RX 590 or 3060 Ti when PCIe 5 comes out. In other words, adoption will be slow.
No, not really. GPUs are capped by several things, one being the fact that monolithic dies are getting too expensive. PCIe 5 brings CXL to the table, which basically puts multi-GPU back on the table, and that lets them diversify their lines in meaningful ways until they solve MCM/chiplet designs, which should reduce costs in combination with more refined EUV and DUV processing.
 
Fun fact:
When AMD was touting PCIe 4 on X470, the chipset was PCIe 2.

Meanwhile on Intel's platform, four full-speed PCIe 4 M.2 drives can run simultaneously on Z690.

AMD didn't capitalize while Intel was still on 14nm. I saw this coming.
Back then Intel had only 16 lanes on the CPU, while AMD offered 4 more direct lanes for the NVMe drive. Intel only recently added 4 more lanes to their consumer CPUs, so you were forced to use the chipset lanes for I/O.

And it seems that only the GPU will get PCIe 5.0; the extra 4 lanes will be PCIe 4.0, like on AMD. This means that NVMe drives will still be made only for PCIe 4.0.

I/O is the only place I see a use for PCIe 5.0 for the next few years (until 2025, maybe), and Intel is not directly making use of it there (maybe only in niche NVMe cards for PCIe slots, but those aren't really for regular consumers).
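To put numbers on why that x4 link matters: an NVMe drive gets four lanes, so its ceiling is set entirely by the link generation. A quick sketch (approximate usable per-lane rates, one direction):

```python
# Approximate usable per-lane PCIe rates in GB/s, one direction.
PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

for gen, per_lane in PER_LANE_GBS.items():
    print(f"NVMe on PCIe {gen} x4: ~{4 * per_lane:.1f} GB/s ceiling")
```

That's ~3.9, ~7.9, and ~15.8 GB/s respectively, which is why keeping the CPU's direct x4 at 4.0 caps SSDs at today's ~7 GB/s drives.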
 
No, not really. GPUs are capped by several things, one being the fact that monolithic dies are getting too expensive. PCIe 5 brings CXL to the table, which basically puts multi-GPU back on the table, and that lets them diversify their lines in meaningful ways until they solve MCM/chiplet designs, which should reduce costs in combination with more refined EUV and DUV processing.

Unless I worded it wrong, I was trying to point out that the gap between the speed of a GPU and the majority of people actually being able to get their hands on a fast/modern GPU will be the real issue. The real bottleneck is REALITY and your WALLET. Most people I know have a mid-range card and will be nursing it, trying to extend its gaming life, since GPUs are super expensive and hard to get.

We will be entering a part-two challenge in our computing world: milking our precious cards as long as feasibly possible. Console gaming will catch up; I have a few friends who are throwing in the towel and caving in to gaming consoles.
 
Unless I worded it wrong, I was trying to point out that the gap between the speed of a GPU and the majority of people actually being able to get their hands on a fast/modern GPU will be the real issue. The real bottleneck is REALITY and your WALLET. Most people I know have a mid-range card and will be nursing it, trying to extend its gaming life, since GPUs are super expensive and hard to get.

We will be entering a part-two challenge in our computing world: milking our precious cards as long as feasibly possible. Console gaming will catch up; I have a few friends who are throwing in the towel and caving in to gaming consoles.

I hope we are not going back to the multi-card SLI/Crossfire days for gaming. Twice the cost for less than double the performance, plus the added driver issues.
 
No, not really. GPUs are capped by several things, one being the fact that monolithic dies are getting too expensive. PCIe 5 brings CXL to the table, which basically puts multi-GPU back on the table, and that lets them diversify their lines in meaningful ways until they solve MCM/chiplet designs, which should reduce costs in combination with more refined EUV and DUV processing.
In simple terms for you: CXL will do nothing for the average Joe; it's mostly for datacenters with specialised needs. Most workloads there can already take advantage of multi-GPU servers/workstations.

We will see multi-GPU setups for gaming, but it will just be done via chiplets. I doubt we'll ever see SLI- and Crossfire-style implementations become popular again.

Why do you even want to see such multi-GPU systems become popular again?
 
In simple terms for you: CXL will do nothing for the average Joe; it's mostly for datacenters with specialised needs. Most workloads there can already take advantage of multi-GPU servers/workstations.

We will see multi-GPU setups for gaming, but it will just be done via chiplets. I doubt we'll ever see SLI- and Crossfire-style implementations become popular again.

Why do you even want to see such multi-GPU systems become popular again?
Read up on the technology and understand what it is capable of, especially as it relates to co-processors, in this case GPUs, and why SLI and Crossfire sucked.

If all you are seeing is datacenter use, you are incredibly short-sighted.

That is just plain useless. If you want anything to progress, there are one of two ways: one deals with GPUs moving from monolithic design to chiplet/MCM design, the other with making PCIe bus communication more efficient at the lowest level, removing the need for complex band-aids.
 
Unless I worded it wrong, I was trying to point out that the gap between the speed of a GPU and the majority of people actually being able to get their hands on a fast/modern GPU will be the real issue. The real bottleneck is REALITY and your WALLET. Most people I know have a mid-range card and will be nursing it, trying to extend its gaming life, since GPUs are super expensive and hard to get.

We will be entering a part-two challenge in our computing world: milking our precious cards as long as feasibly possible. Console gaming will catch up; I have a few friends who are throwing in the towel and caving in to gaming consoles.
Once again, the price increase is the result of monolithic lithography getting too expensive. Everything filters down; prosumer changes eventually reach the consumer.
 