It's always great when someone from the peanut gallery stands up with a big target on their back, starts talking about some technology being a "game changer", then switches gears to "do your own research!!!". If you can't back up your point, then all you are doing is shitting up the thread with CXL shilling.
The CXL protocol *MAY* be a game changer with HSA implementations, but not only does that not immediately mean it will matter to consumers; since the majority of hardware out there is still PCIe 3.0, with plenty of 4.0 around, it will be a LONG time before CXL has the market penetration for any software devs to actually care, and then it will take them years to actually take advantage of it. By then PCIe 6.0 will be out. We still see only a small difference between PCIe 2.0, 3.0, and 4.0 unless you bring in multi-GPU setups or cards like AMD's 5500 XT that are gimped with x8 connections.
Censorship has been the death of many a tech site. WCCFtech's whitelist has cut traffic to a fraction of what it once was.
So TechSpot shouldn't report on news if it looks bad for AMD? Funny, most people would call that "bias".
OK, because you asked for it:
Do you know why SLI and CrossFire were garbage? Here, let me spell it out for you.
PCIe has a problem: latency, on top of bandwidth that fluctuates. In the past that was caused by the microcontroller in the north bridge, and now by the controller on the CPU die. It isn't a problem for a single card; unless a single GPU is really banging against the bandwidth cap of the PCIe spec, you'll never notice. But when you add a second card (or more) into the equation, your performance metrics end up all over the place.
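For a sense of scale, here's the napkin math on what each PCIe generation actually gives you per direction on an x16 slot (the signaling rates and encodings are from the published specs; the rest is just arithmetic, not a benchmark):

```python
# Napkin math: theoretical one-direction bandwidth of an x16 slot per PCIe gen.
# Signaling rates (GT/s per lane) and encodings are from the published specs.
RATE_GT = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0, "5.0": 32.0}
ENC = {"1.0": 8/10, "2.0": 8/10, "3.0": 128/130, "4.0": 128/130, "5.0": 128/130}

def x16_gbps(gen):
    # GT/s * encoding efficiency = usable Gb/s per lane; /8 -> GB/s; *16 lanes
    return RATE_GT[gen] * ENC[gen] / 8 * 16

for gen in RATE_GT:
    print(f"PCIe {gen} x16: {x16_gbps(gen):5.2f} GB/s per direction")
# -> 3.0 ~ 15.75 GB/s, 4.0 ~ 31.51 GB/s, 5.0 ~ 63.02 GB/s
```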
That is why Nvidia band-aided the problem with the bridge, and AMD just ended up with frame-timing issues even worse than Nvidia's. This basically has several consequences:
1 - Every game needs specific drivers for both the game and the card itself that allocate bandwidth to keep it below the fluctuation ceiling, so it isn't maxed out dropping frames left and right. That was a lot of work for GPU manufacturers under DX11 and earlier, and under DX12 it fell to software developers via explicit multi-adapter, and we see how that went.
2 - GPU scaling: where 60-class products and below scaled at or near 100%, the larger and faster GPUs degrade in the percentage they can utilize, and it decreases further with every GPU you add to the PCIe lanes. Tri/quad SLI ended up a waste of money.
3 - VRAM is mirrored, as there is no way with the limited bandwidth to split VRAM contents and send them between cards; there's a quick sketch of the difference below.
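To make point 3 concrete, here's a minimal sketch of mirrored versus pooled VRAM (the card sizes are just illustrative):

```python
# Illustrative only: usable VRAM under mirroring (SLI/CrossFire) vs. pooling.
cards_gb = [8, 8]  # two 8 GB cards

# Mirroring: every card carries a full copy of all resources, so usable
# capacity is capped by the smallest card, not the sum.
mirrored = min(cards_gb)   # 8 GB usable out of 16 GB installed

# Pooling (the additive behavior described further down):
pooled = sum(cards_gb)     # 16 GB usable

print(f"installed: {sum(cards_gb)} GB, mirrored: {mirrored} GB, pooled: {pooled} GB")
```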
Nvidia basically created their own connector (NVLink) for IBM POWER workstations to get around the problem with respect to workstation GPUs. On the consumer level we never saw it as more than a high-bandwidth bridge, for several reasons, and that bridge was never really used, thanks to multi-adapter never being utilized and Nvidia's marketing director at the time leaving for Intel.
Intel created a new protocol for PCIe called CXL. The basis of it:
Co-processors are given a super-low-latency connection with stable bandwidth, plus dedicated bandwidth for communication between co-processors.
Simple, right? That's pretty much it, and as you can see, it was aimed at machine learning, HSA, yada yada yada.
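For anyone who wants slightly more than the yada yada: the public CXL spec splits the protocol into three sub-protocols, which is where that stable, low-latency, coherent link comes from. A rough summary (the phrasing is mine):

```python
# The three sub-protocols the public CXL spec defines, roughly summarized.
CXL_PROTOCOLS = {
    "CXL.io":    "PCIe-compatible path for discovery, configuration, and DMA",
    "CXL.cache": "lets a device coherently cache host (CPU) memory",
    "CXL.mem":   "lets the host access device-attached memory like its own",
}
for name, role in CXL_PROTOCOLS.items():
    print(f"{name:9} -> {role}")
```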
But, but... you made a serious, fatal mistake.
Intel needed CXL baked into PCIe. To do that they had to make it open, as per the PCIe consortium, and they did, and now Nvidia and AMD sit on the CXL Consortium with other corporations.
See, the major advantage of CXL is that it is baked into the PCIe hardware and the lowest-level software.
Meaning it needs no software-level support beyond the BIOS/UEFI and compliant motherboards, CPUs, and GPUs. That is it. Well, I mean, you will obviously need OS support, but seriously... that is it. Because it is baked into the PCIe 5.0 standard, it operates at the lowest possible level of the hardware interaction, WHICH IS WHY IT'S SUCH A BIG DEAL.
I'll get back to this, and to why you're essentially being a dumb-dumb who didn't read about it, let alone is properly informed.
Let's get back to GPUs.
So we identified what is currently wrong with multi-GPU configs:
Bandwidth and latency... huh, what was it again that CXL fixes?
Exactly that, and it creates high-bandwidth communication pathways for the co-processing units to send info to all relevant units.
It provides enough bandwidth, at a low enough latency, to fix PCIe's hardware bandwidth issues. Meaning no GPU scaling penalty, no bridges, no driver or software support needed.
So what does a GPU look like on CXL... specifically in Windows?
One giant GPU. CXL is additive across the bus rather than presenting the cards in parallel.
That's right: it basically merges two GPUs into one and is additive across the board, so two 8 GB VRAM cards will appear in Windows as one 16 GB GPU, and it does this at a lower level than the OS. Why, exactly?
SO THERE IS NO NEED FOR SOFTWARE SUPPORT, which is why Intel needed it baked into the PCIe standard in the first place.
The OS and the API only ever see one GPU, while the CPU is communicating with two GPUs and they are talking to each other directly, without bridges or half-assed software patches cobbled together to make it work. The OS, the API, and the game see one GPU; everything is done at the lowest level of integration.
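As a toy illustration of that "one giant GPU" view (everything here is hypothetical, not a real Windows or CXL API; it just shows what "additive across the bus" implies, two devices' memory stitched into one flat range):

```python
# Hypothetical sketch only -- not a real Windows or CXL API.
GB = 1024 ** 3
devices = [("GPU 0", 8 * GB), ("GPU 1", 8 * GB)]

# Stitch each device's memory into one contiguous logical range,
# the way a single merged GPU could present it upward.
address_map, cursor = [], 0
for name, size in devices:
    address_map.append((name, cursor, cursor + size - 1))
    cursor += size

for name, lo, hi in address_map:
    print(f"{name}: 0x{lo:010x}-0x{hi:010x}")
print(f"what the OS would see: one GPU with {cursor // GB} GB")
```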
Why is this important to people like you?
Monolithic dies, even with EUV lithography, are only getting larger and more expensive to produce. Obviously MCM/chiplet design is the next step forward, but there are ridiculous challenges that need to be tackled (Lisa Su said as much). A stable interconnect means that in the meantime they can continue to sell GPUs scaling toward higher-end goals while profiting off multi-GPU sales. Let's face it: in reality you need two 3090-class GPUs to run real-time ray tracing at a reasonable frame rate without any kind of FidelityFX/DLSS, and for graphics to keep getting better while frame rates stay up, they are going to have to bring back multi-GPU configs in some form. I mean, unless you want to keep paying $700+ for GPUs that can barely tackle 4K at a reasonable rate.
Also, theoretically, Intel was stating there is the possibility that GPUs with the same architecture and structure will scale even when different cards are present, meaning you could theoretically stack a 60-class card with an 80-class card. The only question not yet technically answered is what happens if you exceed the bandwidth limits, but I believe that is why it's releasing with PCIe 5.0, since that doubles the bandwidth, and they may very well cap it to two GPUs per setup. Some napkin math on that below.
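Just to frame the bandwidth question (the figure below counts only the finished-frame copies between two cards and ignores all sync and shared-resource traffic, so treat it as a floor, not a workload model):

```python
# Napkin math: how much of an x16 link does just shuttling finished 4K
# frames between two GPUs eat? Ignores all sync and resource traffic.
width, height, bytes_per_px = 3840, 2160, 4   # 4K at 32-bit color
fps = 120

frame_bytes = width * height * bytes_per_px           # ~33.2 MB per frame
traffic_gbs = frame_bytes * fps / 1e9                 # ~3.98 GB/s

for link, cap_gbs in [("PCIe 4.0 x16", 31.5), ("PCIe 5.0 x16", 63.0)]:
    print(f"{link}: frame copies alone use {traffic_gbs / cap_gbs:.0%} of the link")
```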
If you really want, I can go into the white paper at a more technical level of understanding. The problem is, and don't take this the wrong way, it really may be over your head, as it is for many others who try reading it, partially because there's a lot of math and terminology people aren't familiar with.