AMD is happy to help Intel and Nvidia enable Smart Access Memory

I like it. That's a start. Tech corporations should work together on some things for the sake of customers and industry standards. So if AMD is going to teach them how, they'd better be good learners. Especially on the software front.

The feature has always been there in the PCIe spec to raise the BAR memory window above 256MB, but there's no point in supporting that in a 32-bit world.

We've been waiting 15 years for the 32-bit era to die off before we changed this, so all we need to do is decide how to standardize this new window increase.

The feature has always existed, but now we need to agree on the details. AMD would be absolutely stupid to do anything but play nice on this: each 64-bit BAR occupies two of the six 32-bit slots, cutting the number of usable Base Address Registers in half (from 6 to 3), so "how to best implement this" has to be taken into consideration.
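For the curious, you can check from software whether a card actually advertises this. Below is a minimal sketch, assuming Linux, root access, and a made-up device address (substitute your own GPU's), that scans the PCIe extended-capability chain in sysfs for the Resizable BAR capability:

```python
# Minimal sketch: walk the PCIe extended-capability chain of one device
# and report whether Resizable BAR (capability ID 0x0015) is advertised.
# Assumptions: Linux, run as root (unprivileged reads of the config file
# are truncated), and a placeholder device address.
import struct

CFG = "/sys/bus/pci/devices/0000:03:00.0/config"  # hypothetical GPU address
REBAR_CAP_ID = 0x0015  # Resizable BAR Extended Capability ID

with open(CFG, "rb") as f:
    cfg = f.read()

# Extended capabilities start at offset 0x100. Each 32-bit header packs
# the capability ID in bits [15:0] and the next offset in bits [31:20].
offset = 0x100
while offset and offset + 4 <= len(cfg):
    (header,) = struct.unpack_from("<I", cfg, offset)
    if header & 0xFFFF == REBAR_CAP_ID:
        print(f"Resizable BAR capability found at offset 0x{offset:03x}")
        break
    offset = header >> 20
else:
    print("No Resizable BAR capability visible on this device")
```

Recent versions of sudo lspci -vv decode the same capability, which is an easier way to check a given card.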
 
Yeah, I haven't heard anything like this. The only thing I've ever seen that deliberately gimped games was GameWorks from nVidia. The HairWorks tessellation levels were so extreme that they even gimped nVidia GPUs, but since they gimped ATi GPUs even more, nVidia was happy with it. The fact that it hurt the enjoyment of nVidia's own customers was of no concern to them, as long as they managed to completely ruin the experience of people with Radeon cards. I'm not sure if everyone remembers this, so here's a reminder:

You know, when people make false claims that they can't back up, that's a type of trolling, but it doesn't have its own name the way flaming does. I made up a word to describe the act of making false claims with no evidence to back them up. Recent events have inspired me to call it "Trumping". I even made a dictionary-style definition for it:

Trumping [/trəmpiNG/] (v.) - A form of trolling that specifically involves making false claims with absolutely no evidence whatsoever.

Pretty good, eh? :D
The dictionary definition is spot on. I think it merits an even more encyclopedic analysis. A doctoral dissertation or two will be written about it, I'm sure.
 
One whole game used as an example. Also, Valhalla is artificially handicapped by the developers so that AMD cards score higher; this has already been proven. I would like to see less biased examples of games. There are way more examples of the 3080 outperforming the 6800XT in games with DX12, ray tracing, and DLSS. It's not even close; the 3080 destroys the 6800XT.

Lol ... the article is about Smart Access Memory and how it can improve performance; the samples given were merely to show how it panned out in real-world testing. But thanks for the angry Nvidia fa ... er ... "enthusiast" rant ... that made me chuckle.
 
The feature has always been there in the PCIe spec to raise the BAR memory window above 256MB, but there's no point in supporting that in a 32-bit world.

We've been waiting 15 years for the 32-bit era to die off before we changed this, so all we need to do is decide how to standardize this new window increase.

The feature has always existed, but now we need to agree on the details. AMD would be absolutely stupid to do anything but play nice on this: each 64-bit BAR occupies two of the six 32-bit slots, cutting the number of usable Base Address Registers in half (from 6 to 3), so "how to best implement this" has to be taken into consideration.
I get it in Intel's case. But in nVidia's case, it will take both companies working together.
 
The dictionary definition is spot on. I think it merits an even more encyclopedic analysis. A doctoral dissertation or two will be written about it, I'm sure.
Thank you! A doctoral dissertation.... Oooo, I like the sound of that! :D
I better get my Nobel acceptance speech ready. :laughing:

The problem with being a stable genius is that I'm stoopid.
- Who should say this line? heheheheh
 
Maybe because validation is needed to ensure that the feature works fine?
Come on mate, are you new here? It's not available on the 3xxx series because AMD want people to buy 5xxx CPUs, so they're artificially locking it out. If Intel did this, people would call it an "anti-consumer" move. "Validation", lol. If AMD can make it open and available to Intel, they can "validate" it on their own older chips.
 
Come on mate, are you new here? It's not available on the 3xxx series because AMD want people to buy 5xxx CPUs, so they're artificially locking it out. If Intel did this, people would call it an "anti-consumer" move. "Validation", lol. If AMD can make it open and available to Intel, they can "validate" it on their own older chips.

Maybe there's... a technical reason:


Besides that, validation takes time and resources.
 
In a couple of decades, we'll be running completely different types of architectures and systems, with different production materials and different everything :)
I expect silicon to be dropped in the next 5-6 years or so, with maybe first-gen 2nm being the last node to use it.
In a couple of decades, digital electronics will be passé and we'll be using quantum computers instead. Remember that in 30 years, we went from the 8088 to the i7, and technological advancement is accelerating. Since 2000, we have gone from 5¼" 1.06GB hard drives to tiny PCB sticks that hold more than 1,000x that amount, 3½" hard drives that hold 20,000x that amount, and video cards that hold 24x that amount in their VRAM.
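(Checking the arithmetic on those multiples, with the 1.06GB figure above as the baseline; the modern comparison points are just my own rough assumptions:)

```python
# Work out what each claimed multiple of the 1.06GB year-2000 baseline
# comes to. The factors (1,000x / 20,000x / 24x) are from the post above.
baseline_gb = 1.06

for device, factor in [("flash stick", 1_000),
                       ('3.5" hard drive', 20_000),
                       ("video card VRAM", 24)]:
    print(f"{device}: {factor:,}x of {baseline_gb}GB = {baseline_gb * factor:,.1f}GB")
```

That lands at roughly 1,060GB, 21,200GB, and 25.4GB, which lines up with today's ~1TB sticks, ~20TB drives, and 24GB cards.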

In 20 years, quantum computing will be the norm and people will look at digital electronics the way we look at analogue electronics now. The mighty EPYC 7742 will be seen the way the old Motorola 6800 is seen now: a breakthrough in its time but, by today's standards, less than useless.
 
In a couple of decades, digital electronics will be passé and we'll be using quantum computers instead.
I can promise you this won't happen. Not in 20 years, not in 30. There have been far too many articles on the subject by journalists who fail to understand the subject. We don't even yet have a general-purpose computing algorithm for quantum computers, which limits them at present to only a few special-purpose problems. Since that requires a theoretical breakthrough, it's a far more difficult problem than the 'simple' engineering one of building a collection of qubits compact, reliable, and cheap enough for consumer use.
 
I can promise you this won't happen. Not in 20 years, not in 30. There have been far too many articles on the subject by journalists who fail to understand the subject. We don't even yet have a general-purpose computing algorithm for quantum computers, which limits them at present to only a few special-purpose problems. Since that requires a theoretical breakthrough, it's a far more difficult problem than the 'simple' engineering one of building a collection of qubits compact, reliable, and cheap enough for consumer use.
50 years, likely. But in the meantime, somewhere in between, there will be breakthroughs we perhaps can't imagine yet, even within the more traditional methods of computing.
 
He's one of those people who never says anything good about AMD and never says anything bad about Intel or nVidia. It's weird.
He's a bit odd... he isn't overly one-sided, but I remember him posting that Big Navi reference cards were loud in a way that made it appear he knew it first-hand, and they turned out to be anything but. So there's that.
Still, if there is indeed a technical reason why only Zen 3 and up support Smart Access Memory, that's that. But if we get support on 400-series boards, that would be nice.
 