AMD is happy to help Intel and Nvidia enable Smart Access Memory

mongeese

In context: Smart Access Memory, or SAM, is a software/hardware trick that takes advantage of a PCI Express feature called the Base Address Register (BAR). So far, only AMD is using it, with their new Radeon RX 6000 GPUs, to eke out a little extra performance. It lets the CPU feed information directly into the entire video memory buffer, instead of just a small portion of it. The result is about a 5-10% performance improvement depending on the title, which isn't insignificant.
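For the curious, the size of a card's BARs is visible through standard OS interfaces, so you can check what your own system exposes. Below is a minimal sketch for Linux, assuming the usual sysfs layout and a placeholder PCI address (find your GPU's with lspci): it parses the device's resource file, whose lines give each BAR's start address, end address, and flags, then prints the populated BAR sizes so you can see whether the card exposes the traditional small aperture or something closer to its full VRAM.

```python
# Minimal sketch: report a GPU's PCI BAR sizes on Linux via sysfs.
# The device address is a placeholder -- find yours with `lspci`.
from pathlib import Path

GPU_BDF = "0000:0b:00.0"  # hypothetical PCI address of the graphics card

def bar_sizes(bdf: str):
    """Yield (index, size) for each populated BAR of the given device.

    Each line of the sysfs `resource` file holds "<start> <end> <flags>"
    in hex; the first six lines correspond to BAR0 through BAR5."""
    text = Path(f"/sys/bus/pci/devices/{bdf}/resource").read_text()
    for index, line in enumerate(text.splitlines()[:6]):
        start, end, _flags = (int(field, 16) for field in line.split())
        if end:
            yield index, end - start + 1

if __name__ == "__main__":
    for index, size in bar_sizes(GPU_BDF):
        print(f"BAR{index}: {size / 2**20:.0f} MiB")
```

On a SAM-enabled system, one of the prefetchable BARs should be roughly the size of the card's VRAM, rather than the small window older platforms expose.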

When AMD announced Smart Access Memory on its latest GPUs, it sounded like SAM required close cooperation between the CPU and GPU. Consequently, it was launched as a new feature that only works when AMD RX 6000 GPUs are paired with AMD Ryzen 5000 CPUs.

Nvidia, however, believes that a form of SAM could be implemented universally, as long as all manufacturers can agree on some standards, since it does rely on a PCI Express feature. PC World asked AMD's Scott Herkelman whether AMD thinks that's possible, and to what extent AMD would support standardization. This is an edited transcript of the interview:

Q: Will competitors need to push out BIOS updates for their whole ecosystem of products?

A: I think you'll have to ask them. But I believe that they'll need to work on their own drivers. Intel will have to work with their own motherboard manufacturers, and work on their own chipsets. I think there's some work to do for our competitors.

And, just to be clear, our Radeon group will work with Intel to get them ready. And I know that our Ryzen group will work with Nvidia. There are already conversations underway. If they're interested in enabling this feature on AMD platforms, we're not going to stop them.

As a matter of fact, I hope they do. At the end of the day, the gamer wins, and that's all that matters. We're just the company that could do it the fastest because we're the only company in the world (finally) with enthusiast GPUs and enthusiast CPUs.

The standardization of a SAM-like feature could have an impact on the way games are developed. Although the feature doesn't require developer support, some games utilize it much more effectively than others: at 1440p, Assassin's Creed Valhalla sees a 14% improvement, but Shadow of the Tomb Raider only sees a 6% improvement. If developers are willing, future titles could be designed to utilize SAM to the greatest extent possible.

For now, however, SAM is a nice little incentive to keep your system all-AMD. In our eighteen-game average at 1440p, the RX 6800 XT was just 5% slower than the RTX 3090, a difference that SAM might be able to eliminate.

Also, as we noted in our RX 6800 XT review, PCIe 4.0 doesn’t appear to be required for SAM. We ran a few limited tests in Assassin’s Creed Valhalla while forcing PCIe 3.0 on our X570 test system and didn’t see a decline in performance when compared to PCIe 4.0, so that’s interesting.
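One plausible reason, shown here as a back-of-the-envelope sketch with nominal per-direction link rates rather than anything we measured, is that even a PCIe 3.0 x16 link offers plenty of bandwidth relative to what games typically push across the bus each frame.

```python
# Rough arithmetic with nominal PCIe figures (per direction, x16 link).
# PCIe 3.0 signals at 8 GT/s and PCIe 4.0 at 16 GT/s, both with
# 128b/130b encoding, so doubling the link rate doubles the bandwidth.
def x16_bandwidth_gb_s(gigatransfers_per_s: float) -> float:
    return gigatransfers_per_s * 16 * (128 / 130) / 8  # GB/s over 16 lanes

print(f"PCIe 3.0 x16: {x16_bandwidth_gb_s(8):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: {x16_bandwidth_gb_s(16):.1f} GB/s")  # ~31.5 GB/s
```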


 
Allowing the CPU to essentially treat VRAM as conventional RAM is one more step towards a GPU-centric future. In a couple of decades, I believe a general-purpose CPU will be an option you bolt onto your "GPU", rather than the other way around.
 
Good. I know they've been the underdog for a while, but I do like when competition helps push good features for the customer.

Definitely sounding like a red and green PC setup next...
 
Allowing the CPU to essentially treat VRAM as conventional RAM is one more step towards a GPU-centric future. In a couple of decades, I believe a general-purpose CPU will be an option you bolt onto your "GPU", rather than the other way around.

I heard this a few months back from someone somewhere else. Interesting thought!
 
Allowing the CPU to essentially treat VRAM as conventional RAM is one more step towards a GPU-centric future. In a couple of decades, I believe a general-purpose CPU will be an option you bolt onto your "GPU", rather than the other way around.
In a couple of decades we'll be running completely different types of architectures and systems with different production materials and different everything :)
I expect silicon to be dropped in the next 5-6 years or so with maybe first gen 2nm being the last one to use it.
 
Allowing the CPU to essentially treat VRAM as conventional RAM is one more step towards a GPU-centric future.
That's not quite what SAM does. By letting the CPU address larger portions of the device's memory, it reduces the number of requests the CPU has to make of the PCIe controller, and reduces the number of data transfers to/from system memory per cycle. That's not going to make GPU local memory any more conventional, as the overall latency is still pretty poor compared to the system RAM's.
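To put rough numbers on that point (assumed round figures, not measurements): with the traditional 256 MB CPU-visible window, reaching every part of a 16 GB frame buffer means repeatedly re-pointing that window, whereas a resized BAR maps the whole thing once.

```python
# Back-of-the-envelope sketch with assumed round numbers, not measurements:
# how many aperture "re-points" does it take to reach all of VRAM?
VRAM_BYTES = 16 * 2**30         # a 16 GiB card, e.g. an RX 6800 XT
LEGACY_APERTURE = 256 * 2**20   # traditional CPU-visible BAR window
RESIZED_BAR = VRAM_BYTES        # Resizable BAR: map the whole buffer once

print(VRAM_BYTES // LEGACY_APERTURE)  # 64 window moves to cover the VRAM
print(VRAM_BYTES // RESIZED_BAR)      # 1 mapping covers everything
```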
 
That's not going to make GPU local memory any more conventional, as the overall latency is still pretty poor compared to the system RAM's.
I didn't say it transformed VRAM into conventional RAM, but that it allows the CPU to treat the VRAM as such -- direct, memory-mapped access to the entire VRAM address space. A disk page file isn't conventional RAM either -- but it looks exactly like such to your processor. And under upcoming PCIe 5.0, latency will be low enough that BAR-mapped VRAM will certainly be fast enough for most applications.
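Purely as an illustration of the "looks like ordinary memory to the processor" point: on Linux, a memory BAR exposed through sysfs can simply be mmap'd and read like any other buffer. The sketch below uses a placeholder device path, needs root, and only reads a few bytes, since poking VRAM that a live driver owns is a bad idea.

```python
# Hypothetical sketch: map a PCI memory BAR into this process's address
# space and read from it like ordinary memory. The path is a placeholder.
import mmap
import os

BAR_PATH = "/sys/bus/pci/devices/0000:0b:00.0/resource0"  # placeholder BDF/BAR

fd = os.open(BAR_PATH, os.O_RDONLY)
size = os.fstat(fd).st_size            # sysfs reports the BAR size here
with mmap.mmap(fd, size, prot=mmap.PROT_READ) as vram:
    print(vram[:16].hex())             # the CPU reads device memory directly
os.close(fd)
```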

Can you even imagine if LeatherJacket came up with this first? They'd probably call it Nvidia Access Memory, patent the process, and charge people to license it.
They can't patent it, as it's part of the PCIe standard. AMD is simply the first to make good use of it.
 
I expect silicon to be dropped in the next 5-6 years or so with maybe first gen 2nm being the last one to use it.
That's a little overly-optimistic. TSMC is already developing their (silicon-based) 2nm node (though it'll use gate-all-around transistors rather than FinFETs). Based on prior nodes, they'll likely start volume production on that around 2027. I would bet dollars to doughnuts that we'll have at least one node after that, before we even think about abandoning silicon-based litho. There just aren't any alternatives anywhere near commercialization yet.
 
I don’t understand why AMD can’t enable SAM on older Ryzen CPUs that would benefit the most from it.
I haven't seen anything that says it couldn't be enabled on older Ryzen processors, just that AMD has only enabled it for the 5000 series thus far. I suspect they will open it up to older processors if/when Intel and Nvidia implement a version of the same technology, assuming there isn't a hardware limitation.
 
One whole game used as an example. Also, Valhalla is artificially handicapped by the developers so that AMD cards score higher. This has already been proven. I would like to see less biased examples of games. There are way more examples of the 3080 outperforming the 6800XT in games with DX12, ray tracing and DLSS. It’s not even close and the 3080 destroys the 6800XT.
 
One whole game used as an example. Also, Valhalla is artificially handicapped by the developers so that AMD cards score higher. This has already been proven. I would like to see less biased examples of games. There are way more examples of the 3080 outperforming the 6800XT in games with DX12, ray tracing and DLSS. It’s not even close and the 3080 destroys the 6800XT.
If it's proven then you should provide link. Too much "proven" on the internet with no legit sources these days.
 
If it's proven then you should provide link. Too much "proven" on the internet with no legit sources these days.
Yeah, I haven't heard anything like this. The only thing that I've ever seen that deliberately gimps games was GameWorks from nVidia. The HairWorks tessellation levels were so bad that they even gimped nVidia GPUs, but since they gimped ATi GPUs even more, nVidia was happy with it. The fact that it hurt the enjoyment level of nVidia's own customers was of no concern to them as long as they managed to completely ruin the experience of people with Radeon cards. I'm not sure if everyone remembers this so here's a reminder:

You know, when people make false claims that they can't back up, that's a type of trolling, but it doesn't have its own name like flaming does. I made up a word to describe the action of making false claims with no evidence to back them up. Recent events have inspired me to call it "Trumping". I even made a dictionary-style definition for it:

Trumping [/trəmpiNG/] (v.) - A form of trolling that specifically involves making false claims with absolutely no evidence whatsoever.

Pretty good, eh? :D
 
I made up a word to describe the action of making false claims with no evidence to back them up.
You mean like accusing someone of colluding with the Russians? Or intentionally slowing mail deliveries? Or accusing them of calling Covid a hoax? Or claiming they're cheating on their taxes, or that they owe millions to Chinese banks? Something like that?
 
You mean like accusing someone of colluding with the Russians? Or intentionally slowing mail deliveries? Or accusing them of calling Covid a hoax? Or claiming they're cheating on their taxes, or that they owe millions to Chinese banks? Something like that?
Damn. I have to say, I didn't expect a political discussion under this topic, but as you may or may not understand, a tu quoque logical fallacy literally defeats no argument. There can be more than one bad group or person. You trying to excuse Trump by bringing up another party is laughable and renders your point irrelevant.
Ex. "Have you heard of how many families were separated at the border?" Your response:" oh yeah? Obama did it too! BAM!"
 
Damn. I have to say, I didn't expect a political discussion under this topic, but as you may or may not understand, a tu quoque logical fallacy literally defeats no argument. There can be more than one bad group or person. You trying to excuse Trump by bringing up another party is laughable and renders your point irrelevant.
Ex. "Have you heard of how many families were separated at the border?" Your response:" oh yeah? Obama did it too! BAM!"
The really funny thing is that Obama wasn't really all that great of a president. At the beginning of his first term, the Dems had the House and the Senate, which meant that he could have done anything, but he did absolutely nothing of note. That situation would not exist again for the rest of his presidency, which is why all that the American people got was Obamacare while the big corporations got their bailout money in 2008. Obama was a typical neo-liberal corporate Democrat who happened to be a phenomenal speaker.

What makes me laugh is that although Trump has an insane hatred of Obama, he did ol' Barry a massive favour. By being such a terrible president, he made Obama look amazing by comparison.
 
I like it. That's a start. Tech corporations should start working together on some things for the sake of the customers and the industry standard. Thus, if AMD is going to teach them how, then they'd better be good learners. Especially on the software front.
 
Obama wasn't really all that great of a president...he could have done anything but he did absolutely nothing of note...Obama was a typical neo-liberal corporate Democrat who happened to be a phenomenal speaker.
He was a poor speaker when the teleprompter was off, too. But let's not confine the do-nothing criticism to him. Both Bushes, as well as Clinton, were presidents more in favor of talking than doing; none of them, Obama included, even made an attempt to keep any of their campaign promises. Trump, to his credit, implemented or did his best to implement every single one.
 
That's a little overly-optimistic. TSMC is already developing their (silicon-based) 2nm node (though it'll use gate-all-around transistors rather than FinFETs). Based on prior nodes, they'll likely start volume production on that around 2027. I would bet dollars to doughnuts that we'll have at least one node after that, before we even think about abandoning silicon-based litho. There just aren't any alternatives anywhere near commercialization yet.

TSMC just said mass production on 2nm will happen in 2024, with risk production starting in 2023. Of course, 2nm really isn't 2nm, so using this naming scheme we will be seeing 0.01nm eventually as they run out of whole digits (since the name has nothing to do with actual feature size anymore, apparently).
 
TSMC just said mass production on 2nm will happen in 2024 with risk production starting in 2023.
If you're referring to their MbcFET announcement -- that's not quite correct. Some (non-TSMC) analysts predicted that their announcement meant they would move up their 2nm node to 2024, but TSMC has not formally said so.
 