Modder converts $95 AMD APU into a 16GB Linux AI workhorse

Daniel Sims

Forward-looking: The AI boom has companies scrambling for Nvidia's H100 and A100 GPUs, which are in short supply and cost tens of thousands of dollars. Meanwhile, a modder has discovered and shared a method of performing AI tasks on hardware available for less than one percent of that price.

A modder recently published instructions for coaxing AMD APUs that cost around $100 into running AI tasks usually associated with far more expensive graphics cards. If it catches on, the method could significantly expand the number of people who can at least experiment with AI.

The most prominent players in AI today run tools like large language models on H100 and A100 graphics cards, which Nvidia sells for $25,000 to $30,000 apiece (reportedly a 1,000% profit margin for the GPU maker), and they still can't get enough of them. Meanwhile, smaller-scale AI work on consumer hardware typically involves high-end cards costing at least several hundred dollars.

However, Reddit user chain-77 discovered that a $95 Ryzen 5 4600G APU can do respectable AI work by telling Linux to see it as a 16GB GPU. Although the processor doesn't compare to dedicated cards in traditional graphics rendering, AI relies heavily on memory, where an APU's ability to allocate shared memory freely becomes an advantage.
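The write-up centers on getting ROCm to accept the 4600G's Vega-class iGPU, which isn't on AMD's official support list. The exact steps aren't reproduced here, but a commonly shared workaround for these APUs is to override the GPU architecture ROCm reports; a minimal sketch assuming that approach (the specific override value is an assumption, not confirmed from the modder's guide):

```python
import os

# Renoir APUs report a gfx90c iGPU, which stock ROCm refuses to use.
# A widely shared workaround is to present it as gfx900 (Vega 10).
# This must be set before the HIP runtime initializes.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "9.0.0")

import torch  # ROCm builds of PyTorch expose the iGPU through the torch.cuda API

if torch.cuda.is_available():
    print("ROCm sees:", torch.cuda.get_device_name(0))
else:
    print("No ROCm-visible GPU - check the driver and ROCm install")
```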

Devoting half of a system's 32GB of RAM to the integrated GPU gives it more memory than many beefier dedicated chips. Responses on Reddit suggested that assigning more RAM to video memory might be possible with some motherboards, but chain-77 is currently unable to test the theory.
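Once the UMA frame buffer is carved out in the BIOS, it's easy to confirm how much of that shared RAM is actually exposed to software; a quick check, assuming the ROCm build of PyTorch is installed:

```python
import torch

# Report how much of the shared system RAM is exposed to PyTorch as "VRAM".
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 2**30:.1f} GiB visible")
```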

The resulting DIY AI device supports AMD's ROCm platform, enabling it to run tools like PyTorch and TensorFlow. When tested on Stable Diffusion, the 4600G generates a 50-step 512 x 512-pixel image in under two minutes, comparing favorably against some high-end dedicated cards. FastChat, MiniGPT-4, and Whisper also work, but the APU struggled with LLaMA. The more recent $130 Ryzen 5 5600G, which TechSpot considered an excellent choice for compact PCs upon its 2021 launch, can perform the same trick as its predecessor.
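The article doesn't reproduce the modder's script, but the quoted test (50 steps at 512 x 512) maps directly onto a stock Hugging Face diffusers pipeline; a sketch along those lines, with the Stable Diffusion 1.5 checkpoint assumed rather than confirmed:

```python
import torch
from diffusers import StableDiffusionPipeline

# On ROCm builds of PyTorch the APU is addressed as a "cuda" device.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,         # halves the footprint in the 16GB shared pool
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse",
    height=512,
    width=512,
    num_inference_steps=50,  # matches the 50-step test cited above
).images[0]
image.save("apu_test.png")
```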

On top of being cheaper, the APUs also draw relatively little power while running AI – about 0.35 kWh per day, or roughly 15 watts on average. The modder released detailed descriptions of his work on Medium and YouTube (above) for those looking to save money while performing AI workloads.


 
Personally, I ALWAYS go for CPUs with integrated graphics (or graphics integrated in the chipset or motherboard, back in the day), always.
Just like my next planned upgrade is one of the AMD APUs, the best one I can get.
This way I make sure I have graphics available even if the GPU dies, or I can use the video decoder or do some image processing on the integrated GPU at the lowest possible power consumption.
 
It'd be nice if AMD invested in an APU-only platform. Like what they did with socket FM2, but with a 256-bit memory bus and larger 20+ CU iGPUs. It'd be great for ITX builds.
 
Even better when they support Radeon Dual Graphics.
 
Oh dear - the next-gen AMD APUs are meant to be really powerful - the budget GPU option for 1080p gaming.

I think this guy's work is great - however, I hope it doesn't push the price up for those looking for a small, powerful, cheap PC.

Why would companies want it? Maybe for processing at the data source/workstations/in the field before offloading to the central big boy - it would also mean less bandwidth to send, especially over remote mobile/wifi/satellite links.

Also military - cheap target and area analysis by drones etc., search and rescue.
 
The last dGPU I used was in 2005. iGPUs and, more recently, APUs are the future - Apple is already doing it. Very happy with my current 5700G.
 
However, let's all believe Jensen when he says that GPUs are the only way to run AI workloads...

/SARCASM
 

Just one thing to keep in mind: AMD APUs are always a generation behind in architecture.
So a 5600 is much faster than a 5600G because it has more cache and an updated architecture.
So it's not that good of an idea to buy an APU if you're buying a dGPU anyway.
The AM5 platform has changed this, and all CPUs now have a basic iGPU that doesn't hamper CPU performance, like Intel's.
 
I understood that most AI libraries are written in CUDA (or CUDA-X), which only runs on Nvidia. AMD has the nearly equivalent OpenCL, but wouldn't that be restrictive on what could be run on these budget AI workstations?

It might be an interesting article to cover how CUDA, CUDA-X, and OpenCL differ and what they're suited to.
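Though from what I've read, the CUDA counterpart AMD is pushing here is ROCm/HIP rather than OpenCL, and the ROCm builds of PyTorch reuse the torch.cuda API, so most CUDA-targeting Python code runs unmodified. A quick way to see which backend a given install actually has:

```python
import torch

# On a ROCm wheel torch.version.hip is set and torch.version.cuda is None;
# on an Nvidia wheel it's the other way around. torch.cuda.* works on both.
print("HIP runtime:", torch.version.hip)
print("CUDA runtime:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
```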
 
When tested on Stable Diffusion, the 4600G generates a 50-step 512 x 512-pixel image in under two minutes, comparing favorably against some high-end dedicated cards.
That's pretty good for such a cheap and tiny CPU+GPU combination, especially one that's not using CUDA, but it's quite a bit off from a decent graphics card. For example, running a similar Stable Diffusion test to the one in the video (or as close as I can get, given the lack of details), I get an image within 15 seconds on an RTX 4070 Ti. Still, $95 vs. $800...
 
Please CMIIW: would this enable running rather large AI models that usually need a top-end or workstation GPU (with 16GB of VRAM or more), albeit at lower compute power?

If so, any AMD APU could be used, including laptop versions, provided one has 32GB of RAM or more.
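As a rough sanity check on what fits in a 16GB pool (weights only, fp16, ignoring activations and KV cache, so real headroom is smaller):

```python
# Back-of-the-envelope: fp16 weights need 2 bytes per parameter.
vram_gib = 16
for params_b in (3, 7, 13):
    need_gib = params_b * 1e9 * 2 / 2**30
    verdict = "fits" if need_gib < vram_gib else "does not fit"
    print(f"{params_b}B params ~ {need_gib:.1f} GiB -> {verdict} in {vram_gib} GiB")
```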
 