Engineers boost CPU/GPU speed 21%, could be a glimpse of AMD's future

Matthew DeCarlo


North Carolina State University researchers have discovered a method of boosting processing performance by more than 21%. Although many processor designs from Intel, AMD and ARM house CPU and GPU cores in the same package, the components still function mostly independently.

Because they rarely collaborate on workloads, NCSU believes the chips waste a lot of potential. They simply aren't as efficient as they could be, and improving efficiency -- both in manufacturing and computing -- is arguably the primary goal of combining various chipsets into a single package.

The engineers have devised a scheme that allows the individual cores to cooperate on workloads while still assigning them roles that play to their strengths. The configuration allows the GPU to execute computational functions while the CPU is relegated to pre-fetching data from main memory.

"This is more efficient because it allows CPUs and GPUs to do what they are good at. GPUs are good at performing computations. CPUs are good at making decisions and flexible data retrieval," wrote co-author Dr. Huiyang Zhou, an associate professor of electrical and computer engineering.

CPUs and GPUs fetch data from memory at about the same rate, but the report says GPUs execute functions quicker. This setup wasn't as easy to accomplish prior to designs like Intel's Sandy Bridge architecture and AMD's Fusion products because CPUs and GPUs were completely separate parts.
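To make the division of labor concrete: the paper targets fused CPU-GPU hardware, but the same fetch/compute split can be mimicked on an ordinary machine as a producer/consumer pattern. The sketch below is purely illustrative -- not the researchers' implementation -- with one Python thread playing the CPU's pre-fetching role and another playing the GPU's compute role, so fetch latency overlaps with computation.

```python
# Illustrative sketch only: the NCSU scheme runs on fused CPU-GPU silicon,
# where the CPU core pre-fetches data into a shared cache for the GPU.
# Here two ordinary threads mimic that split: a "fetcher" stages chunks
# of data while a "computer" consumes them concurrently.
import threading
import queue

def fetcher(data, chunks: queue.Queue, chunk_size=4):
    """Plays the CPU's role: stage data ahead of the compute unit."""
    for i in range(0, len(data), chunk_size):
        chunks.put(data[i:i + chunk_size])  # "pre-fetch" the next chunk
    chunks.put(None)  # sentinel: no more data

def computer(chunks: queue.Queue, results: list):
    """Plays the GPU's role: crunch whatever has been staged."""
    while (chunk := chunks.get()) is not None:
        results.append(sum(x * x for x in chunk))  # stand-in computation

def run(data):
    chunks = queue.Queue(maxsize=2)  # small buffer, like a shared cache
    results = []
    t_fetch = threading.Thread(target=fetcher, args=(data, chunks))
    t_compute = threading.Thread(target=computer, args=(chunks, results))
    t_fetch.start(); t_compute.start()
    t_fetch.join(); t_compute.join()
    return sum(results)

print(run(list(range(8))))  # sum of squares 0..7 -> 140
```

The point of the pattern is that the consumer never stalls waiting for data as long as the producer stays ahead of it, which is exactly the benefit the paper attributes to dedicating the CPU to pre-fetching.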

Zhou's team recorded an average performance increase of 21.4% using the fused setup. The paper, titled "CPU-Assisted GPGPU on Fused CPU-GPU Architectures," will be presented on February 27 at the 18th International Symposium on High Performance Computer Architecture in New Orleans.

Interestingly, the research was partly funded by AMD. The chipmaker recently announced it would focus less on standard desktop CPUs and more on Fusion-based mobile solutions emphasizing heterogeneous computing (i.e., the unification of CPUs and GPUs). Could this be a taste of what's to come?


 
Sounds good. But am I understanding this correctly? This is exclusively for GPGPU tasks? It frees up resources on the GPU so it can crunch faster? It'd be nice if it could do this intelligently for every program: serial tasks on the CPU and parallel ones on the GPU.
 
http://www.extremetech.com/computin...md-cpu-performance-by-20-without-overclocking

"To achieve the 20% boost, the researchers reduce the CPU to a fetch/decode unit, and the GPU becomes the primary computation unit. This works out well because CPUs are generally very strong at fetching data from memory, and GPUs are essentially just monstrous floating point units. In practice, this means the CPU is focused on working out what data the GPU needs (pre-fetching), the GPU?s pipes stay full, and a 20% performance boost arises."

To boot, it was all done in a processor "simulator," not on actual hardware.
 
mevans336 said:
To boot, it was all done in a processor "simulator," not on actual hardware.

Yes, it would be interesting to know on what basis they can judge a speed increase when using a simulator.
 
Would it make sense to design a group or system of chips: one general CPU, with the others designed for specialized tasks? The CPU is the brains and farms out the hard work to the specialized chips. Or does it make more sense to do what we have been doing and just add more general CPU cores? And/or is the answer really in how the software deals with the hardware, and that by writing better multitasking software we can get better performance?
 
Maybe AMD needs to develop its own compilers for Windows or even Android so as to maximize utilization of its own special chips. Is AMD still relying on Intel's compilers?
 
AMD has had its own compiler (a customized version of GCC) for years. AMD has never relied on Intel's compiler.
 