North Carolina State University researchers have developed a technique that boosts processing performance by more than 21%. Although many processor designs from Intel, AMD and ARM house CPU and GPU cores in the same package, the components still function mostly independently.

Because they rarely collaborate on workloads, NCSU believes the chips waste a lot of potential. They simply aren't as efficient as they could be, and improving efficiency, both in manufacturing and in computing, is arguably the primary goal of combining various chips into a single package.

The engineers have devised a scheme that lets the individual cores cooperate on a workload while still assigning them roles that play to their strengths: the GPU executes the computational functions while the CPU is relegated to prefetching data from main memory.

"This is more efficient because it allows CPUs and GPUs to do what they are good at. GPUs are good at performing computations. CPUs are good at making decisions and flexible data retrieval," wrote co-author Dr. Huiyang Zhou, an associate professor of electrical and computer engineering.

CPUs and GPUs fetch data from memory at roughly the same rate, but the paper says GPUs execute those functions more quickly. Such cooperation wasn't easy to achieve before designs like Intel's Sandy Bridge architecture and AMD's Fusion products, when CPUs and GPUs were entirely separate chips.

Zhou's team recorded an average performance increase of 21.4% using the fused setup. The paper, titled "CPU-Assisted GPGPU on Fused CPU-GPU Architectures," will be presented on February 27 at the 18th International Symposium on High Performance Computer Architecture in New Orleans.

Interestingly, the research was partly funded by AMD. The chipmaker recently announced it would focus less on standard desktop CPUs and more on Fusion-based mobile solutions that emphasize heterogeneous computing (i.e., uniting CPUs and GPUs). Could this be a taste of what's to come?