Researchers create processor that can cut power usage while drastically boosting performance

Polycount

Staff
In brief: It's no secret that the tech industry has been running up against the limits of Moore's Law for some time now; smartphone gains in battery life and performance have largely plateaued. However, Princeton researchers may have achieved a breakthrough in chip technology that could significantly cut energy usage while boosting performance.

Specifically, scientists have developed a prototype chip that uses a technique called "in-memory computing" to reduce the load on a system's processor. Instead of the processor continually fetching data from a device's memory, in-memory computing performs the computation within the memory itself, cutting out much of that costly data movement and paving the way for "greater speed and efficiency." As a result, not only does the chip boast improved performance, it also consumes far less energy.
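To make the idea concrete, here is a rough Python sketch of the concept, not the Princeton team's actual design: a weight matrix stays resident in a simulated memory tile and the multiply-accumulate work happens where the data lives, so only the small input and output vectors cross the memory bus. The class and function names are hypothetical.

```python
import numpy as np

class InMemoryTile:
    """Toy model of a compute-in-memory array (illustrative only)."""

    def __init__(self, weights):
        # The weight matrix is written into the "memory" once and stays there.
        self.weights = np.asarray(weights)

    def multiply_accumulate(self, inputs):
        # In real hardware the memory array itself would do the summation
        # (e.g. along its bitlines); here we simply simulate the result.
        return self.weights @ np.asarray(inputs)


def processor_style_mac(weights, inputs):
    # Conventional flow: every weight is fetched across the memory bus to the
    # processor before any arithmetic happens.
    fetched = np.array(weights)
    return fetched @ np.asarray(inputs)


weights = np.random.rand(256, 128)   # stays resident in the tile
x = np.random.rand(128)              # only this small vector has to move

tile = InMemoryTile(weights)
print(np.allclose(tile.multiply_accumulate(x), processor_style_mac(weights, x)))
```

Both paths produce the same numbers; the difference the researchers are chasing is in how much data has to move to get them.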

So, exactly how much faster is this new chip technology? The answer is a bit complicated. In lab tests, the researchers reached performance levels "tens to hundreds" of times faster than conventional chips, but their design is primarily intended for machine learning workloads, "deep learning inference" in particular.

According to Knowridge, deep learning inference occurs when algorithms allow computers to "make decisions and perform complex tasks by learning from data sets"; in practice, it's the step where an already-trained model is applied to new data, as opposed to the training itself. Amazon's facial recognition tech, appropriately dubbed "Rekognition," is one example of this sort of AI in action.
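For readers unfamiliar with the term, a minimal sketch of what inference looks like in code may help. The weights below are random stand-ins rather than a real trained model, and none of this reflects the Princeton hardware; it simply shows why inference is dominated by the matrix-vector multiplies that in-memory computing targets.

```python
import numpy as np

# Stand-in "trained" weights for a tiny two-layer classifier; in a real system
# these would come from a prior training run on a labelled data set.
rng = np.random.default_rng(0)
W1, b1 = rng.random((64, 128)), rng.random(64)
W2, b2 = rng.random((10, 64)), rng.random(10)

def infer(x):
    """Apply the fixed, already-trained weights to a new input.

    Each layer boils down to a matrix-vector multiply plus a nonlinearity,
    exactly the kind of operation an in-memory computing array could keep
    resident and accelerate.
    """
    hidden = np.maximum(W1 @ x + b1, 0.0)    # hidden layer with ReLU
    return int(np.argmax(W2 @ hidden + b2))  # index of the predicted class

print(infer(rng.random(128)))
```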

Of course, that isn't to say the hardware can't be used for other purposes. It can, but individual applications will need to be written to take advantage of its capabilities before any significant performance gains or energy savings are realized.

As fascinating as this new hardware research is, don't expect to see it arrive in modern smartphones or other devices any time soon. Researchers will undoubtedly need to test their chip a lot more before it's ready for prime time, so for now, it may be best to look at it as little more than an interesting experiment.


 
This only works in a very small application area. It can be useful for large databases, for example, but in most other cases this technology will be useless.
It's comparable to graphics cards, which can perform certain (very simple) calculations much faster than CPUs, yet aren't usable for 99% of software.
 
I think this will all come down to optimization. Sure, not everything will benefit, but I can imagine places where it could help, such as gaming, where main memory could use some degree of vectorization too: it holds most of the graphics data, and the GPU swaps in whatever assets it needs from main memory since that memory is more abundant. Now imagine an asset that needs to be darkened, brightened, rotated, etc. That work could be done preemptively in memory and the asset sent to the GPU already processed.
 