In brief: It's no secret that the tech industry has been pushing the limits of Moore's Law for some time now: smartphones seem to have pretty much peaked in terms of battery life and performance. However, Princeton researchers may have discovered a breakthrough in chip technology that could significantly slash energy usage while boosting performance.

Specifically, scientists have developed a prototype chip that uses a technique called "in-memory computing" to reduce the load on a system's processor. Instead of relying on the processor to continually fetch data from a device's memory, in-memory computing allows those computations to occur within the memory itself, paving the way for "greater speed and efficiency." As a result, not only does the chip boast improved performance, but it also consumes much less energy.
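To see why keeping the computation inside the memory saves so much work, here's a toy sketch (not the Princeton design, just an illustration) that counts the data transfers needed for a simple dot product under each approach:

```python
# Toy illustration of why in-memory computing cuts data movement.
# Conventional model: the processor fetches every operand from memory.
# In-memory model: the memory array computes in place and only the
# final result crosses the memory/processor boundary.

def conventional_dot(weights, inputs):
    """Processor fetches each weight and each input, then multiplies."""
    fetches = 0
    acc = 0
    for w, x in zip(weights, inputs):
        fetches += 2          # one fetch for the weight, one for the input
        acc += w * x
    return acc, fetches

def in_memory_dot(weights, inputs):
    """Arithmetic happens where the data already lives; one result moves."""
    acc = sum(w * x for w, x in zip(weights, inputs))
    transfers = 1             # only the final sum is sent to the processor
    return acc, transfers

w = [1, 2, 3, 4]
x = [5, 6, 7, 8]
print(conventional_dot(w, x))  # (70, 8) -> 8 operand fetches
print(in_memory_dot(w, x))     # (70, 1) -> a single result transfer
```

Same answer either way; the difference is how many trips the data takes, and in real hardware those trips dominate the energy bill.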

So, exactly how much faster is this new chip technology? The answer is a bit complicated. While lab tests have allowed researchers to reach performance levels that are "tens to hundreds" of times faster than other chips out there, the design is primarily intended for machine learning workloads, "deep learning inference" in particular.

According to Knowridge, deep learning allows computers to "make decisions and perform complex tasks by learning from data sets." Inference is the stage where a model that has already been trained on those data sets is applied to new data. Amazon's facial recognition tech, appropriately dubbed "Rekognition," is one example of this sort of AI in action.
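As a concrete (and deliberately tiny, made-up) example of inference: the weights below stand in for a model that was trained earlier, and "inference" is just scoring a new input with them.

```python
import math

# Hypothetical minimal "inference" step: apply pre-trained weights to
# a new input. The weight values are invented for illustration and
# stand in for the output of a prior training run.

def predict(weights, bias, features):
    """Score a feature vector with an already-trained linear model."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability 0..1

trained_w = [0.8, -0.4]   # assumed to come from an earlier training phase
trained_b = 0.1

p = predict(trained_w, trained_b, [1.0, 2.0])
print("match" if p > 0.5 else "no match")
```

The expensive part of inference is exactly that multiply-and-sum over the weights, which is why a chip that performs it inside memory pays off for this workload.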

Of course, that isn't to say the hardware can't be used for other purposes (quite the contrary), but individual applications will need to be written to take advantage of its capabilities before any significant performance gains or energy savings can be realized.

As fascinating as this new hardware research is, don't expect to see it arrive in modern smartphones or other devices any time soon. Researchers will undoubtedly need to test their chip a lot more before it's ready for prime time, so for now, it may be best to look at it as little more than an interesting experiment.