In coordination with researchers at EPFL, IBM has developed a new method for training machine learning algorithms on large data sets. The new scheme, called Duality-gap based Heterogeneous Learning (DuHL), can push through 30GB of training data every 60 seconds, a 10x improvement over previous methods.

Ordinarily, training certain machine learning models can require terabytes of memory. The server hardware needed for this is expensive, and even once a system is up and running, compute power remains a bottleneck for researchers. When a single test takes days or even weeks, it becomes difficult to keep a study moving, especially when hardware is rented by the hour.

GPUs have been used for parallel computing for years now, but current graphics cards come nowhere near the terabyte-scale memory that IBM's research requires. Workloads can be split across multiple nodes, but not every task is suitable for distribution.

IBM is relying on the concept of duality gap certificates to let its machine learning tools shift the emphasis placed on individual pieces of data as training progresses. Simply put, the duality gap measures how far the current model is from optimal, and its per-example contributions reveal which data points can still teach the model something. DuHL keeps only the most informative points in the accelerator's limited memory and re-evaluates their importance as training advances, steering the system in the right direction faster than before.
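To make the idea concrete, here is a minimal NumPy sketch of duality-gap based example selection for a hinge-loss SVM. This illustrates the general technique rather than IBM's actual implementation; the function names, the regularization parameter lam, and the top-k selection policy are assumptions made for the example.

```python
import numpy as np

def per_example_duality_gaps(X, y, alpha, lam):
    """Per-example duality-gap contributions for a hinge-loss SVM.

    For the primal P(w) = sum_i max(0, 1 - y_i * w.x_i) + (lam/2)*||w||^2
    and its dual over alpha in [0, 1]^n, with w = (1/lam) * sum_i alpha_i y_i x_i,
    the total duality gap decomposes into nonnegative per-example terms.
    Examples with large terms are the ones still worth training on.
    """
    w = (alpha * y) @ X / lam           # primal iterate induced by the dual point
    margins = y * (X @ w)               # y_i * (w . x_i) for every example
    # gap_i = hinge(margin_i) + alpha_i * (margin_i - 1), always >= 0
    return np.maximum(0.0, 1.0 - margins) + alpha * (margins - 1.0)

def select_for_fast_memory(X, y, alpha, lam, k):
    """Pick the k examples with the largest gap contributions -- the subset a
    DuHL-style scheme would stage into the accelerator's limited memory."""
    gaps = per_example_duality_gaps(X, y, alpha, lam)
    return np.argsort(gaps)[-k:]
```

Because the per-example terms sum to the full duality gap, the same quantities double as a stopping certificate: once the total gap falls below a tolerance, the model is provably close to optimal.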

During preliminary testing, IBM used an Nvidia Quadro M4000 with 8GB of GDDR5 memory. Even with this modestly priced professional graphics card, IBM demonstrated that DuHL could train support vector machines more than 10 times faster than a standard sequential approach.
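For a sense of how such a setup differs from the sequential baseline, a DuHL-style outer loop might look like the toy sketch below, which builds on the selection helper above. The inner solver (closed-form dual coordinate ascent), the iteration counts, and the subset size k are illustrative assumptions, not the benchmarked configuration.

```python
def train_with_gap_selection(X, y, lam, k, outer_iters=20, inner_iters=5):
    """Toy DuHL-style loop: repeatedly stage the k highest-gap examples into
    the (here, simulated) fast memory and optimize over just that subset."""
    n = len(y)
    alpha = np.zeros(n)                  # dual variables, one per example
    w = np.zeros(X.shape[1])             # primal model, kept in sync with alpha
    for _ in range(outer_iters):
        idx = select_for_fast_memory(X, y, alpha, lam, k)  # re-rank on the host
        for _ in range(inner_iters):
            for i in idx:                # dual coordinate ascent on the subset
                grad = 1.0 - y[i] * (X[i] @ w)
                new_ai = np.clip(alpha[i] + grad * lam / (X[i] @ X[i]), 0.0, 1.0)
                w += (new_ai - alpha[i]) * y[i] * X[i] / lam
                alpha[i] = new_ai
    return w
```

A sequential baseline would sweep through all n examples on every pass; the gap-based loop spends the same inner-iteration budget only on the k points that currently matter, which is the intuition behind the reported speedup.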

Being able to operate effectively with limited memory is a great stride forward for machine learning. Field-programmable gate arrays (FPGAs), which are often used in embedded systems for workloads built around repetitive computations, pair fast compute with similarly limited local memory, so they too stand to benefit greatly from IBM's DuHL system.