Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT), and some of his colleagues recently compared 81 pruning algorithms – techniques that shrink neural networks by removing unneeded weights to make them more efficient. “Fifty papers in,” he said, “it became clear that it wasn’t obvious what the state of the art even was.”
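To make the idea concrete, here is a minimal sketch of magnitude pruning, one common family among the algorithms Blalock's team compared: the smallest-magnitude weights in a layer are simply zeroed out. The function name and `sparsity` parameter are illustrative, not drawn from any specific paper.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest `sparsity` fraction
    of entries (by absolute value) set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = prune_by_magnitude(w, 0.5)
print(np.count_nonzero(pruned))  # 8 of the 16 weights survive
```

Real pruning papers differ mainly in how they score weights, when they prune during training, and whether they fine-tune afterward, which is precisely what makes head-to-head comparisons across 81 methods so difficult.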
Science Magazine cites several reports to back up the claim. In 2019, for example, a meta-analysis of information retrieval algorithms used in search engines found that the high mark was actually set in 2009. A paper posted to arXiv in March, which examined loss functions, concluded that image retrieval accuracy had not improved since 2006. And in a separate study from last year analyzing neural network recommendation systems used by media streaming services, researchers found that six out of seven failed to outperform simple, non-neural algorithms developed years earlier.
Part of the problem, notes Zico Kolter, a computer scientist at Carnegie Mellon University, is that researchers are more motivated to create a new algorithm and tweak it than to simply tune up an existing one. It is also harder to get a paper out of the latter approach, Kolter added.
It’s not a total wash, however. Even if newer methods aren’t always fundamentally better than older techniques, in some instances their tweaks can be back-ported to the legacy approaches to improve them. Blalock said it’s almost like a venture capital portfolio, "where some of the businesses are not really working, but some are working spectacularly well."
Image credit: Andrii Vodolazhskyi, iStock