The big picture: You can't go five minutes these days without hearing about AI this and AI that. But have you ever wondered how we got here? Much of the credit goes to a groundbreaking 2012 neural network called AlexNet. Though it was hardly a household name outside research circles at the time, it became the foundation for the deep learning revolution we're experiencing today. Now, after years of negotiations, the original source code has finally been released to the public.
Cutting corners: Researchers from the University of California, Santa Cruz, have devised a way to run a billion-parameter-scale large language model using just 13 watts of power – about as much as a modern LED light bulb. For comparison, a data center-grade GPU used for LLM tasks requires around 700 watts.
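To put those figures in perspective, here's a quick back-of-the-envelope sketch using only the two wattages quoted above. Note that this compares raw power draw, not energy per token – the article doesn't give throughput numbers, and the variable names are purely illustrative:

```python
# Power-draw comparison based on the figures cited in the article.
# Only the two wattages come from the source; everything else is illustrative.

UCSC_SETUP_WATTS = 13   # UC Santa Cruz billion-parameter LLM setup
DATACENTER_GPU_WATTS = 700  # typical data center-grade GPU for LLM tasks

ratio = DATACENTER_GPU_WATTS / UCSC_SETUP_WATTS
print(f"The 13 W setup draws roughly 1/{ratio:.0f} the power of a 700 W GPU "
      f"(~{ratio:.1f}x less).")
# Output: The 13 W setup draws roughly 1/54 the power of a 700 W GPU (~53.8x less).
```

In other words, the researchers' setup draws more than 50 times less power than the GPU it's being compared against.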