CUDA compatibility with AMD and Intel GPUs isn't a pipe dream anymore
Why it matters: Nvidia introduced CUDA in 2006 as a proprietary API and software layer that eventually became the key to unlocking the immense parallel computing power of GPUs. CUDA plays a major role in fields such as artificial intelligence, scientific computing, and high-performance simulations. But running CUDA code has remained largely locked to Nvidia hardware. Now, an open-source project is working to break that barrier.
Editor's take: Despite being hailed as the next step in the evolution of artificial intelligence, large language models are no smarter than a piece of rotten wood. Every now and then, some odd experiment or test reminds everyone that so-called "intelligent" AI doesn't actually exist outside a tech company's quarterly report.
Why it matters: It was only a matter of time before governments jumped aboard the AI hype train. The US government has a lousy track record when it comes to rolling out tech projects – remember the Affordable Care Act website fiasco? Now, it is launching a new LLM, and early signs suggest the tool is rushed and unfinished.
In brief: A future in which generative AIs write emails back and forth to each other on our behalf has moved a little closer. Google is improving Gemini's smart replies, making them not only longer, but also more personalized by analyzing your previous emails and Drive files.
Manners are not ruining the environment: The costs of training and running artificial intelligence models are massive. Even counting electricity alone, AI data centers burn through over $100 million a year processing user prompts and model outputs. So, does saying "please" and "thank you" to ChatGPT really cost OpenAI millions? Short answer: probably not.