Large language model articles


Open-source project is making strides in bringing CUDA to non-Nvidia GPUs

CUDA compatibility with AMD and Intel GPUs isn't a pipe dream anymore
Why it matters: Nvidia introduced CUDA in 2006 as a proprietary API and software layer that eventually became the key to unlocking the immense parallel computing power of GPUs. CUDA plays a major role in fields such as artificial intelligence, scientific computing, and high-performance simulations. But running CUDA code has remained largely locked to Nvidia hardware. Now, an open-source project is working to break that barrier.
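To make the barrier concrete, here is a minimal, generic sketch of what "CUDA code" looks like in practice: a vector-add kernel plus the host-side boilerplate that launches it. This example is purely illustrative and is not taken from the project in question.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Element-wise add: each GPU thread handles one array index.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host buffers.
    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device buffers and copy the inputs to the GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);  // Expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Code like this is compiled with Nvidia's nvcc toolchain and calls the proprietary CUDA runtime, so it will not run as-is on AMD or Intel GPUs. Closing that gap, without forcing developers to rewrite their kernels, is exactly what the open-source effort described here is attempting.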

ChatGPT gets crushed at chess by a 1 MHz Atari 2600

Editor's take: Despite being hailed as the next step in the evolution of artificial intelligence, large language models are no smarter than a piece of rotten wood. Every now and then, some odd experiment or test reminds everyone that so-called "intelligent" AI doesn't actually exist outside a tech company's quarterly report.

Sam Altman says polite ChatGPT users are burning millions of OpenAI dollars

OpenAI CEO says it's money "well spent"
Manners are not ruining the environment: The costs of training and running artificial intelligence models are massive. Even counting electricity alone, AI data centers burn through over $100 million a year processing user prompts and generating model outputs. So, does saying "please" and "thank you" to ChatGPT really cost OpenAI millions? Short answer: probably not.