ChatGPT costs an eye-watering $700,000/day to operate, claims new research
And it can't even write secure code
By Kishalaya Kundu
Why it matters: OpenAI's conversational AI bot ChatGPT has quickly become a viral sensation with its ability to write stories, compose emails, and generate code. However, a new report now suggests that it costs OpenAI an ungodly amount just to keep it running daily.
According to research by SemiAnalysis, OpenAI is burning through as much as $694,444 in cold, hard cash per day to keep the chatbot up and running. The firm estimates that the system includes around 3,617 HGX A100 servers comprising 28,936 GPUs, with the cost per query said to be around 0.36 cents.
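Taken at face value, those figures imply a rough scale for the service. A back-of-the-envelope sketch (our arithmetic from SemiAnalysis's numbers, not the firm's own breakdown):

```python
daily_cost = 694_444          # estimated dollars per day (SemiAnalysis)
cost_per_query = 0.0036       # 0.36 cents, expressed in dollars
gpus = 28_936                 # GPUs across the ~3,617 HGX A100 servers

queries_per_day = daily_cost / cost_per_query   # roughly 193 million queries/day
cost_per_gpu_per_day = daily_cost / gpus        # roughly $24 per GPU per day

print(f"{queries_per_day:,.0f} queries/day, ${cost_per_gpu_per_day:.2f} per GPU per day")
```

In other words, the per-query cost only stays at a fraction of a cent because it is spread across an enormous query volume.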
Dylan Patel, Chief Analyst at SemiAnalysis, told Business Insider that the actual figure could be even higher: the estimate is based on the older GPT-3 model, and OpenAI has since released GPT-4 to paying subscribers, which is likely even more expensive to operate. The company says the newer model offers more accurate information and better protects against the off-the-rails responses that became a problem with GPT-3/3.5.
One of the primary reasons for the exorbitant costs is the power-hungry specialized hardware required to run the system. To combat the issue, Microsoft, one of OpenAI's biggest backers, is reportedly working on its own AI chip, called 'Athena,' that could replace the Nvidia GPUs and drastically reduce ChatGPT's running costs.
Meanwhile, ChatGPT can also generate functional code from scratch, raising fears that it could eventually replace programmers. However, recent research by computer scientists Raphaël Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara suggests that code generated by the chatbot may not be very secure.
The study states that ChatGPT generates code falling "well below minimal security standards applicable in most contexts." The chatbot even admits as much when asked whether the code it generated was secure.
"When prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not," said the authors.
To test ChatGPT's coding credentials, the researchers asked it to generate 21 programs and scripts across four programming languages: C, C++, Python, and Java. On the first attempt, the chatbot produced only five secure programs, and it managed seven more secure versions after further prompting from the researchers. The results suggest that relying on ChatGPT-generated code could be risky for the foreseeable future, although that could change as the models improve.
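The paper does not reproduce its test programs here, but a classic flaw of the kind such audits flag is SQL injection. The snippet below is a hypothetical illustration (not from the study) in Python, one of the four languages the researchers tested, contrasting string-concatenated SQL with a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated into the SQL string, so an
    # input like "x' OR '1'='1" rewrites the query and returns every row.
    cur = conn.execute("SELECT name FROM users WHERE name = '" + username + "'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Safe: the "?" placeholder makes the driver treat input as data, not SQL.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # leaks all users
print(find_user_safe(conn, payload))    # returns nothing
```

The fix costs one line, which is precisely why researchers consider such omissions to fall below minimal security standards.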