In context: Wikipedia is the internet's most popular encyclopedia, a collaborative effort featuring over 58 million articles written in more than 300 languages. This trove of free digital knowledge is currently put together by human editors and writers with no "AI" in sight. Things could change pretty soon, though.

ChatGPT-like chatbots and other "intelligent" algorithms could one day write entire Wikipedia articles, though that future is still far off. Jimmy "Jimbo" Wales, founder of the collaborative encyclopedia and the Wikimedia non-profit organization, addressed the topic in a recently published interview, saying that machine learning-based chatbots are a promising technology that could vastly improve Wikipedia.

As things stand today, ChatGPT and other much-hyped AI models scrape vast amounts of text from across the internet and use it to generate statistically likely answers to people's prompts. A non-trivial share of that training data comes from Wikipedia, one of the most frequently cited sources online. In the future, the roles could reverse, with AI writing much of the information on the platform.
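For a concrete sense of how Wikipedia text can be harvested, here is a minimal, hypothetical sketch in Python that pulls plain-text article extracts from the public MediaWiki API, the same freely licensed prose that routinely ends up in large training corpora. The article titles and usage are illustrative only, not any particular vendor's pipeline.

```python
# Illustrative sketch only: fetching plain-text article extracts from the
# public MediaWiki API. The titles below are arbitrary examples.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_extract(title: str) -> str:
    """Return the plain-text extract of a Wikipedia article."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,   # strip wiki markup, return plain text
        "format": "json",
        "titles": title,
    }
    data = requests.get(API_URL, params=params, timeout=10).json()
    pages = data["query"]["pages"]
    # The API keys results by page ID; take the first (and only) page.
    return next(iter(pages.values())).get("extract", "")

if __name__ == "__main__":
    corpus = [fetch_extract(t) for t in ["Empire State Building", "B-25 Mitchell"]]
    print(len(corpus[0].split()), "words fetched from the first article")
```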

Wales thinks that future is still some way off, declining to guess "how far away we are." We are certainly closer than he would have thought just a couple of years ago, he said, even though his own tests have highlighted the many flaws still plaguing today's chatbots.

Wales is particularly worried about the hallucination problem, which he bluntly calls "lying." The AI tends to fabricate "facts," something that has no place on Wikipedia. ChatGPT can serve up false or made-up information, prematurely declare living people dead, or answer prompts with contradictory claims, all with absolute confidence. Wales provided an example of a conversation he had with the bot about the Empire State Building.

"I asked ChatGPT, did an airplane crash into the empire state building? And it said no, an airplane did not crash into the empire state building. But then it went on to say there is a famous building in New York, and one of the most famous things that happened is when a B25 bomber crashed into the Empire State Building. I said, is a B25 bomber a type of airplane? It said says [yes]---so I said you were wrong when you said a plane didn't crash into it. And it said, you're right. I apologize for my error."

While full AI authorship is off the table for now, Wales said, the Wikipedia community is already discussing other ways chatbot algorithms could help improve the encyclopedia in the coming months and years. Wales sees intriguing opportunities for an AI trained "on the right corpus of things," such as checking two Wikipedia entries for statements that contradict each other.
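As a rough illustration of what such a check could look like, here is a minimal sketch, not Wikipedia's actual tooling, that asks an off-the-shelf natural-language-inference model (roberta-large-mnli via the Hugging Face transformers library) whether sentence pairs drawn from two entries contradict each other. The model choice, the confidence threshold, and the sample sentences are all assumptions made for demonstration.

```python
# A minimal sketch: flag sentence pairs from two articles that an
# off-the-shelf NLI model scores as contradictory. Not Wikipedia's tooling.
from itertools import product
from transformers import pipeline

# roberta-large-mnli labels pairs as CONTRADICTION / NEUTRAL / ENTAILMENT.
nli = pipeline("text-classification", model="roberta-large-mnli")

def find_contradictions(sentences_a, sentences_b, threshold=0.9):
    """Return sentence pairs the NLI model confidently labels contradictory."""
    flagged = []
    for premise, hypothesis in product(sentences_a, sentences_b):
        out = nli({"text": premise, "text_pair": hypothesis})
        result = out[0] if isinstance(out, list) else out
        if result["label"] == "CONTRADICTION" and result["score"] >= threshold:
            flagged.append((premise, hypothesis, result["score"]))
    return flagged

# Hypothetical sentences standing in for lines from two Wikipedia entries.
article_a = ["The bridge opened to traffic in 1932."]
article_b = ["The bridge was first opened to traffic in 1951."]
print(find_contradictions(article_a, article_b))
```

In practice a tool like this would only surface candidate pairs for human editors to review, since the model itself can be wrong in exactly the ways Wales describes.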

Human editors can spot such contradictions, but an AI could automate the process, potentially surfacing hundreds of examples and greatly helping the Wikipedia community. Another well-known issue is AI bias, which stems from unbalanced training data, and it is a problem Wikipedia itself continually struggles with: critics have long accused the encyclopedia of a male-centric and white-centric perspective on world events. A biased AI would do nothing to address that.

However, a properly trained AI could spot gaps in Wikipedia's coverage, suggesting new ideas and content. Wales estimates that even if AI could triple the number of Wikipedia entries, the encyclopedia's operating costs would only rise by around $1,250 annually.