A hot potato: The loss of jobs, the death of human creativity, plagiarism, wiping out the human race – is there anything else we need to worry about when it comes to advanced AI? Yes, according to the head of the SEC: a financial crisis that is "nearly unavoidable."

Gary Gensler, chair of the US Securities and Exchange Commission, told the Financial Times that the increasing use of AI systems will almost certainly lead to the financial markets crashing within the next decade.

Gensler warns that the almost inevitable crisis will come about due to reliance on AI models developed by tech companies. He also blamed a lack of diversity among the AI tools currently used by financial institutions to monitor the markets, offer advice, automate account opening, and more.

The solution, Gensler says, is to introduce regulation that oversees both the generative AI models and how they are used by Wall Street entities, which have been adopting the technology in droves since the start of the year. But the SEC head admits that this will be a "cross-regulatory challenge."

"It's frankly a hard challenge," Gensler told the FT. "It's a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it's just in the nature of what we do. And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator."

Gensler's scenario wouldn't mark the first time technology has crashed financial markets. Back in 2010, a British trader triggered a "flash crash" by illegally manipulating the market, flooding the Chicago Mercantile Exchange with bogus orders from his parents' basement in London. Almost $1 trillion was wiped off the value of US shares before the market quickly rebounded. Regulators said that high-frequency trading algorithms played a part in the crash.

So far, AI companies have agreed to self-regulate and manage the risks posed by their technologies, but governments are calling for tighter regulation. The EU's in-the-works AI Act could force developers of generative artificial intelligence tools to submit them for review before general release. The US government, meanwhile, is still reviewing the technology to decide which aspects require regulation.

The SEC proposed new rules in July that would require broker-dealers and investment advisers to take certain steps to address conflicts of interest associated with their use of predictive analytics to interact with investors. The aim is to prevent firms from placing their interests ahead of investors' interests.

While the likes of Morgan Stanley and JPMorgan are using AI models to help traders and financial advisers, Goldman Sachs, Deutsche Bank, and Bank of America all banned employees from using ChatGPT at work earlier this year.

Center image: Third Way Think Tank