A hot potato: There are plenty of legitimate concerns about advancements in the field of artificial intelligence, from the number of jobs it could eliminate to the copyright implications of generative AI. In the UK, government officials are worried about its more lethal potential, such as criminals or terrorists using AI to cause mass destruction, and the technology escaping human control.

According to The Guardian, British officials are touring the world ahead of the UK's AI safety summit, trying to drum up support for a joint statement that warns of rogue actors using the technology to create the likes of bioweapons. Some even believe that AI could advance far enough to evade human control completely. That might sound like the stuff of sci-fi, but there have been plenty of similar warnings and calls for regulation to prevent such a possibility.

"There are two areas the summit will particularly focus on: misuse risks, for example, where a bad actor is aided by new AI capabilities in biological or cyber-attacks, and loss of control risks that could emerge from advanced systems that we would seek to be aligned with our values and intentions," said the government in a statement.

Earlier this year, UK Prime Minister Rishi Sunak met with the chief executives of Google DeepMind, OpenAI, and Anthropic to discuss ways of limiting AI's risks. No 10 Downing Street acknowledged the "existential" risk posed by artificial intelligence following the meeting.

Back in March, an open letter published on the Future of Life Institute's website called for a six-month pause on advanced AI development. More than 1,000 signatories expressed concern over what they called an out-of-control race to develop and deploy increasingly powerful systems, but the plea mostly fell on deaf ears.

MIT professor Max Tegmark, a co-founder of the institute and organizer of the letter, said last week that while many tech execs agreed with the concept of pausing development, they were locked in a "race to the bottom": no company can afford to be the only one that pauses for fear of rivals pulling ahead.

The potential threat AI poses to human life has been a hot-button topic this year. OpenAI boss Sam Altman was one of several experts who warned about the possibility of AI causing the extinction of the human race, comparing these systems to the risks posed by nuclear war and pandemics. Over two-thirds of Americans are worried about AI threatening civilization, and Warren Buffett compared its creation to that of the atomic bomb.

In July, OpenAI announced it was putting together a team to mitigate the risks of an AI smarter than humans being developed within the next ten years, one that could disempower humanity or even cause human extinction. The company is also dedicating 20% of the compute it has secured to this effort. Let's hope it's successful.