In brief: A lot of people are concerned about the advancement of AI, especially when it comes to the creation of machines smarter than humans. Even ChatGPT creator OpenAI is aware of the potential dangers of superintelligence, including the extinction of the human race, and has put together a team to mitigate these risks.

OpenAI writes that steering and controlling AI systems much smarter than people will require scientific and technical breakthroughs. To address this issue within four years, it is starting a new team and dedicating 20% of the compute it has secured to the effort.

OpenAI believes superintelligence will be the most impactful technology ever invented and could help solve many of the world's problems. But its vast power might also be dangerous, leading to the disempowerment of humanity or even human extinction. Such an AI might seem a very long way off, but the company believes it could be here sometime this decade.

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," write OpenAI co-founder Ilya Sutskever and Jan Leike, the new team's co-leads.

As humans won't be able to supervise AI systems much smarter than us, and current alignment techniques will not scale to superintelligence, new breakthroughs are required.

The superalignment team's goal is to build a "human-level automated alignment researcher." Alignment research refers to ensuring AI is aligned with human values and follows human intent. The aim is to train AI systems using human feedback, then train AI that can help evaluate other AI systems, and finally build an AI that can perform alignment research faster and better than humans.
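The three-stage progression described above – human feedback, then AI-assisted evaluation, then an automated alignment researcher – can be caricatured in a few lines of Python. Everything below is a hypothetical toy for illustration (the function names and the trivially memorizing "evaluator" are invented here, not OpenAI's actual methods):

```python
# Toy sketch of the staged pipeline: humans label first, an AI evaluator
# learns from those labels, and later filtering needs no human in the loop.

def train_with_human_feedback(outputs, human_scores):
    """Stage 1: keep only outputs that humans rated acceptable (>= 0.5)."""
    return [o for o, s in zip(outputs, human_scores) if s >= 0.5]

def train_ai_evaluator(human_approved):
    """Stage 2: build a (trivial) AI evaluator from human-approved examples.
    This toy simply memorizes the approved set; a real system would generalize."""
    approved = set(human_approved)
    return lambda output: 1.0 if output in approved else 0.0

def automated_alignment_filter(evaluator, candidate_outputs):
    """Stage 3: the AI evaluator vets new outputs with no human supervision."""
    return [o for o in candidate_outputs if evaluator(o) >= 0.5]

# Hypothetical data: three model outputs with human ratings.
outputs = ["helpful answer", "harmful answer", "neutral answer"]
human_scores = [0.9, 0.1, 0.6]

approved = train_with_human_feedback(outputs, human_scores)
evaluator = train_ai_evaluator(approved)
vetted = automated_alignment_filter(evaluator, outputs)
print(vetted)  # the harmful answer is filtered out without a human
```

The point of the staging is leverage: human judgment is expensive and won't scale to superhuman systems, so it is spent once on training an evaluator that can then be applied automatically.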

OpenAI admits that solving the technical challenges of superintelligence in four years is an ambitious goal and there's no guarantee it will succeed, but it is optimistic. The company is now hiring researchers and engineers to join the team.

"Superintelligence alignment is fundamentally a machine learning problem, and we think great machine learning experts – even if they're not already working on alignment – will be critical to solving it," explain Sutskever and Leike. "We plan to share the fruits of this effort broadly and view contributing to alignment and safety of non-OpenAI models as an important part of our work."

We've seen plenty of concerning reports about where AI is heading, including one of the "Godfathers of AI," Geoffrey Hinton, leaving Google with a warning that as companies deploy ever more powerful AI systems, those systems are becoming increasingly dangerous.

OpenAI boss Sam Altman was one of several experts who recently warned about the possibility of AI causing the extinction of the human race, comparing these systems to the risks posed by nuclear war and pandemics. Over two-thirds of Americans are worried that AI could threaten civilization, and Warren Buffett compared its creation to the atomic bomb.

Not everyone shares these fears, though. Meta's chief scientist and another of the three "Godfathers of AI," Prof Yann LeCun, said that warnings AI is a threat to humanity are "ridiculous."