What just happened? Eighteen countries, including the US and the UK, have joined hands to ensure AI development is "secure by design." The agreement lays out a set of principles that aims to address persistent concerns about how AI could become a disruptive force if it falls into the hands of malicious actors.

The 20-page document mentions several safeguards to ensure responsible AI development, including the need for tech companies to "develop and deploy it in a way that keeps customers and the wider public safe from misuse." Apart from the US and the UK, other countries that have signed the document include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.

The agreement deals extensively with ways to keep AI from being hijacked by hackers, and recommends that companies conduct appropriate security testing before releasing their models. However, as reported by Reuters, the document does not address more controversial issues, such as how data is gathered for AI models. There have already been multiple legal skirmishes over this matter, so one would hope that regulators and lawmakers step in sooner rather than later to settle it once and for all.

Described as the first global agreement on AI safety, the Guidelines for Secure AI Development are aimed at making AI developers more accountable for the security of their software. The document also urges all stakeholders, including data scientists, developers, managers, decision-makers and risk owners, to follow the recommendations so they can make "informed decisions about the design, development, deployment and operation of their AI systems."

The US Cybersecurity and Infrastructure Security Agency (CISA) hailed the agreement as a step in the right direction and reminded everyone that AI development should prioritize public safety above all else. In a statement, CISA director Jen Easterly said that AI should not just be about "cool features" or driving down costs by cutting corners, because that could have dangerous side effects in the long run.

Even before this agreement was inked, some countries had already begun formulating rules on how companies like Google and OpenAI should self-regulate AI development. While France, Germany and Italy recently reached an agreement on "mandatory self-regulation through codes of conduct," the Biden administration in the US issued an executive order to reduce the risks that unregulated AI development could pose to national security.