The US, UK and 16 other countries sign agreement for safe AI development

DragonSlayer101

What just happened? Eighteen countries, including the US and the UK, have joined hands to ensure AI development is "secure by design." The agreement lays out a set of principles that aim to address persistent concerns about how AI could become a disruptive force if it falls into the hands of malicious actors.

The 20-page document mentions several safeguards to ensure responsible AI development, including the need for tech companies to "develop and deploy it in a way that keeps customers and the wider public safe from misuse." Apart from the US and the UK, other countries that have signed the document include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.

The agreement deals extensively with ways to protect AI systems from being hijacked by hackers, and includes recommendations on how companies should conduct appropriate security testing before releasing their models. However, as reported by Reuters, the document does not address controversial issues such as how data is gathered for AI models. There have already been multiple legal skirmishes over this matter, so one would hope that regulators and lawmakers step in sooner rather than later to settle it once and for all.

Described as the first global agreement on AI safety, the Guidelines for Secure AI Development are aimed at making AI developers more responsible for their software. The document also urges all stakeholders, including data scientists, developers, managers, decision-makers and risk owners, to follow the recommendations to make "informed decisions about the design, development, deployment and operation of their AI systems."

The US Cybersecurity and Infrastructure Security Agency (CISA) hailed the agreement as a step in the right direction, and reminded everyone that AI development should prioritize public safety above all else. In a message, CISA director Jen Easterly said that AI should not only be about "cool features" and driving down costs by cutting corners, because that could have dangerous side effects in the long run.

Even before this agreement was inked, some countries had already begun formulating rules on how companies like Google and OpenAI should self-regulate AI development. While France, Germany and Italy recently reached an agreement on "mandatory self-regulation through codes of conduct," the Biden administration in the US issued an executive order to reduce the risks that the unregulated development of AI could pose to national security.

 
Who reads this, sees the actual statement
- "The US, UK and 16 other countries sign agreement for safe AI development"
and thinks self-learning in a world where something can learn a million truths and lies in 10 minutes is going to be safe?
The statement does not make sense unless you can completely limit what it can learn, which is practically impossible in a self-learning machine.
 
Reuters: "The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers."

So, it is just a publicity stunt for political points. There is too much money in the US to actually pass any type of law or regulation that would truly limit the development or use of AI.
 
Please educate yourselves. A huge number of AI applications are making a massive difference beyond stupid search engines or fauxtography. It's a double-edged sword for sure, but the scientific and engineering applications are immense.
 
I read it as the US and UK wanting to curb the extent of others' development while pursuing that same unlimited and unhinged development themselves.
 