AI companies sign new voluntary commitments pledging AI safety

Alfonso Maruccia

Facepalm: Major AI companies have shown how irresponsible and ruthless they can be in leveraging machine learning algorithms to generate financial gains for board members and shareholders. Now, these same companies are asking the entire tech world to trust them to act responsibly when truly dangerous AI models are eventually developed.

Some of the most important companies working with AI algorithms and services have signed a new voluntary agreement to promote AI safety, pledging to make their operations more transparent and trustworthy. The agreement, introduced ahead of the recent AI Seoul Summit, provides no enforceable measures to control unsafe AI services, but it evidently satisfied the UK and South Korean governments.

The new agreement involves tech and AI giants such as Microsoft, OpenAI, xAI (Elon Musk's Grok venture), Google, Amazon, Meta, and the Chinese company Zhipu AI. All parties will now outline and publish their plans for classifying AI-related risks, and they are apparently willing to refrain from developing models that could have severe effects on society.

The agreement follows previous commitments on AI safety approved by international organizations and 28 countries during the AI Safety Summit hosted by the United Kingdom in November 2023. These commitments, known as the Bletchley Declaration, called for international cooperation to manage AI-related risks and potential regulation of the most powerful AI systems (Frontier AI).

According to UK Prime Minister Rishi Sunak, the new commitments should assure the world that leading AI companies "will provide transparency and accountability" in their plans to create safe AI algorithms. Sunak stated that the agreement could serve as the new "global standard" for AI safety, demonstrating the path forward to reap the benefits of this powerful, "transformative" technology.

AI companies should now set the "thresholds" beyond which Frontier AI systems can pose a risk unless proper mitigations are deployed and describe how those mitigations will be implemented. The agreements emphasize collaboration and transparency. According to UK representatives, the Bletchley Declaration, which calls for international cooperation to manage AI-related risks, has been working well so far, and the new commitments will continue to "pay dividends."

The companies trusted to protect the world against AI risks are the same organizations that have repeatedly proven they shouldn't be trusted at all. Microsoft-backed OpenAI sought Scarlett Johansson's permission to use her voice for the latest ChatGPT bot, and then used her voice anyway when she declined the offer. Researchers have also shown that chatbots are incredibly powerful malware-spreading machines, even without "Frontier AI" models.

A meaningless gesture done by companies that will abuse AI and your data to high hell and back.
Hot takes up ahead.

Adsense was abusing your data to exploit you; now you're effectively a slave to big pimp 'Sammy Alteroir Motivesman' claiming that AI needs more power and money to prevent a "SuperAI takeover." This is magical thinking. If the cash cow stopped flowing and investors and the masses stopped buying into his manufactured delusions, maybe we could tone down the abuse already plaguing the arteries of the internet to its core. OpenAI (and Midjourney) is the chef, serving formless, average, IP-law-sidestepping "4k, from Artstation" slop all year round.

OpenAI ruined art for me because human-made art has form, and that form can be reproduced through the tools and the medium: the material, the aperture of the lens, exposure, appeal, and more. AI slop removes the form and blends everything into the median of all the data it's been trained on. Why do you think AI image reflections have no specularity, no real or unique roughness to their gloss? The internet has lost a lot of integrity in a short span, and big corpo couldn't care less.

Think of it like this: humans express and convey experiences through the beauty that is perceived and the emotion that is portrayed, depending on the medium. Now your image search results are plagued with AI content that trends based on "how right and good it looks for an AI image" rather than "what it is trying to communicate" through its details and bits and pieces. This doesn't describe all content; some conveniently has the mode matching the message. But it hits such a majority that when you look at an art piece to discover its form, you realize it's formless: no human could recreate the brush strokes, because there weren't any. The higher the resolution of the generated content, the more apparent the lack of form becomes.

I reckon this will change in due time, but the problem is that AI safety is intentionally obscure. It only appeals from a standpoint of moderating what can and cannot be generated, rather than creating IP law that stops algorithmic transformers from accessing creators' content and replicating, often regurgitating, images nearly detail for detail without consent. AI safety is about control, not consent. It is essentially meaningless for a normal individual or a creator with intellectual property.
 
Once someone does something daring and bold, others will follow. After all, why should that guy get all the fun and goodies while we don't?
 
Where is the TL;DR......