Mozilla mockingly asks Microsoft to clarify whether it exploits users' data for AI training

Alfonso Maruccia

A hot potato: Training the next generation of generative AI models requires vast amounts of high-quality, meaningful data, which computers cannot produce on their own. Now, Mozilla wants to know whether Microsoft plans to use information created by its users to train its algorithms.

Left on its own, generative AI technology risks becoming less and less valuable due to the feedback loop effect, in which models trained on AI-generated output degrade over time. Synthetic data can offer a potential solution to this issue, but achieving the best training results will likely require a substantial amount of user-generated content (UGC) in the near future. Microsoft boasts a large user base, and it may well choose to leverage that UGC to gain a competitive advantage over other Big Tech companies.

Recently, the Redmond corporation cautioned users about forthcoming changes to the Microsoft Services Agreement, which governs how people use Microsoft's consumer online products and services. The company provided a summary of the most significant changes, with explanations concerning privacy, content, code of conduct, Bing, and, of course, AI services.

The updated document includes an entirely new section titled "AI Services," in which Microsoft outlines what users are and are not allowed to do with these services. The revised Services Agreement also specifies that Microsoft will "process and store your inputs to the service as well as output from the service" for the purpose of monitoring and preventing "abusive or harmful uses" of the service. Additionally, sections related to privacy and "Your Content" have been expanded to cover new AI-based services such as Bing Chat.

While the new Microsoft Services Agreement is lengthy and tiresome to read, one would expect it to clarify the company's intentions regarding users' data and AI training. According to Mozilla, however, Microsoft's true intentions remain unclear even after experts reviewed the new document.

Over the past few days, the open-source foundation has humorously called on Microsoft to be more transparent about its AI training practices. Mozilla took to X to announce that its nine-person team of privacy professionals was unable to determine whether Microsoft intends to use personal data for AI training, and it asked the company to further clarify the new services agreement.

Mozilla's X campaign links to an official petition directed at Microsoft, posing a straightforward question: "Are you using our personal data to train AI?" The petition points out that if four lawyers, three privacy experts, and two campaigners could not discern Microsoft's intentions from the new agreement, what chance does the average person have?

While there are no guarantees, Microsoft may take the petition into consideration if enough users sign it. In the meantime, Mozilla continues to promote a more "responsible" and socially acceptable approach to generative AI technology.
