LinkedIn warns: you are responsible for sharing inaccurate content created by our AI

midian182

A hot potato: Companies that offer generative AI tools tend to advise users that the content they create might be inaccurate. Microsoft's LinkedIn has a similar disclaimer, though it goes slightly further by warning that users who share this misinformation will be held responsible for it.

Microsoft recently updated its Service Agreement with a disclaimer emphasizing that its Assistive AI is not designed, intended, or to be used as a substitute for professional advice.

As reported by The Register, LinkedIn is updating its User Agreement with similar language. In a section that takes effect on November 20, 2024, the platform states that users might interact with features that automate content generation. This content might be inaccurate, incomplete, delayed, misleading, or not suitable for their purposes.

So far, so standard. But the next section is something we don't often see. LinkedIn states that users must review and edit the content its AI generates before sharing it with others. It adds that users are responsible for ensuring this AI-generated content complies with its Professional Community Policies, which include not sharing misleading information.

It seems somewhat hypocritical that LinkedIn strictly enforces policies against sharing fake or inauthentic content that its own tools can potentially generate. Repeat violators of its policies might be punished with suspension or even account termination.

The Register asked LinkedIn whether it intends to hold users responsible for sharing AI content that violates its policies, even when that content was created by its own tools. Not really answering the question, a spokesperson said LinkedIn is making an opt-out setting available for the training of AI models used for content generation in the countries where it does this.

"We've always used some form of automation in LinkedIn products, and we've always been clear that users have the choice about how their data is used," the spokesperson continued. "The reality of where we're at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our GenAI services do is help give them that assist."

Another eyebrow-raising aspect of all this is that LinkedIn announced the upcoming changes on September 18, around the same time the platform revealed it had started harvesting user-generated content to train its AI without asking people to opt in first. The outcry and investigations led LinkedIn to later announce that it would not enable AI training on data from users in the European Economic Area, Switzerland, and the UK until further notice. Those in the US still have to opt out.


 
Yeah, SURE this is gonna hold up in court LMAO.

I'll take "how to lose safe harbor provisions" for $200, Alex.

I just checked the stock market. Their valuation is in the gutter, a steady downward spiral. This move makes sense now.

LinkedIn wouldn't be risking this kind of reputational damage without a good reason. Clearly, they've been working on this system for a while and it just is not ready, and they've decided they can no longer wait until it IS ready to release it. So they simply amended their T&C to make it the user's problem, probably hoping to retroactively remove this provision later and act like their system had been working all along before anyone noticed. Unfortunately for them, we did. That's the good-faith interpretation, anyway.

The bad-faith interpretation is that they know it's bad and they know users don't want it, but also: "What are you going to do, take your business somewhere else? Ha, you're a captive audience, you have no rights!" Whatever makes "line go up".
 
This is how the AI bubble will eventually burst. Companies decided to scrape user data from random sources just for the sake of having data to train their AI. When it comes to the responsibility of ensuring some level of quality/accuracy, they push it back onto the users. If I cannot trust the output, it limits the possible use cases. If technologies like this can't scale, there will still be some relevant use cases, but people will not spend much on them.
 