In the age of AI disinformation, people remain the biggest challenge

midian182

A hot potato: One of the many fears surrounding generative AI is its ability to create disinformation that is then spread online. But while companies and organizations are working against this worsening phenomenon, it seems the biggest challenge they face is the people who refuse to believe something is fake.

Artificial intelligence has been used to create disinformation for years, but the new wave of generative AI has brought advancements few could have imagined. Convincing images, videos, and audio clips can all be created and used to influence or reinforce the public's opinions.

Part of the problem is that many people distrust those institutions that can confirm something is fake. One only has to look at social media to see all the accusations of nefarious influences at work, along with the line "Of course they want you to believe it's not real." The situation isn't helped by many users being unable to spot these fakes.

Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley, told Bloomberg, "Social media and human beings have made it so that even when we come in, fact check and say, 'nope, this is fake,' people say, 'I don't care what you say, this conforms to my worldview.'"

"Why are we living in that world where reality seems to be so hard to grip?" he said. "It's because our politicians, our media outlets and the internet have stoked distrust."

One of the biggest concerns over misinformation created by generative AI is anything related to next year's election. Microsoft warned last month that Chinese operatives have been using the technology for this purpose, creating images and other content that focuses on politically divisive topics, such as gun violence, and that denigrates US political figures and symbols.

The spread of this sort of election misinformation in the US has been minimal so far, but expect it to increase. It's especially bad on X/Twitter, which the EU says is the worst platform for disinformation, a designation that came just as X disabled a feature that let users report misinformation related to elections.

Ultimately, people are usually the biggest obstacle when it comes to fighting AI-generated misinformation. Many share this content because they simply don't know it's fake, and with video and images becoming increasingly realistic, it can be hard to convince them otherwise.


 
"Misinformation" is a funny buzzword to use in an attempt to discredit anyone who doesn't support "the message," and just like all the other funny buzzwords, it has lost all meaning. Remember the "misinformation" about that lab in woo-han that later turned out to be true?
Apologies, however, as I see it, this is a prime example of "believing what conforms to one's world view." There's still wide disagreement about the "lab in woo-han" and its role related to "gain in function" with the COVID-19 virus. There is science out there that specifically states that such "gain in function" (meaning making the virus more infectious to humans) is very, very, very difficult if not impossible to achieve.

I doubt you'll read it, but here's an interesting read on the entire subject of "the lab in woo-han" https://www.factcheck.org/2021/05/the-wuhan-lab-and-the-gain-of-function-disagreement/

The bulk of the education system has focused on recitation, which is good for standardization. This improves the worst schools/students, but it is bad for the best students, as innovation, creativity, and critical thinking suffer.
I agree because I've been there.
 
Apologies, however, as I see it, this is a prime example of "believing what conforms to one's world view." There's still wide disagreement about the "lab in woo-han" and its role related to "gain in function" with the COVID-19 virus. There is science out there that specifically states that such "gain in function" (meaning making the virus more infectious to humans) is very, very, very difficult if not impossible to achieve.

I doubt you'll read it, but here's an interesting read on the entire subject of "the lab in woo-han" https://www.factcheck.org/2021/05/the-wuhan-lab-and-the-gain-of-function-disagreement/
You may not remember, but back in 2021 anyone who questioned the official statement of "the lab has nothing to do with this" was labeled as "misinformation"; some even suggested there should be punishment for making such statements. This demand continued to build... until Facebook refused to take down statements or posts that questioned The Science (tm). Suddenly, now that the social media puppets are backtalking, it's perfectly OK to question whether the lab had something to do with it, and to ask for further investigation as public unrest grows. Many of the statements made in the article you posted would have been labeled as "misinformation" in 2021. Once it was no longer politically expedient to deny any attachment (or that Fauci was in any way involved with said lab), suddenly it was no longer misinformation but rather "a claim that demands further investigation."

Remember: this was called "misinformation" in 2020

"Enter the Yan report. On September 14, an article was posted to Zenodo, an open-access site for sharing research papers, which claimed that genetic evidence showed that the SARS-CoV-2 coronavirus was made in a lab, rather than emerging through natural spillover from animals. The 26-page paper, led by Chinese virologist Li-Meng Yan, a postdoctoral researcher who left Hong Kong University, has not undergone peer review and asserts that this evidence of genetic engineering has been “censored” in the scientific journals. (National Geographic contacted Yan and the report’s three other authors for comment but received no reply.)"

https://www.nationalgeographic.com/...gins-misinformation-yan-report-fact-check-cvd

"It's encroaching on pseudoscience, really," says Robertson. "This paper just cherry-picked a couple of examples, excluded evidence, and came up with a ridiculous scenario."

The same claims the US now wants investigated were once "pseudoscience," called "misinformation," and attributed only to bad actors. "Misinformation" is a dog whistle used to discredit and silence someone who may be noticing something inconvenient to those in power.
 
The once-trusted institutions traded their integrity for political expediency.

Now they bemoan their loss of prestige and trust. 🤷‍♂️ Trust is hard-earned. Sorry, you wasted it.
 
From my perspective, the use of AI for any kind of information removes the liability for intentional misinformation. Put a person in front of the camera, make them aware they are liable for what's said, and you'll cut down on misinformation. It won't entirely eliminate it, but it will cut it down.
 
The spread of this sort of election misinformation in the US has been minimal so far but expect to see it increase.

Really? Campaign misinformation has been going on as long as humans have been campaigning for office. I doubt you could find a politician who hasn't promised something on the campaign trail that they later walked back or completely abandoned. Look at Biden's border wall statement, "not another foot of border wall..." and yet here we are, getting ready to build more wall, enabled by Biden's waiving of 26 federal laws that prevented wall construction.
 
When I fact-checked the fact checkers, I found that most of the time they were lying. These weren't mistakes; they were lies, and they came from the top fact checkers. For one example among many, Obama's comment from "Dreams from My Father" about Islam is true. It is right there in the book, and what the fact checker claimed he said was the sentence before it. This kind of thing kept repeating itself. If you think the fact checkers are reliable, you should look into how they are funded. Many fact-checking organizations are funded by a cause with an axe to grind.
 
Where money is involved, "misinformation" will always be used to hide the truth until pockets and bank accounts are filled. Then statements come out saying "we made an error in our findings."
 