MIT trained an AI algorithm on data from a Reddit community about death

Shawn Knight

Recap: MIT's research highlights the importance of training AI on the right data set. For those fearful of a real-life Skynet, results like this certainly won't make you sleep any easier at night.

Researchers at MIT’s Media Lab have created what they’re calling the world’s first psychopath AI. Norman (named after a character in Alfred Hitchcock’s Psycho) was trained with data from “the darkest corners of Reddit” and serves as a case study of how biased data can influence machine learning algorithms.

As the team highlights, AI algorithms can see very different things in an image if trained on the wrong data set. Norman was trained to perform image captioning, a deep learning method used to generate a description of an image. It was fed image captions from an “infamous” subreddit that documents the disturbing reality of death (the specific name of the subreddit wasn’t provided due to its graphic nature).

Once trained, Norman was tasked with describing Rorschach inkblots – a common test used to detect underlying thought disorders – and the results were compared with a standard image captioning neural network trained on the MSCOCO data set. The results were quite startling.

Well, alright then.

Researchers note that due to ethical concerns, they only trained Norman on the image captions; no images of real people dying were used in the experiment.

This isn't the first time we've seen AI exhibit poor behavior. In 2016, if you recall, Microsoft launched an AI chat bot named Tay modeled after a 19-year-old girl. In less than 24 hours, the Internet managed to corrupt the bot's personality, forcing Microsoft to promptly pull the plug.


 
"DEATH" is the ultimate philosophical question and has yet to be understood adequately.

Obviously, there are also several religious views on the subject.

So to take one of the most difficult questions mankind faces and dare to assume that any AI response to it is meaningful
is like taking a 2% confidence result from a statistical sample and treating it as a definitive, perfect solution --
IMO, self-delusional.
 
To illustrate the difficulty of understanding and coping with death, just look at the classical approaches to dealing with its consequences -- aka grief.
 
That wasn't the point of the test . . .
 
Yes, you are correct of course. However, I am objecting to treating one of the most complicated and least understood issues as if it were simple to solve with AI.

Such work requires a model to find any solution(s), and the results will vary by:
  • the understanding of the problem
  • creating a model (or models) for that understanding
  • proposing alternative interpretations of the data
One phenomenon we all face every day is weather and its prediction, and we're all aware that even that's hit-and-miss.

https://www.ncdc.noaa.gov/data-access/model-data has some classic prediction models online that highlight the complexity and highly variable results:
  • Global Data Assimilation System
  • Global Ensemble Forecast System
  • Global Forecast System
  • Climate Forecast System
  • North American Multi-Model Ensemble
Remember that the MIT guys/gals, while very good in their fields, still need to face the "reality check". For example, a negative square root is one of the two square roots of a positive number. For the number 25, the negative square root is -5 because (-5)^2 = 25. Mathematically correct, but -5 is not founded in the real world -- ever have -5 pounds?
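The two-roots point is easy to check in a few lines of Python (a trivial sketch for illustration, not part of the original post):

```python
import math

# A positive number has two real square roots: one positive, one negative.
n = 25
pos_root = math.sqrt(n)    # the principal (positive) root
neg_root = -math.sqrt(n)   # the negative root

# Both satisfy the defining equation r^2 == n,
# even though a quantity like "-5 pounds" has no physical meaning.
assert pos_root ** 2 == n
assert neg_root ** 2 == n
```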

So the MIT model needs to stand up to peer-review and applicability in the real world - - don't hold your breath.

Heck, psychology can't even get behavior modification to work 100% of the time, so Rorschach tests are questionable -- they aren't even double-blind studies.
 