
MIT trained an AI algorithm on data from a Reddit community about death

By Shawn Knight · 6 replies
Jun 7, 2018
  1. Researchers at MIT’s Media Lab have created what they’re calling the world’s first psychopath AI. Norman (named after a character in Alfred Hitchcock’s Psycho) was trained with data from “the darkest corners of Reddit” and serves as a case study of how biased data can influence machine learning algorithms.

    As the team highlights, AI algorithms can see very different things in an image if trained on the wrong data set. Norman was trained to perform image captioning, a deep learning method used to generate a description of an image. It was fed image captions from an “infamous” subreddit that documents the disturbing reality of death (the specific name of the subreddit wasn’t provided due to its graphic nature).

    Once trained, Norman was tasked with describing Rorschach inkblots – a common test used to detect underlying thought disorders – and the results were compared with a standard image captioning neural network trained on the MSCOCO data set. The results were quite startling.

    Well, alright then.
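The effect at work here -- the same ambiguous input producing very different captions depending solely on what the model was trained on -- can be shown with a toy sketch. This is purely an illustration, not the researchers' actual model: a trivial "captioner" that returns the training caption whose word profile best matches the prompt. The corpora and prompt below are invented for the example.

```python
from collections import Counter

def similarity(a, b):
    """Number of shared words between two captions (multiset overlap)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((ca & cb).values())

def caption(prompt, training_captions):
    """Return the training caption most similar to the prompt."""
    return max(training_captions, key=lambda c: similarity(prompt, c))

neutral_corpus = ["a bird flying over a lake",
                  "a vase of flowers on a table"]
dark_corpus = ["a man falling from a building",
               "a person lying on the ground"]

# Identical prompt, divergent output -- the training data, not the
# algorithm, determines what the model "sees".
prompt = "a dark shape over the ground"
print(caption(prompt, neutral_corpus))  # a bird flying over a lake
print(caption(prompt, dark_corpus))     # a person lying on the ground
```

The same matching rule is applied in both calls; only the training set differs, which is the point the MIT team is making about biased data.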

    Researchers note that due to ethical concerns, they only trained Norman on the image captions; no images of real people dying were used in the experiment.

    This isn't the first time we've seen AI exhibit poor behavior. In 2016, if you recall, Microsoft launched an AI chatbot named Tay, modeled after a 19-year-old girl. In less than 24 hours, the Internet managed to corrupt the bot's personality, forcing Microsoft to promptly pull the plug.


     
  2. Jeff Re

    Jeff Re TS Addict Posts: 146   +106

    Just wait until he escapes...
     
    dogofwars likes this.
  3. jobeard

    jobeard TS Ambassador Posts: 12,817   +1,518

    "DEATH" is the ultimate philosophical question and has yet to be understood adequately.

    Obviously, there are also several religious views on the subject.

    So to take one of the most difficult questions mankind faces and dare assume that any AI response to it is meaningful
    is like taking a statistical result with only 2% confidence and treating it as a definitive, perfect solution --
    IMO, self-delusional.
     
  4. jobeard

    jobeard TS Ambassador Posts: 12,817   +1,518

    To illustrate the difficulty of understanding and coping with death, just look at the classical approaches to dealing with the consequences - - aka grief
     
  5. wiyosaya

    wiyosaya TS Evangelist Posts: 3,879   +2,206

    Machine learning algorithms are not the only thing that biased data can influence.
     
    SirChocula and lumbeeman like this.
  6. Tanstar

    Tanstar TS Evangelist Posts: 658   +202

    That wasn't the point of the test . . .
     
  7. jobeard

    jobeard TS Ambassador Posts: 12,817   +1,518

    Yes, you are correct of course. However, I am objecting to treating one of the most complicated and least understood issues as if it were simple to solve with AI.

    Such work requires a model to find any solution(s) and the results will vary by
    • the understanding of the problem
    • creating a model (or models) for that understanding
    • proposing alternative interpretations of the data
    One phenomenon we all face every day is weather and its prediction, and we're all aware even that's hit-and-miss.

    https://www.ncdc.noaa.gov/data-access/model-data has some classic prediction models online to highlight the complexity and highly variable results:
    • Global Data Assimilation System
    • Global Ensemble Forecast System
    • Global Forecast System
    • Climate Forecast System
    • North American Multi-Model Ensemble
    Remember that the MIT guys/gals, while very good in their fields, still need to face the "reality check". For example, a negative square root is one of the two square roots of a positive number: for 25, the negative square root is -5 because (-5)^2 = 25. Mathematically correct, but -5 is not founded in the real world - - ever have -5 pounds?

    So the MIT model needs to stand up to peer-review and applicability in the real world - - don't hold your breath.

    Heck, psychology can't even get Behavior Modification to work 100% of the time, so Rorschach tests are questionable, as they are not even double-blind studies.
     
