Computers, Gaming, & Technology
Here you can talk about anything with circuit boards, or dilithium crystals, or flux capacitors. Show off your technology, computing, and gaming knowledge.

This could end bad

Old 06-08-2018, 11:00 AM
  #1  
Administrator
Thread Starter
 
Visionz's Avatar
 
Join Date: May 2001
Location: Upstate NY
Posts: 23,223
Received 6 Likes on 6 Posts
Vehicle: 2010 Genesis 2.0T
This could end bad

MIT scientists created an AI-powered 'psychopath' named Norman

Norman always sees the worst in things.

That's because Norman is a "psychopath" powered by artificial intelligence and developed by the MIT Media Lab.

Norman is an algorithm meant to show how the data behind AI matters deeply.

MIT researchers say they trained Norman using written captions describing graphic images and videos about death posted on the "darkest corners of Reddit," a popular message board platform.

The team then examined Norman's responses to inkblots used in a Rorschach psychological test, and compared them with the responses of another algorithm that had received standard training. That algorithm saw flowers and wedding cakes in the inkblots; Norman saw a man being fatally shot and a man killed by a speeding driver.
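As a toy illustration (nothing like MIT's actual models), the bias effect can be shown in miniature: two copies of the same trivial word-frequency "model," trained on different caption sets, describe the same ambiguous input very differently. The captions and the frequency-counting approach here are invented for the sketch:

```python
from collections import Counter

def train(captions):
    """'Train' a model by counting word frequencies in its captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, stopwords=("a", "the", "of", "in", "by")):
    """'Describe' an ambiguous inkblot: the model can only reach for
    whatever vocabulary dominated its training data."""
    for word, _ in model.most_common():
        if word not in stopwords:
            return word
    return ""

# Invented stand-ins for the two training corpora in the article.
standard = train(["a vase of flowers", "a wedding cake", "a bird in flight"])
norman = train(["a man shot dead", "a man shot in the street", "shot by a driver"])

print(describe(standard))  # 'vase'
print(describe(norman))    # 'shot' -- the only signal it ever saw
```

Same code, same "inkblot," opposite descriptions; the only difference is the data each copy was fed.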

"Norman only observed horrifying image captions, so it sees death in whatever image it looks at," the MIT researchers behind Norman told CNNMoney.

Named after the main character in Alfred Hitchcock's "Psycho," Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to MIT.

We've seen examples before of how AI is only as good as the data it learns from. In 2016, Microsoft (MSFT) launched Tay, a Twitter chatbot. At the time, a Microsoft spokeswoman called Tay a social, cultural, and technical experiment. But Twitter users deliberately provoked the bot into saying racist and inappropriate things, and it worked: as people chatted with Tay, the bot picked up language from users. Microsoft ultimately pulled the bot offline.

The MIT team thinks it will be possible for Norman to retrain its way of thinking via learning from human feedback. Humans can take the same inkblot test to add their responses to the pool of data.
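The "add human responses to the pool" idea can be sketched with plain counters. All the numbers below are invented; the point is only that pooled feedback can eventually outweigh the original skewed signal:

```python
from collections import Counter

# Invented counts standing in for Norman's skewed training signal.
norman = Counter({"shot": 40, "man": 25, "death": 20, "flowers": 1})

# Pooled human responses to the same inkblots (also invented).
human_feedback = Counter({"flowers": 120, "butterfly": 90, "cake": 60})

# Feedback is simply added to the pool; Counter addition merges counts.
retrained = norman + human_feedback

print(norman.most_common(1))     # [('shot', 40)]
print(retrained.most_common(1))  # [('flowers', 121)]
```

With enough human responses in the pool, the dominant association shifts from "shot" to "flowers," which is exactly the retraining mechanism the article describes at a conceptual level.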

According to the researchers, the project has received more than 170,000 responses to its test, most of which poured in over the past week following a BBC report on the project.

MIT has explored other projects that incorporate the dark side of data and machine learning. In 2016, some of the same Norman researchers launched "Nightmare Machine," which used deep learning to transform images of faces and places so they look like something out of a horror film. The goal was to see if machines could learn to scare people.

MIT has also explored data as an empathy tool. In 2017, researchers created an AI tool called Deep Empathy to help people better relate to disaster victims. It used technology to visually simulate what it would look like if the same disaster hit your hometown.