MIT Creates AI-Powered Psychopath Called ‘Norman’

Artificial intelligence researchers have thus far attempted to make well-rounded algorithms that can be helpful to humanity. However, a team from MIT has undertaken a project to do the exact opposite. Researchers from the MIT Media Lab have trained an AI to be a psychopath by exposing it only to content about violence and death. It's like a Skinner box of horror for the AI, which the team has named "Norman" after movie psychopath Norman Bates. Predictably, Norman is not a very well-adjusted AI.

Norman started off with the same potential as any other neural network: as you feed it data, it learns to recognize the patterns in what it encounters. Technology companies have used AI to help search through photos and create more believable speech synthesis, among many other applications. Those well-rounded AIs were designed with a specific, useful purpose in mind. Norman was born to be a psychopath.

The MIT team fed Norman a steady diet of data culled from gruesome subreddits that exist to share photos of death and destruction. Because of ethical concerns, the team didn't actually handle any photos of people dying. Norman was given only the image captions from those subreddits, which were then matched to inkblots, and that text formed the basis of his disturbing AI personality.

After training, Norman and a "regular" AI were shown a series of inkblots. Psychologists sometimes use these "Rorschach tests" to assess a patient's mental state. Norman and the regular AI are essentially image-captioning bots, image captioning being a popular application of deep learning. The regular AI saw things like an airplane, flowers, and a small bird. Norman saw people dying from gunshot wounds, jumping from buildings, and so on.
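Norman itself hasn't been released, but image captioning in general is easy to try. Here is a minimal sketch using an off-the-shelf captioning model, assuming the Hugging Face transformers library and its publicly available nlpconnect/vit-gpt2-image-captioning model (neither of which is MIT's work); the file name is a hypothetical placeholder:

```python
# Minimal image-captioning sketch with an off-the-shelf model.
# Assumes: pip install transformers torch pillow
from transformers import pipeline

# Load a generic image-to-text (captioning) pipeline -- a public model,
# not Norman or MIT's "regular" comparison AI.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Caption a local image; a scanned inkblot would be handled the same way.
result = captioner("inkblot.jpg")  # hypothetical file path
print(result[0]["generated_text"])
```

The interesting part of the MIT experiment isn't this inference step but the training data behind it: two models with the same architecture, fed different captions, describe the same inkblot very differently.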

Norman was not corrupted to make any sort of point about human psychology on the internet; a neural network is a blank slate, with none of the innate desires a human has. What Norman does illustrate is how easily artificial intelligence can become dangerously biased. With AI, you get out what you put in, so it's important that these systems are trained to avoid bias, and preferably not left to browse the darker corners of Reddit for long periods of time.
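To make the "you get out what you put in" point concrete, here is a toy sketch, unrelated to MIT's actual code, that trains two identical bigram text generators on different caption lists. The caption lists are hypothetical stand-ins for benign versus gruesome training data:

```python
import random
from collections import defaultdict

def train_bigram_model(captions):
    """Build a simple bigram table: word -> list of possible next words."""
    table = defaultdict(list)
    for caption in captions:
        words = ["<start>"] + caption.lower().split() + ["<end>"]
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, max_words=12, seed=None):
    """Sample a caption by walking the bigram table from <start>."""
    rng = random.Random(seed)
    word, out = "<start>", []
    while len(out) < max_words:
        word = rng.choice(table[word])
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

# Hypothetical training captions -- stand-ins for the two data sources.
neutral_captions = [
    "a small bird perched on a branch",
    "a vase of flowers on a table",
    "an airplane flying through clouds",
]
grim_captions = [
    "a man falling from a tall building",
    "a man shot in front of a building",
]

neutral_model = train_bigram_model(neutral_captions)
grim_model = train_bigram_model(grim_captions)

print("neutral:", generate(neutral_model, seed=0))
print("grim:   ", generate(grim_model, seed=0))
```

Both generators share the exact same code; only the data differs, and so do the captions they produce, which is the whole point of the Norman experiment.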

The team now wants to see if it can fix Norman. You can take the same Rorschach test and add your own captions; the team will use that data to adjust Norman's model and see if he starts seeing less murder. We can only hope.
