MIT makes crazy AI on purpose

A group at MIT showed an AI algorithm, which they named Norman, really disturbing pictures and then asked it what it saw in a set of inkblots.

When a “normal” AI image-captioning algorithm is asked what it sees in an abstract shape, it chooses something cheery: “A group of birds sitting on top of a tree branch.”

Norman sees a man being electrocuted.

And where “normal” AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from “the dark corners of the net” would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a community on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

Personally, I’m afraid of the people who spend their time seeking out photos of people dying violently and posting them on Reddit. As for the algorithm, I suppose it could be trained to identify and block violent images like the ones it was shown, or child pornography, or ads targeting minors. But in the wrong hands, it could just as easily be used to block political speech or repress certain people or groups. Or the highest bidder could use it to suppress competitors’ ads.
