AIs make a lot of guesses, and we should know that

One of the most important AI ethics tasks is to educate developers, and especially users, about what AIs can do well and what they cannot do well. AI systems do amazing things, and users mostly assume these things are done accurately based on a few demonstrations. For example, the police assume facial recognition systems accurately tag bad guys, and that license plate databases accurately contain lists of stolen cars. But these systems are brittle, and an excellent example of this is the fun new ImageNet Roulette [update 2/22/20: no longer available] web tool put together by artist and researcher Trevor Paglen.

ImageNet Roulette is a provocation designed to help us see into the ways that humans are classified in machine learning systems. It uses a neural network trained on the “Person” categories from the ImageNet dataset, which has over 2,500 labels used to classify images of people.

ImageNet Roulette (via The Verge)

The service claims not to keep any uploaded photos, so if you trust them, you can upload a webcam image of yourself and see how the internet classifies your face.

Of course no human would look at a random image of another human devoid of context and attempt to assign a description such as “pipe smoker” or “newspaper reader.” We would say, “I don’t know. It just looks like a person.”

But AIs aren’t that smart yet. They don’t know what they can’t know. So ImageNet Roulette calculates the probability that an image matches each description, and then it outputs the description with the highest probability. It’s a shot in the dark. You might think it is seeing something deep, but nope. It has 2,500 labels and it has to apply one. I apparently look like a sociologist.
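To make this concrete, here is a minimal sketch of that pick-the-highest-probability step. The labels and scores below are invented for illustration (the real model scores roughly 2,500 “Person” categories); the point is that the classifier is forced to emit its top label even when that label’s probability is low, because there is no “I don’t know” option.

```python
import math

# Hypothetical raw scores a classifier might produce for one face image.
# These labels and numbers are made up for illustration.
raw_scores = {
    "sociologist": 1.2,
    "pipe smoker": 0.9,
    "newspaper reader": 0.8,
    "flutist": 0.3,
}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

probs = softmax(raw_scores)

# The system always outputs the single highest-probability label,
# even when that probability is far from certain.
best_label, best_prob = max(probs.items(), key=lambda kv: kv[1])
print(best_label, round(best_prob, 2))
```

In this toy run the winning label gets only about a third of the probability mass, yet it is reported as *the* answer. A human would treat that level of confidence as “I don’t know”; the classifier has no such output.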