AIs make a lot of guesses, and we should know that

One of the most important AI ethics tasks is to educate developers, and especially users, about what AIs can do well and what they cannot. AI systems do amazing things, and users mostly assume these things are done accurately based on a few demonstrations. For example, the police assume facial recognition systems accurately tag bad guys, and that license plate databases accurately contain lists of stolen cars. But these systems are brittle, and an excellent example of this is the fun new ImageNet Roulette web tool put together by artist and researcher Trevor Paglen.

ImageNet Roulette is a provocation designed to help us see into the ways that humans are classified in machine learning systems. It uses a neural network trained on the “Person” categories from the ImageNet dataset, which contains over 2,500 labels used to classify images of people.

ImageNet Roulette (via The Verge)

The service claims not to keep any uploaded photos, so if you trust them, you can upload a webcam image of yourself and see how the internet classifies your face.

Of course no human would look at a random image of another human devoid of context and attempt to assign a description such as “pipe smoker” or “newspaper reader.” We would say, “I don’t know. It just looks like a person.”

But AIs aren’t that smart yet. They don’t know what they can’t know. So ImageNet Roulette calculates the probability that an image falls under each of its descriptions, and then it outputs the highest-probability description. It’s a shot in the dark. You might think it is seeing something deep, but nope. It has 2,500 labels and it has to apply one. I apparently look like a sociologist.
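To make the “it has to apply one” point concrete, here is a minimal sketch of forced top-label classification versus a classifier that is allowed to abstain. This is not Paglen’s actual code; the label names, scores, and threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 2,500 "person" labels and raw scores for one image.
labels = [f"label_{i}" for i in range(2500)]   # e.g. "pipe smoker", "sociologist", ...
logits = rng.normal(size=len(labels))          # pretend model output

# Softmax turns raw scores into probabilities that sum to 1.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

best = int(np.argmax(probs))

# What ImageNet Roulette effectively does: report the top label
# no matter how unconvincing the win is (here the top probability is tiny).
print(f"forced guess: {labels[best]} (p = {probs[best]:.5f})")

# A system that knew what it couldn't know might abstain instead.
THRESHOLD = 0.5  # illustrative confidence cutoff
if probs[best] < THRESHOLD:
    print("abstain: it just looks like a person")
else:
    print(f"confident guess: {labels[best]}")
```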

UK court approves police use of facial recognition

In contrast to recent U.S. municipal decisions restricting government use of facial recognition technology, a UK court has ruled that police use of the technology does not violate any fundamental rights.

In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

Police Use of Facial Recognition Is Accepted by British Court

The UK is of course one of the most surveilled countries in the world.

AI for the military also means compassion

Phenomenal essay by Lucas Kunce, a U.S. Marine who served in Iraq and Afghanistan, responding to news that 4,600 Google employees signed a petition urging the company to refuse to build weapons technology:

People frequently threw objects of all sizes at our vehicles in anger and protest. Aside from roadside bombs, the biggest threat at the time, particularly in crowded areas, was an armor-piercing hand-held grenade. It looked like a dark soda can with a handle protruding from the bottom. Or, from a distance and with only an instant to decide, it looked just like many of the other objects that were thrown at us. 

One day in Falluja, at the site of a previous attack, an Iraqi man threw a dark oblong object at one of the vehicles in my sister team. The Marine in the turret, believing it was an armor-piercing grenade, shot the man in the chest. The object turned out to be a shoe.

[. . . . .]

When I think about A.I. and weapons development, I don’t imagine Skynet, the Terminator, or some other Hollywood dream of killer robots. I picture the Marines I know patrolling Falluja with a heads-up display like HoloLens, tied to sensors and to an A.I. system that can process data faster and more precisely than humanly possible — an interface that helps them identify an object as a shoe, or an approaching truck as too light to be laden with explosives.

Dear Tech Workers, U.S. Service Members Need Your Help

Interview with John Shawe-Taylor, professor at University College London

I enjoyed this interview and especially the title: “Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair.”

What are some core problems or research areas you want to approach?

People are now solving problems just by throwing an enormous amount of computation and data at them and trying every possible way. You can afford to do that if you are a big company and have a lot of resources, but people in developing countries cannot afford the data or the computational resources. So the theoretical challenge, or the fundamental challenge, is how to develop methods that are better understood and therefore don’t need experiments with hundreds of variants to get things to work.

Another thing is that some of the problems with current datasets, especially in terms of the usefulness of these systems for different cultures, is that there is a cultural bias in the data that has been collected. It is Western data informed with the Western way of seeing and doing things, so to some extent having data from different cultures and different environments is going to help make things more useful. You need to learn from data that is more relevant to the task.

Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair

And of course:

“Solving” is probably too strong, but for addressing those problems, as I’ve said, the problem is that we don’t realise that they are the reflections of our own problems. We don’t realise how biased we are until we see an AI reproduce the same bias, and we see that it’s biased.

I chuckle a bit when I hear about biased humans going over biased data in the hopes of creating unbiased data. Bias is a really hard problem, and it’s always going to be with us in one form or another. Education and awareness are the most important tools for addressing it.

Campaign to Stop Killer Robots

Toby Walsh is an Australian professor of computer science working to prevent the development of autonomous robotic weapons:

[Y]ou can’t have machines deciding whether humans live or die. It crosses new territory. Machines don’t have our moral compass, our compassion and our emotions. Machines are not moral beings. 

The technical argument is that these are potentially weapons of mass destruction, and the international community has thus far banned all other weapons of mass destruction.

Toby Walsh, A.I. Expert, Is Racing to Stop the Killer Robots

Different emphasis, but again the focus is on human-in-the-loop safety.

AI-enabled water gun

An artist working for a European Commission project called SHERPA, which investigates the way “smart information systems” impact human rights, has built an AI-enabled water gun:

‘Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,’ said Prof. Stahl. ‘The idea is to get people to think about what this sort of technology can do.’

Getting AI ethics wrong could ‘annihilate technical progress’

The project is intended to highlight how biased or inaccurate AI systems can impact ordinary people. Does it think you’re between 30 and 50 years old? Squirt.
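For a sense of how thin the line is between “AI-enabled” and “a dumb rule on top of a guess,” here is a minimal sketch of this kind of classifier-gated device. None of it is the project’s actual code: the Face fields, the should_squirt policy, and the fire_water_gun actuator are all hypothetical.

```python
# Minimal sketch of classifier-gated actuation, assuming a hypothetical
# face-analysis step that returns a guessed gender and age for each face.
from dataclasses import dataclass

@dataclass
class Face:
    gender_guess: str   # model output, e.g. "woman" or "man"
    age_guess: int      # model output, in years

def should_squirt(face: Face) -> bool:
    # The whole "policy" is a couple of thresholds on model guesses;
    # a misclassification gets acted out as a face full of water.
    return face.gender_guess == "woman" or 30 <= face.age_guess <= 50

def fire_water_gun() -> None:
    print("squirt!")   # stand-in for the real actuator

def on_frame(faces: list[Face]) -> None:
    for face in faces:
        if should_squirt(face):
            fire_water_gun()

# Example: a face the model reads as a 42-year-old man gets squirted anyway.
on_frame([Face(gender_guess="man", age_guess=42)])
```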

The AI Detection Arms Race

AIs generate fake news, and AIs detect it:

Grover is a strong detector of neural fake news precisely because it is simultaneously a state-of-the-art generator of neural fake news.

Why We Released Grover

Grover can both generate and detect AI text. The blog post explains why the research team from the University of Washington decided to release the Grover model despite OpenAI’s decision that GPT-2 (a similarly powerful text generation model) was “too dangerous to release.” They conclude that the real danger is in even bigger models, and that research must continue.

The new charge of “ethics washing”

Khari Johnson for VentureBeat:

One of the essential phrases necessary to understand AI in 2019 has to be “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes “AI for good” initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other.

How AI companies can avoid ethics washing

I don’t think it’s fair to accuse companies of faking their AI ethics efforts quite yet; there are no generally accepted standards to measure those efforts against.

Oakland, CA also bans city agencies from using facial recognition tech

Following on the heels of San Francisco and Somerville, Massachusetts:

The Oakland city council voted last night to pass an ordinance banning city agencies from using facial recognition technology. The move sets up Oakland to become the third city in the United States to pass similar legislation.

Oakland city council votes to ban government use of facial recognition

Are we entering an AI cool-down in which the hard tech gets acknowledged as hard and the effective tech gets banned? It makes a certain amount of sense, of course: effective is dangerous. We need good processes.

Defining a “bot” is hard

A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:

California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot at 3.

As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:

A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.

Id. at 40.

One of the major problems is that so much unethical behavior is a combination of human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.