AIs need to learn what we want

As it becomes increasingly apparent that we cannot tell artificial intelligence precisely what its goal should be, a growing chorus of researchers and ethicists are throwing up their hands and asking the AIs to learn that part as well.

Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines.

Uncertainty about objectives might sound counterproductive, but it is actually an essential feature of safe intelligent systems. It implies that no matter how intelligent they become, machines will always defer to humans. They will ask permission when appropriate, they will accept correction, and, most important, they will allow themselves to be switched off — precisely because they want to avoid doing whatever it is that would give humans a reason to switch them off.

How to Stop Superhuman A.I. Before It Stops Us

This raises a lot of questions, not the least of which is: what are our objectives? But it turns out we have the same problem describing what we want as we have describing how we perceive. We’re just going to have to show them.
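
The deference argument quoted above can be made concrete with a toy calculation. The sketch below is a minimal, invented illustration of the idea sometimes formalized as an “off-switch game”: the payoffs and the fifty-fifty belief are my assumptions, not anything from the op-ed.

```python
# Toy sketch of the deference argument: a robot is unsure whether its
# planned action has positive or negative value to the human. All
# numbers and the belief distribution are invented for illustration.

# The robot's belief: the action's true utility U is equally likely
# to be +1 (the human likes it) or -1 (the human hates it).
possible_utilities = [+1.0, -1.0]
belief = [0.5, 0.5]

def expected(value_fn):
    return sum(p * value_fn(u) for p, u in zip(belief, possible_utilities))

# Option A: act immediately. Expected value is the mean of U.
ev_act = expected(lambda u: u)

# Option B: defer. Propose the action and let the human, who knows U,
# approve it (when U > 0) or switch the robot off (worth 0).
ev_defer = expected(lambda u: u if u > 0 else 0.0)

print(f"E[value | act now] = {ev_act:+.2f}")   # +0.00
print(f"E[value | defer]   = {ev_defer:+.2f}")  # +0.50
```

As long as the robot is uncertain about what we want, leaving the off switch in human hands is, by its own arithmetic, the better move.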

There is no Western AI plan

Tim Wu, writing in the New York Times:

But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.

The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. . . .

To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.

America’s Risky Approach to Artificial Intelligence

Planning requires paying attention. We’re a little distracted in the West these days. And Russia and China love that.

AIs find non-intuitive ways to win at simulated hide-and-seek

Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” including “box surfing, where seekers learn to bring a box to a locked ramp in order to jump on top of the box and then ‘surf’ it to the hider’s shelter,” according to OpenAI.

AI breaks simulated laws of physics to win at hide and seek

These are entertaining simulations to watch.

Massive private surveillance networks

Joseph Cox of Motherboard has a story on a massive private license plate surveillance network called DRN:

This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN’s database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network.

This Company Built a Private Surveillance Network. We Tracked Someone With It

I wrote recently about private surveillance projects that may meet or exceed government efforts. It won’t be long before license plate readers become facial recognition scanners. It’s probably happening now.

AIs make a lot of guesses, and we should know that

One of the most important AI ethics tasks is to educate developers, and especially users, about what AIs can do well and what they cannot. AI systems do amazing things, and users mostly assume, based on a few demonstrations, that these things are done accurately. For example, the police assume facial recognition systems accurately tag bad guys, and that license plate databases accurately list stolen cars. But these systems are brittle, and an excellent example of this is the fun new ImageNet Roulette web tool put together by artist and researcher Trevor Paglen.

ImageNet Roulette is a provocation designed to help us see into the ways that humans are classified in machine learning systems. It uses a neural network trained on the “Person” categories from the ImageNet dataset which has over 2,500 labels used to classify images of people. 

ImageNet Roulette (via The Verge)

The service claims not to keep any uploaded photos, so if you trust them, you can upload a webcam image of yourself and see how the internet classifies your face.

Of course no human would look at a random image of another human devoid of context and attempt to assign a description such as “pipe smoker” or “newspaper reader.” We would say, “I don’t know. It just looks like a person.”

But AIs aren’t that smart yet. They don’t know what they can’t know. So ImageNet Roulette calculates the probability that an image matches each of its descriptions, and then it outputs the highest-probability description. It’s a shot in the dark. You might think it is seeing something deep, but nope. It has 2,500 labels and it has to apply one. I apparently look like a sociologist.
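
To make the “shot in the dark” concrete, here is a minimal sketch of forced-choice classification versus a classifier that is allowed to abstain. The labels, probabilities, and threshold are all invented for illustration; this is not ImageNet Roulette’s actual code.

```python
# A forced-choice classifier versus one that can abstain. Labels and
# probabilities are invented; this is not ImageNet Roulette's code.

labels = ["pipe smoker", "newspaper reader", "sociologist", "swimmer"]
# Pretend softmax output for one face: nothing here is confident.
probs = [0.27, 0.25, 0.28, 0.20]

# What a forced-choice system does: always emit the argmax label,
# no matter how flat the distribution is.
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"Forced choice: {labels[best]} ({probs[best]:.0%})")

# What a more honest system could do: refuse to answer when the top
# probability is barely above chance.
THRESHOLD = 0.5  # arbitrary cutoff for this sketch
if probs[best] >= THRESHOLD:
    print(f"Confident: {labels[best]}")
else:
    print("I don't know. It just looks like a person.")
```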

Moving beyond detection of statistical patterns in AI

Gary Marcus and Ernest Davis writing for the NYT:

. . . We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.

Google’s Talk to Books, an A.I. venture that aims to answer your questions by providing relevant passages from a huge database of texts, did no better. It served up 20 passages with a wide array of facts, some about George Washington, others about the invention of computers, but with no meaningful connection between the two.

The situation is even worse when it comes to A.I. and the concepts of space and causality. Even a young child, encountering a cheese grater for the first time, can figure out why it has holes with sharp edges, which parts allow cheese to drop through, which parts you grasp with your fingers and so on. But no existing A.I. can properly understand how the shape of an object is related to its function. Machines can identify what things are, but not how something’s physical features correspond to its potential causal effects.

How to Build Artificial Intelligence We Can Trust

Modern AIs have no basic understanding of the world, and there’s not much progress toward one.
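
What makes the failure striking is how trivial the missing step is once the two facts are represented explicitly. A few lines of Python, with the well-known dates hard-coded for illustration, perform the temporal join the search engines could not:

```python
# The two facts, stated explicitly. The years are well known and
# hard-coded here purely for illustration.
washington_lifespan = (1732, 1799)
first_computers = (1940, None)  # electronic computers: 1940s onward

def could_have_owned(lifespan, invention):
    """True only if the person's life overlaps the invention's existence."""
    _, death = lifespan
    invented, _ = invention
    return death >= invented

print(could_have_owned(washington_lifespan, first_computers))  # False
```

Washington died roughly 140 years before the first electronic computers, so the answer is no. The hard part is not the comparison; it is getting a machine to represent the facts this way on its own.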

Does automating the boring stuff leave only the unpleasant stuff?

Fred Benenson, writing for The Atlantic, is concerned that AI automation will leave only the most difficult and unpleasant tasks for humans:

What’s less understood is that artificial intelligence will transform higher-skill positions, too—in ways that demand more human judgment rather than less. And that could be a problem. As AI gets better at performing the routine tasks traditionally done by humans, only the hardest ones will be left for us to do. But wrestling with only difficult decisions all day long is stressful and unpleasant. Being able to make at least some easy calls, such as allowing Santorini onto Kickstarter, can be deeply satisfying.

“Decision making is very cognitively draining,” the author and former clinical psychologist Alice Boyes told me via email, “so it’s nice to have some tasks that provide a sense of accomplishment but just require getting it done and repeating what you know, rather than everything needing very taxing novel decision making.”

AI Is Coming for Your Favorite Menial Tasks

He recognizes that many professions (e.g., lawyers!) may welcome automation of the boring stuff. But he’s particularly concerned about content moderators.

Then again, we may find that as the jobs get harder, the benefits get better.

Survey suggests most Americans support police use of facial recognition technology

According to the Pew Research Center, a full 56 percent said that they trust police and officials to use these technologies responsibly. That goes for situations in which no consent is given: About 59 percent said it is acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces.

Police Use of Facial Recognition is Just Fine, Say Most Americans

Black and Hispanic adults approve at lower rates. See the study for details.

UK court approves police use of facial recognition

In contrast to recent U.S. municipal decisions restricting government use of facial recognition technology, a UK court has ruled that police use of the technology does not violate any fundamental rights.

In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

Police Use of Facial Recognition Is Accepted by British Court

The UK is of course one of the most surveilled countries in the world.

Language models are improving quickly

Aristo, an AI developed by the Allen Institute for Artificial Intelligence, passed an eighth-grade science test with a score of 90 percent. It builds on BERT.

At Google, researchers built a system called Bert that combed through thousands of Wikipedia articles and a vast digital library of romance novels, science fiction and other self-published books.

Through analyzing all that text, Bert learned how to guess the missing word in a sentence. By learning that one skill, Bert soaked up enormous amounts of information about the fundamental ways language is constructed. And researchers could apply that knowledge to other tasks.

The Allen Institute built their Aristo system on top of the Bert technology. They fed Bert a wide range of questions and answers. In time, it learned to answer similar questions on its own.

A Breakthrough for A.I. Technology: Passing an 8th-Grade Science Test
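
The “guess the missing word” pretraining task is easy to poke at directly. Assuming the Hugging Face transformers library is installed, the snippet below exercises BERT’s masked-word objective; it demonstrates the pretraining skill the article describes, not the Aristo system itself.

```python
# Demo of BERT's pretraining skill (guessing a masked word) using the
# Hugging Face `transformers` library, assumed installed. This shows
# the objective the article describes; it is not Aristo.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill("The students passed the eighth-grade science [MASK]."):
    # Each guess carries the predicted token and the model's confidence.
    print(f"{guess['token_str']:>10s}  {guess['score']:.3f}")
```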