AIs find non-intuitive ways to win at simulated hide-and-seek

Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” including “box surfing, where seekers learn to bring a box to a locked ramp in order to jump on top of the box and then ‘surf’ it to the hider’s shelter,” according to OpenAI.

AI breaks simulated laws of physics to win at hide and seek

These are entertaining simulations to watch.

Massive private surveillance networks

Joseph Cox with Motherboard has authored a story on a massive private license plate surveillance network called DRN:

This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN’s database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network.

This Company Built a Private Surveillance Network. We Tracked Someone With It

I wrote recently about private surveillance projects that may meet or exceed government efforts. It won’t be long before license plate readers become facial recognition scanners. It’s probably happening now.

AIs make a lot of guesses and we should know that

One of the most important AI ethics tasks is to educate developers and especially users about what AIs can and cannot do well. AI systems do amazing things, and users mostly assume those things are done accurately based on a few demonstrations. For example, police assume that facial recognition systems accurately tag bad guys and that license plate databases accurately list stolen cars. But these systems are brittle, and an excellent example of this is the fun, new ImageNet Roulette web tool put together by artist and researcher Trevor Paglen.

ImageNet Roulette is a provocation designed to help us see into the ways that humans are classified in machine learning systems. It uses a neural network trained on the “Person” categories from the ImageNet dataset which has over 2,500 labels used to classify images of people. 

ImageNet Roulette (via The Verge)

The service claims not to keep any uploaded photos, so if you trust them, you can upload a webcam image of yourself and see how the internet classifies your face.

Of course no human would look at a random image of another human devoid of context and attempt to assign a description such as “pipe smoker” or “newspaper reader.” We would say, “I don’t know. It just looks like a person.”

But AIs aren’t that smart yet. They don’t know what they can’t know. So ImageNet Roulette calculates the probability that an image falls under each of its descriptions, and then it outputs the highest-probability description. It’s a shot in the dark. You might think it is seeing something deep, but nope. It has 2,500 labels and it has to apply one. I apparently look like a sociologist.
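That forced-choice behavior is easy to sketch. The toy labels and scores below are made up for illustration, not taken from ImageNet Roulette; the point is that an argmax over softmax probabilities always produces a confident-looking answer, with no “I don’t know” option:

```python
import math

# Hypothetical labels -- a tiny stand-in for ImageNet's 2,500 "Person" categories.
LABELS = ["pipe smoker", "newspaper reader", "sociologist"]

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Return the highest-probability label; no abstaining is possible."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Even near-uniform scores yield a single definite-sounding label.
label, p = classify([0.1, 0.0, 0.2])
print(label, round(p, 2))  # the winning label has barely more than 1/3 probability
```

Note that the winner here beats a uniform guess only marginally, which is exactly the “shot in the dark” problem: the interface shows you the label, not the uncertainty.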

Moving beyond detection of statistical patterns in AI

Gary Marcus and Ernest Davis writing for the NYT:

. . . We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.

Google’s Talk to Books, an A.I. venture that aims to answer your questions by providing relevant passages from a huge database of texts, did no better. It served up 20 passages with a wide array of facts, some about George Washington, others about the invention of computers, but with no meaningful connection between the two.

The situation is even worse when it comes to A.I. and the concepts of space and causality. Even a young child, encountering a cheese grater for the first time, can figure out why it has holes with sharp edges, which parts allow cheese to drop through, which parts you grasp with your fingers and so on. But no existing A.I. can properly understand how the shape of an object is related to its function. Machines can identify what things are, but not how something’s physical features correspond to its potential causal effects.

How to Build Artificial Intelligence We Can Trust

Modern AIs have no basic understanding of the world, and there’s not much progress.

Does automating the boring stuff leave only the unpleasant stuff?

Fred Benenson, writing for The Atlantic, is concerned that AI automation will leave only the most difficult and unpleasant tasks for humans:

What’s less understood is that artificial intelligence will transform higher-skill positions, too—in ways that demand more human judgment rather than less. And that could be a problem. As AI gets better at performing the routine tasks traditionally done by humans, only the hardest ones will be left for us to do. But wrestling with only difficult decisions all day long is stressful and unpleasant. Being able to make at least some easy calls, such as allowing Santorini onto Kickstarter, can be deeply satisfying.

“Decision making is very cognitively draining,” the author and former clinical psychologist Alice Boyes told me via email, “so it’s nice to have some tasks that provide a sense of accomplishment but just require getting it done and repeating what you know, rather than everything needing very taxing novel decision making.”

AI Is Coming for Your Favorite Menial Tasks

He recognizes that many professions (e.g., lawyers!) may welcome automation of the boring stuff. But he’s particularly concerned about content moderators.

But we may find that as jobs get harder, the benefits get better.

Survey suggests most Americans support police use of facial recognition technology

According to the Pew Research Center, a full 56 percent of U.S. adults said that they trust police and officials to use these technologies responsibly. That goes for situations in which no consent is given: About 59 percent said it is acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces.

Police Use of Facial Recognition is Just Fine, Say Most Americans

Black and Hispanic adults approve at lower rates. See the study for details.

UK court approves police use of facial recognition

In contrast to recent U.S. municipal decisions restricting government use of facial recognition technology, a UK court has ruled that police use of the technology does not violate any fundamental rights.

In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

Police Use of Facial Recognition Is Accepted by British Court

The UK is of course one of the most surveilled countries in the world.

Language models are improving quickly

Aristo, an AI developed by the Allen Institute for Artificial Intelligence, passed an eighth-grade science test with a score of 90 percent. It builds on Google’s BERT language model.

At Google, researchers built a system called Bert that combed through thousands of Wikipedia articles and a vast digital library of romance novels, science fiction and other self-published books.

Through analyzing all that text, Bert learned how to guess the missing word in a sentence. By learning that one skill, Bert soaked up enormous amounts of information about the fundamental ways language is constructed. And researchers could apply that knowledge to other tasks.

The Allen Institute built their Aristo system on top of the Bert technology. They fed Bert a wide range of questions and answers. In time, it learned to answer similar questions on its own.
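The masked-word objective described above can be sketched in miniature. Real BERT uses a transformer trained on enormous corpora; in this illustrative stand-in, simple bigram counts from a toy corpus play the role of the learned model that fills in a hidden word from context:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for Wikipedia and the book collections.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count which word follows each word across the corpus.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def fill_mask(sentence):
    """Replace '[MASK]' with the word most often seen after the preceding word."""
    words = sentence.split()
    i = words.index("[MASK]")
    guess, _ = following[words[i - 1]].most_common(1)[0]
    return " ".join(words[:i] + [guess] + words[i + 1:])

print(fill_mask("the dog [MASK] on the rug"))  # -> "the dog sat on the rug"
```

The design point carries over: by practicing the single skill of guessing hidden words, a model is forced to absorb regularities of how language is constructed, and that knowledge can then be repurposed for other tasks, such as Aristo’s question answering.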

A Breakthrough for A.I. Technology: Passing an 8th-Grade Science Test

AI for military also means compassion

Phenomenal essay by Lucas Kunce, a U.S. Marine who served in Iraq and Afghanistan, responding to news that 4,600 Google employees signed a petition urging the company to refuse to build weapons technology:

People frequently threw objects of all sizes at our vehicles in anger and protest. Aside from roadside bombs, the biggest threat at the time, particularly in crowded areas, was an armor-piercing hand-held grenade. It looked like a dark soda can with a handle protruding from the bottom. Or, from a distance and with only an instant to decide, it looked just like many of the other objects that were thrown at us. 

One day in Falluja, at the site of a previous attack, an Iraqi man threw a dark oblong object at one of the vehicles in my sister team. The Marine in the turret, believing it was an armor-piercing grenade, shot the man in the chest. The object turned out to be a shoe.

[. . . . .]

When I think about A.I. and weapons development, I don’t imagine Skynet, the Terminator, or some other Hollywood dream of killer robots. I picture the Marines I know patrolling Falluja with a heads-up display like HoloLens, tied to sensors and to an A.I. system that can process data faster and more precisely than humanly possible — an interface that helps them identify an object as a shoe, or an approaching truck as too light to be laden with explosives.

Dear Tech Workers, U.S. Service Members Need Your Help

Interview with John Shawe-Taylor, professor at University College London

I enjoyed this interview and especially the title: “Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair.”

What are some core problems or research areas you want to approach?

People are now solving problems just by throwing an enormous amount of computation and data at them and trying every possible way. You can afford to do that if you are a big company and have a lot of resources, but people in developing countries cannot afford the data or the computational resources. So the theoretical challenge, or the fundamental challenge, is how to develop methods that are better understood and therefore don’t need experiments with hundreds of variants to get things to work.

Another thing is that some of the problems with current datasets, especially in terms of the usefulness of these systems for different cultures, is that there is a cultural bias in the data that has been collected. It is Western data informed with the Western way of seeing and doing things, so to some extent having data from different cultures and different environments is going to help make things more useful. You need to learn from data that is more relevant to the task.

Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair

And of course:

“Solving” is probably too strong, but for addressing those problems, as I’ve said, the problem is that we don’t realise that they are the reflections of our own problems. We don’t realise how biased we are until we see an AI reproduce the same bias, and we see that it’s biased.

I chuckle a bit when I hear about biased humans going over biased data in the hopes of creating unbiased data. Bias is a really hard problem, and it’s always going to be with us in one form or another. Education and awareness are the most important tools for addressing it.