EU Expert Group Favors Banning AI Mass Surveillance and AI Deception

The EU High-Level Expert Group on Artificial Intelligence released its Policy and Investment Recommendations for Trustworthy AI today. The 50-page document is a bit more prescriptive than the group’s previous Ethics Guidelines, and suggests that governments “refrain from disproportionate and mass surveillance of individuals” and “introduce mandatory self-identification of AI systems.” (But see deceptive NYPD chatbots.)

A big chunk of the report also urges the EU to invest in education and subject matter expertise.

So far the discussion around AI mass surveillance has been relatively binary: do it or not. At some point I expect we will see proposals to do mass surveillance while maintaining individual privacy. The security benefits of mass surveillance are too attractive to forego.

AIs evaluating humans

Here’s an application that could use some transparency:

When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

A Machine May Not Take Your Job, but One Could Become Your Boss

I have no idea how this AI might have been trained, and the article sheds no light.

Do as I say, not as I do: robot edition

Deep learning has revolutionized artificial intelligence. We’ve moved from telling computers how to do things to telling them what to do and letting them figure out how. For many tasks (e.g., object identification) we can’t even really explain how to do them. It’s easier to just tell a system, “This is a ball. When you see this, identify it as a ball. Now here are 1M more examples.” And the system learns pretty well.
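
In practice, “just show it labeled examples” looks something like the minimal sketch below. It assumes PyTorch and torchvision and a hypothetical folder of labeled photos; it is my own illustration, not code from any of the articles quoted here.

```python
# Minimal, hypothetical sketch (PyTorch/torchvision assumed):
# show the system labeled examples and let it figure out the rest.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Labeled examples: each image in the (hypothetical) folder comes with a class like "ball".
data = datasets.ImageFolder(
    "training_images/",
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(data.classes))   # off-the-shelf image classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                 # "here are 1M more examples"
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)     # penalize wrong identifications
    loss.backward()
    optimizer.step()                          # the system adjusts itself and learns
```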

Except when the system doesn’t. There is a burgeoning new science of trying to tell artificial intelligence systems exactly what we want them to do:

Told to optimize for speed while racing down a track in a computer game, a car pushes the pedal to the metal … and proceeds to spin in a tight little circle. Nothing in the instructions told the car to drive straight, and so it improvised.

[. . . . .]

The team’s new system for providing instruction to robots — known as reward functions — combines demonstrations, in which humans show the robot what to do, and user preference surveys, in which people answer questions about how they want the robot to behave.

“Demonstrations are informative but they can be noisy. On the other hand, preferences provide, at most, one bit of information, but are way more accurate,” said Sadigh. “Our goal is to get the best of both worlds, and combine data coming from both of these sources more intelligently to better learn about humans’ preferred reward function.”

Researchers teach robots what humans want

This is critical research, and probably under-reported. If robots (like people) are going to learn mainly by mimicking humans, what human behaviors should they mimic?

People want autonomous cars to drive less aggressively than they themselves do. And robots should also be less racist, sexist, and violent than the humans they learn from. Getting the right reward function is critical. Getting it wrong may be immoral.
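
For a sense of what “combining demonstrations and preferences” can look like, here is a toy sketch of preference-based reward learning: the reward is a weighted sum of trajectory features, and the weights are fit to pairwise “which of these two behaviors do you prefer?” answers with a logistic (Bradley–Terry) model. This is my own illustration of the general idea, not the Stanford team’s code.

```python
# Toy sketch of learning a reward function from pairwise human preferences.
# Each trajectory is summarized by a feature vector, e.g. [avg speed, lane centering].
import numpy as np

def learn_reward_weights(preferences, n_features, lr=0.1, steps=2000):
    """preferences: list of (features_preferred, features_rejected) pairs."""
    w = np.zeros(n_features)
    for _ in range(steps):
        for f_win, f_lose in preferences:
            diff = np.asarray(f_win) - np.asarray(f_lose)
            p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(human prefers f_win) under current weights
            w += lr * (1.0 - p) * diff            # gradient ascent on the log-likelihood
    return w

# People consistently prefer the slower, better-centered trajectory:
prefs = [([0.4, 0.9], [0.9, 0.2]), ([0.3, 0.8], [0.8, 0.3])]
print(learn_reward_weights(prefs, n_features=2))  # negative weight on speed, positive on centering
```

Demonstrations would enter the same framework as additional evidence about the weights; as Sadigh notes, each preference answer contributes at most one bit of information, but a very reliable one.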

Rock Paper Scissors robot wins 100% of the time

Via Schneier on Security, this is old but I hadn’t seen it either:

The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but it can beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating which both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow.

Rock Paper Scissors robot wins 100% of the time

Having super-human reaction times is a nice feature, and this certainly isn’t the only application.
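
The “cheat” is reaction time: published descriptions of the robot say a high-speed camera recognizes the shape of the human hand mid-throw, and the robot plays the winning counter before the throw finishes. A toy sketch of that logic (the gesture classifier here is hypothetical):

```python
# Toy sketch of the reaction-time "cheat" (the gesture classifier is hypothetical).
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def respond(hand_image, classify_gesture):
    gesture = classify_gesture(hand_image)  # e.g. "rock", recognized mid-throw by high-speed vision
    return BEATS[gesture]                   # play the counter faster than a human could react
```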

Beijing AI Principles

This is fascinating in light of China’s use of AI for automated racism against the minority Muslim population in Xinjiang:

A group of leading institutes and companies have published a set of ethical standards for AI research and called for cross-border cooperation amid vigorous development of the industry.

The Beijing AI Principles was jointly unveiled Saturday by the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University, Institute of Automation and Institute of Computing Technology in Chinese Academy of Sciences, and an AI industrial league involving firms like Baidu, Alibaba and Tencent.

“The development of AI is a common challenge for all humanity. Only through coordination on a global scale can we build AI that is beneficial to both humanity and nature,” said BAAI director Zeng Yi.

Beijing publishes AI ethical standards, calls for int’l cooperation

The principles themselves are as laudable and vague as most other frameworks: “Do Good,” “Be Ethical.” They explicitly call out the human rights of privacy, dignity, freedom, and autonomy. It’s difficult to say whether this is a sign of internal dissent or strategic positioning, given the primarily academic and commercial origin of the framework.

Using facial recognition in police investigations

The Georgetown Law Center on Privacy & Technology issued a report (with its own vanity URL!) on the NYPD’s use of face recognition technology, and it starts with a particularly arresting anecdote:

On April 28, 2017, a suspect was caught on camera reportedly stealing beer from a CVS in New York City. The store surveillance camera that recorded the incident captured the suspect’s face, but it was partially obscured and highly pixelated. When the investigating detectives submitted the photo to the New York Police Department’s (NYPD) facial recognition system, it returned no useful matches.

Rather than concluding that the suspect could not be identified using face recognition, however, the detectives got creative.

One detective from the Facial Identification Section (FIS), responsible for conducting face recognition searches for the NYPD, noted that the suspect looked like the actor Woody Harrelson, known for his performances in Cheers, Natural Born Killers, True Detective, and other television shows and movies. A Google image search for the actor predictably returned high-quality images, which detectives then submitted to the face recognition algorithm in place of the suspect’s photo. In the resulting list of possible candidates, the detectives identified someone they believed was a match—not to Harrelson but to the suspect whose photo had produced no possible hits.

This celebrity “match” was sent back to the investigating officers, and someone who was not Woody Harrelson was eventually arrested for petit larceny.

GARBAGE IN, GARBAGE OUT: FACE RECOGNITION ON FLAWED DATA

The report describes a number of incidents that it views as problematic, and they basically fall into two categories: (1) editing or reconstructing photos before submitting them to face recognition systems; and (2) simply uploading composite sketches of suspects to face recognition systems.

The report also describes a few incidents in which individuals were arrested based on very little evidence apart from the results of the face recognition technology, and it makes the claim that:

If it were discovered that a forensic fingerprint expert was graphically replacing missing or blurry portions of a latent print with computer-generated—or manually drawn—lines, or mirroring over a partial print to complete the finger, it would be a scandal.

I’m not sure this is true. Helping a computer system latch onto a possible set of matches seems an excellent way to narrow a list of suspects. But of course no one should be arrested or convicted based solely on fabricated fingerprint or facial “evidence”. We need to understand the limits of the technology used in the investigative process.

As technology becomes more complex, it is increasingly difficult to understand how it works and does not work. License plate readers are fantastically powerful technology, responsible for solving really terrible crimes. But the technology stack makes mistakes. You cannot rely on it alone.

There is no difference in principle between facial recognition, genealogy searches, and license plate readers. They are powerful tools, but they are not perfect. And, crucially, they can be far less accurate when used on minority populations. The benefits are remarkable, but using powerful tools requires training: users need to understand how the technology works and where it breaks down. That will always be true.

Salvador Dalí recreated with AI at Dalí Museum in Florida

What is dead may never die, at least with AI. The painter Salvador Dalí has been recreated on life-size video to interact with visitors to the Dalí Museum in St. Petersburg, Florida.

Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.

. . . . .

It’s hard to think of another artist who would be better suited for this than Dalí.

Deepfake Salvador Dalí takes selfies with museum visitors

This is going to be everywhere soon. How long until people start paying to have themselves recreated after they die?

The video is worth watching.

SF restricts its government agencies from using facial recognition technology

There are many reports that “SF bans facial recognition” (I’m looking at you, NYT), but this is not true. The “ban” is a restriction that prevents the city’s own government agencies (including the police) from using facial recognition.

San Francisco’s ban covers government agencies, including the city police and county sheriff’s department, but doesn’t affect the technology that unlocks your iPhone or cameras installed by businesses or individuals. It’s part of a broader package of rules, introduced in January by supervisor Aaron Peskin, that will require agencies to gain approval from the board before purchasing surveillance tech and will require that they publicly disclose its intended use.

SAN FRANCISCO BANS AGENCY USE OF FACIAL-RECOGNITION TECH

None of the reporting seems to link to the actual ordinance, but you can find it on the SF Board of Supervisors’ website: it is file #190110, introduced 1/29/2019. The ordinance itself is here; a summary is here.

Play with OpenAI’s GPT-2 language generation model

In February 2019, OpenAI disclosed a language generation algorithm called GPT-2. It does only one thing: predict the next word given all the previous words in a text. And, while not perfect, it does this very well. When prompted with:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

it responds with:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

(The text continues.)
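
Under the hood, that continuation is nothing more than repeated next-word prediction. Here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released small GPT-2 weights (a tooling choice of mine, not part of OpenAI’s announcement):

```python
# Minimal sketch of GPT-2's next-word objective, using the Hugging Face
# "transformers" library and the released small model (my tooling choice).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits              # a score for every vocabulary token at every position
probs = torch.softmax(logits[0, -1], dim=-1)      # distribution over the *next* word only
top = torch.topk(probs, 5)
print([tokenizer.decode([i]) for i in top.indices.tolist()])  # the model's best guesses

# "Generation" is just sampling from that distribution over and over:
out = model.generate(input_ids, max_length=60, do_sample=True, top_k=40)
print(tokenizer.decode(out[0]))
```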

GPT-2 is a transformer-based neural network with 1.5 billion parameters, trained on a dataset of 8 million web pages. Transformer networks were introduced by Google researchers in 2017, primarily for language translation. They work by figuring out how much attention to pay to which words: some words carry more semantic weight than others, and with large amounts of training data the network learns how to weight them. The biggest benefit of a transformer is that its computation parallelizes easily, in contrast to the more traditional, sequential RNN models previously used for translation.
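
The “how much attention to pay to which words” part boils down to scaled dot-product attention. A toy NumPy sketch (my own illustration, not GPT-2’s actual code):

```python
# Toy scaled dot-product attention (illustrative only).
import numpy as np

def attention(Q, K, V):
    d = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # relevance of every word to every other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: attention weights sum to 1 per word
    return weights @ V                                 # each word becomes a weighted mix of the others

# 4 words with 8-dimensional vectors; in a real transformer Q, K, V are learned projections of the input.
x = np.random.default_rng(0).normal(size=(4, 8))
print(attention(x, x, x).shape)                        # (4, 8): one updated vector per word
```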

In a controversial move, OpenAI originally declined to make the GPT-2 model available to researchers, citing concerns about it being used to create “deceptive, biased, or abusive language at scale . . . .” Recently, however, they have released a smaller, less capable version of the model, and are considering other ways to share the research with AI partners.

Anyway… now you can play with the smaller GPT-2 model yourself at TalkToTransformer.com.