Animals using artificial (or at least non-neural) intelligence

Joshua Sokol for Quanta Magazine:

And then there are animals that appear to offload part of their mental apparatus to structures outside of the neural system entirely. Female crickets, for example, orient themselves toward the calls of the loudest males. They pick up the sound using ears on each of the knees of their two front legs. These ears are connected to one another through a tracheal tube. Sound waves come in to both ears and then pass through the tube before interfering with one another in each ear. The system is set up so that the ear closest to the source of the sound will vibrate most strongly.

In crickets, the information processing — the job of finding and identifying the direction that the loudest sound is coming from — appears to take place in the physical structures of the ears and tracheal tube, not inside the brain. Once these structures have finished processing the information, it gets passed to the neural system, which tells the legs to turn the cricket in the right direction.

The Thoughts of a Spiderweb

The broader concept is known as “extended cognition,” and in my view it may just be semantics. Many natural and artificial features of our environments, from ear shape to computers, amplify and filter information in ways that reduce cognitive load. I’d hesitate to describe these as “cognition.” But intelligence as a concept is certainly broader than brains.
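
The tube mechanism is simple enough to simulate. Here’s a toy model, mine and not the article’s: treat each eardrum as responding to the difference between the sound hitting it from outside and the sound arriving internally through the tracheal tube. Every number below (frequency, ear separation, tube delay) is illustrative, not measured cricket anatomy.

```python
import numpy as np

# Toy model of the cricket's mechanical direction finder: plane waves,
# a single calling frequency, and eardrums that respond to the pressure
# difference between their outer surface and the tube-delayed sound
# arriving from the opposite ear.

C = 343.0            # speed of sound in air, m/s
F = 4800.0           # Hz, roughly cricket calling-song range (assumed)
W = 2 * np.pi * F    # angular frequency
D = 0.01             # m, assumed separation between the two ears
TUBE_DELAY = D / C   # assumed travel time through the tracheal tube

def drum_amplitudes(angle_deg):
    """Vibration amplitude at each eardrum for a distant source at
    angle_deg (0 = straight ahead, positive = to the cricket's right)."""
    dt = (D / C) * np.sin(np.radians(angle_deg))  # interaural delay
    # |cos(wt) - cos(w(t - tau))| oscillates with amplitude 2|sin(w*tau/2)|,
    # so a drum driven by (external - internal) waves vibrates with:
    right = 2 * abs(np.sin(W * (dt + TUBE_DELAY) / 2))
    left = 2 * abs(np.sin(W * (TUBE_DELAY - dt) / 2))
    return left, right

for angle in (-60, -20, 0, 20, 60):
    left, right = drum_amplitudes(angle)
    louder = "left" if left > right else "right" if right > left else "even"
    print(f"source at {angle:+3d} deg -> L={left:.2f} R={right:.2f} ({louder})")
```

Run it and the louder drum is always the one nearer the source: the direction finding is done before a single neuron fires.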

OpenAI identifies AI Ethics as a collective action problem

OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:

If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety

And they identify four strategies to address this issue:

  1. Promote accurate beliefs about the opportunities for cooperation
  2. Collaborate on shared research and engineering challenges
  3. Open up more aspects of AI development to appropriate oversight and feedback
  4. Incentivize adherence to high standards of safety

The bottom line is that the normal factors encouraging the development of safe products (market forces, liability law, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is unnecessary.
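
To make the dilemma concrete, here’s a minimal sketch with invented payoffs (the numbers are mine, not OpenAI’s). When being first to market pays more than being careful, “rush” is each firm’s best response no matter what the rival does, even though mutual safety pays both firms more:

```python
# (firm A's move, firm B's move) -> (A's payoff, B's payoff)
PAYOFFS = {
    ("safe", "safe"): (3, 3),  # both take their time; market shared safely
    ("safe", "rush"): (0, 4),  # the careful firm gets beaten to market
    ("rush", "safe"): (4, 0),
    ("rush", "rush"): (1, 1),  # race to the bottom
}

def best_response(rival_move):
    """Firm A's payoff-maximizing move given the rival's move."""
    return max(("safe", "rush"), key=lambda m: PAYOFFS[(m, rival_move)][0])

for rival in ("safe", "rush"):
    print(f"rival plays {rival!r} -> best response: {best_response(rival)!r}")
# Both lines print 'rush', so (rush, rush) is the equilibrium even though
# (safe, safe) pays each firm more. OpenAI's four strategies are all
# attempts to change these payoffs.
```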

We reserve the right to *allow the AI* to refuse service to anyone

From denying a cat’s entry to denying a person’s entry:

A sign at the front of the store reads, “Look at camera for entry” and “Facial Recognition Technology In Use.” An automated voice tells approaching customers through a mounted speaker to look up at the camera.

KIRO 7 first learned about the facial recognition technology being used there from a customer, a regular at the store, who posted about it on Facebook after stopping by the 24-hour location at night last week.

“Tonight, I was confronted with a whole new Jackson’s,” she wrote. “You had to stare at the camera before they let you in.”

The woman said she was told by clerks that the technology is being used to cut down on thefts from the store.

“Sometimes I would walk out of there, jaw to the ground, at the in-your-face theft,” she wrote.

‘Look at camera for entry’: Tacoma convenience store using facial recognition technology

The makers of this technology, Blue Line Technology, seem rather overconfident: “Blue Line Technology spokesperson Sawyer said the software has never misidentified anyone.” Meanwhile, even state-of-the-art facial recognition technology is notoriously less accurate for dark-skinned individuals and for women.

So if you’re a dark-skinned individual mistakenly identified as “bad,” what does the appeal process look like? Stand outside and shout at the clerk?

Detecting deepfakes by committee

I guess this is a plan?

To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.

Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on-call to answer reporters’ queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members which help them determine whether the content is fake or not.

‘A perfect storm’: The Wall Street Journal has 21 people detecting ‘deepfakes’

Major supplier of police body cameras concludes facial recognition is not reliable enough to sell ethically

Chaim Gartenberg, writing for The Verge:

Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.

[. . . . .]

According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.

Axon (formerly Taser) says facial recognition on police body cams is unethical

One issue we keep sidestepping is that facial recognition technology will never be either perfectly accurate or perfectly equal across all classes of people. No matter how accurate the technology becomes, there will always be some difference in performance between, for example, recognizing light-skinned and dark-skinned faces. So the question becomes: what difference in accuracy, if any, is tolerable?
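
As a toy illustration (every figure and the threshold below is invented), the policy question ultimately reduces to picking a number and defending it:

```python
# Hypothetical per-group accuracy for a face matcher; figures are made up.
accuracy_by_group = {
    "light-skinned men": 0.994,
    "light-skinned women": 0.988,
    "dark-skinned men": 0.981,
    "dark-skinned women": 0.965,
}

TOLERANCE = 0.01  # the hard part is justifying this number

gap = max(accuracy_by_group.values()) - min(accuracy_by_group.values())
print(f"best-to-worst accuracy gap: {gap:.1%}")
print("within tolerance" if gap <= TOLERANCE else "exceeds tolerance")
```

A gap that looks small on paper still falls on real people at the store door.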

EU Expert Group Favors Banning AI Mass Surveillance and AI Deception

The EU High-Level Expert Group on Artificial Intelligence released its Policy and Investment Recommendations for Trustworthy AI today. The 50-page document is a bit more prescriptive than their previous Ethics Guidelines, and suggests that governments “refrain from disproportionate and mass surveillance of individuals” and “introduce mandatory self-identification of AI systems.” (But see deceptive NYPD chatbots.)

A big chunk of the report also urges the EU to invest in education and subject matter expertise.

So far the discussion around AI mass surveillance has been relatively binary: do it or not. At some point I expect we will see proposals to do mass surveillance while maintaining individual privacy. The security benefits of mass surveillance are too attractive to forgo.

AIs evaluating humans

Here’s an application that could use some transparency:

When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

A Machine May Not Take Your Job, but One Could Become Your Boss

I have no idea how this AI might have been trained, and the article sheds no light.
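
The simplest version I can imagine, and this is pure speculation on my part, would be threshold rules over measurable speech features. Every function name, feature, and cutoff below is invented:

```python
def coaching_cues(words_per_minute, pitch_variance, customer_sentiment):
    """Icons a hypothetical rule-based coach might flash for one
    stretch of audio (all thresholds are made up)."""
    cues = []
    if words_per_minute > 180:     # talking too fast -> speedometer icon
        cues.append("speedometer")
    if pitch_variance < 0.2:       # flat, low-energy delivery -> coffee cup
        cues.append("energy")
    if customer_sentiment < -0.5:  # upset customer -> heart icon
        cues.append("empathy")
    return cues

print(coaching_cues(words_per_minute=195, pitch_variance=0.1,
                    customer_sentiment=-0.7))
# -> ['speedometer', 'energy', 'empathy']
```

If it is instead a model trained on recordings of “good” calls, the transparency problem is worse: whose calls defined good?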

Do as I say, not as I do: robot edition

Deep learning has revolutionized artificial intelligence. We’ve shifted from telling computers how to do things to telling them what to do and letting them figure it out. For many activities (e.g., object identification) we can’t really explain how to do them anyway. It’s easier to tell a system, “This is a ball. When you see this, identify it as a ball. Now here are 1M more examples.” And the system learns pretty well.
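
Here’s that approach in miniature (toy data, with scikit-learn assumed purely for illustration): no rules about what makes a ball, just labeled examples and a model left to figure it out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
balls = rng.normal(loc=[2, 2], scale=0.5, size=(500, 2))        # "this is a ball"
not_balls = rng.normal(loc=[-2, -2], scale=0.5, size=(500, 2))  # "this is not"
X = np.vstack([balls, not_balls])
y = np.array([1] * 500 + [0] * 500)

clf = LogisticRegression().fit(X, y)            # never told *how* to decide
print(clf.predict([[1.8, 2.1], [-2.2, -1.9]]))  # -> [1 0], learned from examples
```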

Except when it doesn’t. There is a burgeoning new science of trying to tell artificial intelligence systems what exactly we want them to do:

Told to optimize for speed while racing down a track in a computer game, a car pushes the pedal to the metal … and proceeds to spin in a tight little circle. Nothing in the instructions told the car to drive straight, and so it improvised.

[. . . . .]

The team’s new system for providing instruction to robots — known as reward functions — combines demonstrations, in which humans show the robot what to do, and user preference surveys, in which people answer questions about how they want the robot to behave.

“Demonstrations are informative but they can be noisy. On the other hand, preferences provide, at most, one bit of information, but are way more accurate,” said Sadigh. “Our goal is to get the best of both worlds, and combine data coming from both of these sources more intelligently to better learn about humans’ preferred reward function.”

Researchers teach robots what humans want

This is critical research, and probably under-reported. If robots (like people) are going to learn mainly by mimicking humans, what human behaviors should they mimic?

People want autonomous cars to drive less aggressively than they themselves do. And robots should also be less racist, sexist, and violent than the humans they learn from. Getting the right reward function is critical. Getting it wrong may be immoral.
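
Here’s a sketch of the combination Sadigh describes, not the paper’s actual algorithm: assume reward is a weighted sum of trajectory features, then score candidate weights by how well they explain both a noisy demonstration and a few reliable pairwise preferences. All the data below is made up.

```python
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.normal(size=(1000, 3))  # candidate reward weight vectors
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)

demo = np.array([0.9, 0.2, -0.4])  # features of the demonstrated trajectory

# Preference answers: the human preferred trajectory a over trajectory b.
prefs = [(np.array([0.8, 0.1, -0.3]), np.array([0.1, 0.9, 0.5])),
         (np.array([0.7, -0.2, -0.5]), np.array([0.4, 0.6, 0.0]))]

def log_likelihood(w):
    # Demonstration term, downweighted: demos are informative but noisy.
    ll = 0.5 * (w @ demo)
    # Preference terms, Bradley-Terry style: ~one bit each, but reliable.
    for a, b in prefs:
        ll += -np.log1p(np.exp(-(w @ a - w @ b)))
    return ll

best = max(candidates, key=log_likelihood)
print("inferred reward weights:", np.round(best, 2))
```

The 0.5 demonstration weight is the knob: turn it down and you trust preferences more, up and you trust the demonstration more.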

Rock Paper Scissors robot wins 100% of the time

Via Schneier on Security, this is old but I hadn’t seen it either:

The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but it can beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating which both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow.

Rock Paper Scissors robot wins 100% of the time

Having super-human reaction times is a nice feature, and this certainly isn’t the only application.
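
The counter-move logic itself is trivial once the vision problem is solved; the hard part is recognizing the human’s hand within milliseconds of it starting to form. A hypothetical sketch, with the high-speed vision system abstracted away:

```python
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def robot_move(recognized_human_gesture):
    """Given the gesture the vision system has already recognized
    (milliseconds into the human's throw), return the winning reply."""
    return COUNTER[recognized_human_gesture]

for human in ("rock", "paper", "scissors"):
    print(f"human starts {human:>8} -> robot throws {robot_move(human)}")
```

The same see-faster-than-you-can-move trick generalizes to anything where a machine can finish reacting before a human finishes acting.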