The new charge of “ethics washing”

Khari Johnson for VentureBeat:

One of the essential phrases necessary to understand AI in 2019 has to be “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes “AI for good” initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other.

How AI companies can avoid ethics washing

I don’t think it’s fair to accuse companies of false effort in AI ethics quite yet. There are no generally accepted standards to measure their efforts against.

Oakland, CA also bans city agencies from using facial recognition tech

Following on the heels of San Francisco and Somerville, Massachusetts:

The Oakland city council voted last night to pass an ordinance banning city agencies from using facial recognition technology. The move sets up Oakland to become the third city in the United States to pass similar legislation.

Oakland city council votes to ban government use of facial recognition

Are we entering an AI cool-down, in which the hard tech gets acknowledged as hard and the effective tech gets banned? It makes a certain amount of sense, of course: effective is dangerous. We need good processes.

Defining a “bot” is hard

A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:

California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot at 3.

As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:

A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.

Id. at 40.

One of the major problems is that so much unethical behavior is a combination of human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.

Human-Centered AI Tools

This is a fantastic piece of work (and paper title!) about the benefits of human-in-the-loop AI processes.

Based on identified user needs, we designed and implemented SMILY (Figure 2), a deep-learning based CBIR [content-based image retrieval] system that includes a set of refinement mechanisms to guide the search process. Similar to existing medical CBIR systems, SMILY enables pathologists to query the system with an image, and then view the most similar images from past cases along with their prior diagnoses. The pathologist can then compare and contrast those images to the query image, before making a decision.

Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making (via The Gradient)

The system used three primary refinement tools: (1) refine by region; (2) refine by example; and (3) refine by concept. The authors reported that users found the software to offer greater mental support, and that users were naturally focused on explaining surprising results: “They make me wonder, ‘Oh am I making an error?'” Critically, this allowed users some insight into how the algorithm worked without an explicit explanation.
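For readers who want to picture the retrieval loop, here is a minimal sketch of the general CBIR pattern the paper describes: embed the query image, rank stored past cases by similarity, and optionally blend in an example the user marks as relevant. It is not SMILY’s actual implementation; the placeholder embeddings, the cosine-similarity metric, and the blending weight are all assumptions made purely for illustration.

```python
# Minimal CBIR sketch: a bank of past-case embeddings with prior diagnoses,
# nearest-neighbor retrieval by cosine similarity, and a crude "refine by
# example" step. A real system (like SMILY) would produce the embeddings with
# a trained deep network; here they are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128
N_CASES = 1_000

# Database of past cases: one embedding vector and one prior diagnosis each.
case_embeddings = rng.normal(size=(N_CASES, EMBED_DIM)).astype(np.float32)
case_diagnoses = rng.choice(["benign", "suspicious", "malignant"], size=N_CASES)

def cosine_similarity(query: np.ndarray, bank: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and every row of `bank`."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return b @ q

def retrieve(query_emb: np.ndarray, k: int = 5):
    """Return the top-k most similar past cases with their prior diagnoses."""
    sims = cosine_similarity(query_emb, case_embeddings)
    top = np.argsort(-sims)[:k]
    return [(int(i), str(case_diagnoses[i]), float(sims[i])) for i in top]

def refine_by_example(query_emb: np.ndarray, example_emb: np.ndarray,
                      weight: float = 0.5, k: int = 5):
    """Crude 'refine by example': blend the query with an image the user
    marked as relevant, then search again with the blended embedding."""
    blended = (1 - weight) * query_emb + weight * example_emb
    return retrieve(blended, k=k)

# Usage: embed the pathologist's query image (a placeholder vector here),
# retrieve similar past cases, then refine using the third result as an example.
query = rng.normal(size=EMBED_DIM).astype(np.float32)
initial = retrieve(query)
refined = refine_by_example(query, case_embeddings[initial[2][0]])
print(initial)
print(refined)
```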

I suspect human-in-the-loop AI processes are our best version of the future. They have also been proposed to resolve ethical concerns.

Animals using artificial (or at least non-neural) intelligence

Joshua Sokol for Quanta Magazine:

And then there are animals that appear to offload part of their mental apparatus to structures outside of the neural system entirely. Female crickets, for example, orient themselves toward the calls of the loudest males. They pick up the sound using ears on each of the knees of their two front legs. These ears are connected to one another through a tracheal tube. Sound waves come in to both ears and then pass through the tube before interfering with one another in each ear. The system is set up so that the ear closest to the source of the sound will vibrate most strongly.

In crickets, the information processing — the job of finding and identifying the direction that the loudest sound is coming from — appears to take place in the physical structures of the ears and tracheal tube, not inside the brain. Once these structures have finished processing the information, it gets passed to the neural system, which tells the legs to turn the cricket in the right direction.

The Thoughts of a Spiderweb

The broader concept is known as “extended cognition,” and in my view it may just be semantics. Many natural and artificial features of our environments, from ear shape to computers, amplify and filter information in ways that reduce cognitive load. I’d hesitate to describe these as “cognition.” But intelligence as a concept is certainly broader than brains.
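Still, the cricket example is concrete enough to sketch. In the toy model below, each eardrum is driven by the difference between the tone arriving directly and the same tone arriving, delayed and attenuated, through the tracheal tube. The frequency, ear spacing, tube delay, and attenuation are invented values rather than cricket measurements; the only point is that the interference pattern by itself favors the ear nearer the source, which is the “processing” happening outside the brain.

```python
# Toy model of a pressure-difference ear: each eardrum responds to the gap
# between the direct tone and the (delayed, attenuated) tone arriving through
# the tracheal tube. All numbers below are assumed for illustration, not
# measured cricket values.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
FREQ = 4_700.0           # Hz, roughly a cricket calling-song pitch (assumed)
EAR_SEPARATION = 0.01    # m between the two front-leg ears (assumed)
TUBE_DELAY = 50e-6       # s for sound to cross the tracheal tube (assumed)
TUBE_ATTENUATION = 0.8   # fraction of the tone surviving the tube (assumed)

def drive_amplitude(t_outside: float, t_inside: float) -> float:
    """Amplitude of (direct tone) minus (tube-borne tone) at one eardrum.
    For a pure tone, that difference is a sinusoid whose amplitude depends
    only on the phase offset between the two arrivals."""
    phase = 2 * np.pi * FREQ * (t_inside - t_outside)
    a = TUBE_ATTENUATION
    return float(np.sqrt(1 + a**2 - 2 * a * np.cos(phase)))

def ear_responses(angle_deg: float):
    """(left, right) eardrum response to a distant tone at angle_deg
    (0 = straight ahead, +90 = directly to the cricket's right)."""
    dt = EAR_SEPARATION * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    t_left, t_right = max(dt, 0.0), max(-dt, 0.0)  # direct arrival times
    left = drive_amplitude(t_left, t_right + TUBE_DELAY)
    right = drive_amplitude(t_right, t_left + TUBE_DELAY)
    return left, right

for angle in (-60, -20, 0, 20, 60):
    left, right = ear_responses(angle)
    louder = "left" if left > right else "right" if right > left else "equal"
    print(f"{angle:+4d} deg  left={left:.2f}  right={right:.2f}  louder: {louder}")
```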

OpenAI identifies AI ethics as a collective action problem

OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:

If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety

And they identify four strategies to address this issue:

  1. Promote accurate beliefs about the opportunities for cooperation
  2. Collaborate on shared research and engineering challenges
  3. Open up more aspects of AI development to appropriate oversight and feedback
  4. Incentivize adherence to high standards of safety

The bottom line is that the normal factors encouraging the development of safe products (the market, liability laws, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is not necessary.
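The dynamic OpenAI describes has the familiar shape of a prisoner’s dilemma over safety spending. In the toy payoff matrix below (numbers invented purely to show the structure), rushing to market is each firm’s best response no matter what the other firm does, yet both firms end up worse off than if both had invested in safety. Each of the four strategies above is, in effect, an attempt to change those payoffs.

```python
# Toy two-firm safety game: payoffs are invented to illustrate the collective
# action problem, not taken from any real market. "Rush" dominates "invest"
# for each firm individually, but mutual rushing is worse for both than
# mutual investment in safety.
ACTIONS = ("invest_in_safety", "rush_to_market")

# PAYOFF[(my_action, their_action)] = my payoff, in arbitrary units.
PAYOFF = {
    ("invest_in_safety", "invest_in_safety"): 3,  # safe products, shared market
    ("invest_in_safety", "rush_to_market"):   0,  # beaten to market
    ("rush_to_market",   "invest_in_safety"): 4,  # first-mover advantage
    ("rush_to_market",   "rush_to_market"):   1,  # unsafe race to the bottom
}

def best_response(their_action: str) -> str:
    """The action that maximizes my payoff given the other firm's action."""
    return max(ACTIONS, key=lambda mine: PAYOFF[(mine, their_action)])

for theirs in ACTIONS:
    print(f"If the other firm chooses {theirs!r}, my best response is "
          f"{best_response(theirs)!r}")

# Both firms reasoning this way land on (rush, rush) with payoff 1 each,
# even though (invest, invest) would have given each of them 3.
```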

We reserve the right to *allow the AI* to refuse service to anyone

From denying a cat’s entry to denying a person’s entry:

A sign at the front of the store reads, “Look at camera for entry” and “Facial Recognition Technology In Use.” An automated voice tells approaching customers through a mounted speaker to look up at the camera.

KIRO 7 first learned about the facial recognition technology being used there from a customer, a regular at the store, who posted about it on Facebook after stopping by the 24-hour location at night last week.

“Tonight, I was confronted with a whole new Jackson’s,” she wrote. “You had to stare at the camera before they let you in.”

The woman said she was told by clerks that the technology is being used to cut down on thefts from the store.

“Sometimes I would walk out of there, jaw to the ground, at the in-your-face theft,” she wrote.

‘Look at camera for entry’: Tacoma convenience store using facial recognition technology

The makers of this technology, Blue Line Technology, seem rather overconfident: “Blue Line Technology spokesperson Sawyer said the software has never misidentified anyone.” Meanwhile, even state-of-the-art facial recognition technology is notoriously less accurate for dark-skinned individuals and women.

So if you’re a dark-skinned individual mistakenly identified as “bad,” what does the appeal process look like? Stand outside and shout at the clerk?

Detecting deepfakes by committee

I guess this is a plan?

To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.

Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on-call to answer reporters’ queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members which help them determine whether the content is fake or not.

‘A perfect storm’: The Wall Street Journal has 21 people detecting ‘deepfakes’

Major supplier of police body cameras concludes facial recognition is not reliable enough to sell ethically

Chaim Gartenberg, writing for The Verge:

Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.

[. . . . .]

According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.

Axon (formerly Taser) says facial recognition on police body cams is unethical

One issue we keep sidestepping is that facial recognition technology is never going to be either perfectly accurate or perfectly equal across all classes of people. In other words, no matter how accurate the technology becomes, there will always be some small difference in performance between, for example, recognizing light-skinned and dark-skinned people. So the question becomes: is any difference in accuracy tolerable? If so, how much?
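To make “difference in performance” concrete, here is a sketch of how such a gap might be measured: simulate impostor (non-matching) comparison scores for two groups, apply one acceptance threshold to everyone, and compare the resulting false-match rates. The score distributions and the threshold are invented; the point is that a single global threshold yields different error rates whenever the underlying distributions differ at all.

```python
# Sketch of measuring a per-group accuracy gap in a face-matching system.
# The impostor score distributions below are synthetic stand-ins, chosen only
# to show how one global threshold turns a small distributional shift into
# different false-match rates per group.
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.80   # global "this is a match" cutoff (assumed)
N = 100_000        # impostor comparisons simulated per group

# Similarity scores for non-matching pairs ("impostors") in two groups.
# Group B's scores skew slightly higher, a stand-in for the disparities that
# audits of commercial systems have reported.
impostor_scores = {
    "group_A": rng.normal(loc=0.60, scale=0.10, size=N),
    "group_B": rng.normal(loc=0.64, scale=0.10, size=N),
}

for group, scores in impostor_scores.items():
    false_match_rate = float(np.mean(scores >= THRESHOLD))
    print(f"{group}: false-match rate at threshold {THRESHOLD:.2f} = {false_match_rate:.4f}")

# A higher false-match rate means members of that group are more often wrongly
# "recognized" as someone on a watch list, which is exactly the gap the
# question above asks us to put a number on.
```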