One of the essential phrases for understanding AI in 2019 has to be “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example among tech giants is a company that promotes “AI for good” initiatives with one hand while selling surveillance tech to governments and corporate customers with the other.
Following on the heels of San Francisco and Somerville, Massachusetts:
The Oakland city council voted last night to pass an ordinance banning city agencies from using facial recognition technology. The move sets up Oakland to become the third city in the United States to pass similar legislation.
A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:
California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.
As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:
A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.
Id. at 40.
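The regulatory design point here — test for circumvention rather than bot-hood — maps neatly onto how a server-side check might actually be written. A minimal sketch, with all field and variable names hypothetical:

```python
# A behavior-based check in the spirit of the BOTS Act: the server
# never decides whether the client "is a bot" (hard to define); it
# only asks whether a security protocol was circumvented. All field
# and variable names are hypothetical.
KNOWN_REPLAYED_TOKENS = {"token-abc"}  # CAPTCHA tokens seen before

def circumvented_protocol(request: dict) -> bool:
    # Liability attaches to the act of bypassing, not to identity:
    # a missing or replayed CAPTCHA token counts as circumvention.
    token = request.get("captcha_token")
    return token is None or token in KNOWN_REPLAYED_TOKENS

print(circumvented_protocol({"captcha_token": "token-xyz"}))  # False
print(circumvented_protocol({"captcha_token": "token-abc"}))  # True (replay)
```

Note that nothing in the check depends on who — or what — sent the request.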
One of the major problems is that so much unethical behavior combines human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.
This is a fantastic piece of work (and paper title!) about the benefits of human-in-the-loop AI processes.
Based on identified user needs, we designed and implemented SMILY (Figure 2), a deep-learning based CBIR [content-based image retrieval] system that includes a set of refinement mechanisms to guide the search process. Similar to existing medical CBIR systems, SMILY enables pathologists to query the system with an image, and then view the most similar images from past cases along with their prior diagnoses. The pathologist can then compare and contrast those images to the query image, before making a decision.
The system used three primary refinement tools: (1) refine by region; (2) refine by example; and (3) refine by concept. The authors reported that users found the software to offer greater mental support, and that users were naturally focused on explaining surprising results: “They make me wonder, ‘Oh am I making an error?’” Critically, this allowed users some insight into how the algorithm worked without an explicit explanation.
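The retrieval core of a system like SMILY can be sketched as embedding-plus-nearest-neighbor search. This is a minimal illustration, not the paper’s implementation: the embeddings are random stand-ins for a trained deep network, and the case IDs and diagnoses are fabricated.

```python
import numpy as np

# Minimal content-based image retrieval in the spirit of SMILY:
# embed images as vectors, then return the most similar past cases
# for a query. Embeddings are random stand-ins for a real network.
rng = np.random.default_rng(0)

case_ids = ["case_01", "case_02", "case_03", "case_04"]
diagnoses = {"case_01": "benign", "case_02": "tumor",
             "case_03": "benign", "case_04": "tumor"}
embeddings = rng.normal(size=(4, 128))            # one row per past case
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def retrieve(query_vec, k=2):
    """Return the k past cases most similar to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = embeddings @ q                         # cosine similarities
    top = np.argsort(sims)[::-1][:k]
    return [(case_ids[i], diagnoses[case_ids[i]], float(sims[i]))
            for i in top]

query = rng.normal(size=128)
for case, dx, sim in retrieve(query):
    print(f"{case}: prior diagnosis={dx}, similarity={sim:.2f}")
```

The refinement tools slot in naturally: refine-by-region changes which part of the image gets embedded as the query, while refine-by-example and refine-by-concept reweight or restrict which stored embeddings are searched.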
And then there are animals that appear to offload part of their mental apparatus to structures outside of the neural system entirely. Female crickets, for example, orient themselves toward the calls of the loudest males. They pick up the sound using ears on each of the knees of their two front legs. These ears are connected to one another through a tracheal tube. Sound waves come in to both ears and then pass through the tube before interfering with one another in each ear. The system is set up so that the ear closest to the source of the sound will vibrate most strongly.
In crickets, the information processing — the job of finding and identifying the direction that the loudest sound is coming from — appears to take place in the physical structures of the ears and tracheal tube, not inside the brain. Once these structures have finished processing the information, it gets passed to the neural system, which tells the legs to turn the cricket in the right direction.
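The directional trick can be sketched numerically with a pressure-difference model: each eardrum is driven by the difference between the wave arriving on its outer surface and the wave arriving via the tracheal tube. All parameters below are illustrative, not measured cricket anatomy.

```python
import numpy as np

# Toy pressure-difference model of cricket directional hearing.
# Each eardrum moves in proportion to the difference between the
# externally arriving wave and the internally (tube-)delayed wave.
f = 4800.0        # calling-song frequency, Hz (illustrative)
c = 343.0         # speed of sound, m/s
ear_gap = 0.010   # extra external path to the far ear, m (illustrative)
tube = 0.012      # internal tracheal path length, m (illustrative)
w = 2 * np.pi * f
t = np.linspace(0, 1 / f, 1000)

def membrane_amplitude(ext_delay, int_delay):
    # Net drive on the membrane = external wave minus internal wave.
    drive = np.cos(w * (t - ext_delay)) - np.cos(w * (t - int_delay))
    return drive.max()

# Source on the cricket's left.
near = membrane_amplitude(0.0, (ear_gap + tube) / c)   # left (near) ear
far = membrane_amplitude(ear_gap / c, tube / c)        # right (far) ear
print(near > far)  # the ear nearer the source vibrates more strongly
```

With these toy numbers the near ear’s drive is roughly an order of magnitude larger than the far ear’s — the “computation” falls out of path lengths and phase, before any neuron fires.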
The broader concept is known as “extended cognition,” and in my view it may just be semantics. Many natural and artificial features of our environments, from ear shape to computers, amplify and filter information in ways that reduce cognitive load. I’d hesitate to describe these as “cognition.” But intelligence as a concept is certainly broader than brains.
OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:
If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.
And they identify four strategies to address this issue:
Promote accurate beliefs about the opportunities for cooperation
Collaborate on shared research and engineering challenges
Open up more aspects of AI development to appropriate oversight and feedback
Incentivize adherence to high standards of safety
The bottom line is that the normal factors encouraging the development of safe products (the market, liability law, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is not necessary.
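The collective action problem OpenAI describes has the structure of a one-shot social dilemma, which a toy payoff matrix makes concrete. All numbers here are hypothetical:

```python
# A toy payoff matrix for the collective action problem: each firm
# chooses to develop "safe" (slower) or "rushed" AI. Hypothetical
# payoffs, picked only to exhibit the dilemma.
payoffs = {
    # (firm_a, firm_b): (payoff_a, payoff_b)
    ("safe", "safe"):     (3, 3),   # everyone waits; safe shared market
    ("safe", "rushed"):   (0, 4),   # the rusher captures the market
    ("rushed", "safe"):   (4, 0),
    ("rushed", "rushed"): (1, 1),   # race to the bottom
}

def best_response(opponent_choice):
    # Firm A's payoff-maximizing move, given firm B's choice.
    return max(("safe", "rushed"),
               key=lambda a: payoffs[(a, opponent_choice)][0])

# Rushing is the best response to either choice, so (rushed, rushed)
# is the equilibrium even though both firms prefer (safe, safe).
print(best_response("safe"), best_response("rushed"))  # rushed rushed
```

OpenAI’s four strategies can be read as attempts to change these payoffs — raising the value of mutual safety and lowering the temptation to defect.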
A sign at the front of the store reads, “Look at camera for entry” and “Facial Recognition Technology In Use.” An automated voice tells approaching customers through a mounted speaker to look up at the camera.
KIRO 7 first learned about the facial recognition technology being used there from a customer, a regular at the store, who posted about it on Facebook after stopping by the 24-hour location at night last week.
“Tonight, I was confronted with a whole new Jackson’s,” she wrote. “You had to stare at the camera before they let you in.”
The woman said she was told by clerks that the technology is being used to cut down on thefts from the store.
“Sometimes I would walk out of there, jaw to the ground, at the in-your-face theft,” she wrote.
The makers of this technology, Blue Line Technology, seem rather overconfident: “Blue Line Technology spokesperson Sawyer said the software has never misidentified anyone.” Meanwhile, state-of-the-art facial recognition technology is notoriously less accurate for dark-skinned individuals and women.
So if you’re a dark-skinned individual mistakenly identified as “bad,” what does the appeal process look like? Stand outside and shout at the clerk?
To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on-call to answer reporters’ queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members which help them determine whether the content is fake or not.
Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.
[. . . . .]
According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.
One issue we keep sidestepping is that facial recognition technology is never going to be either perfectly accurate or perfectly equal across all classes of people. In other words, no matter how accurate the technology becomes, there will always be some small difference in performance between, for example, recognizing light-skinned and dark-skinned people. So the question becomes: is any difference in accuracy tolerable? If so, how much?
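One way to make “how much?” concrete is to measure error rates separately per group and look at the gap. A toy sketch, with every outcome fabricated for illustration:

```python
# Measure a face matcher's false match rate separately per
# demographic group and compare. All trial outcomes are fabricated.

# (group, was_true_match, system_said_match)
trials = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, True),  ("group_a", False, False),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_b", False, False), ("group_b", False, False),
]

def false_match_rate(group):
    # Fraction of true non-matches the system wrongly accepted.
    non_matches = [t for t in trials if t[0] == group and not t[1]]
    return sum(1 for t in non_matches if t[2]) / len(non_matches)

gap = abs(false_match_rate("group_a") - false_match_rate("group_b"))
print(false_match_rate("group_a"), false_match_rate("group_b"), gap)
```

Any standard that says “performs equally well across races, ethnicities, genders, and other identity groups” ultimately has to name the maximum tolerable value of a number like `gap` — and that is a policy choice, not a technical one.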