Toby Walsh is an Australian professor of computer science working to prevent the development of autonomous robotic weapons:
[Y]ou can’t have machines deciding whether humans live or die. It crosses new territory. Machines don’t have our moral compass, our compassion and our emotions. Machines are not moral beings.
The technical argument is that these are potentially weapons of mass destruction, and the international community has thus far banned all other weapons of mass destruction.
Toby Walsh, A.I. Expert, Is Racing to Stop the Killer Robots
Different emphasis, but again the focus is on human-in-the-loop safety.
An artist working for a European Commission project called SHERPA, which investigates the way “smart information systems” impact human rights, has built an AI-enabled water gun:
‘Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,’ said Prof. Stahl. ‘The idea is to get people to think about what this sort of technology can do.’
Getting AI ethics wrong could ‘annihilate technical progress’
The project is intended to highlight how biased or inaccurate AI systems can impact ordinary people. Does it think you’re between 30 and 50 years old? Squirt.
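The gating logic the artist is demonstrating is trivial to write, which is part of the point. Here is a minimal Python sketch; `predict_attributes` is a hypothetical stand-in for whatever face-analysis model the gun actually uses (SHERPA has not published its implementation):

```python
# Hypothetical sketch of the decision logic behind the SHERPA water gun demo.
# predict_attributes() is a stub standing in for a real face-analysis model.

def predict_attributes(face_image):
    """Stand-in for a face-analysis model; returns predicted gender and age."""
    # A real system would run a neural network here; we return a fixed guess.
    return {"gender": "female", "age": 34}

def should_squirt(face_image, target_gender=None, age_range=None):
    """Decide whether to fire, based on (possibly wrong) predictions."""
    attrs = predict_attributes(face_image)
    if target_gender is not None and attrs["gender"] != target_gender:
        return False
    if age_range is not None:
        low, high = age_range
        if not (low <= attrs["age"] <= high):
            return False
    return True

# The artist's point: a misclassified face gets squirted (or spared) with
# no recourse -- the same failure mode as any biased deployed system.
print(should_squirt(None, age_range=(30, 50)))
```

Swap the water gun for a door lock or a loan decision and the structure is identical; only the consequences change.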
AIs generate fake news, and AIs detect it:
Grover is a strong detector of neural fake news precisely because it is simultaneously a state-of-the-art generator of neural fake news.
Why We Released Grover
Grover can both generate and detect AI text. The blog post explains why the research team from the University of Washington decided to release the Grover model despite OpenAI’s decision that GPT-2 (a similarly powerful text generation model) was “too dangerous to release.” They conclude that the real danger is in even bigger models, and that research must continue.
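Grover itself is a large neural network, but the underlying intuition, that a model which assigns probabilities to text can also score how machine-like a passage looks, can be illustrated with a toy unigram model. This is an illustration of the idea, not Grover's actual method:

```python
import math
from collections import Counter

# Toy illustration of the Grover insight: a model that assigns probabilities
# to text (a generator) can also score text it is shown (a detector).
# Real systems use a large neural language model; we use unigram counts.

def train_unigram(corpus_tokens):
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def perplexity(model, tokens, floor=1e-6):
    """Low perplexity = text looks like what the model would generate."""
    log_prob = sum(math.log(model.get(t, floor)) for t in tokens)
    return math.exp(-log_prob / len(tokens))

corpus = "the cat sat on the mat the dog sat on the rug".split()
model = train_unigram(corpus)

in_dist = "the cat sat on the rug".split()       # looks model-generated
out_dist = "quantum flux capacitors dream".split()  # out of distribution
print(perplexity(model, in_dist) < perplexity(model, out_dist))  # True
```

The stronger the generator, the sharper this scoring becomes, which is exactly the authors' argument for why the best detector of Grover text is Grover itself.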
Khari Johnson for VentureBeat:
One of the essential phrases necessary to understand AI in 2019 has to be “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes “AI for good” initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other.
How AI companies can avoid ethics washing
I don’t think it’s fair to criticize companies for false effort in AI ethics quite yet. There are no generally accepted standards.
Following on the heels of San Francisco and Somerville, Massachusetts:
The Oakland city council voted last night to pass an ordinance banning city agencies from using facial recognition technology. The move sets up Oakland to become the third city in the United States to pass similar legislation.
Oakland city council votes to ban government use of facial recognition
Are we entering an AI cool-down, in which the hard tech gets acknowledged as hard and the effective tech gets banned? It makes a certain amount of sense, of course: effective means dangerous. We need good processes.
A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:
California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.
You Might Be a Robot, at 3.
As with obscenity, there appears to be no good definition of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:
A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol. Id. at 40.
One of the major problems is that so much unethical behavior is a combination of both human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.
This is a fantastic piece of work (and paper title!) about the benefits of human-in-the-loop AI processes.
Based on identified user needs, we designed and implemented SMILY (Figure 2), a deep-learning based CBIR [content-based image retrieval] system that includes a set of refinement mechanisms to guide the search process. Similar to existing medical CBIR systems, SMILY enables pathologists to query the system with an image, and then view the most similar images from past cases along with their prior diagnoses. The pathologist can then compare and contrast those images to the query image, before making a decision.
Human-centered tool for coping with Imperfect Algorithms During Medical Decision-Making (via The Gradient)
The system used three primary refinement tools: (1) refine by region; (2) refine by example; and (3) refine by concept. The authors reported that users found the software to offer greater mental support, and that users were naturally focused on explaining surprising results: “They make me wonder, ‘Oh am I making an error?'” Critically, this allowed users some insight into how the algorithm worked without an explicit explanation.
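The retrieval core of a system like SMILY can be sketched in a few lines: images become embedding vectors (produced by a trained network in the real system; random vectors here), and querying means nearest-neighbor search in embedding space. The `refine_by_example` function below is a simplified guess at how "refine by example" might nudge the query; the paper's actual mechanisms are more involved:

```python
import numpy as np

# Minimal sketch of the retrieval core of a CBIR system like SMILY.
# Embeddings are random stand-ins for the output of a trained network.

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_emb, case_embs, k=3):
    """Return indices of the k past cases most similar to the query."""
    scores = [cosine_sim(query_emb, e) for e in case_embs]
    return sorted(range(len(case_embs)), key=lambda i: -scores[i])[:k]

def refine_by_example(query_emb, liked_embs, alpha=0.5):
    """Pull the query toward embeddings the pathologist marked relevant."""
    return (1 - alpha) * query_emb + alpha * np.mean(liked_embs, axis=0)

rng = np.random.default_rng(0)
cases = rng.normal(size=(100, 64))   # embeddings of 100 past cases
query = rng.normal(size=64)          # embedding of the current slide
top = retrieve(query, cases)         # indices of the 3 most similar cases
print(top)
```

The human-in-the-loop part is everything around this loop: the pathologist sees the retrieved cases and their prior diagnoses, and the refinement tools let them steer the search rather than accept it.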
I suspect human-in-the-loop AI processes are our best version of the future. They have also been proposed to resolve ethical concerns.
OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:
If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.
Why Responsible AI Development Needs Cooperation on Safety
And they identify four strategies to address this issue:
- Promote accurate beliefs about the opportunities for cooperation
- Collaborate on shared research and engineering challenges
- Open up more aspects of AI development to appropriate oversight and feedback
- Incentivize adherence to high standards of safety
The bottom line is that the normal factors encouraging the development of safe products (the market, liability law, regulation, etc.) may not be present, or sufficient, in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is not necessary.
From denying a cat’s entry to denying a person’s entry:
A sign at the front of the store reads, “Look at camera for entry” and “Facial Recognition Technology In Use.” An automated voice tells approaching customers through a mounted speaker to look up at the camera.
KIRO 7 first learned about the facial recognition technology being used there from a customer, a regular at the store, who posted about it on Facebook after stopping by the 24-hour location at night last week.
“Tonight, I was confronted with a whole new Jackson’s,” she wrote. “You had to stare at the camera before they let you in.”
The woman said she was told by clerks that the technology is being used to cut down on thefts from the store.
“Sometimes I would walk out of there, jaw to the ground, at the in-your-face theft,” she wrote.
‘Look at camera for entry’: Tacoma convenience store using facial recognition technology
The makers of this technology, Blue Line Technology, seem rather overconfident: “Blue Line Technology spokesperson Sawyer said the software has never misidentified anyone.” Meanwhile, state-of-the-art facial recognition technology is notoriously inaccurate with dark-skinned individuals and women.
So if you’re a dark-skinned individual mistakenly identified as “bad,” what does the appeal process look like? Stand outside and shout at the clerk?
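A claim like "the software has never misidentified anyone" is only checkable with an audit that breaks error rates out by demographic group, the kind of differential analysis NIST performs on face recognition vendors. A minimal sketch of such an audit (the trial data here is invented for illustration):

```python
from collections import defaultdict

# Per-group false match rate: of the people who should NOT have matched,
# how many did the system flag as a match, broken out by group?

def false_match_rates(trials):
    """trials: iterable of (group, predicted_match, true_match) tuples."""
    non_matches = defaultdict(int)    # true non-matches seen, per group
    false_matches = defaultdict(int)  # of those, how many were flagged
    for group, predicted, actual in trials:
        if not actual:  # only a true non-match can become a false match
            non_matches[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in non_matches.items()}

trials = [
    ("A", False, False), ("A", False, False), ("A", True, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]
print(false_match_rates(trials))  # group B is falsely flagged twice as often
```

A vendor that has never run this kind of breakdown has no basis for claiming zero misidentifications, let alone equal accuracy across skin tones.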
Here’s an application that could use some transparency:
When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.
Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.
Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.
Not empathetic enough? A heart icon pops up.
A Machine May Not Take Your Job, but One Could Become Your Boss
I have no idea how this AI might have been trained, and the article sheds no light.
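One cue, at least, needs no training data: the "talking too fast" speedometer could plausibly be a rolling words-per-minute count over the live transcript. A sketch, with an assumed threshold of 160 wpm (the real system's mechanism and thresholds are not public):

```python
# Hypothetical mechanism for the "speedometer" cue: plain speech-rate
# arithmetic over a window of live-transcribed words. The 160 wpm
# threshold is an assumption, not a value from the article.

def words_per_minute(transcript_words, window_seconds):
    """Speech rate over a window of transcribed words."""
    return len(transcript_words) / (window_seconds / 60.0)

def speed_cue(transcript_words, window_seconds, threshold_wpm=160):
    """Return the icon to flash, or None if the agent is within bounds."""
    if words_per_minute(transcript_words, window_seconds) > threshold_wpm:
        return "speedometer"
    return None

print(speed_cue(["word"] * 30, window_seconds=10))  # 180 wpm -> "speedometer"
```

The empathy and energy cues are presumably learned models rather than arithmetic like this, which is exactly where the transparency question bites: an agent can argue with a words-per-minute counter, but not with an unexplained heart icon.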