Defining a “bot” is hard

A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:

California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot at 3.

As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:

A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.

Id. at 40.

One of the major problems is that so much unethical behavior is a combination of human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.

Human-Centered AI Tools

This is a fantastic piece of work (and paper title!) about the benefits of human-in-the-loop AI processes.

Based on identified user needs, we designed and implemented SMILY (Figure 2), a deep-learning based CBIR [content-based image retrieval] system that includes a set of refinement mechanisms to guide the search process. Similar to existing medical CBIR systems, SMILY enables pathologists to query the system with an image, and then view the most similar images from past cases along with their prior diagnoses. The pathologist can then compare and contrast those images to the query image, before making a decision.

Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making (via The Gradient)

The system used three primary refinement tools: (1) refine by region; (2) refine by example; and (3) refine by concept. The authors reported that users found the software to offer greater mental support, and that users were naturally focused on explaining surprising results: “They make me wonder, ‘Oh am I making an error?'” Critically, this allowed users some insight into how the algorithm worked without an explicit explanation.
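To make the retrieval step concrete, here is a minimal sketch of embedding-based image retrieval of the kind a CBIR system performs: embed every past case once, embed the query image, and return the nearest neighbors along with their prior diagnoses. This is not SMILY’s actual implementation; the random projection stands in for a deep network, and the names, dimensions, and toy data below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "embedding": a fixed random projection of the flattened patch.
# A real CBIR system uses a deep network trained on images; the retrieval
# logic below is the same either way.
PROJECTION = rng.standard_normal((64 * 64, 128))

def embed(patch: np.ndarray) -> np.ndarray:
    """Map a 64x64 image patch to a unit-length feature vector."""
    v = patch.reshape(-1) @ PROJECTION
    return v / np.linalg.norm(v)

def build_index(case_patches, case_diagnoses):
    """Embed every past case once; keep vectors alongside their diagnoses."""
    vectors = np.stack([embed(p) for p in case_patches])
    return vectors, list(case_diagnoses)

def most_similar(query_patch, vectors, diagnoses, k=3):
    """Return the k most similar past cases with their prior diagnoses."""
    q = embed(query_patch)
    scores = vectors @ q  # cosine similarity, since all vectors are unit length
    top = np.argsort(-scores)[:k]
    return [(diagnoses[i], round(float(scores[i]), 3)) for i in top]

# Toy usage: ten fake past cases, then one query patch.
cases = [rng.random((64, 64)) for _ in range(10)]
labels = [f"case-{i} (prior diagnosis)" for i in range(10)]
vectors, diagnoses = build_index(cases, labels)
print(most_similar(rng.random((64, 64)), vectors, diagnoses))
```

In this framing, “refine by region” roughly corresponds to embedding only a cropped part of the query, while “refine by example” and “refine by concept” re-rank or re-weight the candidates; the paper describes the actual mechanisms.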

I suspect human-in-the-loop AI processes are our best version of the future. They have also been proposed to resolve ethical concerns.

OpenAI identifies AI Ethics as a collective action problem

OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:

If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety

And they identify four strategies to address this issue:

  1. Promote accurate beliefs about the opportunities for cooperation
  2. Collaborate on shared research and engineering challenges
  3. Open up more aspects of AI development to appropriate oversight and feedback
  4. Incentivize adherence to high standards of safety

The bottom line is that the normal factors encouraging the development of safe products (the market, liability laws, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is not necessary.

We reserve the right to *allow the AI* to refuse service to anyone

From denying a cat’s entry to denying a person’s entry:

A sign at the front of the store reads, “Look at camera for entry” and “Facial Recognition Technology In Use.” An automated voice tells approaching customers through a mounted speaker to look up at the camera.

KIRO 7 first learned about the facial recognition technology being used there from a customer, a regular at the store, who posted about it on Facebook after stopping by the 24-hour location at night last week.

“Tonight, I was confronted with a whole new Jackson’s,” she wrote. “You had to stare at the camera before they let you in.”

The woman said she was told by clerks that the technology is being used to cut down on thefts from the store.

“Sometimes I would walk out of there, jaw to the ground, at the in-your-face theft,” she wrote.

‘Look at camera for entry’: Tacoma convenience store using facial recognition technology

The makers of this technology, Blue Line Technology, seem rather overconfident: “Blue Line Technology spokesperson Sawyer said the software has never misidentified anyone.” Meanwhile, state-of-the-art facial recognition technology is notoriously less accurate for dark-skinned individuals and women.

So if you’re a dark-skinned individual mistakenly identified as “bad,” what does the appeal process look like? Stand outside and shout at the clerk?

AI’s evaluating humans

Here’s an application that could use some transparency:

When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

A Machine May Not Take Your Job, but One Could Become Your Boss

I have no idea how this AI might have been trained, and the article sheds no light.

Beijing AI Principles

This is fascinating in light of China’s use of AI for automated racism against the minority Muslim population in Xinjiang:

A group of leading institutes and companies have published a set of ethical standards for AI research and called for cross-border cooperation amid vigorous development of the industry.

The Beijing AI Principles was jointly unveiled Saturday by the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University, Institute of Automation and Institute of Computing Technology in Chinese Academy of Sciences, and an AI industrial league involving firms like Baidu, Alibaba and Tencent.

“The development of AI is a common challenge for all humanity. Only through coordination on a global scale can we build AI that is beneficial to both humanity and nature,” said BAAI director Zeng Yi.

Beijing publishes AI ethical standards, calls for int’l cooperation

The principles themselves are as laudable and vague as most other frameworks: “Do Good,” “Be Ethical.” They explicitly call out the human rights of privacy, dignity, freedom, and autonomy. It’s difficult to say whether this is a sign of internal dissent or strategic positioning, given the primarily academic and commercial origin of the framework.

U.S. facial recognition also rolling out

Jon Porter, writing for The Verge:

The Department of Homeland Security says it expects to use facial recognition technology on 97 percent of departing passengers within the next four years. The system, which involves photographing passengers before they board their flight, first started rolling out in 2017, and was operational in 15 US airports as of the end of 2018. 

The facial recognition system works by photographing passengers at their departure gate. It then cross-references this photograph against a library populated with face images from visa and passport applications, as well as those taken by border agents when foreigners enter the country.

US facial recognition will cover 97 percent of departing airline passengers within four years

It’s not automated racism, but it’s similar in scope to China’s rollout. Routine facial recognition for tracking is here, like it or not.
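For a rough sense of what “cross-references this photograph against a library” means computationally, here is a hedged sketch of one-to-many face matching against a gallery with a decision threshold. It does not reflect DHS’s (or any vendor’s) actual system; the embeddings, names, and threshold are all assumed for illustration. The threshold is where false matches live, which is exactly the concern raised above about misidentified customers with no appeal process.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend gallery: unit-length embeddings for enrolled travelers, as if a
# face-embedding model (not shown) had been run over visa/passport photos.
names = ["traveler-A", "traveler-B", "traveler-C"]
gallery = rng.standard_normal((3, 64))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def identify(probe: np.ndarray, threshold: float = 0.6):
    """One-to-many match: return the best gallery identity if its cosine
    similarity clears the threshold, otherwise report no match. Where the
    threshold sits trades false accepts against false rejects."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return names[best], float(scores[best])
    return None, float(scores[best])

# Toy probes: a noisy copy of traveler-B's embedding, then a random stranger.
print(identify(gallery[1] + 0.1 * rng.standard_normal(64)))  # likely a match
print(identify(rng.standard_normal(64)))                     # likely no match
```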

Automated Racism in China

Paul Mozur, writing for the New York Times:

Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.

The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.

One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority

Bill Gates recently said that AI is the new nuclear technology: both promising and dangerous.

Our long-term survival probably requires being good at managing the dangers of increasingly powerful technologies. Not a great start.

AI Transparency Tension: NYPD Sex Chat Bots

The NYPD is using AI chat bots to identify and warn individuals looking to buy sex:

A man texts an online ad for sex.

He gets a text back: “Hi Papi. Would u like to go on a date?” There’s a conversation: what he’d like the woman to do, when and where to meet, how much he will pay.

After a few minutes, the texts stop. It’s not unexpected — women offering commercial sex usually text with several potential buyers at once. So the man, too, usually texts several women at once.

What he doesn’t expect is this: He is texting not with a woman but with a computer program, a piece of artificial intelligence that has taught itself to talk like a sex worker.

A.I. Joins the Campaign Against Sex Trafficking

The article includes an example of an actual chat conversation, and it is worth reading to get a sense of the AI’s capabilities.

Ethics tension. It’s worth noting that many AI ethics frameworks emphasize the importance of informing humans when they are interacting with bots. See also the Google Duplex controversy. This, by contrast, is deception-by-design. How does it fit within an ethical framework? Are we already making trade-offs between effectiveness and transparency?

Ethics and “Experts”

Surprise! They don’t got ’em! Well, sometimes they do. But I feel like the following story happens repeatedly (though maybe not as egregiously!).

From Mr. Levine at Bloomberg. The basic story: the defendant interviews potential experts and hires one, leaving the others behind. One of the left-behind experts then goes and pitches himself to the plaintiff in the case … after leaving a written record with the defendant about how bad the plaintiff’s case is. You. Should. Not. Do. This.

Also, if you do, it’s a common courtesy to tell the plaintiff about it before they hire you. Pro Tip to all my experts out there.