Free speech under assault from both the left and right

The Economist pens an essay on freedom of expression that is worth reading in full:

Who is the greater threat to free speech: President Donald Trump or campus radicals? Left and right disagree furiously about this. But it is the wrong question, akin to asking which of the two muggers currently assaulting you is leaving more bruises. What matters is that big chunks of both left and right are assaulting the most fundamental of liberties—the ability to say what you think. . . .

. . .Human beings are not free unless they can express themselves. Minds remain narrow unless exposed to different viewpoints. Ideas are more likely to be refined and improved if vigorously questioned and tested. Protecting students from unwelcome ideas is like refusing to vaccinate them against measles. When they go out into the world, they will be unprepared for its glorious but sometimes challenging diversity.

As societies polarise, free speech is under threat. It needs defenders

A More Nuanced Encryption Policy Debate

Bruce Schneier on a speech by Attorney General Barr on encryption policy:

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how: an approach we have derisively named “nerd harder.”

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having – not the fake one about whether or not we can have both security and surveillance.

Attorney General William Barr on Encryption Policy

Schneier still believes that keeping everyone secure is more important than providing backdoors to law enforcement, but at least everyone is starting to acknowledge the reality that law enforcement backdoors weaken security.

Defining a “bot” is hard

A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:

California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot at 3.

As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:

A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.

Id. at 40.

One of the major problems is that so much unethical behavior is a mix of human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.

OpenAI identifies AI Ethics as a collective action problem

OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:

If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety

And they identify four strategies to address this issue:

  1. Promote accurate beliefs about the opportunities for cooperation
  2. Collaborate on shared research and engineering challenges
  3. Open up more aspects of AI development to appropriate oversight and feedback
  4. Incentivize adherence to high standards of safety

The bottom line is that the normal factors encouraging development of safe products (the market, liability laws, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is unnecessary.

Major supplier of police body cameras concludes facial recognition is not reliable enough to sell ethically

Chaim Gartenberg, writing for The Verge:

Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.

[. . . . .]

According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.

Axon (formerly Taser) says facial recognition on police body cams is unethical

One issue we keep sidestepping is that facial recognition technology is never going to be either perfectly accurate or perfectly equal across all classes of people. In other words, no matter how accurate the technology becomes, there will always be some small difference in performance between, for example, recognizing light-skinned and dark-skinned people. So the question becomes: is any difference in accuracy tolerable? If so, how much?

A proposal to tax targeted digital ads

Paul Romer proposes tax policy, instead of antitrust, to nudge privacy in the right direction:

Of course, companies are incredibly clever about avoiding taxes. But in this case, that’s a good thing for all of us. This tax would spur their creativity. Ad-driven platform companies could avoid the tax entirely by switching to the business model that many digital companies already offer: an ad-free subscription. Under this model, consumers know what they give up, and the success of the business would not hinge on tracking customers with ever more sophisticated surveillance techniques. A company could succeed the old-fashioned way: by delivering a service that is worth more than it costs.

A Tax That Could Fix Big Tech

Not a bad idea.

Sludge: A Negative Sort of Nudge

Cass Sunstein wrote a new paper on “sludge,” which is the inverse of his and Richard Thaler’s concept of Nudge.

A “nudge” is a way of designing choices so that the easiest path is also the healthiest, smartest, or “best” one. It is a form of libertarian paternalism that tries to influence behavior while still respecting freedom of choice. For example: automatically enrolling people in organ donation while giving them the option to opt out.

Sludge, on the other hand, is “excessive or unjustified frictions that make it more difficult for consumers, employees, . . . and many others to get what they want or to do as they wish.” For example:

To obtain benefits under a health care law, people must navigate a complicated website. Many of them do not understand the questions that they are being asked. For many people, the application takes a long time. Some of them give up. 

Sunstein argues that organizations should regularly perform “sludge audits” to remove these kinds of anti-nudges from their processes:

[T]he power of simplification puts a spotlight on the large consequences of seemingly modest sludge—on the effects of choice architecture in determining outcomes. Simplification and burden reduction do not merely reduce frustration; they can change people’s lives.

Sludge Audits at 10-11.

As the world becomes more complicated and the attention economy more competitive, choice architecture is more important than ever.

AI Transparency Tension: NYPD Sex Chat Bots

The NYPD is using AI chat bots to identify and warn individuals looking to buy sex:

A man texts an online ad for sex.

He gets a text back: “Hi Papi. Would u like to go on a date?” There’s a conversation: what he’d like the woman to do, when and where to meet, how much he will pay.

After a few minutes, the texts stop. It’s not unexpected — women offering commercial sex usually text with several potential buyers at once. So the man, too, usually texts several women at once.

What he doesn’t expect is this: He is texting not with a woman but with a computer program, a piece of artificial intelligence that has taught itself to talk like a sex worker.

A.I. Joins the Campaign Against Sex Trafficking

The article includes an example of an actual chat conversation, and it is worth reading to get a sense of the AI’s capabilities.

Ethics tension. It’s worth noting that many AI ethics frameworks emphasize the importance of informing humans when they are interacting with bots. See also the Google Duplex controversy. This, by contrast, is deception by design. How does it fit within an ethical framework? Are we already making trade-offs between effectiveness and transparency?

European Commission publishes “framework for achieving Trustworthy AI”

Like many recent frameworks, this “High-Level Expert Group” assessment provides a list of fairly vague but nevertheless laudable principles that AI developers should respect:

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1. it should be lawful, complying with all applicable laws and regulations;

2. it should be ethical, ensuring adherence to ethical principles and values; and

3. it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Ethics Guidelines for Trustworthy AI (via Commission reports website)

Great: lawful, ethical, and robust. Ok, how do we do that? Well, the report also lays out four ethical principles to help achieve Trustworthy AI:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Ok, great: lawful, ethical, and robust. And ethical means respect human autonomy, prevent harm, be fair, and explain what the AI is doing. Got it. No wait, there are seven more (non-exhaustive) requirements for Trustworthy AI:

  • Human agency and oversight
  • Technical robustness and safety (robustness duplicate!)
  • Privacy and data governance (lawfulness duplicate!)
  • Transparency (explicability duplicate!)
  • Diversity, non-discrimination and fairness (fairness duplicate!)
  • Societal and environmental wellbeing (prevention of harm duplicate?)
  • Accountability

Ok, nail all these and we’re good? No, no, the report also recognizes that, “Tensions may arise between the above principles, for which there is no fixed solution.” For example, “trade-offs might have to be made between enhancing a system’s explainability (which may reduce its accuracy) or increasing its accuracy (at the cost of explainability).” And what should we do if tensions arise? “[S]ociety should endeavour to align them.”

Clear as mud. Of course, to be fair, no one else is doing any better.

AI researchers call on Amazon to stop selling facial recognition technology to law enforcement

A group of 27 AI researchers affiliated with various academic institutions as well as Microsoft, Google, and Facebook has written an open letter calling on Amazon to stop selling its face recognition technology (Rekognition) to law enforcement. The letter gets into the weeds very quickly, but the main complaint is that Rekognition is biased against darker-skinned individuals:

A recent study conducted by Inioluwa Deborah Raji and Joy Buolamwini, published at the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, found that the version of Amazon’s Rekognition tool which was available on August 2018, has much higher error rates while classifying the gender of darker skinned women than lighter skinned men (31% vs. 0%).

On Recent Research Auditing Commercial Facial Analysis Technology

Amazon’s response has essentially been “no, that’s not quite right; we’re also concerned and continually improving, but none of this is any reason to stop selling the product.”

What all of this highlights is:

  1. No consensus on the amount of tolerable bias. Zero bias may be unachievable. Do we insist on zero, or on something close to it? Or is there some level of tolerable bias? Less bias than an average human would be an improvement in most cases.
  2. No framework for assessing bias. We don’t have any standards for judging whether an AI system is “tolerably biased” or not. Much of the debate here is over how the bias was measured; a rough sketch of what such a measurement might look like follows this list.
  3. No framework for assessing impact of bias. Objections to Amazon’s Rekognition technology are premised on its commercial sale, especially to law enforcement. If Amazon had simply released the technology as a research project, it would have joined many other examples of bias in AI research that cause concern but not outrage. Should we insist on zero bias for law enforcement applications? Can retail applications be more tolerably biased?
  4. No laws or regulations at all. And of course there are no laws or regulations governing the sale or use of these systems anywhere in the United States. But… perhaps coming soon.
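
For concreteness, here is a minimal sketch of the kind of per-group error-rate audit the Raji and Buolamwini study performed, written in Python. Everything in it is hypothetical: the sample records are invented, and the five-percentage-point tolerance threshold is a placeholder for precisely the standard we do not yet have.

```python
# Minimal sketch of a per-group error-rate audit (hypothetical data).
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented audit records for a gender classifier; real audits use
# curated benchmark datasets, not five hand-written rows.
sample = [
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]

rates = error_rates_by_group(sample)
gap = max(rates.values()) - min(rates.values())

# TOLERANCE is a placeholder, not an accepted standard; that absence
# is exactly the problem described in point 2 above.
TOLERANCE = 0.05
print(rates)
print(f"Largest gap between groups: {gap:.0%}")
print("Within tolerance" if gap <= TOLERANCE else "Exceeds tolerance")
```

Even a toy audit like this forces the two open questions above: which error metric matters, and how large a gap between groups is acceptable.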