Defining a “bot” is hard

A new paper by Mark Lemley and Bryan Casey discusses the difficulties of formulating legal definitions of robots:

California enacted a statute making it illegal for an online “bot” to interact with consumers without first disclosing its non-human status. The law’s definition of “bot,” however, leaves much to be desired. Among other ambiguities, it bases its definition on the extent to which “the actions or posts of [an automated] account are not the result of a person,” with “person” defined to include corporations as well as “natural” people. Truthfully, it’s hard to imagine any online activity—no matter how automated—that is “not the result of a (real or corporate) person” at the end of the day.

You Might Be a Robot at 3.

As with obscenity, there do not appear to be any good definitions of “robot.” The paper instead suggests that regulators focus on behavior, not definitions:

A good example of this approach is the Better Online Ticket Sales Act of 2016 (aka “BOTS Act”). The Act makes no attempt to define bot. Instead, it simply prohibits efforts to get around security protocols like CAPTCHA. We don’t actually need to decide whether you are a bot. As the BOTS Act demonstrates, we can achieve our goals by deciding whether someone (or something) is circumventing the protocol.

Id. at 40.

One of the major problems is that so much unethical behavior is a combination of human and automated activity. Meanwhile, human-in-the-loop processes are viewed as a solution to ethical AI problems. The idea that bots are ever truly autonomous is specious. We are the bots, and the bots are us.

OpenAI identifies AI Ethics as a collective action problem

OpenAI has released a blog post and paper addressing the problem of collective action in AI ethics:

If companies respond to competitive pressures by rushing a technology to market before it has been deemed safe, they will find themselves in a collective action problem. Even if each company would prefer to compete to develop and release systems that are safe, many believe they can’t afford to do so because they might be beaten to market by other companies.

Why Responsible AI Development Needs Cooperation on Safety

And they identify four strategies to address this issue:

  1. Promote accurate beliefs about the opportunities for cooperation
  2. Collaborate on shared research and engineering challenges
  3. Open up more aspects of AI development to appropriate oversight and feedback
  4. Incentivize adherence to high standards of safety

The bottom line is that the normal factors encouraging the development of safe products (the market, liability laws, regulation, etc.) may not be present or sufficient in the race to develop AI products. Self-regulation will be important if companies want to maintain that government regulation is not necessary.

Major supplier of police body cameras concludes facial recognition is not reliable enough to sell ethically

Chaim Gartenberg, writing for The Verge:

Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.

[. . . . .]

According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.

Axon (formerly Taser) says facial recognition on police body cams is unethical

One issue we keep sidestepping is that facial recognition technology is never going to be either perfectly accurate or perfectly equal across all classes of people. In other words, no matter how accurate the technology becomes, there will always be some small difference in performance between, for example, recognizing light-skinned and dark-skinned people. So the question becomes: is any difference in accuracy tolerable? If so, how much?

A proposal to tax targeted digital ads

Paul Romer proposes tax policy, instead of antitrust, to nudge privacy in the right direction:

Of course, companies are incredibly clever about avoiding taxes. But in this case, that’s a good thing for all of us. This tax would spur their creativity. Ad-driven platform companies could avoid the tax entirely by switching to the business model that many digital companies already offer: an ad-free subscription. Under this model, consumers know what they give up, and the success of the business would not hinge on tracking customers with ever more sophisticated surveillance techniques. A company could succeed the old-fashioned way: by delivering a service that is worth more than it costs.

A Tax That Could Fix Big Tech

Not a bad idea.

Sludge: A Negative Sort of Nudge

Cass Sunstein wrote a new paper on “sludge,” which is the inverse of his and Richard Thaler’s concept of Nudge.

A “nudge” is a way of designing choices so that the easiest path is the healthiest or smartest or “best.” It is a form of libertarian paternalism that tries to influence behavior while also respecting freedom of choice. E.g., automatic enrollment in organ donation with the option to opt out.

Sludge, on the other hand, is “excessive or unjustified frictions that make it more difficult for consumers, employees, . . . and many others to get what they want or to do as they wish.” For example:

To obtain benefits under a health care law, people must navigate a complicated website. Many of them do not understand the questions that they are being asked. For many people, the application takes a long time. Some of them give up. 

Sunstein argues that organizations should regularly perform “sludge audits” to remove these kinds of anti-nudges from their processes:

[T]he power of simplification puts a spotlight on the large consequences of seemingly modest sludge—on the effects of choice architecture in determining outcomes. Simplification and burden reduction do not merely reduce frustration; they can change people’s lives.

Sludge Audits at 10-11.

As the world becomes more complicated and the attention economy more competitive, choice architecture is more important than ever.

AI Transparency Tension: NYPD Sex Chat Bots

The NYPD is using AI chat bots to surface and warn individuals looking to buy sex:

A man texts an online ad for sex.

He gets a text back: “Hi Papi. Would u like to go on a date?” There’s a conversation: what he’d like the woman to do, when and where to meet, how much he will pay.

After a few minutes, the texts stop. It’s not unexpected — women offering commercial sex usually text with several potential buyers at once. So the man, too, usually texts several women at once.

What he doesn’t expect is this: He is texting not with a woman but with a computer program, a piece of artificial intelligence that has taught itself to talk like a sex worker.

A.I. Joins the Campaign Against Sex Trafficking

The article includes an example of an actual chat conversation, and it is worth reading to get a sense of the AI’s capabilities.

Ethics tension. It’s worth noting that many AI ethics frameworks emphasize the importance of informing humans when they are interacting with bots. See also the Google Duplex controversy. This system, by contrast, is deception by design. How does it fit within an ethical framework? Are we already making trade-offs between effectiveness and transparency?

European Commission publishes “framework for achieving Trustworthy AI”

Like many recent frameworks, this “High-Level Expert Group” assessment provides a list of fairly vague but nevertheless laudable principles that AI developers should respect:

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1. it should be lawful, complying with all applicable laws and regulations;

2. it should be ethical, ensuring adherence to ethical principles and values; and

3. it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Ethics Guidelines for Trustworthy AI (via Commission reports website)

Great: lawful, ethical, and robust. Ok, how do we do that? Well, the report also lays out four ethical principles to help achieve Trustworthy AI:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Ok, great: lawful, ethical, and robust. And ethical means respect human autonomy, prevent harm, be fair, and explain what the AI is doing. Got it. No wait, there are seven more (non-exhaustive) requirements for Trustworthy AI:

  • Human agency and oversight
  • Technical robustness and safety (robustness duplicate!)
  • Privacy and data governance (lawfulness duplicate!)
  • Transparency (explicability duplicate!)
  • Diversity, non-discrimination and fairness (fairness duplicate!)
  • Societal and environmental wellbeing (prevention of harm duplicate?)
  • Accountability

Ok, nail all these and we’re good? No, no, the report also recognizes that, “Tensions may arise between the above principles, for which there is no fixed solution.” For example, “trade-offs might have to be made between enhancing a system’s explainability (which may reduce its accuracy) or increasing its accuracy (at the cost of explainability).” And what should we do if tensions arise? “[S]ociety should endeavour to align them.”

Clear as mud. Of course, to be fair, no one else is doing any better.

AI researchers call on Amazon to stop selling facial recognition technology to law enforcement

A group of 27 AI researchers affiliated with various academic institutions as well as Microsoft, Google, and Facebook have written an open letter calling on Amazon to stop selling its face recognition technology (Rekognition) to law enforcement. The letter gets into the weeds very quickly, but the main complaint is that Rekognition is biased against darker-skinned individuals:

A recent study conducted by Inioluwa Deborah Raji and Joy Buolamwini, published at the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, found that the version of Amazon’s Rekognition tool which was available on August 2018, has much higher error rates while classifying the gender of darker skinned women than lighter skinned men (31% vs. 0%).

On Recent Research Auditing Commercial Facial Analysis Technology

Amazon’s response has essentially been “no, that’s not quite right; we’re also concerned and continually improving, but none of this is any reason to stop selling the product.”

What all of this highlights is:

  1. No consensus on the amount of tolerable bias. Perfectly zero bias may be unreachable. Do we insist on it anyway, or on something close to it? Or is there a level of tolerable bias? Less bias than an average human would be an improvement in most cases.
  2. No framework for assessing bias. We don’t have any standards for judging whether an AI system is “tolerably biased” or not. Much of the debate here is over how the bias was measured (a toy sketch of that kind of measurement follows this list).
  3. No framework for assessing impact of bias. Objections to Amazon’s Rekognition technology are premised on its commercial sale, especially to law enforcement. If Amazon had simply released the technology as a research project, it would have joined many other examples of bias in AI research that cause concern but not outrage. Should we insist on zero bias for law enforcement applications? Can retail applications be more tolerably biased?
  4. No laws or regulations at all. And of course there are no laws or regulations governing the sale or use of these systems anywhere in the United States. But… perhaps coming soon.
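
To make point 2 concrete, here is a minimal sketch of the kind of per-group error-rate measurement these audits perform. Everything below is hypothetical: the function name, the group labels, and the data are invented to mirror the 31% vs. 0% figure quoted above, and none of it reflects the researchers’ or Amazon’s actual code or benchmarks.

```python
# Hypothetical sketch: per-group misclassification rates for a binary gender classifier.
# All records and group labels are invented for illustration; real audits use
# curated benchmark datasets and carefully defined protocols.
from collections import defaultdict

def per_group_error_rates(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy data producing a gap similar to the one quoted above.
sample = (
    [("darker-skinned women", "F", "M")] * 31
    + [("darker-skinned women", "F", "F")] * 69
    + [("lighter-skinned men", "M", "M")] * 100
)

rates = per_group_error_rates(sample)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} misclassified")
print(f"gap: {max(rates.values()) - min(rates.values()):.0%}")
```

Even this trivial calculation requires choices (which groups to compare, which error type to count, which dataset to use), and those choices are exactly where the measurement debate sits.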

Anti-Anti-Vax Drama

Brooks Bryce and Sarah Beck live in Phoenix with their three kids. Ms. Beck took their youngest, a two-year-old boy, to a clinic where he registered a temperature over 100 degrees. Upon learning that the boy was lethargic and unvaccinated, the doctor told Ms. Beck to take him to an emergency room. Ms. Beck declined, explaining later:

“I called the doctor back and said ‘Hey, I’m not sure how you got this 105 reading, my son’s acting fine,’” Ms. Beck told a local TV station. “‘This doesn’t really seem like a medical emergency.’”

With Guns Drawn, Officers Raided Home to Get Unvaccinated, Feverish Child

The doctor was concerned that her directive was not being followed and called the Arizona Department of Child Safety, setting in motion a chain of events that led to the Chandler Police Department breaking down the family’s door at 1am with guns drawn. The boy was taken to a hospital and found to have a respiratory illness. Both parents were charged with child abuse.

The Chandler Police Department released video of the entire encounter, including multiple knocks on the door and calls to the household before making a forced entry. Overall, all parties appeared to act reasonably under the circumstances. The officers were reasonably patient and attempted to de-escalate. They were acting under a court order. They didn’t charge into the house as soon as they breached the door. And Mr. Bryce himself was unfailingly polite and non-aggressive when talking to the officers, although he was firm in his conviction that his child was fine and he did not want the police to bother him anymore.

So how did we get to a situation where a bunch of officers break down a door in the middle of the night to take a child with what sounds like a mild respiratory illness to the hospital? That is an insane and disturbing result. Two reasons, it seems:

  • The child was unvaccinated. Did that bias the doctor’s assessment of whether the parents had the judgment necessary to take care of their son? Almost certainly. But should it have?
  • Health care costs. Listening to the video, I was stunned to hear the entirely rational explanation by Mr. Bryce for not wanting to take their son to an emergency room: they didn’t want to incur thousands of dollars in fees when they fundamentally believed their son was fine, an assessment apparently later shown to be accurate.

We are setting people up to fail if we impose standards, enforced by potential violence, without providing the support necessary to meet those standards. This story makes me deeply uncomfortable.

Patent Litigation Insurance

Patent litigation insurance definitely exists, and every so often a casual observer will be confronted by the enormous cost of litigating a patent case and suggest that maybe you should get insurance. After all, there are a lot of other kinds of insurance for the normal hazards of doing business: product liability, business interruption, even cyber attack. So why not patent litigation insurance?

The problem is that insurance works by grouping a whole bunch of entities together that all have similar risk, and then figuring out how to get them to share that risk while still making some money on the premiums. That doesn’t work for patent litigation because companies have wildly different risk profiles. It is impossible to take a group of companies, somehow average out their risk of patent litigation, and then calculate a premium that both covers that average risk and makes you some (but not too much) money on the side. The companies will either overpay or underpay.
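
To see why, here is a toy calculation. Every number below (the suit probabilities, defense costs, and loading factor) is invented purely for illustration; none of it comes from any insurer or study.

```python
# Toy illustration of why one pooled premium fails when risk profiles differ wildly.
# All figures are invented for illustration only.

companies = {
    "low-risk retailer": {"p_suit": 0.01, "defense_cost": 1_500_000},
    "high-risk device maker": {"p_suit": 0.40, "defense_cost": 4_000_000},
}

loading = 1.15  # assumed markup covering the insurer's expenses and profit

expected_loss = {name: c["p_suit"] * c["defense_cost"] for name, c in companies.items()}
pooled_premium = loading * sum(expected_loss.values()) / len(companies)

for name, loss in expected_loss.items():
    fair_premium = loading * loss  # what an individually priced policy would charge
    print(f"{name}: fair ${fair_premium:,.0f} vs. pooled ${pooled_premium:,.0f}")
```

With these made-up numbers the pooled premium comes out to roughly $930,000: the low-risk company overpays by a factor of about fifty and simply declines the policy, leaving the insurer holding only the high-risk company. That adverse-selection spiral is what pushes insurers toward the individualized underwriting described next.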

As a result, patent litigation insurers take a look at your individual risk profile, figure they can estimate the risk better than you can, and then charge an individualized premium to make sure they are covered. Public reporting places the annual cost of patent litigation insurance at about 2-5% of the insured amount, with the addition of hard liability caps and co-payments. Most big companies decline those terms and end up self-insuring or mitigating risk through license aggregators like RPX.

But patent litigation insurance still seems to fascinate, especially academics. In a November 2018 paper titled The Effect of Patent Litigation Insurance, researchers examined the effect of recently introduced insurance on the rate of patent assertions. And they found (headline!) that the availability of defensive insurance was correlated with a significantly reduced likelihood that specific patents would be asserted. They conclude:

Whatever the merits of specific judicial and legislative reforms presently under consideration, our study suggests that it is also possible for market-based mechanisms to alter the behavior of patent enforcers. Indeed, it has been argued that one reason legislative and judicial reform is needed is because collective action is unlikely to cure the patent system’s ills because defending against claims of patent infringement generates uncompensated positive externalities. Our study suggests that defensive litigation insurance may be a viable market-based solution to complement, or supplant, other reforms that aim to reduce NPE activity.

The Effect of Patent Litigation Insurance at 59-60.

But there is a very important caveat: the insurance company, IPISC, selected in advance every patent it would insure against. IPISC sold two menus of “Troll Defense” insurance: one covering 200 specific patents, and one covering an additional 107 specific patents. Indeed, this is how the researchers were able to assess whether assertions went down. (Other patent litigation insurers use more complex policies that do not identify specific patents.) In addition, IPISC capped the defense insurance limit at $1M, which is well below the cost of litigating your average patent case. This is a very narrow space for patent litigation insurance!

IPISC must have had confidence that it could accurately quantify the risk associated with these patents. The insured patents tended to have been asserted before by well-known patent assertion entities. I suspect the prior assertions settled quickly for relatively small amounts because that’s how these entities tend to work. Indeed, that is the whole business model. But throw in the availability of insurance specific to these patents, and now there is a signal that many potential defendants will not simply settle and move on. Throw a wrench in the model, and assertions go down.

So yes, this narrow type of patent litigation insurance might be useful if you are an entity concerned about harassment by specific patents in low-value patent litigation. Interesting study; your mileage may vary.