A proposal to tax targeted digital ads

Paul Romer proposes tax policy, instead of antitrust, to nudge privacy in the right direction:

Of course, companies are incredibly clever about avoiding taxes. But in this case, that’s a good thing for all of us. This tax would spur their creativity. Ad-driven platform companies could avoid the tax entirely by switching to the business model that many digital companies already offer: an ad-free subscription. Under this model, consumers know what they give up, and the success of the business would not hinge on tracking customers with ever more sophisticated surveillance techniques. A company could succeed the old-fashioned way: by delivering a service that is worth more than it costs.

A Tax That Could Fix Big Tech

Not a bad idea.

Sludge: A Negative Sort of Nudge

Cass Sunstein wrote a new paper on “sludge,” which is the inverse of his and Richard Thaler’s concept of Nudge.

A “nudge” is a way of designing choices so that the easiest path is the healthiest or smartest or “best.” It is a form of libertarian paternalism that tries to influence behavior while also respecting freedom of choice. E.g., automatically enrolling people as organ donors while giving them the option to opt out.

Sludge, on the other hand, is “excessive or unjustified frictions that make it more difficult for consumers, employees, . . . and many others to get what they want or to do as they wish.” For example:

To obtain benefits under a health care law, people must navigate a complicated website. Many of them do not understand the questions that they are being asked. For many people, the application takes a long time. Some of them give up. 

Sunstein argues that organizations should regularly perform “sludge audits” to remove these kinds of anti-nudges from their processes:

[T]he power of simplification puts a spotlight on the large consequences of seemingly modest sludge—on the effects of choice architecture in determining outcomes. Simplification and burden reduction do not merely reduce frustration; they can change people’s lives.

Sludge Audits at 10-11.

As the world becomes more complicated and the attention economy more competitive, choice architecture is more important than ever.

AI Transparency Tension: NYPD Sex Chat Bots

The NYPD is using AI chat bots to surface and warn individuals looking to buy sex:

A man texts an online ad for sex.

He gets a text back: “Hi Papi. Would u like to go on a date?” There’s a conversation: what he’d like the woman to do, when and where to meet, how much he will pay.

After a few minutes, the texts stop. It’s not unexpected — women offering commercial sex usually text with several potential buyers at once. So the man, too, usually texts several women at once.

What he doesn’t expect is this: He is texting not with a woman but with a computer program, a piece of artificial intelligence that has taught itself to talk like a sex worker.

A.I. Joins the Campaign Against Sex Trafficking

The article includes an example of an actual chat conversation, and it is worth reading to get a sense of the AI’s capabilities.

Ethics tension. It’s worth noting that many AI ethics frameworks emphasize the importance of informing humans when they are interacting with bots. See also the Google Duplex controversy. This, by contrast, is deception by design. How does it fit within an ethical framework? Are we already making trade-offs between effectiveness and transparency?

European Commission publishes “framework for achieving Trustworthy AI”

Like many recent frameworks, this “High-Level Expert Group” assessment provides a list of fairly vague but nevertheless laudable principles that AI developers should respect:

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1. it should be lawful, complying with all applicable laws and regulations;

2. it should be ethical, ensuring adherence to ethical principles and values; and

3. it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Ethics Guidelines for Trustworthy AI (via Commission reports website)

Great: lawful, ethical, and robust. Ok, how do we do that? Well, the report also lays out four ethical principles to help achieve Trustworthy AI:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Ok, great: lawful, ethical, and robust. And ethical means respect human autonomy, prevent harm, be fair, explain what the AI is doing. Got it. No wait, there are seven more (non-exhaustive) requirements for Trustworthy AI:

  • Human agency and oversight
  • Technical robustness and safety (robustness duplicate!)
  • Privacy and data governance (lawfulness duplicate!)
  • Transparency (explicability duplicate!)
  • Diversity, non-discrimination and fairness (fairness duplicate!)
  • Societal and environmental wellbeing (prevention of harm duplicate?)
  • Accountability

Ok, nail all these and we’re good? No, no, the report also recognizes that, “Tensions may arise between the above principles, for which there is no fixed solution.” For example, “trade-offs might have to be made between enhancing a system’s explainability (which may reduce its accuracy) or increasing its accuracy (at the cost of explainability).” And what should we do if tensions arise? “[S]ociety should endeavour to align them.”

Clear as mud. Of course, to be fair, no one else is doing any better.

AI researchers call on Amazon to stop selling facial recognition technology to law enforcement

A group of 27 AI researchers affiliated with various academic institutions as well as Microsoft, Google, and Facebook has written an open letter calling on Amazon to stop selling its face recognition technology (Rekognition) to law enforcement. The letter gets into the weeds very quickly, but the main complaint is that Rekognition is biased against darker-skinned individuals:

A recent study conducted by Inioluwa Deborah Raji and Joy Buolamwini, published at the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, found that the version of Amazon’s Rekognition tool which was available on August 2018, has much higher error rates while classifying the gender of darker skinned women than lighter skinned men (31% vs. 0%).

On Recent Research Auditing Commercial Facial Analysis Technology
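
For a sense of how numbers like these are produced, here is a minimal sketch (in Python, with fabricated data) of the kind of disaggregated audit behind them: rather than reporting a single aggregate accuracy, the gender-classification error rate is computed separately for each subgroup. The subgroup labels and records below are invented purely for illustration and are not drawn from the study.

```python
# Minimal sketch, with fabricated data, of a disaggregated error-rate audit:
# instead of one aggregate accuracy number, compute the gender-classification
# error rate separately for each subgroup of subjects.
from collections import defaultdict

# Each record: (subgroup, true_gender, predicted_gender). All values invented.
predictions = [
    ("darker_skinned_female", "female", "male"),
    ("darker_skinned_female", "female", "female"),
    ("darker_skinned_female", "female", "male"),
    ("lighter_skinned_male", "male", "male"),
    ("lighter_skinned_male", "male", "male"),
    ("lighter_skinned_male", "male", "male"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for subgroup, truth, predicted in predictions:
    totals[subgroup] += 1
    errors[subgroup] += int(truth != predicted)

for subgroup, total in totals.items():
    rate = errors[subgroup] / total
    print(f"{subgroup}: error rate {rate:.0%} ({errors[subgroup]}/{total})")
```

As discussed below, much of the disagreement between the auditors and Amazon is over exactly these measurement choices: which images, which version of the service, and which settings.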

Amazon’s response has essentially been “no that’s not quite right and we’re also concerned and continually improving but none of this is any reason to stop selling the product.”

What all of this highlights is:

  1. No consensus on the amount of tolerable bias. Zero bias may be unachievable. Do we insist on it, or something near it? Or is there a level of tolerable bias? Less bias than the average human would be an improvement in most cases.
  2. No framework for assessing bias. We don’t have any standards for judging whether an AI system is “tolerably biased” or not. Much of the debate here is over how the bias was measured.
  3. No framework for assessing impact of bias. Objections to Amazon’s Rekognition technology are premised on its commercial sale, especially to law enforcement. If Amazon had simply released the technology as a research project, it would have joined many other examples of bias in AI research that cause concern but not outrage. Should we insist on zero bias for law enforcement applications? Can retail applications be more tolerably biased?
  4. No laws or regulations at all. And of course there are no laws or regulations governing the sale or use of these systems anywhere in the United States. But… perhaps coming soon.

Anti-Anti-Vax Drama

Brooks Bryce and Sarah Beck live in Phoenix with their three kids. Ms. Beck took their youngest, a two-year-old boy, to a clinic, where he registered a temperature over 100 degrees. Upon learning that the boy was lethargic and unvaccinated, the doctor told Ms. Beck to take him to an emergency room. Ms. Beck declined, explaining later:

“I called the doctor back and said ‘Hey, I’m not sure how you got this 105 reading, my son’s acting fine,’” Ms. Beck told a local TV station. “‘This doesn’t really seem like a medical emergency.’”

With Guns Drawn, Officers Raided Home to Get Unvaccinated, Feverish Child

The doctor was concerned that her directive was not being followed and called the Arizona Department of Child Safety, setting in motion a chain of events that led to the Chandler Police Department breaking down the family’s door at 1am with guns drawn. The boy was taken to a hospital and found to have a respiratory illness. Both parents were charged with child abuse.

The Chandler Police Department released video of the entire encounter, including multiple knocks on the door and calls to the household before the forced entry. On the whole, all parties appeared to act reasonably under the circumstances. The officers were reasonably patient and attempted to de-escalate. They were acting under a court order. They didn’t charge into the house as soon as they breached the door. And Mr. Bryce himself was unfailingly polite and non-aggressive when talking to the officers, although he was firm in his conviction that his child was fine and that he did not want the police to bother him any further.

So how did we get to a situation where a bunch of officers break down a door in the middle of the night to take a child with what sounds like a mild respiratory illness to the hospital? That is an insane and disturbing result. Two reasons, it seems:

  • The child was unvaccinated. Did that bias the doctor’s assessment of whether the parents had the judgment necessary to take care of their son? Almost certainly. But should it have?
  • Health care costs. Listening to the video, I was stunned to hear Mr. Bryce’s entirely rational explanation for not wanting to take their son to an emergency room: they didn’t want to incur thousands of dollars in fees when they fundamentally believed their son was fine, an assessment apparently later shown to be accurate.

We are setting people up to fail if we impose standards, enforced by potential violence, without providing the support necessary to meet those standards. This story makes me deeply uncomfortable.

Patent Litigation Insurance

Patent litigation insurance definitely exists, and every so often a casual observer will be confronted by the enormous cost of litigating a patent case and suggest that maybe you should get insurance. After all, there are a lot of other kinds of insurance for the normal hazards of doing business: product liability, business interruption, even cyber attack. So why not patent litigation insurance?

The problem is that insurance works by grouping a whole bunch of entities together that all have similar risk, and then figuring out how to get them to share that risk while still making some money on the premiums. That doesn’t work for patent litigation because companies have wildly different risk profiles. It is impossible to take a group of companies, somehow average out their risk of patent litigation, and then calculate a premium that both covers that average risk and makes you some (but not too much) money on the side. The companies will either overpay or underpay, and the low-risk companies that overpay will simply drop out, leaving the insurer holding only the riskiest customers.

As a result, patent litigation insurers take a look at your individual risk profile, figure they can estimate the risk better than you can, and then charge an individualized premium to make sure they are covered. Public reporting places the annual cost of patent litigation insurance at about 2-5% of the insured amount, with the addition of hard liability caps and co-payments. Most big companies decline those terms and end up self-insuring or mitigating risk through license aggregators like RPX.
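
To make the pooling and pricing points concrete, here is a minimal sketch in Python with entirely hypothetical numbers. It shows why a single pooled premium misprices everyone when risk profiles diverge, and what an individually rated premium looks like at the reported 2-5% of the insured amount. The company names, probabilities, costs, and policy limit are invented for illustration; none come from an actual policy.

```python
# Minimal sketch with invented numbers: why one pooled premium fails when
# patent-litigation risk varies wildly, versus individual rating at the
# reported 2-5% of the insured amount.

companies = {
    # name: expected annual loss = P(suit) * expected defense cost (all hypothetical)
    "low_risk_retailer": 0.01 * 500_000,
    "mid_risk_saas_vendor": 0.05 * 2_000_000,
    "high_risk_device_maker": 0.30 * 4_000_000,
}

# A pooled premium averages the expected losses and adds an insurer's margin.
margin = 1.2
pooled_premium = margin * sum(companies.values()) / len(companies)

for name, expected_loss in companies.items():
    verdict = "overpays" if pooled_premium > expected_loss else "underpays"
    print(f"{name}: expected loss ${expected_loss:,.0f}, "
          f"pooled premium ${pooled_premium:,.0f} ({verdict})")

# Individual rating instead: premium quoted at 2-5% of the coverage amount.
coverage = 1_000_000  # hypothetical policy limit
low, high = 0.02 * coverage, 0.05 * coverage
print(f"Individually rated premium on ${coverage:,.0f} of coverage: "
      f"${low:,.0f} to ${high:,.0f} per year, plus caps and co-payments")
```

The low-risk company would walk away from the pooled price, which is exactly the drop-out dynamic described above; individual rating avoids that, at the cost of the insurer underwriting each company one by one.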

But patent litigation insurance still seems to fascinate, especially academics. In a November 2018 paper titled The Effect of Patent Litigation Insurance, researchers examined the effect of recently introduced insurance on the rate of patent assertions. And they found (headline!) that the availability of defensive insurance was correlated with a significantly reduced likelihood that specific patents would be asserted. They conclude:

Whatever the merits of specific judicial and legislative reforms presently under consideration, our study suggests that it is also possible for market-based mechanisms to alter the behavior of patent enforcers. Indeed, it has been argued that one reason legislative and judicial reform is needed is because collective action is unlikely to cure the patent system’s ills because defending against claims of patent infringement generates uncompensated positive externalities. Our study suggests that defensive litigation insurance may be a viable market-based solution to complement, or supplant, other reforms that aim to reduce NPE activity.

The Effect of Patent Litigation Insurance at 59-60.

But there is a very important caveat: the insurance company selected in advance every patent it would insure against. IPISC sold two menus of “Troll Defense” insurance: one insuring against 200 specific patents, and one insuring against an additional 107 specific patents. Indeed, this is how the researchers were able to assess whether assertions went down. (Other patent litigation insurers use more complex policies that do not identify specific patents.) In addition, IPISC capped the defense insurance limit at $1M, which is well below the cost of litigating your average patent case. This is a very narrow space for patent litigation insurance!

IPISC must have had confidence it could accurately quantify the risk associated with these patents. The insured patents tended to have been asserted before by well-known patent assertion entities. I suspect the prior assertions settled quickly for relatively small amounts, because that’s how these entities tend to work. Indeed, that is the whole business model. But throw in the availability of insurance specific to these patents and now you have a signal that many potential defendants will not simply settle and move on. Wrench in the model; assertions go down.

So yes, this narrow type of patent litigation insurance might be useful if you are an entity concerned about harassment by specific patents in low-value patent litigation. Interesting study; your mileage may vary.

The American Harvest of Talent

Garry Kasparov on Tanitoluwa Adewumi, an eight-year-old Nigerian refugee living in a family shelter in New York, who won the NY State K-3 Chess Championship this month:

The United States is where the world’s talent comes to flourish. Since its inception, one of America’s greatest strengths has been its ability to attract and channel the energy of wave after wave of striving immigrants. It’s a machine that turns that vigor and diversity into economic growth. It may mean opening a dry-cleaners or a start-up that becomes Google. It could mean studying medicine, law or physics, or — as Tani says he would like to do — becoming the world’s youngest chess champion.

Many of the questions I received as world champion centered on why the Soviet Union produced so many great chess players. After the dissolution of the U.S.S.R., these questions were asked again along new national borders. Why did Russia, or Armenia, or my native Azerbaijan have so many grandmasters? Was there something in the water, the genes or the schools? And why weren’t there more chess prodigies from the United States (or wherever the questioner was from)?

My answer was always the same: Talent is universal, but opportunity is not, and talent cannot thrive in a vacuum.

The heart-warming tale of the 8-year-old chess champion is quintessentially American

One version of America’s exceptionalism is its ability to harvest raw talent from around the world, wherever it arises. How long will that last?

Human Interface Design in the Law

Fantastic essay by Tim Wu (with whom I do not often find common ground) on the importance of “human interface design” in the law:

The Affordable Care Act is a good example of the complexity problem. Yes, it was an important policy achievement, and yes, many of its problems can be rightly blamed on industry resistance and Republican efforts to dismantle it.

But the act is also exceptionally hard to understand and discouragingly daunting to make use of. An emphasis on “choice” and “transparency” resulted in a law that only a rational-choice theorist could love. The act made health insurance more complicated, not less, which is one reason that such a high percentage of medical bills go to paying administrative costs, and why the Affordable Care Act is much less popular than it could be.

The Democrats’ Complexity Problem

I am a bit disappointed in the partisan framing; it’s unnecessary. Progressives and Democrats aren’t the only policy makers with this problem. And the problem can be rightly framed as a fundamental lack of respect for the public:

But policy experts are rarely good at interface design, for we have a bad habit of assuming that people have unlimited time and attention and that to respect them means offering complete transparency and a multiplicity of choices. Real respect for the public involves appreciating what the public actually wants and needs. The reality is that most Americans are short on time and attention and already swamped by millions of daily tasks and decisions. They would prefer that the government solve problems for them — not create more work for them.

The public is entitled to demand that policy makers do the extra work of making laws understandable and decisions simple.

US Government Tries to Address AI

Recently there’s been a push by the U.S. government to figure out this AI thing. After all, China has a big long-term plan. We should have one too, right?

So President Trump issued an executive order in February, and the White House put together this glossy website to talk about AI initiatives.

It’s all just noise. Here’s what the executive order says:

  • We should continue to lead in AI by (a) leading; (b) developing standards; (c) training; (d) fostering public trust; and (e) promoting international cooperation.
  • All departments should pursue these objectives: (a) invest in AI; (b) invest in data; (c) reduce barriers to using AI (but not so much that it impacts safety etc.); (d) develop secure standards; (e) train people; and (f) develop an action plan!
  • The National Science and Technology Council Select Committee on Artificial Intelligence should coordinate all this.
  • AI R&D is a funding priority, depending on your mission of course.
  • Publish a bunch of stuff in the Federal Register asking for public comments, and consider these goals within 120-180 days of the order.

Bottom line: someone should really start thinking about this stuff and maybe we should spend some money on it? There is zero vision in any of this.