Do as I say, not as I do: robot edition

Deep learning has revolutionized artificial intelligence. We’ve shifted from telling computers how to do things to telling them what to do and letting them figure out the how. For many tasks (e.g., object identification) we can’t really articulate the how at all. It’s easier to tell a system, “This is a ball. When you see this, identify it as a ball. Now here are a million more examples.” And the system learns pretty well.

Except when it doesn’t. There is a burgeoning new science of trying to tell artificial intelligence systems what exactly we want them to do:

Told to optimize for speed while racing down a track in a computer game, a car pushes the pedal to the metal … and proceeds to spin in a tight little circle. Nothing in the instructions told the car to drive straight, and so it improvised.

[. . . . .]

The team’s new system for providing instruction to robots — known as reward functions — combines demonstrations, in which humans show the robot what to do, and user preference surveys, in which people answer questions about how they want the robot to behave.

“Demonstrations are informative but they can be noisy. On the other hand, preferences provide, at most, one bit of information, but are way more accurate,” said Sadigh. “Our goal is to get the best of both worlds, and combine data coming from both of these sources more intelligently to better learn about humans’ preferred reward function.”

Researchers teach robots what humans want

This is critical research, and probably under-reported. If robots (like people) are going to learn mainly by mimicking humans, what human behaviors should they mimic?

People want autonomous cars to drive less aggressively than they themselves do. Presumably they also want AI systems to be less racist, sexist, and violent than the humans those systems learn from. Getting the right reward function is critical. Getting it wrong may be immoral.
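For the curious, here is a rough sketch of what combining demonstrations with preference queries can look like in the simplest setting: a reward that is linear in trajectory features. To be clear, this is my own illustration, not the Stanford team’s actual algorithm, and the features and numbers are invented.

```python
# A minimal sketch (not the researchers' method): learn a linear reward
# w . phi(trajectory) from two signal types. Demonstrations pull w toward
# the demonstrated feature counts (a MaxEnt-IRL-style surrogate), while
# each preference ("A over B") adds a Bradley-Terry logistic term, which
# is roughly the "one bit of information" Sadigh mentions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(xs):
    xs = np.asarray(xs, dtype=float)
    e = np.exp(xs - xs.max())
    return e / e.sum()

def reward(w, phi):
    return w @ phi  # linear reward over trajectory features

def grad_step(w, demos, prefs, candidates, lr=0.05):
    g = np.zeros_like(w)
    # Demonstration term: demonstrated features minus the expected
    # features under the current reward's softmax over candidates.
    for phi_d in demos:
        probs = softmax([reward(w, c) for c in candidates])
        g += phi_d - sum(p * c for p, c in zip(probs, candidates))
    # Preference term: gradient of log P(a preferred over b) under
    # the Bradley-Terry model.
    for phi_a, phi_b in prefs:
        p_a = 1.0 / (1.0 + np.exp(reward(w, phi_b) - reward(w, phi_a)))
        g += (1.0 - p_a) * (phi_a - phi_b)
    return w + lr * g

# Toy data: features are (speed, lane-keeping, comfort), all made up.
candidates = [rng.normal(size=3) for _ in range(20)]
demos = [np.array([0.5, 1.0, 0.8])]             # what the human showed
prefs = [(np.array([0.4, 0.9, 0.9]),            # preferred trajectory...
          np.array([1.2, 0.1, 0.2]))]           # ...over this one

w = np.zeros(3)
for _ in range(200):
    w = grad_step(w, demos, prefs, candidates)
print("learned reward weights:", w)
```

The point of combining the two terms is exactly what the quote says: demonstrations carry a lot of (noisy) signal about the whole reward, while each preference query contributes a small but clean correction.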

Patent Litigation Insurance

Patent litigation insurance definitely exists, and every so often a casual observer, confronted by the enormous cost of litigating a patent case, will suggest that maybe you should just buy insurance. After all, there is insurance for plenty of other ordinary business hazards: product liability, business interruption, even cyber attack. So why not patent litigation insurance?

The problem is that insurance works by pooling a bunch of entities that all have similar risk, and then figuring out how to get them to share that risk while still making some money on the premiums. That doesn’t work for patent litigation because companies have wildly different risk profiles. You can’t take a group of companies, average out their risk of patent litigation, and calculate a single premium that both covers the average risk and makes you some (but not too much) money on the side. The low-risk companies will overpay and drop out, the high-risk companies will underpay and stay, and the pool unravels (see the toy numbers below).
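To make that adverse-selection arithmetic concrete, here is a toy calculation. The figures are entirely invented; nothing below reflects actual market data.

```python
# Hypothetical figures only: why one pooled premium fails when patent
# litigation risk varies wildly across firms.
def expected_loss(p_suit, cost):
    return p_suit * cost

low_risk = expected_loss(0.01, 3_000_000)    # $30,000 expected loss
high_risk = expected_loss(0.20, 3_000_000)   # $600,000 expected loss

# One pooled premium covering the average risk, plus a 10% margin.
premium = (low_risk + high_risk) / 2 * 1.1   # $346,500

print(f"pooled premium: ${premium:,.0f}")
print(f"low-risk firm overpays by ${premium - low_risk:,.0f}")
print(f"high-risk firm underpays by ${high_risk - premium:,.0f}")
# The low-risk firm pays more than 11x its expected loss, so it exits
# the pool; only high-risk firms remain, and the premium must spiral up.
```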

As a result, patent litigation insurers look at your individual risk profile, figure they can estimate the risk better than you can, and charge an individualized premium to make sure they are covered. Public reporting places the annual cost of patent litigation insurance at about 2-5% of the insured amount, plus hard liability caps and co-payments. Most big companies decline those terms and end up self-insuring or mitigating risk through license aggregators like RPX.

Still, patent litigation insurance seems to fascinate, especially academics. In a November 2018 paper titled The Effect of Patent Litigation Insurance, researchers examined the effect of recently introduced insurance on the rate of patent assertions. And they found (headline!) that the availability of defensive insurance was correlated with a significantly reduced likelihood that specific patents would be asserted. They conclude:

Whatever the merits of specific judicial and legislative reforms presently under consideration, our study suggests that it is also possible for market-based mechanisms to alter the behavior of patent enforcers. Indeed, it has been argued that one reason legislative and judicial reform is needed is because collective action is unlikely to cure the patent system’s ills because defending against claims of patent infringement generates uncompensated positive externalities. Our study suggests that defensive litigation insurance may be a viable market-based solution to complement, or supplant, other reforms that aim to reduce NPE activity.

The Effect of Patent Litigation Insurance at 59-60.

But there is a very important caveat: the insurance company selected in advance every patent it would insure against. IPISC sold two menus of “Troll Defense” insurance: one covering 200 specific patents, and one covering an additional 107 specific patents. Indeed, that is how the researchers were able to assess whether assertions went down. (Other patent litigation insurers use more complex policies that do not identify specific patents.) In addition, IPISC capped the defense insurance limit at $1M, well below the cost of litigating an average patent case. This is a very narrow space for patent litigation insurance!

IPISC must have had confidence it could accurately quantify the risk associated with these patents. The insured patents tended to have been asserted before by well-known patent assertion entities. I suspect the prior assertions settled quickly for relatively small amounts, because that is how these entities tend to work; indeed, that is the whole business model. But throw in the availability of insurance specific to these patents and now you have a signal that many potential defendants will not simply settle and move on. Wrench in the model, assertions go down.

So yes, this narrow type of patent litigation insurance might be useful if you are an entity worried about harassment by specific patents in low-value patent litigation. Interesting study; your mileage may vary.

Food for Thought

From The Atlantic’s March 2019 issue:

Fish pain is something different from our own pain. In the elaborate mirrored hall that is human consciousness, pain takes on existential dimensions. Because we know that death looms, and grieve for the loss of richly imagined futures, it’s tempting to imagine that our pain is the most profound of all suffering. But we would do well to remember that our perspective can make our pain easier to bear, if only by giving it an expiration date. When we pull a less cognitively blessed fish up from the pressured depths too quickly, and barometric trauma fills its bloodstream with tissue-burning acid, its on-deck thrashing might be a silent scream, born of the fish’s belief that it has entered a permanent state of extreme suffering.

Scientists Are Totally Rethinking Animal Cognition

This is the right answer!

Tyler Cowen, in a Bloomberg column about blackmail:

So what are some lessons from the apparent greater prevalence of blackmail risk?


First: Be good! Minimize the chance that someone can blackmail you.

https://www.bloomberg.com/opinion/articles/2019-02-13/bezos-and-national-enquirer-seven-lessons-about-blackmail

Generational Change

Yearbooks as horror shows:

Although they may appear to be innocuous collections of school memories, yearbooks have fueled major political controversies in recent months. Whether it be the racist photograph of a student in blackface and another in a Ku Klux Klan costume on Virginia Gov. Ralph Northam’s medical school yearbook page or Supreme Court Justice Brett M. Kavanaugh’s high school yearbook jokes about drinking and sex, decades-old school publications have returned to public scrutiny for politicians, and it’s guaranteed that Northam’s will not be the last.

Why it’s shocking to look back at med school yearbooks from decades ago

If 50-year-old yearbook pages are horrifying now, will 50-year-old Facebook posts be equally horrifying to our children? My guess is probably not, but it is incredible to see what was normal 50 years ago, and remarkable how much has changed.

See also Grandma’s #MeToo Stories Fucking Horrifying.

Bad Blood

Great book. But I have a different question: why didn’t even one of the in-house counsel who worked there make a noisy exit? For what it’s worth, my belief is that in-house lawyers serve as the moral compass of their companies. Yes, that can sound overly optimistic, even conceited, but who else is better placed?

The Value of Loyalty

Yesterday I amplified Juan Pablo Villarino’s comments on the importance of being able to demonstrate your value to your community.

Also yesterday, David Brooks wrote these words about the philosophy of Josiah Royce, a late-19th-century American philosopher and historian:

Royce argued that meaningful lives are marked, above all, by loyalty. Out on the frontier, he had seen the chaos and anarchy that ensues when it’s every man for himself, when society is just a bunch of individuals searching for gain. He concluded that people make themselves miserable when they pursue nothing more than their “fleeting, capricious and insatiable” desires.

So for him the good human life meant loyalty, “the willing and practical and thoroughgoing devotion of a person to a cause.”

A person doesn’t have to invent a cause, or find it deep within herself. You are born into a world of causes, which existed before you were born and will be there after you die. You just have to become gripped by one, to give yourself away to it realizing that the cause is more important than your individual pleasure or pain.

. . . . .

Royce’s philosophy is helpful with the problem we have today. How does the individual fit into the community and how does each community fit into the whole? He offered a shift in perspective. When evaluating your life, don’t ask, “How happy am I?” Ask, “How loyal am I, and to what?”

Your Loyalties Are Your Life

What are the rules for AI?

No one knows. But lots of folks are asking.

Microsoft has one answer. Google has another, similar answer. The Future of Life Institute has another (23-point!) list of ethical rules.

These rules have a lot of overlap, but also a lot of noise. Of course systems should be safe and reliable and just and secure; that is marketing copy, and no one disagrees. We need to figure out the hard rules: How transparent should we require AI systems to be? How explainable? That is where it gets hard.

In any case, this year seems to be the one for forming advisory boards to figure out what rules we should have around (1) letting AIs defend / kill us; (2) letting AIs treat us; and (3) maintaining US dominance in AI.