Attempted theft of trade secrets is also illegal

Camilla Hrdy points out that you can’t sue for trade secret theft if the information stolen is not actually protected as a trade secret. But you can charge someone with attempted trade secret theft even if the information wasn’t a trade secret, which means you can go to jail for attempted trade secret theft even though you couldn’t be sued for it. That is a weird inversion.

The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an “attempt” crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind.

The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3) and (4), provide that “[w]hoever, with intent to convert a trade secret … to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will, injure any owner of that trade secret, knowingly—steals… obtains… possesses… [etcetera]” a trade secret, or “attempts to” do any of those things, “shall… be fined under this title or imprisoned not more than 10 years, or both…”

Anthony Levandowski: Is Being a Jerk a Crime?

How to Become a Federal Criminal

It’s super easy and you may already be one!

You may know that you are required to report if you are traveling to or from the United States with $10,000 or more in cash. Don’t hop over the Canadian border to buy a used car, for example, or the Feds may confiscate your cash (millions of dollars are confiscated every year). Did you also know that you can’t leave the United States with more than $5 in nickels??? That’s a federal crime punishable by up to five years in prison. How about carrying a metal detector in a national park? Up to six months in prison. And God forbid you should use your metal detector and find something more than 100 years old: that can put you away for up to a year. Also illegal in a national park? Making unreasonable gestures to a passing horse.

How to Become a Federal Criminal

Worth re-linking to one of my favorite legal lectures of all time: Don’t Talk to the Police. Even if you are going to tell the truth, even if you did nothing wrong. There is no way it will help you.

France bans some litigation analytics

In what appears to be a breathtaking overreaction to a privacy concern, France has banned statistical reporting about individual judges’ decisions:

The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.

A key passage of the new law states:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’ 

France Bans Judge Analytics, 5 Years In Prison For Rule Breakers

This raises many issues of free speech, transparency, and just plain old protectionism.

Bright Line Trademark Rule on Likelihood of Confusion

I’m a sucker for the predictability of a bright line rule, and Camilla Hrdy at the Written Description blog describes a possible de facto rule about the likelihood of confusion in trademark cases:

In trademark law, infringement occurs if defendant’s use of plaintiff’s trademark is likely to cause confusion as to the source of defendant’s product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an “appreciable number” of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required. There is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to assess whether confusion is likely, at some point, to occur.

In theory.

But in practice, Bernstein asserted, there is a magic number: it’s around fifteen percent. Courts will often state that a survey finding confusion of 15% or more is sufficient to support a likelihood of confusion, while a finding under 15% suggests no likelihood of confusion.

Likelihood of Confusion: Is 15% The Magic Number?

There are of course many confounding factors including whether this 15% applies to “gross confusion” (total confusion that includes noise from other factors) or “net confusion” (caused only by use of the trademark), and problems with survey evidence in general. But I’ll briefly fantasize about being asked what “likelihood of confusion” means in trademark law and answering, “15%. It’s just 15%.”
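Since the gross/net distinction can decide whether a survey clears the 15% line, here is a minimal sketch of the standard test-and-control survey arithmetic. All numbers are invented for illustration:

```python
# Hypothetical confusion-survey arithmetic; all numbers invented.
# In a standard test/control design, the control cell sees a
# non-infringing mark, so its "confusion" rate estimates background noise.

test_confused, test_total = 62, 200        # respondents shown the accused mark
control_confused, control_total = 24, 200  # respondents shown a control mark

gross_confusion = test_confused / test_total  # 0.31 -> "gross" confusion
noise = control_confused / control_total      # 0.12 -> background noise
net_confusion = gross_confusion - noise       # 0.19 -> "net" confusion

THRESHOLD = 0.15  # the ~15% figure courts reportedly treat as meaningful

print(f"gross: {gross_confusion:.1%}, net: {net_confusion:.1%}")
print("supports likelihood of confusion"
      if net_confusion >= THRESHOLD
      else "suggests no likelihood of confusion")
```

On these invented numbers, gross confusion (31%) clears 15% comfortably, while net confusion (19%) clears it only after subtracting the 12% of respondents who were “confused” by a mark that infringes nothing. Shift the noise a few points and the gross/net choice decides the case.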

Sixth Circuit says chalking tires is an unreasonable search

In Taylor v. City of Saginaw, the Sixth Circuit U.S. Court of Appeals (covering Kentucky, Michigan, Ohio, and Tennessee) has concluded that the common – indeed, ubiquitous! – practice of tracking how long a car has been parked by chalking its tires is unconstitutional:

Alison Taylor, a frequent recipient of parking tickets, sued the City and its parking enforcement officer Tabitha Hoskins, alleging that chalking violated her Fourth Amendment right to be free from unreasonable search. The City moved to dismiss the action. The district court granted the City’s motion, finding that, while chalking may have constituted a search under the Fourth Amendment, the search was reasonable. Because we chalk this practice up to a regulatory exercise, rather than a community-caretaking function, we REVERSE.

This is a great example of a court following individual precedent down a winding path to a conclusion that is actually very strange. Here’s how they got there:

  1. Start with the Constitution. The Fourth Amendment to the Constitution protects the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”
  2. Is it a search? Yes. But really only because the Supreme Court recently decided that attaching GPS devices to cars is a search. Attaching GPS devices is a search because it is a trespass. And chalking is also a trespass because the common law says that “acting upon a chattel as intentionally to cause it to come in contact with some other object” is a trespass. So chalking is a trespass to obtain information. And that makes it a search.
  3. Is it unreasonable? We assume so. The government bears the burden of proving that the search was not unreasonable, and this is where they fell down. First, the government said people have a reduced expectation of privacy in cars. Nope, the Court says, that analysis only applies when you have a warrant or probable cause, and the government didn’t have either. Second, the government said the parking officers weren’t operating as law enforcement; they were operating as “community caretakers” and another standard applies. Nope, the Court says, the government is actually enforcing laws so that doesn’t apply either. Hearing no other arguments, the Court concludes the search was unreasonable.
  4. And now tire chalking is an unconstitutional, unreasonable search.

I’m not sure the drafters of the Fourth Amendment would agree with this analysis. Chalking a tire doesn’t seem to be either unreasonable or a search. And of course there are a number of other ways to argue this case, including with the “administrative search exception,” which the government failed to raise. It’s possible this case gets reviewed.

On the other hand, plenty of other options are available to parking enforcement officers, including video, photos, parking meters, and plain old note-taking!

Summary of the DETOUR Act

On April 9, 2019, Senators Warner (D-VA) and Fischer (R-NE) introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act. The bill would prohibit user interface “dark patterns” around user consent, which trick or nudge (depending on your perspective) users into consenting to things that may not be in their best interests.

There is a whole website devoted to dark patterns, and it is pretty informative. A classic example is an interface deliberately designed to make it hard to figure out how not to sign up for a service.

The DETOUR Act gives the FTC power to regulate user interfaces that have “the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.” It also prohibits the segmentation of users into groups for behavioral experimentation without informed consent.

The Act would only apply to “large online operators,” defined as having more than 100M authenticated users in any 30 day period. (Small online operators can still trick people?) Large online operators would also have to disclose their experiments every 90 days and establish Independent Review Boards to oversee any such user research.
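The 100M line is a rolling test (“any 30 day period”), not a one-time registration count. Here is a minimal sketch of the check, assuming an invented log format that maps each day to the set of user IDs authenticated that day:

```python
# Hypothetical "large online operator" check under the DETOUR Act:
# more than 100M authenticated users in any 30-day period.
# The daily_users log format is invented for illustration.

from datetime import date, timedelta

THRESHOLD = 100_000_000

def is_large_operator(daily_users: dict[date, set[str]]) -> bool:
    """daily_users maps each day to the set of user IDs authenticated that day."""
    days = sorted(daily_users)
    for i, start in enumerate(days):
        window: set[str] = set()
        for day in days[i:]:
            if day - start >= timedelta(days=30):
                break
            window |= daily_users[day]  # count unique users across the window
        if len(window) > THRESHOLD:
            return True
    return False
```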

Summary of the proposed Algorithmic Accountability Act of 2019

Senators Wyden (D-OR) and Booker (D-NJ) have proposed a Senate bill that would require big businesses and data brokers to conduct “impact assessments” for (1) “high-risk automated decision systems”; and (2) “high-risk information systems”.

The bill essentially gives the FTC power to promulgate regulations requiring companies with a lot of personal data to conduct studies of how their use of that data impacts people. Think of it as the equivalent of an environmental impact study for big data, or the US equivalent of GDPR’s Data Protection Impact Assessment process. In fact, it is very similar to the GDPR requirement.

Here’s a summary of the key terms:

Covered entities. The bill would apply to anyone that (a) receives more than $50M in revenue over the preceding three-year period; (b) possesses personal information on more than 1M consumers or consumer devices; or (c) is a “data broker,” defined as possessing personal information on individuals that are not customers or employees as a substantial part of business. (This three-part test is sketched in code after the summary.)

Definition of personal information. Broadly defined as any information “reasonably linkable to a specific consumer or consumer device.”

Impact assessments. At a minimum, requires a description of the system, design, training process, data, purpose, relative benefits and costs, data minimization practices, retention policies, access to data by consumers, ability of consumers to correct or object to the data, sharing of data, risks of inaccurate, biased, unfair, or discriminatory decisions, and safeguards to minimize risks.

Systems which must be evaluated. Must evaluate any system that “poses a significant risk” to the privacy and security of personal information or results in inaccurate, unfair, biased, or discriminatory decisions, especially if the system alters legal rights or profiles “sensitive aspects” of consumer lives such as protected class, criminal convictions, work performance, economic situation, health, personal preferences, interests, behavior, location, etc.

Enforcement. Enforced by the FTC or the Attorney General of any State upon notice to the FTC.
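To make the covered-entity test concrete, here is a minimal sketch of the three-part trigger as a predicate. The field names are invented for illustration:

```python
# Hypothetical covered-entity test under the proposed Algorithmic
# Accountability Act of 2019, per the summary above. Field names invented.

from dataclasses import dataclass

@dataclass
class Company:
    three_year_revenue: float  # revenue over the preceding 3 years, in USD
    consumer_records: int      # consumers/devices whose personal info is held
    is_data_broker: bool       # holds data on non-customers/non-employees
                               # as a substantial part of business

def is_covered_entity(c: Company) -> bool:
    # Any one of the three prongs suffices.
    return (c.three_year_revenue > 50_000_000
            or c.consumer_records > 1_000_000
            or c.is_data_broker)
```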

Elizabeth Warren and the Corporate Executive Accountability Act

Elizabeth Warren has introduced the Corporate Executive Accountability Act and is pushing it in a Washington Post Op-Ed:

I’m proposing a law that expands criminal liability to any corporate executive who negligently oversees a giant company causing severe harm to U.S. families. We all agree that any executive who intentionally breaks criminal laws and leaves a trail of smoking guns should face jail time. But right now, they can escape the threat of prosecution so long as no one can prove exactly what they knew, even if they were willfully negligent.

If top executives knew they would be hauled out in handcuffs for failing to reasonably oversee the companies they run, they would have a real incentive to better monitor their operations and snuff out any wrongdoing before it got out of hand.

Elizabeth Warren: Corporate executives must face jail time for overseeing massive scams

The bill itself is pretty short. Here’s a summary:

  • Focuses on executives in big business. Applies to any executive officer of a corporation with more than $1B in annual revenue. The definition of executive officer is the same as under traditional federal regulations, plus anyone who “has the responsibility and authority to take necessary measures to prevent or remedy violations.”
  • Makes execs criminally liable for a lot of things. Makes it criminal for any executive officer “to negligently permit or fail to prevent” any crime under Federal or State law, or any civil violation that “affects the health, safety, finances, or personal data” of at least 1% of the population of any state or the US.
  • Penalty. Convicted executives go to prison for up to a year, or up to three years on subsequent offenses.

This is pretty breathtaking in its sweep of criminal liability. It criminalizes negligence. And it applies that negligence standard to any civil violation that “affects” the health, safety, finances, or personal data of at least 1% of a state.
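To see how low that 1% bar actually sits, here is a quick sketch of the threshold arithmetic, using rough population figures:

```python
# Rough arithmetic for the bill's "1% of the population of any State
# or of the United States" trigger. Populations are approximate.

populations = {
    "Wyoming": 580_000,        # smallest state
    "California": 39_500_000,
    "United States": 330_000_000,
}

people_affected = 150_000  # e.g., a modest data breach

for place, pop in populations.items():
    threshold = 0.01 * pop
    hit = "TRIGGERED" if people_affected >= threshold else "not triggered"
    print(f"{place}: 1% is {threshold:,.0f} people -> {hit}")
```

On these rough numbers, a breach touching 150,000 people clears 1% of Wyoming by a mile but not 1% of California or of the country. Because any single state can trigger the provision, the smallest states set the effective floor at roughly 5,800 people.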

Under this standard, every single executive at Equifax, Facebook, Yahoo, Target, etc. risks jail for up to a year. Just read this list. It will be interesting to see where this goes.

Facebook and Housing Discrimination

The Department of Housing and Urban Development sued Facebook for housing discrimination. The allegations are fascinating and, although we mostly knew all of this before (based on reporting by ProPublica), I think most people do not realize how precisely advertisements can be targeted on Facebook. For example:

Respondent [Facebook] has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area. Respondent also provides drop-down menus and search boxes to exclude or include (i.e., limit the audience of an ad exclusively to) people who share specified attributes. Respondent has offered advertisers hundreds of thousands of attributes from which to choose, for example to exclude “women in the workforce,” “moms of grade school kids,” “foreigners,” “Puerto Rico Islanders,” or people interested in “parenting,” “accessibility,” “service animal,” “Hijab Fashion,” or “Hispanic Culture.” Respondent also has offered advertisers the ability to limit the audience of an ad by selecting to include only those classified as, for example, “Christian” or “Childfree.”

Complaint at paragraph 14.
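Mechanically, this kind of micro-targeting is just set filtering over user attributes. Here is a minimal sketch of include/exclude audience filtering; the function and data layout are invented and are not Facebook’s actual system:

```python
# Hypothetical sketch of attribute-based ad targeting with include and
# exclude filters, as the complaint describes. Not Facebook's real API.

def eligible_audience(users, include=None, exclude=None):
    """Return users matching every include attribute and no exclude attribute."""
    include = set(include or [])
    exclude = set(exclude or [])
    return [u for u in users
            if include <= u["attributes"]           # subset: has all includes
            and not (exclude & u["attributes"])]    # disjoint: has no excludes

users = [
    {"id": 1, "attributes": {"renter", "parenting"}},
    {"id": 2, "attributes": {"renter", "Hispanic Culture"}},
]

# Excluding an attribute silently removes those users from the audience:
print(eligible_audience(users, include={"renter"}, exclude={"Hispanic Culture"}))
# -> only user 1 remains
```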

But Facebook’s system doesn’t just enable this kind of micro-targeting. It also refuses to show ads to users that its system judges as unlikely to interact with the ads, even if the advertisers want to target those users:

Even if an advertiser tries to target an audience that broadly spans protected class groups, Respondent’s ad delivery system will not show the ad to a diverse audience if the system considers users with particular characteristics most likely to engage with the ad. If the advertiser tries to avoid this problem by specifically targeting an unrepresented group, the ad delivery system will still not deliver the ad to those users, and it may not deliver the ad at all.

Complaint at paragraph 19.

Thus, the allegation is that the system functions “just like an advertiser who intentionally targets or excludes users based on their protected class.”
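The delivery-side allegation is subtler and worth a toy model: even when an advertiser targets everyone, an engagement-optimizing delivery system concentrates impressions wherever predicted engagement is highest, so any correlation between those predictions and a protected class skews who actually sees the ad. All numbers below are invented:

```python
# Toy model of engagement-optimized ad delivery. The advertiser targets
# all 100 users, but the system spends its 30 impressions on the users
# with the highest predicted engagement -- reintroducing group skew.

users = [
    {"id": i,
     "group": "A" if i % 2 else "B",
     # Invented correlation: the model scores group A higher.
     "predicted_engagement": 0.9 if i % 2 else 0.3}
    for i in range(100)
]

budget = 30  # impressions available
shown = sorted(users, key=lambda u: -u["predicted_engagement"])[:budget]

delivered = {"A": 0, "B": 0}
for u in shown:
    delivered[u["group"]] += 1
print(delivered)  # {'A': 30, 'B': 0}: total skew despite neutral targeting
```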

There is an AI angle to this as well. The complaint specifically references Facebook’s “machine learning and other prediction techniques” as enabling this kind of targeting. And while folks may disagree on whether this is “AI” or just sophisticated statistical analysis, it is a concrete allegation of real-world harm caused by big data and computation. And I think it is an interesting case study in whether we need extra laws to prevent AI harm.

Here is a hypothesis: our existing laws prohibiting various types of harm will work just fine or better in the AI context. Housing discrimination is already illegal, whether you do it subjectively and intentionally or objectively by sophisticated computation. And in fact, it’s easier to prove the latter. The AI takes input and outputs a result. That result is objective and (with the help of legal process) transparent. The AI doesn’t rationalize its decisions or try to explain away its hidden bias because it fears social judgment. If it operates in a biased manner, we will see it and we can fix it.

There is a lot of anxiety around whether our laws are sufficient for the AI future we envision. Will product liability laws be sufficient to determine who is at fault when a self-driving vehicle crashes? Will anti-discrimination laws be sufficient to disincentivize AI-facilitated bias? Yes, yes I think they will. Perhaps the law is more robust than we fear.

Patent Litigation Insurance

Patent litigation insurance definitely exists, and every so often a casual observer will be confronted by the enormous cost of litigating a patent case and suggest that maybe you should get insurance. After all, there are a lot of other kinds of insurance for the normal hazards of doing business: product liability, business interruption, even cyber attack. So why not patent litigation insurance?

The problem is that insurance works by grouping a whole bunch of entities together that all have similar risk, and then figuring out how to get them to share that risk while still making some money on the premiums. That doesn’t work for patent litigation because companies have wildly different risk profiles. It is impossible to take a group of companies, somehow average out their risk of patent litigation, and then calculate a premium that both covers that average risk and makes you some (but not too much) money on the side. The companies will either overpay or underpay.
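A toy numeric example makes the pooling problem concrete. The probabilities and costs below are invented:

```python
# Toy illustration of why pooled patent-litigation insurance unravels
# when risk profiles differ wildly. All numbers invented.

low_risk_expected_loss  = 0.01 * 3_000_000  # 1% chance of a $3M defense
high_risk_expected_loss = 0.20 * 3_000_000  # 20% chance of a $3M defense

# A pooled premium must cover the average expected loss, plus a loading
# factor so the insurer makes some (but not too much) money:
pooled_premium = 1.2 * (low_risk_expected_loss + high_risk_expected_loss) / 2

print(f"low-risk expected loss:  ${low_risk_expected_loss:>9,.0f}")   # $30,000
print(f"high-risk expected loss: ${high_risk_expected_loss:>9,.0f}")  # $600,000
print(f"pooled premium per firm: ${pooled_premium:>9,.0f}")           # $378,000

# The low-risk firm overpays by more than 10x and exits; only high
# risks remain, the premium climbs, and the pool unravels
# (classic adverse selection).
```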

As a result, patent litigation insurers take a look at your individual risk profile, figure they can estimate the risk better than you can, and then charge an individualized premium to make sure they are covered. Public reporting places the annual cost of patent litigation insurance at about 2-5% of the insured amount (so roughly $20,000 to $50,000 a year for $1M in coverage), plus hard liability caps and co-payments. Most big companies decline those terms and end up self-insuring or mitigating risk through license aggregators like RPX.

But patent litigation insurance still seems to fascinate, especially academics. In a November 2018 paper titled The Effect of Patent Litigation Insurance, researchers examined the effect of recently introduced insurance on the rate of patent assertions. And they found (headline!) that the availability of defensive insurance was correlated with a significantly reduced likelihood that specific patents would be asserted. They conclude:

Whatever the merits of specific judicial and legislative reforms presently under consideration, our study suggests that it is also possible for market-based mechanisms to alter the behavior of patent enforcers. Indeed, it has been argued that one reason legislative and judicial reform is needed is because collective action is unlikely to cure the patent system’s ills because defending against claims of patent infringement generates uncompensated positive externalities. Our study suggests that defensive litigation insurance may be a viable market-based solution to complement, or supplant, other reforms that aim to reduce NPE activity.

The Effect of Patent Litigation Insurance at 59-60.

But there is a very important caveat: the insurance company selected in advance every patent they would insure against. IPISC sold two menus of “Troll Defense” insurance: one for insurance against 200 specific patents, and one for insurance against an additional 107 specific patents. Indeed, this is how the researchers were able to assess whether assertions went down. (Other patent litigation insurers use more complex policies that do not identify specific patents.) In addition, IPISC capped the defense insurance limit at $1M, which is well below the cost of litigating your average patent case. This is a very narrow space for patent litigation insurance!

IPISC must have had confidence it could accurately quantify the risk associated with these patents. The insured patents tended to have been asserted before by well-known patent assertion entities. I suspect the prior assertions settled quickly for relatively small amounts, because that’s how these entities tend to work. Indeed, that is the whole business model. But throw in the availability of insurance specific to these patents and now you have a signal that many potential defendants will not simply settle and move on. Wrench in the model, assertions go down.

So yes, this narrow type of patent litigation insurance might be useful if you are an entity concerned about harassment by specific patents in low value patent litigation. Interesting study, your mileage may vary.