Bright Line Trademark Rule on Likelihood of Confusion

I’m a sucker for the predictability of a bright line rule, and Camilla Hrdy at the Written Description blog describes a possible de facto rule about the likelihood of confusion in trademark cases:

In trademark law, infringement occurs if defendant’s use of plaintiff’s trademark is likely to cause confusion as to the source of defendant’s product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an “appreciable number” of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required. There is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to assess whether confusion is likely, at some point, to occur.

In theory.

But in practice, Bernstein asserted, there is a magic number: it’s around fifteen percent. Courts will often state that a survey finding 15% or more is sufficient to support likelihood of confusion, while under 15% suggests no likelihood of confusion.

Likelihood of Confusion: Is 15% The Magic Number?

There are, of course, many confounding factors, including whether this 15% applies to “gross confusion” (total confusion that includes noise from other factors) or “net confusion” (confusion caused only by use of the trademark), and problems with survey evidence in general. But I’ll briefly fantasize about being asked what “likelihood of confusion” means in trademark law and answering, “15%. It’s just 15%.”
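Survey experts typically separate gross from net confusion by running a control cell alongside the test cell. As a rough sketch (the numbers and function name here are my own hypotheticals, not from any actual survey), net confusion is the test-cell rate minus the control-cell noise rate:

```python
# Hypothetical survey math: "net confusion" is typically derived from a
# test cell (shown the accused mark) and a control cell (shown a similar
# but non-infringing mark, to measure background noise).

def net_confusion(test_confused, test_total, control_confused, control_total):
    """Net confusion = gross confusion in the test cell minus the
    noise rate measured in the control cell."""
    gross = test_confused / test_total
    noise = control_confused / control_total
    return gross - noise

# Example: 62 of 200 test respondents confused (31% gross), but 28 of 200
# control respondents were "confused" even by a non-infringing control (14%).
rate = net_confusion(62, 200, 28, 200)
print(f"Net confusion: {rate:.0%}")
```

Whether the purported 15% line is applied to the gross number or the net number can obviously flip the outcome of the same survey.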

Sixth Circuit says chalking tires is an unreasonable search

In Taylor v. City of Saginaw, the Sixth Circuit U.S. Court of Appeals (covering Kentucky, Michigan, Ohio, and Tennessee) has concluded that the common – indeed, ubiquitous! – practice of tracking how long a car has been parked by chalking its tires is unconstitutional:

Alison Taylor, a frequent recipient of parking tickets, sued the City and its parking enforcement officer Tabitha Hoskins, alleging that chalking violated her Fourth Amendment right to be free from unreasonable search. The City moved to dismiss the action. The district court granted the City’s motion, finding that, while chalking may have constituted a search under the Fourth Amendment, the search was reasonable. Because we chalk this practice up to a regulatory exercise, rather than a community-caretaking function, we REVERSE.

This is a great example of a court following individual precedent down a winding path to a conclusion that is actually very strange. Here’s how they got there:

  1. Start with the Constitution. The Fourth Amendment to the Constitution protects the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”
  2. Is it a search? Yes. But really only because the Supreme Court recently decided that attaching GPS devices to cars is a search. Attaching GPS devices is a search because it is a trespass. And chalking is also a trespass because the common law says that “so acting upon a chattel as intentionally to cause it to come in contact with some other object” is a trespass. So chalking is a trespass to obtain information. And that makes it a search.
  3. Is it unreasonable? We assume so. The government bears the burden of proving that the search was not unreasonable, and this is where they fell down. First, the government said people have a reduced expectation of privacy in cars. Nope, the Court says, that analysis only applies when you have a warrant or probable cause, and the government didn’t have either. Second, the government said the parking officers weren’t operating as law enforcement; they were operating as “community caretakers” and another standard applies. Nope, the Court says, the government is actually enforcing laws so that doesn’t apply either. Hearing no other arguments, the Court concludes the search was unreasonable.
  4. And now tire chalking is an unconstitutional, unreasonable search.

I’m not sure the drafters of the Fourth Amendment would agree with this analysis. Chalking a tire doesn’t seem to be either unreasonable or a search. And of course there are a number of other ways to argue this case, including with the “administrative search exception,” which the government failed to raise. It’s possible this case gets reviewed.

On the other hand, plenty of other options are available to parking enforcement officers, including video, photos, parking meters, and taking notes!

Summary of the DETOUR Act

On April 9, 2019, Senators Warner (D-VA) and Fischer (R-NE) introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act. The bill would criminalize user interface “dark patterns” around user consent, which trick or nudge (depending on your perspective) users into consenting to things that may not be in their best interests.

There is a whole website devoted to dark patterns, and it is pretty informative. One example of a dark pattern is an interface that makes it hard to figure out how to avoid signing up for a service.

The DETOUR Act gives the FTC power to regulate user interfaces that have “the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.” It also prohibits the segmentation of users into groups for behavioral experimentation without informed consent.

The Act would only apply to “large online operators,” defined as having more than 100M authenticated users in any 30-day period. (Small online operators can still trick people?) Large online operators would also have to disclose their experiments every 90 days and establish Independent Review Boards to oversee any such user research.

Summary of the proposed Algorithmic Accountability Act of 2019

Senators Wyden (D-OR) and Booker (D-NJ) have proposed a Senate bill that would require big businesses and data brokers to conduct “impact assessments” for (1) “high-risk automated decision systems”; and (2) “high-risk information systems”.

The bill essentially gives the FTC power to promulgate regulations requiring companies with a lot of personal data to conduct studies of how their use of that data impacts people. Think of it as the equivalent of an environmental impact study for big data, or the US equivalent of GDPR’s Data Protection Impact Assessment process. In fact, it is very similar to the GDPR requirement.

Here’s a summary of the key terms:

Covered entities. The bill would apply to anyone that (a) receives more than $50M in revenue over the preceding three-year period; (b) possesses personal information on more than 1M consumers or consumer devices; or (c) is a “data broker,” defined as possessing personal information on individuals that are not customers or employees as a substantial part of business.
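As I read that summary, the three prongs are disjunctive: meeting any one of them is enough. A toy predicate (the thresholds and field names are my own illustration of the summary above, not the bill’s text) makes the structure explicit:

```python
# Toy sketch of the coverage test as summarized above -- not legal advice.
# Field names and the exact threshold encoding are my own illustration.

def is_covered_entity(revenue_3yr, consumers_with_data, is_data_broker):
    """Covered if ANY prong applies: (a) >$50M revenue over the preceding
    three-year period, (b) personal information on >1M consumers or
    consumer devices, or (c) operating as a data broker."""
    return (
        revenue_3yr > 50_000_000
        or consumers_with_data > 1_000_000
        or is_data_broker
    )

# A small company holding data on 2.5M devices is covered under prong (b).
print(is_covered_entity(10_000_000, 2_500_000, False))
# The same company with data on only 500K devices is not covered.
print(is_covered_entity(10_000_000, 500_000, False))
```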

Definition of personal information. Broadly defined as any information “reasonably linkable to a specific consumer or consumer device.”

Impact assessments. At a minimum, requires a description of the system, design, training process, data, purpose, relative benefits and costs, data minimization practices, retention policies, access to data by consumers, ability of consumers to correct or object to the data, sharing of data, risks of inaccurate, biased, unfair, or discriminatory decisions, and safeguards to minimize risks.

Systems that must be evaluated. Must evaluate any system that “poses a significant risk” to the privacy and security of personal information or results in inaccurate, unfair, biased, or discriminatory decisions, especially if the system alters legal rights or profiles “sensitive aspects” of consumer lives such as protected class, criminal convictions, work performance, economic situation, health, personal preferences, interests, behavior, location, etc.

Enforcement. Enforced by the FTC or the Attorney General of any State upon notice to the FTC.

Elizabeth Warren and the Corporate Executive Accountability Act

Elizabeth Warren has introduced the Corporate Executive Accountability Act and is pushing it in a Washington Post Op-Ed:

I’m proposing a law that expands criminal liability to any corporate executive who negligently oversees a giant company causing severe harm to U.S. families. We all agree that any executive who intentionally breaks criminal laws and leaves a trail of smoking guns should face jail time. But right now, they can escape the threat of prosecution so long as no one can prove exactly what they knew, even if they were willfully negligent.

If top executives knew they would be hauled out in handcuffs for failing to reasonably oversee the companies they run, they would have a real incentive to better monitor their operations and snuff out any wrongdoing before it got out of hand.

Elizabeth Warren: Corporate executives must face jail time for overseeing massive scams

The bill itself is pretty short. Here’s a summary:

  • Focuses on executives in big business. Applies to any executive officer of a corporation with more than $1B in annual revenue. The definition of executive officer is the same as under existing federal regulations, plus anyone who “has the responsibility and authority to take necessary measures to prevent or remedy violations.”
  • Makes execs criminally liable for a lot of things. Makes it criminal for any executive officer “to negligently permit or fail to prevent” any crime under Federal or State law, or any civil violation that “affects the health, safety, finances, or personal data” of at least 1% of the population of any state or the US.
  • Penalty. Convicted executives go to prison for up to a year, or up to three years on subsequent offenses.

This is pretty breathtaking in its sweep of criminal liability. It criminalizes negligence. And it applies that negligence standard to any civil violation that “affects” the health, safety, finances, or personal data of at least 1% of a state.
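To see how low that 1% bar sits, some back-of-the-envelope arithmetic with approximate 2019 population figures (my numbers, not from the bill):

```python
# Back-of-the-envelope: how few people a civil violation must "affect"
# to trigger criminal liability under the bill's 1% threshold.
# Populations are approximate 2019 figures supplied by me for illustration.

state_pops = {
    "Wyoming": 580_000,        # smallest state
    "California": 39_500_000,  # largest state
}
us_pop = 328_000_000

for name, pop in state_pops.items():
    print(f"1% of {name}: {pop * 0.01:,.0f} people")
print(f"1% of the US: {us_pop * 0.01:,.0f} people")
# A violation touching roughly 5,800 Wyoming residents would cross the line.
```

Because the bill says “any state,” the smallest state sets the effective floor: a data incident affecting a few thousand people could qualify.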

Under this standard every single executive at Equifax, Facebook, Yahoo, Target, etc. risks jail for up to a year. Just read this list. Will be interesting to see where this goes.

Facebook and Housing Discrimination

The Department of Housing and Urban Development sued Facebook for housing discrimination. The allegations are fascinating and, although we mostly knew all of this before (based on reporting by ProPublica), I think most people do not realize how precisely advertisements can be targeted on Facebook. For example:

Respondent [Facebook] has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area. Respondent also provides drop-down menus and search boxes to exclude or include (i.e., limit the audience of an ad exclusively to) people who share specified attributes. Respondent has offered advertisers hundreds of thousands of attributes from which to choose, for example to exclude “women in the workforce,” “moms of grade school kids,” “foreigners,” “Puerto Rico Islanders,” or people interested in “parenting,” “accessibility,” “service animal,” “Hijab Fashion,” or “Hispanic Culture.” Respondent also has offered advertisers the ability to limit the audience of an ad by selecting to include only those classified as, for example, “Christian” or “Childfree.”

Complaint at paragraph 14.

But Facebook’s system doesn’t just enable this kind of micro-targeting. It also refuses to show ads to users that its system judges as unlikely to interact with the ads, even if the advertisers want to target those users:

Even if an advertiser tries to target an audience that broadly spans protected class groups, Respondent’s ad delivery system will not show the ad to a diverse audience if the system considers users with particular characteristics most likely to engage with the ad. If the advertiser tries to avoid this problem by specifically targeting an unrepresented group, the ad delivery system will still not deliver the ad to those users, and it may not deliver the ad at all.

Complaint at paragraph 19.

Thus, the allegation is that the system functions “just like an advertiser who intentionally targets or excludes users based on their protected class.”

There is an AI angle to this as well. The complaint specifically references Facebook’s “machine learning and other prediction techniques” as enabling this kind of targeting. And while folks may disagree on whether this is “AI” or just sophisticated statistical analysis, it is a concrete allegation of real-world harm caused by big data and computation. And I think it is an interesting case study in whether we need extra laws to prevent AI harm.

Here is a hypothesis: our existing laws prohibiting various types of harm will work just fine or better in the AI context. Housing discrimination is already illegal, whether you do it subjectively and intentionally or objectively by sophisticated computation. And in fact, it’s easier to prove the latter. The AI takes input and outputs a result. That result is objective and (with the help of legal process) transparent. The AI doesn’t rationalize its decisions or try to explain away its hidden bias because it fears social judgment. If it operates in a biased manner, we will see it and we can fix it.

There is a lot of anxiety around whether our laws are sufficient for the AI future we envision. Will product liability laws be sufficient to determine who is at fault when a self-driving vehicle crashes? Will anti-discrimination laws be sufficient to disincentivize AI-facilitated bias? Yes, yes I think they will. Perhaps the law is more robust than we fear.

Patent Litigation Insurance

Patent litigation insurance definitely exists, and every so often a casual observer will be confronted by the enormous cost of litigating a patent case and suggest that maybe you should get insurance. After all, there are a lot of other kinds of insurance for the normal hazards of doing business: product liability, business interruption, even cyber attack. So why not patent litigation insurance?

The problem is that insurance works by grouping a whole bunch of entities together that all have similar risk, and then figuring out how to get them to share that risk while still making some money on the premiums. That doesn’t work for patent litigation because companies have wildly different risk profiles. It is impossible to take a group of companies, somehow average out their risk of patent litigation, and then calculate a premium that both covers that average risk and makes you some (but not too much) money on the side. The companies will either overpay or underpay.
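A toy calculation (the dollar figures are entirely my own) shows the adverse-selection problem at work:

```python
# Toy illustration of why one pooled premium breaks down when insureds
# have very different patent-litigation risk. All numbers are invented.

low_risk_expected_loss = 50_000    # e.g., a small shop that is rarely sued
high_risk_expected_loss = 950_000  # e.g., a frequent target of assertions

# A pooled premium must cover the average expected loss plus a margin:
average_loss = (low_risk_expected_loss + high_risk_expected_loss) / 2
pooled_premium = average_loss * 1.10  # assume a 10% margin for the insurer

print(f"Pooled premium: ${pooled_premium:,.0f}")
print(f"Low-risk company overpays by  ${pooled_premium - low_risk_expected_loss:,.0f}")
print(f"High-risk company underpays by ${high_risk_expected_loss - pooled_premium:,.0f}")
# The low-risk company rationally exits the pool, leaving only high-risk
# insureds behind -- classic adverse selection -- which is why insurers
# end up pricing each company's risk individually instead.
```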

As a result, patent litigation insurers take a look at your individual risk profile, figure they can estimate the risk better than you can, and then charge an individualized premium to make sure they are covered. Public reporting places the annual cost of patent litigation insurance at about 2-5% of the insured amount, with the addition of hard liability caps and co-payments. Most big companies decline those terms and end up self-insuring or mitigating risk through license aggregators like RPX.

But still patent litigation insurance seems to fascinate, especially the academics. In a November 2018 paper titled The Effect of Patent Litigation Insurance, researchers examined the effect of recently introduced insurance on the rate of patent assertions. And they found (headline!) that the availability of defensive insurance was correlated with significantly reduced likelihood that specific patents would be asserted. They conclude:

Whatever the merits of specific judicial and legislative reforms presently under consideration, our study suggests that it is also possible for market-based mechanisms to alter the behavior of patent enforcers. Indeed, it has been argued that one reason legislative and judicial reform is needed is because collective action is unlikely to cure the patent system’s ills because defending against claims of patent infringement generates uncompensated positive externalities. Our study suggests that defensive litigation insurance may be a viable market-based solution to complement, or supplant, other reforms that aim to reduce NPE activity.

The Effect of Patent Litigation Insurance at 59-60.

But there is a very important caveat: the insurance company selected in advance every patent they would insure against. IPISC sold two menus of “Troll Defense” insurance: one for insurance against 200 specific patents, and one for insurance against an additional 107 specific patents. Indeed, this is how the researchers were able to assess whether assertions went down. (Other patent litigation insurers use more complex policies that do not identify specific patents.) In addition, IPISC capped the defense insurance limit at $1M, which is well below the cost of litigating your average patent case. This is a very narrow space for patent litigation insurance!

IPISC must have had confidence they could accurately quantify the risk associated with these patents. The insured patents tended to have been asserted before by well-known patent assertion entities. I suspect the prior assertions settled quickly for relatively small amounts because that’s how these entities tend to work. Indeed, that is the whole business model. But throw in the availability of insurance specific to these patents and now you have a signal that many potential defendants will not simply settle and move on. Wrench in the model, assertions go down.

So yes, this narrow type of patent litigation insurance might be useful if you are an entity concerned about harassment by specific patents in low value patent litigation. Interesting study, your mileage may vary.

SRI International v. Cisco

On March 20, 2019, a Federal Circuit panel decided SRI International v. Cisco, addressing subject matter eligibility (yes), willfulness (no), exceptional case (yes), a running royalty (yes), and claim construction, among other issues. Let’s break it down.

Subject matter eligibility. Over a dissent, the majority held the following method of cybersecurity network monitoring to be eligible because it is fundamentally “directed to a technological solution to a technological problem”:

1. A computer-automated method of hierarchical event monitoring and analysis within an enterprise network comprising:
deploying a plurality of network monitors in the enterprise network;
detecting, by the network monitors, suspicious network activity based on analysis of network traffic data selected from one or more of the following categories: {network packet data transfer commands, network packet data transfer errors, network packet data volume, network connection requests, network connection denials, error codes included in a network packet, network connection acknowledgements, and network packets indicative of well-known network-service protocols};
generating, by the monitors, reports of said suspicious activity; and
automatically receiving and integrating the reports of suspicious activity, by one or more hierarchical monitors. 

The dissenting judge saw it differently:

The claims only recite the moving of information. The computer is used as a tool, and no improvement in computer technology is shown or claimed. 

Willfulness. The jury found that Cisco willfully infringed the patents. The district court judge denied JMOL of non-willfulness. And, in the only win for Cisco, the Federal Circuit reversed:

  • Evidence that Cisco employees did not read the patents-in-suit until their depositions is not evidence of willfulness; Cisco has plenty of lawyers to diligently respond to these issues.
  • Evidence that Cisco designed their products in an infringing manner is not evidence of willfulness; it’s evidence of infringement.

While the jury heard evidence that Cisco was aware of the patents in May 2012, before filing of the lawsuit, we do not see how the record supports a willfulness finding going back to 2000. As the Supreme Court recently observed, “culpability is generally measured against the knowledge of the actor at the time of the challenged conduct.” Halo, 136 S. Ct. at 1933. Similarly, Cisco’s allegedly aggressive litigation tactics cannot support a finding of willful infringement going back to 2000, especially when the litigation did not start until 2012. Finally, Cisco’s decision not to seek an advice-of-counsel defense is legally irrelevant under 35 U.S.C. § 298.

Exceptional case. The district court judge found that the case was exceptional and awarded attorneys’ fees based on Cisco maintaining “nineteen invalidity theories until the eve of trial but only presenting two at trial and pursuing defenses at trial that were contrary to the court’s ruling or Cisco’s internal documents.” The Federal Circuit affirmed.

Running royalty. The district court judge imposed a running royalty of 3.5% on infringing products not colorably different and that was ok.

Human Interface Design in the Law

Fantastic essay by Tim Wu (with whom I do not often find common ground) on the importance of “human interface design” in the law:

The Affordable Care Act is a good example of the complexity problem. Yes, it was an important policy achievement, and yes, many of its problems can be rightly blamed on industry resistance and Republican efforts to dismantle it.

But the act is also exceptionally hard to understand and discouragingly daunting to make use of. An emphasis on “choice” and “transparency” resulted in a law that only a rational-choice theorist could love. The act made health insurance more complicated, not less, which is one reason that such a high percentage of medical bills go to paying administrative costs, and why the Affordable Care Act is much less popular than it could be.

The Democrats’ Complexity Problem

I am a bit disappointed in the partisan framing; it’s unnecessary. Progressives and Democrats aren’t the only policy makers with this problem. And the problem can be rightly framed as a fundamental lack of respect for the public:

But policy experts are rarely good at interface design, for we have a bad habit of assuming that people have unlimited time and attention and that to respect them means offering complete transparency and a multiplicity of choices. Real respect for the public involves appreciating what the public actually wants and needs. The reality is that most Americans are short on time and attention and already swamped by millions of daily tasks and decisions. They would prefer that the government solve problems for them — not create more work for them.

The public is entitled to demand that policy makers do the extra work of making laws understandable and decisions simple.

It’s Not About Fairness

As always, a wonderful take on the college admissions bribery scandal by Matt Levine:

Here is one thing that U.S. Attorney Andrew Lelling said in announcing the charges:

“There can be no separate college admissions system for the wealthy, and I’ll add that there will not be a separate criminal justice system either.”

Level playing field! Here is another thing he said less than a minute later:

“We’re not talking about donating a building so that a school’s more likely to take your son or your daughter. We’re talking about deception and fraud.”

There can be no separate college admissions system for the wealthy, except for the extremely well-known one where you donate a building in exchange for getting your kid in! “Lol just donate a building like a real rich person,” the U.S. Attorney almost said.

. . . . .

It is not about fairness; it is about theft. Selective colleges have admissions spots that they want to award in particular ways. They want to award some based on academic factors; they want to award others based on athletic skill; they want to award others in exchange for cash, but—and this is crucial—really a whole lot of cash. Buildings are not cheap.

You Have to Pay the Right Person