White House Releases AI Principles

The White House has released draft “guidance for regulation of artificial intelligence applications.” The memo states that “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Agencies should consider new regulation only after they have reached the decision . . . that Federal regulation is necessary.

Nevertheless, the memo enumerates ten principles that agencies should take into account should they ultimately take action that impacts AI:

  1. Public Trust in AI. Don’t undermine it by allowing AI systems to pose risks to privacy, individual rights, autonomy, and civil liberties.
  2. Public Participation. Don’t block public participation in the rulemaking process.
  3. Scientific Integrity and Information Quality. Use scientific principles.
  4. Risk Assessment and Management. Use risk management principles.
  5. Benefits and Costs.
  6. Flexibility. Be flexible and ensure American companies are not disadvantaged by the United States’ regulatory regime.
  7. Fairness and Non-Discrimination.
  8. Disclosure and Transparency.
  9. Safety and Security.
  10. Interagency Coordination. Agencies should coordinate.

Overall, the memo is a long-winded directive that agencies should not regulate, but that if for some reason they feel they must, they should consider the same basic principles everyone else lists for AI: safety, security, transparency, fairness.

A proposed, reformed CDA 230

Bruce Schneier posts a proposal for a possible reform of CDA 230, the statute that largely immunizes online providers from liability for content posted by their users:

Hi Facebook/Twitter/YouTube/everyone else:

You can build a communications business based on inspecting user content and presenting it as you want, but that business model also conveys responsibility for that content.

-or-

You can be a communications service and enjoy the protections of CDA 230, in which case you cannot inspect or control the content you deliver.

Reforming CDA 230

I’m not sure I have a view on whether CDA 230 should be reformed. I just know that it’s harder than it looks to write good policy in this space. But this is a fascinating proposal.

Copyrightability of AI creations

One of the many fascinating things about AI is whether AI creations can be copyrighted and, if so, by whom. Under traditional copyright analysis, the human(s) who made some contribution to the creative work own the copyright by default. If there is no human contribution, there is no copyright. See, for example, the so-called “monkey selfie” case, in which a monkey took a selfie and the photographer who owned the camera got no copyright in the photo.

But when an AI creates a work of art, is there human involvement? A human created the AI, and might have fiddled with its knobs, so to speak. Is that sufficient? The U.S. Copyright Office is concerned about this. One of the questions it is asking is this:

2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person (i) designed the AI algorithm or process that created the work; (ii) contributed to the design of the algorithm or process; (iii) chose data used by the algorithm for training or otherwise; (iv) caused the AI algorithm or process to be used to yield the work; or (v) engaged in some specific combination of the foregoing activities? Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an “author”?

Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation

No one really knows the answer because (1) the question is highly fact intensive (there are lots of different ways for humans to be involved or not involved); and (2) the intuitions cut both ways: it feels weird to do a lot of work or spend a lot of money building an AI and not be entitled to copyright over its creations.

In any case, these issues are going to be litigated soon. A Reddit user recently used a widely available AI program called StyleGAN to create a music visualization. And although the underlying AI was not authored by the Reddit poster, the output was allegedly created by “transfer learning with a custom dataset of images curated by the artist.”

Does the Reddit poster (aka self-proclaimed “artist”) own a copyright on the output? Good question.
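As an aside, it may help to see what “transfer learning with a custom dataset” actually looks like in code. The sketch below is hypothetical: it uses a tiny stand-in generator and discriminator rather than StyleGAN itself, and the checkpoint path, image folder, and hyperparameters are all invented. The point is where the human shows up: curating the training images (the Copyright Office’s item (iii)) and running the fine-tune that yields the output (item (iv)).

```python
# Hypothetical sketch of GAN transfer learning in PyTorch. The networks are
# tiny stand-ins, not StyleGAN; file paths and hyperparameters are invented.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in generator and discriminator (StyleGAN's are far larger).
G = nn.Sequential(
    nn.ConvTranspose2d(64, 128, 4, 1, 0), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),     # -> 3x16x16 images
).to(device)
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 0),                        # -> 1 logit
).to(device)

# The model arrives pretrained by someone else (hypothetical checkpoint).
G.load_state_dict(torch.load("pretrained_generator.pt"))

# Human contribution (iii): the artist's hand-curated dataset.
# (ImageFolder expects images sorted into subfolders.)
data = datasets.ImageFolder(
    "curated_images/",
    transform=transforms.Compose([
        transforms.Resize(16), transforms.CenterCrop(16),
        transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
    ]),
)
loader = DataLoader(data, batch_size=16, shuffle=True)

# Human contribution (iv): a short adversarial fine-tune that nudges the
# pretrained generator toward the curated images.
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
for real, _ in loader:
    real = real.to(device)
    z = torch.randn(real.size(0), 64, 1, 1, device=device)
    fake = G(z)

    d_real, d_fake = D(real), D(fake.detach())
    d_loss = (loss_fn(d_real, torch.ones_like(d_real)) +
              loss_fn(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    g_out = D(fake)
    g_loss = loss_fn(g_out, torch.ones_like(g_out))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Whether curating a folder of images and pressing “train” amounts to authorship is exactly the question the Copyright Office is asking.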

Using fake news laws to take down critical speech

Don’t like fake news? Pass a law! But of course fake news is in the eye of the beholder:

Singapore just showed the world how it plans to use a controversial new law to tackle what it deems fake news — and critics say it’s just what they expected would happen.

The government took action twice this week on two Facebook posts it claimed contained “false statements of fact,” the first uses of the law since it took effect last month.

One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.

In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.

Singapore just used its fake news law. Critics say it’s just what they feared

Not a great start for Singaporean efforts to police false news.

Attempted theft of trade secrets is also illegal

Camilla Hrdy points out that you can’t sue for trade secret theft if the stolen information is not actually protected as a trade secret. But prosecutors can charge someone with attempted trade secret theft even if the information wasn’t a trade secret. Which means you can go to jail for attempting to steal something that, had you succeeded in stealing it, would not have supported a civil suit. That is a weird inversion.

The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an “attempt” crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind. The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3) and (4), provide that “[w]hoever, with intent to convert a trade secret … to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will injure any owner of that trade secret, knowingly—steals…obtains…possesses…[etcetera]” a trade secret, or “attempts to” do any of those things, “shall… be fined under this title or imprisoned not more than 10 years, or both…”

Anthony Levandowski: Is Being a Jerk a Crime?

How to Become a Federal Criminal

It’s super easy and you may already be one!

You may know that you are required to report if you are traveling to or from the United States with $10,000 or more in cash. Don’t hop over the Canadian border with a bag of cash to buy a used car, for example, or the Feds may confiscate it (millions of dollars are confiscated every year). But did you also know that you can’t leave the United States with more than $5 in nickels? That’s a federal crime punishable by up to five years in prison. Carrying a metal detector in a national park? Up to six months in prison. And God forbid you should use your metal detector and find something more than 100 years old; that can put you away for up to a year. Also illegal in a national park: making unreasonable gestures to a passing horse.

How to Become a Federal Criminal

Worth re-linking to one of my favorite legal lectures of all time: Don’t Talk to the Police. Even if you are going to tell the truth, even if you did nothing wrong. There is no way it will help you.

France bans some litigation analytics

In what appears to be a breathtaking overreaction to a privacy concern, France has banned statistical reporting about individual judges’ decisions:

The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.

A key passage of the new law states:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’ 

France Bans Judge Analytics, 5 Years In Prison For Rule Breakers

This raises many issues: free speech, transparency, and just plain old protectionism.
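To see what the law forbids in concrete terms, here is a hypothetical sketch (invented judges, invented data) of the kind of per-judge statistic that litigation-analytics products compute and publish:

```python
# Hypothetical example of "judge analytics": per-judge outcome rates.
# Names and rulings are invented for illustration.
import pandas as pd

rulings = pd.DataFrame({
    "judge":   ["Martin", "Martin", "Durand", "Durand", "Durand"],
    "claim":   ["asylum"] * 5,
    "granted": [True, False, True, True, False],
})

# "Evaluating, analysing, comparing or predicting" a judge's practices:
grant_rates = rulings.groupby("judge")["granted"].mean()
print(grant_rates)
# judge
# Durand    0.666667
# Martin    0.500000
```

Publishing that two-line groupby, tied to named French magistrates, is now punishable by up to five years in prison.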

Bright Line Trademark Rule on Likelihood of Confusion

I’m a sucker for the predictability of a bright line rule, and Camilla Hrdy at the Written Description blog describes a possible de facto rule about the likelihood of confusion in trademark cases:

In trademark law, infringement occurs if defendant’s use of plaintiff’s trademark is likely to cause confusion as to the source of defendant’s product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an “appreciable number” of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required. There is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to assess whether confusion is likely, at some point, to occur.

In theory.

But in practice, Bernstein asserted, there is a magic number: it’s around fifteen percent. Courts will often state that a survey finding 15% or more is sufficient to support likelihood of confusion, while under 15% suggests no likelihood of confusion.

Likelihood of Confusion: Is 15% The Magic Number?

There are, of course, many confounding factors, including whether this 15% applies to “gross confusion” (total confusion, which includes noise from other causes) or “net confusion” (confusion caused only by use of the trademark), along with the problems of survey evidence in general. But I’ll briefly fantasize about being asked what “likelihood of confusion” means in trademark law and answering, “15%. It’s just 15%.”
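Here is the gross-versus-net arithmetic with made-up survey numbers. A common survey design shows a test group the accused mark and a control group a similar but non-infringing mark, then subtracts the control rate as noise:

```python
# Hypothetical survey numbers illustrating gross vs. net confusion.
test_confused    = 0.24  # gross: 24% of test group names plaintiff as source
control_confused = 0.11  # noise: 11% are "confused" even without the mark

net_confusion = test_confused - control_confused
print(f"net confusion: {net_confusion:.0%}")  # net confusion: 13%

# Gross (24%) clears the ~15% rule of thumb; net (13%) does not.
# Which of the two figures a court credits can decide the case.
```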

Sixth Circuit says chalking tires is an unreasonable search

In Taylor v. City of Saginaw, the Sixth Circuit U.S. Court of Appeals (covering Kentucky, Michigan, Ohio, and Tennessee) has concluded that the common – indeed, ubiquitous! – practice of tracking how long a car has been parked by chalking its tires is unconstitutional:

Alison Taylor, a frequent recipient of parking tickets, sued the City and its parking enforcement officer Tabitha Hoskins, alleging that chalking violated her Fourth Amendment right to be free from unreasonable search. The City moved to dismiss the action. The district court granted the City’s motion, finding that, while chalking may have constituted a search under the Fourth Amendment, the search was reasonable. Because we chalk this practice up to a regulatory exercise, rather than a community-caretaking function, we REVERSE.

This is a great example of a court following individual precedent down a winding path to a conclusion that is actually very strange. Here’s how they got there:

  1. Start with the Constitution. The Fourth Amendment to the Constitution protects the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.”
  2. Is it a search? Yes. But really only because the Supreme Court recently decided that attaching GPS devices to cars is a search. Attaching a GPS device is a search because it is a trespass. And chalking is also a trespass because the common law says that “so acting upon the chattel as intentionally to cause it to come in contact with some other object” is a trespass. So chalking is a trespass to obtain information. And that makes it a search.
  3. Is it unreasonable? We assume so. The government bears the burden of proving that the search was not unreasonable, and this is where they fell down. First, the government said people have a reduced expectation of privacy in cars. Nope, the Court says, that analysis only applies when you have a warrant or probable cause, and the government didn’t have either. Second, the government said the parking officers weren’t operating as law enforcement; they were operating as “community caretakers” and another standard applies. Nope, the Court says, the government is actually enforcing laws so that doesn’t apply either. Hearing no other arguments, the Court concludes the search was unreasonable.
  4. And now tire chalking is an unconstitutional, unreasonable search.

I’m not sure the drafters of the Fourth Amendment would agree with this analysis. Chalking a tire doesn’t seem to be either unreasonable or a search. And of course there are a number of other ways to argue this case, including with the “administrative search exception,” which the government failed to raise. It’s possible this case gets reviewed.

On the other hand, plenty of other options are available to parking enforcement officers, including video, photos, parking meters, and taking notes!

Summary of the DETOUR Act

On April 9, 2019, Senators Warner (D-VA) and Fischer (R-NE) introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act. The bill would prohibit user interface “dark patterns” around user consent, which trick or nudge (depending on your perspective) users into consenting to things that may not be in their best interests.

There is a whole website devoted to dark patterns, and it is pretty informative. One example it documents is a dark pattern that makes it hard to figure out how not to sign up for a service.

The DETOUR Act gives the FTC power to regulate user interfaces that have “the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.” It also prohibits the segmentation of users into groups for behavioral experimentation without informed consent.

The Act would only apply to “large online operators,” defined as those with more than 100M authenticated users in any 30-day period. (Small online operators can still trick people?) Large online operators would also have to disclose their experiments every 90 days and establish Independent Review Boards to oversee any such user research.
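For the non-engineers, “segmentation of users into groups for behavioral experimentation” typically means deterministic A/B bucketing. A minimal sketch, with invented function and experiment names:

```python
# Hypothetical sketch of A/B bucketing, the "segmentation" the bill targets.
import hashlib

def assign_arm(user_id: str, experiment: str,
               arms=("control", "nudge_ui")) -> str:
    """Deterministically assign a user to an experiment arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

print(assign_arm("user-42", "signup-flow-v2"))  # same user, same arm, every time
```

Under the DETOUR Act, a large operator running experiments like this on its consent flows would need informed consent from users, 90-day disclosure, and review-board oversight.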