States can’t be sued for copyright infringement

In March, the U.S. Supreme Court decided Allen v. Cooper, Governor of North Carolina, and ruled that States cannot be hauled into federal court for copyright infringement.

The decision is basically an extension of the Court’s prior decision on whether States can be sued for patent infringement in federal court (also no), and Justice Kagan, writing for a unanimous Court, observes that “Florida Prepaid all but prewrote our decision today.”

But one of the most interesting discussions in the opinion is about when, perhaps, States might be hauled into federal court for copyright infringement under the Fourteenth Amendment prohibition against deprivation of property without due process:

All this raises the question: When does the Fourteenth Amendment care about copyright infringement? Sometimes, no doubt. Copyrights are a form of property. See Fox Film Corp. v. Doyal, 286 U. S. 123, 128 (1932). And the Fourteenth Amendment bars the States from “depriv[ing]” a person of property “without due process of law.” But even if sometimes, by no means always. Under our precedent, a merely negligent act does not “deprive” a person of property. See Daniels v. Williams, 474 U. S. 327, 328 (1986). So an infringement must be intentional, or at least reckless, to come within the reach of the Due Process Clause. See id., at 334, n. 3 (reserving whether reckless conduct suffices). And more: A State cannot violate that Clause unless it fails to offer an adequate remedy for an infringement, because such a remedy itself satisfies the demand of “due process.” See Hudson v. Palmer, 468 U. S. 517, 533 (1984). That means within the broader world of state copyright infringement is a smaller one where the Due Process Clause comes into play.

Slip Op. at 11.

Presumably this means that if North Carolina set up a free radio streaming service with Taylor Swift songs and refused to pay any royalties, it might properly be hauled into federal court. But absent some egregiously intentional or reckless conduct, States remain sovereign in copyright disputes.

DC District Court: “the CFAA does not criminalize mere terms-of-service violations on consumer websites”

Two academics wished to test whether employment websites discriminate based on race or gender. They intended to submit false information (e.g., fictitious profiles) to these websites, but worried that these submissions would violate the sites’ terms of service and could subject them to prosecution under the federal Computer Fraud and Abuse Act. So they sued for clarity.

The District Court ruled that:

a user should be deemed to have “accesse[d] a computer without authorization,” 18 U.S.C. § 1030(a)(2), only when the user bypasses an authenticating permission requirement, or an “authentication gate,” such as a password restriction that requires a user to demonstrate “that the user is the person who has access rights to the information accessed,” . . . .

Sandvig v. Barr (Civil Action No. 16-1386, March 27, 2020) at 22.

In other words, terms-of-service violations are not violations of the Computer Fraud and Abuse Act, and cannot be criminalized by virtue of that act.
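
To make the court’s “authentication gate” idea concrete, here is a minimal sketch (hypothetical URLs and credentials, using Python’s requests library) contrasting a public page, which anyone can fetch regardless of what the terms of service say about permissible purposes, with an endpoint that sits behind a password check:

```python
import requests

# Hypothetical URLs for illustration only.
PUBLIC_PAGE = "https://example.com/public-listings"  # no authentication gate
GATED_PAGE = "https://example.com/account/data"      # behind a password check

# Anyone can request a public page. Under the court's reading, this access is
# "authorized" even if the visitor's purpose violates the site's terms of service.
public = requests.get(PUBLIC_PAGE)
print("public page:", public.status_code)

# A gated page demands credentials; an anonymous request would normally be
# refused (e.g., a 401/403 response or a redirect to a login form).
anonymous = requests.get(GATED_PAGE)
print("no credentials:", anonymous.status_code)

# Only a user who authenticates as the account holder gets through. Bypassing
# this gate (say, with someone else's password) is what the court treats as
# access "without authorization" under 18 U.S.C. § 1030(a)(2).
authorized = requests.get(GATED_PAGE, auth=("alice", "correct-password"))
print("with credentials:", authorized.status_code)
```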

Three main points appeared to guide the Court’s reasoning:

  1. The statutory text and legislative history contemplate a “two-realm internet” of public and private machines. Private machines require authorization, but public machines (e.g., websites) do not.
  2. Website terms-of-service contracts provide inadequate notice for criminal violations. No one reads them! It would be crazy to criminalize ToS non-adherence.
  3. Enabling private website owners to define the scope of criminal liability under the CFAA simply by editing their terms-of-service contract also seems crazy!

It’s worth noting that the government here argued that the researchers did not have standing to bring this suit and cited a lack of “credible threat of prosecution” because Attorney General guidance “expressly cautions against prosecutions based on [terms-of-service] violations.”

But the absence of a specific disavowal of prosecution by the Department undermines much of the government’s argument. . . . Furthermore, as noted above the government has brought similar Access Provision prosecutions in the past and thus created a credible threat of prosecution.

Discovery has not helped the government’s position. John T. Lynch, Jr., the Chief of the Computer Crime and Intellectual Property Section of the Criminal Division of the Department of Justice, testified at his deposition that it was not “impossible for the Department to bring a CFAA prosecution based on [similar] facts and de minimis harm.” Dep. of John T. Lynch, Jr. [ECF No. 48-4] at 154:3–7. Although Lynch has also stated that he does not “expect” the Department to do so, Aff. of John T. Lynch, Jr. [ECF No. 21-1] ¶ 9, “[t]he Constitution ‘does not leave us at the mercy of noblesse oblige[.]’”

Sandvig v. Barr at 10.

Meanwhile, the US Supreme Court today agreed to decide whether abusing authorized access to a computer is a federal crime. In Van Buren v. United States:

a former Georgia police officer was convicted of breaching the CFAA by looking up what he thought was an exotic dancer’s license plate number in the state’s database in exchange for $6,000. The ex-officer, Nathan Van Buren, was the target of an FBI sting operation at the time.

. . . .

Van Buren’s attorneys argued that the Eleventh Circuit’s October 2019 decision to uphold the CFAA conviction defined the law in overly broad terms that could criminalize seemingly innocuous behavior, like an employee violating company policy by using work computers to set up an NCAA basketball “March Madness” bracket or a law student using a legal database meant for “educational use” to access local housing laws in a dispute with their landlord.

. . . .

The First, Fifth and Seventh Circuits have all agreed with the Eleventh Circuit’s expansive view of the CFAA, while the Second, Fourth and Ninth Circuits have defined accessing a computer “in excess of authorization” more narrowly, the petition says.

High Court To Examine Scope Of Federal Anti-Hacking Law

Summary of EARN IT Act of 2019

Senator Lindsey Graham has introduced the EARN IT Act of 2019, which would eliminate online service providers’ immunity for the actions of their users under Section 230 of the Communications Decency Act.

The Act essentially establishes a National Commission on Online Child Exploitation Prevention, tasks this commission with drafting online best practices for preventing child exploitation by users (which would presumably mean no end-to-end encryption), and eliminates Section 230 immunity unless service providers follow those best practices.

SAFE HARBOR.—Subparagraph (A) [removing immunity] shall not apply to a claim in a civil action or charge in a criminal prosecution brought against a provider of an interactive computer service if – (i) the provider has implemented reasonable measures relating to the matters described in section 4(a)(2) [referring to creation of the best practices] of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019 to prevent the use of the interactive computer service for the exploitation of minors . . . .

Page 17 of the EARN IT Act of 2019

Other sections create liability for “reckless” violations (instead of “knowing” violations), require online service providers to certify that they are complying with the created best practices, and set forth the requirements for membership in the newly created commission.

This bill comes after a hearing in December 2019 over the issue of legal access to encrypted devices. During that hearing, Senator Graham warned representatives of Facebook and Apple, “You’re gonna find a way to do this or we’re going to do it for you.”

3/15/20 Update – A revised version of the EARN IT Act, introduced on March 5, alters how the so-called “best practices” are created. They would be drafted by a 19-member commission comprising the Attorney General, the Secretary of Homeland Security, the Chairman of the FTC, and (to be chosen by the heads of each party in the House and Senate) four representatives from law enforcement, four from the community of child-exploitation victims, two legal experts, two technology experts, and four representatives from technology companies. Approving any best practices would require the support of 14 members; the recommendations must then be approved by the AG, the Secretary of Homeland Security, and the FTC Chair; and finally Congress itself must enact them.

London police adopt facial recognition, permanently

Adam Satariano, writing for the NYT:

The technology London plans to deploy goes beyond many of the facial recognition systems used elsewhere, which match a photo against a database to identify a person. The new systems, created by the company NEC, attempt to identify people on a police watch list in real time with security cameras, giving officers a chance to stop them in the specific location.

London Police Amp Up Surveillance With Real-Time Facial Recognition
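
Mechanically, real-time watch-list matching of this sort typically reduces to comparing face embeddings extracted from video frames against precomputed embeddings of the people on the list. NEC’s actual system is proprietary and its internals are not public; the following is only a generic sketch with made-up data:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional embeddings for people on a watch list,
# precomputed by some face-recognition model (not shown here).
rng = np.random.default_rng(0)
watchlist = {name: rng.standard_normal(128) for name in ["suspect_a", "suspect_b"]}

def check_frame(face_embedding: np.ndarray, threshold: float = 0.6):
    """Return the best watch-list match for a detected face, if any.

    In a deployed system, each camera frame would pass through a face
    detector and an embedding model to produce `face_embedding`; an alert
    fires when similarity to a watch-list entry exceeds the threshold.
    """
    best_name, best_score = None, -1.0
    for name, ref in watchlist.items():
        score = cosine_similarity(face_embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Simulate a frame whose detected face closely resembles "suspect_a".
frame_embedding = watchlist["suspect_a"] + 0.1 * rng.standard_normal(128)
print(check_frame(frame_embedding))
```

The policy fights tend to center on the threshold and the quality of the embeddings: set the threshold low and false alerts stop innocent people; set it high and the system misses its targets.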

The objections voiced in the article are about potential inaccuracies in the system. But accuracy will improve over time. I don’t see many objections to the power of the system itself.

As Europe considers banning facial recognition technology, and police departments everywhere look to it to improve policing and safety, this may be the technology fight of the 2020s.

Prediction: security wins over privacy.

German Data Ethics Commission insists AI regulation is necessary

The German Data Ethics Commission issued a 240-page report with 75 recommendations for regulating data, algorithmic systems, and AI. It is one of the strongest views on ethical AI to date and favors explicit regulation.

The Data Ethics Commission holds the view that regulation is necessary, and cannot be replaced by ethical principles.

Opinion of the Data Ethics Commission – Executive Summary at 7 (emphasis original).

The report divides ethical considerations into concerns about either data or algorithmic systems. For data, the report suggests that rights associated with the data will play a significant role in the ethical landscape. For example, ensuring that individuals provide informed consent for use of their personal data addresses a number of significant ethical issues.

For algorithmic systems, however, the report suggests that the AI systems might have no connection to the affected individuals. As a result, even non-personal data for which there are no associated rights could be used in an unethical manner. The report concludes that regulation is necessary to the extent there is a potential for harm.

The report identifies five levels of algorithmic system criticality. Applications with zero or negligible potential for harm would face no regulation. The regulatory burden would increase as the potential for harm increases, up to a total ban. For applications with serious potential for harm, the report recommends constant oversight.

The framework appears to be a good candidate for future ethical AI regulation in Europe, and perhaps (by default) the world.

White House Releases AI Principles

The White House has released draft “guidance for regulation of artificial intelligence applications.” The memo states that “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Agencies should consider new regulation only after they have reached the decision . . . that Federal regulation is necessary.

Nevertheless, the memo enumerates ten principles that agencies should take into account should they ultimately take action that impacts AI:

  1. Public Trust in AI. Don’t undermine it by allowing AI to pose risks to privacy, individual rights, autonomy, and civil liberties.
  2. Public Participation. Don’t block public participation in the rulemaking process.
  3. Scientific Integrity and Information Quality. Use scientific principles.
  4. Risk Assessment and Management. Use risk management principles.
  5. Benefits and Costs.
  6. Flexibility. Be flexible and ensure American companies are not disadvantaged by the United States’ regulatory regime.
  7. Fairness and Non-Discrimination.
  8. Disclosure and Transparency.
  9. Safety and Security.
  10. Interagency Coordination. Agencies should coordinate.

Overall, the memo is a long-winded directive that agencies should not regulate, but if for some reason they feel they have to, they should consider the same basic principles that everyone else is listing about AI concerns: safety, security, transparency, fairness.

A proposed, reformed CDA 230

Bruce Schneier posts a proposal for a possible reform of CDA 230, which largely immunizes online providers from liability for content posted by their users:

Hi Facebook/Twitter/YouTube/everyone else:

You can build a communications business based on inspecting user content and presenting it as you want, but that business model also conveys responsibility for that content.

-or-

You can be a communications service and enjoy the protections of CDA 230, in which case you cannot inspect or control the content you deliver.

Reforming CDA 230

I’m not sure I have a view on whether CDA 230 should be reformed. I just know that it’s harder than it looks to write good policy in this space. But this is a fascinating proposal.

Copyrightability of AI creations

One of the many fascinating things about AI is whether AI creations can be copyrighted and, if so, by whom. Under traditional copyright analysis, the human(s) who made some contribution to the creative work own the copyright by default. If there is no human contribution, there is no copyright. See, for example, the so-called “monkey selfie” case, in which a monkey took a selfie and the photographer who owned the camera got no copyright in the photo.

But when an AI creates a work of art, is there human involvement? A human created the AI, and might have fiddled with its knobs so to speak. Is that sufficient? The U.S. Copyright Office is concerned about this. One question they are asking is this:

2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person

(i) designed the AI algorithm or process that created the work;
(ii) contributed to the design of the algorithm or process;
(iii) chose data used by the algorithm for training or otherwise;
(iv) caused the AI algorithm or process to be used to yield the work; or
(v) engaged in some specific combination of the foregoing activities?

Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an “author”?

Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation

No one really knows the answer to this because (1) it is going to be very fact intensive (lots of different ways for humans to be involved or not involved); and (2) it feels weird to do a lot of work or spend a lot of money to build an AI and not be entitled to copyright over its creations.

In any case, these issues are going to be litigated soon. A reddit user recently used a widely available AI program called StyleGAN to create a music visualization. And although the underlying AI was not authored by the reddit poster, the output was allegedly created by “transfer learning with a custom dataset of images curated by the artist.”
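
“Transfer learning” here usually means taking a generator that was already trained on a large dataset and continuing to train it on a small curated one, rather than training from scratch. NVIDIA’s actual StyleGAN code is far more elaborate, and the reddit poster’s exact workflow isn’t public; the PyTorch sketch below uses tiny stand-in networks and random tensors purely to illustrate the general shape of such a fine-tuning loop:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for a pretrained generator/discriminator. A real StyleGAN has a
# mapping network, style-modulated convolutions, etc.; these tiny modules only
# make the fine-tuning loop below runnable end to end.
latent_dim, image_dim = 64, 3 * 32 * 32
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))
# In practice you would load pretrained weights here instead of starting fresh,
# e.g. generator.load_state_dict(torch.load("pretrained_generator.pt"))  # hypothetical file

# The "custom dataset of images curated by the artist" -- random tensors here,
# standing in for curated images flattened to vectors in [-1, 1].
curated_images = torch.rand(256, image_dim) * 2 - 1
loader = DataLoader(TensorDataset(curated_images), batch_size=32, shuffle=True)

g_opt = optim.Adam(generator.parameters(), lr=2e-4)
d_opt = optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for epoch in range(2):  # a real fine-tune would run far longer
    for (real,) in loader:
        # Discriminator step: real curated images vs. current generator samples.
        z = torch.randn(real.size(0), latent_dim)
        fake = generator(z).detach()
        d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1)) +
                  bce(discriminator(fake), torch.zeros(real.size(0), 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: adjust the generator to fool the discriminator.
        z = torch.randn(real.size(0), latent_dim)
        g_loss = bce(discriminator(generator(z)), torch.ones(real.size(0), 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The "new" works: images sampled from the fine-tuned generator.
samples = generator(torch.randn(4, latent_dim))
```

The copyright question then becomes which of these steps, if any (curating the dataset, loading pretrained weights, running the fine-tune, sampling the outputs), counts as a human creative contribution.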

Does the reddit poster (aka self-proclaimed “artist”) own a copyright on the output? Good question.

Using fake news laws to take down critical speech

Don’t like fake news? Pass a law! But of course fake news is in the eye of the beholder:

Singapore just showed the world how it plans to use a controversial new law to tackle what it deems fake news — and critics say it’s just what they expected would happen.

The government took action twice this week on two Facebook posts it claimed contained “false statements of fact,” the first uses of the law since it took effect last month.

One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post [now blocked] was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.

In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.

Singapore just used its fake news law. Critics say it’s just what they feared

Not a great start for Singaporean efforts to police false news.

Attempted theft of trade secrets is also illegal

Camilla Hrdy points out that you can’t sue for trade secret theft if the information stolen is not actually protected as a trade secret. But you can charge someone with attempted trade secret theft even if the information wasn’t a trade secret. Which means you can go to jail for attempted trade secret theft even if you couldn’t be sued for it. That is a weird inversion.

The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an “attempt” crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind. The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3) and (4), provide that “[w]hoever, with intent to convert a trade secret … to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will, injure any owner of that trade secret, knowingly—steals…obtains… possesses…[etcetera]” a trade secret, or “attempts to” do any of those things, “shall… be fined under this title or imprisoned not more than 10 years, or both…”

Anthony Levandowski: Is Being a Jerk a Crime?