Summary of EARN IT Act of 2019

Senator Lindsey Graham has introduced the EARN IT Act of 2019, which would eliminate the immunity that Section 230 of the Communications Decency Act gives online service providers for the actions of their users.

The Act essentially establishes a National Commission on Online Child Exploitation Prevention, tasks that commission with drafting best practices for preventing online child exploitation by users (which would presumably mean no end-to-end encryption), and eliminates Section 230 immunity for providers that do not follow those best practices.

SAFE HARBOR.—Subparagraph (A) [removing immunity] shall not apply to a claim in a civil action or charge in a criminal prosecution brought against a provider of an interactive computer service if – (i) the provider has implemented reasonable measures relating to the matters described in section 4(a)(2) [referring to creation of the best practices] of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019 to prevent the use of the interactive computer service for the exploitation of minors . . . .

Page 17 of the EARN IT Act of 2019

Other sections create liability for “reckless” violations (instead of “knowing” violations), require online service providers to certify that they are complying with the created best practices, and set forth the requirements for membership in the newly created commission.

This bill comes after a December 2019 hearing on legal access to encrypted devices. During that hearing, Senator Graham warned representatives of Facebook and Apple: “You’re gonna find a way to do this or we’re going to do it for you.”

London police adopt facial recognition, permanently

Adam Satariano, writing for the NYT:

The technology London plans to deploy goes beyond many of the facial recognition systems used elsewhere, which match a photo against a database to identify a person. The new systems, created by the company NEC, attempt to identify people on a police watch list in real time with security cameras, giving officers a chance to stop them in the specific location.

London Police Amp Up Surveillance With Real-Time Facial Recognition

The objections voiced in the article are about potential inaccuracies in the system. But accuracy will improve over time. I don’t see many objections to the power of the system itself.
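For context, systems like this typically compare an embedding computed from each camera frame against stored embeddings of watch-list photos, alerting when similarity crosses a threshold. A minimal, hypothetical sketch (the function names, data layout, and threshold are illustrative, not NEC’s actual pipeline):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(frame_embedding, watchlist, threshold=0.6):
    """Return the best watch-list match above the alert threshold, if any.

    watchlist is a list of (person_id, embedding) pairs. The threshold
    sets the tradeoff between false alerts and missed matches.
    """
    best_id, best_score = None, threshold
    for person_id, embedding in watchlist:
        score = cosine_similarity(frame_embedding, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score
```

The threshold is where the accuracy debate lives: set it low and innocent passers-by trigger alerts; set it high and watch-list faces walk past.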

As Europe considers banning facial recognition technology, and police departments everywhere look to it to improve policing and safety, this may be the technology fight of the 2020s.

Prediction: security wins over privacy.

German Data Ethics Commission insists AI regulation is necessary

The German Data Ethics Commission issued a 240-page report with 75 recommendations for regulating data, algorithmic systems, and AI. It is one of the strongest statements on ethical AI to date, and it favors explicit regulation.

The Data Ethics Commission holds the view that regulation is necessary, and cannot be replaced by ethical principles.

Opinion of the Data Ethics Commission – Executive Summary at 7 (emphasis original).

The report divides ethical considerations into concerns about either data or algorithmic systems. For data, the report suggests that rights associated with the data will play a significant role in the ethical landscape. For example, ensuring that individuals provide informed consent for use of their personal data addresses a number of significant ethical issues.

For algorithmic systems, however, the report suggests that the AI systems might have no connection to the affected individuals. As a result, even non-personal data for which there are no associated rights could be used in an unethical manner. The report concludes that regulation is necessary to the extent there is a potential for harm.

The report identifies five levels of algorithmic system criticality. Applications with zero or negligible potential for harm would face no regulation. The regulatory burden would increase as the potential for harm increases, up to a total ban. For applications with serious potential for harm, the report recommends constant oversight.
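As a rough sketch, the scheme maps criticality to regulatory burden along these lines (the level names and descriptions are my paraphrase of the report, not the Commission’s official wording):

```python
from enum import IntEnum

class Criticality(IntEnum):
    """The report's five criticality levels, paraphrased."""
    NEGLIGIBLE = 1   # zero or negligible potential for harm
    SOME = 2         # some potential for harm
    SIGNIFICANT = 3  # regular or significant potential for harm
    SERIOUS = 4      # serious potential for harm
    UNTENABLE = 5    # untenable potential for harm

# Illustrative mapping from level to the report's regulatory response.
OBLIGATIONS = {
    Criticality.NEGLIGIBLE: "no special measures",
    Criticality.SOME: "transparency obligations and ex-post controls",
    Criticality.SIGNIFICANT: "ex-ante approval procedures",
    Criticality.SERIOUS: "constant regulatory oversight",
    Criticality.UNTENABLE: "complete or partial ban",
}
```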

The framework appears to be a good candidate for future ethical AI regulation in Europe, and perhaps (by default) the world.

White House Releases AI Principles

The White House has released draft “guidance for regulation of artificial intelligence applications.” The memo states that “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Agencies should consider new regulation only after they have reached the decision . . . that Federal regulation is necessary.

Nevertheless, the memo enumerates ten principles that agencies should take into account should they ultimately take action that impacts AI:

  1. Public Trust in AI. Don’t undermine it by allowing AI to pose risks to privacy, individual rights, autonomy, and civil liberties.
  2. Public Participation. Don’t block public participation in the rulemaking process.
  3. Scientific Integrity and Information Quality. Use scientific principles.
  4. Risk Assessment and Management. Use risk management principles.
  5. Benefits and Costs.
  6. Flexibility. Be flexible and ensure American companies are not disadvantaged by the United States’ regulatory regime.
  7. Fairness and Non-Discrimination.
  8. Disclosure and Transparency.
  9. Safety and Security.
  10. Interagency Coordination. Agencies should coordinate.

Overall, the memo is a long-winded directive that agencies should not regulate, but that if for some reason they feel they must, they should consider the same basic principles everyone else lists for AI: safety, security, transparency, fairness.

A proposed, reformed CDA 230

Bruce Schneier posts a proposal for a possible reform of CDA 230, which largely immunizes online providers from liability for content posted by their users:

Hi Facebook/Twitter/YouTube/everyone else:

You can build a communications service based on inspecting user content and presenting it as you want, but that business model also conveys responsibility for that content.

-or-

You can be a communications service and enjoy the protections of CDA 230, in which case you cannot inspect or control the content you deliver.

Reforming CDA 230

I’m not sure I have a view on whether CDA 230 should be reformed. I just know that it’s harder than it looks to write good policy in this space. But this is a fascinating proposal.

Copyrightability of AI creations

One of the many fascinating things about AI is whether AI creations can be copyrighted and, if so, by whom. Under traditional copyright analysis, the human(s) who made some contribution to the creative work own the copyright by default. If there is no human contribution, there is no copyright. See, for example, the so-called “monkey selfie” case, in which a monkey took a selfie and the photographer who owned the camera got no copyright in the photo.

But when an AI creates a work of art, is there human involvement? A human created the AI, and might have fiddled with its knobs, so to speak. Is that sufficient? The U.S. Copyright Office is concerned about this. One question it is asking:

2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person (i) designed the AI algorithm or process that created the work; (ii) contributed to the design of the algorithm or process; (iii) chose data used by the algorithm for training or otherwise; (iv) caused the AI algorithm or process to be used to yield the work; or (v) engaged in some specific combination of the foregoing activities? Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an “author”?

Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation

No one really knows the answer because (1) it is going to be very fact-intensive (there are lots of different ways for humans to be involved or not involved); and (2) it feels weird to do a lot of work or spend a lot of money to build an AI and not be entitled to copyright over its creations.

In any case, these issues are going to be litigated soon. A Reddit user recently used a widely available AI model called StyleGAN to create a music visualization. And although the underlying AI was not authored by the Reddit poster, the output was allegedly created by “transfer learning with a custom dataset of images curated by the artist.”

Does the Reddit poster (a self-proclaimed “artist”) own a copyright in the output? Good question.
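StyleGAN fine-tuning itself runs on NVIDIA’s research code, but the general shape of transfer learning (the human contribution the poster points to) is easy to sketch. A simplified, hypothetical example using an off-the-shelf image classifier rather than StyleGAN, where the dataset and class count are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from weights learned on a large generic dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; only the new head will train.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer to fit a custom, curated dataset
# (10 classes is an arbitrary placeholder).
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fine-tuning loop over the curated dataset (dataloader not shown):
# for images, labels in dataloader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

Curating the training images and deciding what to freeze or retrain map directly onto the Copyright Office’s questions (iii) and (iv) above.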

Using fake news laws to take down critical speech

Don’t like fake news? Pass a law! But of course fake news is in the eye of the beholder:

Singapore just showed the world how it plans to use a controversial new law to tackle what it deems fake news — and critics say it’s just what they expected would happen.

The government took action twice this week on two Facebook posts it claimed contained “false statements of fact,” the first uses of the law since it took effect last month.

One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post [now blocked] was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.

In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.

Singapore just used its fake news law. Critics say it’s just what they feared

Not a great start for Singaporean efforts to police false news.

Attempted theft of trade secrets is also illegal

Camilla Hrdy points out that you can’t sue for trade secret theft if the information stolen is not actually protected as a trade secret. But you can charge someone with attempted trade secret theft even if the information wasn’t a trade secret. Which means you can go to jail for attempted trade secret theft even if you couldn’t be sued for it. That is a weird inversion.

The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an “attempt” crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind. The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3) and (4), provide that “[w]hoever, with intent to convert a trade secret … to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will injure any owner of that trade secret, knowingly—steals…obtains…possesses…[etcetera]” a trade secret, or “attempts to” do any of those things, “shall… be fined under this title or imprisoned not more than 10 years, or both…”

Anthony Levandowski: Is Being a Jerk a Crime?

How to Become a Federal Criminal

It’s super easy and you may already be one!

You may know that you are required to report if you are traveling to or from the United States with $10,000 or more in cash. Don’t hop over the Canadian border with unreported cash to buy a used car, for example, or the Feds may confiscate it (millions of dollars are confiscated every year). Did you also know that you can’t leave the United States with more than $5 in nickels??? That’s a federal crime punishable by up to five years in prison. How about carrying a metal detector in a national park? Up to six months in prison. And God forbid you should use your metal detector and find something more than 100 years old: that can put you away for up to a year. Also illegal in a national park? Making unreasonable gestures to a passing horse.

How to Become a Federal Criminal

Worth re-linking to one of my favorite legal lectures of all time: Don’t Talk to the Police. Even if you are going to tell the truth, even if you did nothing wrong. There is no way it will help you.

France bans some litigation analytics

In what appears to be a breathtaking overreaction to a privacy concern, France has banned statistical reporting about individual judges’ decisions:

The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.

A key passage of the new law states:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’ 

France Bans Judge Analytics, 5 Years In Prison For Rule Breakers
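To make concrete how low the technical bar is for what the law now forbids, here is a hypothetical sketch with invented data:

```python
import pandas as pd

# Hypothetical docket data: one row per decision.
decisions = pd.DataFrame({
    "judge": ["A", "A", "B", "B", "B"],
    "outcome": ["granted", "denied", "granted", "granted", "denied"],
})

# Per-judge grant rates: exactly the kind of evaluating, analysing,
# and comparing of named judges that Article 33 now criminalizes.
grant_rates = (
    decisions.assign(granted=decisions["outcome"].eq("granted"))
    .groupby("judge")["granted"]
    .mean()
)
print(grant_rates)  # judge A: 0.50, judge B: ~0.67
```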

This raises many issues of free speech, transparency, and just plain old protectionism: