Discovery sanctions for GDPR redactions

An order from Judge Payne in the Eastern District of Texas rejects the argument that redactions allegedly required by the GDPR were proper:

To further demonstrate the alleged bad faith application of the GDPR, Arigna showed where Continental blacked out the faces of its Executive Board in a picture even though that picture was available on Continental’s public website without the redactions. Based on these redactions and failure to timely produce the ESI, Arigna seeks an adverse inference instruction; an order precluding Continental from using any document that it did not timely produce; and Arigna’s costs and fees.

In response, Continental argued (but did not show) that it received an opinion letter from a law firm based in Europe stating the redactions were required by the GDPR, and that it had worked diligently to produce the ESI while also complying with the GDPR.

July 29, 2022 Memorandum Order, Case No. 22-cv-00126 (EDTX)

Wikipedia influences judicial decisions

Bob Ambrogi:

To assess whether Wikipedia impacts judicial decisions, the researchers set out to test for two types of influence: (1) whether the creation of a Wikipedia article on a case leads to that case being cited more often in judicial decisions; and (2) whether the text of judicial decisions is influenced by the text of the corresponding Wikipedia article.

Scientists Conclude that Wikipedia Influences Judges’ Legal Reasoning

They found that the addition of a case to Wikipedia increased the case’s citations by 20%.

They also purport to demonstrate with natural language analysis that “a textual similarity exists between the judicial decisions and the Wikipedia articles.”

I’m skeptical that this method proves actual influence by a Wikipedia article. But it’s easy to believe that case salience would have an impact.
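The paper's "textual similarity" claim presumably rests on a standard document-similarity measure. As a rough illustration of what such a measure looks like (this is a generic bag-of-words cosine similarity of my own choosing, not necessarily the researchers' method), a minimal sketch in Python:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors."""
    vec_a = Counter(text_a.lower().split())
    vec_b = Counter(text_b.lower().split())
    # Dot product over the vocabulary the two texts share.
    dot = sum(vec_a[term] * vec_b[term] for term in vec_a.keys() & vec_b.keys())
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical snippets standing in for a decision and a Wikipedia article.
decision = "the court held that the statute requires intent"
article = "the article says the statute requires intent by the court"
print(round(cosine_similarity(decision, article), 3))  # high lexical overlap
```

A score near 1 indicates heavy vocabulary overlap; the study's point is that such overlap alone cannot distinguish a judge copying from Wikipedia from both texts paraphrasing the same opinion.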

Is ShotSpotter AI?

A federal lawsuit filed Thursday alleges Chicago police misused “unreliable” gunshot detection technology and failed to pursue other leads in investigating a grandfather from the city’s South Side who was charged with killing a neighbor.

. . . . .

ShotSpotter’s website says the company is “a leader in precision policing technology solutions” that help stop gun violence by using sensors, algorithms and artificial intelligence to classify 14 million sounds in its proprietary database as gunshots or something else.

Lawsuit: Chicago police misused ShotSpotter in murder case

Some commentators (e.g., link) have jumped on this story as an example of someone (allegedly) being wrongly imprisoned due to AI.

But maybe ShotSpotter is just bad software that is used improperly? Does it matter?

The definition of AI is so difficult that we may soon find ourselves regulating all software.

UK IPO suggests copyright exception for text and data mining

The United Kingdom’s Intellectual Property Office has concluded a study on “how AI should be dealt with in the patent and copyright systems.”

For text and data mining, we plan to introduce a new copyright and database exception which allows TDM for any purpose. Rights holders will still have safeguards to protect their content, including a requirement for lawful access.

Consultation outcome / Artificial Intelligence and IP: copyright and patents

The Office also considered copyright protection for computer-generated works without a human author, and patent protection for AI-devised inventions, but it suggests no changes in the law for either area.

New York passes “right to repair” law, including electronics

Russell Brandom writing for The Verge:

The New York state legislature has passed the United States’ first “right to repair” bill covering electronics. Called the Fair Repair Act, the measure would require all manufacturers who sell “digital electronic products” within state borders to make tools, parts, and instructions for repair available to both consumers and independent shops.

New York state passes first-ever ‘right to repair’ law for electronics

Makers of “digital electronic equipment” sold in New York must make available (on fair terms) “documentation, parts, and tools” required for “diagnosis, maintenance, or repair.”

“Digital electronic equipment” is defined as any product with a value over $10 that depends for its functioning on “digital electronics.”

If an electronic lock prevents the repair, makers need to allow the device to be unlocked.

And there are a bunch of limitations:

  • no need to reveal trade secrets;
  • no need to provide for “modification” purposes;
  • no need to provide for home appliances with embedded digital electronic products such as refrigerators, ovens, etc.;
  • does not apply to motor vehicles, medical devices, off-road equipment.
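The coverage rule reads like a simple predicate. A rough sketch of the summary above (my simplification for illustration only, not legal advice; the category labels are hypothetical):

```python
def covered_by_fair_repair_act(
    value_usd: float,
    depends_on_digital_electronics: bool,
    category: str,
) -> bool:
    """Rough coverage test per the summary above (illustrative only).

    `category` is a free-form label; the exempt categories mirror the
    bullet list: home appliances with embedded digital electronics,
    motor vehicles, medical devices, and off-road equipment.
    """
    exempt = {
        "home appliance",
        "motor vehicle",
        "medical device",
        "off-road equipment",
    }
    if category in exempt:
        return False
    # Covered: value over $10 and functioning depends on digital electronics.
    return value_usd > 10 and depends_on_digital_electronics

print(covered_by_fair_repair_act(999.0, True, "smartphone"))      # True
print(covered_by_fair_repair_act(999.0, True, "medical device"))  # False
```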

You can read the full law here.

States can’t be sued for copyright infringement

In March, the U.S. Supreme Court decided Allen v. Cooper, Governor of North Carolina and ruled that States cannot be hauled into federal court on the issue of copyright infringement.

The decision is basically an extension of the Court’s prior decision on whether States can be sued for patent infringement in federal court (also no), and Justice Kagan writes for the unanimous Court in saying, “Florida Prepaid all but prewrote our decision today.”

But one of the most interesting discussions in the opinion is about when, perhaps, States might be hauled into federal court for copyright infringement under the Fourteenth Amendment prohibition against deprivation of property without due process:

All this raises the question: When does the Fourteenth Amendment care about copyright infringement? Sometimes, no doubt. Copyrights are a form of property. See Fox Film Corp. v. Doyal, 286 U. S. 123, 128 (1932). And the Fourteenth Amendment bars the States from “depriv[ing]” a person of property “without due process of law.” But even if sometimes, by no means always. Under our precedent, a merely negligent act does not “deprive” a person of property. See Daniels v. Williams, 474 U. S. 327, 328 (1986). So an infringement must be intentional, or at least reckless, to come within the reach of the Due Process Clause. See id., at 334, n. 3 (reserving whether reckless conduct suffices). And more: A State cannot violate that Clause unless it fails to offer an adequate remedy for an infringement, because such a remedy itself satisfies the demand of “due process.” See Hudson v. Palmer, 468 U. S. 517, 533 (1984). That means within the broader world of state copyright infringement is a smaller one where the Due Process Clause comes into play.

Slip Op. at 11.

Presumably this means that if North Carolina set up a free radio streaming service with Taylor Swift songs and refused to pay any royalties, they might properly be hauled into federal court. But absent some egregiously intentional or reckless conduct, States remain sovereign in copyright disputes.

DC District Court: “the CFAA does not criminalize mere terms-of-service violations on consumer websites”

Two academics wished to test whether employment websites discriminate based on race or gender. They intended to submit false information (e.g., fictitious profiles) to these websites, but worried that these submissions would violate the sites’ terms of service and could subject them to prosecution under the federal Computer Fraud and Abuse Act. So they sued for clarity.

The District Court ruled that:

a user should be deemed to have “accesse[d] a computer without authorization,” 18 U.S.C. § 1030(a)(2), only when the user bypasses an authenticating permission requirement, or an “authentication gate,” such as a password restriction that requires a user to demonstrate “that the user is the person who has access rights to the information accessed,” . . . .

Sandvig v. Barr (Civil Action No. 16-1386, March 27, 2020) at 22.

In other words, terms-of-service violations are not violations of the Computer Fraud and Abuse Act, and cannot be criminalized by virtue of that act.

Three main points appeared to guide the Court’s reasoning:

  1. The statutory text and legislative history contemplate a “two-realm internet” of public and private machines. Private machines require authorization, but public machines (e.g., websites) do not.
  2. Website terms-of-service contracts provide inadequate notice for criminal violations. No one reads them! It would be crazy to criminalize ToS non-adherence.
  3. Enabling private website owners to define the scope of criminal liability under the CFAA simply by editing their terms-of-service contract also seems crazy!

It’s worth noting that the government here argued that the researchers did not have standing to bring this suit and cited a lack of “credible threat of prosecution” because Attorney General guidance “expressly cautions against prosecutions based on [terms-of-service] violations.”

But the absence of a specific disavowal of prosecution by the Department undermines much of the government’s argument. . . . Furthermore, as noted above the government has brought similar Access Provision prosecutions in the past and thus created a credible threat of prosecution.

Discovery has not helped the government’s position. John T. Lynch, Jr., the Chief of the Computer Crime and Intellectual Property Section of the Criminal Division of the Department of Justice, testified at his deposition that it was not “impossible for the Department to bring a CFAA prosecution based on [similar] facts and de minimis harm.” Dep. of John T. Lynch, Jr. [ECF No. 48-4] at 154:3–7. Although Lynch has also stated that he does not “expect” the Department to do so, Aff. of John T. Lynch, Jr. [ECF No. 21-1] ¶ 9, “[t]he Constitution ‘does not leave us at the mercy of noblesse oblige[.]’”

Sandvig v. Barr at 10.

Meanwhile, the US Supreme Court today agreed to decide whether abusing authorized access to a computer is a federal crime. In Van Buren v. United States:

a former Georgia police officer was convicted of breaching the CFAA by looking up what he thought was an exotic dancer’s license plate number in the state’s database in exchange for $6,000. The ex-officer, Nathan Van Buren, was the target of an FBI sting operation at the time.

. . . .

Van Buren’s attorneys argued that the Eleventh Circuit’s October 2019 decision to uphold the CFAA conviction defined the law in overly broad terms that could criminalize seemingly innocuous behavior, like an employee violating company policy by using work computers to set up an NCAA basketball “March Madness” bracket or a law student using a legal database meant for “educational use” to access local housing laws in a dispute with their landlord.

. . . .

The First, Fifth and Seventh Circuits have all agreed with the Eleventh Circuit’s expansive view of the CFAA, while the Second, Fourth and Ninth Circuits have defined accessing a computer “in excess of authorization” more narrowly, the petition says.

High Court To Examine Scope Of Federal Anti-Hacking Law

Summary of EARN IT Act of 2019

Senator Lindsey Graham has introduced the EARN IT Act of 2019, which would eliminate online service providers’ immunity for the actions of their users under Section 230 of the Communications Decency Act.

The Act essentially establishes a National Commission on Online Child Exploitation Prevention, tasks this commission with drafting online best practices for preventing child exploitation by users (which would presumably mean no end-to-end encryption), and eliminates Section 230 immunity unless service providers follow those best practices.

SAFE HARBOR.—Subparagraph (A) [removing immunity] shall not apply to a claim in a civil action or charge in a criminal prosecution brought against a provider of an interactive computer service if – (i) the provider has implemented reasonable measures relating to the matters described in section 4(a)(2) [referring to creation of the best practices] of the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019 to prevent the use of the interactive computer service for the exploitation of minors . . . .

Page 17 of the EARN IT Act of 2019

Other sections create liability for “reckless” violations (instead of “knowing” violations), require online service providers to certify that they are complying with the created best practices, and set forth the requirements for membership in the newly created commission.

This bill comes after a hearing in December 2019 over the issue of legal access to encrypted devices. During that hearing Senator Graham warned representatives of Facebook and Apple that, “You’re gonna find a way to do this or we’re going to do it for you.”

3/15/20 Update – A revised version of the EARN IT Act, introduced on March 5, alters how the so-called “best practices” are created. A 19-member commission would comprise the Attorney General, the Secretary of Homeland Security, the Chairman of the FTC, and (chosen by the heads of each party in the House and Senate) four representatives from law enforcement, four from the community of child-exploitation victims, two legal experts, two technology experts, and four representatives from technology companies. The support of 14 members would be required to approve any best practices; the recommendations must then be approved by the AG, the Secretary of Homeland Security, and the FTC Chair; and finally Congress itself must enact them.

7/2/20 Update – A weakened version of the EARN IT Act advances out of committee

London police adopt facial recognition, permanently

Adam Satariano, writing for the NYT:

The technology London plans to deploy goes beyond many of the facial recognition systems used elsewhere, which match a photo against a database to identify a person. The new systems, created by the company NEC, attempt to identify people on a police watch list in real time with security cameras, giving officers a chance to stop them in the specific location.

London Police Amp Up Surveillance With Real-Time Facial Recognition

The objections voiced in the article are about potential inaccuracies in the system. But accuracy will improve over time. I don’t see many objections to the power of the system itself.

As Europe considers banning facial recognition technology, and police departments everywhere look to it to improve policing and safety, this may be the technology fight of the 2020s.

Prediction: security wins over privacy.

German Data Ethics Commission insists AI regulation is necessary

The German Data Ethics Commission issued a 240-page report with 75 recommendations for regulating data, algorithmic systems, and AI. It is one of the strongest views on ethical AI to date and favors explicit regulation.

The Data Ethics Commission holds the view that regulation is necessary, and cannot be replaced by ethical principles.

Opinion of the Data Ethics Commission – Executive Summary at 7 (emphasis original).

The report divides ethical considerations into concerns about either data or algorithmic systems. For data, the report suggests that rights associated with the data will play a significant role in the ethical landscape. For example, ensuring that individuals provide informed consent for use of their personal data addresses a number of significant ethical issues.

For algorithmic systems, however, the report suggests that the AI systems might have no connection to the affected individuals. As a result, even non-personal data for which there are no associated rights could be used in an unethical manner. The report concludes that regulation is necessary to the extent there is a potential for harm.

The report identifies five levels of algorithmic system criticality. Applications with zero or negligible potential for harm would face no regulation. The regulatory burden would increase as the potential for harm increases, up to a total ban. For applications with serious potential for harm, the report recommends constant oversight.
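The tiered scheme amounts to a lookup from criticality level to regulatory response. A sketch of that structure in Python (the level names, and the responses for the middle tiers, are my own illustrative assumptions; only "no regulation," "constant oversight," and "total ban" come from the summary above):

```python
from enum import IntEnum

class Criticality(IntEnum):
    """Five-level criticality scale (illustrative labels, not official)."""
    NEGLIGIBLE = 1  # zero or negligible potential for harm
    SOME = 2
    REGULAR = 3
    SERIOUS = 4     # serious potential for harm
    UNTENABLE = 5   # harm so great the application is banned outright

def regulatory_response(level: Criticality) -> str:
    """Map a criticality level to a regulatory burden that increases
    with the potential for harm (middle tiers are hypothetical)."""
    responses = {
        Criticality.NEGLIGIBLE: "no regulation",
        Criticality.SOME: "transparency and disclosure obligations",
        Criticality.REGULAR: "ex-ante approval procedures",
        Criticality.SERIOUS: "constant oversight",
        Criticality.UNTENABLE: "total ban",
    }
    return responses[level]

print(regulatory_response(Criticality.SERIOUS))  # constant oversight
```

The design point is simply that the burden is monotone in the level: each step up adds obligations rather than replacing them with unrelated ones.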

The framework appears to be a good candidate for future ethical AI regulation in Europe, and perhaps (by default) the world.