DC District Court: “the CFAA does not criminalize mere terms-of-service violations on consumer websites”

Two academics wished to test whether employment websites discriminate based on race or gender. They intended to submit false information (e.g., fictitious profiles) to these websites, but worried that these submissions would violate the sites’ terms of service and could subject them to prosecution under the federal Computer Fraud and Abuse Act. So they sued for clarity.

The District Court ruled that:

a user should be deemed to have “accesse[d] a computer without authorization,” 18 U.S.C. § 1030(a)(2), only when the user bypasses an authenticating permission requirement, or an “authentication gate,” such as a password restriction that requires a user to demonstrate “that the user is the person who has access rights to the information accessed,” . . . .

Sandvig v. Barr (Civil Action No. 16-1386, March 27, 2020) at 22.

In other words, terms-of-service violations are not violations of the Computer Fraud and Abuse Act, and cannot be criminalized by virtue of that act.

Three main points appeared to guide the Court’s reasoning:

  1. The statutory text and legislative history contemplate a “two-realm internet” of public and private machines. Private machines require authorization, but public machines (e.g., websites) do not.
  2. Website terms-of-service contracts provide inadequate notice for criminal violations. No one reads them! It would be crazy to criminalize ToS non-adherence.
  3. Enabling private website owners to define the scope of criminal liability under the CFAA simply by editing their terms-of-service contract also seems crazy!

It’s worth noting that the government here argued that the researchers did not have standing to bring this suit and cited a lack of “credible threat of prosecution” because Attorney General guidance “expressly cautions against prosecutions based on [terms-of-service] violations.”

But the absence of a specific disavowal of prosecution by the Department undermines much of the government’s argument. . . . Furthermore, as noted above the government has brought similar Access Provision prosecutions in the past and thus created a credible threat of prosecution.

Discovery has not helped the government’s position. John T. Lynch, Jr., the Chief of the Computer Crime and Intellectual Property Section of the Criminal Division of the Department of Justice, testified at his deposition that it was not “impossible for the Department to bring a CFAA prosecution based on [similar] facts and de minimis harm.” Dep. of John T. Lynch, Jr. [ECF No. 48-4] at 154:3–7. Although Lynch has also stated that he does not “expect” the Department to do so, Aff. of John T. Lynch, Jr. [ECF No. 21-1] ¶ 9, “[t]he Constitution ‘does not leave us at the mercy of noblesse oblige[.]’”

Sandvig v. Barr at 10.

Meanwhile, the US Supreme Court today agreed to decide whether abusing authorized access to a computer is a federal crime. In Van Buren v. United States:

a former Georgia police officer was convicted of breaching the CFAA by looking up what he thought was an exotic dancer’s license plate number in the state’s database in exchange for $6,000. The ex-officer, Nathan Van Buren, was the target of an FBI sting operation at the time.

. . . .

Van Buren’s attorneys argued that the Eleventh Circuit’s October 2019 decision to uphold the CFAA conviction defined the law in overly broad terms that could criminalize seemingly innocuous behavior, like an employee violating company policy by using work computers to set up an NCAA basketball “March Madness” bracket or a law student using a legal database meant for “educational use” to access local housing laws in a dispute with their landlord.

. . . .

The First, Fifth and Seventh Circuits have all agreed with the Eleventh Circuit’s expansive view of the CFAA, while the Second, Fourth and Ninth Circuits have defined accessing a computer “in excess of authorization” more narrowly, the petition says.

High Court To Examine Scope Of Federal Anti-Hacking Law

Constant aerial surveillance, coming to an American city

In 2015, Radiolab ran a fascinating story about a repurposed military project that put a drone in the sky all day long to film an entire city in high resolution. This allows the operators to rewind the tape and track anyone moving, forward or backward, anywhere within the city. It’s an amazing tool for fighting crime. And it’s a remarkable privacy intrusion.

The question was: would Americans be OK with this? I figured it was just a matter of time. Maybe another DC sniper would create the push for it.

Five years later, Baltimore is the first off the sidelines, and the ACLU is suing to stop them:

The American Civil Liberties Union has sued to stop Baltimore police from launching a sweeping “eye in the sky” surveillance program. The initiative, operated by a company called Persistent Surveillance Systems (PSS), would send planes flying over Baltimore at least 40 hours a week as they almost continuously collect wide-angle photos of the city. If not blocked, a pilot program is expected to begin later this year.

Lawsuit fights new Baltimore aerial surveillance program

Bad software kills 346 people

That’s a fair headline for the story that has ultimately emerged about the Boeing 737 MAX crashes.

The Verge has a good overview:

But Boeing’s software shortcut had a serious problem. Under certain circumstances, it activated erroneously, sending the airplane into an infinite loop of nose-dives. Unless the pilots can, in under four seconds, correctly diagnose the error, throw a specific emergency switch, and start recovery maneuvers, they will lose control of the airplane and crash — which is exactly what happened in the case of Lion Air Flight 610 and Ethiopian Airlines Flight 302.

The Ancient Computers in the Boeing 737 Max Are Holding Up a Fix

I once linked to a story about how no one really cares about software security because no one ever gets seriously hurt. This is a hell of a counterpoint, though admittedly a narrow one.
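
The reported root cause also makes the failure easy to illustrate: the system keyed its automatic nose-down trim off a single angle-of-attack sensor, so one faulty vane could command repeated dives. Below is a minimal sketch, in Python, of the kind of redundancy cross-check the fix reportedly adds; the function, thresholds, and numbers are hypothetical stand-ins, not Boeing’s code.

```python
# Illustrative sketch only -- not Boeing's code. It shows the general
# cross-check of the reported fix: compare redundant angle-of-attack
# (AoA) sensors and refuse to act on a disagreement, instead of
# trusting a single sensor.

AOA_DISAGREE_LIMIT_DEG = 5.5  # hypothetical threshold for illustration

def mcas_style_command(aoa_left_deg: float, aoa_right_deg: float,
                       stall_aoa_deg: float = 14.0) -> float:
    """Return a nose-down trim command in degrees, or 0.0 to stay inactive."""
    # The original design keyed off one sensor, so a single bad vane
    # could trigger repeated nose-down commands.
    if abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG:
        return 0.0  # sensors disagree -> deactivate rather than guess

    aoa = (aoa_left_deg + aoa_right_deg) / 2.0
    if aoa > stall_aoa_deg:
        return 2.5  # hypothetical bounded, one-shot correction
    return 0.0

# A faulty left vane (74.5 deg) against a sane right vane (4.0 deg) now
# deactivates the system instead of commanding a dive:
print(mcas_style_command(74.5, 4.0))  # -> 0.0
```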

Australia rolls out driving-while-talking AI detectors

Cameras in New South Wales, Australia, will detect when drivers are using mobile phones. Importantly, the system has a human in the loop who verifies the accuracy of each detection.

This kind of automated policing raises concerns among many ethicists. (What if the system is bad at detecting certain races or genders and skews the enforcement?) But overall it is hard to find fault with this kind of efficient safety innovation. Innocent people are killed every day by distracted drivers.
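
To make that design concrete, here is a minimal sketch of a human-in-the-loop pipeline of this general shape; the scores, threshold, and review step are hypothetical, not the actual NSW system.

```python
# Minimal sketch of a human-in-the-loop detection pipeline (illustrative
# only; the scores, threshold, and review step are hypothetical, not the
# actual NSW system).
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    phone_use_score: float  # model's confidence that a phone is in use

REVIEW_THRESHOLD = 0.8  # hypothetical: below this, the image is discarded

def triage(detections, human_review):
    """Model output only nominates candidates; a fine requires a human yes."""
    confirmed = []
    for d in detections:
        if d.phone_use_score < REVIEW_THRESHOLD:
            continue  # the model alone never triggers enforcement
        if human_review(d):  # a person inspects the actual image
            confirmed.append(d)
    return confirmed

# A reviewer who rejects everything yields no fines, whatever the model says:
detections = [Detection("img-001", 0.95), Detection("img-002", 0.40)]
print(triage(detections, human_review=lambda d: False))  # -> []
```

The point of the structure is that the model can only nominate candidates; enforcement requires a human yes.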

Indisputable benefits of facial recognition technology

There is a lot of concern about facial recognition technology, but of course there are also indisputable benefits:

The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India’s missing children with another one comprising shots of all the minors living in the country’s child care institutions.

He had just found out the results. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This momentous undertaking was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”

India is trying to build the world’s biggest facial recognition system (via Marginal Revolution)
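
The mechanics here also explain the “couldn’t possibly have matched them all manually” line: each photo is reduced to an embedding vector, and the two databases are compared pairwise. A minimal sketch follows, with hypothetical embeddings and threshold; production systems replace the brute-force inner loop with an approximate nearest-neighbor index.

```python
# Illustrative sketch of cross-database face matching (the embeddings and
# threshold are hypothetical; the Delhi system's internals are not public).
import numpy as np

MATCH_THRESHOLD = 0.92  # hypothetical cutoff for "same person"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_databases(missing, institutions):
    """Pair each missing-child photo embedding with its best institutional
    match, keeping only pairs above the similarity threshold."""
    matches = []
    for m_id, m_vec in missing.items():
        best_id, best_sim = None, -1.0
        for i_id, i_vec in institutions.items():
            sim = cosine_similarity(m_vec, i_vec)
            if sim > best_sim:
                best_id, best_sim = i_id, sim
        if best_sim >= MATCH_THRESHOLD:
            matches.append((m_id, best_id, best_sim))
    return matches

# Toy example with 3-dimensional "embeddings" (real ones are ~128-512 dims):
missing = {"case-17": np.array([0.9, 0.1, 0.1])}
institutions = {"inst-04": np.array([0.88, 0.12, 0.09]),
                "inst-09": np.array([0.1, 0.9, 0.2])}
print(match_databases(missing, institutions))  # -> [('case-17', 'inst-04', ...)]
```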

Impact of hospital ransomware

Information security is a public health concern too.

Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.

As PBS noted in its coverage of the Vanderbilt study, after data breaches as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined.

The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.

“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”

Study: Ransomware, Data Breaches at Hospitals Tied to Uptick in Fatal Heart Attacks
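
The study design is essentially a join between two datasets, which is worth sketching because it shows how a security event becomes measurable in clinical data. Here is a rough sketch in Python/pandas; the tables and numbers below are invented for illustration, not the researchers’ data.

```python
# Rough sketch of the study's design (illustrative; the data here is made
# up -- the real study used the HHS breach list and Medicare outcome data).
import pandas as pd

# One row per breached hospital, in the spirit of the HHS breach list.
breaches = pd.DataFrame({
    "hospital_id": ["H1"],
    "breach_year": [2014],
})

# Hospital-year outcomes: time to electrocardiogram and AMI mortality.
outcomes = pd.DataFrame({
    "hospital_id": ["H1", "H1", "H2", "H2"],
    "year":        [2013, 2015, 2013, 2015],
    "minutes_to_ecg": [9.0, 11.7, 9.1, 9.2],
    "ami_mortality":  [0.120, 0.124, 0.119, 0.120],
})

# Flag hospital-years that fall after a breach at that hospital.
merged = outcomes.merge(breaches, on="hospital_id", how="left")
merged["post_breach"] = merged["year"] >= merged["breach_year"]  # NaN -> False

# Compare post-breach hospital-years against everything else.
print(merged.groupby("post_breach")[["minutes_to_ecg", "ami_mortality"]].mean())
```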

“Face surveillance” vs “face identification”

Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise in facial recognition technology:

We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.

On the other hand, we should allow “face identification” — again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition

They propose four requirements for allowing facial IDs in law enforcement:

  1. facial IDs should be proven effective across gender and race;
  2. facial IDs should be used only for serious crimes;
  3. facial ID searches should not be limited to criminal databases, since inclusion in those databases may itself have been shaped by racist policing; and
  4. judicial warrants should be required.

But unless we ban face surveillance for private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.

A former law enforcement lawyer flips on encryption

Jim Baker was the general counsel of the FBI during its much publicized dispute with Apple over iPhone encryption. He now says public officials should become “among the strongest supporters of widely available strong encryption”:

I know full well that this approach will be a bitter pill for some in law enforcement and other public safety fields to swallow, and many people will reject it outright. It may make some of my former colleagues angry at me. I expect that some will say that I’m simply joining others who have left the government and switched sides on encryption to curry favor with the tech sector in order to get a job. That is wrong. My dim views about cybersecurity risks, China and Huawei are essentially the same as those that I held while in government. I also think that my overall approach on encryption today—as well as my frustration with Congress—is generally consistent with the approach I had while I was in government.

I have long said—as I do here—that encryption poses real challenges for public safety officials; that any proposed technical solution must properly balance all of the competing equities; and that (absent an unlikely definitive judicial ruling as a result of litigation) Congress must change the law to resolve the issue. What has changed is my acceptance of, or perhaps resignation to, the fact that Congress is unlikely to act, as well as my assessment that the relevant cybersecurity risks to society have grown disproportionately over the years when compared with other risks.

Rethinking Encryption

In a nutshell: strong encryption is already widely available (not just in the U.S.), we’re probably already in a golden age of surveillance, weak cybersecurity is a bigger problem, and… China.

Surveillance in schools edition

Surveillance, surveillance everywhere. Next stop, public schools.

In rural Weld county, Colorado, a school official got an alert from GoGuardian, a company that monitors students’ internet searches, that a student was doing a Google search for “how to kill myself” late one evening. The official worked with a social worker to call law enforcement to conduct an in-person safety check at the student’s home, said Dr Teresa Hernandez, the district’s chief intervention and safety officer. When the student’s mother answered the door, she was confused, and said that her child had been upstairs sleeping since 9pm. “We had the search history to show, actually, no, that’s not what was going on,” Hernandez said.

Federal law requires that American public schools block access to harmful websites, and that they “monitor” students’ online activities. What exactly this “monitoring” means has never been clearly defined: the Children’s Internet Protection Act, passed nearly 20 years ago, was driven in part by fears that American children might look at porn on federally funded school computers.

As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

Under digital surveillance: how American schools spy on millions of kids
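
Mechanically, the monitoring described above reduces to a keyword-alerting pipeline with a human escalation step. A minimal sketch follows; the watchlist, matching, and routing here are hypothetical, not GoGuardian’s actual implementation.

```python
# Minimal sketch of search-term alerting of the kind described above
# (illustrative only; the watchlist, matching, and escalation are
# hypothetical, not GoGuardian's actual implementation).
import re
from datetime import datetime

WATCHLIST = [r"how to kill myself", r"suicide methods"]  # hypothetical terms
PATTERNS = [re.compile(t, re.IGNORECASE) for t in WATCHLIST]

def check_search(student_id: str, query: str, notify) -> bool:
    """Flag a student search that matches the watchlist and notify a
    designated school official for human follow-up."""
    if any(p.search(query) for p in PATTERNS):
        notify({
            "student": student_id,
            "query": query,
            "time": datetime.now().isoformat(),
        })
        return True
    return False

# Example: route alerts to a printout standing in for the on-call official.
check_search("s-1042", "How to kill myself", notify=print)
```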