Anonymization is hard, Waze edition

Security engineer Peter Gasper:

What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. [Besides] the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eye was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

Waze: How I Tracked Your Mother (via Schneier on Security)
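
Once the IDs persist, the attack is little more than a polling loop. Here is a rough sketch of the idea in Python; `fetch_nearby()` is a hypothetical stand-in for the endpoint Gasper queried, not the actual Waze API:

```python
import time

def fetch_nearby(lat, lon):
    """Hypothetical placeholder: return nearby driver icons as
    [{'id': ..., 'lat': ..., 'lon': ...}, ...]. The real Waze request
    and response formats are deliberately not reproduced here."""
    return []

def track_driver(target_id, lat, lon, polls=60, interval_s=60):
    """Because the icon IDs don't change between responses, repeatedly
    polling the area around the target's last known position yields a
    time-stamped movement trail."""
    trail = []
    for _ in range(polls):
        for driver in fetch_nearby(lat, lon):
            if driver["id"] == target_id:
                trail.append((time.time(), driver["lat"], driver["lon"]))
                lat, lon = driver["lat"], driver["lon"]  # re-center on the target
        time.sleep(interval_s)
    return trail
```

Rotating or omitting the per-driver IDs between responses would break the linkage; the stable ID is the whole attack surface.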

Anonymizing is hard

The task of proper anonymization is harder than it looks. Yet another example:

It turns out, though, that those redactions are possible to crack. That’s because the deposition—which you can read in full here—includes a complete alphabetized index of the redacted and unredacted words that appear in the document.

We Cracked the Redactions in the Ghislaine Maxwell Deposition (via Schneier on Security)
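
To see why an alphabetized index gives the game away: a blacked-out entry still occupies its alphabetical slot, so any candidate for it has to sort between the unredacted neighbors. A toy sketch in Python, with made-up words rather than the deposition's actual index:

```python
import bisect

# The index lists every word alphabetically, and redacted entries keep
# their slot. Anything that must sort between the neighboring unredacted
# entries is a candidate for the blacked-out word.

def candidates_for_redaction(index_entries, redacted_pos, vocabulary):
    """index_entries: the alphabetized index, with "[REDACTED]" placeholders.
    vocabulary: a sorted list of plausible words or names (hypothetical)."""
    lower = next((w for w in reversed(index_entries[:redacted_pos])
                  if w != "[REDACTED]"), "")
    upper = next((w for w in index_entries[redacted_pos + 1:]
                  if w != "[REDACTED]"), "\uffff")
    lo = bisect.bisect_right(vocabulary, lower)
    hi = bisect.bisect_left(vocabulary, upper)
    return vocabulary[lo:hi]

vocab = ["apple", "banana", "cherry", "date", "fig", "grape", "kiwi"]
index = ["cherry", "[REDACTED]", "grape"]
print(candidates_for_redaction(index, 1, vocab))  # ['date', 'fig']
```

And if the index also lists where each word appears, as deposition indexes typically do, cross-referencing those locations against the redacted spans narrows the candidates even further.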

This seems to be a corollary of Schneier's Law: any person can anonymize data so cleverly that he or she can't imagine how to break it.

Although the truth is that most people don't even try to break their own work.

DC District Court: “the CFAA does not criminalize mere terms-of-service violations on consumer websites”

Two academics wished to test whether employment websites discriminate based on race or gender. They intended to submit false information (e.g., fictitious profiles) to these websites, but worried that these submissions would violate the sites' terms of service and could subject them to prosecution under the federal Computer Fraud and Abuse Act. So they sued for clarity.

The District Court ruled that:

a user should be deemed to have “accesse[d] a computer without authorization,” 18 U.S.C. § 1030(a)(2), only when the user bypasses an authenticating permission requirement, or an “authentication gate,” such as a password restriction that requires a user to demonstrate “that the user is the person who has access rights to the information accessed,” . . . .

Sandvig v. Barr (Civil Action No. 16-1386, March 27, 2020) at 22.

In other words, terms-of-service violations are not violations of the Computer Fraud and Abuse Act, and cannot be criminalized by virtue of that act.

Three main points appeared to guide the Court’s reasoning:

  1. The statutory text and legislative history contemplate a “two-realm internet” of public and private machines. Private machines require authorization, but public machines (e.g., websites) do not.
  2. Website terms-of-service contracts provide inadequate notice for criminal violations. No one reads them! It would be crazy to criminalize ToS non-adherence.
  3. Enabling private website owners to define the scope of criminal liability under the CFAA simply by editing their terms-of-service contract also seems crazy!

It’s worth noting that the government here argued that the researchers did not have standing to bring this suit and cited a lack of “credible threat of prosecution” because Attorney General guidance “expressly cautions against prosecutions based on [terms-of-service] violations.”

But the absence of a specific disavowal of prosecution by the Department undermines much of the government’s argument. . . . Furthermore, as noted above the government has brought similar Access Provision prosecutions in the past and thus created a credible threat of prosecution.

Discovery has not helped the government’s position. John T. Lynch, Jr., the Chief of the Computer Crime and Intellectual Property Section of the Criminal Division of the Department of Justice, testified at his deposition that it was not “impossible for the Department to bring a CFAA prosecution based on [similar] facts and de minimis harm.” Dep. of John T. Lynch, Jr. [ECF No. 48-4] at 154:3–7. Although Lynch has also stated that he does not “expect” the Department to do so, Aff. of John T. Lynch, Jr. [ECF No. 21-1] ¶ 9, “[t]he Constitution ‘does not leave us at the mercy of noblesse oblige[.]’”

Sandvig v. Barr at 10.

Meanwhile, the US Supreme Court today agreed to decide whether abusing authorized access to a computer is a federal crime. In Van Buren v. United States:

a former Georgia police officer was convicted of breaching the CFAA by looking up what he thought was an exotic dancer’s license plate number in the state’s database in exchange for $6,000. The ex-officer, Nathan Van Buren, was the target of an FBI sting operation at the time.

. . . .

Van Buren’s attorneys argued that the Eleventh Circuit’s October 2019 decision to uphold the CFAA conviction defined the law in overly broad terms that could criminalize seemingly innocuous behavior, like an employee violating company policy by using work computers to set up an NCAA basketball “March Madness” bracket or a law student using a legal database meant for “educational use” to access local housing laws in a dispute with their landlord.

. . . .

The First, Fifth and Seventh Circuits have all agreed with the Eleventh Circuit’s expansive view of the CFAA, while the Second, Fourth and Ninth Circuits have defined accessing a computer “in excess of authorization” more narrowly, the petition says.

High Court To Examine Scope Of Federal Anti-Hacking Law

Constant aerial surveillance, coming to an American city

In 2015, Radiolab ran a fascinating story about a repurposed military project that put an aircraft in the sky all day long to film an entire city in high resolution. This allows operators to rewind the tape and track anyone moving, forward or backward in time, anywhere within the city. It's an amazing tool for fighting crime. And it's a remarkable privacy intrusion.

The question was, would Americans be OK with this? I figured it was just a matter of time. Maybe another DC sniper would create the push for it.

Five years later, Baltimore is the first off the sidelines, and the ACLU is suing to stop it:

The American Civil Liberties Union has sued to stop Baltimore police from launching a sweeping “eye in the sky” surveillance program. The initiative, operated by a company called Persistent Surveillance Systems (PSS), would send planes flying over Baltimore at least 40 hours a week as they almost continuously collect wide-angle photos of the city. If not blocked, a pilot program is expected to begin later this year.

Lawsuit fights new Baltimore aerial surveillance program

Bad software kills 346 people

That’s a fair headline for the story that has ultimately emerged about the Boeing 737 MAX crashes.

The Verge has a good overview:

But Boeing’s software shortcut had a serious problem. Under certain circumstances, it activated erroneously, sending the airplane into an infinite loop of nose-dives. Unless the pilots can, in under four seconds, correctly diagnose the error, throw a specific emergency switch, and start recovery maneuvers, they will lose control of the airplane and crash — which is exactly what happened in the case of Lion Air Flight 610 and Ethiopian Airlines Flight 302.

The ancient computers in the Boeing 737 Max are holding up a fix

I once linked to a story about how no one really cares about software security because no one ever gets seriously hurt. This is a hell of a counterpoint, though admittedly a narrow one.

Australia rolls out driving-while-talking AI detectors

Cameras in New South Wales, Australia, will detect when drivers are using mobile phones. Importantly, the system keeps a human in the loop who verifies the accuracy of each detection.
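
Roughly, the pattern looks like this; the names and threshold below are hypothetical, not a description of NSW's actual system. The detector only nominates candidate images, and nothing counts as a violation until a reviewer confirms it:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    image_id: str
    confidence: float  # detector's score that a phone is in the driver's hand

@dataclass
class ReviewQueue:
    """Automated detections wait here until a human reviewer rules on them."""
    pending: list = field(default_factory=list)
    confirmed: list = field(default_factory=list)

    def add(self, det: Detection, min_confidence: float = 0.9):
        # Low-scoring detections are discarded without reaching a reviewer.
        if det.confidence >= min_confidence:
            self.pending.append(det)

    def review(self, is_violation):
        # is_violation(det) -> bool stands for the human judgment call;
        # only detections a person confirms move on to enforcement.
        for det in self.pending:
            if is_violation(det):
                self.confirmed.append(det)
        self.pending.clear()
```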

This kind of automated policing raises concerns among many ethicists. (What if the system is worse at detecting certain races or genders and skews enforcement?) But overall it is hard to find fault with this kind of efficient safety innovation. Innocent people are killed every day by distracted drivers.

Indisputable benefits of facial recognition technology

There is a lot of concern about facial recognition technology, but of course there are also indisputable benefits:

The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India’s missing children with another one comprising shots of all the minors living in the country’s child care institutions.

He had just found out the results. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This momentous undertaking was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”

India is trying to build the world’s biggest facial recognition system (via Marginal Revolution)
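
The technical core is a large-scale matching problem: compute a face embedding for every photo in both databases and flag the pairs that land close together. A minimal sketch, assuming some embedding model (the article doesn't say what system the Delhi police provided), with a hypothetical `embed()`:

```python
import numpy as np

def embed(photo) -> np.ndarray:
    """Hypothetical: map one photo to a fixed-length face embedding.
    Stands in for whatever recognition model the police system uses."""
    raise NotImplementedError

def match_databases(missing, institutional, threshold=0.6):
    """missing / institutional: dicts of {record_id: photo}.
    Returns candidate (missing_id, institutional_id) pairs whose
    embeddings fall within the distance threshold."""
    inst_vecs = {i_id: embed(photo) for i_id, photo in institutional.items()}
    matches = []
    for m_id, m_photo in missing.items():
        m_vec = embed(m_photo)
        for i_id, i_vec in inst_vecs.items():
            if np.linalg.norm(m_vec - i_vec) < threshold:
                matches.append((m_id, i_id))
    return matches
```

At the scale the article describes (over 300,000 missing-children records against more than 100,000 institutional ones), the quadratic comparison above would be swapped for an approximate nearest-neighbor index, but the matching logic is the same.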

Impact of hospital ransomware

Information security is a public health concern too.

Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.

As PBS noted in its coverage of the Vanderbilt study, after data breaches as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined. The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.

“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”

Study: Ransomware, Data Breaches at Hospitals tied to Uptick in Fatal Heart Attacks