Constant aerial surveillance, coming to an American city

In 2015, Radiolab ran a fascinating story about a repurposed military project that put a drone in the sky all day long to film an entire city in high resolution. This allows the operators to rewind the tape and track anyone moving, forward or backward, anywhere within the city. It’s an amazing tool for fighting crime. And it’s a remarkable privacy intrusion.

The question was, would Americans be ok with this? I figured it was just a matter of time. Maybe another DC sniper would create the push for it.

Five years later, Baltimore is the first off the sidelines, and the ACLU is suing to stop them:

The American Civil Liberties Union has sued to stop Baltimore police from launching a sweeping “eye in the sky” surveillance program. The initiative, operated by a company called Persistent Surveillance Systems (PSS), would send planes flying over Baltimore at least 40 hours a week as they almost continuously collect wide-angle photos of the city. If not blocked, a pilot program is expected to begin later this year.

Lawsuit fights new Baltimore aerial surveillance program

Privacy vs. the Coronavirus

Everywhere and all at once, rules around privacy are being relaxed in the face of urgent public health concerns:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

As Coronavirus Surveillance Escalates, Personal Privacy Plummets

Meanwhile, global data privacy regulators are confident that “data protection requirements will not stop the critical sharing of information to support efforts to tackle this global pandemic.”

In the hierarchy of human needs, security always has and always will come first.

Privacy Optimism

Ben Garfinkel, a research fellow at Oxford University, writes about the difference between social privacy (what your intimates and acquaintances know about you) and institutional privacy (what governments and corporations know about you):

How about the net effect of these two trends? Have the past couple hundred years of change, overall, constituted decline or progress?

. . . . .

My personal guess is that, for most people in most places, the past couple hundred years of changes in individual privacy have mainly constituted progress. I think that most people would not sacrifice their social privacy for the sake of greater institutional privacy. I think this is especially true in countries like the US, where there are both high levels of development and comparatively strong constraints on institutional behavior. I think that if we focus on just the past thirty years, which have seen the rise of the internet, the situation is somewhat more ambiguous. But I’m at least tentatively inclined to think that most people have experienced an overall gain.

The Case for Privacy Optimism

And overall he concludes that he is optimistic about privacy trends, particularly because of artificial intelligence:

The existence of MPC [Multi-Party Computation] protocols implies that, in principle, training an AI system does not require collecting or in any way accessing the data used to train it. Likewise, in principle, applying a trained AI system to an input does not require access to this input or even to the system’s output.

The implication, then, is this: Insofar as an institution can automate the tasks that its members perform by training AI systems to perform them instead, and insofar as the institution can carry out the relevant computations using MPC, then in the limit the institution does not need to collect any information about the people it serves.

This view, which of course assumes quite a bit of technology, is plausible, and it is consistent with the views of a number of other researchers who see AI technology as a potential improvement in our ability to manage human bias and privacy intrusions.
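Garfinkel’s claim about MPC can be made a bit more concrete with a toy example. The sketch below shows additive secret sharing, the basic building block behind many MPC protocols: data owners split their values into random-looking shares, the compute parties work only on shares, and only the aggregate result is ever reconstructed. This is a minimal illustration under simplifying assumptions (honest parties, secure channels, a made-up modulus), not a real protocol and not anything taken from Garfinkel’s post.

    import random

    MODULUS = 2**61 - 1  # all arithmetic is done modulo a large prime

    def share(secret, n_parties):
        # Split a value into n random-looking shares; any subset short of
        # all n shares reveals nothing about the original value.
        shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % MODULUS)
        return shares

    def reconstruct(shares):
        # Only the sum of all shares recovers a value.
        return sum(shares) % MODULUS

    # Two data owners each hold a private value (say, a health statistic).
    alice_value, bob_value = 42, 58

    # Each owner distributes shares of its value to three compute parties.
    alice_shares = share(alice_value, 3)
    bob_shares = share(bob_value, 3)

    # Each compute party adds the shares it holds; no party ever sees a raw value.
    partial_sums = [(a + b) % MODULUS for a, b in zip(alice_shares, bob_shares)]

    # Combining the partial results yields the aggregate, and only the aggregate.
    print(reconstruct(partial_sums))  # prints 100

Real MPC systems for training models are far more involved than this, but the point of the quoted passage is the same: the computation can be arranged so that no institution ever holds the underlying data in the clear.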

I also tend to believe the glass is half full. That’s my own bias.

NYC issues report on use of algorithms

New York City convened a task force in 2017 to “develop recommendations that will provide a framework for the use of and policy around ADS [automated decision systems].” The report is now out, and it was immediately criticized:

“It’s a waste, really,” says Meredith Whittaker, co-founder of the AI Now Institute and a member of the task force. “This is a sad precedent.” . . .

Ultimately, she says, the report, penned by city officials, “reflects the city’s view and disappointingly fails to include a lot of the dissenting views of task force members.” Members of the task force were given presentations on automated systems that Whittaker says “felt more like pitches or endorsements.” Efforts to make specific policy changes, like developing informational cards on algorithms, were scrapped, she says.

NYC’s algorithm task force was ‘a waste,’ member says

The report itself makes three fairly pointless recommendations: (1) build capacity for an equitable, effective, and responsible approach to the City’s ADS; (2) broaden public discussion on ADS; and (3) formalize ADS management functions.

Someone should really start thinking about this!

The report’s summary contains an acknowledgement that, “we did not reach consensus on every potentially relevant issue . . . .”

Anonymizing data is hard

Google tried to anonymize health care data and failed:

On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.

Google almost made 100,000 chest X-rays public — until it realized personal data could be exposed

This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.

That’s how the system is supposed to work.

“Face surveillance” vs “face identification”

Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise in facial recognition technology:

We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.

On the other hand, we should allow “face identification”— again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition

They propose four requirements for allowing facial IDs in law enforcement:

  1. facial IDs should be proven effective across gender and race;
  2. facial IDs should be restricted only to serious crimes;
  3. facial IDs should not be limited to criminal databases, since inclusion in those databases may have been influenced by racist policing policies; and
  4. judicial warrants should be required.

But unless we ban face surveillance for private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.

Surveillance in schools edition

Surveillance, surveillance everywhere. Next stop, public schools.

In rural Weld county, Colorado, a school official got an alert from GoGuardian, a company that monitors students’ internet searches, that a student was doing a Google search for “how to kill myself” late one evening. The official worked with a social worker to call law enforcement to conduct an in-person safety check at the student’s home, said Dr Teresa Hernandez, the district’s chief intervention and safety officer. When the student’s mother answered the door, she was confused, and said that her child had been upstairs sleeping since 9pm. “We had the search history to show, actually, no, that’s not what was going on,” Hernandez said.

Federal law requires that American public schools block access to harmful websites, and that they “monitor” students’ online activities. What exactly this “monitoring” means has never been clearly defined: the Children’s Internet Protection Act, passed nearly 20 years ago, was driven in part by fears that American children might look at porn on federally funded school computers.

As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

Under digital surveillance: how American schools spy on millions of kids

Can medical data ever be truly anonymous?

Almost by definition, meaningful medical data is unique to individuals. Yet health care studies and developing technologies need access to large amounts of medical data to refine techniques and make new discoveries. Historically this medical data has been anonymized to hide true individual identities. Will that even be possible in the future?

A magnetic resonance imaging scan includes the entire head, including the subject’s face. And while the countenance is blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan. 

Under some circumstances, that face can be matched to an individual with facial recognition software.

You Got a Brain Scan at the Hospital. Someday a Computer May Use It to Identify You.

Google was recently sued over allegations that it had not sufficiently anonymized health care data, because it was collecting location data from Android phones in parallel. (The location data could allegedly be combined with the dates of hospital admission in the health care data to re-identify individuals.)
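To see why that combination is powerful, here is a toy sketch of a linkage attack of the sort the complaint alleges: a “de-identified” medical record joined to an identifiable location trail on a shared quasi-identifier. Every record, field name, and value below is invented for illustration.

    # De-identified hospital records: no names, just a date, a facility, and a code.
    deidentified_records = [
        {"admit_date": "2019-06-03", "facility": "General Hospital", "diagnosis": "J45"},
        {"admit_date": "2019-06-07", "facility": "General Hospital", "diagnosis": "E11"},
    ]

    # Location history tied to an identifiable account (e.g., a phone's timeline).
    location_history = [
        {"user": "user_123", "date": "2019-06-03", "place": "General Hospital"},
        {"user": "user_123", "date": "2019-06-11", "place": "Coffee shop"},
    ]

    # Joining on (date, place) re-attaches an identity to a "de-identified" record.
    for record in deidentified_records:
        for ping in location_history:
            if (ping["date"], ping["place"]) == (record["admit_date"], record["facility"]):
                print(f"{ping['user']} was likely admitted on {record['admit_date']} "
                      f"with diagnosis code {record['diagnosis']}")

With a handful of records the match is obvious to the eye; at scale, the same join is just a database query, which is why quasi-identifiers like dates and places undermine naive de-identification.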

Anonymization is hard, and it’s getting harder.