Privacy Optimism

Ben Garfinkel, a research fellow at Oxford University, writes about the difference between social privacy (what your intimates and acquaintances know about you) and institutional privacy (what governments and corporations know about you):

How about the net effect of these two trends? Have the past couple hundred years of change, overall, constituted decline or progress?

. . . . .

My personal guess is that, for most people in most places, the past couple hundred years of changes in individual privacy have mainly constituted progress. I think that most people would not sacrifice their social privacy for the sake of greater institutional privacy. I think this is especially true in countries like the US, where there are both high levels of development and comparatively strong constraints on institutional behavior. I think that if we focus on just the past thirty years, which have seen the rise of the internet, the situation is somewhat more ambiguous. But I’m at least tentatively inclined to think that most people have experienced an overall gain.

The Case for Privacy Optimism

And overall he concludes that he is optimistic about privacy trends, particularly because of artificial intelligence:

The existence of MPC [Multi-Party Computation] protocols implies that, in principle, training an AI system does not require collecting or in any way accessing the data used to train it. Likewise, in principle, applying a trained AI system to an input does not require access to this input or even to the system’s output.

The implication, then, is this: Insofar as an institution can automate the tasks that its members perform by training AI systems to perform them instead, and insofar as the institution can carry out the relevant computations using MPC, then in the limit the institution does not need to collect any information about the people it serves.

This view, which of course assumes quite a bit of technology, is both plausible and consistent with the views of a number of other researchers who see AI as a potential improvement in our ability to manage human bias and privacy intrusions.
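
As a toy illustration of the kind of technology Garfinkel is pointing at, here is a minimal additive secret-sharing sketch in Python (my own hypothetical example, nowhere near a production MPC protocol): three parties jointly compute an aggregate without any one of them ever seeing another's raw value.

```python
# Toy additive secret sharing over a prime field: three parties compute the
# sum of their private inputs without revealing the inputs themselves.
# This is only a sketch of the idea behind MPC, not a real protocol
# (no networking, no malicious security, no fixed-point encoding).
import secrets

PRIME = 2**61 - 1  # field modulus (an arbitrary large prime for the demo)

def share(value, n_parties=3):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; only the combined result is ever revealed."""
    return sum(shares) % PRIME

# Each party holds a private input (e.g., a local training statistic).
private_inputs = [42, 7, 13]

# Every party splits its input and hands one share to each other party.
all_shares = [share(x) for x in private_inputs]

# Each party locally adds the shares it received -- it learns nothing
# about any individual input from its slice of the data.
local_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Only when the local sums are combined does the aggregate appear.
print(reconstruct(local_sums))  # -> 62
```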

I also tend to believe the glass is half full. That’s my own bias.

NYC issues report on use of algorithms

New York City convened a task force in 2017 to “develop recommendations that will provide a framework for the use of and policy around ADS [automated decision systems].” The report is now out, and it was immediately criticized:

“It’s a waste, really,” says Meredith Whittaker, co-founder of the AI Now Institute and a member of the task force. “This is a sad precedent.” . . .

Ultimately, she says, the report, penned by city officials, “reflects the city’s view and disappointingly fails to leave out a lot of the dissenting views of task force members.” Members of the task force were given presentations on automated systems that Whittaker says “felt more like pitches or endorsements.” Efforts to make specific policy changes, like developing informational cards on algorithms, were scrapped, she says.

NYC’s algorithm task force was ‘a waste,’ member says

The report itself makes three fairly pointless recommendations: (1) build capacity for an equitable, effective, and responsible approach to the City’s ADS; (2) broaden public discussion on ADS; and (3) formalize ADS management functions.

Someone should really start thinking about this!

The report’s summary contains an acknowledgement that “we did not reach consensus on every potentially relevant issue . . . .”

Anonymizing data is hard

Google tried to anonymize health care data and failed:

On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.

Google almost made 100,000 chest X-rays public — until it realized personal data could be exposed

This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.

That’s how the system is supposed to work.

“Face surveillance” vs “face identification”

Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise on facial recognition technology:

We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.

On the other hand, we should allow “face identification”— again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition

They propose four requirements for allowing facial IDs in law enforcement:

  1. facial IDs should be proven effective across gender and race;
  2. facial IDs should be used only for serious crimes;
  3. facial IDs should not be limited to criminal databases, inclusion in which may have been influenced by racist policing policies; and
  4. judicial warrants should be required.

But unless we ban face surveillance for private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.

Surveillance in schools edition

Surveillance, surveillance everywhere. Next stop, public schools.

In rural Weld county, Colorado, a school official got an alert from GoGuardian, a company that monitors students’ internet searches, that a student was doing a Google search for “how to kill myself” late one evening. The official worked with a social worker to call law enforcement to conduct an in-person safety check at the student’s home, said Dr Teresa Hernandez, the district’s chief intervention and safety officer. When the student’s mother answered the door, she was confused, and said that her child had been upstairs sleeping since 9pm. “We had the search history to show, actually, no, that’s not what was going on,” Hernandez said.

Federal law requires that American public schools block access to harmful websites, and that they “monitor” students’ online activities. What exactly this “monitoring” means has never been clearly defined: the Children’s Internet Protection Act, passed nearly 20 years ago, was driven in part by fears that American children might look at porn on federally funded school computers.

As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

Under digital surveillance: how American schools spy on millions of kids

Can medical data ever be truly anonymous?

Almost by definition, meaningful medical data is unique to individuals. Yet health care studies and developing technologies need access to large amounts of medical data to refine techniques and make new discoveries. Historically, this medical data has been anonymized to hide individual identities. Will that even be possible in the future?

A magnetic resonance imaging scan includes the entire head, including the subject’s face. And while the countenance is blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan. 

Under some circumstances, that face can be matched to an individual with facial recognition software.

You Got a Brain Scan at the Hospital. Someday a Computer May Use It to Identify You.

Google was recently sued by plaintiffs alleging that it had not sufficiently anonymized health care data because of its parallel collection of location data from Android phones. (The location data could allegedly be combined with the dates of hospital admission in the health care data to re-identify individuals.)
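
To make the alleged linkage attack concrete, here is a hypothetical sketch in Python. The datasets, column names, and matching rule are invented for illustration; the point is simply that a “de-identified” record can be re-identified by joining it with a second, named dataset on shared quasi-identifiers such as hospital and admission date.

```python
# Hypothetical linkage attack: a "de-identified" hospital record is
# re-identified by joining it with a named location log on shared
# quasi-identifiers (hospital + admission date). All data is invented.

deidentified_records = [
    {"record_id": "A17", "hospital": "General", "admit_date": "2019-06-03",
     "diagnosis": "fracture"},
    {"record_id": "B42", "hospital": "General", "admit_date": "2019-06-05",
     "diagnosis": "migraine"},
]

# A separate dataset (e.g., phone location history) that carries identities.
location_log = [
    {"name": "Alice Smith", "place": "General", "date": "2019-06-03"},
    {"name": "Bob Jones", "place": "Mercy", "date": "2019-06-03"},
]

def link(records, log):
    """Join on (hospital, date); a unique match re-identifies the record."""
    matches = []
    for rec in records:
        candidates = [p for p in log
                      if p["place"] == rec["hospital"]
                      and p["date"] == rec["admit_date"]]
        if len(candidates) == 1:  # quasi-identifier is unique -> identity leaks
            matches.append((candidates[0]["name"], rec["diagnosis"]))
    return matches

print(link(deidentified_records, location_log))
# -> [('Alice Smith', 'fracture')]
```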

Anonymization is hard, and it’s getting harder.

Segmented social media, meet segmented augmented reality

Users on social media are often in their own universes. Liberals often don’t even see the content that conservatives see, and vice versa.

Imagine if that kind of segmentation extended to augmented reality as well:

Imagine a world that’s filled with invisible graffiti. Open an app, point your phone at a wall, and blank brick or cement becomes a canvas. Create art with digital spraypaint and stencils, and an augmented reality system will permanently store its location and placement, creating the illusion of real street art. If friends or social media followers have the app, they can find your painting on a map and come see it. You might scrawl an in-joke across the door of a friend’s apartment, or paint a gorgeous mural on the side of a local store.

Now imagine a darker world. Members of hate groups gleefully swap pictures of racist tags on civil rights monuments. Students bully each other by spreading vicious rumors on the walls of a target’s house. Small businesses get mobbed beyond capacity when a big influencer posts a sticker on their window. The developers of Mark AR, an app that’s described as “the world’s first augmented reality social platform,” are trying to create the good version of this system. They’re still figuring out how to avoid the bad one.

Is the world ready for virtual graffiti?

I first read China Miéville’s The City & the City many years ago, and I keep thinking about how strange it was then, and how much the ideas have resonated since.

Massive private surveillance networks

Joseph Cox with Motherboard has authored a story on a massive private license plate surveillance network called DRN:

This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN’s database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network.

This Company Built a Private Surveillance Network. We Tracked Someone With It

I wrote recently about private surveillance projects that may meet or exceed government efforts. It won’t be long before these license plate readers become facial recognition scanners. It’s probably happening now.

Real-time counter surveillance tool for Tesla vehicles

An engineer has built a counter-surveillance tool on top of the hardware and software stack for Tesla vehicles:

It uses the existing video feeds created by Tesla’s Sentry Mode features and uses license plate and facial detection to determine if you are being followed.

Scout does all that in real-time and sends you notifications if it sees anything suspicious.

Turn your Tesla into a CIA-like counter-surveillance tool with this hack

A video demonstration is embedded in the article.
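
The underlying idea is simple enough to sketch. The snippet below is hypothetical (not Scout’s actual code) and assumes plate strings have already been extracted from the Sentry Mode feed by some recognition model; it just flags a plate that keeps reappearing over a long enough window.

```python
# Hypothetical "am I being followed?" heuristic in the spirit of the Scout
# project: flag a license plate that reappears in many sightings spread
# over a long time window. Plate strings are assumed to come from an ALPR
# model run on the Sentry Mode video; here they are plain inputs.
from collections import defaultdict

FOLLOW_SIGHTINGS = 4          # how many sightings before we worry
FOLLOW_WINDOW_SECS = 20 * 60  # sightings must span at least 20 minutes

def suspicious_plates(sightings):
    """sightings: iterable of (timestamp_secs, plate) tuples."""
    seen = defaultdict(list)
    for ts, plate in sightings:
        seen[plate].append(ts)
    flagged = []
    for plate, times in seen.items():
        if (len(times) >= FOLLOW_SIGHTINGS
                and max(times) - min(times) >= FOLLOW_WINDOW_SECS):
            flagged.append(plate)
    return flagged

# Example: one plate shows up repeatedly over roughly half an hour.
example = [(0, "ABC123"), (600, "ABC123"), (610, "XYZ999"),
           (1300, "ABC123"), (1900, "ABC123")]
print(suspicious_plates(example))  # -> ['ABC123']
```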

This is a reminder that intelligent surveillance tools are going to be available at massive scale to even private citizens, not just the government. As governments track citizens, will citizens track government actors and individual police officers? What will we do with all of this data?