Freedom vs. Security continued…

Kashmir Hill for the NYT:

Floyd Abrams, one of the most prominent First Amendment lawyers in the country, has a new client: the facial recognition company Clearview AI.

Litigation against the start-up “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century,” Mr. Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court.

Facial Recognition Start-Up Mounts a First Amendment Defense

Is everything known to the public truly available for any use whatsoever? We are trending away from that view, and this will be a battle to watch closely.

Social media posts as crime-fighting tools

James Vincent, reporting for The Verge:

As reported by The Philadelphia Inquirer, at the start of their investigation, FBI agents only had access to helicopter footage from a local news station. This showed a woman wearing a bandana throwing flaming debris into the smashed window of a police sedan.

By searching for videos of the protests uploaded to Instagram and Vimeo, the agents were able to find additional footage of the incident, and spotted a peace sign tattoo on the woman’s right forearm. After finding a set of 500 pictures of the protests shared by an amateur photographer, they were able to clearly see what the woman was wearing, including a T-shirt with the slogan: “Keep the Immigrants. Deport the Racists.”

The only place to buy this exact T-shirt was an Etsy store, where a user calling themselves “alleycatlore” had left a five-star review for the seller just a few days before the protest. Using Google to search for this username, agents then found a matching profile at the online fashion marketplace Poshmark which listed the user’s name as “Lore-elisabeth.”

A search for “Lore-elisabeth” led to a LinkedIn profile for one Lore Elisabeth Blumenthal, employed as a massage therapist at a Philadelphia massage studio. Videos hosted by the studio showed an individual with the same distinctive peace tattoo on their arm. A phone number listed for Blumenthal led to an address. As reported by NBC Philadelphia, a subpoena served to the Etsy seller showed a “Keep the Immigrants. Deport the Racists.” T-shirt had recently been delivered to that same address.

FBI used Instagram, an Etsy review, and LinkedIn to identify a protestor accused of arson

Constant aerial surveillance, coming to an American city

In 2015, Radiolab ran a fascinating story about a repurposed military project that put a drone in the sky all day long to film an entire city in high resolution. This allows the operators to rewind the tape and track anyone moving, forward or backward, anywhere within the city. It’s an amazing tool for fighting crime. And it’s a remarkable privacy intrusion.

The question was, would Americans be ok with this? I figured it was just a matter of time. Maybe another DC sniper would create the push for it.

Five years later, Baltimore is the first off the sidelines, and the ACLU is suing to stop them:

The American Civil Liberties Union has sued to stop Baltimore police from launching a sweeping “eye in the sky” surveillance program. The initiative, operated by a company called Persistent Surveillance Systems (PSS), would send planes flying over Baltimore at least 40 hours a week as they almost continuously collect wide-angle photos of the city. If not blocked, a pilot program is expected to begin later this year.

Lawsuit fights new Baltimore aerial surveillance program

Privacy vs. the Coronavirus

Everywhere, and all at once, rules around privacy are being relaxed in the face of urgent public health concerns:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

As Coronavirus Surveillance Escalates, Personal Privacy Plummets

Meanwhile, global data privacy regulators are confident that “data protection requirements will not stop the critical sharing of information to support efforts to tackle this global pandemic.”

In the hierarchy of human needs, security always has and always will come first.

Privacy Optimism

Ben Garfinkel, a research fellow at Oxford University, writes about the difference between social privacy (what your intimates and acquaintances know about you) and institutional privacy (what governments and corporations know about you):

How about the net effect of these two trends? Have the past couple hundred years of change, overall, constituted decline or progress?

. . . . .

My personal guess is that, for most people in most places, the past couple hundred years of changes in individual privacy have mainly constituted progress. I think that most people would not sacrifice their social privacy for the sake of greater institutional privacy. I think this is especially true in countries like the US, where there are both high levels of development and comparatively strong constraints on institutional behavior. I think that if we focus on just the past thirty years, which have seen the rise of the internet, the situation is somewhat more ambiguous. But I’m at least tentatively inclined to think that most people have experienced an overall gain.

The Case for Privacy Optimism

And overall he concludes that he is optimistic about privacy trends, particularly because of artificial intelligence:

The existence of MPC [Multi-Party Computation] protocols implies that, in principle, training an AI system does not require collecting or in any way accessing the data used to train it. Likewise, in principle, applying a trained AI system to an input does not require access to this input or even to the system’s output.

The implication, then, is this: Insofar as an institution can automate the tasks that its members perform by training AI systems to perform them instead, and insofar as the institution can carry out the relevant computations using MPC, then in the limit the institution does not need to collect any information about the people it serves.

This view, which of course assumes quite a bit of technology, is both plausible and consistent with the views of a number of other researchers who see AI technology as a potential improvement on our ability to manage human bias and privacy intrusions.

I also tend to believe the glass is half full. That’s my own bias.
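
Garfinkel’s claim sounds almost paradoxical, so it’s worth seeing the basic building block. In additive secret sharing, one of the simplest MPC primitives, a private value is split into random shares that are individually meaningless but recombine to the original, and parties can compute on the shares without ever seeing the inputs. Here is a toy Python sketch (my illustration, not Garfinkel’s; real MPC protocols, especially the multiplications inside neural networks, involve far more machinery):

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic happens modulo this prime

def share(value, n_parties):
    """Split value into n additive shares. Any n-1 shares together
    look like random noise; only all n combined reveal the value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Toy scenario: two data holders each split a private number among
# three compute parties. Each party locally adds the shares it holds,
# so the parties jointly compute the sum without any of them ever
# seeing either original input.
a_shares = share(1200, 3)
b_shares = share(3400, 3)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 4600
```

Getting from private sums to private model training is a long road, but this is the kernel of the idea: the data holders never hand anyone their raw inputs.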

NYC issues report on use of algorithms

New York City convened a task force in 2017 to “develop recommendations that will provide a framework for the use of and policy around ADS [automated decision systems].” The report is now out and was immediately criticized:

“It’s a waste, really,” says Meredith Whittaker, co-founder of the AI Now Institute and a member of the task force. “This is a sad precedent.” . . .

Ultimately, she says, the report, penned by city officials, “reflects the city’s view and disappointingly leaves out a lot of the dissenting views of task force members.” Members of the task force were given presentations on automated systems that Whittaker says “felt more like pitches or endorsements.” Efforts to make specific policy changes, like developing informational cards on algorithms, were scrapped, she says.

NYC’s algorithm task force was ‘a waste,’ member says

The report itself makes three fairly pointless recommendations: (1) build capacity for an equitable, effective, and responsible approach to the City’s ADS; (2) broaden public discussion on ADS; and (3) formalize ADS management functions.

Someone should really start thinking about this!

The report’s summary contains an acknowledgement that “we did not reach consensus on every potentially relevant issue . . . .”

Anonymizing data is hard

Google tried to anonymize health care data and failed:

On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.

Google almost made 100,000 chest X-rays public — until it realized personal data could be exposed

This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.

That’s how the system is supposed to work.
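
To make the failure mode concrete: scrubbing a medical image’s header metadata is straightforward, but it does nothing about identifying information baked into the pixels. A hypothetical sketch (assuming DICOM files and the pydicom library, neither of which the article specifies):

```python
# Sketch of metadata-only scrubbing, and why it falls short.
import pydicom

def scrub_metadata(path):
    ds = pydicom.dcmread(path)
    # Blank the obvious identifying tags in the file header.
    for keyword in ("PatientName", "PatientID", "PatientBirthDate", "StudyDate"):
        if keyword in ds:
            setattr(ds, keyword, "")
    ds.remove_private_tags()
    # ds.pixel_array is untouched: a date stamp burned into the film,
    # or a patient's distinctive necklace, survives this "anonymization"
    # intact. Catching those requires inspecting the image itself.
    return ds
```

Whether the offending dates in Google’s case lived in the headers or in the images isn’t specified, but the jewelry could only have been caught by looking at the pixels themselves, which is presumably how the NIH researchers spotted it.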

“Face surveillance” vs. “face identification”

Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise on facial recognition technology:

We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.

On the other hand, we should allow “face identification” — again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition

They propose four requirements for allowing facial IDs in law enforcement:

  1. facial IDs should be proven effective across gender and race;
  2. facial IDs should be restricted only to serious crimes;
  3. facial IDs should not be limited to criminal databases, inclusion in which may have been influenced by racist policing policies; and
  4. judicial warrants should be required.

But unless we also ban face surveillance by private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.