Data Privacy and the Endowment Effect

The endowment effect is a psychological phenomenon in which people overvalue the things they own compared to equivalent things they do not own. For example, a person given a mug and then offered the chance to trade it for equally valued pens will typically demand roughly twice as much in pens as they would have been willing to pay to acquire the mug in the first place. In other words, a thing that is “yours” has more value than the same thing in the abstract.

Turns out the endowment effect is on steroids in the data privacy realm:

Do consumers value data privacy? How much? In a survey of 2,416 Americans, we find that the median consumer is willing to pay just $5 per month to maintain data privacy (along specified dimensions), but would demand $80 to allow access to personal data. This is a “superendowment effect,” much higher than the 1:2 ratio often found between willingness to pay and willingness to accept.

How Much Is Data Privacy Worth? A Preliminary Investigation

The researchers conclude that “policymakers should give little or no attention” to these economic measures of value because willingness to pay and willingness to accept are “highly unreliable guides to the welfare effects of retaining or giving up data privacy.”

Maybe it’s because people have been told data privacy is a major issue, but they actually haven’t seen any real-world impact on their lives.
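The gap the researchers found is easy to make concrete. A quick back-of-the-envelope calculation (a sketch using the median $5 and $80 figures from the study quoted above):

```python
# Median figures from the survey quoted above (per month)
wtp = 5    # willingness to pay to maintain data privacy
wta = 80   # willingness to accept payment to allow access to personal data

ratio = wta / wtp
print(f"WTA/WTP ratio: {ratio:.0f}:1")  # a 16:1 gap, vs the ~2:1 typically observed
```

A sixteen-fold gap between what people will pay to keep privacy and what they demand to give it up is what makes this a “superendowment effect” rather than the usual one.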

EU Expert Group Favors Banning AI Mass Surveillance and AI Deception

The EU High-Level Expert Group on Artificial Intelligence released its Policy and Investment Recommendations for Trustworthy AI today. The 50-page document is a bit more prescriptive than its previous Ethics Guidelines, and suggests that governments “refrain from disproportionate and mass surveillance of individuals” and “introduce mandatory self-identification of AI systems.” (But see deceptive NYPD chatbots.)

A big chunk of the report also urges the EU to invest in education and subject matter expertise.

So far the discussion around AI mass surveillance has been relatively binary: do it or not. At some point I expect we will see proposals to do mass surveillance while maintaining individual privacy. The security benefits of mass surveillance are too attractive to forego.

Privacy vs Health

David Brooks:

In his book “Deep Medicine,” which is about how A.I. is changing medicine across all fields, Eric Topol describes a study in which a learning algorithm was given medical records to predict who was likely to attempt suicide. It accurately predicted attempts nearly 80 percent of the time. By incorporating data of real-world interactions such as laughter and anger, an algorithm in a similar study was able to reach 93 percent accuracy.

[. . . . .]

Medicine is hard because, as A.I. is teaching us, we’re much more different from one another than we thought. There is no single diet approach that is best for all people because we all process food in our own distinct way. Diet, like other treatments, has to be customized. 

You can be freaked out by the privacy-invading power of A.I. to know you, but only A.I. can gather the data necessary to do this.

How Artificial Intelligence Can Save Your Life

Rephrasing a sentence from an earlier post, health is halfway around the block before privacy can get its shoes on.

The ACLU has some AI surveillance recommendations

Niraj Chokshi, writing for the New York Times:

To prevent the worst outcomes, the A.C.L.U. offered a range of recommendations governing the use of video analytics in the public and private sectors.

No governmental entity should be allowed to deploy video analytics without legislative approval, public notification and a review of a system’s effects on civil rights, it said. Individuals should know what kind of information is recorded and analyzed, have access to data collected about them, and have a way to challenge or correct inaccuracies, too.

To prevent abuses, video analytics should not be used to collect identifiable information en masse or merely for seeking out “suspicious” behavior, the A.C.L.U. said. Data collected should also be handled with care and systems should make decisions transparently and in ways that don’t carry legal implications for those tracked, the group said.

Businesses should be governed by similar guidelines and should be transparent in how they use video analytics, the group said. Regulations governing them should balance constitutional protections, including the rights to privacy and free expression.

How Surveillance Cameras Could Be Weaponized With A.I.

These recommendations appear to boil down to transparency and not tracking everyone all the time without a specific reason. Seems reasonable as a starting point.

The Complexity of Ubiquitous Surveillance

Bruce Schneier advocates for a technological “pause” to allow policy to catch up:

[U]biquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives­ — when the surveillance system gets it wrong­ — will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change.

Computers and Video Surveillance

On the other hand, isn’t it a good thing we can spot police officers with militant, racist views?

US Customs leaks meta-data in leak announcement

US Customs revealed the name of a hacked subcontractor (presumably accidentally) in the title of a Word document:

A contractor for US Customs and Border Protection has been breached, leaking photos and other sensitive data, the agency announced on Monday. Initially described as “traveler photos,” many of the images seem to be pictures of traveler license plates, likely taken from cars at an automotive port of entry.

Customs has not named the contractor involved in the breach, but a Washington Post article noted that the announcement included a Word document with the name Perceptics, a provider of automated license plate readers used at a number of southern ports of entry.

License plate photos compromised after Customs contractor breach

So they can’t secure the data. And they can’t secure the data announcing the failure to secure the data.

If you think the government can actually secure your data (or even their own hacking tools), you are fooling yourself.

Update: Customs and Border Protection has suspended all contracts with this supplier as a result of the breach.

Value of Targeted Advertising

It’s complex research, but it’s worth asking how much all this fuss about privacy is really worth:

How much value do online publishers derive from behaviorally targeted advertising that uses privacy-hostile tracking technologies to determine which advert to show a website user?

A new piece of research suggests publishers make just 4% more vs if they were to serve a non-targeted ad.

Targeted ads offer little extra value for online publishers, study suggests

Privacy Dimensions

Jim Harper, a visiting fellow at the American Enterprise Institute, has written a post on the many dimensions of privacy. It’s a good list:

Fairness. This is at stake when corporations or governments use personal information in decision-making. If the data they use are inaccurate, or if the decision rule is inapposite to the circumstances, they may mistreat individuals in important ways. It’s a serious problem — or set of problems — distinct from control of information per se.

Personal security. Certain types of information, such as home and work address, phone number, and the like, can be used to harass and, in the worst case, facilitate violence and murder. There are some similarities to privacy protection, but controlling this information aims at protecting a specific interest in personal security.

Seclusion. Being able to navigate the world unmolested is another distinct interest often referred to as “privacy.” Telephone calls at home during the dinner hour were an example of this type of intrusion quelled by one of the Federal Trade Commission’s most popular regulatory efforts: the “Do Not Call” list. Notably, the offense here is not release or threatened release of information, but disrupted quietude or peace that is often founded not in knowledge, but in ignorance of the victim and his or her interests.

Autonomy. “Privacy” has also been used to denote authority to make one’s own decisions about intimate matters such as sexuality and procreation. Here again, the essence is not control of information — the classic sense of “privacy” — but control of one’s actual bodily self.

Anti-objectification. Another strain of “privacy” is the distaste for being objectified in the commercial realm. Marketers can be truly obnoxious, and resistance to their constant importuning of people is a part of the “privacy” discussion. This interest is favored — no surprise — by people who are not fond of commerce and markets.

Privacy and other values that go by that name

Harper is skeptical that any legislation can adequately deal with every dimension: “Omnibus legislation trying to protect many different interests and values all at once may advance none of them, if only because of the diffusion of focus.” And that’s not a bad diagnosis for the GDPR.

We can certainly try to rank these in terms of priority. Here’s mine:

  1. Autonomy. This is simply freedom, and it should be a core value that we protect. Any erosion of privacy that threatens individual self-determination should be a priority.
  2. Fairness. The use of personal information to impact us unfairly is often hidden, and I would make this a top priority as well.
  3. Personal security. This is obviously an important issue, but existing mechanisms appear adequate for now.
  4. Seclusion. Or obscurity, which I have previously discussed. For many this is core to living a happy life. But also perhaps generational / cultural.
  5. Anti-objectification. Distasteful certainly, but less of a priority in my view. Seems to make the most noise though. Sign of the times.

France bans some litigation analytics

In what appears to be a breathtaking overreaction to a privacy concern, France has banned statistical reporting about individual judges’ decisions:

The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.

A key passage of the new law states:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’ 

France Bans Judge Analytics, 5 Years In Prison For Rule Breakers

This raises many issues: free speech, transparency, and just plain old protectionism.

Cameras Inside the New York Times

Eric Nagourney, in an opinion piece for the New York Times:

In announcing the cameras, the company said they were being installed in part to “shorten incident response time.”

Ah. That. The thing too scary to talk about in anything but a whisper.

We may not all have experienced moments where survival depended on shortening “response time.” But the idea has become an all-too-familiar part of our work. The concert in Las Vegas. The mosques in Christchurch. The cafes and nightclubs in Paris. The school in Parkland. The temple in Pittsburgh. And yes, that newsroom in Annapolis.

The Day the Cameras Came

There is really no contest between Safety and Privacy. Safety is halfway around the block before Privacy can get its shoes on.