Ode to Obscurity

The recording of almost everywhere we go and everything we do has become increasingly cheap and easy. “Obscurity” is becoming rarer. Dr. Hartzog (law, computer science) and Dr. Selinger (philosophy) make the point that lack of obscurity may limit our growth as individuals:

Obscurity makes meaningful and intimate relationships possible, ones that offer solidarity, loyalty and love. It allows us to choose with whom we want to share different kinds of information. It protects us from having everyone know the different roles we play in the different parts of our lives. We need to be able to play one role with our co-workers while revealing other parts of ourselves with friends and family. . . .

Obscurity protects us from being pressured to be conventional. This buffer from a ubiquitous permanent record is especially important for the exploratory phase of youth. To develop as humans, people must be free to try things they might later regret. This is how we become better people. Without obscurity, we wouldn’t have the freedom to take risks, fail and dust ourselves off. We’d be stymied by the fear of being deemed failures by a system that never forgets.

Why You Are No Longer Safe in the Crowd

Is obscurity different from privacy? Or perhaps it is another name for a privacy concept that has a million shades of gray. Privacy is weird.

Summary of the DETOUR Act

On April 9, 2019, Senators Warner (D-VA) and Fischer (R-NE) introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act. The bill would prohibit user interface “dark patterns” around user consent, which trick or nudge (depending on your perspective) users into consenting to things that may not be in their best interests.

There is a whole website devoted to dark patterns, and it is pretty informative. One example it catalogs is a sign-up flow designed to make it hard to figure out how not to sign up for a service.

The DETOUR Act gives the FTC power to regulate user interfaces that have “the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.” It also prohibits the segmentation of users into groups for behavioral experimentation without informed consent.

The Act would apply only to “large online operators,” defined as having more than 100M authenticated users in any 30-day period. (Small online operators can still trick people?) Large online operators would also have to disclose their experiments every 90 days and establish Independent Review Boards to oversee any such user research.

Summary of the proposed Algorithmic Accountability Act of 2019

Senators Wyden (D-OR) and Booker (D-NJ) have proposed a Senate bill that would require big businesses and data brokers to conduct “impact assessments” for (1) “high-risk automated decision systems”; and (2) “high-risk information systems”.

The bill essentially gives the FTC power to promulgate regulations requiring companies with a lot of personal data to conduct studies of how their use of that data impacts people. Think of it as the equivalent of an environmental impact study for big data, or the US equivalent of GDPR’s Data Protection Impact Assessment process. In fact, it is very similar to the GDPR requirement.

Here’s a summary of the key terms:

Covered entities. The bill would apply to anyone that (a) receives more than $50M in revenue over the preceding three-year period; (b) possesses personal information on more than 1M consumers or consumer devices; or (c) is a “data broker,” defined as possessing personal information on individuals that are not customers or employees as a substantial part of business.
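The covered-entity test is a simple three-prong disjunction: meeting any one prong is enough. A minimal sketch of that test, using the thresholds from the summary above (the function and parameter names are illustrative, not from the bill text):

```python
# Hypothetical sketch of the Algorithmic Accountability Act's
# covered-entity test as summarized above. Names are illustrative.

def is_covered_entity(
    revenue_3yr_usd: float,
    consumers_with_data: int,
    is_data_broker: bool,
) -> bool:
    """Return True if any one prong of the (a)/(b)/(c) test is met."""
    return (
        revenue_3yr_usd > 50_000_000        # (a) >$50M revenue, trailing 3 years
        or consumers_with_data > 1_000_000  # (b) data on >1M consumers/devices
        or is_data_broker                   # (c) data broker
    )

# A small company holding data on 2M devices is still covered under prong (b):
print(is_covered_entity(10_000_000, 2_000_000, False))  # True
```

Note the prongs are independent: a low-revenue data broker is covered under (c) alone, which is exactly why the definition sweeps in far more than just the tech giants.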

Definition of personal information. Broadly defined as any information “reasonably linkable to a specific consumer or consumer device.”

Impact assessments. At a minimum, requires a description of the system, design, training process, data, purpose, relative benefits and costs, data minimization practices, retention policies, access to data by consumers, ability of consumers to correct or object to the data, sharing of data, risks of inaccurate, biased, unfair, or discriminatory decisions, and safeguards to minimize risks.

Systems that must be evaluated. Any system that “poses a significant risk” to the privacy and security of personal information or results in inaccurate, unfair, biased, or discriminatory decisions, especially if the system alters legal rights or profiles “sensitive aspects” of consumer lives such as protected class, criminal convictions, work performance, economic situation, health, personal preferences, interests, behavior, location, etc.

Enforcement. Enforced by the FTC or the Attorney General of any State upon notice to the FTC.

What is privacy?

Kate Klonick in an opinion piece for the NYT:

The instructions were straightforward: At some point in the next two weeks, try to determine a stranger’s identity, in a public place, using only Google search on your phone, based on things the stranger said loudly enough for lots of others to hear and things that are displayed on the person’s clothing or bags, like logos or a monogram.

Is Your Seatmate Googling You?

Most privacy is really obscurity. It’s just that we mostly don’t care, until we do.

What is private information? Is it that which literally no one else knows about you? Almost certainly not. Information that just your spouse or immediate family know? Probably broader than that. Information that isn’t Google-able? Maybe.

Privacy is more likely just the sense that there is information about yourself that you would like to limit distribution of in some way. And this is a core tension: privacy is different for every single person, and yet it is a burden to constantly tweak privacy controls to match.

Privacy is complicated and no one wants to deal

From the RSA security conference this year comes a new survey on consumer attitudes towards privacy. And it’s not really surprising:

According to survey findings from research released at the RSA Conference 2019 on Tuesday, data privacy is a top concern for most people; out of 4,000 participants queried from January 14 to February 15, a vast majority (96 percent) said that they care about their privacy (including most Millennials at more than 93 percent); and 93 percent said they use security software.

RSAC 2019: Most Consumers Say ‘No’ to Cumbersome Data Privacy Practices

So they asked people if they cared about privacy and got 96% agreement. Yep, almost all of us really like privacy.

But of course there’s more:

However, users did not follow through with some of the more difficult and cumbersome best practices for data privacy. For instance, only 32 percent said that they read privacy policies and End User License Agreements (EULAs) (and 66 percent say they simply skim through or do not read EULA or other consent forms at all).

No surprise either. And of course a lot of people reuse passwords and don’t bother to check permissions. Because that stuff is a hassle.

I say again: the fundamental premise of GDPR and most other privacy legislation (including California’s upcoming Consumer Privacy Act) is wrong. Legislators think that if you only tell users how their data is being collected and used, then they will strike the right privacy balance for themselves. But apart from a few privacy activists, they won’t.

Better to start carving off the low-hanging privacy fruit. While we’re all concerned that companies might not be telling us exactly how they are using our data, some companies are out in the open (or at least in the EULA) selling our location data to the grey market. How’s that for a simple rule? Don’t sell user location data. Start there.

Done with Facebook

I’m done. John Gruber links to yet another story of Facebook’s fundamental inability to govern itself:

On the surface, Facebook prompting people to enable 2FA was a good thing — if you have 2FA enabled it’s much harder for someone who isn’t you to log in to your account. But this being Facebook, they’re not just going to do something that is only good for the user, are they?

Last year it came to light that Facebook was using the phone numbers people submitted to the company solely so they could protect their accounts with 2FA for targeted advertising. And now, as security researcher and New York Times columnist Zeynep Tufekci pointed out, Facebook is allowing anyone to look up a user by their phone number, the same phone number that was supposed to be for security purposes only.

I’m offended as a data privacy lawyer and a cybersecurity professional and a user. I’m just done with Facebook.

Taking a Facebook Break

What happens if you pick some people and boot them off Facebook for a month?

Those booted off enjoyed an additional hour of free time on average. They tended not to redistribute their liberated minutes to other websites and social networks, but chose instead to watch more television and spend time with friends and family. They consumed much less news, and were thus less aware of events but also less polarised in their views about them than those still on the network. Leaving Facebook boosted self-reported happiness and reduced feelings of depression and anxiety.

It also helped some to break the Facebook habit. Several weeks after the deactivation period, those who had been off Facebook spent 23% less time on it than those who had never left, and 5% of the forced leavers had yet to turn their accounts back on. And the amount of money subjects were willing to accept to shut their accounts for another four weeks was 13% lower after the month off than it had been before. Users, in other words, overestimate how much they value the service: a misperception corrected by a month of abstention. . . .

What would happen if Facebook was turned off? (emphasis added)

No surprise, but illustrates the power of Facebook’s algorithmic magic: just one more video, just one more update. Makes you forget there are other things you enjoy more.

Who Balances Privacy?

It’s unclear why the New York Times’ Editorial Board thinks the EU’s GDPR is “leading the way” if they are concerned about the “legal fiction of consent”:

The average person would have to spend 76 working days reading all of the digital privacy policies they agree to in the span of a year. Reading Amazon’s terms and conditions alone out loud takes approximately nine hours.

Why would anyone read the terms of service when they don’t feel as though they have a choice in the first place? It’s not as though a user can call up Mark Zuckerberg and negotiate his or her own privacy policy. The “I agree” button should have long ago been renamed “Meh, whatever.”

How Silicon Valley Puts the ‘Con’ in Consent
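The 76-working-days figure is just arithmetic over how many policies you encounter, how long they are, and how fast you read. A minimal sketch, with illustrative inputs chosen by me rather than taken from the NYT’s underlying data; even conservative assumptions yield weeks of full-time reading:

```python
# Back-of-the-envelope estimate of annual privacy-policy reading time.
# All input numbers below are illustrative assumptions, not the NYT's figures.

def reading_workdays(policies_per_year: int, words_per_policy: int,
                     words_per_minute: int, hours_per_workday: int = 8) -> float:
    """Working days needed to read every policy encountered in a year."""
    minutes = policies_per_year * words_per_policy / words_per_minute
    return minutes / 60 / hours_per_workday

# Say ~1,500 policies a year at ~2,500 words each, read at 250 words/minute:
print(reading_workdays(1500, 2500, 250))  # 31.25 working days
```

The exact output swings with the assumptions, which is the point: no plausible combination of inputs produces a burden that real users will actually bear.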

GDPR does not fix this problem. The whole premise of GDPR is that consumers could balance their privacy concerns if only they understood and controlled their own data: “Natural persons should have control of their own personal data.”

But look, data is really complicated. That’s sort of the entire point of big data. Meanwhile the public seems outraged on the one hand that there is not enough disclosure (“users are not able to fully understand the extent of the processing operations carried out by Google”), and on the other hand that there is too much disclosure (“we discovered that privacy policies have greatly increased in length over the past 10 years”).

Here’s the bottom line: no one wants to read privacy policies. And unless managing privacy becomes comically simple, users will not balance their own privacy interests. Solve for the equilibrium.