Done with Facebook

I’m done. John Gruber links to yet another story of Facebook’s fundamental inability to govern itself:

On the surface, Facebook prompting people to enable 2FA was a good thing — if you have 2FA enabled it’s much harder for someone who isn’t you to log in to your account. But this being Facebook, they’re not just going to do something that is only good for the user, are they?

Last year it came to light that Facebook was using, for targeted advertising, the phone numbers people submitted solely to protect their accounts with 2FA. And now, as security researcher and New York Times columnist Zeynep Tufekci pointed out, Facebook is allowing anyone to look up a user by their phone number, the same phone number that was supposed to be for security purposes only.
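To be concrete about what the protocol actually needs the phone number for: in SMS-based 2FA, the number is nothing more than a delivery address for a short-lived one-time code. Here’s a minimal server-side sketch (the six-digit format and five-minute TTL are generic assumptions, not Facebook’s actual implementation):

```python
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid (assumed)

def issue_sms_code():
    """Server side of SMS 2FA: mint a one-time code and its expiry.

    The code would then be texted to the user's phone number; the
    number's only role in the protocol is as a delivery address.
    """
    code = f"{secrets.randbelow(10**6):06d}"  # e.g. "042913"
    return code, time.time() + CODE_TTL

def verify(submitted, code, expires_at, now=None):
    """Accept the login only if the code matches and hasn't expired."""
    now = time.time() if now is None else now
    return now <= expires_at and secrets.compare_digest(submitted, code)
```

Nothing in this flow requires the number to be searchable or fed into an ad system; it only needs to receive one text.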


I’m offended as a data privacy lawyer and a cybersecurity professional and a user. I’m just done with Facebook.

China Baby Bust

The latest evidence that populations will be trending downward in the future:

Fewer babies were born in China last year than in 2017, and already fewer had been born in 2017 than in 2016. There were 15.23 million new births in 2018, down by more than 11 percent from the year before. The authorities had predicted that easing and then abolishing the one-child policy in the mid-2010s would trigger a baby boom; it’s been more like a baby bust.

China Isn’t Having Enough Babies

Growth is going to get a lot harder.

Lawyer conversations to memorize

This is pure legal gold by Matt Levine imagining a conversation between Elon Musk and his designated “Mystery Twitter Sitter”:

Musk: If I tweet “Thursday 2 pm,” is that the sort of thing that “contain[s], or reasonably could contain, information material to Tesla or its stockholders”? No, right?

Mystery Twitter Sitter: Why do you torture me.

Musk: And surely tweeting “California” is fine? Nothing material or misleading about “California.”

MTS: I was an editor of my law review you know.

Musk: And then “Some Tesla news”?

MTS: What have I ever done to—wait, yes, obviously “some Tesla news” could be material to Tesla and its stockholders.

Musk: But there’s nothing misleading in the tweet, so you can pre-approve it, right?

MTS: Well I don’t know, depends on what the Tesla news is. What is it?

Musk: You’ll find out Thursday, lol.

MTS: What if, instead of doing this, you just went to bed.

Musk: Show me what in these three tweets is illegal or misleading.

MTS: Look, as your lawyer, I am telling you that this seems like a bad idea, and you should at least wait until after your contempt-of-court hearing to do anything that might look like a violation of your settlement with the Securities and Exchange Commission.

Musk: Nope! I am a smart guy and I like to get into the details of every aspect of my business. I second-guess expert engineers all the time, and often it works out for me; I’m not going to do whatever you tell me just because you are a lawyer. I think these tweets are fine. If you don’t, you have to explain to me, specifically, how they violate the settlement.

MTS: The world is not as black and white as many tech founders wish it was, and the legal system is not just a list of unambiguous written rules applied in a mechanical fashion. Whether you like it or not, regulators and courts operate in large areas of discretion; they have lots of ways to make life more difficult for you and for the company that you manage as a fiduciary for others, and they are used to being treated with a certain amount of deference by the people they regulate. Here they have you dead to rights on a technicality—you didn’t get your “500,000 cars” tweet pre-approved, as you promised you would, and it had to be corrected—and how they respond to that will depend on your overall attitude and behavior. My job as a lawyer is not just to look up rules and show them to you; it is to make predictions, grounded in research but also in experience and a certain professional connoisseurship, about how officials will react to particular fact patterns, and to advise you on the wisest course of action in shaping their reaction. (This is sometimes called “legal realism.”) My expert advice to you is that the benefit to you, and to your company and its workers and shareholders, of sending out these inscrutable late-night tweets is very low, while the risk of further antagonizing the SEC and the courts seems pretty high. I am not giving you a formal legal opinion that it is illegal for you to tweet “California.” I am just telling you that it’s dumb.

Musk: Well are they going to put me in jail for sending these tweets?

MTS: I mean, probably not, no.

Musk: I have a big drill you know.

MTS: I know.

Musk: That’s a kind of legal realism too.

Who Can Say What California Means?

Lawyers working in any complex, regulated space have a version of this conversation every week.

Everything Will Be Climate Change

David Wallace-Wells wrote a book about climate change worst-case scenarios that’s been getting a lot of buzz. In this interview he discusses the book and the notion that science fiction writers sometimes fail to get future predictions right, but more frequently nail the mood.

Well, here’s the mood prediction for climate change:

. . . I think that the 21st century will be dominated by climate change in the same way that, say, the end of the 20th century was dominated by financial capitalism, or the 19th century in the West was dominated by modernity or industry—that this will be the meta-narrative of the coming decades, and there won’t be an area of human life that is untouched by it. Often people talk about climate change as a global problem, which it obviously is, but I don’t think we’ve really started to think about what that means all the way down to the level of individual life.

My basic perspective is that everything about human life on this planet will be transformed by this force. Even if we end up at a kind of best-case outcome, I think the world will be dominated by these forces in the coming decades in ways that it’s hard to imagine and we really haven’t started to think hard enough about.

The 3 Big Things That People Misunderstand About Climate Change

Plausible and unsettling.

The perils of learning English… when you’re not actually learning English

This Economist article is not about the perils of learning English. It’s about the perils of learning bad English. Which… duh?

Teaching children in English is fine if that is what they speak at home and their parents are fluent in it. But that is not the case in most public and low-cost private schools. Children are taught in a language they don’t understand by teachers whose English is poor. The children learn neither English nor anything else.

The perils of learning in English

Execution is everything.

Daniel Dennett on the desirability of general AI

He sees only risks and no rewards from generalized (i.e. conscious) artificial intelligence:

WE DON’T NEED artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights and should not have feelings that could be hurt or be able to respond with resentment to “abuses” rained on them by inept users.

. . . . .

So what we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods.

It will be hard enough learning to live with them without distracting ourselves with fantasies about the Singularity in which these AIs will enslave us, literally. The human use of human beings will soon be changed—once again—forever, but we can take the tiller and steer between some of the hazards if we take responsibility for our trajectory.

Will AI Achieve Consciousness? Wrong Question

Even if generalized AI is not the explicit goal, it may be the natural consequence of building devices that can fend for themselves without human intervention (in, for example, interstellar space). After all, it seems likely that human generalized intelligence evolved only as a necessary by-product of human survival needs, not as a specific goal.

Avoiding the creation of generalized AI (even if we wanted to) may be more difficult than simply deciding against it. And that’s the concern.

State Regulation of AI Technology

With the federal government seemingly unwilling or unable to regulate cybersecurity, data privacy, and artificial intelligence, the states are increasingly active, particularly on face recognition technology.

A lot of this activity is around forming task forces, but a fair amount also addresses algorithmic impact:

Legislation referring specifically to “artificial intelligence” is currently pending in at least 13 states, according to LexisNexis State Net’s legislative tracking system. Several of the bills provide for the creation of AI study commissions or task forces, while a few deal with education or education funding.

Only four states are considering bills addressing facial recognition camera technology, including Washington, which is considering measures (HB 1654 and SB 5528) concerning the use of such technology by government entities. But at least 27 states are considering bills dealing with the subject of data collection or “data privacy” specifically.

And although there isn’t any pending legislation referencing an “algorithmic impact assessment,” there are bills in 17 states that mention “algorithm.” They include measures dealing with the use of algorithms to censor offensive, political or religious speech on social media (Arkansas HB 1028, Iowa HB 317, Kansas H 2322, and Oklahoma SB 533); calculate insurance scores (Michigan SB 88, Missouri HB 647, Oregon HB 2703 and Virginia HB 2230); and gauge the risk of coronary heart disease (South Carolina HB 3598 and SB 368).

States May Take The Lead On Regulating AI (paywall)

How will licensing IP in the autonomous vehicle space be different?

Autonomous vehicle IP licensing will have all the problems we saw with smartphone IP licensing, but on steroids. Here’s a short list:

  • Damages base is bigger. The value of the end product is bigger (cars vs. smartphones), so royalty demands are bigger. There will be a correspondingly wider range of “acceptable” royalty amounts and FRAND offers.
  • Injunctions are more dramatic. And lock-in is more severe. You can’t roll out a software update on a week’s notice, due to the safety and regulatory issues alone. You can’t substitute a new chip at the factory as soon as you’re convinced it’s good. Also, is Germany seriously willing to enjoin the sale of a car because some random chip inside it infringes? This is going to put a lot of pressure on proportionality in legal systems.
  • Whole new issue: safety! Safety issues will dominate (perhaps out of proportion to the actual risks), and the safety issues will supercharge everything from mandatory licensing to pricing to cybersecurity.
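The damages-base point is stark even with made-up numbers. All the figures below are purely illustrative (real FRAND rates are heavily contested), and the fight over whether the base is the whole vehicle or the “smallest salable unit” widens the range further:

```python
def per_unit_royalty(base_price, rate):
    """Royalty owed per unit: a percentage rate applied to some damages base."""
    return base_price * rate

RATE = 0.01  # hypothetical 1% rate

smartphone = per_unit_royalty(500.0, RATE)     # $5.00 on a $500 phone
whole_car  = per_unit_royalty(35_000.0, RATE)  # $350.00 if the base is the vehicle
chip_only  = per_unit_royalty(20.0, RATE)      # $0.20 if the base is the infringing chip
```

Same patent, same rate, and the per-unit demand spans three orders of magnitude depending on what you call the base.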

Auto manufacturers have so far avoided massive IP battles, and have insisted that their suppliers take care of IP and indemnity issues. This has not been done in the smartphone space. Which model will prevail?

Theory vs Practice in Keyboard Layouts

So-called “Dvorak” keyboards replace the standard QWERTY key layout with a more thoughtful organization, putting vowels and consonants in better locations to speed typing. Here’s the Dvorak layout:

Perhaps the theory doesn’t work as well as anticipated. Jon Porter spent ten years using a Dvorak keyboard and is meh. It did force him to learn to touch type though:

Eventually, yes, it made me a faster typist, but not for the reasons that I hoped it would. Dvorak made me faster almost entirely because it forced me to learn to touch type. For years I’d tried to do the same using a QWERTY layout, but when my old hunt-and-peck method was so easy to revert to I’d inevitably give up on touch typing when I needed to write something quickly. Dvorak was different. It forced me to learn to type properly, and eventually I did. 

But outside of the advantages of learning to touch type, switching to Dvorak has brought some other benefits along with it. For one thing, my laptop is now a lot more secure. You can watch me typing in my password, but the mismatch of key labels and layout will confound you. Even if you knew the password, you’d have to translate the key positions from QWERTY to Dvorak to type it in. Then, if I’m ever stupid enough to leave myself logged in, it becomes a lot harder to do anything with my machine for anyone who’s not me. Mouse clicking only gets you so far.


I suppose the security benefits are… real? I dunno.
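The shoulder-surfing claim is at least easy to make concrete. An observer watching the QWERTY labels being pressed recovers gibberish, because the machine maps each physical key to a different character (letters and main punctuation only here; a simplification that ignores uppercase):

```python
# Physical keys by their QWERTY engraving, and what a Dvorak layout
# actually emits for each one.
QWERTY = "qwertyuiop[]asdfghjkl;'zxcvbnm,./-="
DVORAK = "',.pyfgcrl/=aoeuidhtns-;qjkxbmwvz[]"

PRESSED_TO_TYPED = str.maketrans(QWERTY, DVORAK)

def typed_on_dvorak(keys_watched: str) -> str:
    """What actually gets typed when an observer saw these QWERTY labels pressed."""
    return keys_watched.translate(PRESSED_TO_TYPED)

print(typed_on_dvorak("hello"))  # prints "d.nnr"
```

So a watched password is useless unless the attacker also does the layout translation in their head, which is the whole of the “security benefit.”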

But it’s interesting that the theory doesn’t pan out. In theory, there’s no difference between theory and practice. But in practice…