Bruce Schneier:
People are leaking classified military information on discussion boards for the video game War Thunder to win arguments—repeatedly.
Leaking Military Secrets on Gaming Discussion Boards
Reminds me of this classic XKCD:

That’s a fair headline for the story that has ultimately emerged about the Boeing 737-MAX crashes.
The Verge has a good overview:
But Boeing’s software shortcut had a serious problem. Under certain circumstances, it activated erroneously, sending the airplane into an infinite loop of nose-dives. Unless the pilots can, in under four seconds, correctly diagnose the error, throw a specific emergency switch, and start recovery maneuvers, they will lose control of the airplane and crash — which is exactly what happened in the case of Lion Air Flight 610 and Ethiopian Airlines Flight 302.
THE ANCIENT COMPUTERS IN THE BOEING 737 MAX ARE HOLDING UP A FIX
I once linked to a story about how no one really cares about software security because no one ever gets seriously hurt. This is a hell of a counterpoint, though admittedly a narrow one.
Increasingly we know that accidents, especially airline accidents, occur when many independent things all go wrong at the same time. We engineer and plan for the expected errors. We have a very hard time anticipating the sudden intersection of two or three or four simultaneous errors.
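The arithmetic behind that observation is worth making explicit. A toy sketch in Python — the per-flight probabilities below are purely hypothetical, chosen only to illustrate the point, not real accident statistics:

```python
from math import prod

# Hypothetical, illustrative per-flight probabilities for four
# independent error conditions -- not real accident statistics.
p_errors = [1e-3, 1e-3, 1e-2, 1e-2]

# With independence, the probability that all four occur on the
# same flight is the product of the individual probabilities.
p_all = prod(p_errors)
print(f"all four at once: {p_all:.0e}")   # → 1e-10

# The probability that at least one occurs -- the "expected" errors
# we engineer and train for -- is far larger.
p_none = prod(1 - p for p in p_errors)
p_any = 1 - p_none
print(f"at least one: {p_any:.3f}")       # → 0.022
```

The individual errors are common enough that we plan for them; their four-way intersection is many orders of magnitude rarer, which is exactly why it catches us by surprise.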
William Langewiesche has written a fantastic article for New York Magazine on the two Boeing 737 Max crashes. So many great parts:
An old truth in aviation is that no pilot crashes an airplane who has not previously dinged an airplane somehow. Scratches and scrapes count. They are signs of a mind-set, and Lion Air had plenty of them, generally caused by rushed pushbacks from the gates in the company’s hurry to slap airplanes into the air. Kirana was once asked why Lion Air was experiencing so many accidents, and he answered sincerely that it was because of the large number of flights. Another question might have been why, despite so many crashes, the death toll was not higher. The answer was that all of Lion Air’s accidents happened during takeoffs and landings and therefore at relatively low speed, either on runways or in their immediate obstacle-free vicinities. These were the brief interludes when the airplanes were being flown by hand. The reason crashes never happened during other stages of flight is most likely that the autopilots were engaged.
What Really Brought Down the Boeing 737 Max?
And:
The 737 features two prominent toggle switches on the center pedestal whose sole purpose is to deal with such an event — a pilot simply switches them off to disengage the electric trim. They are known as trim cutout switches. They are big and fat and right behind the throttles. There is not a 737 pilot in the world who is unaware of them. Boeing assumed that if necessary, 737 Max pilots would flip them much as previous generations of 737 pilots had. It would be at most a 30-second event. This turned out to be an obsolete assumption.
And:
This time he was ready when the MCAS engaged, and he managed to avoid a dive by counter-trimming and hanging tight. The surprise was that after the assault ended, the MCAS paused and came at him again and again. In the right seat, Harvino was fumbling through checklists with increasing desperation, trying to figure out which one might apply. Over in the left seat, Suneja was confronting a rabid dog. The MCAS was fast and relentless. Suneja could have disabled it at any time with the flip of the two trim cutout switches, but this apparently never came to mind, and he had no ghost in the jump seat to offer the advice. The fight continued for the next five minutes, during which time the MCAS mounted more than 20 attacks and began to prevail.
The whole article is a study in design, human performance, complexity, and tragic expedience.
Great essay by Dan Brooks:
People love to watch the video in which an unknown artist makes drawing a hand look easy, but they also love the pictures in which ordinary people make it look hard. Failure is funny, especially in this case, which has primed us with the plausible claim that anyone can draw a woman’s hand before yanking us back to the truth that basically no one can draw anything at all.
The Pleasure of Watching Others Confront Their Own Incompetence
The Georgetown Law Center on Privacy & Technology issued a report (with its own vanity URL!) on the NYPD’s use of face recognition technology, and it starts with a particularly arresting anecdote:
On April 28, 2017, a suspect was caught on camera reportedly stealing beer from a CVS in New York City. The store surveillance camera that recorded the incident captured the suspect’s face, but it was partially obscured and highly pixelated. When the investigating detectives submitted the photo to the New York Police Department’s (NYPD) facial recognition system, it returned no useful matches.
Rather than concluding that the suspect could not be identified using face recognition, however, the detectives got creative.
One detective from the Facial Identification Section (FIS), responsible for conducting face recognition searches for the NYPD, noted that the suspect looked like the actor Woody Harrelson, known for his performances in Cheers, Natural Born Killers, True Detective, and other television shows and movies. A Google image search for the actor predictably returned high-quality images, which detectives then submitted to the face recognition algorithm in place of the suspect’s photo. In the resulting list of possible candidates, the detectives identified someone they believed was a match—not to Harrelson but to the suspect whose photo had produced no possible hits.
This celebrity “match” was sent back to the investigating officers, and someone who was not Woody Harrelson was eventually arrested for petit larceny.
GARBAGE IN, GARBAGE OUT: FACE RECOGNITION ON FLAWED DATA
The report describes a number of incidents that it views as problematic, and they basically fall into two categories: (1) editing or reconstructing photos before submitting them to face recognition systems; and (2) simply uploading composite sketches of suspects to face recognition systems.
The report also describes a few incidents in which individuals were arrested based on very little evidence apart from the results of the face recognition technology, and it makes the claim that:
If it were discovered that a forensic fingerprint expert was graphically replacing missing or blurry portions of a latent print with computer-generated—or manually drawn—lines, or mirroring over a partial print to complete the finger, it would be a scandal.
I’m not sure this is true. Helping a computer system latch onto a possible set of matches seems like an excellent way to narrow a list of suspects. But of course no one should be arrested or convicted based solely on fabricated fingerprint or facial “evidence”. We need to understand the limits of the technology used in the investigative process.
As technology becomes more complex, it is increasingly difficult to understand how it works and does not work. License plate readers are fantastically powerful technology, responsible for solving really terrible crimes. But the technology stack makes mistakes. You cannot rely on it alone.
There is no difference in principle between face recognition, genealogy searches, and license plate readers. They are powerful tools, and their benefits are remarkable, but they are not perfect. Crucially, they can be far less accurate when used on minority populations. Using powerful tools requires training: users need to understand how the technology works and where it can break down. This will always be true.
Garry Kasparov on Tanitoluwa Adewumi, an eight-year-old Nigerian refugee living in a family shelter in New York, who won the NY State K-3 Chess Championship this month:
The United States is where the world’s talent comes to flourish. Since its inception, one of America’s greatest strengths has been its ability to attract and channel the energy of wave after wave of striving immigrants. It’s a machine that turns that vigor and diversity into economic growth. It may mean opening a dry-cleaners or a start-up that becomes Google. It could mean studying medicine, law or physics, or — as Tani says he would like to do — becoming the world’s youngest chess champion.
Many of the questions I received as world champion centered on why the Soviet Union produced so many great chess players. After the dissolution of the U.S.S.R., these questions were asked again along new national borders. Why did Russia, or Armenia, or my native Azerbaijan have so many grandmasters? Was there something in the water, the genes or the schools? And why weren’t there more chess prodigies from the United States (or wherever the questioner was from)?
My answer was always the same: Talent is universal, but opportunity is not, and talent cannot thrive in a vacuum.
The heart-warming tale of the 8-year-old chess champion is quintessentially American
One version of America’s exceptionalism is its ability to harvest raw talent in the world, wherever it arises. How long will that last?
The crash of Ethiopian Airlines Flight 302 is a heartbreaking tragedy, and especially outrageous if it turns out that the pilots fought their own computer for control of the airplane. And of course the crash has prompted another round of hand-wringing over whether planes are just too complicated to fly.
There is a very long history of concern over the complexity of flying machines. In fact, it’s why the venerated checklist exists, as described fantastically in Atul Gawande’s The Checklist Manifesto.
But planes and other devices have gotten ever more complex to fly. And at the same time, we have grown less tolerant of human mistakes, which are still the cause of most crashes.
The major airframe manufacturers, Boeing and Airbus, have developed different approaches to solving the problems of airplane safety. I won’t go into the details here, but they basically come down to whether you trust the pilot or the automation more. You can find plenty of examples of problems with both.
But in a number of the most recent incidents, pilots have had difficulty switching control from the automation. As pilot Mac McClellan writes, pilots have always been required to identify a flight automation failure and then disable it:
What’s critical to the current, mostly uninformed discussion is that the 737 MAX system is not triply redundant. In other words, it can be expected to fail more frequently than one in a billion flights, which is the certification standard for flight critical systems and structures.
. . . . .
Though the pitch system in the MAX is somewhat new, the pilot actions after a failure are exactly the same as would be for a runaway trim in any 737 built since the 1960s. As pilots we really don’t need to know why the trim is running away, but we must know, and practice, how to disable it.
. . . . .
But airline accidents have become so rare I’m not sure what is still acceptable to the flying public. When Boeing says truthfully and accurately that pilots need only do what they have been trained to do for decades when a system fails, is that enough to satisfy the flying public and the media frenzy?
I’m not sure. But I am sure the future belongs to FBW [fly-by-wire] and that saying pilots need more training and better skills is no longer enough. The flying public wants to get home safely no matter who is allowed to be at the controls.
Can Boeing Trust Pilots?
For a long time, Boeing has argued that pilots need ultimate control of the aircraft. And they have relied on pilots to intervene when flight automation is not triply redundant. Airbus, on the other hand, has argued that pilots make too many mistakes and that computers should prevent pilots from making unsafe maneuvers.
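McClellan’s one-in-a-billion certification standard gives a feel for what triple redundancy buys. A minimal sketch, assuming a purely hypothetical per-flight sensor failure rate, of how 2-of-3 voting changes the odds:

```python
from math import comb

def k_of_n_failure(p: float, n: int, k: int) -> float:
    """Probability that at least k of n independent components fail,
    each with per-flight failure probability p (binomial tail)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

p = 1e-4  # hypothetical per-flight failure rate for one sensor

single = k_of_n_failure(p, 1, 1)   # one sensor, no redundancy
triple = k_of_n_failure(p, 3, 2)   # 2-of-3 voting: fails only if >=2 sensors fail

print(f"single sensor: {single:.1e}")   # → 1.0e-04
print(f"2-of-3 voting: {triple:.1e}")   # → 3.0e-08
```

With these made-up numbers, a single unvoted sensor misses the one-in-a-billion bar by several orders of magnitude, while 2-of-3 voting comes within a couple orders of magnitude of it — which is why a system relying on a pilot, rather than redundancy, to catch the failure is such a different design bet.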
The lesson of this incident may ultimately be that we cannot allow computers to make mistakes because we cannot rely on pilots to fix them. And if we succeed in not allowing computers to make mistakes, do we need pilots?
This Economist article is not about the perils of learning English. It’s about the perils of learning bad English. Which… duh?
Teaching children in English is fine if that is what they speak at home and their parents are fluent in it. But that is not the case in most public and low-cost private schools. Children are taught in a language they don’t understand by teachers whose English is poor. The children learn neither English nor anything else.
The perils of learning in English
Execution is everything.
This study comes from the world of finance, but there is no reason it shouldn’t apply to legal decision making.
We find that forecast accuracy declines over the course of a day as the number of forecasts the analyst has already issued increases. Also consistent with decision fatigue, we find that the more forecasts an analyst issues, the higher the likelihood the analyst resorts to more heuristic decisions by herding more closely with the consensus forecast, by self-herding (i.e., reissuing their own previous outstanding forecasts), and by issuing a rounded forecast.
Perhaps more interesting is that the market abides.
Finally, we find that the stock market understands these effects and discounts for analyst decision fatigue.
Hirshleifer, David A. and Levi, Yaron and Lourie, Ben and Teoh, Siew Hong, Decision Fatigue and Heuristic Analyst Forecasts (February 2018). NBER Working Paper No. w24293. Available at SSRN: https://ssrn.com/abstract=3131034
I’m a CA-licensed attorney and had to go through a Live Scan fingerprint process today. It took about two hours of my time, including understanding the requirements, filling out the form, and actually doing the fingerprinting. It cost money and seemed fragile: the attorney who went before me couldn’t get good prints and now has to go through some other process. I was fingerprinted when I applied for admission to the CA bar. Why did I have to do this again?
Basically this is about fixing a broken process with the CA Department of Justice (DOJ). The DOJ is supposed to notify the State Bar of attorney arrests or convictions. But that hasn’t been happening for, it seems, 30 years?
Sigh. Part of my job is to fix broken processes so I can imagine how this went:
State Bar: Hey you were supposed to be notifying us of all attorney arrests and convictions for the last 30 years.
DOJ: Ummmmmmmmmm… sorry. Do you just want a list of literally everyone?
State Bar: Just tell us when an attorney gets nailed.
DOJ: How do we know who the attorneys are? We’re going to need all their fingerprints.
State Bar: But we threw them all away.
Anyway, the State Bar says it’s sorry but we all have to get fingerprinted again.
I’m all for catching bad lawyers. But I’m also for basic table-stakes competency. So I can’t say I was super happy to be doing this.
We’ll see what happens when the DOJ starts reporting arrests and convictions. Some estimates are that up to 10% of California’s licensed attorneys may have unreported criminal activity. As a lawyer, you are supposed to self-report this kind of event to the State Bar, but that rarely happens. And, I don’t know, maybe you can understand that. I’m a very good rule follower, but, God forbid, if my life got upended by some arrest and conviction, I’m not sure reporting the already-pretty-public event to the State Bar would be the first thing on my mind.