User Interfaces, Boeing, Airbus, and the 737 Max 8

The crash of Ethiopian Airlines Flight 302 is a heartbreaking tragedy, and especially outrageous if it turns out that the pilots fought their own computer for control of the airplane. And of course the crash has prompted another round of hand-wringing over whether planes are just too complicated to fly.

There is a very long history of concern over the complexity of flying machines. In fact, it’s why the venerated checklist exists, as described so well in Atul Gawande’s The Checklist Manifesto.

But planes and other devices have gotten ever more complex to fly. And at the same time, we have grown less tolerant of human mistakes, which are still the cause of most crashes.

The major aircraft manufacturers, Boeing and Airbus, have developed different approaches to airplane safety. I won’t go into the details here, but they basically come down to whether you trust the pilot or the automation more. You can find plenty of examples of problems with both.

But in a number of recent incidents, pilots have had difficulty taking control back from the automation. As pilot Mac McClellan writes, pilots have always been required to identify a flight automation failure and then disable the failed system:

What’s critical to the current, mostly uninformed discussion is that the 737 MAX system is not triply redundant. In other words, it can be expected to fail more frequently than one in a billion flights, which is the certification standard for flight critical systems and structures.

. . . . .

Though the pitch system in the MAX is somewhat new, the pilot actions after a failure are exactly the same as would be for a runaway trim in any 737 built since the 1960s. As pilots we really don’t need to know why the trim is running away, but we must know, and practice, how to disable it.

. . . . .

But airline accidents have become so rare I’m not sure what is still acceptable to the flying public. When Boeing says truthfully and accurately that pilots need only do what they have been trained to do for decades when a system fails, is that enough to satisfy the flying public and the media frenzy?

I’m not sure. But I am sure the future belongs to FBW [fly-by-wire] and that saying pilots need more training and better skills is no longer enough. The flying public wants to get home safely no matter who is allowed to be at the controls.

Can Boeing Trust Pilots?

For a long time, Boeing has argued that pilots need ultimate control of the aircraft. And they have relied on pilots to intervene when flight automation is not triply redundant. Airbus, on the other hand, has argued that pilots make too many mistakes and that computers should prevent pilots from making unsafe maneuvers.
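The “one in a billion” figure in McClellan’s quote is roughly what independent redundancy buys you. Here is a back-of-the-envelope sketch in Python, using hypothetical failure rates rather than any actual Boeing or certification numbers:

```python
# Rough illustration only: the per-channel failure probability below is a
# made-up, hypothetical number, not a real certification figure.
p_channel = 1e-3  # assumed chance that one independent channel fails on a given flight

# If the system only fails when all three independent channels fail together,
# the combined probability is the product of the three:
p_triple_redundant = p_channel ** 3
print(p_triple_redundant)  # 1e-09 -- the order of magnitude of "one in a billion"

# A single-channel system fails whenever that one channel does,
# which under these assumptions is about a million times more often.
print(p_channel / p_triple_redundant)  # 1000000.0
```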

The lesson of this incident may ultimately be that we cannot allow computers to make mistakes because we cannot rely on pilots to fix them. And if we succeed in not allowing computers to make mistakes, do we need pilots?

AI Money and Claims

One report says there’s lots of money being thrown at AI:

AI-related companies raised $9.3B in 2018, a 72% increase compared to 2017.

PwC MoneyTree Report, Q4 2018

Another report says lots of European AI startups might not actually be focused on AI at all:

According to the survey from London venture capital firm MMC, 40 percent of European startups that are classified as AI companies don’t actually use artificial intelligence in a way that is “material” to their businesses. MMC studied some 2,830 AI startups in 13 EU countries to come to its conclusion, reviewing the “activities, focus, and funding” of each firm.

Forty percent of ‘AI startups’ in Europe don’t actually use AI, claims report

When a beverage maker’s stock soars after adding “blockchain” to its name on the NASDAQ, it’s not hard to see why companies are perfectly fine being thought of as AI startups despite evidence to the contrary. Caveat emptor.

Daniel Dennett on the desirability of general AI

He sees only risks and no rewards from generalized (i.e. conscious) artificial intelligence:

WE DON’T NEED artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights and should not have feelings that could be hurt or be able to respond with resentment to “abuses” rained on them by inept users.

. . . . .

So what we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods.

It will be hard enough learning to live with them without distracting ourselves with fantasies about the Singularity in which these AIs will enslave us, literally. The human use of human beings will soon be changed—once again—forever, but we can take the tiller and steer between some of the hazards if we take responsibility for our trajectory.

Will AI Achieve Consciousness? Wrong Question

Even if generalized AI is not the explicit goal, it may be the natural consequence of building devices that can fend for themselves without human intervention (in, for example, interstellar space). After all, it seems likely that human generalized intelligence evolved only as a necessary by-product of human survival needs, not as a specific goal.

Avoiding the creation of generalized AI (even if we wanted to) may be more difficult than simply deciding against it. And that’s the concern.

State Regulation of AI Technology

With the federal government seemingly unwilling or unable to regulate cybersecurity, data privacy, and artificial intelligence, the states are increasingly active, particularly on facial recognition technology.

A lot of this activity is around forming task forces, but a fair amount also addresses algorithmic impact:

Legislation referring specifically to “artificial intelligence” is currently pending in at least 13 states, according to LexisNexis State Net’s legislative tracking system. Several of the bills provide for the creation of AI study commissions or task forces, while a few deal with education or education funding.

Only four states are considering bills addressing facial recognition camera technology, including Washington, which is considering measures (HB 1654 and SB 5528) concerning the use of such technology by government entities. But at least 27 states are considering bills dealing with the subject of data collection or “data privacy” specifically.

And although there isn’t any pending legislation referencing an “algorithmic impact assessment,” there are bills in 17 states that mention “algorithm.” They include measures dealing with the use of algorithms to censor offensive, political or religious speech on social media (Arkansas HB 1028, Iowa HB 317, Kansas H 2322, and Oklahoma SB 533); calculate insurance scores (Michigan SB 88, Missouri HB 647, Oregon HB 2703 and Virginia HB 2230); and gauge the risk of coronary heart disease (South Carolina HB 3598 and SB 368).

States May Take The Lead On Regulating AI (paywall)

Honestly, cryptocurrencies are useless

Bruce Schneier’s essay on blockchain is worth repeating:

Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain — of course, then they wouldn’t have the cool name.

Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.

Blockchain and Trust

Blockchain is just about replacing one form of trust with another, and there are unfortunately many, many examples of the technological trust placed in blockchain failing spectacularly. Trust is hard.

Building AIs is Easier

It will come as no surprise, but as companies rely more and more on AI to provide new services and boost productivity, they are building better and better tools for making AIs. These tools are getting easier to use, which means engineers need to understand less and less about how AIs are built.

In a Medium post, Ryszard Szopa makes two related points: (1) your knowledge of how to build custom AIs is becoming less relevant; and (2) no one will care, because data is more important than algorithms and the leading AI companies have all the data:

In early 2018 the task from above [breast cancer detection!] wasn’t suitable for an intern’s first project, due to lack of complexity. Thanks to Keras (a framework on top of TensorFlow) you could do it in just a few lines of Python code, and it required no deep understanding of what you were doing.

What was still a bit of a pain was hyperparameter tuning. If you have a Deep Learning model, you can manipulate multiple knobs like the number and size of layers, etc. How to get to the optimal configuration is not trivial, and some intuitive algorithms (like grid search) don’t perform well. You ended up running a lot of experiments, and it felt more like an art than a science.
As I am writing these words (beginning of 2019), Google and Amazon offer services for automatic model tuning (Cloud AutoML, SageMaker), and Microsoft is planning to do so. I predict that manual tuning is going the way of the dodo, and good riddance.

I hope that you see the pattern here.

Your AI skills are worth less than you think
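To make the “just a few lines of Python” claim concrete, here is a minimal Keras sketch of a small binary classifier. The input size, layer widths, and placeholder data are hypothetical stand-ins, not the breast-cancer task from Szopa’s post:

```python
# Minimal Keras sketch: a tiny binary classifier on placeholder data.
# Everything here (30 features, layer sizes, 5 epochs) is an arbitrary choice --
# exactly the kind of "knobs" the quoted post says hyperparameter tuning adjusts.
import numpy as np
import tensorflow as tf

X = np.random.rand(512, 30).astype("float32")   # placeholder feature vectors
y = np.random.randint(0, 2, size=(512,))        # placeholder 0/1 labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(30,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)
```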

Yes, the pattern is abstraction, and it is wonderful. It allows a programmer to build a web server with a single line of code, or to do many other amazing things, without understanding the underlying nuts and bolts. It has its problems, but mostly it is to be embraced. Building AIs is no different.
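And the web server example really is a single line (after the import), using nothing but Python’s standard library; port 8000 is an arbitrary choice:

```python
import http.server, socketserver

# One line: serve the files in the current directory at http://localhost:8000
socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler).serve_forever()
```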

The issue with too few companies having too much data is another problem entirely, one that will perhaps be dealt with by competition law. Because more data beats better design every time.

AIs and Protein Folding

The stories are turning into a steady trickle. DeepMind’s neural network again crushes the competition:

DeepMind entered AlphaFold into the Critical Assessment of Structure Prediction (CASP) competition, a biannual protein-folding olympics that attracts research groups from around the world. The aim of the competition is to predict the structures of proteins from lists of their amino acids which are sent to teams every few days over several months. The structures of these proteins have recently been cracked by laborious and costly traditional methods, but not made public. The team that submits the most accurate predictions wins.
On its first foray into the competition, AlphaFold topped a table of 98 entrants, predicting the most accurate structure for 25 out of 43 proteins, compared with three out of 43 for the second placed team in the same category.

Google’s DeepMind predicts 3D shapes of proteins

The result was initially depressing to at least one scientist:

Mohammed AlQuraishi, a biologist who has dedicated his career to this kind of research, flew in early December to Cancun, Mexico, where academics were gathering to discuss the results of the latest contest. As he checked into his hotel, a five-star resort on the Caribbean, he was consumed by melancholy.

The contest, the Critical Assessment of Structure Prediction, was not won by academics. It was won by DeepMind, the artificial intelligence lab owned by Google’s parent company.

“I was surprised and deflated,” said Dr. AlQuraishi, a researcher at Harvard Medical School. “They were way out in front of everyone else.”

. . . . .

After the conference in Cancun, Dr. AlQuraishi described his experience in a blog post. The melancholy he felt after losing to DeepMind gave way to what he called “a more rational assessment of the value of scientific progress.”

But he strongly criticized big pharmaceutical companies like Merck and Novartis, as well as his academic community, for not keeping pace.

Making New Drugs With a Dose of Artificial Intelligence

This is good news! Yes, there will be disruption, but we have discovered new tools to crack the most computationally expensive problems. This is tremendous work and heralds a future of dramatic advances in energy, medicine, and automation.

Embracing the Incomprehensible Future, Fusion Edition

Very, very complicated algorithms are starting to solve problems in ways we don’t fully understand. And it again raises the question of whether we as a species are headed into that incomprehensible future.

I think if it solves fusion, I’ll take it:

The number of design choices for optimizing this fusion plasma is enormous, because all aspects of the capsule’s dimensions and structure, as well as the details of the laser and the time dependence of the laser’s power, can be varied. Implosion performance can also be considerably affected by ‘hydrodynamic’ instabilities that are seeded by inevitable imperfections in the manufactured capsule and imbalances or instabilities in the applied laser light. Unsurprisingly, the complexity of this implosion system leads to fusion performance that is extremely sensitive to design details and instabilities.

With so many design choices, and with limited experimental data, the standard approach to optimizing fusion performance has been to use theoretical insights along with sophisticated radiation–hydrodynamic simulations that follow, as well as we know how, the physics of the implosions and their degradations.

. . . . .

The authors trained a statistical model to match an initial set of experimental data using simulation outputs. They then used this model to suggest changes to the implosion design that the model predicted would improve the fusion performance.

. . . . .

By consistently following this methodology to design a series of experimental campaigns, Gopalaswamy and colleagues improved the fusion yield by a remarkable factor of three compared with OMEGA’s previous record.

Experimentally trained statistical models boost nuclear-fusion performance
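Stripped to its essentials, the methodology in the quote is surrogate-model optimization: fit a statistical model to the experiments run so far, then let the model nominate the next design to try. Here is a minimal sketch with made-up design parameters and yields, not the authors’ actual OMEGA data or model:

```python
# Surrogate-model design loop, sketched with hypothetical data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Each row is one past experiment: two normalized design knobs
# (say, a laser-pulse parameter and a capsule-shell parameter).
designs_tried = np.array([[0.2, 0.5], [0.4, 0.3], [0.6, 0.7], [0.8, 0.4]])
measured_yield = np.array([1.0, 1.4, 1.1, 1.6])   # relative fusion yield

# 1. Train a statistical model on the experimental campaign so far.
surrogate = GaussianProcessRegressor().fit(designs_tried, measured_yield)

# 2. Ask the model which untried design it predicts will perform best.
candidates = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 2))
best_next = candidates[np.argmax(surrogate.predict(candidates))]
print("next design to try:", best_next)

# 3. Run that shot, append the measured yield, refit, and repeat.
```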

And the kicker:

[I]t is humbling for scientists dedicated to understanding such complex systems to recognize how much they don’t understand. As a quote attributed to physicist Eugene Wigner states: “It is nice to know that the computer understands the problem. But I would like to understand it, too”.

Our wetware brains didn’t evolve to track all these variables. But we are building machines that can.

Corporate Gatekeepers

This is not sustainable:

During the midterm elections in the United States last year, Twitter added, most of the false content on its site came from within the country itself. Many of the misleading messages focused on voter suppression, with the company deleting almost 6,000 tweets that included incorrect dates for the election or that falsely claimed that Immigration and Customs Enforcement was patrolling polling stations.

Twitter Says False Content Is Evolving, and More Comes From the U.S.

It is both dangerous and ineffective to rely on big corporations to curate messages so that only “good messages” are seen. We can’t even decide whether we want this. On the one hand, it scares us that big companies have this power. On the other hand, we insist that they use it.

It will get worse. Deepfakes are coming to the political space. (This one is amazing.) And videos do not have to be fake to be taken out of context.

So what to do? Facebook/Twitter/Apple/Google are not the solution. Corporate decision making is not immune to bias, mistakes, and poor policy, and it is less transparent and less aligned with our social goals than public processes are.

We need to slow down and stop making snap judgments. We need a renewed emphasis on the legitimacy and authority of the source. The good news is that we might get better at this over time.