AI solves protein folding

Lots of news on the DeepMind announcement that it has solved the protein folding problem. From the NYT:

Computer scientists have struggled to build such a system for more than 50 years. For the last 25, they have measured and compared their efforts through a global competition called the Critical Assessment of Structure Prediction, or C.A.S.P. Until now, no contestant had even come close to solving the problem.

DeepMind solved the problem with a wide range of proteins, reaching an accuracy level that rivaled physical experiments. Many scientists had assumed that moment was still years, if not decades, away.

“I always hoped I would live to see this day,” said John Moult, a professor at the University of Maryland who helped create C.A.S.P. in 1994 and continues to oversee the biennial contest. “But it wasn’t always obvious I was going to make it.”

London A.I. Lab Claims Breakthrough That Could Accelerate Drug Discovery

This is phenomenal and wonderful, but it is also an oracle into which we have limited insight. To quote a 2018 essay on AlphaZero:

Suppose that deeper patterns exist to be discovered — in the ways genes are regulated or cancer progresses; in the orchestration of the immune system; in the dance of subatomic particles. And suppose that these patterns can be predicted, but only by an intelligence far superior to ours. If AlphaInfinity could identify and understand them, it would seem to us like an oracle.

We would sit at its feet and listen intently. We would not understand why the oracle was always right, but we could check its calculations and predictions against experiments and observations, and confirm its revelations. Science, that signal human endeavor, would reduce our role to that of spectators, gaping in wonder and confusion.

Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we’ll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future.

Human-Centered AI Tools

This is a fantastic piece of work (and paper title!) about the benefits of human-in-the-loop AI processes.

Based on identified user needs, we designed and implemented SMILY (Figure 2), a deep-learning based CBIR [content-based image retrieval] system that includes a set of refinement mechanisms to guide the search process. Similar to existing medical CBIR systems, SMILY enables pathologists to query the system with an image, and then view the most similar images from past cases along with their prior diagnoses. The pathologist can then compare and contrast those images to the query image, before making a decision.

Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making (via The Gradient)

The system used three primary refinement tools: (1) refine by region; (2) refine by example; and (3) refine by concept. The authors reported that users found the software to offer greater mental support, and that users were naturally focused on explaining surprising results: “They make me wonder, ‘Oh am I making an error?'” Critically, this allowed users some insight into how the algorithm worked without an explicit explanation.
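
The retrieval core is simple to sketch. Nothing below comes from the SMILY paper or its code; it is a minimal illustration of content-based image retrieval in which a pixel-intensity histogram stands in for the learned embedding and a simple crop stands in for "refine by region":

    import numpy as np

    def embed(image):
        # Crude stand-in embedding: a normalized intensity histogram. A real
        # system such as SMILY computes embeddings with a pretrained deep network.
        hist, _ = np.histogram(image, bins=32, range=(0, 255))
        v = hist.astype(float)
        return v / (np.linalg.norm(v) + 1e-9)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def most_similar_cases(query_image, case_db, k=5, region=None):
        """Return the k past cases most similar to the query image.

        case_db: list of (embedding, prior_diagnosis) pairs.
        region:  optional (row_slice, col_slice) crop of the query image,
                 a rough stand-in for the "refine by region" idea.
        """
        if region is not None:
            rows, cols = region
            query_image = query_image[rows, cols]
        q = embed(query_image)
        scored = [(cosine(q, emb), diag) for emb, diag in case_db]
        scored.sort(key=lambda s: s[0], reverse=True)
        return scored[:k]

    # Usage sketch: case_db = [(embed(img), dx) for img, dx in past_cases]
    #               most_similar_cases(new_slide, case_db, k=5)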

I suspect human-in-the-loop AI processes are our best version of the future. They have also been proposed to resolve ethical concerns.

Food Hacking

I don’t know if it even qualifies as a “hack,” but this automation by Chris Buetti is fantastic:

I created an Instagram page that showcased pictures of New York City’s skylines, iconic spots, elegant skyscrapers — you name it. The page has amassed a following of over 25,000 users in the NYC area and it’s still rapidly growing.

I reach out to restaurants in the area either via Instagram’s direct messaging or email and offer to post a positive review in return for a free entree or at least a discount. Almost every restaurant I’ve messaged came back at me with a compensated meal or a gift card. Most places have an allocated marketing budget for these types of things so they were happy to offer me a free dining experience in exchange for a promotion. I’ve ended up giving some of these meals away to my friends and family because at times I had too many queued up to use myself.

The beauty of this all is that I automated the whole thing. And I mean 100% of it. I wrote code that finds these pictures or videos, makes a caption, adds hashtags, credits where the picture or video comes from, weeds out bad or spammy posts, posts them, follows and unfollows users, likes pictures, monitors my inbox, and most importantly — both direct messages and emails restaurants about a potential promotion. Since its inception, I haven’t even really logged into the account.

How I Eat For Free in NYC Using Python, Automation, Artificial Intelligence, and Instagram

This is one of the best casual uses of Python I’ve ever seen. It is rare to find a process with such tangible benefits that can be 100% automated, but he found one and built the automation. Kudos.
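
The code itself isn’t shown in his post, so the sketch below is purely my guess at the shape of such a pipeline: every function name is hypothetical, the spam filter and hashtags are placeholders, and the calls that would actually talk to Instagram or email are injected stubs rather than any real API:

    import random

    SPAM_WORDS = {"follow4follow", "giveaway", "dm for promo"}

    def looks_spammy(candidate):
        caption = candidate.get("caption", "").lower()
        return any(word in caption for word in SPAM_WORDS)

    def build_caption(candidate):
        # Credit the original poster and append hashtags.
        hashtags = "#nyc #newyork #skyline #manhattan"
        return "{}\n\nCredit: @{}\n{}".format(
            candidate["caption"], candidate["author"], hashtags)

    def run_once(fetch_candidates, publish, message_restaurants):
        """One cycle of the bot: repost a candidate, then do outreach.

        The three arguments are injected stubs for the parts that would
        talk to Instagram and email; they are assumptions, not a real API.
        """
        candidates = [c for c in fetch_candidates() if not looks_spammy(c)]
        if candidates:
            choice = random.choice(candidates)
            publish(image=choice["image"], caption=build_caption(choice))
        message_restaurants()  # offer a post in exchange for a meal or discount

    if __name__ == "__main__":
        # Dummy stubs so the sketch runs end to end without touching any service.
        demo = [{"caption": "Sunset over the skyline", "author": "someone", "image": b""}]
        run_once(lambda: demo,
                 lambda image, caption: print("POST:", caption),
                 lambda: print("DM/email sent to a restaurant"))

Scheduling the loop (cron, a sleep-and-repeat daemon, whatever) is the easy part; the point is that every step he lists maps onto a small, composable function like these.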

Theory vs Practice in Keyboard Layouts

So-called “Dvorak” keyboards replace the standard QWERTY key layout with a more thoughtful organization, putting vowels and consonants in better locations to speed typing. Here’s the Dvorak layout:

Perhaps the theory doesn’t work as well as anticipated. Jon Porter spent ten years using a Dvorak keyboard and is meh. It did force him to learn to touch type though:

Eventually, yes, it made me a faster typist, but not for the reasons that I hoped it would. Dvorak made me faster almost entirely because it forced me to learn to touch type. For years I’d tried to do the same using a QWERTY layout, but when my old hunt-and-peck method was so easy to revert to I’d inevitably give up on touch typing when I needed to write something quickly. Dvorak was different. It forced me to learn to type properly, and eventually I did. 

But outside of the advantages of learning to touch type, switching to Dvorak has brought some other benefits along with it. For one thing, my laptop is now a lot more secure. You can watch me typing in my password, but the mismatch of key labels and layout will confound you. Even if you knew the password, you’d have to translate the key positions from QWERTY to Dvorak to type it in. Then, if I’m ever stupid enough to leave myself logged in, it becomes a lot harder to do anything with my machine for anyone who’s not me. Mouse clicking only gets you so far.

I’ve used Dvorak for 10 years, and I’m here to tell you it’s not all that

I suppose the security benefits are… real? I dunno.
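
For what it’s worth, the “translation” is just a fixed substitution. Assuming the standard US Dvorak layout, a few lines of Python undo it, mapping the QWERTY labels an onlooker sees to the characters actually typed:

    # Map the QWERTY labels on the physical keys to the characters those same
    # keys produce under the standard US Dvorak layout (unshifted keys only).
    QWERTY = "qwertyuiop[]asdfghjkl;'zxcvbnm,./"
    DVORAK = "',.pyfgcrl/=aoeuidhtns-;qjkxbmwvz"

    to_dvorak = str.maketrans(QWERTY, DVORAK)

    def observed_to_typed(observed_keys: str) -> str:
        """What actually got typed, given the QWERTY key labels an onlooker saw."""
        return observed_keys.translate(to_dvorak)

    print(observed_to_typed("correct horse"))  # -> "jrpp.jy drpo."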

But it’s interesting that the theory doesn’t pan out. In theory, there’s no difference between theory and practice. But in practice…

Tips for Forming Tiger Teams

In the largest analysis of the issue thus far, investigators have found that the smaller the research team working on a problem, the more likely it was to generate innovative solutions. . . . .

. . . . .

“You might ask what is large, and what is small,” said Dr. Evans. “Well, the answer is that this relationship holds no matter where you cut the number: between one person and two, between ten and twenty, between 25 and 26.”

It also holds within every field in science, whether physics, psychology, computer science, mathematics, or zoology, he added: “You see it within field, within topics. And two-thirds of the effect we found is within the individual. That means that if I’m writing a paper, and I partner with one other person, or two, the result is less disruptive with each person I add.”

Can Big Science Be Too Big?

How much of this is about social norms and regression to the mean? Add enough people and pretty soon you get an average.

Does scientific excellence require freedom?

This will be a central tension for China and – as a result – for most of the rest of us heading into the middle of the 21st century.

Worth quoting at length:

Mr Xi talks of science and technology as a national project. However, in most scientific research, chauvinism is a handicap. Expertise, good ideas and creativity do not respect national frontiers. Research takes place in teams, which may involve dozens of scientists. Published papers get you only so far: conferences and face-to-face encounters are essential to grasp the subtleties of what everyone else is up to. There is competition, to be sure; military and commercial research must remain secret. But pure science thrives on collaboration and exchange.

. . . . .

Although many researchers will be satisfied with just their academic freedom, only a small number need seek broader self-expression to cause problems for the Communist Party. Think of Andrei Sakharov, who developed the Russian hydrogen bomb, and later became a chief Soviet dissident; or Fang Lizhi, an astrophysicist who inspired the students leading the Tiananmen Square protests in 1989. When the official version of reality was tired and stilted, both stood out as seekers of the truth. That gave them immense moral authority.

Some in the West may feel threatened by China’s advances in science, and therefore aim to keep its researchers at arm’s length. That would be wise for weapons science and commercial research, where elaborate mechanisms to preserve secrecy already exist and could be strengthened. But to extend an arm’s-length approach to ordinary research would be self-defeating. Collaboration is the best way of ensuring that Chinese science is responsible and transparent. It might even foster the next Fang.

Hard as it is to imagine, Mr Xi could end up facing a much tougher choice: to be content with lagging behind, or to give his scientists the freedom they need and risk the consequences. In that sense, he is running the biggest experiment of all.

How China could dominate science

Dash buttons ruled too risky and unfair

It’s amazing we’re all still alive given the risks today.

A German court ruled on Thursday that Amazon’s thumb-sized ordering devices known as “Dash” buttons do not give sufficient information about the product ordered or its price, breaking consumer protection legislation.

. . . . .

“We are always open to innovation. But if innovation means that the consumer is put at a disadvantage and price comparisons are made difficult then we fight that,” Wolfgang Schuldzinski, head of the consumer body, said in a statement.

Court says Amazon ‘Dash’ buttons violate German law

You Can’t Train for Everything

“It was just like, ‘We found a seal with an eel stuck in its nose. Do we have a protocol?’ ” Littnan told The Post in a phone interview.

There was none, Littnan said, and it took several emails and phone calls before the decision was made to grab the eel and try pulling it out.

https://www.washingtonpost.com/nation/2018/12/07/make-better-choices-endangered-hawaiian-monk-seals-keep-getting-eels-stuck-up-their-noses-scientists-want-them-stop/

For some reason this reminds me of a lot of legal work.

When tech comes to health

Apple Watch’s ECG feature is making the news, as it should.* I’m not tracking it, and don’t plan to, but this should spawn a lot of innovation from the plaintiffs’ bar in the complaints we see against Apple: wrongful alerts leading to economic and health harms; negligence for failing to alert (what constitutes a proper training set? When is that training a form of negligence? What’s the duty? – so much fun stuff); does it reach to wrongful death?

*Full disclosure: I used to work at Apple but never advised on this feature.