To be fair, it’s mostly terrible news. But every once in a while, things turn out less awful than we expected.
A major component of ocean pollution is less devastating and more manageable than usually portrayed, according to a scientific team at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and the Massachusetts Institute of Technology.
Previous studies, including one last year by the United Nations Environment Program, have estimated that polystyrene, a ubiquitous plastic found in trash, could take thousands of years to degrade, making it nearly eternal. But in a new paper, five scientists found that sunlight can degrade polystyrene in centuries or even decades.
In the Sea, Not All Plastic Lasts Forever
As it becomes increasingly apparent that we cannot tell artificial intelligence precisely what the goal should be, a growing chorus of researchers and ethicists is throwing up its hands and asking the AIs to learn that part as well.
Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines.
Uncertainty about objectives might sound counterproductive, but it is actually an essential feature of safe intelligent systems. It implies that no matter how intelligent they become, machines will always defer to humans. They will ask permission when appropriate, they will accept correction, and, most important, they will allow themselves to be switched off — precisely because they want to avoid doing whatever it is that would give humans a reason to switch them off.
How to Stop Superhuman A.I. Before It Stops Us
This raises a lot of questions, not the least of which is: what are our objectives? But it turns out we have the same problem describing what we want as we have describing how we perceive. We’re just going to have to show you.
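The deference argument in the quote can be made concrete with a toy expected-utility comparison (the numbers and outcome distribution here are invented for illustration, not taken from the op-ed): a machine that is uncertain whether its planned action helps or harms us prefers to let a human veto it, because the veto filters out exactly the bad outcomes.

```python
# Toy "off-switch" calculation: the machine doesn't know the human's
# true utility u for its proposed action; it holds a distribution
# over possible values of u. All figures below are hypothetical.
outcomes = [(-10.0, 0.3), (2.0, 0.7)]  # (utility, probability)

# Acting unilaterally: expected utility over all outcomes.
act = sum(u * p for u, p in outcomes)

# Deferring: the human switches the machine off whenever u < 0,
# so each bad outcome is replaced by 0 (nothing happens).
defer = sum(max(u, 0.0) * p for u, p in outcomes)

print(act, defer)  # -1.6 vs 1.4: deferring removes the downside
assert defer >= act
```

The inequality holds for any distribution over utilities, which is the core of the argument: uncertainty about our objectives is what makes accepting the off switch rational for the machine.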
Farhad Manjoo in an opinion piece for the New York Times:
A parade of American presidents on the left and the right argued that by cultivating China as a market — hastening its economic growth and technological sophistication while bringing our own companies a billion new workers and customers — we would inevitably loosen the regime’s hold on its people. Even Donald Trump, who made bashing China a theme of his campaign, sees the country mainly through the lens of markets. He’ll eagerly prosecute a pointless trade war against China, but when it comes to the millions in Hong Kong who are protesting China’s creeping despotism over their territory, Trump prefers to stay mum.
Well, funny thing: It turns out the West’s entire political theory about China has been spectacularly wrong. China has engineered ferocious economic growth in the past half century, lifting hundreds of millions of its citizens out of miserable poverty. But China’s growth did not come at any cost to the regime’s political chokehold.
A darker truth is now dawning on the world: China’s economic miracle hasn’t just failed to liberate Chinese people. It is also now routinely corrupting the rest of us outside of China.
Dealing With China Isn’t Worth the Moral Cost
What do we stand for as Americans? Just money?
Tim Wu, writing in the New York Times:
But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.
The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. . . .
To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.
America’s Risky Approach to Artificial Intelligence
Planning requires paying attention. We’re a little distracted in the West these days. And Russia and China love that.
I’ve been cautious about impeachment, but if this isn’t impeachable, nothing is.
President Trump directed the acting White House chief of staff to freeze more than $391 million in aid to Ukraine in the days before Mr. Trump was scheduled to speak by phone with the new Ukrainian president, two senior administration officials said Monday.
Trump Ordered Aid to Ukraine Frozen Days Before Call With Its Leader
The man used the power of the United States and taxpayer funds to pressure a foreign government into helping his political campaign. He’s betrayed his oath. It’s time.
Increasingly we know that accidents, especially airline accidents, occur when many independent things all go wrong at the same time. We engineer and plan for the expected errors. We have a very hard time anticipating the sudden intersection of two or three or four simultaneous errors.
William Langewiesche has written a fantastic article for The New York Times Magazine on the two Boeing 737 Max crashes. So many great parts:
An old truth in aviation is that no pilot crashes an airplane who has not previously dinged an airplane somehow. Scratches and scrapes count. They are signs of a mind-set, and Lion Air had plenty of them, generally caused by rushed pushbacks from the gates in the company’s hurry to slap airplanes into the air. Kirana was once asked why Lion Air was experiencing so many accidents, and he answered sincerely that it was because of the large number of flights. Another question might have been why, despite so many crashes, the death toll was not higher. The answer was that all of Lion Air’s accidents happened during takeoffs and landings and therefore at relatively low speed, either on runways or in their immediate obstacle-free vicinities. These were the brief interludes when the airplanes were being flown by hand. The reason crashes never happened during other stages of flight is most likely that the autopilots were engaged.
What Really Brought Down the Boeing 737 Max?
The 737 features two prominent toggle switches on the center pedestal whose sole purpose is to deal with such an event — a pilot simply switches them off to disengage the electric trim. They are known as trim cutout switches. They are big and fat and right behind the throttles. There is not a 737 pilot in the world who is unaware of them. Boeing assumed that if necessary, 737 Max pilots would flip them much as previous generations of 737 pilots had. It would be at most a 30-second event. This turned out to be an obsolete assumption.
This time he was ready when the MCAS engaged, and he managed to avoid a dive by counter-trimming and hanging tight. The surprise was that after the assault ended, the MCAS paused and came at him again and again. In the right seat, Harvino was fumbling through checklists with increasing desperation, trying to figure out which one might apply. Over in the left seat, Suneja was confronting a rabid dog. The MCAS was fast and relentless. Suneja could have disabled it at any time with the flip of the two trim cutout switches, but this apparently never came to mind, and he had no ghost in the jump seat to offer the advice. The fight continued for the next five minutes, during which time the MCAS mounted more than 20 attacks and began to prevail.
The whole article is a study in design, human performance, complexity, and tragic expedience.
Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” including “box surfing, where seekers learn to bring a box to a locked ramp in order to jump on top of the box and then ‘surf’ it to the hider’s shelter,” according to OpenAI.
AI breaks simulated laws of physics to win at hide and seek
These are entertaining simulations to watch.
Joseph Cox with Motherboard has authored a story on a massive private license plate surveillance network called DRN:
This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN’s database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network.
This Company Built a Private Surveillance Network. We Tracked Someone With It
I wrote recently about private surveillance projects that may meet or exceed government efforts. It won’t be long before the license plate readers are facial recognition scanners. It’s probably happening now.
One of the most important AI ethics tasks is to educate developers and especially users about what AIs can do well and what they cannot do well. AI systems do amazing things, and users mostly assume these things are done accurately based on a few demonstrations. For example, the police assume facial recognition systems accurately tag bad guys, and that license plate databases accurately contain lists of stolen cars. But these systems are brittle, and an excellent example of this is the fun, new ImageNet Roulette web tool put together by artist and researcher Trevor Paglen.
ImageNet Roulette is a provocation designed to help us see into the ways that humans are classified in machine learning systems. It uses a neural network trained on the “Person” categories from the ImageNet dataset, which has over 2,500 labels used to classify images of people.
ImageNet Roulette (via The Verge)
The service claims not to keep any uploaded photos, so if you trust them, you can upload a webcam image of yourself and see how the internet classifies your face.
Of course no human would look at a random image of another human devoid of context and attempt to assign a description such as “pipe smoker” or “newspaper reader.” We would say, “I don’t know. It just looks like a person.”
But AIs aren’t that smart yet. They don’t know what they can’t know. So ImageNet Roulette calculates probabilities that an image falls into a given description, and then it outputs the highest-probability description. It’s a shot in the dark. You might think it is seeing something deep, but nope. It has 2,500 labels and it has to apply one. I apparently look like a sociologist.
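The forced-choice behavior described above can be sketched in a few lines: the model assigns a probability to every label and must output the argmax, even when the top probability is barely above chance. The labels and raw scores below are invented for illustration; they are not the real ImageNet categories or model outputs.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical labels and nearly indistinguishable raw scores
# for one uploaded photo.
labels = ["pipe smoker", "newspaper reader", "sociologist", "beekeeper"]
scores = [0.10, 0.05, 0.20, 0.08]

probs = softmax(scores)
best = max(range(len(labels)), key=lambda i: probs[i])

# The system must emit a label even though no probability is much
# better than chance (1/4 = 0.25 with four labels).
print(labels[best], round(probs[best], 3))  # sociologist 0.274
```

A human shown the same flat distribution would say “I don’t know”; the classifier, by construction, cannot.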
Gary Marcus and Ernest Davis writing for the NYT:
. . . We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.
Google’s Talk to Books, an A.I. venture that aims to answer your questions by providing relevant passages from a huge database of texts, did no better. It served up 20 passages with a wide array of facts, some about George Washington, others about the invention of computers, but with no meaningful connection between the two.
The situation is even worse when it comes to A.I. and the concepts of space and causality. Even a young child, encountering a cheese grater for the first time, can figure out why it has holes with sharp edges, which parts allow cheese to drop through, which parts you grasp with your fingers and so on. But no existing A.I. can properly understand how the shape of an object is related to its function. Machines can identify what things are, but not how something’s physical features correspond to its potential causal effects.
How to Build Artificial Intelligence We Can Trust
Modern AIs have no basic understanding of the world, and there’s been little progress toward one.