Phenomenal essay by Lucas Kunce, a U.S. Marine who served in Iraq and Afghanistan, responding to news that 4,600 Google employees signed a petition urging the company to refuse to build weapons technology:
People frequently threw objects of all sizes at our vehicles in anger and protest. Aside from roadside bombs, the biggest threat at the time, particularly in crowded areas, was an armor-piercing hand-held grenade. It looked like a dark soda can with a handle protruding from the bottom. Or, from a distance and with only an instant to decide, it looked just like many of the other objects that were thrown at us.
One day in Falluja, at the site of a previous attack, an Iraqi man threw a dark oblong object at one of the vehicles in my sister team. The Marine in the turret, believing it was an armor-piercing grenade, shot the man in the chest. The object turned out to be a shoe.
[. . . . .]
When I think about A.I. and weapons development, I don’t imagine Skynet, the Terminator, or some other Hollywood dream of killer robots. I picture the Marines I know patrolling Falluja with a heads-up display like HoloLens, tied to sensors and to an A.I. system that can process data faster and more precisely than humanly possible — an interface that helps them identify an object as a shoe, or an approaching truck as too light to be laden with explosives.

Dear Tech Workers, U.S. Service Members Need Your Help
I enjoyed this interview and especially the title: “Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair.”
What are some core problems or research areas you want to approach?
People are now solving problems just by throwing an enormous amount of computation and data at them and trying every possible way. You can afford to do that if you are a big company and have a lot of resources, but people in developing countries cannot afford the data or the computational resources. So the theoretical challenge, or the fundamental challenge, is how to develop methods that are better understood and therefore don’t need experiments with hundreds of variants to get things to work.
Another thing is that one of the problems with current datasets, especially in terms of the usefulness of these systems for different cultures, is that there is a cultural bias in the data that has been collected. It is Western data informed by the Western way of seeing and doing things, so to some extent having data from different cultures and different environments is going to help make things more useful. You need to learn from data that is more relevant to the task.

Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair
And of course:
“Solving” is probably too strong, but for addressing those problems, as I’ve said, the problem is that we don’t realise that they are the reflections of our own problems. We don’t realise how biased we are until we see an AI reproduce the same bias, and we see that it’s biased.
I chuckle a bit when I hear about biased humans going over biased data in the hopes of creating unbiased data. Bias is a really hard problem, and it’s always going to be with us in one form or another. Education and awareness are the most important tools for addressing it.
Benjamin Heinzerling writes in The Gradient that the “Clever Hans effect” is alive and well in natural language processing (NLP) deep learning models:
Of course, the problem of learners solving a task by learning the “wrong” thing has been known for a long time and is known as the Clever Hans effect, after the eponymous horse which appeared to be able to perform simple intellectual tasks, but in reality relied on involuntary cues given by its handler. Since the 1960s, versions of the tank anecdote tell of a neural network trained by the military to recognize tanks in images, but actually learning to recognize different levels of brightness due to one type of tank appearing only in bright photos and another type only in darker ones.
Less anecdotally, Viktoria Krakovna has collected a depressingly long list of agents following the letter, but not the spirit, of their reward function, with such gems as a video game agent learning to die at the end of the first level, since repeating that easy level gives a higher score than dying early in the harder second level. Two more recent, but already infamous, cases are an image classifier claimed to be able to distinguish faces of criminals from those of law-abiding citizens, but actually recognizing smiles, and a supposed “sexual orientation detector” which can be better explained as a detector of glasses, beards and eyeshadow.

NLP’s Clever Hans Moment has Arrived
BERT is a fantastic NLP model, but it is not displaying deep understanding of the material. On certain tasks, at least, it is simply exploiting statistical correlations better than you can, and that makes it hard to tell what it is actually doing.
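This failure mode is easy to reproduce on synthetic data. Below is a minimal sketch (all numbers invented) in the spirit of the tank anecdote: a tiny logistic-regression classifier is trained on data where a "brightness" feature spuriously predicts the label almost perfectly, then tested on data where that correlation is reversed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_split(n, shortcut_correlates):
    # label: "tank" (1) or "no tank" (0)
    y = rng.integers(0, 2, n)
    # a weak genuine feature: correlated with the label, but noisy
    genuine = y + rng.normal(0.0, 2.0, n)
    # brightness: an almost-perfect predictor in training,
    # reversed in the test split
    bright = y if shortcut_correlates else 1 - y
    brightness = bright + rng.normal(0.0, 0.1, n)
    return np.column_stack([genuine, brightness]), y

X_tr, y_tr = make_split(1000, shortcut_correlates=True)
X_te, y_te = make_split(1000, shortcut_correlates=False)

# tiny logistic regression trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    grad = p - y_tr
    w -= 0.5 * (X_tr.T @ grad) / len(y_tr)
    b -= 0.5 * grad.mean()

def acc(X, y):
    return float(np.mean((X @ w + b > 0) == y))

print(f"train accuracy: {acc(X_tr, y_tr):.2f}")  # near-perfect
print(f"test accuracy:  {acc(X_te, y_te):.2f}")  # far below chance
```

Train accuracy looks excellent, but the model leaned almost entirely on the shortcut, so it collapses below chance the moment the spurious correlation disappears — exactly the Clever Hans pattern.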
Reminds me of one of my favorite quotes: “The first principle is that you must not fool yourself – and you are the easiest person to fool.” Richard Feynman
Anil Ananthaswamy, writing for Scientific American:
[T]he signals being collected by LIGO must be matched by supercomputers against hundreds of thousands of templates of possible gravitational-wave signatures. Promising signals trigger an internal alert; those that survive additional scrutiny trigger a public alert so that the global astronomy community can look for electromagnetic and neutrino counterparts.
Template matching is so computationally intensive that, for gravitational waves produced by mergers, astronomers use only four attributes of the colliding cosmic objects (the masses of both and the magnitudes of their spins) to make detections in real time. From there, LIGO scientists spend hours, days or even weeks performing more processing offline to further refine the understanding of a signal’s sources, a task called parameter estimation.
Seeking ways to make that labyrinthine process faster and more computationally efficient, in work published in 2018, Huerta and his research group at NCSA turned to machine learning. Specifically, Huerta and his then graduate student Daniel George pioneered the use of so-called convolutional neural networks (CNNs), which are a type of deep-learning algorithm, to detect and decipher gravitational-wave signals in real time.

Faced with a Data Deluge, Astronomers Turn to Automation
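At its core, the template matching the article describes is cross-correlation of detector data against a bank of precomputed waveforms. Here is a toy sketch, with a made-up chirp standing in for real merger waveforms and hypothetical (f0, f1) parameters in place of masses and spins; a real search also slides each template in time and whitens by the detector noise spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

def chirp(n, f0, f1):
    # toy stand-in for a merger waveform: a rising-frequency sweep
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2.0 * np.pi * (f0 + (f1 - f0) * t) * t)

# small "template bank" indexed by hypothetical source parameters
bank = {(f0, f1): chirp(4096, f0, f1)
        for f0 in (20, 30, 40) for f1 in (100, 150, 200)}

# synthetic detector data: one template buried in noise
true_params = (30, 150)
data = 0.5 * bank[true_params] + rng.normal(0.0, 1.0, 4096)

def match(data, template):
    # normalized cross-correlation at zero lag
    unit = template / np.linalg.norm(template)
    return float(np.dot(data, unit))

best = max(bank, key=lambda params: match(data, bank[params]))
print("best-matching template:", best)
```

Even this toy version hints at the cost problem: every candidate signal is scored against every template, which is why the real search restricts itself to four source attributes for real-time detection and defers full parameter estimation to offline processing.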
And they learned a bit about what the neural networks are seeing:
For Ntampaka, these results suggest that machine-learning systems are not entirely immune to interpretation. “It’s a misunderstanding within the community that they only can be black boxes,” she says. “I think interpretability is on the horizon. It’s coming. We are starting to be able to do it now.” But she also acknowledges that had her team not already known the underlying physics connecting the x-ray emissions from galaxy clusters to their mass, it might not have figured out that the neural network was excising the cores from its analysis.
Gary Marcus, NYU professor of psychology and neural science, is skeptical of deep reinforcement learning in light of DeepMind’s loss of $572M last year:
My own guess?
Ten years from now we will conclude that deep reinforcement learning was overrated in the late 2010s, and that many other important research avenues were neglected. Every dollar invested in reinforcement learning is a dollar not invested somewhere else, at a time when, for example, insights from the human cognitive sciences might yield valuable clues. Researchers in machine learning now often ask, “How can machines optimize complex problems using massive amounts of data?” We might also ask, “How do children acquire language and come to understand the world, using less power and data than current AI systems do?” If we spent more time, money, and energy on the latter question than the former, we might get to artificial general intelligence a lot sooner.

DeepMind’s Losses and the Future of Artificial Intelligence
Deep learning has been so hyped that it will be difficult to meet expectations. And reinforcement learning has serious challenges when applied to real-world environments. But they are both revolutions in AI and will alter computing forever.
An engineer has built a counter-surveillance tool on top of the hardware and software stack for Tesla vehicles:
It uses the existing video feeds created by Tesla’s Sentry Mode feature and applies license plate and facial detection to determine if you are being followed.
Scout does all that in real-time and sends you notifications if it sees anything suspicious.

Turn your Tesla into a CIA-like counter-surveillance tool with this hack
A video demonstration is embedded in the article.
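The article doesn’t publish Scout’s internals, but the core “am I being followed?” check presumably reduces to counting repeated sightings of the same plate within a time window. A minimal sketch of that heuristic, with hypothetical plate data and a made-up `flag_followers` helper:

```python
from collections import defaultdict

# hypothetical detections a plate-reader might emit: (minutes, plate)
sightings = [
    (0, "ABC123"), (3, "XYZ789"), (22, "ABC123"),
    (47, "DEF456"), (61, "ABC123"), (90, "XYZ789"),
]

def flag_followers(sightings, window=90, threshold=3):
    """Flag plates seen `threshold` or more times within `window` minutes."""
    times = defaultdict(list)
    for t, plate in sightings:
        times[plate].append(t)
    flagged = set()
    for plate, ts in times.items():
        ts.sort()
        # check every run of `threshold` consecutive sightings
        for i in range(len(ts) - threshold + 1):
            if ts[i + threshold - 1] - ts[i] <= window:
                flagged.add(plate)
                break
    return flagged

print(flag_followers(sightings))  # ABC123: three sightings in 61 minutes
```

The hard part in practice is everything around this loop — reliable plate recognition from moving dashcam footage, and picking a window and threshold that don’t flag every commuter who shares your route.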
This is a reminder that intelligent surveillance tools are going to be available at massive scale to even private citizens, not just the government. As governments track citizens, will citizens track government actors and individual police officers? What will we do with all of this data?
The Economist pens an essay on freedom of expression that is worth reading in full:
Who is the greater threat to free speech: President Donald Trump or campus radicals? Left and right disagree furiously about this. But it is the wrong question, akin to asking which of the two muggers currently assaulting you is leaving more bruises. What matters is that big chunks of both left and right are assaulting the most fundamental of liberties—the ability to say what you think. . . .
. . . Human beings are not free unless they can express themselves. Minds remain narrow unless exposed to different viewpoints. Ideas are more likely to be refined and improved if vigorously questioned and tested. Protecting students from unwelcome ideas is like refusing to vaccinate them against measles. When they go out into the world, they will be unprepared for its glorious but sometimes challenging diversity.

As societies polarise, free speech is under threat. It needs defenders
Andrew Liptak, writing for The Verge:
The report says that while fatigue and lack of training played a role in the accident, the design of the ship’s control console was also a contributing factor. Located in the middle of the McCain’s bridge, the Ship’s Control Console (SCC) features a pair of touch-screens on both the Helm and Lee Helm stations, through which the crew could steer and propel the ship. Investigators found that the crew had placed it in “backup manual mode,” which removed computer-assisted help, because it allowed for a “more direct form of communication between steering and the SCC.” That setting meant that any crew member at another station could take over steering operations, and when the crew tried to regain control of the ship from multiple stations, control “shifted from the lee helm, to aft steering, to the helm, and back to aft steering.”
. . . Specifically, the board points to the touchscreens on the bridge, noting that mechanical throttles are generally preferred because “they provide both immediate and tactile feedback to the operator.” The report notes that had mechanical controls been present, the helmsmen would likely have been alerted that there was an issue early on, and recommends that the Navy adhere to better design standards.

The US Navy will replace the touchscreen controls with mechanical ones on its destroyers
The crash killed 10 U.S. sailors.
Rachel Thomas with an excellent essay on 8 Things You Need to Know About Surveillance:
I frequently talk with people who are not that concerned about surveillance, or who feel that the positives outweigh the risks. Here, I want to share some important truths about surveillance:
1. Surveillance can facilitate human rights abuses and even genocide
2. Data is often used for different purposes than why it was collected
3. Data often contains errors
4. Surveillance typically operates with no accountability
5. Surveillance changes our behavior
6. Surveillance disproportionately impacts the marginalized
7. Data privacy is a public good
8. We don’t have to accept invasive surveillance
The issues are of course more complex than anyone can summarize in a brief essay, but Thomas’s points on data often containing errors (3) and the frequent lack of processes to remedy those errors (4) deserve special emphasis. We tend to assume that automated systems are accurate and work well because that is our own experience of them. But for many people they do not work well, and that failure has a dramatic impact on their lives.