Google tried to anonymize health care data and failed:
On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.
Google almost made 100,000 chest X-rays public — until it realized personal data could be exposed
This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.
That’s how the system is supposed to work.
There is a lot of concern about facial recognition technology, but of course there are also indisputable benefits:
The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India’s missing children with another one comprising shots of all the minors living in the country’s child care institutions.
He had just found out the results. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.
This momentous undertaking was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
India is trying to build the world’s biggest facial recognition system (via Marginal Revolution)
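The matching step the article describes can be sketched as a nearest-neighbor search over face embeddings. Everything below is hypothetical — the embedding vectors are random stand-ins for the output of a real face-recognition model, and the similarity threshold is invented:

```python
import numpy as np

def match_faces(missing_embeddings, institution_embeddings, threshold=0.8):
    """For each missing-child embedding, find the closest institutional
    embedding by cosine similarity; report a match above the threshold."""
    matches = []
    for i, m in enumerate(missing_embeddings):
        # cosine similarity against every institutional face
        sims = institution_embeddings @ m / (
            np.linalg.norm(institution_embeddings, axis=1) * np.linalg.norm(m)
        )
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            matches.append((i, j, float(sims[j])))
    return matches

# Toy data: 3 "missing" faces vs. 4 "institution" faces (random vectors
# standing in for real model embeddings).
rng = np.random.default_rng(0)
inst = rng.normal(size=(4, 128))
missing = np.vstack([inst[2] + 0.01 * rng.normal(size=128),  # near-duplicate
                     rng.normal(size=(2, 128))])             # unrelated faces
print(match_faces(missing, inst))
```

The point of the sketch is the scale argument in the quote: a similarity search over 300,000 × 100,000 candidate pairs is trivial for a machine and hopeless by hand.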
DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab’s more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature.
Not only that, but DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game’s playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.
DeepMind’s StarCraft 2 AI is now better than 99.8 percent of all human players
It is remarkable that the computer was able to achieve this performance while restricted to roughly human action rates (22 non-duplicated actions every five seconds). In the real world we won’t see this kind of restriction.
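The restriction in the quote amounts to a sliding-window rate limit on the agent's actions. A minimal sketch of such a limiter, assuming a simplified interface (this is not DeepMind's actual mechanism — only the cap of 22 and the five-second window come from the article):

```python
from collections import deque

class ActionLimiter:
    """Allow at most `max_actions` non-duplicated actions per `window` seconds."""
    def __init__(self, max_actions=22, window=5.0):
        self.max_actions = max_actions
        self.window = window
        self.timestamps = deque()   # times of accepted actions
        self.last_action = None

    def allow(self, action, now):
        # Repeated (duplicated) actions don't count against the cap.
        if action == self.last_action:
            return True
        # Drop accepted actions that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_actions:
            return False   # over the cap: the action is rejected
        self.timestamps.append(now)
        self.last_action = action
        return True

limiter = ActionLimiter()
# 30 distinct actions issued within one second: only the first 22 get through.
accepted = sum(limiter.allow(f"move_{i}", now=i / 30) for i in range(30))
print(accepted)  # 22
```

An unrestricted agent would simply skip the `allow` check, which is the point of the commentary: real-world deployments won't be throttled to human speeds.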
And, as in chess, we don’t really know what insight the AI is gaining into the game.
The attention economy helps explain much of the news, politics, and media we see these days. The way people receive information has changed more in the last five years than in perhaps the whole of human history, and certainly since the invention of the printing press.
And YouTube, it seems, is ground zero for the hyper-refinement of data-driven, attention-seeking algorithms:
In some ways, YouTube’s algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube’s recommended videos.
The design of this algorithm, of course, is driven by YouTube’s parent company, Alphabet, maximizing its own goal: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube’s algorithm form connections—statistical weightings that favor some pathways over others—based on the colossal amount of data that we all generate by using the site. It may seem an innocuous or even sensible way to determine what people want to see; but without oversight, the unintended consequences can be nasty.
Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these. Speaking to TheNextWeb, he pointed out, “The problem is that the AI isn’t built to help you get what you want—it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”
More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.
The result is that users can begin with innocuous searches for relatively mild content, and find themselves quickly dragged towards extremist or conspiratorial material. A survey of 30 attendees at a Flat Earth conference showed that all but one originally came upon the Flat Earth conspiracy via YouTube, with the lone dissenter exposed to the ideas from family members who were in turn converted by YouTube.
Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time
Conspiracy theories are YouTube theories. Maybe that should be their new name.
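The feedback loop the quote describes can be reduced to a toy model: each video's recommendation score is nudged toward the watch time it earns, so content a small subset of users binges climbs the rankings for everyone. All the numbers and category names below are invented:

```python
# Toy engagement-driven recommender: scores rise with watch time earned.
scores = {"cooking": 1.0, "news": 1.0, "conspiracy": 1.0}

def update(video, watch_minutes, lr=0.1):
    """Reinforce a video's score in proportion to watch time."""
    scores[video] += lr * watch_minutes

def recommend():
    return max(scores, key=scores.get)

# Most users watch a little of everything...
for _ in range(100):
    update("cooking", 0.2)
    update("news", 0.2)
# ...but a small subset binges one topic for hours.
for _ in range(5):
    update("conspiracy", 60)

print(recommend())  # 'conspiracy'
```

Five heavy sessions outweigh a hundred casual ones, which is the mechanism behind the Flat Earth result above: the optimization target is watch time, not accuracy.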
Almost by definition, meaningful medical data is unique to individuals. Yet health care studies and developing technologies need access to large amounts of medical data to refine techniques and make new discoveries. Historically this medical data has been anonymized to hide true individual identities. Will that even be possible in the future?
A magnetic resonance imaging scan includes the entire head, including the subject’s face. And while the countenance is blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan.
Under some circumstances, that face can be matched to an individual with facial recognition software.
You Got a Brain Scan at the Hospital. Someday a Computer May Use It to Identify You.
Google was recently sued by plaintiffs alleging that it had not sufficiently anonymized health care data because of its parallel collection of location data from Android phones. (The location data could allegedly be combined with dates of hospital admission in the health care data to re-identify individuals.)
Anonymization is hard, and it’s getting harder.
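The alleged re-identification is a classic linkage attack: join the "anonymized" records to an auxiliary dataset on a shared quasi-identifier, here the admission date. A minimal sketch with invented data (all names, dates, and fields are hypothetical):

```python
# "Anonymized" health records: names removed, but admission dates kept.
health_records = [
    {"patient_id": "p1", "admitted": "2019-06-12", "diagnosis": "fracture"},
    {"patient_id": "p2", "admitted": "2019-07-03", "diagnosis": "asthma"},
]

# Auxiliary location data tying real identities to hospital visits by date.
location_log = [
    {"name": "Alice", "place": "General Hospital", "date": "2019-06-12"},
    {"name": "Bob", "place": "Coffee Shop", "date": "2019-06-12"},
    {"name": "Carol", "place": "General Hospital", "date": "2019-07-03"},
]

def reidentify(records, log):
    """Join anonymized records to named location pings on the date field."""
    out = {}
    for rec in records:
        hits = [e["name"] for e in log
                if e["date"] == rec["admitted"] and "Hospital" in e["place"]]
        if len(hits) == 1:          # a unique hit re-identifies the record
            out[rec["patient_id"]] = hits[0]
    return out

print(reidentify(health_records, location_log))
```

Neither dataset is identifying on its own; the join is what breaks the anonymization — which is why each new auxiliary data source makes the problem harder.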
Landlords and lenders are pushing the Department of Housing and Urban Development to make it easier for businesses to discriminate against possible tenants using automated tools. Under a new proposal that just finished its public comment period, HUD suggested raising the bar for some legal challenges, making discrimination cases less likely to succeed.
Banks and landlords want to overturn federal rules on housing algorithms
The HUD proposed rule adds a new burden-shifting framework that would require plaintiffs to plead five specific elements to make a prima facie case that “a challenged practice actually or predictably results in a disparate impact on a protected class of persons . . . .” Current regulations permit complaints against such practices “even if the practice was not motivated by discriminatory intent.” The new rule continues to allow such complaints, but would allow defendants to rebut the claim at the pleading stage by asserting that a plaintiff has not alleged facts sufficient to support a prima facie claim.
One new requirement is that the plaintiff plead that the practice is “arbitrary, artificial, and unnecessary.” This introduces a kind of balancing test even if the practice has discriminatory impact. (A balancing test is already somewhat present in Supreme Court precedent, and the rule purports to be following this precedent.) As a result, if the challenged practice nevertheless serves a “legitimate objective,” the defendant may rebut the claim at the pleading stage.
The net result of the proposed rule will be to make it easier for new technologies, especially artificial intelligence technologies, to pass muster under housing discrimination laws. If the technology has a legitimate objective, it may not run afoul of HUD rules despite having a disparate impact on a protected class of persons.
This is not theoretical. HUD sued Facebook for housing discrimination earlier this year.
As it becomes increasingly apparent that we cannot tell artificial intelligence precisely what the goal should be, a growing chorus of researchers and ethicists is throwing up its hands and asking the AIs to learn that part as well.
Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines.
Uncertainty about objectives might sound counterproductive, but it is actually an essential feature of safe intelligent systems. It implies that no matter how intelligent they become, machines will always defer to humans. They will ask permission when appropriate, they will accept correction, and, most important, they will allow themselves to be switched off — precisely because they want to avoid doing whatever it is that would give humans a reason to switch them off.
How to Stop Superhuman A.I. Before It Stops Us
This raises a lot of questions, not the least of which is: what are our objectives? But it turns out we have the same problem describing what it is we want as we have describing how we perceive. We’re just going to have to show you.
Tim Wu, writing in the New York Times:
But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.
The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. . . .
To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.
America’s Risky Approach to Artificial Intelligence
Planning requires paying attention. We’re a little distracted in the West these days. And Russia and China love that.
Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” including “box surfing, where seekers learn to bring a box to a locked ramp in order to jump on top of the box and then ‘surf’ it to the hider’s shelter,” according to OpenAI.
AI breaks simulated laws of physics to win at hide and seek
These are entertaining simulations to watch.
Joseph Cox with Motherboard has authored a story on a massive private license plate surveillance network called DRN:
This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN’s database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network.
This Company Built a Private Surveillance Network. We Tracked Someone With It
I wrote recently about private surveillance projects that may meet or exceed government efforts. It won’t be long before the license plate readers are facial recognition scanners. It’s probably happening now.