NYC issues report on use of algorithms

New York City convened a task force in 2017 to “develop recommendations that will provide a framework for the use of and policy around ADS [automated decision systems].” The report is now out, and it was immediately criticized:

“It’s a waste, really,” says Meredith Whittaker, co-founder of the AI Now Institute and a member of the task force. “This is a sad precedent.” . . .

Ultimately, she says, the report, penned by city officials, “reflects the city’s view and disappointingly leaves out a lot of the dissenting views of task force members.” Members of the task force were given presentations on automated systems that Whittaker says “felt more like pitches or endorsements.” Efforts to make specific policy changes, like developing informational cards on algorithms, were scrapped, she says.

NYC’s algorithm task force was ‘a waste,’ member says

The report itself makes three fairly pointless recommendations: (1) build capacity for an equitable, effective, and responsible approach to the City’s ADS; (2) broaden public discussion on ADS; and (3) formalize ADS management functions.

Someone should really start thinking about this!

The report’s summary contains an acknowledgement that, “we did not reach consensus on every potentially relevant issue . . . .”

Anonymizing data is hard

Google tried to anonymize health care data and failed:

On July 19, NIH contacted Google to alert the company that its researchers had found dozens of images still included personally identifying information, including the dates the X-rays were taken and distinctive jewelry that patients were wearing when the X-rays were taken, the emails show.

Google almost made 100,000 chest X-rays public — until it realized personal data could be exposed

This article comes across as a warning, but it’s a success story. Smart people thought they could anonymize data, someone noticed they couldn’t, the lawyers got involved, and the project was called off.

That’s how the system is supposed to work.
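To make the failure mode concrete, here is a minimal sketch of why scrubbing metadata is not the same as anonymizing an image. The record layout and field names are invented for illustration; this is not Google’s or NIH’s actual pipeline:

    # Sketch of why metadata scrubbing falls short of anonymization.
    # The record structure and field names are hypothetical.
    SENSITIVE_FIELDS = {"patient_name", "patient_id", "birth_date", "study_date"}

    def scrub_metadata(record: dict) -> dict:
        """Remove known identifying fields from an image's metadata."""
        return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

    record = {
        "patient_name": "DOE^JANE",
        "study_date": "1999-07-19",
        "pixels": "<raw X-ray image data>",  # stands in for the image array
    }

    clean = scrub_metadata(record)
    assert "study_date" not in clean  # the header is now clean...

    # ...but the pixels are untouched. A date burned into the film, or a
    # distinctive necklace visible in the X-ray itself, survives every
    # header-level scrub. Catching that requires inspecting image content
    # (OCR, object detection, or human review), which is the hard part.

Which is presumably how dates and jewelry survived into the NIH images: they live in the pixels, not the headers.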

Indisputable benefits of facial recognition technology

There is a lot of concern about facial recognition technology, but of course there are also indisputable benefits:

The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India’s missing children with another one comprising shots of all the minors living in the country’s child care institutions.

He had just found out the results. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This momentous undertaking was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”

India is trying to build the world’s biggest facial recognition system (via Marginal Revolution)
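It is also a nice illustration of why this is a software problem. Matching 300,000 missing-children photos against 100,000 institutional photos is on the order of 30 billion pairwise comparisons. Face recognition systems make that tractable by reducing each photo to a numeric embedding and comparing embeddings. A toy sketch, with made-up vectors and threshold, nothing like the Delhi police system:

    # Toy sketch of embedding-based face matching. The vectors and the
    # threshold are invented for illustration.
    from math import sqrt

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

    # In a real system a neural network maps each photo to an embedding;
    # here we hard-code tiny stand-ins.
    missing = {"case_041": [0.9, 0.1, 0.3], "case_042": [0.2, 0.8, 0.5]}
    in_care = {"home_117": [0.88, 0.12, 0.31], "home_118": [0.1, 0.2, 0.9]}

    THRESHOLD = 0.99  # similarity above this flags a candidate match
    for case_id, m_vec in missing.items():
        for home_id, c_vec in in_care.items():
            if cosine(m_vec, c_vec) >= THRESHOLD:
                print(f"candidate match: {case_id} <-> {home_id}")

Each hit would still be verified by people; the software only prunes the thirty-billion-comparison haystack down to a reviewable list.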

Veterans Day

Maurice Isserman, quoting from a soldier’s letter in an essay for the NYT:

At 1st you wonder if you’ll be shot & you’re scared of not your own skin, but of the people that will get hurt if you are hit. All I could think about was keeping you & the folks from being affected by some 88 shell. I don’t seem to worry about myself because I knew if I did get it, I’d never know it. After a while I didn’t wonder if I get hit — I’d wonder when. Every time a shell came I’d ask myself “Is this the one?” In the 3rd phase I was sure I’d get it & began to ½ hope that the next one would do it & end the goddam suspense.

What It’s Really Like to Fight a War

The American story still resonates

On the 30th anniversary of German reunification, it’s still hard to be German if you aren’t a native West German:

For a long time, that discrimination was not merely subconscious, but structural. 

Even as Germany became a major immigration country, no real path to citizenship was extended even to the children of immigrants born in the country.

After the fall of communism, the intrinsic racism of German citizenship law became impossible to ignore. Russian citizens with German ancestry who spoke no German were suddenly allowed passports, while second-generation Turks born and raised in Germany were not. 

The change to the immigration law in 2000 opened parallel tracks to citizenship for those who were born in Germany or who had lived in the country for at least eight years.

As a child, Idil Baydar says she felt German. But that has changed. The 44-year-old daughter of a Turkish guest worker who arrived in the 1970s now describes herself as a “passport German foreigner.” 

“The Germans have turned me into a migrant,” said Ms. Baydar, a comedian who has grown popular on YouTube by mocking Germany’s uneasy relationship with its largest immigrant group.

Germany Has Been Unified for 30 Years. Its Identity Still Is Not.

In contrast, the American story is told in Ronald Reagan’s last speech as President of the United States.

Impact of hospital ransomware

Information security is a public health concern too.

Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.

As PBS noted in its coverage of the Vanderbilt study, after data breaches as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined.

The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.

“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”

Study: Ransomware, Data Breaches at Hospitals tied to Uptick in Fatal Heart Attacks

“Face surveillance” vs “face identification”

Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise on facial recognition technology:

We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.

On the other hand, we should allow “face identification”— again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition

They propose four requirements for allowing facial IDs in law enforcement:

  1. facial IDs should be proven effective across gender and race;
  2. facial IDs should be restricted only to serious crimes;
  3. facial IDs should not be limited to criminal databases, inclusion in which may have been influenced by racist policing policies; and
  4. judicial warrants should be required.

But unless we ban face surveillance for private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.

AIs continue to improve at what are essentially war simulations

DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab’s more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature.

Not only that, but DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game’s playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.

DeepMind’s StarCraft 2 AI is now better than 99.8 percent of all human players

It is remarkable that the computer was able to achieve this performance while being restricted to a roughly human rate of actions (22 non-duplicated actions every five seconds). In the real world we won’t see this kind of restriction.
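For a concrete sense of what that cap looks like mechanically, here is a small sketch of one way it could be enforced. This is my guess at an enforcement scheme, not DeepMind’s actual interface code:

    # Sketch of an action-rate limiter in the spirit of AlphaStar's
    # "22 non-duplicated actions per five seconds" cap. An illustrative
    # guess at one enforcement scheme, not DeepMind's code.
    from collections import deque

    class ActionLimiter:
        def __init__(self, max_actions=22, window=5.0):
            self.max_actions = max_actions
            self.window = window
            self.history = deque()  # (timestamp, action) pairs

        def try_act(self, now: float, action: str) -> bool:
            # Forget actions that have aged out of the five-second window.
            while self.history and now - self.history[0][0] >= self.window:
                self.history.popleft()
            # Repeats of the most recent action don't count against the
            # budget (the "non-duplicated" part of the rule, as I read it).
            if self.history and self.history[-1][1] == action:
                return True
            if len(self.history) >= self.max_actions:
                return False  # over budget: the agent has to wait
            self.history.append((now, action))
            return True

    limiter = ActionLimiter()
    # 30 distinct actions arriving within one second: only 22 get through.
    allowed = sum(limiter.try_act(t / 30, f"cmd_{t}") for t in range(30))
    print(allowed)  # -> 22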

And, as in chess, we don’t really know what insight the AI is gaining into the game.

A former law enforcement lawyer flips on encryption

Jim Baker was the general counsel of the FBI during its much publicized dispute with Apple over iPhone encryption. He now says public officials should become “among the strongest supporters of widely available strong encryption”:

I know full well that this approach will be a bitter pill for some in law enforcement and other public safety fields to swallow, and many people will reject it outright. It may make some of my former colleagues angry at me. I expect that some will say that I’m simply joining others who have left the government and switched sides on encryption to curry favor with the tech sector in order to get a job. That is wrong. My dim views about cybersecurity risks, China and Huawei are essentially the same as those that I held while in government. I also think that my overall approach on encryption today—as well as my frustration with Congress—is generally consistent with the approach I had while I was in government.

I have long said—as I do here—that encryption poses real challenges for public safety officials; that any proposed technical solution must properly balance all of the competing equities; and that (absent an unlikely definitive judicial ruling as a result of litigation) Congress must change the law to resolve the issue. What has changed is my acceptance of, or perhaps resignation to, the fact that Congress is unlikely to act, as well as my assessment that the relevant cybersecurity risks to society have grown disproportionately over the years when compared with other risks.

Rethinking Encryption

In a nutshell: strong encryption is already widely available (not just in the U.S.), we’re probably already in a golden age of surveillance, weak cybersecurity is a bigger problem, and… China.

YouTube is ground zero for the attention economy

The attention economy helps explain much of the news, politics, and media we see these days. The way people receive information has changed more in the last five years than in perhaps the whole of prior human history, and certainly more than at any time since the invention of the printing press.

And YouTube, it seems, is ground zero for the hyper-refinement of data-driven, attention-seeking algorithms:

In some ways, YouTube’s algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube’s recommended videos.

The design of this algorithm, of course, is driven by YouTube’s parent company, Alphabet, maximizing its own goal: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube’s algorithm form connections—statistical weightings that favor some pathways over others—based on the colossal amount of data that we all generate by using the site. It may seem an innocuous or even sensible way to determine what people want to see; but without oversight, the unintended consequences can be nasty.

Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these. Speaking to TheNextWeb, he pointed out, “The problem is that the AI isn’t built to help you get what you want—it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”

More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.

The result is that users can begin with innocuous searches for relatively mild content, and find themselves quickly dragged towards extremist or conspiratorial material. A survey of 30 attendees at a Flat Earth conference showed that all but one originally came upon the Flat Earth conspiracy via YouTube, with the lone dissenter exposed to the ideas from family members who were in turn converted by YouTube.

Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time

Conspiracy theories are YouTube theories. Maybe that should be their new name.
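The feedback loop the article describes can be caricatured in a few lines. In this toy engagement-maximizing recommender (invented numbers and a cartoon user model, nothing like YouTube’s actual neural networks), whatever holds attention longest ends up dominating the recommendations:

    # Toy caricature of an engagement-driven feedback loop. Videos,
    # weights, and the user model are invented for illustration.
    import random

    random.seed(0)
    videos = {"cooking": 1.0, "news": 1.0, "conspiracy": 1.0}

    def recommend():
        # Sample proportionally to weight: higher weight, more exposure.
        total = sum(videos.values())
        r = random.uniform(0, total)
        for name, weight in videos.items():
            r -= weight
            if r <= 0:
                return name
        return name

    def watch_time(name):
        # Cartoon user: inflammatory content holds attention longer.
        return 10.0 if name == "conspiracy" else 2.0

    for _ in range(1000):
        pick = recommend()
        # Watch time is the only training signal, so engagement feeds
        # straight back into future exposure.
        videos[pick] += 0.01 * watch_time(pick)

    print(videos)  # "conspiracy" ends with by far the largest weight

The loop never asks whether the content is true or good for the viewer; watch time is the whole objective, which is Chaslot’s point.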