Impact of hospital ransomware

Information security is a public health concern too.

Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.

As PBS noted in its coverage of the Vanderbilt study, the hundreds of breached hospitals examined saw as many as 36 additional deaths per 10,000 heart attacks annually.
The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.

“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”

Study: Ransomware, Data Breaches at Hospitals tied to Uptick in Fatal Heart Attacks

“Face surveillance” vs “face identification”

Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise on facial recognition technology:

We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.

On the other hand, we should allow “face identification”— again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition

They propose four requirements for allowing facial IDs in law enforcement:

  1. facial IDs should be proven effective across gender and race;
  2. facial IDs should be restricted to serious crimes;
  3. facial IDs should not be limited to criminal databases, inclusion in which may have been influenced by racist policing policies; and
  4. judicial warrants should be required.

But unless we ban face surveillance by private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.

AIs continue to improve at what are essentially war simulations

DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab’s more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature.

Not only that, but DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game’s playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.

DeepMind’s StarCraft 2 AI is now better than 99.8 percent of all human players

It is remarkable that the computer was able to achieve this performance while restricted to roughly human-level action rates (22 non-duplicated actions every five seconds). In the real world we won’t see this kind of restriction.
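
To make that budget concrete, here is a minimal sketch of a rolling-window action limiter. It is purely illustrative: the class and method names are mine, and I am assuming “non-duplicated” means consecutive repeats of the same action do not count against the budget; this is not DeepMind’s implementation.

    from collections import deque
    import time

    class ActionBudget:
        """Toy limiter: at most `max_actions` non-duplicated actions in any
        rolling `window` seconds. Names and semantics are assumptions made
        for illustration, not DeepMind's actual mechanism."""

        def __init__(self, max_actions=22, window=5.0):
            self.max_actions = max_actions
            self.window = window
            self.recent = deque()     # (timestamp, action) pairs still in the window
            self.last_action = None

        def try_act(self, action, now=None):
            now = time.monotonic() if now is None else now
            # Expire entries that have fallen out of the rolling window.
            while self.recent and now - self.recent[0][0] > self.window:
                self.recent.popleft()
            if action == self.last_action:
                return True           # duplicated action: doesn't spend budget
            if len(self.recent) >= self.max_actions:
                return False          # budget exhausted: the agent must wait
            self.recent.append((now, action))
            self.last_action = action
            return True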

And, as in chess, we don’t really know what insight the AI is gaining into the game.

A former law enforcement lawyer flips on encryption

Jim Baker was the general counsel of the FBI during its much publicized dispute with Apple over iPhone encryption. He now says public officials should become “among the strongest supporters of widely available strong encryption”:

I know full well that this approach will be a bitter pill for some in law enforcement and other public safety fields to swallow, and many people will reject it outright. It may make some of my former colleagues angry at me. I expect that some will say that I’m simply joining others who have left the government and switched sides on encryption to curry favor with the tech sector in order to get a job. That is wrong. My dim views about cybersecurity risks, China and Huawei are essentially the same as those that I held while in government. I also think that my overall approach on encryption today—as well as my frustration with Congress—is generally consistent with the approach I had while I was in government.

I have long said—as I do here—that encryption poses real challenges for public safety officials; that any proposed technical solution must properly balance all of the competing equities; and that (absent an unlikely definitive judicial ruling as a result of litigation) Congress must change the law to resolve the issue. What has changed is my acceptance of, or perhaps resignation to, the fact that Congress is unlikely to act, as well as my assessment that the relevant cybersecurity risks to society have grown disproportionately over the years when compared with other risks.

Rethinking Encryption

In a nutshell: strong encryption is already widely available (not just in the U.S.), we’re probably already in a golden age of surveillance, weak cybersecurity is a bigger problem, and… China.

YouTube is ground zero for the attention economy

The attention economy helps explain much of the news, politics, and media we see these days. The way people receive information has changed more in the last five years than in perhaps the whole of human history, and certainly more than at any point since the invention of the printing press.

And YouTube, it seems, is ground zero for the hyper-refinement of data-driven, attention-seeking algorithms:

In some ways, YouTube’s algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube’s recommended videos.

The design of this algorithm, of course, is driven by YouTube’s parent company, Alphabet, maximizing its own goal: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube’s algorithm form connections—statistical weightings that favor some pathways over others—based on the colossal amount of data that we all generate by using the site. It may seem an innocuous or even sensible way to determine what people want to see; but without oversight, the unintended consequences can be nasty.

Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these. Speaking to TheNextWeb, he pointed out, “The problem is that the AI isn’t built to help you get what you want—it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”

More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.

The result is that users can begin with innocuous searches for relatively mild content, and find themselves quickly dragged towards extremist or conspiratorial material. A survey of 30 attendees at a Flat Earth conference showed that all but one originally came upon the Flat Earth conspiracy via YouTube, with the lone dissenter exposed to the ideas from family members who were in turn converted by YouTube.

Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time

Conspiracy theories are YouTube theories. Maybe that should be their new name.
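
To see why the feedback loop in the quoted passage is so effective, here is a deliberately tiny caricature of an engagement-maximizing recommender. Nothing here reflects YouTube’s actual system; the video names and the simple weights table are invented, and the only point is that a system trained solely on watch time will reinforce whatever people linger on.

    import random
    from collections import defaultdict

    # Toy recommender: every (current video -> candidate) edge gets a weight,
    # and the only training signal is watch time. Entirely hypothetical.
    weights = defaultdict(lambda: 1.0)

    def recommend(current, candidates):
        scores = [weights[(current, c)] for c in candidates]
        return random.choices(candidates, weights=scores, k=1)[0]

    def record_watch(current, chosen, minutes):
        # Longer watches strengthen the pathway, making the same
        # recommendation more likely next time.
        weights[(current, chosen)] += minutes

    # One simulated user who lingers on conspiratorial videos.
    candidates = ["cooking", "local news", "conspiracy"]
    for _ in range(500):
        pick = recommend("news clip", candidates)
        record_watch("news clip", pick, minutes=40 if pick == "conspiracy" else 2)

    print(max(weights, key=weights.get))  # the most-watched pathway wins out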

Surveillance in schools edition

Surveillance, surveillance everywhere. Next stop, public schools.

In rural Weld county, Colorado, a school official got an alert from GoGuardian, a company that monitors students’ internet searches, that a student was doing a Google search for “how to kill myself” late one evening. The official worked with a social worker to call law enforcement to conduct an in-person safety check at the student’s home, said Dr Teresa Hernandez, the district’s chief intervention and safety officer. When the student’s mother answered the door, she was confused, and said that her child had been upstairs sleeping since 9pm. “We had the search history to show, actually, no, that’s not what was going on,” Hernandez said.

Federal law requires that American public schools block access to harmful websites, and that they “monitor” students’ online activities. What exactly this “monitoring” means has never been clearly defined: the Children’s Internet Protection Act, passed nearly 20 years ago, was driven in part by fears that American children might look at porn on federally funded school computers.

As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

Under digital surveillance: how American schools spy on millions of kids

What is going on in Western China?

Torture – metal nails, fingernails pulled out, electric shocks – takes place in the “black room.” Punishment is a constant. The prisoners are forced to take pills and get injections. It’s for disease prevention, the staff tell them, but in reality they are the human subjects of medical experiments. Many of the inmates suffer from cognitive decline. Some of the men become sterile. Women are routinely raped.

A Million People Are Jailed at China’s Gulags. I Managed to Escape. Here’s What Really Goes on Inside

Now this is worth a trade war.

Can medical data ever be truly anonymous?

Almost by definition, meaningful medical data is unique to individuals. Yet health care studies and developing technologies need access to large amounts of medical data to refine techniques and make new discoveries. Historically this medical data has been anonymized to hide true individual identities. Will that even be possible in the future?

A magnetic resonance imaging scan includes the entire head, including the subject’s face. And while the countenance is blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan. 

Under some circumstances, that face can be matched to an individual with facial recognition software.

You Got a Brain Scan at the Hospital. Someday a Computer May Use It to Identify You.

Google was recently sued by plaintiffs alleging that it had not sufficiently anonymized health care data, because its parallel collection of location data from Android phones could allegedly be combined with hospital admission dates in the health care data to re-identify individuals.
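
Here is a minimal sketch of that kind of linkage attack: joining a supposedly de-identified record set to an outside data source on a shared field such as admission date. The records, dates, and field names below are invented for illustration, not taken from the case.

    # Hypothetical data: a "de-identified" hospital extract still carries
    # admission dates, and a phone's location history reveals which days a
    # known person visited the hospital. Joining on the date re-identifies them.
    hospital_records = [
        {"record_id": "A17", "admit_date": "2019-06-03"},
        {"record_id": "B42", "admit_date": "2019-06-05"},
        {"record_id": "C08", "admit_date": "2019-06-11"},
    ]

    # Days a known individual's location trace places them at the hospital.
    known_person_hospital_days = {"2019-06-05"}

    matches = [r for r in hospital_records
               if r["admit_date"] in known_person_hospital_days]
    print(matches)  # one match is enough to put a name on an "anonymous" record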

Anonymization is hard, and it’s getting harder.

HUD rules changing on use of automated decision making in housing markets

Landlords and lenders are pushing the Department of Housing and Urban Development to make it easier for businesses to discriminate against possible tenants using automated tools. Under a new proposal that just finished its public comment period, HUD suggested raising the bar for some legal challenges, making discrimination cases less likely to succeed.

Banks and landlords want to overturn federal rules on housing algorithms

HUD’s proposed rule adds a new burden-shifting framework that would require plaintiffs to plead five specific elements to make a prima facie case that “a challenged practice actually or predictably results in a disparate impact on a protected class of persons . . . .” Current regulations permit complaints against such practices “even if the practice was not motivated by discriminatory intent.” The new rule continues to allow such complaints, but would allow defendants to rebut the claim at the pleading stage by asserting that a plaintiff has not alleged facts sufficient to support a prima facie claim.

One new requirement is that the plaintiff plead that the practice is “arbitrary, artificial, and unnecessary.” This introduces a kind of balancing test even if the practice has discriminatory impact. (A balancing test is already somewhat present in Supreme Court precedent, and the rule purports to be following this precedent.) As a result, if the challenged practice nevertheless serves a “legitimate objective,” the defendant may rebut the claim at the pleading stage.

The net result of the proposed rule will be to make it easier for new technologies, especially artificial intelligence technologies, to pass muster under housing discrimination laws. If the technology has a legitimate objective, it may not run afoul of HUD rules despite having a disparate impact on a protected class of persons.

This is not theoretical. HUD sued Facebook for housing discrimination earlier this year.

Segmented social media, meet segmented augmented reality

Users on social media are often in their own universes. Liberals often don’t even see the content that conservatives see, and vice versa.

Imagine if that kind of segmentation extended to augmented reality as well:

Imagine a world that’s filled with invisible graffiti. Open an app, point your phone at a wall, and blank brick or cement becomes a canvas. Create art with digital spraypaint and stencils, and an augmented reality system will permanently store its location and placement, creating the illusion of real street art. If friends or social media followers have the app, they can find your painting on a map and come see it. You might scrawl an in-joke across the door of a friend’s apartment, or paint a gorgeous mural on the side of a local store.

Now imagine a darker world. Members of hate groups gleefully swap pictures of racist tags on civil rights monuments. Students bully each other by spreading vicious rumors on the walls of a target’s house. Small businesses get mobbed beyond capacity when a big influencer posts a sticker on their window. The developers of Mark AR, an app that’s described as “the world’s first augmented reality social platform,” are trying to create the good version of this system. They’re still figuring out how to avoid the bad one.

Is the world ready for virtual graffiti?

I first read China Miéville’s The City & the City many years ago, and I keep thinking about how strange it was then, and how much the ideas have resonated since.