Information security is a public health concern too.
Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.
As PBS noted in its coverage of the Vanderbilt study, the hundreds of hospitals that had experienced a data breach saw as many as 36 additional deaths per 10,000 heart attacks annually afterward.
The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.
“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”

Study: Ransomware, Data Breaches at Hospitals tied to Uptick in Fatal Heart Attacks
Law professors Barry Friedman and Andrew Guthrie Ferguson propose a compromise in facial recognition technology:
We should ban “face surveillance,” the use of facial recognition (in real time or from stored footage) to track people as they pass by public or private surveillance cameras, allowing their whereabouts to be traced.
On the other hand, we should allow “face identification” — again, with strict rules — so the police can use facial recognition technology to identify a criminal suspect caught on camera.

Here’s a Way Forward on Facial Recognition
They propose four requirements for allowing facial IDs in law enforcement:
- facial IDs should be proven effective across gender and race;
- facial IDs should be restricted only to serious crimes;
- facial IDs should not be limited to criminal databases, inclusion in which may have been influenced by racist policing policies; and
- judicial warrants should be required.
But unless we ban face surveillance for private entities, I think the cat is already out of the bag. Are police to be in a worse position than the local market? That seems untenable.
Jim Baker was the general counsel of the FBI during its much-publicized dispute with Apple over iPhone encryption. He now says public officials should become “among the strongest supporters of widely available strong encryption”:
I know full well that this approach will be a bitter pill for some in law enforcement and other public safety fields to swallow, and many people will reject it outright. It may make some of my former colleagues angry at me. I expect that some will say that I’m simply joining others who have left the government and switched sides on encryption to curry favor with the tech sector in order to get a job. That is wrong. My dim views about cybersecurity risks, China and Huawei are essentially the same as those that I held while in government. I also think that my overall approach on encryption today—as well as my frustration with Congress—is generally consistent with the approach I had while I was in government.
I have long said—as I do here—that encryption poses real challenges for public safety officials; that any proposed technical solution must properly balance all of the competing equities; and that (absent an unlikely definitive judicial ruling as a result of litigation) Congress must change the law to resolve the issue. What has changed is my acceptance of, or perhaps resignation to, the fact that Congress is unlikely to act, as well as my assessment that the relevant cybersecurity risks to society have grown disproportionately over the years when compared with other risks.

Rethinking Encryption
In a nutshell: strong encryption is already widely available (not just in the U.S.), we’re probably already in a golden age of surveillance, weak cybersecurity is a bigger problem, and… China.
Surveillance, surveillance everywhere. Next stop, public schools.
In rural Weld county, Colorado, a school official got an alert from GoGuardian, a company that monitors students’ internet searches, that a student was doing a Google search for “how to kill myself” late one evening. The official worked with a social worker to call law enforcement to conduct an in-person safety check at the student’s home, said Dr Teresa Hernandez, the district’s chief intervention and safety officer. When the student’s mother answered the door, she was confused, and said that her child had been upstairs sleeping since 9pm. “We had the search history to show, actually, no, that’s not what was going on,” Hernandez said.
Federal law requires that American public schools block access to harmful websites, and that they “monitor” students’ online activities. What exactly this “monitoring” means has never been clearly defined: the Children’s Internet Protection Act, passed nearly 20 years ago, was driven in part by fears that American children might look at porn on federally funded school computers.
As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

Under digital surveillance: how American schools spy on millions of kids
Users on social media are often in their own universes. Liberals often don’t even see the content that conservatives see, and vice versa.
Imagine if that kind of segmentation extended to augmented reality as well:
Imagine a world that’s filled with invisible graffiti. Open an app, point your phone at a wall, and blank brick or cement becomes a canvas. Create art with digital spraypaint and stencils, and an augmented reality system will permanently store its location and placement, creating the illusion of real street art. If friends or social media followers have the app, they can find your painting on a map and come see it. You might scrawl an in-joke across the door of a friend’s apartment, or paint a gorgeous mural on the side of a local store.
Now imagine a darker world. Members of hate groups gleefully swap pictures of racist tags on civil rights monuments. Students bully each other by spreading vicious rumors on the walls of a target’s house. Small businesses get mobbed beyond capacity when a big influencer posts a sticker on their window. The developers of Mark AR, an app that’s described as “the world’s first augmented reality social platform,” are trying to create the good version of this system. They’re still figuring out how to avoid the bad one.

Is the world ready for virtual graffiti?
I first read China Miéville’s The City & the City many years ago, and I keep thinking about how strange it was then, and how much the ideas have resonated since.
According to the Pew Research Center, a full 56 percent said that they trust police and officials to use these technologies responsibly. That goes for situations in which no consent is given: About 59 percent said it is acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces.

Police Use of Facial Recognition is Just Fine, Say Most Americans
Black and Hispanic adults approve at lower rates. See the study for details.
In contrast to recent U.S. municipal decisions restricting government use of facial recognition technology, a UK court has ruled that police use of the technology does not violate any fundamental rights.
In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

Police Use of Facial Recognition Is Accepted by British Court
The UK is of course one of the most surveilled countries in the world.
President Trump tweeted an apparently classified image of an Iranian launch pad on August 30. He has the right to do so. But he probably did not expect everything that the tweet would reveal.
Now astronomers have easily identified the exact satellite that took the image. By measuring the semi-major and semi-minor axes of the ellipse (as viewed in the image) of the circular launch platform, they were able to determine the angle of view. This matched precisely with a satellite known as USA 224, previously of unknown capability. Google Earth shows the launch pad as about 60 meters in diameter, which therefore suggests a satellite resolution capability of 10 centimeters per pixel. That resolution is very impressive and also previously unknown.
The detail in the image is surprising, even to satellite imagery experts. In an interview with NPR, Melissa Hanham of the Open Nuclear Network in Vienna said, “… I did not believe [the image] could come from a satellite.” Hanham also said that “I imagine adversaries are going to take a look at this image and reverse-engineer it to figure out how the sensor itself works and what kind of post-production techniques they’re using.”

Thanks to Trump, We’ve Got a Better Idea of the Capabilities of US Surveillance Satellites
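The geometry the astronomers used is straightforward: a circular launch pad viewed off-nadir projects as an ellipse, so the ratio of the ellipse’s minor to major axis gives the viewing angle, and dividing the pad’s known diameter by the number of pixels it spans gives the ground resolution. A minimal sketch of that calculation (the pixel counts below are hypothetical, chosen only to illustrate the arithmetic; only the ~60 m diameter and ~10 cm/pixel figures come from the article):

```python
import math

def viewing_angle_deg(major_px: float, minor_px: float) -> float:
    """Angle off-nadir: a circle tilted by angle t appears as an
    ellipse with minor/major axis ratio cos(t)."""
    return math.degrees(math.acos(minor_px / major_px))

def ground_resolution_m(diameter_m: float, major_px: float) -> float:
    """Meters per pixel: known object size divided by its pixel span
    along the major axis (the undistorted direction)."""
    return diameter_m / major_px

# Hypothetical measurements: a 60 m pad spanning 600 px along the
# major axis and 300 px along the minor axis.
print(ground_resolution_m(60.0, 600))   # 0.10 m/pixel, i.e. 10 cm
print(viewing_angle_deg(600, 300))      # 60 degrees off-nadir
```

The key point is that nothing classified is needed: the pad’s true size comes from Google Earth, and the axis ratio is measurable directly from the tweeted image.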
An engineer has built a counter-surveillance tool on top of the hardware and software stack for Tesla vehicles:
It uses the existing video feeds created by Tesla’s Sentry Mode features and uses license plate and facial detection to determine if you are being followed.
Scout does all that in real-time and sends you notifications if it sees anything suspicious.

Turn your Tesla into a CIA-like counter-surveillance tool with this hack
A video demonstration is embedded in the article.
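The core idea is simple even if Scout’s actual implementation isn’t public: run plate detection on each frame and flag any plate that keeps reappearing. A minimal sketch, assuming a pluggable `detect_plates` function standing in for a real ALPR model (all names here are hypothetical, not Scout’s API):

```python
from collections import Counter
from typing import Callable, Iterable, List

def flag_followers(
    frames: Iterable,
    detect_plates: Callable[[object], List[str]],
    threshold: int = 3,
) -> List[str]:
    """Return plates seen in at least `threshold` distinct frames.

    `detect_plates` is a stand-in for a real license-plate
    recognizer; here it just needs to return the plate strings
    visible in one frame.
    """
    sightings = Counter()
    for frame in frames:
        # set() so a plate counts once per frame, not once per detection
        for plate in set(detect_plates(frame)):
            sightings[plate] += 1
    return [plate for plate, n in sightings.items() if n >= threshold]

# Toy usage: frames are just lists of plate strings, and the
# "detector" passes them through.
frames = [["ABC123"], ["ABC123", "XYZ789"], ["ABC123"]]
print(flag_followers(frames, lambda f: f))  # ['ABC123']
```

A real system would add time windows (a car seen three times in five minutes is more suspicious than three times in a week) and the facial-detection channel the article mentions, but the counting logic is the heart of it.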
This is a reminder that intelligent surveillance tools are going to be available at massive scale even to private citizens, not just the government. As governments track citizens, will citizens track government actors and individual police officers? What will we do with all of this data?