Surveillance in schools edition

Surveillance, surveillance everywhere. Next stop, public schools.

In rural Weld County, Colorado, a school official got an alert from GoGuardian, a company that monitors students’ internet searches, that a student was doing a Google search for “how to kill myself” late one evening. The official worked with a social worker to call law enforcement to conduct an in-person safety check at the student’s home, said Dr Teresa Hernandez, the district’s chief intervention and safety officer. When the student’s mother answered the door, she was confused, and said that her child had been upstairs sleeping since 9pm. “We had the search history to show, actually, no, that’s not what was going on,” Hernandez said.

Federal law requires that American public schools block access to harmful websites, and that they “monitor” students’ online activities. What exactly this “monitoring” means has never been clearly defined: the Children’s Internet Protection Act, passed nearly 20 years ago, was driven in part by fears that American children might look at porn on federally funded school computers.

As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

Under digital surveillance: how American schools spy on millions of kids

What is going on in Western China?

Torture – metal nails, fingernails pulled out, electric shocks – takes place in the “black room.” Punishment is a constant. The prisoners are forced to take pills and get injections. It’s for disease prevention, the staff tell them, but in reality they are the human subjects of medical experiments. Many of the inmates suffer from cognitive decline. Some of the men become sterile. Women are routinely raped.

A Million People Are Jailed at China’s Gulags. I Managed to Escape. Here’s What Really Goes on Inside

Now this is worth a trade war.

Can medical data ever be truly anonymous?

Almost by definition, meaningful medical data is unique to individuals. Yet health care studies and developing technologies need access to large amounts of medical data to refine techniques and make new discoveries. Historically this medical data has been anonymized to hide true individual identities. Will that even be possible in the future?

A magnetic resonance imaging scan includes the entire head, including the subject’s face. And while the countenance is blurry, imaging technology has advanced to the point that the face can be reconstructed from the scan. 

Under some circumstances, that face can be matched to an individual with facial recognition software.

You Got a Brain Scan at the Hospital. Someday a Computer May Use It to Identify You.

Google was recently sued because plaintiffs alleged that it had not sufficiently anonymized health care data as a result of its parallel collection of location data from Android phones. (The location data could allegedly be combined with dates of hospital admission in the health care data to re-identify individuals.)

Anonymization is hard, and it’s getting harder.
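The Google suit describes a classic linkage attack: a record stripped of names still carries quasi-identifiers (like an admission date) that can be joined against a second dataset that does carry identities. Here is a minimal sketch of that idea with entirely made-up data; the names, dates, and the `reidentify` helper are hypothetical, invented for illustration, not drawn from the actual case.

```python
from datetime import date

# "Anonymized" health records still retain the admission date,
# a quasi-identifier: (record_id, admission_date, diagnosis)
health_records = [
    ("rec-001", date(2019, 3, 4), "fracture"),
    ("rec-002", date(2019, 3, 9), "asthma"),
]

# A separate dataset of location traces tied to real identities,
# e.g. phone location history: (name, place, date_seen)
location_traces = [
    ("Alice", "hospital", date(2019, 3, 4)),
    ("Bob", "cafe", date(2019, 3, 4)),
    ("Carol", "hospital", date(2019, 3, 9)),
]

def reidentify(records, traces, place="hospital"):
    """Link each health record to people seen at `place` on its
    admission date. A record is re-identified when exactly one
    person matches."""
    matches = {}
    for rec_id, admit_date, diagnosis in records:
        candidates = [name for name, loc, seen in traces
                      if loc == place and seen == admit_date]
        if len(candidates) == 1:
            matches[rec_id] = (candidates[0], diagnosis)
    return matches

print(reidentify(health_records, location_traces))
# {'rec-001': ('Alice', 'fracture'), 'rec-002': ('Carol', 'asthma')}
```

With only two people at the hospital on those dates, each “anonymous” record collapses to a single named individual. The more auxiliary datasets exist, the more quasi-identifiers become unique keys, which is why stripping names alone is not enough.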

HUD rules changing on use of automated decision making in housing markets

Landlords and lenders are pushing the Department of Housing and Urban Development to make it easier for businesses to discriminate against possible tenants using automated tools. Under a new proposal that just finished its public comment period, HUD suggested raising the bar for some legal challenges, making discrimination cases less likely to succeed.

Banks and landlords want to overturn federal rules on housing algorithms

The HUD proposed rule adds a new burden-shifting framework that would require plaintiffs to plead five specific elements to make a prima facie case that “a challenged practice actually or predictably results in a disparate impact on a protected class of persons . . . .” Current regulations permit complaints against such practices “even if the practice was not motivated by discriminatory intent.” The new rule continues to allow such complaints, but would allow defendants to rebut the claim at the pleading stage by asserting that a plaintiff has not alleged facts sufficient to support a prima facie claim.

One new requirement is that the plaintiff plead that the practice is “arbitrary, artificial, and unnecessary.” This introduces a kind of balancing test even if the practice has discriminatory impact. (A balancing test is already somewhat present in Supreme Court precedent, and the rule purports to be following this precedent.) As a result, if the challenged practice nevertheless serves a “legitimate objective,” the defendant may rebut the claim at the pleading stage.

The net result of the proposed rule will be to make it easier for new technologies, especially artificial intelligence technologies, to pass muster under housing discrimination laws. If the technology has a legitimate objective, it may not run afoul of HUD rules despite having a disparate impact on a protected class of persons.

This is not theoretical. HUD sued Facebook for housing discrimination earlier this year.

Segmented social media, meet segmented augmented reality

Users on social media are often in their own universes. Liberals often don’t even see the content that conservatives see, and vice versa.

Imagine if that kind of segmentation extended to augmented reality as well:

Imagine a world that’s filled with invisible graffiti. Open an app, point your phone at a wall, and blank brick or cement becomes a canvas. Create art with digital spraypaint and stencils, and an augmented reality system will permanently store its location and placement, creating the illusion of real street art. If friends or social media followers have the app, they can find your painting on a map and come see it. You might scrawl an in-joke across the door of a friend’s apartment, or paint a gorgeous mural on the side of a local store.

Now imagine a darker world. Members of hate groups gleefully swap pictures of racist tags on civil rights monuments. Students bully each other by spreading vicious rumors on the walls of a target’s house. Small businesses get mobbed beyond capacity when a big influencer posts a sticker on their window. The developers of Mark AR, an app that’s described as “the world’s first augmented reality social platform,” are trying to create the good version of this system. They’re still figuring out how to avoid the bad one.

Is the world ready for virtual graffiti?

I first read China Miéville’s The City & the City many years ago, and I keep thinking about how strange it was then, and how much the ideas have resonated since.

It’s not all bad news in the environment

To be fair, it’s mostly terrible news. But every once in a while it turns out not as awful as we expected.

A major component of ocean pollution is less devastating and more manageable than usually portrayed, according to a scientific team at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and the Massachusetts Institute of Technology.

Previous studies, including one last year by the United Nations Environment Program, have estimated that polystyrene, a ubiquitous plastic found in trash, could take thousands of years to degrade, making it nearly eternal. But in a new paper, five scientists found that sunlight can degrade polystyrene in centuries or even decades.

In the Sea, Not All Plastic Lasts Forever

AIs need to learn what we want

As it becomes increasingly apparent that we cannot tell artificial intelligence precisely what the goal should be, a growing chorus of researchers and ethicists are throwing up their hands and asking the AIs to learn that part as well.

Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines.

Uncertainty about objectives might sound counterproductive, but it is actually an essential feature of safe intelligent systems. It implies that no matter how intelligent they become, machines will always defer to humans. They will ask permission when appropriate, they will accept correction, and, most important, they will allow themselves to be switched off — precisely because they want to avoid doing whatever it is that would give humans a reason to switch them off.

How to Stop Superhuman A.I. Before It Stops Us

This raises a lot of questions, not the least of which is: what are our objectives? But it turns out we have the same problem describing what we want as we have describing how we perceive. We’re just going to have to show you.

China has corrupted us

Farhad Manjoo in an opinion piece for the New York Times:

A parade of American presidents on the left and the right argued that by cultivating China as a market — hastening its economic growth and technological sophistication while bringing our own companies a billion new workers and customers — we would inevitably loosen the regime’s hold on its people. Even Donald Trump, who made bashing China a theme of his campaign, sees the country mainly through the lens of markets. He’ll eagerly prosecute a pointless trade war against China, but when it comes to the millions in Hong Kong who are protesting China’s creeping despotism over their territory, Trump prefers to stay mum.

Well, funny thing: It turns out the West’s entire political theory about China has been spectacularly wrong. China has engineered ferocious economic growth in the past half century, lifting hundreds of millions of its citizens out of miserable poverty. But China’s growth did not come at any cost to the regime’s political chokehold.

A darker truth is now dawning on the world: China’s economic miracle hasn’t just failed to liberate Chinese people. It is also now routinely corrupting the rest of us outside of China.

Dealing With China Isn’t Worth the Moral Cost

What do we stand for as Americans? Just money?

There is no Western AI plan

Tim Wu, writing in the New York Times:

But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.

The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. . . .

To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.

America’s Risky Approach to Artificial Intelligence

Planning requires paying attention. We’re a little distracted in the West these days. And Russia and China love that.

It’s time to impeach Trump

I’ve been cautious about impeachment, but if this isn’t impeachable, nothing is.

President Trump directed the acting White House chief of staff to freeze more than $391 million in aid to Ukraine in the days before Mr. Trump was scheduled to speak by phone with the new Ukrainian president, two senior administration officials said Monday.

Trump Ordered Aid to Ukraine Frozen Days Before Call With Its Leader

The man used the power of the United States and taxpayer funds to pressure a foreign government into helping his political campaign. He’s betrayed his oath. It’s time.