Facial recognition software countermeasures

Software that tweaks photos to hide them from facial recognition:

A start-up called Clearview AI, for example, scraped billions of online photos to build a tool for the police that could lead them from a face to a Facebook account, revealing a person’s identity.

Now researchers are trying to foil those systems. A team of computer engineers at the University of Chicago has developed a tool that disguises photos with pixel-level changes that confuse facial recognition systems.
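To make the idea concrete, here is a minimal toy sketch of pixel-level cloaking in Python. The `embed()` function is a made-up stand-in for a recognizer's embedding network, and the random-search loop is purely illustrative; the Chicago team's actual tool computes targeted perturbations against real face-recognition models.

```python
import numpy as np

def embed(image):
    """Hypothetical stand-in for a face recognizer's embedding network.
    A real system would be a deep CNN mapping pixels to a feature vector."""
    rng = np.random.default_rng(0)  # fixed projection = deterministic toy model
    proj = rng.standard_normal((image.size, 64))
    return image.ravel() @ proj

def cloak(image, budget=8 / 255, steps=200, seed=1):
    """Random search for a small pixel perturbation that pushes the image's
    embedding away from the original, while keeping every pixel change
    within `budget` so the photo still looks unchanged to a human."""
    rng = np.random.default_rng(seed)
    base = embed(image)
    best_delta, best_dist = np.zeros_like(image), 0.0
    for _ in range(steps):
        delta = rng.uniform(-budget, budget, size=image.shape)
        candidate = np.clip(image + delta, 0.0, 1.0)
        dist = np.linalg.norm(embed(candidate) - base)
        if dist > best_dist:
            best_dist, best_delta = dist, candidate - image
    return np.clip(image + best_delta, 0.0, 1.0)

photo = np.random.default_rng(42).uniform(size=(32, 32, 3))  # toy "photo"
cloaked = cloak(photo)
print("max pixel change:", np.abs(cloaked - photo).max())
print("embedding shift:", np.linalg.norm(embed(cloaked) - embed(photo)))
```

The point is the constraint: every pixel change stays under a small budget, so a human sees the same photo while the recognizer's feature vector moves.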

This Tool Could Protect Your Photos From Facial Recognition

This is, of course, just an arms race: the facial recognition will improve, and the hiding software will get tweaked in response.

Rite Aid has been using facial recognition for 8 years

Jeffrey Dastin writing for Reuters:

The cameras matched facial images of customers entering a store to those of people Rite Aid previously observed engaging in potential criminal activity, causing an alert to be sent to security agents’ smartphones. Agents then reviewed the match for accuracy and could tell the customer to leave.

Rite Aid deployed facial recognition systems in hundreds of U.S. stores

The DeepCam systems were primarily deployed in “lower-income, non-white neighborhoods,” and, according to current and former Rite Aid employees, a previous system called FaceFirst regularly made mistakes:

“It doesn’t pick up Black people well,” one loss prevention staffer said last year while using FaceFirst at a Rite Aid in an African-American neighborhood of Detroit. “If your eyes are the same way, or if you’re wearing your headband like another person is wearing a headband, you’re going to get a hit.”
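Mechanically, systems like this usually boil down to comparing a face embedding from the camera against a gallery of stored embeddings and alerting when similarity crosses a threshold. Here is a minimal sketch of that pipeline; the watchlist, embedding size, and threshold are all assumptions, not details of Rite Aid's DeepCam or FaceFirst deployments.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist: one stored embedding per previously flagged person.
rng = np.random.default_rng(0)
watchlist = {f"subject-{i}": rng.standard_normal(128) for i in range(100)}

ALERT_THRESHOLD = 0.6  # assumed; real systems tune this against false-match rates

def check_entrant(face_embedding):
    """Compare a camera-captured face embedding against the watchlist and
    return the best match above threshold for a human agent to review."""
    best_id, best_score = None, -1.0
    for subject_id, stored in watchlist.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = subject_id, score
    if best_score >= ALERT_THRESHOLD:
        return {"match": best_id, "score": best_score}  # would page an agent
    return None

print(check_entrant(rng.standard_normal(128)))  # a random face: usually no alert
```

The complaint quoted above is the threshold trade-off in action: set it low enough to catch repeat offenders and you also flood agents with look-alike false matches.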

Shortcuts in Learning

So begins a great article on DNNs and shortcut learning:

Recently, researchers trained a deep neural network to classify breast cancer, achieving a performance of 85%. When used in combination with three other neural network models, the resulting ensemble method reached an outstanding 99% classification accuracy, rivaling expert radiologists with years of training.

The result described above is true, with one little twist: instead of using state-of-the-art artificial deep neural networks, researchers trained “natural” neural networks – more precisely, a flock of four pigeons – to diagnose breast cancer.

Shortcuts: How Neural Networks Love to Cheat

Sometimes the learning is clever; sometimes the learning is a problem. Mostly, though, it’s hard to make sure an AI learns what you want it to learn.
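A toy illustration of the shortcut problem (my own sketch, not an experiment from the article): train a classifier on data where a spurious feature tracks the label almost perfectly, then watch accuracy collapse when that correlation disappears at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "shortcut" dataset (illustrative assumption, not the article's experiment):
# feature 0 is the true-but-noisy signal; feature 1 is a spurious shortcut
# that tracks the label almost perfectly in training but is noise at test time.
rng = np.random.default_rng(0)
n = 2000
y_train = rng.integers(0, 2, n)
signal = y_train + rng.normal(0, 1.5, n)      # weak real signal
shortcut = y_train + rng.normal(0, 0.01, n)   # nearly perfect shortcut
X_train = np.column_stack([signal, shortcut])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At deployment, the shortcut feature no longer correlates with the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 1.5, n),
                          rng.normal(0, 1.0, n)])

print("train accuracy:", model.score(X_train, y_train))  # near 1.0, via the shortcut
print("test accuracy:", model.score(X_test, y_test))     # collapses toward chance
```

The model did exactly what the training data rewarded. That is the whole problem: nothing in the loss function distinguishes the feature you care about from the one that happens to correlate with it.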

Social media posts as crime-fighting tools

James Vincent, reporting for The Verge:

As reported by The Philadelphia Inquirer, at the start of their investigation, FBI agents only had access to helicopter footage from a local news station. This showed a woman wearing a bandana throwing flaming debris into the smashed window of a police sedan.

By searching for videos of the protests uploaded to Instagram and Vimeo, the agents were able to find additional footage of the incident, and spotted a peace sign tattoo on the woman’s right forearm. After finding a set of 500 pictures of the protests shared by an amateur photographer, they were able to clearly see what the woman was wearing, including a T-shirt with the slogan: “Keep the Immigrants. Deport the Racists.”

The only place to buy this exact T-shirt was an Etsy store, where a user calling themselves “alleycatlore” had left a five-star review for the seller just a few days before the protest. Using Google to search for this username, agents then found a matching profile at the online fashion marketplace Poshmark, which listed the user’s name as “Lore-elisabeth.”

A search for “Lore-elisabeth” led to a LinkedIn profile for one Lore Elisabeth Blumenthal, employed as a massage therapist at a Philadelphia massage studio. Videos hosted by the studio showed an individual with the same distinctive peace tattoo on their arm. A phone number listed for Blumenthal led to an address. As reported by NBC Philadelphia, a subpoena served to the Etsy seller showed a “Keep the Immigrants. Deport the Racists.” T-shirt had recently been delivered to that same address.

FBI used Instagram, an Etsy review, and LinkedIn to identify a protestor accused of arson

Zoom and enhance!

Using computer systems to zoom and enhance is a TV trope.

But we’re getting better.

Researchers at Duke University have released a paper on PULSE, an AI algorithm that constructs a high-resolution face from a low-resolution image. And the results look pretty good.
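The core idea of PULSE is to search a face generator's latent space for an image that, when downscaled, matches the low-res input. Here is a runnable toy sketch of that objective; the linear `G` is a made-up stand-in for the StyleGAN generator the paper actually uses, and the hill climb replaces its real optimization machinery.

```python
import numpy as np

# Hypothetical stand-ins: real PULSE uses StyleGAN as the generator and proper
# downscaling/optimization; here G is a fixed random linear map so the loop runs.
rng = np.random.default_rng(0)
LATENT, HI, LO = 64, 32, 8
G_matrix = rng.standard_normal((HI * HI, LATENT)) / np.sqrt(LATENT)

def G(z):
    """Toy 'generator': latent vector -> 32x32 grayscale image."""
    return (G_matrix @ z).reshape(HI, HI)

def downscale(img, factor=HI // LO):
    """Average-pool the high-res image down to the low-res grid."""
    return img.reshape(LO, factor, LO, factor).mean(axis=(1, 3))

def pulse_style_search(lowres, steps=2000, lr=0.05):
    """Hill-climb on z so that downscale(G(z)) matches the low-res input:
    the core PULSE objective, minus the real machinery (StyleGAN priors,
    spherical latent constraints, gradient-based optimizers)."""
    z = rng.standard_normal(LATENT)
    loss = np.linalg.norm(downscale(G(z)) - lowres)
    for _ in range(steps):
        cand = z + lr * rng.standard_normal(LATENT)
        cand_loss = np.linalg.norm(downscale(G(cand)) - lowres)
        if cand_loss < loss:
            z, loss = cand, cand_loss
    return G(z), loss

target = downscale(G(rng.standard_normal(LATENT)))  # a low-res image G can hit
hires, final_loss = pulse_style_search(target)
print("final low-res mismatch:", final_loss)
```

Note what this structure implies: the high-resolution output is whatever face the generator's prior considers most plausible, not detail recovered from the input. That is exactly how the bias described in the update below creeps in.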

6/23/2020 Update: The PULSE algorithm exhibits a notable bias towards Caucasian features:

It’s a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias

Automated systems are often wrong

And automated background checks may be terrible!

The reports can be created in a few seconds, using searches based on partial names or incomplete dates of birth. Tenants generally have no choice but to submit to the screenings and typically pay an application fee for the privilege. Automated reports are usually delivered to landlords without a human ever glancing at the results to see if they contain obvious mistakes, according to court records and interviews.

How Automated Background Checks Freeze Out Renters

So much of ethical AI comes down to requiring a human-in-the-loop for any system that has a non-trivial impact on other humans.

States can’t be sued for copyright infringement

In March, the U.S. Supreme Court decided Allen v. Cooper, Governor of North Carolina and ruled that States cannot be hauled into federal court on the issue of copyright infringement.

The decision is basically an extension of the Court’s prior decision on whether States can be sued for patent infringement in federal court (also no), and Justice Kagan writes for the unanimous Court in saying, “Florida Prepaid all but prewrote our decision today.”

But one of the most interesting discussions in the opinion is about when, perhaps, States might be hauled into federal court for copyright infringement under the Fourteenth Amendment prohibition against deprivation of property without due process:

All this raises the question: When does the Fourteenth Amendment care about copyright infringement? Sometimes, no doubt. Copyrights are a form of property. See Fox Film Corp. v. Doyal, 286 U. S. 123, 128 (1932). And the Fourteenth Amendment bars the States from “depriv[ing]” a person of property “without due process of law.” But even if sometimes, by no means always. Under our precedent, a merely negligent act does not “deprive” a person of property. See Daniels v. Williams, 474 U. S. 327, 328 (1986). So an infringement must be intentional, or at least reckless, to come within the reach of the Due Process Clause. See id., at 334, n. 3 (reserving whether reckless conduct suffices). And more: A State cannot violate that Clause unless it fails to offer an adequate remedy for an infringement, because such a remedy itself satisfies the demand of “due process.” See Hudson v. Palmer, 468 U. S. 517, 533 (1984). That means within the broader world of state copyright infringement is a smaller one where the Due Process Clause comes into play.

Slip Op. at 11.

Presumably this means that if North Carolina set up a free radio streaming service with Taylor Swift songs and refused to pay any royalties, they might properly be hauled into federal court. But absent some egregiously intentional or reckless conduct, States remain sovereign in copyright disputes.