Social media posts as crime-fighting tools

James Vincent, reporting for The Verge:

As reported by The Philadelphia Inquirer, at the start of their investigation, FBI agents only had access to helicopter footage from a local news station. This showed a woman wearing a bandana throwing flaming debris into the smashed window of a police sedan.

By searching for videos of the protests uploaded to Instagram and Vimeo, the agents were able to find additional footage of the incident, and spotted a peace sign tattoo on the woman’s right forearm. After finding a set of 500 pictures of the protests shared by an amateur photographer, they were able to clearly see what the woman was wearing, including a T-shirt with the slogan: “Keep the Immigrants. Deport the Racists.”

The only place to buy this exact T-shirt was an Etsy store, where a user calling themselves “alleycatlore” had left a five-star review for the seller just a few days before the protest. Using Google to search for this username, agents then found a matching profile at the online fashion marketplace Poshmark which listed the user’s name as “Lore-elisabeth.” 

A search for “Lore-elisabeth” led to a LinkedIn profile for one Lore Elisabeth Blumenthal, employed as a massage therapist at a Philadelphia massage studio. Videos hosted by the studio showed an individual with the same distinctive peace tattoo on their arm. A phone number listed for Blumenthal led to an address. As reported by NBC Philadelphia, a subpoena served to the Etsy seller showed a “Keep the Immigrants. Deport the Racists.” T-shirt had recently been delivered to that same address.

FBI used Instagram, an Etsy review, and LinkedIn to identify a protestor accused of arson

Zoom and enhance!

Using computer systems to “zoom and enhance” is a TV trope.

But we’re getting better.

Researchers at Duke University have released a paper on PULSE, an AI algorithm that constructs a high-resolution face from a low-resolution image. And the results look pretty good:
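Under the hood, PULSE doesn’t sharpen pixels directly: it searches a face generator’s latent space for a high-resolution output whose *downscaled* version matches the low-resolution input. A toy sketch of that latent-space search, with a hypothetical random linear map standing in for the paper’s StyleGAN generator (nothing here beyond NumPy; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

HI, LO, LATENT = 16, 4, 8
G = rng.normal(size=(HI, LATENT))        # stand-in "generator": latent -> hi-res signal

def downscale(hi):
    # Average each block of HI // LO consecutive samples.
    return hi.reshape(LO, HI // LO).mean(axis=1)

target_hi = G @ rng.normal(size=LATENT)  # a "true" hi-res signal
target_lo = downscale(target_hi)         # the low-res observation we start from

# PULSE-style search: gradient descent on the latent z, minimizing
# || downscale(G @ z) - target_lo ||^2.
# Downscaling is linear here, so precompute D @ G for the gradient.
D = np.kron(np.eye(LO), np.full((1, HI // LO), 1 / (HI // LO)))
DG = D @ G

z = np.zeros(LATENT)
for _ in range(5000):
    residual = DG @ z - target_lo
    z -= 0.05 * (2 * DG.T @ residual)

recon_hi = G @ z                         # a hi-res signal consistent with the low-res input
print("low-res match error:", float(np.abs(downscale(recon_hi) - target_lo).max()))
```

The real algorithm constrains this search to the generator’s learned manifold of faces, which is exactly where the bias described below comes from: the reconstruction can only look like the faces the generator was trained on.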

6/23/2020 Update: The PULSE algorithm exhibits a notable bias towards Caucasian features:

It’s a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias

Automated systems are often wrong

And automated background checks may be terrible!

The reports can be created in a few seconds, using searches based on partial names or incomplete dates of birth. Tenants generally have no choice but to submit to the screenings and typically pay an application fee for the privilege. Automated reports are usually delivered to landlords without a human ever glancing at the results to see if they contain obvious mistakes, according to court records and interviews.

How Automated Background Checks Freeze Out Renters

So much of ethical AI comes down to requiring a human-in-the-loop for any system that has a non-trivial impact on other humans.
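That requirement can be made concrete as a routing rule: never let a high-impact or low-confidence automated decision execute without a human reviewing it first. A hypothetical sketch (the names and the confidence threshold are illustrative, not drawn from any system cited above):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float   # model's confidence in this outcome, 0.0-1.0
    high_impact: bool   # does the decision materially affect a person?

def route(decision: Decision, confidence_floor: float = 0.95) -> str:
    """Return 'auto' only for low-stakes, high-confidence decisions;
    everything else goes to a human reviewer."""
    if decision.high_impact or decision.confidence < confidence_floor:
        return "human_review"
    return "auto"

# A tenant-screening denial is high impact, so it always needs review,
# no matter how confident the model is.
print(route(Decision("tenant-123", "deny_application", 0.99, True)))   # human_review
print(route(Decision("msg-456", "filter_spam", 0.99, False)))          # auto
```

The point of the pattern is that the automated system only *recommends* in the consequential cases; a person remains accountable for the final call.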

States can’t be sued for copyright infringement

In March, the U.S. Supreme Court decided Allen v. Cooper, Governor of North Carolina and ruled that States cannot be hauled into federal court on the issue of copyright infringement.

The decision is basically an extension of the Court’s prior decision on whether States can be sued for patent infringement in federal court (also no), and Justice Kagan writes for the unanimous Court in saying, “Florida Prepaid all but prewrote our decision today.”

But one of the most interesting discussions in the opinion is about when, perhaps, States might be hauled into federal court for copyright infringement under the Fourteenth Amendment prohibition against deprivation of property without due process:

All this raises the question: When does the Fourteenth Amendment care about copyright infringement? Sometimes, no doubt. Copyrights are a form of property. See Fox Film Corp. v. Doyal, 286 U. S. 123, 128 (1932). And the Fourteenth Amendment bars the States from “depriv[ing]” a person of property “without due process of law.” But even if sometimes, by no means always. Under our precedent, a merely negligent act does not “deprive” a person of property. See Daniels v. Williams, 474 U. S. 327, 328 (1986). So an infringement must be intentional, or at least reckless, to come within the reach of the Due Process Clause. See id., at 334, n. 3 (reserving whether reckless conduct suffices). And more: A State cannot violate that Clause unless it fails to offer an adequate remedy for an infringement, because such a remedy itself satisfies the demand of “due process.” See Hudson v. Palmer, 468 U. S. 517, 533 (1984). That means within the broader world of state copyright infringement is a smaller one where the Due Process Clause comes into play.

Slip Op. at 11.

Presumably this means that if North Carolina set up a free radio streaming service with Taylor Swift songs and refused to pay any royalties, they might properly be hauled into federal court. But absent some egregiously intentional or reckless conduct, States remain sovereign in copyright disputes.

AI Bias Bounties

Like bug bounties, but for bias in AI:

A similar problem exists in information security, and one solution gaining traction is “bug bounty programs”. Bug bounty programs seek to allow security researchers and laymen to submit their exploits directly to the affected parties in exchange for compensation.

The market rate for security bounties for the average company on HackerOne ranges from \$100 to \$1,000. Bigger companies can pay more. In 2017, Facebook disclosed paying \$880,000 in bug bounties, with a minimum of \$500 per bounty. Google pays from \$100 to \$31,337 per exploit and paid \$3,000,000 in security bounties in 2016.

It seems reasonable to suggest that at least the big companies with large market caps, which already have bounty reporting infrastructure, attempt to reward and collaborate with those who find bias in their software rather than have them take it to the press in frustration, with no compensation for their efforts.

Bias Bounty Programs as a Method of Combatting Bias in AI

DC District Court: “the CFAA does not criminalize mere terms-of-service violations on consumer websites”

Two academics wished to test whether employment websites discriminate based on race or gender. They intended to submit false information (e.g., fictitious profiles) to these websites, but worried that these submissions would violate the sites’ terms of service and could subject them to prosecution under the federal Computer Fraud and Abuse Act. So they sued for clarity.

The District Court ruled that:

a user should be deemed to have “accesse[d] a computer without authorization,” 18 U.S.C. § 1030(a)(2), only when the user bypasses an authenticating permission requirement, or an “authentication gate,” such as a password restriction that requires a user to demonstrate “that the user is the person who has access rights to the information accessed,” . . . .

Sandvig v. Barr (Civil Action No. 16-1386, March 27, 2020) at 22.

In other words, terms-of-service violations are not violations of the Computer Fraud and Abuse Act, and cannot be criminalized by virtue of that act.

Three main points appeared to guide the Court’s reasoning:

  1. The statutory text and legislative history contemplate a “two-realm internet” of public and private machines. Private machines require authorization, but public machines (e.g., websites) do not.
  2. Website terms-of-service contracts provide inadequate notice for criminal violations. No one reads them! It would be crazy to criminalize ToS non-adherence.
  3. Enabling private website owners to define the scope of criminal liability under the CFAA simply by editing their terms-of-service contract also seems crazy!

It’s worth noting that the government here argued that the researchers did not have standing to bring this suit and cited a lack of “credible threat of prosecution” because Attorney General guidance “expressly cautions against prosecutions based on [terms-of-service] violations.”

But the absence of a specific disavowal of prosecution by the Department undermines much of the government’s argument. . . . Furthermore, as noted above the government has brought similar Access Provision prosecutions in the past and thus created a credible threat of prosecution.

Discovery has not helped the government’s position. John T. Lynch, Jr., the Chief of the Computer Crime and Intellectual Property Section of the Criminal Division of the Department of Justice, testified at his deposition that it was not “impossible for the Department to bring a CFAA prosecution based on [similar] facts and de minimis harm.” Dep. of John T. Lynch, Jr. [ECF No. 48-4] at 154:3–7. Although Lynch has also stated that he does not “expect” the Department to do so, Aff. of John T. Lynch, Jr. [ECF No. 21-1] ¶ 9, “[t]he Constitution ‘does not leave us at the mercy of noblesse oblige[.]”

Sandvig v. Barr at 10.

Meanwhile, the US Supreme Court today agreed to decide whether abusing authorized access to a computer is a federal crime. In Van Buren v. United States:

a former Georgia police officer was convicted of breaching the CFAA by looking up what he thought was an exotic dancer’s license plate number in the state’s database in exchange for $6,000. The ex-officer, Nathan Van Buren, was the target of an FBI sting operation at the time.

. . . .

Van Buren’s attorneys argued that the Eleventh Circuit’s October 2019 decision to uphold the CFAA conviction defined the law in overly broad terms that could criminalize seemingly innocuous behavior, like an employee violating company policy by using work computers to set up an NCAA basketball “March Madness” bracket or a law student using a legal database meant for “educational use” to access local housing laws in a dispute with their landlord.

. . . .

The First, Fifth and Seventh Circuits have all agreed with the Eleventh Circuit’s expansive view of the CFAA, while the Second, Fourth and Ninth Circuits have defined accessing a computer “in excess of authorization” more narrowly, the petition says.

High Court To Examine Scope Of Federal Anti-Hacking Law

Constant aerial surveillance, coming to an American city

In 2015, Radiolab ran a fascinating story about a re-purposed military project that put a drone in the sky all day long to film an entire city in high resolution. This allows the operators to rewind the tape and track anyone moving, forward or backward, anywhere within the city. It’s an amazing tool for fighting crime. And it’s a remarkable privacy intrusion.

The question was, would Americans be OK with this? I figured it was just a matter of time. Maybe another DC sniper would create the push for it.

Five years later Baltimore is the first off the sidelines, and the ACLU is suing to stop them:

The American Civil Liberties Union has sued to stop Baltimore police from launching a sweeping “eye in the sky” surveillance program. The initiative, operated by a company called Persistent Surveillance Systems (PSS), would send planes flying over Baltimore at least 40 hours a week as they almost continuously collect wide-angle photos of the city. If not blocked, a pilot program is expected to begin later this year.

Lawsuit fights new Baltimore aerial surveillance program