Watching Things Being Done

Great essay by Dan Brooks:

People love to watch the video in which an unknown artist makes drawing a hand look easy, but they also love the pictures in which ordinary people make it look hard. Failure is funny, especially in this case, which has primed us with the plausible claim that anyone can draw a woman’s hand before yanking us back to the truth that basically no one can draw anything at all.

The Pleasure of Watching Others Confront Their Own Incompetence

Bright Line Trademark Rule on Likelihood of Confusion

I’m a sucker for the predictability of a bright line rule, and Camilla Hrdy at the Written Description blog describes a possible de facto rule about the likelihood of confusion in trademark cases:

In trademark law, infringement occurs if defendant’s use of plaintiff’s trademark is likely to cause confusion as to the source of defendant’s product or as to sponsorship or affiliation. Courts across circuits often frame the question as whether an “appreciable number” of ordinarily prudent purchasers are likely to be confused. But evidence of actual confusion is not required. There is not supposed to be a magic number. Courts are supposed to assess a variety of factors, including the similarity of the marks and the markets in which they are used, along with evidence of actual confusion, if any, in order to assess whether confusion is likely, at some point, to occur.

In theory.

But in practice, Bernstein asserted, there is a magic number: it’s around fifteen percent. Courts will often state that a survey finding 15% or more is sufficient to support likelihood of confusion, while under 15% suggests no likelihood of confusion.

Likelihood of Confusion: Is 15% The Magic Number?

There are, of course, many confounding factors, including whether this 15% applies to “gross confusion” (total confusion that includes noise from other factors) or “net confusion” (confusion caused only by use of the trademark), and the problems with survey evidence in general. But I’ll briefly fantasize about being asked what “likelihood of confusion” means in trademark law and answering, “15%. It’s just 15%.”
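The gross-versus-net distinction is easy to see with a toy calculation (the numbers and the test/control framing below are my own illustration, not from Hrdy’s post): net confusion is typically the confusion measured in a test cell minus the confusion measured in a control cell that omits the accused mark, which strips out survey noise.

```python
# Toy net-confusion calculation (illustrative numbers only).
test_cell_confused = 0.27     # 27% of respondents shown the accused mark were "confused"
control_cell_confused = 0.09  # 9% were "confused" even with the mark removed (survey noise)

net_confusion = test_cell_confused - control_cell_confused
print(f"gross: {test_cell_confused:.0%}, net: {net_confusion:.0%}")
# gross: 27%, net: 18% -- both clear a putative 15% threshold here,
# but a 20% gross figure with a 10% control cell would not.
```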

Using facial recognition in police investigations

The Georgetown Law Center on Privacy & Technology issued a report (with its own vanity URL!) on the NYPD’s use of face recognition technology, and it starts with a particularly arresting anecdote:

On April 28, 2017, a suspect was caught on camera reportedly stealing beer from a CVS in New York City. The store surveillance camera that recorded the incident captured the suspect’s face, but it was partially obscured and highly pixelated. When the investigating detectives submitted the photo to the New York Police Department’s (NYPD) facial recognition system, it returned no useful matches.

Rather than concluding that the suspect could not be identified using face recognition, however, the detectives got creative.

One detective from the Facial Identification Section (FIS), responsible for conducting face recognition searches for the NYPD, noted that the suspect looked like the actor Woody Harrelson, known for his performances in Cheers, Natural Born Killers, True Detective, and other television shows and movies. A Google image search for the actor predictably returned high-quality images, which detectives then submitted to the face recognition algorithm in place of the suspect’s photo. In the resulting list of possible candidates, the detectives identified someone they believed was a match—not to Harrelson but to the suspect whose photo had produced no possible hits.

This celebrity “match” was sent back to the investigating officers, and someone who was not Woody Harrelson was eventually arrested for petit larceny.

GARBAGE IN, GARBAGE OUT: FACE RECOGNITION ON FLAWED DATA

The report describes a number of incidents that it views as problematic, and they basically fall into two categories: (1) editing or reconstructing photos before submitting them to face recognition systems; and (2) simply uploading composite sketches of suspects to face recognition systems.

The report also describes a few incidents in which individuals were arrested based on very little evidence apart from the results of the face recognition technology, and it makes the claim that:

If it were discovered that a forensic fingerprint expert was graphically replacing missing or blurry portions of a latent print with computer-generated—or manually drawn—lines, or mirroring over a partial print to complete the finger, it would be a scandal.

I’m not sure this is true. Helping a computer system latch onto a possible set of matches seems an excellent way to narrow a list of suspects. But of course no one should be arrested or convicted based solely on fabricated fingerprint or facial “evidence”. We need to understand the limits of the technology used in the investigative process.
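To see why the probe photo matters so much, here is a minimal sketch (my own toy model, not the NYPD’s system) of how embedding-based face recognition typically produces its candidate list: every face is reduced to a vector, and the gallery is ranked by similarity to whatever probe you submit. Substitute a Woody Harrelson photo and the ranking reflects resemblance to Harrelson, not to the person on the store camera.

```python
import numpy as np

def cosine_similarity(a, b):
    """How alike two face embeddings are (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Stand-ins for mugshot embeddings; a real system would compute these with a face model.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # stand-in for the embedding of whatever photo was submitted

# The "possible candidates" are simply the gallery faces most similar to the probe.
candidates = sorted(gallery, key=lambda name: cosine_similarity(probe, gallery[name]),
                    reverse=True)[:10]
print(candidates)
```

The system always returns its nearest neighbors; it has no notion of whether the probe is the actual suspect, an edited photo, or a celebrity look-alike.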

As technology becomes more complex, it is increasingly difficult to understand how it does and does not work. License plate readers are fantastically powerful technology, responsible for solving really terrible crimes. But the technology stack makes mistakes. You cannot rely on it alone.

There is no difference in principle between facial recognition technology, genealogy searches, and license plate readers. They are powerful tools, but they are not perfect. And, crucially, they can be far less accurate when used on minority populations. The benefits are remarkable, but using powerful tools requires training: users need to understand how the technology works and where it can break down. This will always be true.

Salvador Dalí recreated with AI at Dalí Museum in Florida

What is dead may never die, at least with AI. The painter Salvador Dalí has been recreated on life-size video to interact with visitors to the Dalí Museum in St. Petersburg, Florida.

Using archival footage from interviews, GS&P pulled over 6,000 frames and used 1,000 hours of machine learning to train the AI algorithm on Dalí’s face. His facial expressions were then imposed over an actor with Dalí’s body proportions, and quotes from his interviews and letters were synced with a voice actor who could mimic his unique accent, a mix of French, Spanish, and English.

. . . . .

It’s hard to think of another artist who would be better suited for this than Dalí.

Deepfake Salvador Dalí takes selfies with museum visitors

This is going to be everywhere soon. How long until people start paying to have themselves recreated after they die?

The video is worth watching.

SF restricts its government agencies from using facial recognition technology

There are many reports that “SF bans facial recognition” (I’m looking at you, NYT), but this is not true. The “ban” merely restricts the city’s own government agencies (including the police) from using facial recognition.

San Francisco’s ban covers government agencies, including the city police and county sheriff’s department, but doesn’t affect the technology that unlocks your iPhone or cameras installed by businesses or individuals. It’s part of a broader package of rules, introduced in January by supervisor Aaron Peskin, that will require agencies to gain approval from the board before purchasing surveillance tech and will require that they publicly disclose its intended use.

SAN FRANCISCO BANS AGENCY USE OF FACIAL-RECOGNITION TECH

None of the reporting seems to link to the actual ordinance, but you can find it on the SF Board of Supervisors’ website. It is file #190110, introduced 1/29/2019. The actual ordinance is here. A summary is here.

Play with OpenAI’s GPT-2 language generation model

In February 2019, OpenAI disclosed a language generation algorithm called GPT-2. It does only one thing: predict the next word given all previous words in the text. And, while not perfect, it does this very well. When prompted with:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

it responds with:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

(The text continues.)

GPT-2 is a transformer-based neural network with 1.5 billion parameters trained on a dataset of 8 million web pages. Transformer-based networks were introduced by Google researchers in 2017 primarily for the purpose of language translation. They work on language by figuring out how much attention to pay to which words. Some words have more semantic value than others, and transformer-based neural networks can learn how to value different words with large amounts of training data. The biggest benefit of a transformer-based network is that the computation can be easily performed in parallel, in contrast to the more traditional and sequential RNN models used for language translation.
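As a rough illustration of “figuring out how much attention to pay to which words,” here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer (a toy NumPy version with made-up vectors, not GPT-2’s actual implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity of every query (word) to every key (word), scaled by sqrt(d_k).
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into "attention" weights that sum to 1 for each word.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                    # three "words", each a 4-dimensional vector
out = scaled_dot_product_attention(x, x, x)    # self-attention over the toy sequence
print(out.shape)                               # (3, 4): one re-weighted vector per word
```

Because the weights for every word over every other word come out of a couple of matrix multiplications, the whole computation parallelizes easily, which is the advantage over sequential RNNs noted above.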

In a controversial move, OpenAI originally declined to make the GPT-2 model available to researchers, citing concerns about it being used to create “deceptive, biased, or abusive language at scale . . . .” Recently, however, they have released a smaller, less capable version of the model, and are considering other ways to share the research with AI partners.

Anyways… now you can play with the smaller GPT-2 model at TalkToTransformer.com.
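If you would rather run it locally, something like the following works against the publicly released small model (this uses the Hugging Face transformers library as one way to load it; the post itself only points to TalkToTransformer.com):

```python
# Minimal sketch: sample a continuation from the small, publicly released GPT-2 model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# GPT-2 only ever predicts the next token; generate() just repeats that step.
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```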

Escape from Filter Bubbles by “Noisifying”

Joe Pinsker, writing for The Atlantic:

Max Hawkins, a 28-year-old programmer, elevated the goal of subverting algorithms to a way of life. After graduating from college in 2013 and getting a job at Google, Hawkins grew restless and sought ways to make his life more interesting. He built a tool that had Uber drop him off at random locations around the Bay Area. Then he built a tool that picked random publicly listed Facebook events for him to attend.

Hawkins found the variety refreshing, and after two years, he left his job. Every few months, he let a computer pick the city he would live in, based on airfare, cost-of-living estimates, and his projected income as a freelance programmer. He tried listening to music picked randomly by Spotify, wearing clothes bought randomly on Amazon, growing out random styles of facial hair, and arranging phone calls with friends on randomly selected topics.

How I Tried to Defy the Facebook Algorithm

And not just online!

A proposal to tax targeted digital ads

Paul Romer proposes tax policy, instead of antitrust, to nudge privacy in the right direction:

Of course, companies are incredibly clever about avoiding taxes. But in this case, that’s a good thing for all of us. This tax would spur their creativity. Ad-driven platform companies could avoid the tax entirely by switching to the business model that many digital companies already offer: an ad-free subscription. Under this model, consumers know what they give up, and the success of the business would not hinge on tracking customers with ever more sophisticated surveillance techniques. A company could succeed the old-fashioned way: by delivering a service that is worth more than it costs.

A Tax That Could Fix Big Tech

Not a bad idea.

Riley Howell

He kept charging. A bullet to the torso did not stop Riley Howell. A second bullet to the body did not prevent him from reaching his goal and hurling himself at the gunman who opened fire last week inside a classroom at the University of North Carolina at Charlotte. The third bullet came as Mr. Howell was inches from the gunman, who fired at point-blank range into his head.

. . . He tackled the gunman so forcefully that the suspect complained to first responders after his arrest of internal injuries, the parents said the authorities told them.

. . . . .

“The chief said no one was shot after Riley body-slammed him,” said his mother, Natalie Henry-Howell.

Riley Howell’s Parents Say He Was Shot 3 Times While Tackling the U.N.C. Charlotte Gunman

A life cut short, but a final act of astonishing courage and sacrifice.

I hope to die as well as Riley Howell. I hope to die as well as Riley Howell. I hope to die as well as Riley Howell. RIP.