We reserve the right to *allow the AI* to refuse service to anyone

From denying a cat’s entry to denying a person’s entry:

A sign at the front of the store reads, “Look at camera for entry” and “Facial Recognition Technology In Use.” An automated voice tells approaching customers through a mounted speaker to look up at the camera.

KIRO 7 first learned about the facial recognition technology being used there from a customer, a regular at the store, who posted about it on Facebook after stopping by the 24-hour location at night last week.

“Tonight, I was confronted with a whole new Jackson’s,” she wrote. “You had to stare at the camera before they let you in.”

The woman said she was told by clerks that the technology is being used to cut down on thefts from the store.

“Sometimes I would walk out of there, jaw to the ground, at the in-your-face theft,” she wrote.

‘Look at camera for entry’: Tacoma convenience store using facial recognition technology

The makers of this technology, Blue Line Technology, seem rather overconfident: “Blue Line Technology spokesperson Sawyer said the software has never misidentified anyone.” Meanwhile, state-of-the-art facial recognition technology is notoriously less accurate for dark-skinned individuals and women.

So if you’re a dark-skinned individual mistakenly identified as “bad,” what does the appeal process look like? Stand outside and shout at the clerk?

Detecting deepfakes by committee

I guess this is a plan?

To combat the growing threat of spreading misinformation ahead of the U.S. 2020 general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.

Last September, the publisher assigned 21 of its staff from across its newsroom to form the committee. Each of them is on-call to answer reporters’ queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members which help them determine whether the content is fake or not.

‘A perfect storm’: The Wall Street Journal has 21 people detecting ‘deepfakes’

Major supplier of police body cameras concludes facial recognition is not reliable enough to sell ethically

Chaim Gartenberg, writing for The Verge:

Axon (formerly known as Taser) has been shifting its business toward body cameras for police officers for the past few years, but today, the company is making a big change. At the recommendation of its AI ethics board, “Axon will not be commercializing face matching products on our body camera,” the company announced in a blog post today.

[. . . . .]

According to the board’s report, “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” It cites that, at the very least, more accurate technology that “performs equally well across races, ethnicities, genders, and other identity groups” would be required, assuming facial recognition technology for police body cameras can ever be considered ethical at all, a conversation that the board has begun to examine.

Axon (formerly Taser) says facial recognition on police body cams is unethical

One issue we keep sidestepping is that facial recognition technology is never going to be either perfectly accurate or perfectly equal across all classes of people. In other words, no matter how accurate the technology becomes, there will always be some small difference in performance between, for example, recognizing light-skinned and dark-skinned people. So the question becomes: is any difference in accuracy tolerable, and if so, how much?
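
To make that question concrete, here is a minimal sketch of how one might measure the gap. Everything below is invented for illustration (the groups, the error rates, the simulated matcher); the point is only that “how much difference is tolerable” reduces to comparing per-group error rates against a threshold somebody has to choose.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_match_rate(y_true, y_pred):
    """Fraction of true non-matches the system wrongly accepts."""
    non_matches = y_true == 0
    return (y_pred[non_matches] == 1).mean()

def simulate(n, fmr):
    """Simulate a matcher with a given false-match rate (invented numbers)."""
    y_true = rng.integers(0, 2, n)
    y_pred = y_true.copy()
    flip = (y_true == 0) & (rng.random(n) < fmr)
    y_pred[flip] = 1
    return y_true, y_pred

rates = {}
for group, fmr in [("light-skinned", 0.001), ("dark-skinned", 0.003)]:
    y_true, y_pred = simulate(100_000, fmr)
    rates[group] = false_match_rate(y_true, y_pred)
    print(f"{group}: false-match rate ~ {rates[group]:.4f}")

gap = abs(rates["light-skinned"] - rates["dark-skinned"])
TOLERANCE = 0.001  # a policy choice, not a technical one
print(f"gap = {gap:.4f}, tolerable? {gap < TOLERANCE}")
```

The code can compute the gap; it cannot tell you what TOLERANCE should be. That part stays a human question.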

EU Expert Group Favors Banning AI Mass Surveillance and AI Deception

The EU High-Level Expert Group on Artificial Intelligence released its Policy and Investment Recommendations for Trustworthy AI today. The 50-page document is a bit more prescriptive than their previous Ethics Guidelines, and suggests that governments “refrain from disproportionate and mass surveillance of individuals” and “introduce mandatory self-identification of AI systems.” (But see deceptive NYPD chatbots.)

A big chunk of the report also urges the EU to invest in education and subject matter expertise.

So far the discussion around AI mass surveillance has been relatively binary: do it or not. At some point I expect we will see proposals to do mass surveillance while maintaining individual privacy. The security benefits of mass surveillance are too attractive to forgo.

Privacy vs Health

David Brooks:

In his book “Deep Medicine,” which is about how A.I. is changing medicine across all fields, Eric Topol describes a study in which a learning algorithm was given medical records to predict who was likely to attempt suicide. It accurately predicted attempts nearly 80 percent of the time. By incorporating data of real-world interactions such as laughter and anger, an algorithm in a similar study was able to reach 93 percent accuracy.

[. . . . .]

Medicine is hard because, as A.I. is teaching us, we’re much more different from one another than we thought. There is no single diet approach that is best for all people because we all process food in our own distinct way. Diet, like other treatments, has to be customized. 

You can be freaked out by the privacy-invading power of A.I. to know you, but only A.I. can gather the data necessary to do this.

How Artificial Intelligence Can Save Your Life

Rephrasing a sentence from an earlier post, health is halfway around the block before privacy can get its shoes on.

AI’s evaluating humans

Here’s an application that could use some transparency:

When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

A Machine May Not Take Your Job, but One Could Become Your Boss

I have no idea how this AI might have been trained, and the article sheds no light.
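
It needn’t even be learned. For all the article tells us, the on-screen cues could be driven by something as simple as thresholded speech features. A purely hypothetical sketch (the feature names, thresholds, and icons below are invented; the vendor’s internals are not public):

```python
# Hypothetical cue engine: thresholded speech features mapped to icons.
# Every name and number here is made up for illustration.

def cues(words_per_minute: float, energy: float, empathy_score: float) -> list[str]:
    out = []
    if words_per_minute > 170:   # talking too fast
        out.append("speedometer")
    if energy < 0.3:             # sounds sleepy
        out.append("coffee cup")
    if empathy_score < 0.4:      # not empathetic enough
        out.append("heart")
    return out

print(cues(words_per_minute=185, energy=0.2, empathy_score=0.9))
# -> ['speedometer', 'coffee cup']
```

Of course, if any of those scores come from a trained model, the empathy score especially, then the training data and its failure modes are exactly what employees being graded by it deserve to see.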

Do as I say, not as I do: robot edition

Deep learning has revolutionized artificial intelligence. We’ve gone from telling computers how to do things to telling them what to do and letting them figure it out. For many activities (e.g., object identification) we can’t even really explain how to do it. It’s easier to just tell a system, “This is a ball. When you see this, identify it as a ball. Now here are 1M more examples.” And the system learns pretty well.
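
In code, “telling the computer what to do” is just supplying labeled examples and letting a learning algorithm find the how. A minimal sketch with invented toy data (two-dimensional features standing in for images):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented data: "balls" cluster around (1, 1), "non-balls" around (-1, -1).
X_ball = rng.normal(loc=1.0, scale=0.5, size=(500, 2))
X_other = rng.normal(loc=-1.0, scale=0.5, size=(500, 2))
X = np.vstack([X_ball, X_other])
y = np.array([1] * 500 + [0] * 500)   # 1 = ball, 0 = not a ball

# "Here are the examples; you figure out the rest."
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.9, 1.1]]))      # -> [1], identified as a ball
```

Nowhere do we describe what a ball looks like; the labeled examples carry all of that.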

Except when it doesn’t. There is a burgeoning new science of trying to tell artificial intelligence systems what exactly we want them to do:

Told to optimize for speed while racing down a track in a computer game, a car pushes the pedal to the metal … and proceeds to spin in a tight little circle. Nothing in the instructions told the car to drive straight, and so it improvised.

[. . . . .]

The team’s new system for providing instruction to robots — known as reward functions — combines demonstrations, in which humans show the robot what to do, and user preference surveys, in which people answer questions about how they want the robot to behave.

“Demonstrations are informative but they can be noisy. On the other hand, preferences provide, at most, one bit of information, but are way more accurate,” said Sadigh. “Our goal is to get the best of both worlds, and combine data coming from both of these sources more intelligently to better learn about humans’ preferred reward function.”

Researchers teach robots what humans want

This is critical research, and probably under-reported. If robots (like people) are going to learn mainly by mimicking humans, what human behaviors should they mimic?

People want autonomous cars to drive less aggressively than they do. And they should also be less racist, sexist, and violent. Getting the right reward function is critical. Getting it wrong may be immoral.
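
To show what “combining data from both of these sources” can mean mechanically, here is a minimal sketch of learning a linear reward function from demonstrations plus pairwise preferences. This is my own toy construction with invented data, not the Stanford team’s actual algorithm: demonstrations enter through a logistic contrast against random behavior, and each preference contributes one Bradley-Terry comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                                # trajectory feature dimension
w_true = np.array([1.0, -0.5, 0.2])  # hidden "human" reward, used only to simulate data

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Demonstrations: noisy near-optimal feature vectors, paired with random ones.
demo = rng.normal(w_true, 0.5, size=(20, d))   # informative but noisy
rand = rng.normal(0.0, 1.0, size=(20, d))

# Preferences: pairs where the higher-true-reward trajectory is chosen.
a = rng.normal(0.0, 1.0, size=(200, d))
b = rng.normal(0.0, 1.0, size=(200, d))
pref = (a @ w_true) > (b @ w_true)             # one accurate bit per query
A = np.where(pref[:, None], a, b)              # preferred trajectory features
B = np.where(pref[:, None], b, a)              # rejected trajectory features

w = np.zeros(d)
for _ in range(2000):
    # Demo term: demonstrations should out-score random trajectories.
    p_demo = sigmoid((demo - rand) @ w)
    g_demo = ((1 - p_demo)[:, None] * (demo - rand)).mean(axis=0)
    # Preference term: Bradley-Terry likelihood of the observed choices.
    p_pref = sigmoid((A - B) @ w)
    g_pref = ((1 - p_pref)[:, None] * (A - B)).mean(axis=0)
    w += 0.1 * (g_demo + g_pref - 0.01 * w)    # ascend both, small L2 penalty

print(w / np.linalg.norm(w))                   # recovered reward direction
print(w_true / np.linalg.norm(w_true))         # hidden reward direction
```

The learned direction of w approaches the hidden one: the noisy-but-rich demonstrations do the coarse work and the accurate one-bit preferences refine it, which is exactly the trade-off Sadigh describes.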

Reducing consumption of animal products is the single most effective thing an individual can do to fight climate change

Today, and probably into the future, dietary change can deliver environmental benefits on a scale not achievable by producers. Moving from current diets to a diet that excludes animal products (table S13) (35) has transformative potential, reducing food’s land use by 3.1 (2.8 to 3.3) billion ha (a 76% reduction), including a 19% reduction in arable land; food’s GHG emissions by 6.6 (5.5 to 7.4) billion metric tons of CO2eq (a 49% reduction); acidification by 50% (45 to 54%); eutrophication by 49% (37 to 56%); and scarcity-weighted freshwater withdrawals by 19% (−5 to 32%) for a 2010 reference year. . . . For the United States, where per capita meat consumption is three times the global average, dietary change has the potential for a far greater effect on food’s different emissions, reducing them by 61 to 73% . . . .

Reducing food’s environmental impacts through producers and consumers

But vegans and vegetarians make up only about 8 percent of the American population, and that number is not going up.

How to Become a Federal Criminal

It’s super easy and you may already be one!

You may know that you are required to report if you are traveling to or from the United States with $10,000 or more in cash. Don’t hop over the Canadian border to buy a used car, for example, or the Feds may confiscate your cash (millions of dollars are confiscated every year). Did you also know that you can’t leave the United States with more than $5 in nickels??? That’s a federal crime punishable by up to five years in prison. How about carrying a metal detector in a national park? Up to six months in prison. And God forbid you should use your metal detector and find something more than 100 years old: that can put you away for up to a year. Also illegal in a national park? Making unreasonable gestures to a passing horse.

How to Become a Federal Criminal

Worth re-linking to one of my favorite legal lectures of all time: Don’t Talk to the Police. Even if you are going to tell the truth, even if you did nothing wrong. There is no way it will help you.