What does it mean for AI to be “explainable”?

A NIST paper attempts to answer this question:

Briefly, our four principles of explainable AI are:

Explanation: Systems deliver accompanying evidence or reason(s) for all outputs. 

Meaningful: Systems provide explanations that are understandable to individual users. 

Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output. 

Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output. 

Four Principles of Explainable Artificial Intelligence

Stating this differently: there should be an explanation, it should be understandable and accurate, and the system should stop when it’s generating nonsense.
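To make the Knowledge Limits principle concrete, here is a minimal sketch of what an abstaining, explanation-carrying classifier might look like. Everything in it (the toy model, the `Explained` record, the 0.90 threshold) is my own illustration, not anything from the NIST paper:

```python
from dataclasses import dataclass

@dataclass
class Explained:
    label: str
    confidence: float
    evidence: list[str]  # "Explanation": reasons accompany every output

# Hypothetical cutoff for the Knowledge Limits principle.
THRESHOLD = 0.90

def classify(features: dict) -> Explained:
    """Toy stand-in for a real model: a label, a confidence, and evidence."""
    confidence = 0.97 if features.get("beak") and features.get("feathers") else 0.55
    evidence = [f"{name}={value}" for name, value in features.items()]
    return Explained("bird", confidence, evidence)

def predict(features: dict) -> Explained | None:
    """Knowledge Limits: decline to answer when confidence is too low."""
    result = classify(features)
    if result.confidence < THRESHOLD:
        return None  # abstain rather than emit a low-confidence guess
    return result

print(predict({"beak": True, "feathers": True}))  # answers, evidence attached
print(predict({"beak": False}))                   # None: outside knowledge limits
```

Even this toy version shows the tension: the evidence list satisfies the Explanation principle only formally. Whether it is meaningful and accurate is the hard part.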

These are very reasonable principles, but likely tough to deliver with current technology.

Indeed, the paper discusses that humans are often unable to explain why they have taken a certain action:

People fabricate reasons for their decisions, even those thought to be immutable, such as personally held opinions [24, 34, 99]. In fact, people’s conscious reasoning that is able to be verbalized does not seem to always occur before the expressed decision. Instead, evidence suggests that people make their decision and then apply reasons for those decisions after the fact [95]. From a neuroscience perspective, neural markers of a decision can occur up to 10 seconds before a person’s conscious awareness [85]. This finding suggests that decision making processes begin long before our conscious awareness. 

Id. at 14.

And it is well documented that even experts generally cannot predict their own accuracy.

What hope do the AIs have?

AlphaDogfight wins 5-0 in F-16 battle vs human

Will Knight, writing for Wired:

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

A Dogfight Renews Concerns About AI’s Lethal Potential

This is an under-discussed issue, but an inevitable one. DeepMind is convinced that its AlphaZero DNN can master any two-player, turn-based game with perfect information, and its AlphaStar DNN shows what the same approach can do in real-time games. Extending these capabilities to warfare is a natural next step.

Is this ok? Does that question even matter? How long before human-in-the-loop is the unacceptable bottleneck?

Rite Aid has been using facial recognition for 8 years

Jeffrey Dastin writing for Reuters:

The cameras matched facial images of customers entering a store to those of people Rite Aid previously observed engaging in potential criminal activity, causing an alert to be sent to security agents’ smartphones. Agents then reviewed the match for accuracy and could tell the customer to leave.

Rite Aid deployed facial recognition systems in hundreds of U.S. stores

The DeepCam systems were primarily deployed in “lower-income, non-white neighborhoods,” and, according to current and former Rite Aid employees, a previous system called FaceFirst regularly made mistakes:

“It doesn’t pick up Black people well,” one loss prevention staffer said last year while using FaceFirst at a Rite Aid in an African-American neighborhood of Detroit. “If your eyes are the same way, or if you’re wearing your headband like another person is wearing a headband, you’re going to get a hit.”

Automated systems are often wrong

And automated background checks may be terrible!

The reports can be created in a few seconds, using searches based on partial names or incomplete dates of birth. Tenants generally have no choice but to submit to the screenings and typically pay an application fee for the privilege. Automated reports are usually delivered to landlords without a human ever glancing at the results to see if they contain obvious mistakes, according to court records and interviews.

How Automated Background Checks Freeze Out Renters

So much of ethical AI comes down to requiring a human-in-the-loop for any system that has a non-trivial impact on other humans.
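The simplest version of that requirement is architectural: the automated system may flag, but only a person may act. Here is a hedged sketch of what that gate might look like; all the names and scores are hypothetical:

```python
import queue

# Hypothetical review queue staffed by people.
human_review = queue.Queue()

def automated_screen(applicant: dict) -> tuple[bool, float]:
    """Toy stand-in for an automated screen (face match, background check, ...)."""
    score = 0.62 if applicant.get("partial_name_match") else 0.05
    return score > 0.5, score

def decide(applicant: dict) -> str:
    flagged, score = automated_screen(applicant)
    if not flagged:
        return "approved"
    # The system may flag, but only a human may take an adverse action.
    human_review.put((applicant, score))
    return "pending human review"

print(decide({"partial_name_match": True}))   # -> pending human review
print(decide({"partial_name_match": False}))  # -> approved
```

Note that the Rite Aid workflow above nominally had this shape, with agents reviewing each match; the automated background checks did not.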

AI Bias Bounties

Like bug bounties, but for bias in AI:

A similar problem exists in information security, and one solution gaining traction is the “bug bounty program”. Bug bounty programs allow security researchers and laymen to submit their exploits directly to the affected parties in exchange for compensation.

The market rate for security bounties for the average company on HackerOne ranges from $100 to $1,000. Bigger companies can pay more. In 2017, Facebook disclosed paying $880,000 in bug bounties, with a minimum of $500 per bounty. Google pays from $100 to $31,337 per exploit, and it paid $3,000,000 in security bounties in 2016.

It seems reasonable to suggest that at least the big companies with large market caps, which already have bounty-reporting infrastructure, attempt to reward and collaborate with those who find bias in their software, rather than have them take it to the press in frustration, with no compensation for their efforts.

Bias Bounty Programs as a Method of Combatting Bias in AI

AI researchers submitting to NeurIPS conference must now address ethical concerns

Khari Johnson, writing for Venture Beat:

For the first time ever, researchers who submit papers to NeurIPS, one of the biggest AI research conferences in the world, must now state the “potential broader impact of their work” on society as well as any financial conflict of interest, conference organizers told VentureBeat.

NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest

NeurIPS, or the Conference on Neural Information Processing Systems, is the largest AI conference in the world.

Facial recognition tech in Moscow

First London, now Moscow.

Moscow is the latest major city to introduce live facial recognition cameras to its streets, with Mayor Sergei Sobyanin announcing that the technology is operating “on a mass scale” earlier this month, according to a report from Russian business paper Vedomosti.

. . . . .

Moscow started trialing live facial recognition in 2017, using technology from Russian firm NtechLab to scan footage from the Russian capital’s network of 160,000 CCTV cameras. The company is best known for its FindFace software, which it launched in 2016 and let users match anyone in a picture to their profile on VK, known as Russia’s Facebook.

The app was criticized by some, particularly as it was used to dox and harass sex workers, and NtechLab eventually shut it down in favor of enterprise and government work.

Moscow rolls out live facial recognition system with an app to alert police

Heart prints are a new biometric

While the world debates the utility and ethics of existing facial recognition technology, new biometrics are constantly being developed. They are likely to replace facial recognition in the long term.

This system, dubbed Jetson, is able to measure, from up to 200 metres away, the minute vibrations induced in clothing by someone’s heartbeat. Since hearts differ in both shape and contraction pattern, the details of heartbeats differ, too. The effect of this on the fabric of garments produces what Ideal Innovations, a firm involved in the Jetson project, calls a “heartprint”—a pattern reckoned sufficiently distinctive to confirm someone’s identity.

To measure heartprints remotely Jetson employs gadgets called laser vibrometers. These work by detecting minute variations in a laser beam that has been reflected off an object of interest. They have been used for decades to study things like bridges, aircraft bodies, warship cannons and wind turbines—searching for otherwise-invisible cracks, air pockets and other dangerous defects in materials. However, only in the past five years or so has laser vibrometry become good enough to distinguish the vibrations induced in fabric by heartprints.

People can now be identified at a distance by their heartbeat
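Strip away the laser hardware and identification by heartprint is, at bottom, template matching: compare a measured waveform against enrolled ones and pick the best fit. A toy sketch with entirely synthetic signals (nothing here reflects Jetson's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def heartprint(shape: float, rate_hz: float, n: int = 500) -> np.ndarray:
    """Synthetic stand-in for a heartbeat-induced vibration waveform.

    A real system would get this from a laser vibrometer; here we fake a
    person-specific pulse shape with a decaying sinusoid per beat.
    """
    t = np.linspace(0.0, 5.0, n)
    return np.sin(2 * np.pi * rate_hz * t) * np.exp(-shape * (t % (1.0 / rate_hz)))

# Hypothetical enrolled templates: one waveform per known person.
templates = {
    "alice": heartprint(shape=3.0, rate_hz=1.1),
    "bob": heartprint(shape=5.0, rate_hz=1.4),
}

def identify(measured: np.ndarray) -> str:
    """Return the enrolled identity whose template correlates best."""
    def corr(a: np.ndarray, b: np.ndarray) -> float:
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.dot(a, b)) / len(a)
    return max(templates, key=lambda name: corr(measured, templates[name]))

# A noisy re-measurement of alice's heartprint still matches her template.
noisy = templates["alice"] + 0.2 * rng.standard_normal(500)
print(identify(noisy))  # -> alice
```

The hard part, of course, is the 200-metre measurement, not the matching.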

This is astonishing technology, and it will surely improve. In the long term, your unique identity will be readily available to anyone who cares.