Believing AIs is sometimes easy, and sometimes hard

Most ethicists are concerned that AIs are wrong and that we harm people by deferring to them. But AIs can also be right and ignored:

NURSE DINA SARRO didn’t know much about artificial intelligence when Duke University Hospital installed machine learning software to raise an alarm when a person was at risk of developing sepsis, a complication of infection that is the number one killer in US hospitals. The software, called Sepsis Watch, passed alerts from an algorithm Duke researchers had tuned with 32 million data points from past patients to the hospital’s team of rapid response nurses, co-led by Sarro.

But when nurses relayed those warnings to doctors, they sometimes encountered indifference or even suspicion. When docs questioned why the AI thought a patient needed extra attention, Sarro found herself in a tough spot. “I wouldn’t have a good answer because it’s based on an algorithm,” she says.

AI Can Help Patients—but Only If Doctors Understand It

AI test proctor fails

One college student went viral on TikTok after posting a video in which she said that a test proctoring program had flagged her behavior as suspicious because she was reading the question aloud, resulting in her professor assigning her a failing grade.

A student says test proctoring AI flagged her as cheating when she read a question out loud. Others say the software could have more dire consequences.

This is basic ethics: if your AI has real consequences, you’d better get it right.

The value of distinguishing AIs from humans

What will happen when we can no longer distinguish human tweets from AI tweets? Does it matter? Should we care? Will there be a verified human status?

Renée DiResta, writing for The Atlantic:

Amid the arms race surrounding AI-generated content, users and internet companies will give up on trying to judge authenticity tweet by tweet and article by article. Instead, the identity of the account attached to the comment, or person attached to the byline, will become a critical signal toward gauging legitimacy. Many users will want to know that what they’re reading or seeing is tied to a real person—not an AI-generated persona. . . .

. . . . .

The idea that a verified identity should be a precondition for contributing to public discourse is dystopian in its own way. Since the dawn of the nation, Americans have valued anonymous and pseudonymous speech: Alexander Hamilton, James Madison, and John Jay used the pen name Publius when they wrote the Federalist Papers, which laid out founding principles of American government. Whistleblowers and other insiders have published anonymous statements in the interest of informing the public. Figures as varied as the statistics guru Nate Silver (“Poblano”) and Senator Mitt Romney (“Pierre Delecto”) have used pseudonyms while discussing political matters on the internet. The goal shouldn’t be to end anonymity online, but merely to reserve the public square for people who exist—not for artificially intelligent propaganda generators.

The Supply of Disinformation Will Soon Be Infinite

The idea that we should reserve the public square for humans is remarkable precisely because this technology is now upon us. Human sentiments have value; AI facsimiles do not.

An optimistic take: perhaps we will instead pay attention to the useful content of such messages rather than to their inflammatory rhetoric. A good idea is a good idea, AI or not.

Portland bans facial recognition by private entities

34.10.030 Prohibition.

Except as provided in the Exceptions section below, a Private Entity shall not use Face Recognition Technologies in Places of Public Accommodation within the boundaries of the City of Portland.

34.10.040 Exceptions.

The prohibition in this Chapter does not apply to use of Face Recognition Technologies:

1. To the extent necessary for a Private Entity to comply with federal, state, or local laws;

2. For user verification purposes by an individual to access the individual’s own personal or employer issued communication and electronic devices; or

3. In automatic face detection services in social media applications.

Prohibit the use of Face Recognition Technologies by Private Entities in Places of Public Accommodation in the City (via PRIVACY & INFORMATION SECURITY LAW BLOG)

Note the exception for use in “social media applications.”

What does it mean for AI to be “explainable”?

A NIST paper attempts to answer this question:

Briefly, our four principles of explainable AI are:

Explanation: Systems deliver accompanying evidence or reason(s) for all outputs. 

Meaningful: Systems provide explanations that are understandable to individual users. 

Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output. 

Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output. 

Four Principles of Explainable Artificial Intelligence

Stating this differently: there should be an explanation, it should be understandable and accurate, and the system should stop when it’s generating nonsense.
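Of the four principles, Knowledge Limits is the most mechanical to approximate: a system can abstain when its confidence falls below a threshold instead of emitting an answer anyway. A minimal sketch of that idea (the function name, the threshold, and the use of raw class probabilities are all illustrative assumptions, not anything from the NIST paper):

```python
# Sketch of the "Knowledge Limits" principle: decline to answer when
# confidence is too low, rather than produce a low-quality output.
# The 0.8 threshold is arbitrary and for illustration only.

def predict_with_limits(probabilities, threshold=0.8):
    """Return (label_index, confidence), or None to abstain."""
    confidence = max(probabilities)
    if confidence < threshold:
        # Outside the system's knowledge limits: defer to a human.
        return None
    return probabilities.index(confidence), confidence

print(predict_with_limits([0.05, 0.92, 0.03]))  # confident: (1, 0.92)
print(predict_with_limits([0.40, 0.35, 0.25]))  # uncertain: None
```

Note that this only covers the easy half of the principle; knowing that the input itself is outside the conditions the system was designed for is a much harder problem.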

These are very reasonable principles, but likely tough to deliver with current technology.

Indeed, the paper notes that humans themselves are often unable to explain why they have taken a certain action:

People fabricate reasons for their decisions, even those thought to be immutable, such as personally held opinions [24, 34, 99]. In fact, people’s conscious reasoning that is able to be verbalized does not seem to always occur before the expressed decision. Instead, evidence suggests that people make their decision and then apply reasons for those decisions after the fact [95]. From a neuroscience perspective, neural markers of a decision can occur up to 10 seconds before a person’s conscious awareness [85]. This finding suggests that decision making processes begin long before our conscious awareness. 

Id. at 14.

And it is well documented that even experts generally cannot predict their own accuracy.

What hope do the AIs have?

AlphaDogfight wins 5-0 in F-16 battle vs human

Will Knight, writing for Wired:

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

A Dogfight Renews Concerns About AI’s Lethal Potential

This is an under-discussed issue, but an inevitable development. DeepMind is convinced that its AlphaZero DNN can master any two-player, turn-based game of perfect information, and its AlphaStar DNN shows what the approach can do in real-time games as well. Extending these capabilities to warfare is a natural, and inevitable, next step.

Is this okay? Does that question even matter? How long before the human in the loop is an unacceptable bottleneck?

Rite Aid has been using facial recognition for 8 years

Jeffrey Dastin writing for Reuters:

The cameras matched facial images of customers entering a store to those of people Rite Aid previously observed engaging in potential criminal activity, causing an alert to be sent to security agents’ smartphones. Agents then reviewed the match for accuracy and could tell the customer to leave.

Rite Aid deployed facial recognition systems in hundreds of U.S. stores

The DeepCam systems were primarily deployed in “lower-income, non-white neighborhoods,” and, according to current and former Rite Aid employees, a previous system called FaceFirst regularly made mistakes:

“It doesn’t pick up Black people well,” one loss prevention staffer said last year while using FaceFirst at a Rite Aid in an African-American neighborhood of Detroit. “If your eyes are the same way, or if you’re wearing your headband like another person is wearing a headband, you’re going to get a hit.”

Automated systems are often wrong

And automated background checks may be terrible!

The reports can be created in a few seconds, using searches based on partial names or incomplete dates of birth. Tenants generally have no choice but to submit to the screenings and typically pay an application fee for the privilege. Automated reports are usually delivered to landlords without a human ever glancing at the results to see if they contain obvious mistakes, according to court records and interviews.

How Automated Background Checks Freeze Out Renters

So much of ethical AI comes down to requiring a human in the loop for any system that has a non-trivial impact on humans.
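The human-in-the-loop pattern can be sketched simply: an automated system produces recommendations with supporting evidence, but nothing happens until a person reviews them. All of the names and fields below are illustrative assumptions, not drawn from any system described above:

```python
# Sketch of a human-in-the-loop gate: the model's output is queued
# as a review task, and only a human approval turns it into action.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    subject: str
    action: str                      # e.g. "deny_application"
    confidence: float
    evidence: list = field(default_factory=list)  # shown to the reviewer

def submit_for_review(rec, review_queue):
    """Queue the model's recommendation instead of acting on it."""
    review_queue.append(rec)

def human_decision(rec, approved):
    """A recommendation becomes an action only if a human approves it."""
    return rec.action if approved else "no_action"

queue = []
submit_for_review(Recommendation("tenant-123", "deny_application", 0.71,
                                 ["partial name match"]), queue)
print(human_decision(queue[0], approved=False))  # weak evidence: "no_action"
```

The design choice that matters is that the automated path terminates in a queue, not an action; the accuracy of the reviewer's judgment is a separate problem, but at least an obviously wrong match can be caught before it costs someone an apartment.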