Believing AIs is sometimes easy, and sometimes hard

Most ethicists worry that AIs are wrong and that we harm people by deferring to them. But they can also be right and ignored:

NURSE DINA SARRO didn’t know much about artificial intelligence when Duke University Hospital installed machine learning software to raise an alarm when a person was at risk of developing sepsis, a complication of infection that is the number one killer in US hospitals. The software, called Sepsis Watch, passed alerts from an algorithm Duke researchers had tuned with 32 million data points from past patients to the hospital’s team of rapid response nurses, co-led by Sarro.

But when nurses relayed those warnings to doctors, they sometimes encountered indifference or even suspicion. When docs questioned why the AI thought a patient needed extra attention, Sarro found herself in a tough spot. “I wouldn’t have a good answer because it’s based on an algorithm,” she says.

AI Can Help Patients—but Only If Doctors Understand It

AI test proctor fails

One college student went viral on TikTok after posting a video in which she said that a test proctoring program had flagged her behavior as suspicious because she was reading the question aloud, resulting in her professor assigning her a failing grade.

A student says test proctoring AI flagged her as cheating when she read a question out loud. Others say the software could have more dire consequences.

This is basic ethics: if your AI has real consequences, you’d better get it right.

Detecting deep fakes by detecting heartbeats

Deep fakes have so far not learned to simulate heartbeats in video, and so they can be detected as fraudulent. But given time the generators will learn this as well; it’s an arms race.

In other news, heartbeats are clearly visible in processed images!

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.

The Subtle Effects of Blood Circulation Can Be Used to Detect Deep Fakes (via Schneier on Security)

Check out the video at 1:30 and then again at 3:18.

The AIs are certainly going to know a lot about us.
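The core PPG idea is simple enough to sketch. The snippet below is a minimal illustration, not a deepfake detector: it assumes the video is already cropped to a face, averages the green channel of each frame, and looks for a dominant frequency in the plausible heart-rate band. All names and parameters here are illustrative; real pipelines add face tracking, detrending, and motion compensation.

```python
# Minimal PPG sketch: recover an approximate pulse rate from a face video.
# Assumptions: the video is pre-cropped to the face, and fps is known.
import numpy as np
import cv2

def estimate_pulse_bpm(video_path, fps=30.0):
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean green-channel intensity over the (assumed) face region.
        signal.append(frame[:, :, 1].mean())
    cap.release()

    signal = np.asarray(signal, dtype=np.float64)
    signal -= signal.mean()

    # Dominant frequency between 0.7 and 4 Hz (roughly 42-240 beats per minute).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

A detector built on this idea would ask whether a physiologically plausible, consistent pulse signal is present in the face at all.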

The value of distinguishing AIs from humans

What will happen when we can no longer distinguish human tweets from AI tweets? Does it matter? Should we care? Will there be a verified human status?

Renée DiResta, writing for The Atlantic:

Amid the arms race surrounding AI-generated content, users and internet companies will give up on trying to judge authenticity tweet by tweet and article by article. Instead, the identity of the account attached to the comment, or person attached to the byline, will become a critical signal toward gauging legitimacy. Many users will want to know that what they’re reading or seeing is tied to a real person—not an AI-generated persona. . . .

. . . . .

The idea that a verified identity should be a precondition for contributing to public discourse is dystopian in its own way. Since the dawn of the nation, Americans have valued anonymous and pseudonymous speech: Alexander Hamilton, James Madison, and John Jay used the pen name Publius when they wrote the Federalist Papers, which laid out founding principles of American government. Whistleblowers and other insiders have published anonymous statements in the interest of informing the public. Figures as varied as the statistics guru Nate Silver (“Poblano”) and Senator Mitt Romney (“Pierre Delecto”) have used pseudonyms while discussing political matters on the internet. The goal shouldn’t be to end anonymity online, but merely to reserve the public square for people who exist—not for artificially intelligent propaganda generators.

The Supply of Disinformation Will Soon Be Infinite

The idea that we should have to reserve the public square for humans is remarkable in itself, a sign that this technology is now upon us. Human sentiments have value; AI facsimiles do not.

An optimistic take: perhaps we will instead pay attention to the useful content of such messages rather than the inflammatory rhetoric. A good idea is a good idea, AI or not.

Ransomware causes death

The Associated Press:

German authorities said Thursday that what appears to have been a misdirected hacker attack caused the failure of IT systems at a major hospital in Duesseldorf, and a woman who needed urgent admission died after she had to be taken to another city for treatment.

German Hospital Hacked, Patient Taken to Another City Dies

It looks like the attack was misdirected rather than aimed at the hospital, but the story illustrates how dependent we have become on software.

Trillions of parameters

Maria Deutscher, writing for Silicon Angle:

Microsoft Corp. has released a new version of its open-source DeepSpeed tool that it says will enable the creation of deep learning models with a trillion parameters, more than five times as many as in the world’s current largest model.

Microsoft AI tool enables ‘extremely large’ models with a trillion parameters

That’s a lot of transformations. If there’s a pattern, a trillion parameters should be able to find and store it.
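For a rough sense of scale, here is a back-of-the-envelope sketch of the training-time memory a trillion parameters implies, assuming a typical mixed-precision Adam setup (the per-parameter byte counts are an assumption and vary by configuration):

```python
# Back-of-the-envelope memory for training a trillion-parameter model.
# Assumed mixed-precision Adam layout (varies by setup):
#   fp16 weights: 2 bytes/param, fp16 gradients: 2 bytes/param,
#   fp32 master weights + Adam momentum + variance: 12 bytes/param.
params = 1_000_000_000_000  # one trillion

weights_gb = params * 2 / 1e9
grads_gb = params * 2 / 1e9
optimizer_gb = params * 12 / 1e9

print(f"weights:   {weights_gb:>9,.0f} GB")    # about  2,000 GB
print(f"gradients: {grads_gb:>9,.0f} GB")      # about  2,000 GB
print(f"optimizer: {optimizer_gb:>9,.0f} GB")  # about 12,000 GB
```

Footprints like these are why such models have to be sharded across many accelerators, which is the kind of partitioning DeepSpeed’s ZeRO techniques provide.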

Portland bans facial recognition by private entities

34.10.030 Prohibition.

Except as provided in the Exceptions section below, a Private Entity shall not use Face Recognition Technologies in Places of Public Accommodation within the boundaries of the City of Portland.

34.10.040 Exceptions.

The prohibition in this Chapter does not apply to use of Face Recognition Technologies:

1. To the extent necessary for a Private Entity to comply with federal, state, or local laws;

2. For user verification purposes by an individual to access the individual’s own personal or employer issued communication and electronic devices; or

3. In automatic face detection services in social media applications.

Prohibit the use of Face Recognition Technologies by Private Entities in Places of Public Accommodation in the City (via PRIVACY & INFORMATION SECURITY LAW BLOG)

Note the exception for use in “social media applications.”

What does it mean for AI to be “explainable”?

A NIST paper attempts to answer this question:

Briefly, our four principles of explainable AI are:

Explanation: Systems deliver accompanying evidence or reason(s) for all outputs. 

Meaningful: Systems provide explanations that are understandable to individual users. 

Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output. 

Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output. 

Four Principles of Explainable Artificial Intelligence

Stating this differently: there should be an explanation, it should be understandable and accurate, and the system should stop when it’s generating nonsense.
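Of the four, Knowledge Limits is the easiest to state mechanically. Here is a minimal sketch of a confidence gate, with a hypothetical threshold and a plain probability vector as input; a real system would also need out-of-distribution detection, since a model can be confidently wrong on inputs unlike its training data.

```python
# Sketch of a "Knowledge Limits" gate: abstain when confidence is too low.
# The threshold and inputs are hypothetical; this is illustration only.
import numpy as np

def predict_or_abstain(probabilities: np.ndarray, threshold: float = 0.9):
    """Return the predicted class index, or None to defer to a human."""
    best = int(np.argmax(probabilities))
    if probabilities[best] < threshold:
        return None  # outside knowledge limits: escalate rather than guess
    return best

print(predict_or_abstain(np.array([0.55, 0.30, 0.15])))  # None -> defer
print(predict_or_abstain(np.array([0.96, 0.03, 0.01])))  # 0 -> confident prediction
```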

These are very reasonable principles, but likely tough to deliver with current technology.

Indeed, the paper points out that humans are often unable to explain why they have taken a particular action:

People fabricate reasons for their decisions, even those thought to be immutable, such as personally held opinions [24, 34, 99]. In fact, people’s conscious reasoning that is able to be verbalized does not seem to always occur before the expressed decision. Instead, evidence suggests that people make their decision and then apply reasons for those decisions after the fact [95]. From a neuroscience perspective, neural markers of a decision can occur up to 10 seconds before a person’s conscious awareness [85]. This finding suggests that decision making processes begin long before our conscious awareness. 

Id. at 14.

And it is well documented that even experts generally cannot predict their own accuracy.

What hope do the AIs have?

AlphaDogfight wins 5-0 in F-16 battle vs human

Will Knight, writing for Wired:

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

A Dogfight Renews Concerns About AI’s Lethal Potential

This is an under-discussed issue, but the development was inevitable. DeepMind is convinced that its AlphaZero system can master any two-player, turn-based game with perfect information, and its AlphaStar system shows what it can do in real-time games as well. Extending these capabilities to warfare is a natural, and seemingly inevitable, step.

Is this OK? Does that question even matter? How long before the human in the loop becomes an unacceptable bottleneck?