Is ShotSpotter AI?

A federal lawsuit filed Thursday alleges Chicago police misused “unreliable” gunshot detection technology and failed to pursue other leads in investigating a grandfather from the city’s South Side who was charged with killing a neighbor.

. . . . .

ShotSpotter’s website says the company is “a leader in precision policing technology solutions” that help stop gun violence by using sensors, algorithms and artificial intelligence to classify 14 million sounds in its proprietary database as gunshots or something else.

Lawsuit: Chicago police misused ShotSpotter in murder case

Some commentators (e.g., link) have jumped on this story as an example of someone (allegedly) being wrongly imprisoned due to AI.

But maybe ShotSpotter is just bad software that is used improperly? Does it matter?

The definition of AI is so difficult that we may soon find ourselves regulating all software.

AI discoveries in chess

AlphaZero shocked the chess world in 2018.

Now an economics paper is trying to quantify the effect of this new chess knowledge:

[W]e are not aware of any previously documented evidence comparing human performance before and after the introduction of an AI system, showing that humans have learned from AI’s ideas, and that this has pushed the frontier of our understanding.

AlphaZero Ideas

The paper shows that the top-ranked chess player in the world, Magnus Carlsen, meaningfully altered his play and incorporated ideas from AlphaZero on openings, sacrifices, and the early advance of the h-pawn.

Carlsen himself acknowledged the influence:

Question: We are really curious about the influence of AlphaZero in your game.

Answer: Yes, I have been influenced by my hero AlphaZero recently. In essence, I have become a very different player in terms of style than I was a bit earlier and it’s been a great ride.

Id. at 25 (citing a June 14, 2019 interview in Norway Chess 2019).

Bias mitigations for the DALL-E 2 image generation AI

OpenAI has a post explaining the three main techniques it used to “prevent generated images from violating our content policy.”

First, they filtered out violent and sexual images from the training dataset:

[W]e prioritized filtering out all of the bad data over leaving in all of the good data. This is because we can always fine-tune our model with more data later to teach it new things, but it’s much harder to make the model forget something that it has already learned.
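
For illustration only (this is not OpenAI’s actual pipeline), the “filter aggressively” idea amounts to choosing a deliberately low decision threshold on a content classifier, accepting that good images will be discarded along with the bad ones. The function name, score semantics, and threshold below are hypothetical:

```python
# Hypothetical sketch of aggressive pre-training filtering: drop any image a
# content classifier thinks *might* violate policy, even at the cost of
# discarding many acceptable images (new things can be re-taught later by
# fine-tuning, but unlearning is hard).

def filter_dataset(scored_images, threshold=0.2):
    """Keep images whose (assumed) violation probability is below `threshold`.

    `scored_images` is a list of (image_path, violation_probability) pairs
    produced by some upstream classifier; a low threshold removes more data.
    """
    return [path for path, p in scored_images if p < threshold]

# Example: filter_dataset([("a.png", 0.03), ("b.png", 0.41), ("c.png", 0.19)])
# -> ["a.png", "c.png"]
```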

Second, they found that the filtering can actually amplify bias because the smaller remaining datasets may be less diverse:

We hypothesize that this particular case of bias amplification comes from two places: first, even if women and men have roughly equal representation in the original dataset, the dataset may be biased toward presenting women in more sexualized contexts; and second, our classifiers themselves may be biased either due to implementation or class definition, despite our efforts to ensure that this was not the case during the data collection and validation phases. Due to both of these effects, our filter may remove more images of women than men, which changes the gender ratio that the model observes in training.

They fix this by re-weighting the filtered training data so that the mix of categories the model sees during training matches the mix in the original, unfiltered dataset.
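
A minimal sketch of that re-weighting idea (the function name and the use of coarse category labels are my assumptions, not OpenAI’s code): give each surviving image a sampling weight equal to its category’s pre-filter frequency divided by its post-filter frequency, so the filtered data is sampled back toward the original category mix.

```python
from collections import Counter

def category_weights(labels_before, labels_after):
    """Weight per category = original frequency / post-filter frequency."""
    before, after = Counter(labels_before), Counter(labels_after)
    n_before, n_after = len(labels_before), len(labels_after)
    return {
        cat: (before[cat] / n_before) / (after[cat] / n_after)
        for cat in after
    }

# Example: the filter removed proportionally more images labeled "woman",
# so that category is up-weighted back toward its original share.
weights = category_weights(
    ["woman"] * 500 + ["man"] * 500,   # labels before filtering
    ["woman"] * 350 + ["man"] * 480,   # labels after filtering
)
# weights["woman"] ≈ 1.19, weights["man"] ≈ 0.86
```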

Third, they needed to prevent image regurgitation to avoid IP and privacy issues. They found that most regurgitated images (a) were simple vector graphics; and (b) had many near-duplicates in the training set. As a result, these images were easier for the model to memorize. So they de-duplicated images with a clustering algorithm.

To test the effect of deduplication on our models, we trained two models with identical hyperparameters: one on the full dataset, and one on the deduplicated version of the dataset. . . . Surprisingly, we found that human evaluators slightly preferred the model trained on deduplicated data, suggesting that the large amount of redundant images in the dataset was actually hurting performance.
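
A minimal sketch of deduplication by clustering (an illustration of the general idea only; the encoder, cluster count, and distance threshold are assumptions): embed every image, cluster the embeddings, and search for near-duplicates only within each cluster so the pairwise comparison stays tractable.

```python
import numpy as np
from sklearn.cluster import KMeans

def deduplicate(embeddings, n_clusters=100, threshold=0.1):
    """Return indices of images to keep after dropping near-duplicates.

    `embeddings` is an (n_images, dim) array from some image encoder;
    two images closer than `threshold` (Euclidean) count as duplicates.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    keep = []
    for cluster in range(n_clusters):
        members = np.where(labels == cluster)[0]
        kept = []  # representatives already kept within this cluster
        for i in members:
            if all(np.linalg.norm(embeddings[i] - embeddings[j]) > threshold for j in kept):
                kept.append(i)
        keep.extend(kept)
    return sorted(keep)
```

Restricting the comparison to images in the same cluster is what makes this scale; comparing every image against every other image would be quadratic in the dataset size.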

Given the obviously impressive results, this is an instructive set of techniques for AI model bias mitigation.

UK IPO suggests copyright exception for text and data mining

The United Kingdom’s Intellectual Property Office has concluded a study on “how AI should be dealt with in the patent and copyright systems.”

For text and data mining, we plan to introduce a new copyright and database exception which allows TDM for any purpose. Rights holders will still have safeguards to protect their content, including a requirement for lawful access.

Consultation outcome / Artificial Intelligence and IP: copyright and patents

They also considered copyright protection for computer-generated works without a human author, and patent protection for AI-devised inventions. But they suggest no changes in the law for these latter two areas.

Facebook settles housing discrimination lawsuit

In 2019, Facebook was sued for housing discrimination because their machine learning advertising algorithm functioned “just like an advertiser who intentionally targets or excludes users based on their protected class.”

They have now settled the lawsuit by agreeing to scrap the algorithm:

Under the settlement, Meta will stop using an advertising tool for housing ads (known as the “Special Ad Audience” tool) which, according to the complaint, relies on a discriminatory algorithm to find users who “look like” other users based on FHA-protected characteristics.  Meta also will develop a new system over the next six months to address racial and other disparities caused by its use of personalization algorithms in its ad delivery system for housing ads.  If the United States concludes that the new system adequately addresses the discriminatory delivery of housing ads, then Meta will implement the system, which will be subject to Department of Justice approval and court oversight.  If the United States concludes that the new system is insufficient to address algorithmic discrimination in the delivery of housing ads, then the settlement agreement will be terminated.

United States Attorney Resolves Groundbreaking Suit Against Meta Platforms, Inc., Formerly Known As Facebook, To Address Discriminatory Advertising For Housing

Government lawyers will need to approve Meta’s new algorithm, and Meta was fined $115,054, “the maximum penalty available under the Fair Housing Act.”

The DOJ’s press release states: “This settlement marks the first time that Meta will be subject to court oversight for its ad targeting and delivery system.”

Microsoft discontinues face, gender, and age analysis tools

Kashmir Hill for the NYT:

“We’re taking concrete steps to live up to our A.I. principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical A.I. group in 2018. “It’s going to be a huge journey.”

Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’

This coincides with Microsoft’s release of their Microsoft Responsible AI Standard, v2 (see also blog post).

Note, however, that these tools may have been useful for accessibility:

The age and gender analysis tools being eliminated — along with other tools to detect facial attributes such as hair and smile — could be useful to interpret visual images for blind or low-vision people, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.

Trade-offs everywhere.

People don’t reason well about robots

Andrew Keane Woods in the University of Colorado Law Review:

[D]octors continue to privilege their own intuitions over automated decision-making aids. Since Meehl’s time, a growing body of social psychology scholarship has offered an explanation: bias against nonhuman decision-makers…. As Jack Balkin notes, “When we talk about robots, or AI agents, or algorithms, we usually focus on whether they cause problems or threats. But in most cases, the problem isn’t the robots. It’s the humans.”

Robophobia

Making decisions that go against our own instincts is very difficult (see also the List of cognitive biases), and deferring to data and algorithms over our own intuition is no exception.

A major challenge of AI ethics is figuring out when to trust the AIs.

Andrew Keane Woods suggests (1) defaulting to the use of AIs; (2) anthropomorphizing machines to encourage us to treat them as fellow decision-makers; (3) educating against robophobia; and, perhaps most dramatically, (4) banning humans from the loop. 😲

AI model predicts who will become homeless

Emily Alpert Reyes for the LA Times:

It pulls data from eight county agencies to pinpoint whom to assist, looking at a broad range of data in county systems: Who has landed in the emergency room. Who has been booked in jail. Who has suffered a psychiatric crisis that led to hospitalization. Who has gotten cash aid or food benefits — and who has listed a county office as their “home address” for such programs, an indicator that often means they were homeless at the time.

A computer model predicts who will become homeless in L.A. Then these workers step in

That’s a lot of sensitive personal data. The word “privacy” does not appear in the article.

Data is of course exceptionally helpful in making sure money and resources are applied efficiently. (See also personalized advertising.)

This seems great, so… ok?

Blowing past the Turing Test

Nitasha Tiku for the Washington Post:

“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

The Google engineer who thinks the company’s AI has come to life

The actual Turing Test has been passed for quite some time, though it didn’t lead to a pronouncement of artificial sentience in the way Alan Turing himself envisioned.

But maybe we are now at the uncanny valley of sentience: it looks similar enough to make you feel uneasy.