AI Bias Bounties

Like bug bounties, but for bias in AI:

A similar problem exists in information security, and one solution gaining traction is the “bug bounty program”. Bug bounty programs allow security researchers and laypeople to submit exploits directly to the affected parties in exchange for compensation.

The market rate for security bounties at the average company on HackerOne ranges from \$100 to \$1,000. Bigger companies can pay more. In 2017, Facebook disclosed paying \$880,000 in bug bounties, with a minimum of \$500 per bounty. Google pays from \$100 to \$31,337 per exploit and paid \$3,000,000 in security bounties in 2016.

It seems reasonable to suggest that at least big companies with large market caps, which already have bounty-reporting infrastructure, attempt to reward and collaborate with those who find bias in their software, rather than have them take it to the press in frustration, with no compensation for their efforts.

Bias Bounty Programs as a Method of Combatting Bias in AI

AI researchers submitting to the NeurIPS conference must now address ethical concerns

Khari Johnson, writing for VentureBeat:

For the first time ever, researchers who submit papers to NeurIPS, one of the biggest AI research conferences in the world, must now state the “potential broader impact of their work” on society as well as any financial conflict of interest, conference organizers told VentureBeat.

NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest

NeurIPS, or the Conference on Neural Information Processing Systems, is the largest AI conference in the world.

Facial recognition tech in Moscow

First London, now Moscow.

Moscow is the latest major city to introduce live facial recognition cameras to its streets, with Mayor Sergei Sobyanin announcing that the technology is operating “on a mass scale” earlier this month, according to a report from Russian business paper Vedomosti.

. . . . .

Moscow started trialing live facial recognition in 2017, using technology from Russian firm NtechLab to scan footage from the Russian capital’s network of 160,000 CCTV cameras. The company is best known for its FindFace software, which it launched in 2016 and let users match anyone in a picture to their profile on VK, known as Russia’s Facebook.

The app was criticized by some, particularly as it was used to dox and harass sex workers, and NtechLab eventually shut it down in favor of enterprise and government work.

Moscow rolls out live facial recognition system with an app to alert police

Heart prints are a new biometric

While the world debates the utility and ethics of existing facial recognition technology, new biometrics are constantly being developed. They are likely to replace facial recognition in the long term.

This system, dubbed Jetson, is able to measure, from up to 200 metres away, the minute vibrations induced in clothing by someone’s heartbeat. Since hearts differ in both shape and contraction pattern, the details of heartbeats differ, too. The effect of this on the fabric of garments produces what Ideal Innovations, a firm involved in the Jetson project, calls a “heartprint”—a pattern reckoned sufficiently distinctive to confirm someone’s identity.

To measure heartprints remotely Jetson employs gadgets called laser vibrometers. These work by detecting minute variations in a laser beam that has been reflected off an object of interest. They have been used for decades to study things like bridges, aircraft bodies, warship cannons and wind turbines—searching for otherwise-invisible cracks, air pockets and other dangerous defects in materials. However, only in the past five years or so has laser vibrometry become good enough to distinguish the vibrations induced in fabric by heartprints.

People can now be identified at a distance by their heartbeat

This is astonishing technology and will surely improve. In the long term your unique identity will be readily available to anyone who cares.
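However the vibration signal is captured, the identification step reduces to comparing a measured waveform against enrolled templates and picking the best match. Here is a minimal sketch of that matching step using simple normalized correlation on made-up signals; the template names, heart rates, and matching method are all assumptions for illustration, since Jetson’s actual pipeline is not public:

```python
import numpy as np

def identify(signal: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Match a measured vibration trace against enrolled heartprint
    templates using normalized correlation; return the best match."""
    def normalize(x: np.ndarray) -> np.ndarray:
        x = x - x.mean()
        return x / (np.linalg.norm(x) + 1e-12)

    s = normalize(signal)
    scores = {name: float(normalize(t) @ s) for name, t in templates.items()}
    return max(scores, key=scores.get)

# Toy data: heart rates and contraction patterns differ between people,
# so a noisy measurement correlates far more strongly with the right template.
t = np.linspace(0, 5, 500)
templates = {
    "alice": np.sin(2 * np.pi * 1.1 * t),  # ~66 beats per minute
    "bob": np.sin(2 * np.pi * 1.4 * t),    # ~84 beats per minute
}
measured = np.sin(2 * np.pi * 1.4 * t) + 0.3 * np.random.default_rng(0).standard_normal(500)
print(identify(measured, templates))  # "bob"
```

Real heartprints are distinguished by waveform shape as well as rate, but the toy version shows why the signature is hard to hide: it rides on top of whatever you are wearing.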

London police adopt facial recognition, permanently

Adam Satariano, writing for the NYT:

The technology London plans to deploy goes beyond many of the facial recognition systems used elsewhere, which match a photo against a database to identify a person. The new systems, created by the company NEC, attempt to identify people on a police watch list in real time with security cameras, giving officers a chance to stop them in the specific location.

London Police Amp Up Surveillance With Real-Time Facial Recognition

The objections voiced in the article are about potential inaccuracies in the system. But that will change over time. I don’t see many objections to the power of the system itself.

As Europe considers banning facial recognition technology, and police departments everywhere look to it to improve policing and safety, this may be the technology fight of the 2020s.

Prediction: security wins over privacy.

German Data Ethics Commission insists AI regulation is necessary

The German Data Ethics Commission issued a 240-page report with 75 recommendations for regulating data, algorithmic systems, and AI. It is one of the strongest statements on ethical AI to date, and it favors explicit regulation.

The Data Ethics Commission holds the view that regulation is necessary, and cannot be replaced by ethical principles.

Opinion of the Data Ethics Commission – Executive Summary at 7 (emphasis original).

The report divides ethical considerations into concerns about either data or algorithmic systems. For data, the report suggests that rights associated with the data will play a significant role in the ethical landscape. For example, ensuring that individuals provide informed consent for use of their personal data addresses a number of significant ethical issues.

For algorithmic systems, however, the report suggests that an AI system might have no connection to the affected individuals. As a result, even non-personal data, for which there are no associated rights, could be used in an unethical manner. The report concludes that regulation is necessary to the extent there is a potential for harm.

The report identifies five levels of algorithmic system criticality. Applications with zero or negligible potential for harm would face no regulation. The regulatory burden would increase as the potential for harm increases, up to a total ban. For applications with serious potential for harm, the report recommends constant oversight.

The framework appears to be a good candidate for future ethical AI regulation in Europe, and perhaps (by default) the world.

Biased algorithms are easier to fix

Sendhil Mullainathan in an excellent essay for the NYT:

Humans are inscrutable in a way that algorithms are not. Our explanations for our behavior are shifting and constructed after the fact. To measure racial discrimination by people, we must create controlled circumstances in the real world where only race differs. For an algorithm, we can create equally controlled circumstances just by feeding it the right data and observing its behavior.

Biased Algorithms Are Easier to Fix Than Biased People

This is a fascinating complement to the concern that deep learning algorithms are a black box and we do not understand how they work. Even so, they are much easier to study than humans. Algorithms are tractable in a way that humans are not.
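Mullainathan’s point can be made concrete. Auditing a model is a controlled experiment you can run at will: hold every input fixed, flip only the protected attribute, and count how often the decision changes. A minimal sketch, with a hypothetical model and made-up applicant records (none of this is from the essay itself):

```python
def audit_flip_rate(model, applicants, attr="race", values=("A", "B")):
    """Counterfactual audit: flip only the protected attribute and
    measure how often the model's decision changes."""
    flips = 0
    for a in applicants:
        counterfactual = dict(a)
        counterfactual[attr] = values[1] if a[attr] == values[0] else values[0]
        if model(a) != model(counterfactual):
            flips += 1
    return flips / len(applicants)

# Toy model that (deliberately) discriminates on race above an income cutoff.
def toy_lender(applicant):
    return applicant["income"] > 50_000 and applicant["race"] == "A"

applicants = [
    {"race": "A", "income": 60_000},
    {"race": "B", "income": 70_000},
    {"race": "A", "income": 40_000},
]
print(audit_flip_rate(toy_lender, applicants))  # ~0.67: two of three decisions flip on race alone
```

Running the same experiment on a human loan officer would require a field study with matched testers; on a model it is a few lines of code.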

At its core, this essay is an argument for AI regulation, and an argument that such regulation will actually work.

HUD rules changing on use of automated decision making in housing markets

Landlords and lenders are pushing the Department of Housing and Urban Development to make it easier for businesses to discriminate against possible tenants using automated tools. Under a new proposal that just finished its public comment period, HUD suggested raising the bar for some legal challenges, making discrimination cases less likely to succeed.

Banks and landlords want to overturn federal rules on housing algorithms

HUD’s proposed rule adds a new burden-shifting framework that would require plaintiffs to plead five specific elements to make a prima facie case that “a challenged practice actually or predictably results in a disparate impact on a protected class of persons . . . .” Current regulations permit complaints against such practices “even if the practice was not motivated by discriminatory intent.” The new rule continues to permit such complaints, but it would let defendants rebut a claim at the pleading stage by asserting that the plaintiff has not alleged facts sufficient to support a prima facie case.

One new requirement is that the plaintiff plead that the practice is “arbitrary, artificial, and unnecessary.” This introduces a kind of balancing test even if the practice has discriminatory impact. (A balancing test is already somewhat present in Supreme Court precedent, and the rule purports to be following this precedent.) As a result, if the challenged practice nevertheless serves a “legitimate objective,” the defendant may rebut the claim at the pleading stage.

The net result of the proposed rule will be to make it easier for new technologies, especially artificial intelligence technologies, to pass muster under housing discrimination laws. If the technology has a legitimate objective, it may not run afoul of HUD rules despite having a disparate impact on a protected class of persons.

This is not theoretical. HUD sued Facebook for housing discrimination earlier this year.

AIs make a lot of guesses and we should know that

One of the most important AI ethics tasks is to educate developers, and especially users, about what AIs can and cannot do well. AI systems do amazing things, and users mostly assume those things are done accurately based on a few demonstrations. For example, the police assume facial recognition systems accurately tag bad guys, and that license plate databases contain accurate lists of stolen cars. But these systems are brittle, and an excellent example of this is the fun new ImageNet Roulette [update 2/22/20: no longer available] web tool put together by artist and researcher Trevor Paglen.

ImageNet Roulette is a provocation designed to help us see into the ways that humans are classified in machine learning systems. It uses a neural network trained on the “Person” categories from the ImageNet dataset which has over 2,500 labels used to classify images of people. 

ImageNet Roulette (via The Verge)

The service claims not to keep any uploaded photos, so if you trust them, you can upload a webcam image of yourself and see how the internet classifies your face.

Of course no human would look at a random image of another human devoid of context and attempt to assign a description such as “pipe smoker” or “newspaper reader.” We would say, “I don’t know. It just looks like a person.”

But AIs aren’t that smart yet. They don’t know what they can’t know. So ImageNet Roulette calculates the probability that an image falls into each of its categories, and then it outputs the highest-probability description. It’s a shot in the dark. You might think it is seeing something deep, but nope. It has 2,500 labels, and it has to apply one. I apparently look like a sociologist.
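The mechanics are worth seeing. A classifier produces a score for every label, and argmax picks a winner even when the scores are nearly uniform, that is, even when the model is effectively shrugging. A minimal sketch with made-up labels and scores (not ImageNet Roulette’s actual code):

```python
import numpy as np

# Hypothetical labels standing in for ImageNet's 2,500 "Person" categories.
LABELS = ["pipe smoker", "newspaper reader", "swimmer", "sociologist"]

def classify(logits: np.ndarray) -> tuple[str, float]:
    """Softmax the raw scores and return the single most probable label,
    however uncertain the model actually is."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(probs.argmax())
    return LABELS[best], float(probs[best])

# Near-uniform scores still produce an answer: argmax must pick something.
label, p = classify(np.array([0.10, 0.12, 0.11, 0.13]))
print(label, round(p, 2))  # sociologist 0.25
```

With 2,500 labels instead of four, the winning probability can be minuscule and the output still looks authoritative. That gap between how confident the output looks and how confident the model actually is, is exactly what users need to be taught.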

UK court approves police use of facial recognition

In contrast to recent U.S. municipal decisions restricting government use of facial recognition technology, a UK court has ruled that police use of the technology does not violate any fundamental rights.

In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

Police Use of Facial Recognition Is Accepted by British Court

The UK is of course one of the most surveilled countries in the world.