The Department of Housing and Urban Development sued Facebook for housing discrimination. The allegations are fascinating and, although we mostly knew all of this before (based on reporting by ProPublica), I think most people do not realize how precisely advertisements on Facebook can be targeted. For example:
Respondent [Facebook] has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area. Respondent also provides drop-down menus and search boxes to exclude or include (i.e., limit the audience of an ad exclusively to) people who share specified attributes. Respondent has offered advertisers hundreds of thousands of attributes from which to choose, for example to exclude “women in the workforce,” “moms of grade school kids,” “foreigners,” “Puerto Rico Islanders,” or people interested in “parenting,” “accessibility,” “service animal,” “Hijab Fashion,” or “Hispanic Culture.” Respondent also has offered advertisers the ability to limit the audience of an ad by selecting to include only those classified as, for example, “Christian” or “Childfree.”
Complaint at paragraph 14.
But Facebook’s system doesn’t just enable this kind of micro-targeting. It also refuses to show ads to users whom its system judges unlikely to interact with them, even if the advertiser wants to reach those users:
Even if an advertiser tries to target an audience that broadly spans protected class groups, Respondent’s ad delivery system will not show the ad to a diverse audience if the system considers users with particular characteristics most likely to engage with the ad. If the advertiser tries to avoid this problem by specifically targeting an underrepresented group, the ad delivery system will still not deliver the ad to those users, and it may not deliver the ad at all.
Complaint at paragraph 19.
Thus, the allegation is that the system functions “just like an advertiser who intentionally targets or excludes users based on their protected class.”
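To make that mechanism concrete, here is a minimal sketch of my own (not anything from the complaint, and certainly not Facebook’s actual code) showing how a delivery system that simply ranks users by predicted engagement can end up excluding one group entirely, even when the advertiser targets everyone. The group labels, scores, and audience sizes are all hypothetical.

```python
# Illustrative sketch only: a toy engagement-ranked delivery loop.
# Group labels, predicted-engagement scores, and sizes are made up.

# A broadly targeted audience spanning two hypothetical groups, where the
# platform's model happens to score one group as more likely to engage.
audience = (
    [{"group": "A", "predicted_engagement": 0.08} for _ in range(500)]
    + [{"group": "B", "predicted_engagement": 0.02} for _ in range(500)]
)

def deliver(impressions, users):
    """Show the ad to whichever users the model scores highest."""
    ranked = sorted(users, key=lambda u: u["predicted_engagement"], reverse=True)
    return ranked[:impressions]

shown = deliver(400, audience)
counts = {g: sum(1 for u in shown if u["group"] == g) for g in ("A", "B")}
print(counts)  # {'A': 400, 'B': 0} -- group B never sees the ad, despite broad targeting
```

No one in this toy example typed “exclude group B,” but the outcome is the same as if someone had.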
There is an AI angle to this as well. The complaint specifically references Facebook’s “machine learning and other prediction techniques” as enabling this kind of targeting. And while folks may disagree on whether this is “AI” or just sophisticated statistical analysis, it is a concrete allegation of real-world harm caused by big data and computation. And I think it is an interesting case study in whether we need extra laws to prevent AI harm.
Here is a hypothesis: our existing laws prohibiting various types of harm will work just as well, or better, in the AI context. Housing discrimination is already illegal, whether you do it subjectively and intentionally or objectively by sophisticated computation. And in fact, the latter is easier to prove. The AI takes input and outputs a result. That result is objective and (with the help of legal process) transparent. The AI doesn’t rationalize its decisions or try to explain away its hidden bias because it fears social judgment. If it operates in a biased manner, we will see it and we can fix it.
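Part of why that output is so examinable is that, once discovery produces the delivery logs, checking for a disparity is simple arithmetic. A toy example, again my own sketch with made-up numbers; the 80% threshold is borrowed from the EEOC “four-fifths” rule of thumb from employment law and is used here only as an illustrative benchmark, not as the legal standard for housing claims:

```python
# Illustrative sketch: a simple disparity check over hypothetical delivery logs.
# The 0.8 threshold echoes the EEOC "four-fifths" heuristic and is used here
# only as an example benchmark, not as the governing standard under the FHA.

# Hypothetical counts of who was eligible to see the ad vs. who actually saw it.
eligible = {"group_A": 10_000, "group_B": 10_000}
shown = {"group_A": 4_000, "group_B": 900}

rates = {g: shown[g] / eligible[g] for g in eligible}   # delivery rate per group
ratio = min(rates.values()) / max(rates.values())       # disadvantaged vs. favored group

print(rates)               # {'group_A': 0.4, 'group_B': 0.09}
print(ratio, ratio < 0.8)  # 0.225 True -- a disparity worth explaining
```

A human decision-maker’s state of mind has to be reconstructed from testimony; a delivery system’s behavior can be tabulated.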
There is a lot of anxiety around whether our laws are sufficient for the AI future we envision. Will product liability laws be sufficient to determine who is at fault when a self-driving vehicle crashes? Will anti-discrimination laws be sufficient to disincentivize AI-facilitated bias? Yes, yes, I think they will. Perhaps the law is more robust than we fear.