Banks and landlords want to overturn federal rules on housing algorithms

Landlords and lenders are pushing the Department of Housing and Urban Development to make it easier for businesses to use automated tools that discriminate against prospective tenants. Under a new proposal that just finished its public comment period, HUD suggested raising the bar for some legal challenges, making discrimination cases less likely to succeed.
HUD's proposed rule adds a new burden-shifting framework that would require plaintiffs to plead five specific elements to make a prima facie case that “a challenged practice actually or predictably results in a disparate impact on a protected class of persons . . . .” Current regulations permit complaints against such practices “even if the practice was not motivated by discriminatory intent.” The new rule would continue to allow those complaints, but would let defendants rebut a claim at the pleading stage by asserting that the plaintiff has not alleged facts sufficient to support a prima facie case.
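For a sense of what “disparate impact” can mean in practice, fairness audits of automated screening tools often start from a simple statistic: the ratio of approval rates between groups. The sketch below is purely illustrative, with hypothetical group names and numbers; the four-fifths threshold is a rule of thumb borrowed from EEOC employment guidance, not the legal test in the HUD rule.

```python
# Illustrative only: a common statistical proxy for "disparate impact"
# used in algorithmic-fairness audits. This is NOT the legal standard
# in the HUD rule; the groups and approval counts are hypothetical.

def approval_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants the screening tool approves."""
    return approved / applicants

# Hypothetical outcomes from an automated tenant-screening tool.
rates = {
    "group_a": approval_rate(approved=700, applicants=1000),  # 0.70
    "group_b": approval_rate(approved=420, applicants=1000),  # 0.42
}

# Adverse impact ratio: approval rate of the less-favored group
# divided by that of the most-favored group.
ratio = rates["group_b"] / rates["group_a"]  # ~0.60

# Four-fifths rule of thumb (from EEOC employment guidance, not a
# HUD standard): a ratio below 0.8 is often treated as preliminary
# evidence of disparate impact.
print(f"adverse impact ratio: {ratio:.2f}")
print("possible disparate impact" if ratio < 0.8 else "no flag raised")
```

Under the proposed framework, a gap like this would not by itself get a plaintiff past the pleading stage; the complaint would also have to satisfy the new elements described below.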
One new requirement is that the plaintiff plead that the practice is “arbitrary, artificial, and unnecessary.” This effectively introduces a balancing test even when a practice has a discriminatory impact. (A balancing test is already somewhat present in Supreme Court precedent, and the rule purports to follow that precedent.) As a result, if the challenged practice nevertheless serves a “legitimate objective,” the defendant may rebut the claim at the pleading stage.
The net result of the proposed rule would be to make it easier for new technologies, especially artificial intelligence, to pass muster under housing discrimination law. If a technology serves a “legitimate objective,” it may not run afoul of HUD rules even if it has a disparate impact on a protected class of persons.
This is not theoretical. HUD sued Facebook for housing discrimination earlier this year.