Some companies agree to not use location data from “sensitive points of interest”

A subset of Network Advertising Initiative companies have voluntarily agreed that they will not use location data associated with “sensitive points of interest,” which include:

Places of religious worship

Correctional facilities

Places that may be used to infer an LGBTQ+ identification

Places that may be used to infer engagement with explicit sexual content, material, or acts

Places primarily intended to be occupied by children under 16

Domestic abuse shelters, including rape crisis centers

Welfare or homeless shelters and halfway houses

Dependency or addiction treatment centers

Medical facilities that cater predominantly to sensitive conditions, such as cancer centers, HIV/AIDS clinics, fertility or abortion clinics, mental health treatment facilities, or emergency room trauma centers

Places that may be used to infer refugee or immigrant status, such as refugee or immigration centers and immigration services

Credit repair, debt services, bankruptcy services, or payday lending institutions

Temporary places of assembly such as political rallies, marches, or protests, during the times that such rallies, marches, or protests take place

Military bases


The announcement follows increasing public concern that location data brokers might intentionally or inadvertently provide data on individuals visiting abortion clinics.

More US federal cybersecurity laws

New cybersecurity laws are slowly being passed, mostly around reporting and coordination:

  1. The Better Cybercrime Metrics Act directs the Justice Department to improve data on cybercrimes, including establishing a new reporting category in the National Incident-Based Reporting System specifically for federal, state and local cybercrime reports.
  2. The Federal Rotational Cyber Workforce Program Act allows cybersecurity professionals to rotate through federal agencies to enhance their expertise.
  3. The State and Local Government Cybersecurity Act directs the federal government to coordinate more with state and local governments on cybersecurity.

“For hackers, state and local governments are an attractive target — we must increase support to these entities so that they can strengthen their systems and better defend themselves from harmful cyber-attack,” Rep. Joe Neguse (D-Colo.), who introduced the bill, said in a statement after the House’s passage.

Biden signs cyber bills into law

Facebook settles housing discrimination lawsuit

In 2019, Facebook was sued for housing discrimination because its machine learning advertising algorithm functioned “just like an advertiser who intentionally targets or excludes users based on their protected class.”

Facebook has now settled the lawsuit by agreeing to scrap the algorithm:

Under the settlement, Meta will stop using an advertising tool for housing ads (known as the “Special Ad Audience” tool) which, according to the complaint, relies on a discriminatory algorithm to find users who “look like” other users based on FHA-protected characteristics.  Meta also will develop a new system over the next six months to address racial and other disparities caused by its use of personalization algorithms in its ad delivery system for housing ads.  If the United States concludes that the new system adequately addresses the discriminatory delivery of housing ads, then Meta will implement the system, which will be subject to Department of Justice approval and court oversight.  If the United States concludes that the new system is insufficient to address algorithmic discrimination in the delivery of housing ads, then the settlement agreement will be terminated.

United States Attorney Resolves Groundbreaking Suit Against Meta Platforms, Inc., Formerly Known As Facebook, To Address Discriminatory Advertising For Housing

Government lawyers will need to approve Meta’s new algorithm, and Meta was fined $115,054, “the maximum penalty available under the Fair Housing Act.”

The DOJ’s press release states: “This settlement marks the first time that Meta will be subject to court oversight for its ad targeting and delivery system.”

Microsoft discontinues face, gender, and age analysis tools

Kashmir Hill for the NYT:

“We’re taking concrete steps to live up to our A.I. principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical A.I. group in 2018. “It’s going to be a huge journey.”

Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’

This coincides with Microsoft’s release of their Microsoft Responsible AI Standard, v2 (see also blog post).

Note, however, that these tools may have been useful for accessibility:

The age and gender analysis tools being eliminated — along with other tools to detect facial attributes such as hair and smile — could be useful to interpret visual images for blind or low-vision people, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.

Trade-offs everywhere.

People don’t reason well about robots

Andrew Keane Woods in the University of Colorado Law Review:

[D]octors continue to privilege their own intuitions over automated decision-making aids. Since Meehl’s time, a growing body of social psychology scholarship has offered an explanation: bias against nonhuman decision-makers…. As Jack Balkin notes, “When we talk about robots, or AI agents, or algorithms, we usually focus on whether they cause problems or threats. But in most cases, the problem isn’t the robots. It’s the humans.”


Making decisions that go against our own instincts is very difficult (see also List of cognitive biases), and relying on data and algorithms is no different.

A major challenge of AI ethics is figuring out when to trust the AIs.

Andrew Keane Woods suggests (1) defaulting to use of AIs; (2) anthropomorphizing machines to encourage us to treat them as fellow decision-makers; (3) educating against robophobia; and perhaps most dramatically (4) banning humans from the loop. 😲

AI model predicts who will become homeless


It pulls data from eight county agencies to pinpoint whom to assist, looking at a broad range of data in county systems: Who has landed in the emergency room. Who has been booked in jail. Who has suffered a psychiatric crisis that led to hospitalization. Who has gotten cash aid or food benefits — and who has listed a county office as their “home address” for such programs, an indicator that often means they were homeless at the time.

A computer model predicts who will become homeless in L.A. Then these workers step in

That’s a lot of sensitive personal data. The word “privacy” does not appear in the article.

Data is of course exceptionally helpful in making sure money and resources are applied efficiently. (See also personalized advertising.)

This seems great, so… ok?

No inherent legal duty to be good at cybersecurity

Colonial operates a large oil pipeline and had a very bad ransomware attack in 2021 that shut down the pipeline for five days.

Some individuals who purchased gas and paid higher prices as a result of the shutdown sued Colonial for negligence (among other things) under Georgia law.

The District Court for the Northern District of Georgia has now dismissed that lawsuit:

Plaintiffs provide no Georgia statutory or common law authority for the proposition that industry standards impose a duty of care to protect against cyberattacks generally, nor do they provide support that the particular industry standards they allege have been recognized by Georgia courts.

June 17, 2022 Order Granting Motion to Dismiss at 11-12 [N.D. GA, Case 1:21-cv-02098-MHC]

And because plaintiffs could not allege exposure of personal data or any other violation of statute or legal duty, the complaint was dismissed.

Now if Colonial had said it was good at cybersecurity, and then events suggested it was not in fact good at cybersecurity, it would definitely have drawn a few shareholder derivative suits and maybe even an SEC investigation. See Matt Levine (“everything is securities fraud”).

But there is no inherent duty to be good at cybersecurity. (Yet.)

Frustration with GDPR bottleneck in Ireland

Vincent Manancourt writing for Politico:

So far, officials at the EU level have put up a dogged defense of what has become one of their best-known rulebooks, including by publicly pushing back against calls to punish Ireland for what activists say is a failure to bring Big Tech’s data-hungry practices to heel.

Now, one of the European Union’s key voices on data protection regulation is breaking the Brussels taboo of questioning the bloc’s flagship law’s performance so far.

“I think there are parts of the GDPR that definitely have to be adjusted to the future reality,” European Data Protection Supervisor Wojciech Wiewiórowski told POLITICO in an interview earlier this month.

What’s wrong with the GDPR?

The main complaint appears to be that the Irish Data Protection Commission (which handles most big-tech privacy complaints) is overworked and slow.

Otherwise there appears to be a sense that things haven’t quite worked out as hoped, whatever that means.

The Privacy “Duty of Loyalty”

The draft American Data Privacy and Protection Act has a section called “duty of loyalty.” What the heck is that?

In the draft it’s a collection of specific requirements to minimize data collection and prohibit the use and transfer of social security numbers, precise geolocation, etc. See Sections 101, 102, 103 in the Discussion Draft.

But the “duty of loyalty” as a data privacy concept is broader. It means that data collectors must use data in a way that benefits users and places their interests above the interests of making a profit, much like a duty of loyalty (or a fiduciary duty) that a lawyer must have to their client.

Neil M. Richards and Woodrow Hartzog explain the concept in a 2021 paper:

Put simply, under our approach, loyalty would manifest itself primarily as a prohibition on designing digital tools and processing data in a way that conflicts with a trusting party’s best interests. Data collectors bound by such a duty of loyalty would be obligated to act in the best interests of the people exposing their data and engaging in online experiences, but only to the extent of their exposure. 

A Duty of Loyalty for Privacy Law at 966.

Richards and Hartzog suggest that a broad duty of loyalty combined with specific prohibitions against especially troubling practices would work like other areas of regulation (e.g., “unfair and deceptive trade practices”).

But although the American Data Privacy and Protection Act refers to this concept, the broad duty of loyalty is not (yet) part of the draft.