Three teenagers set fire to a home in Denver because they believed someone who stole a phone lived there. Five members of a family died.
The police had video from a neighbor’s house showing three people in hooded sweatshirts and masks near the home at the time of the fire. But for weeks they had no further evidence.
Then the police subpoenaed cell tower data to see who was in the area. They got 7,000 devices, which they narrowed down by excluding neighbors and any device whose movement did not match that of a vehicle observed near the scene. Only 33 devices remained.
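The narrowing described above is, at bottom, two filters over device records: a set-difference against known locals, and a predicate requiring the device's pings to track the observed vehicle. A minimal sketch of that logic (all field names, thresholds, and data here are hypothetical, not from the actual investigation):

```python
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str
    timestamp: float   # seconds since some epoch
    lat: float
    lon: float

def narrow_devices(pings, resident_ids, vehicle_path, tolerance=0.002):
    """Keep only devices that (1) are not known residents/neighbors and
    (2) have a ping near each observed vehicle waypoint in time and space."""
    candidates = {}
    for p in pings:
        if p.device_id in resident_ids:   # filter 1: exclude locals
            continue
        candidates.setdefault(p.device_id, []).append(p)

    def matches_path(device_pings):
        # filter 2: every waypoint (t, lat, lon) needs a nearby ping
        for t, lat, lon in vehicle_path:
            near = [p for p in device_pings if abs(p.timestamp - t) < 300]
            if not any(abs(p.lat - lat) < tolerance and
                       abs(p.lon - lon) < tolerance for p in near):
                return False
        return True

    return sorted(d for d, ps in candidates.items() if matches_path(ps))
```

The striking part is how sharp these two crude filters are: they cut a 7,000-device haystack down to a few dozen.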
Then they went to Google:
[A] warrant to Google asked for any searches for the destroyed house’s address anytime in the two weeks before the fire. Google provided five accounts that made that search — including three accounts with email addresses that included [the suspect’s names].
One of the defendants has filed a motion to suppress the Google search evidence, and the EFF has filed an amicus brief in support:
Should the police be able to ask Google for the name of everyone who searched for the address of an abortion provider in a state where abortions are now illegal? Or who searched for the drug mifepristone? What about people who searched for gender-affirming healthcare providers in a state that has equated such care with child abuse? Or everyone who searched for a dispensary in a state that has legalized cannabis but where the federal government still considers it illegal?
The United Kingdom’s Intellectual Property Office has concluded a study on “how AI should be dealt with in the patent and copyright systems.”
For text and data mining, we plan to introduce a new copyright and database exception which allows TDM for any purpose. Rights holders will still have safeguards to protect their content, including a requirement for lawful access.
They also considered copyright protection for computer-generated works without a human author, and patent protection for AI-devised inventions. But they suggest no changes in the law for these latter two areas.
A subset of Network Advertising Initiative companies have voluntarily agreed that they will not use location data associated with “sensitive points of interest,” which include:
Places of religious worship
Places that may be used to infer an LGBTQ+ identification
Places that may be used to infer engagement with explicit sexual content, material, or acts
Places primarily intended to be occupied by children under 16
Domestic abuse shelters, including rape crisis centers
Welfare or homeless shelters and halfway houses
Dependency or addiction treatment centers
Medical facilities that cater predominantly to sensitive conditions, such as cancer centers, HIV/AIDS, fertility or abortion clinics, mental health treatment facilities, or emergency room trauma centers
Places that may be used to infer refugee or immigrant status, such as refugee or immigration centers and immigration services
Credit repair, debt services, bankruptcy services, or payday lending institutions
Temporary places of assembly such as political rallies, marches, or protests, during the times that such rallies, marches, or protests take place
New cybersecurity laws are slowly being passed, mostly around reporting and coordination:
The Better Cybercrime Metrics Act directs the Justice Department to improve data on cybercrimes, including establishing a new reporting category in the National Incident-Based Reporting System specifically for federal, state and local cybercrime reports.
“For hackers, state and local governments are an attractive target — we must increase support to these entities so that they can strengthen their systems and better defend themselves from harmful cyber-attack,” Rep. Joe Neguse (D-Colo.), who introduced the bill, said in a statement after the House’s passage.
In 2019, Facebook was sued for housing discrimination because its machine learning advertising algorithm functioned “just like an advertiser who intentionally targets or excludes users based on their protected class.”
They have now settled the lawsuit by agreeing to scrap the algorithm:
Under the settlement, Meta will stop using an advertising tool for housing ads (known as the “Special Ad Audience” tool) which, according to the complaint, relies on a discriminatory algorithm to find users who “look like” other users based on FHA-protected characteristics. Meta also will develop a new system over the next six months to address racial and other disparities caused by its use of personalization algorithms in its ad delivery system for housing ads. If the United States concludes that the new system adequately addresses the discriminatory delivery of housing ads, then Meta will implement the system, which will be subject to Department of Justice approval and court oversight. If the United States concludes that the new system is insufficient to address algorithmic discrimination in the delivery of housing ads, then the settlement agreement will be terminated.
“We’re taking concrete steps to live up to our A.I. principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical A.I. group in 2018. “It’s going to be a huge journey.”
Note, however, that these tools may have been useful for accessibility:
The age and gender analysis tools being eliminated — along with other tools to detect facial attributes such as hair and smile — could be useful to interpret visual images for blind or low-vision people, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.
[D]octors continue to privilege their own intuitions over automated decision-making aids. Since Meehl’s time, a growing body of social psychology scholarship has offered an explanation: bias against nonhuman decision-makers…. As Jack Balkin notes, “When we talk about robots, or AI agents, or algorithms, we usually focus on whether they cause problems or threats. But in most cases, the problem isn’t the robots. It’s the humans.”
A major challenge of AI ethics is figuring out when to trust the AIs.
Andrew Keane Woods suggests (1) defaulting to use of AIs; (2) anthropomorphizing machines to encourage us to treat them as fellow decision-makers; (3) educating against robophobia; and perhaps most dramatically (4) banning humans from the loop. 😲
It pulls data from eight county agencies to pinpoint whom to assist, looking at a broad range of data in county systems: Who has landed in the emergency room. Who has been booked in jail. Who has suffered a psychiatric crisis that led to hospitalization. Who has gotten cash aid or food benefits — and who has listed a county office as their “home address” for such programs, an indicator that often means they were homeless at the time.
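The cross-agency matching described in that passage amounts to joining per-person indicator flags across separate datasets and triaging by how many systems flag the same person. A minimal sketch (the agency names, record fields, and scoring rule are hypothetical illustrations, not the county's actual system):

```python
from collections import defaultdict

def build_indicators(agency_records):
    """agency_records: dict mapping an indicator name (ER visit, jail
    booking, psychiatric hold, benefits with a county 'home address', ...)
    to the person IDs flagged by that agency's system."""
    flags = defaultdict(set)
    for indicator, person_ids in agency_records.items():
        for pid in person_ids:
            flags[pid].add(indicator)
    return flags

def prioritize(flags, min_indicators=2):
    """Rank people by how many agency systems flag them -- a crude
    proxy for the county's 'whom to assist' triage."""
    ranked = sorted(flags.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [pid for pid, inds in ranked if len(inds) >= min_indicators]
```

The design choice that matters here is the join key: linking the same person across eight agencies' records is itself an error-prone inference, and every mismatch shifts who gets flagged.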
Some individuals who purchased gas and paid higher prices as a result of the shutdown sued Colonial for negligence (among other things) under Georgia law.
The District Court for the Northern District of Georgia has now dismissed that lawsuit:
Plaintiffs provide no Georgia statutory or common law authority for the proposition that industry standards impose a duty of care to protect against cyberattacks generally, nor do they provide support that the particular industry standards they allege have been recognized by Georgia courts.
June 17, 2022 Order Granting Motion to Dismiss at 11-12 [N.D. Ga., Case 1:21-cv-02098-MHC]
And because plaintiffs could not allege exposure of personal data or any other violation of statute or legal duty, the complaint was dismissed.
Now if Colonial had said it was good at cybersecurity, and then events suggested it was not in fact good at cybersecurity, it would definitely have drawn a few shareholder derivative suits and maybe even an SEC investigation. See Matt Levine (“everything is securities fraud”).
But there is no inherent duty to be good at cybersecurity. (Yet.)