The ACLU has some AI surveillance recommendations

Niraj Chokshi, writing for the New York Times:

To prevent the worst outcomes, the A.C.L.U. offered a range of recommendations governing the use of video analytics in the public and private sectors.

No governmental entity should be allowed to deploy video analytics without legislative approval, public notification and a review of a system’s effects on civil rights, it said. Individuals should know what kind of information is recorded and analyzed, have access to data collected about them, and have a way to challenge or correct inaccuracies, too.

To prevent abuses, video analytics should not be used to collect identifiable information en masse or merely for seeking out “suspicious” behavior, the A.C.L.U. said. Data collected should also be handled with care and systems should make decisions transparently and in ways that don’t carry legal implications for those tracked, the group said.

Businesses should be governed by similar guidelines and should be transparent in how they use video analytics, the group said. Regulations governing them should balance constitutional protections, including the rights to privacy and free expression.

How Surveillance Cameras Could Be Weaponized With A.I.

These recommendations appear to boil down to transparency and not tracking everyone all the time without a specific reason. Seems reasonable as a starting point.