To prevent the worst outcomes, the A.C.L.U. offered a range of recommendations governing the use of video analytics in the public and private sectors.
No governmental entity should be allowed to deploy video analytics without legislative approval, public notification and a review of a system’s effects on civil rights, it said. Individuals should also know what kind of information is recorded and analyzed, have access to data collected about them, and have a way to challenge or correct inaccuracies.
To prevent abuses, video analytics should not be used to collect identifiable information en masse or merely for seeking out “suspicious” behavior, the A.C.L.U. said. Data collected should also be handled with care and systems should make decisions transparently and in ways that don’t carry legal implications for those tracked, the group said.
Businesses should be governed by similar guidelines and should be transparent in how they use video analytics, the group said. Regulations governing them should balance constitutional protections, including the rights to privacy and free expression.
Bruce Schneier advocates for a technological “pause” to allow policy to catch up:
[U]biquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives — when the surveillance system gets it wrong — will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change.
The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating that both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow.
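The “cheat” is conceptually simple: the robot never predicts anything. Its high-speed vision system classifies the human’s hand shape milliseconds before the throw completes — faster than a human can perceive — and it simply plays the counter. A minimal sketch of that decision logic (my illustration, not the researchers’ code; the hard part in reality is the millisecond-scale vision, not this lookup):

```python
# The winning counter to each throw. Once the robot has recognized the
# opponent's gesture mid-throw, "winning" is just a table lookup.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def robot_move(recognized_human_throw: str) -> str:
    """Return the throw that beats the gesture recognized mid-throw."""
    return COUNTER[recognized_human_throw]

# Because the robot reacts to the human's actual throw rather than
# guessing, every round is a win -- which is exactly why it isn't
# really playing Rock Paper Scissors at all.
for human_throw in ("rock", "paper", "scissors"):
    assert robot_move(human_throw) != human_throw
```

The interesting engineering is entirely in the perception loop: the robot only needs a head start of a few milliseconds to turn a game of chance into a deterministic one.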
US Customs revealed the name of a hacked subcontractor (presumably accidentally) in the title of a Word document:
A contractor for US Customs and Border Protection has been breached, leaking photos and other sensitive data, the agency announced on Monday. Initially described as “traveler photos,” many of the images seem to be pictures of traveler license plates, likely taken from cars at an automotive port of entry.
Customs has not named the contractor involved in the breach, but a Washington Post article noted that the announcement included a Word document with the name Perceptics, a provider of automated license plate readers used at a number of southern ports of entry.
This was inevitable, but it is worth noting the first time a country has responded to an alleged cyber attack with a kinetic attack:
The Israel Defense Force says that it stopped an attempted cyber attack launched by Hamas over the weekend, and retaliated with an airstrike against the building in Gaza where it says the attack originated. It’s believed to be the first time that a military has retaliated with physical violence in real time against a cyberattack.
It’s also worth noting, as The Verge comments, that the physical response did not appear strictly necessary: “Given that the IDF admitted that it had halted the attack prior to the airstrike, the question is now whether or not the response was appropriate.”
It’s easy to write about this particular event. It is surely another thing to experience it.
Over and over again, researchers have documented easily found groups of hackers and scammers offering their services on Facebook pages. Researchers at Cisco Talos just documented this again:
In all, Talos has compiled a list of 74 groups on Facebook whose members promised to carry out an array of questionable cyber dirty deeds, including the selling and trading of stolen bank/credit card information, the theft and sale of account credentials from a variety of sites, and email spamming tools and services. In total, these groups had approximately 385,000 members.
These Facebook groups are quite easy to locate for anyone possessing a Facebook account. A simple search for groups containing keywords such as “spam,” “carding,” or “CVV” will typically return multiple results. Of course, once one or more of these groups has been joined, Facebook’s own algorithms will often suggest similar groups, making new criminal hangouts even easier to find.
They aren’t even hiding, and Facebook’s automated systems helpfully suggest other criminals you might also like. This is a serious problem for all big online communities. YouTube recently had to deal with disgusting child exploitation issues that its algorithms helped create as well.
Most services complain that it is hard to stamp out destructive behavior. (But see Pinterest.) Yet when their own algorithms are grouping and recommending similar content, it seems that automatically addressing this is well within their technical capabilities. Criminal services should not be openly advertised on Facebook. But apparently there’s no incentive to do anything about it. Cue the regulators.
I’m proposing a law that expands criminal liability to any corporate executive who negligently oversees a giant company causing severe harm to U.S. families. We all agree that any executive who intentionally breaks criminal laws and leaves a trail of smoking guns should face jail time. But right now, they can escape the threat of prosecution so long as no one can prove exactly what they knew, even if they were willfully negligent.
If top executives knew they would be hauled out in handcuffs for failing to reasonably oversee the companies they run, they would have a real incentive to better monitor their operations and snuff out any wrongdoing before it got out of hand.
The bill itself is pretty short. Here’s a summary:
Focuses on executives in big business. Applies to any executive officer of a corporation with more than $1B in annual revenue. Definition of executive officer is same as under traditional federal regulations, plus anyone who “has the responsibility and authority to take necessary measures to prevent or remedy violations.”
Makes execs criminally liable for a lot of things. Makes it criminal for any executive officer “to negligently permit or fail to prevent” any crime under Federal or State law, or any civil violation that “affects the health, safety, finances, or personal data” of at least 1% of the population of any state or the US.
Penalty. Convicted executives go to prison for up to a year, or up to three years on subsequent offenses.
This is pretty breathtaking in its sweep of criminal liability. It criminalizes negligence. And it applies that negligence standard to any civil violation that “affects” the health, safety, finances, or personal data of at least 1% of a state.
Under this standard every single executive at Equifax, Facebook, Yahoo, Target, etc. risks jail for up to a year. Just read this list. Will be interesting to see where this goes.
It is very hard for technologists to give up the idea of absolute cybersecurity. Their mindset is naturally attracted to the binary secure/insecure classification. They are also used to the idea of security being fragile. They are not used to thinking that even a sieve can hold water to an extent adequate for many purposes. The dominant mantra is that “a chain is only as strong as its weakest link.” Yet that is probably not the appropriate metaphor. It is better to think of a net. Although it has many holes, it can often still perform adequately for either catching fish or limiting the inflow of birds or insects.
This is a much better metaphor for thinking about cybersecurity and risk in general.
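A toy calculation makes the net metaphor concrete (my example, not Odlyzko’s): if several imperfect, independent defensive layers sit in series, an attack has to slip through every one, so the residual risk shrinks multiplicatively even when each individual layer is mediocre.

```python
def residual_risk(catch_rates):
    """Probability an attack evades every layer, assuming the layers
    fail independently. Each catch rate is the fraction of attacks
    that layer stops on its own."""
    risk = 1.0
    for p in catch_rates:
        risk *= (1.0 - p)
    return risk

# Three leaky layers, each catching only 70% of attacks, still stop
# all but about 2.7% of them: 0.3 * 0.3 * 0.3 = 0.027.
print(residual_risk([0.7, 0.7, 0.7]))
```

The independence assumption is generous — real defenses often share failure modes — but the point stands: a system built from holes can still be a perfectly serviceable net.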
And it’s helpful that criminals tend to be just as self-interested in cyberspace:
Most criminals, even among those on the extreme edge of the stupidity spectrum, have no interest in destroying the system they are abusing. They just want to exploit it, to extract value for themselves out of it.
An amusing and instructive example of illicit cyber behavior that maintains the functioning of the system is provided by the ransomware criminals. Studies have documented the high level of “customer care” they typically provide. They tend to give expert assistance to victims who do pay up but have difficulty restoring their computers to their original state. After all, those criminals do want to establish “reputations” that will induce future victims to believe that payment of the demanded ransom will give them back control of their system and enable them to go on with their lives and jobs.
Models of self-interest have very high predictive ability everywhere.
Two years ago the President signed an order to hire 15,000 new border agents. Accenture Federal Services picked up the contract to “recruit, vet and hire” 7,500 of those officers and was promptly paid $60.7 million to do so over 5 years. To date, the company has hired 33 such officers.