Real-time counter-surveillance tool for Tesla vehicles

An engineer has built a counter-surveillance tool on top of the hardware and software stack for Tesla vehicles:

It uses the existing video feeds created by Tesla’s Sentry Mode features and uses license plate and facial detection to determine if you are being followed.

Scout does all that in real-time and sends you notifications if it sees anything suspicious.

Turn your Tesla into a CIA-like counter-surveillance tool with this hack

A video demonstration is embedded in the article.

This is a reminder that intelligent surveillance tools are going to be available at massive scale to even private citizens, not just the government. As governments track citizens, will citizens track government actors and individual police officers? What will we do with all of this data?
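Scout's detection logic isn't published, but the core idea is simple: a plate that keeps reappearing over a long enough stretch of driving is suspicious. A minimal sketch in Python, with all names and thresholds hypothetical:

```python
from collections import defaultdict

# Hypothetical "am I being followed?" check: flag any plate whose
# sightings span more than a follow window. Scout's real pipeline
# (Sentry Mode video -> plate/face detection -> alerting) is not public.
FOLLOW_WINDOW_SECONDS = 30 * 60  # sightings spread over 30+ minutes
MIN_SIGHTINGS = 3                # seen at least this many times

sightings = defaultdict(list)    # plate -> list of sighting timestamps

def record_sighting(plate: str, timestamp: float) -> bool:
    """Record a plate detected in a video frame; return True if the
    plate now looks like a tail (enough sightings over a long span)."""
    times = sightings[plate]
    times.append(timestamp)
    return (len(times) >= MIN_SIGHTINGS
            and times[-1] - times[0] >= FOLLOW_WINDOW_SECONDS)

# Usage: feed in plates from an upstream license plate reader.
for t in (0.0, 1200.0, 2400.0):
    if record_sighting("ABC1234", t):
        print("Possible tail: ABC1234 sighted repeatedly")
```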

Rachel Thomas on Surveillance

Rachel Thomas with an excellent essay on 8 Things You Need to Know About Surveillance:

I frequently talk with people who are not that concerned about surveillance, or who feel that the positives outweigh the risks. Here, I want to share some important truths about surveillance:

1. Surveillance can facilitate human rights abuses and even genocide
2. Data is often used for different purposes than why it was collected
3. Data often contains errors
4. Surveillance typically operates with no accountability
5. Surveillance changes our behavior
6. Surveillance disproportionately impacts the marginalized
7. Data privacy is a public good
8. We don’t have to accept invasive surveillance

The issues are of course more complex than anyone can summarize in a brief essay, but Thomas’ points on data often containing errors (3) and the frequent lack of processes to remedy those errors (4) deserve special emphasis. We tend to assume that automated systems are accurate and work well simply because they have worked well in our own experience. But for many people they do not, and this has a dramatic impact on their lives.

Peter Thiel goes after Google on AI

Peter Thiel in a NYT op-ed:

A.I.’s military power is the simple reason that the recent behavior of America’s leading software company, Google — starting an A.I. lab in China while ending an A.I. contract with the Pentagon — is shocking. As President Barack Obama’s defense secretary Ash Carter pointed out last month, “If you’re working in China, you don’t know whether you’re working on a project for the military or not.”

Good for Google, Bad for America

And he’s not wrong. But it’s also not possible to prevent China from ultimately obtaining this technology.

Thiel is correct in the short term, but also dangerously short-sighted. What’s the plan here? Further isolation and an arms race? Liberal democracies need to focus on global frameworks (rule of law, free speech, free trade, free movement of people and information) that prevent war and human misery. This is an easy, opportunistic rhetorical point, not a strategy.

A More Nuanced Encryption Policy Debate

Bruce Schneier on a speech by Attorney General Barr on encryption policy:

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how: an approach we have derisively named “nerd harder.”

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having, not the fake one about whether or not we can have both security and surveillance.

Attorney General William Barr on Encryption Policy

Schneier still believes that keeping everyone secure is more important than providing backdoors to law enforcement, but at least everyone is starting to acknowledge the reality that law enforcement backdoors weaken security.

The ACLU has some AI surveillance recommendations

Niraj Chokshi, writing for the New York Times:

To prevent the worst outcomes, the A.C.L.U. offered a range of recommendations governing the use of video analytics in the public and private sectors.

No governmental entity should be allowed to deploy video analytics without legislative approval, public notification and a review of a system’s effects on civil rights, it said. Individuals should know what kind of information is recorded and analyzed, have access to data collected about them, and have a way to challenge or correct inaccuracies, too.

To prevent abuses, video analytics should not be used to collect identifiable information en masse or merely for seeking out “suspicious” behavior, the A.C.L.U. said. Data collected should also be handled with care and systems should make decisions transparently and in ways that don’t carry legal implications for those tracked, the group said.

Businesses should be governed by similar guidelines and should be transparent in how they use video analytics, the group said. Regulations governing them should balance constitutional protections, including the rights to privacy and free expression.

How Surveillance Cameras Could Be Weaponized With A.I.

These recommendations appear to boil down to transparency and not tracking everyone all the time without a specific reason. Seems reasonable as a starting point.

The Complexity of Ubiquitous Surveillance

Bruce Schneier advocates for a technological “pause” to allow policy to catch up:

[U]biquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives — when the surveillance system gets it wrong — will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change.

Computers and Video Surveillance

On the other hand, isn’t it a good thing we can spot police officers with militant, racist views?

Rock Paper Scissors robot wins 100% of the time

Via Schneier on Security, this is old but I hadn’t seen it before:

The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but it can beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating which both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow.

Rock Paper Scissors robot wins 100% of the time

Having super-human reaction times is a nice feature, and this certainly isn’t the only application.
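The hard part is the millisecond-scale vision and actuation; the game logic itself reduces to a lookup. A trivial sketch of the decision step, assuming an upstream recognizer that classifies the human's half-formed throw:

```python
# All of the difficulty is in the high-speed camera and hand; the
# "strategy" is a lookup that counters the recognized human move.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def respond(detected_human_move: str) -> str:
    """Return the move that beats what the vision system recognized."""
    return COUNTER[detected_human_move]

assert respond("rock") == "paper"
assert respond("scissors") == "rock"
```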

US Customs leaks metadata in leak announcement

US Customs revealed the name of a hacked subcontractor (presumably accidentally) in the title of a Word document:

A contractor for US Customs and Border Protection has been breached, leaking photos and other sensitive data, the agency announced on Monday. Initially described as “traveler photos,” many of the images seem to be pictures of traveler license plates, likely taken from cars at an automotive port of entry.

Customs has not named the contractor involved in the breach, but a Washington Post article noted that the announcement included a Word document with the name Perceptics, a provider of automated license plate readers used at a number of southern ports of entry.

License plate photos compromised after Customs contractor breach

So they can’t secure the data. And they can’t even secure the announcement of their failure to secure the data.

If you think the government can actually secure your data (or even their own hacking tools), you are fooling yourself.

Update: Customs and Border Protection has suspended all contracts with this supplier as a result of the breach.
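It isn’t clear whether the name rode along in the file name or in the document’s embedded metadata, but Word files carry plenty of the latter. A .docx is just a zip archive, so a hypothetical pre-publication check could dump the core properties before anything goes out:

```python
import zipfile

# A .docx file is a zip archive; title, author, and similar fields live
# in docProps/core.xml. This hypothetical check prints them so a name
# like a contractor's can't ride along unnoticed in an announcement.
def dump_docx_metadata(path: str) -> None:
    with zipfile.ZipFile(path) as docx:
        print(docx.read("docProps/core.xml").decode("utf-8"))

dump_docx_metadata("announcement.docx")  # hypothetical file name
```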

First Kinetic Retaliation to Cyber Attack

This was inevitable, but it is worth noting as the first time a country has responded to an alleged cyberattack with a kinetic attack:

The Israel Defense Force says that it stopped an attempted cyber attack launched by Hamas over the weekend, and retaliated with an airstrike against the building where it says the attack originated from in Gaza. It’s believed to be the first time that a military has retaliated with physical violence in real time against a cyberattack.

Israel launched an airstrike in response to a Hamas cyberattack

It’s also worth noting, as The Verge comments, that the physical response did not appear strictly necessary: “Given that the IDF admitted that it had halted the attack prior to the airstrike, the question is now whether or not the response was appropriate.”

It’s easy to write about this particular event. It is surely another thing to experience it.

It’s not hard to find criminals on Facebook

Over and over again, researchers have documented easily found groups of hackers and scammers offering their services on Facebook pages. Researchers at Cisco Talos just documented this again:

In all, Talos has compiled a list of 74 groups on Facebook whose members promised to carry out an array of questionable cyber dirty deeds, including the selling and trading of stolen bank/credit card information, the theft and sale of account credentials from a variety of sites, and email spamming tools and services. In total, these groups had approximately 385,000 members.

These Facebook groups are quite easy to locate for anyone possessing a Facebook account. A simple search for groups containing keywords such as “spam,” “carding,” or “CVV” will typically return multiple results. Of course, once one or more of these groups has been joined, Facebook’s own algorithms will often suggest similar groups, making new criminal hangouts even easier to find.

Hiding in Plain Sight

They aren’t even hiding, and Facebook’s automated systems helpfully suggest other criminals you might also like. This is a serious problem for all big online communities. YouTube recently had to deal with disgusting child exploitation issues that its algorithms helped create as well.

Most services complain that it is hard to stamp out destructive behavior. (But see Pinterest.) Yet when their own algorithms are grouping and recommending similar content, it seems that automatically addressing this is well within their technical capabilities. Criminal services should not be openly advertised on Facebook. But apparently there’s no incentive to do anything about it. Cue the regulators.
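For a sense of how low the bar is: Talos found these groups with plain keyword searches, and a platform-side first pass could be nearly as simple. A hypothetical sketch using the keywords from the Talos report:

```python
# First-pass filter over group names and descriptions, using the
# keywords Talos searched for. Function and data names are hypothetical;
# a real system would need far more than substring matching.
SUSPECT_KEYWORDS = {"spam", "carding", "cvv"}

def flag_group(name: str, description: str) -> bool:
    """Flag a group whose name or description contains a suspect keyword."""
    text = f"{name} {description}".lower()
    return any(keyword in text for keyword in SUSPECT_KEYWORDS)

assert flag_group("Fresh CVV Market", "buy stolen cards") is True
assert flag_group("Gardening Tips", "tomatoes and basil") is False
```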