First Kinetic Retaliation to Cyber Attack

This was inevitable, but it is worth noting the first time a country has responded to an alleged cyber attack with a kinetic attack:

The Israel Defense Force says that it stopped an attempted cyber attack launched by Hamas over the weekend, and retaliated with an airstrike against the building where it says the attack originated from in Gaza. It’s believed to be the first time that a military has retaliated with physical violence in real time against a cyberattack.

Israel launched an airstrike in response to a Hamas cyberattack

It’s also worth noting, as The Verge comments, that the physical response did not appear strictly necessary: “Given that the IDF admitted that it had halted the attack prior to the airstrike, the question is now whether or not the response was appropriate.”

It’s easy to write about this particular event. It is surely another thing to experience it.

It’s not hard to find criminals on Facebook

Over and over again, researchers have documented easily found groups of hackers and scammers offering their services on Facebook pages. Researchers at Cisco Talos just documented this again:

In all, Talos has compiled a list of 74 groups on Facebook whose members promised to carry out an array of questionable cyber dirty deeds, including the selling and trading of stolen bank/credit card information, the theft and sale of account credentials from a variety of sites, and email spamming tools and services. In total, these groups had approximately 385,000 members.

These Facebook groups are quite easy to locate for anyone possessing a Facebook account. A simple search for groups containing keywords such as “spam,” “carding,” or “CVV” will typically return multiple results. Of course, once one or more of these groups has been joined, Facebook’s own algorithms will often suggest similar groups, making new criminal hangouts even easier to find.

Hiding in Plain Sight

They aren’t even hiding, and Facebook’s automated systems helpfully suggest other criminals you might also like. This is a serious problem for all big online communities. YouTube recently had to deal with disgusting child exploitation issues that its algorithms helped create as well.

Most services complain that it is hard to stamp out destructive behavior. (But see Pinterest.) Yet when their own algorithms are grouping and recommending similar content, it seems that automatically addressing this is well within their technical capabilities. Criminal services should not be openly advertised on Facebook. But apparently there’s no incentive to do anything about it. Cue the regulators.
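To make the “well within their technical capabilities” point concrete: even a naive keyword match over group names would surface much of this, before any machine learning enters the picture. A minimal sketch in Python (the keyword list and group names are hypothetical illustrations, not Facebook’s actual systems):

```python
import re

# Hypothetical seed list based on the search terms Talos reported
# ("spam", "carding", "CVV"); a real filter would be far larger.
SUSPICIOUS_TERMS = ["spam", "carding", "cvv", "account credentials"]

PATTERN = re.compile("|".join(map(re.escape, SUSPICIOUS_TERMS)), re.IGNORECASE)

def flag_group(name: str, description: str = "") -> bool:
    """Flag a group whose name or description matches a fraud-related
    keyword. A production system would add human review, member-overlap
    signals, and the recommender's own similarity scores."""
    return bool(PATTERN.search(name) or PATTERN.search(description))

# Hypothetical group names, mimicking those described in the report.
for name in ["Spammer & Hacker Professional", "Fresh CVV Daily",
             "Neighborhood Gardening Club"]:
    print(f"{name!r}: {'FLAG' if flag_group(name) else 'ok'}")
```

If a blog reader can find these groups with a search box, a filter this crude running platform-side would too. The hard part is policy and follow-through, not detection.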

Elizabeth Warren and the Corporate Executive Accountability Act

Elizabeth Warren has introduced the Corporate Executive Accountability Act and is pushing it in a Washington Post Op-Ed:

I’m proposing a law that expands criminal liability to any corporate executive who negligently oversees a giant company causing severe harm to U.S. families. We all agree that any executive who intentionally breaks criminal laws and leaves a trail of smoking guns should face jail time. But right now, they can escape the threat of prosecution so long as no one can prove exactly what they knew, even if they were willfully negligent.

If top executives knew they would be hauled out in handcuffs for failing to reasonably oversee the companies they run, they would have a real incentive to better monitor their operations and snuff out any wrongdoing before it got out of hand.

Elizabeth Warren: Corporate executives must face jail time for overseeing massive scams

The bill itself is pretty short. Here’s a summary:

  • Focuses on executives in big business. Applies to any executive officer of a corporation with more than $1B in annual revenue. The definition of “executive officer” is the same as under existing federal regulations, plus anyone who “has the responsibility and authority to take necessary measures to prevent or remedy violations.”
  • Makes execs criminally liable for a lot of things. Makes it criminal for any executive officer “to negligently permit or fail to prevent” any crime under Federal or State law, or any civil violation that “affects the health, safety, finances, or personal data” of at least 1% of the population of any state or the US.
  • Penalty. Convicted executives face up to a year in prison, or up to three years for subsequent offenses.

This is pretty breathtaking in its sweep of criminal liability. It criminalizes negligence. And it applies that negligence standard to any civil violation that “affects” the health, safety, finances, or personal data of at least 1% of the population of any state.

Under this standard, every single executive at Equifax, Facebook, Yahoo, Target, etc. risks jail for up to a year. Just read this list. It will be interesting to see where this goes.

Degrees of Threat in Cybersecurity

Via Bruce Schneier, a paper discussing why cybersecurity is not very important:

It is very hard for technologists to give up the idea of absolute cybersecurity. Their mind set is naturally attracted to the binary secure/insecure classification. They are also used to the idea of security being fragile. They are not used to thinking that even a sieve can hold water to an extent adequate for many purposes. The dominant mantra is that “a chain is only as strong as its weakest link.” Yet that is probably not the appropriate metaphor. It is better to think of a net. Although it has many holes, it can often still perform adequately for either catching fish or limiting inflow of birds or insects.

This is a much better metaphor for thinking about cybersecurity and risk in general.
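A toy calculation (mine, not the paper’s) shows why the net beats the chain: imperfect layers multiply. If each independent layer stops an attack with probability p, an attack penetrates all n layers with probability (1 − p)^n:

```python
# Toy model of layered ("net-like") defense: each layer independently
# stops an attack with probability p_stop, so it leaks with 1 - p_stop.
def breach_probability(p_stop: float, layers: int) -> float:
    """Probability an attack slips through every layer."""
    return (1 - p_stop) ** layers

# Even mediocre 70%-effective layers compound quickly (illustrative only):
for n in (1, 2, 3, 5):
    print(f"{n} layer(s): {breach_probability(0.7, n):.3%}")
# One layer leaks 30% of attacks; five layers leak about 0.24%.
# No single link needs to be unbreakable for the net to hold.
```

Real layers are not fully independent, of course, but the direction of the arithmetic is the point: adequate, overlapping defenses beat the futile search for a perfect one.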

And it’s helpful that criminals tend to be just as self-interested in cyberspace:

Most criminals, even among those on the extreme edge of the stupidity spectrum, have no interest in destroying the system they are abusing. They just want to exploit it, to extract value for themselves out of it.

An amusing and instructive example of illicit cyber behavior that maintains the functioning of the system is provided by the ransomware criminals. Studies have documented the high level of “customer care” they typically provide. They tend to give expert assistance to victims who do pay up but have difficulty restoring their computers to the original state. After all, those criminals do want to establish “reputations” that will induce future victims to believe that payment of the demanded ransom will give them back control of their system and enable them to go on with their lives and jobs.

Models of self-interest have very high predictive ability everywhere.

Will software security improve?

Software is wildly insecure. Basically all software can be hacked with varying degrees of sophistication. The cheaper the software / device, the easier it is to hack. Some devices ship without any real attention to security at all. C’est la vie.

Here’s the thing: do we care? Sort of. But mostly not. And that’s because, as Daniel Miessler recently pointed out, the benefits of software (insecure or not) far outweigh the costs.

Everyone would like, in theory, to have more secure software. But security costs talent, time, and therefore money. We don’t get secure software because we mostly don’t want to pay for it.

Will that change? Should that change? There’s a lot of talk around regulating cybersecurity, but if we’ve collectively decided we don’t need it then perhaps we don’t. We may see cybersecurity regulation focus on preventing black swan events like entire sections of the internet going down or people dying or elections being hacked. But perhaps that’s where the regulation should end. Software is amazing and cheap and, so far, no one dies. Success!

All your big data are belong to us

Maybe China hacked Marriott. Maybe not.

What made the Starwood attack different was the presence of passport numbers, which could make it far easier for an intelligence service to track people who cross borders. That is particularly important in this case: In December, The New York Times reported that the attack was part of a Chinese intelligence gathering effort that, reaching back to 2014, also hacked American health insurers and the Office of Personnel Management, which keeps security clearance files on millions of Americans.

Marriott Concedes 5 Million Passport Numbers Lost to Hackers Were Not Encrypted

But in a world where there are massive repositories of data on massive numbers of people (cue “IN A WORLD…” dramatic narration), that data is going to be used by governments. That’s just how this is going to work.

(The use of the post title meme probably dates me.)

Privacy vs Security

It’s an old topic, long discussed, and for that reason somewhat boring / repetitive. But I think new intelligent video analytics and facial recognition technology are about to make this extremely relevant again.

There’s no question in my mind that we, as a society, are going to trade public privacy (e.g., being monitored in public all the time) for safety. If the DC Sniper incident happens again, we’ll have drones over every major city. But two points:

  1. The privacy of our homes continues to be relatively secure, apart from the voice-controlled and IoT devices we voluntarily invite inside. Will that change? I don’t see any safety rationale for it.
  2. Will the additional security change the debate on gun control? If we as a society (i.e. the government) know exactly where you are and what you’re doing every time you step outside, does it matter that you have an arsenal inside your home? So long as it stays there…

And I often think of the aphorism attributed to Ben Franklin:

Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.

Still relevant? Of course, like many old quotes, this one is often thrown about without any understanding of its context.

On balance, I lean towards freedom to deploy technology and catch lawbreakers. And freedom to own firearms. Safety and liberty?

Cybersecurity Ethics

At an MCLE today, I got this hypothetical:

Your Company learns that a bug in one of your apps could have provided bad guys with access to confidential user information, but you do not have evidence that anyone actually obtained such information. You’ve fixed the bug. Arguably, privacy statutes require the Company to make disclosure to users and/or regulators. Management makes decision not to disclose, because no indication of actual breach. Ethical issue?

The audience of lawyers split 75% / 25% in live polling, with the majority calling this an ethical issue. Fascinating.

Two points: (1) I think the right answer is no. If the statute only “arguably” requires disclosure (i.e., reasonable people disagree), then deciding not to disclose is a legal judgment call, not an ethical issue. But also (2) this scenario is almost certainly true all the time for every company with confidential user data and internet-facing systems. Should they all be disclosing all the time? Is that even realistic?

Just take a look at the National Vulnerability Database, do a blank search, and look at the security bugs listed today. Awful security bugs are being found, published, and fixed every day for every major application everywhere. If you have confidential user information and internet-facing applications, you may face this hypothetical every single day.
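If you want to watch this firsthand, the NVD publishes a JSON API. A quick sketch pulling the past week of CVEs (endpoint and parameter names reflect the v2.0 API as I understand it, and may change; error handling omitted):

```python
import datetime
import requests  # third-party: pip install requests

# NVD CVE API 2.0 -- https://nvd.nist.gov/developers/vulnerabilities
API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

now = datetime.datetime.now(datetime.timezone.utc)
params = {
    "pubStartDate": (now - datetime.timedelta(days=7)).isoformat(timespec="seconds"),
    "pubEndDate": now.isoformat(timespec="seconds"),
    "resultsPerPage": 20,  # the API paginates; keep the sample small
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
data = resp.json()

print(f"{data['totalResults']} CVEs published in the past week")
for item in data.get("vulnerabilities", []):
    cve = item["cve"]
    print(cve["id"], "-", cve["descriptions"][0]["value"][:80])
```

Most days a query like that returns dozens of new entries, many of them in mainstream software. That steady drumbeat is exactly why the “arguably must disclose” hypothetical never stops being live.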