Attempted theft of trade secrets is also illegal

Camilla Hrdy points out that you can’t sue for trade secret theft if the information stolen is not actually protected as a trade secret. But you can charge someone with attempted trade secret theft even if the information wasn’t a trade secret. Which means you can go to jail for attempted trade secret theft even if you couldn’t be sued for it. That is a weird inversion.

The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an “attempt” crime means the key question is not just whether Levandowski stole actual trade secrets. It is whether he attempted to do so while having the appropriate state of mind.

The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3) and (4), provide that “[w]hoever, with intent to convert a trade secret … to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will injure any owner of that trade secret, knowingly—steals… obtains… possesses… [etcetera]” a trade secret, or “attempts to” do any of those things, “shall… be fined under this title or imprisoned not more than 10 years, or both…”

Anthony Levandowski: Is Being a Jerk a Crime?

Does automating the boring stuff leave only the unpleasant stuff?

Fred Benenson, writing for The Atlantic, is concerned that AI automation will leave only the most difficult and unpleasant tasks for humans:

What’s less understood is that artificial intelligence will transform higher-skill positions, too—in ways that demand more human judgment rather than less. And that could be a problem. As AI gets better at performing the routine tasks traditionally done by humans, only the hardest ones will be left for us to do. But wrestling with only difficult decisions all day long is stressful and unpleasant. Being able to make at least some easy calls, such as allowing Santorini onto Kickstarter, can be deeply satisfying.

“Decision making is very cognitively draining,” the author and former clinical psychologist Alice Boyes told me via email, “so it’s nice to have some tasks that provide a sense of accomplishment but just require getting it done and repeating what you know, rather than everything needing very taxing novel decision making.”

AI Is Coming for Your Favorite Menial Tasks

He recognizes that many professions (e.g., lawyers!) may welcome automation of the boring stuff. But he’s particularly concerned about content moderators.

But we may find that as jobs get harder, the benefits get better.

Survey suggests most Americans support police use of facial recognition technology

According to the Pew Research Center, a full 56 percent said that they trust police and officials to use these technologies responsibly. That goes for situations in which no consent is given: About 59 percent said it is acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces.

Police Use of Facial Recognition is Just Fine, Say Most Americans

Black and Hispanic adults approve at lower rates. See the study for details.

UK court approves police use of facial recognition

In contrast to recent U.S. municipal decisions restricting government use of facial recognition technology, a UK court has ruled that police use of the technology does not violate any fundamental rights.

In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is acceptable and does not violate privacy and human rights.

Police Use of Facial Recognition Is Accepted by British Court

The UK is of course one of the most surveilled countries in the world.

The astronomy community has identified the spy satellite revealed by Trump

President Trump tweeted an apparently classified image of an Iranian launch pad on August 30. He has the right to do so. But he probably did not expect everything that the tweet would reveal.

Now astronomers have easily identified the exact satellite that took the image. The circular launch platform appears as an ellipse in the photo; by measuring its semi-major and semi-minor axes, they were able to determine the viewing angle, which matched precisely with a satellite known as USA 224, previously of unknown capability. Google Earth shows the launch pad as about 60 meters in diameter, which suggests a satellite resolution of about 10 centimeters per pixel. That resolution is very impressive, and was also previously unknown.
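The arithmetic is easy to reproduce. Below is a back-of-the-envelope sketch in Python; the pixel measurements are hypothetical stand-ins (the article reports only the pad’s size and the inferred resolution), so treat the numbers as illustrative.

```python
# Rough version of the astronomers' calculation. The pixel counts are
# hypothetical; only the 60 m pad diameter comes from the article.
import math

pad_diameter_m = 60.0                      # launch pad size, per Google Earth

# A circle viewed off-nadir projects to an ellipse; the ratio of the
# minor to major axis gives the viewing angle off vertical.
major_px, minor_px = 600.0, 430.0          # hypothetical measurements
view_angle = math.degrees(math.acos(minor_px / major_px))
print(f"viewing angle off vertical: {view_angle:.1f} degrees")

# Ground sample distance: known pad size divided by the pixels it spans.
gsd_m = pad_diameter_m / major_px
print(f"implied resolution: {gsd_m * 100:.0f} cm per pixel")  # ~10 cm
```

Matching a viewing angle like that against the known orbits of imaging satellites at the time of the tweet is what pointed to USA 224.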

Unreal.

The detail in the image is surprising, even to satellite imagery experts. In an interview with NPR, Melissa Hanham of the Open Nuclear Network in Vienna said, “… I did not believe [the image] could come from a satellite.” Hanham also said that “I imagine adversaries are going to take a look at this image and reverse-engineer it to figure out how the sensor itself works and what kind of post-production techniques they’re using.”

Thanks to Trump, We’ve Got a Better Idea of the Capabilities of US Surveillance Satellites

Language models are improving quickly

Aristo, an AI developed by the Allen Institute for Artificial Intelligence, passed an eighth-grade science test with a score of 90 percent. It is built on top of BERT.

At Google, researchers built a system called Bert that combed through thousands of Wikipedia articles and a vast digital library of romance novels, science fiction and other self-published books.

Through analyzing all that text, Bert learned how to guess the missing word in a sentence. By learning that one skill, Bert soaked up enormous amounts of information about the fundamental ways language is constructed. And researchers could apply that knowledge to other tasks.

The Allen Institute built their Aristo system on top of the Bert technology. They fed Bert a wide range of questions and answers. In time, it learned to answer similar questions on its own.

A Breakthrough for A.I. Technology: Passing an 8th-Grade Science Test
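The missing-word skill described above is masked language modeling, and it is easy to try yourself. Here is a minimal sketch using the Hugging Face transformers library and a public BERT checkpoint; that choice is mine for convenience, not a claim about which implementation Google or the Allen Institute actually used.

```python
# A minimal demonstration of BERT-style masked-word prediction.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT guesses the token hidden behind [MASK] from context alone.
for prediction in fill_mask("The doctor prescribed a [MASK] for the infection."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Soaking up that kind of contextual knowledge from enormous amounts of text is what lets a downstream system like Aristo answer questions it has never seen.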

14 iPhone 0-Days

Bruce Schneier:

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Massive iPhone Hack Targets Uyghurs

AI for military also means compassion

Phenomenal essay by Lucas Kunce, a U.S. Marine who served in Iraq and Afghanistan, responding to news that 4,600 Google employees signed a petition urging the company to refuse to build weapons technology:

People frequently threw objects of all sizes at our vehicles in anger and protest. Aside from roadside bombs, the biggest threat at the time, particularly in crowded areas, was an armor-piercing hand-held grenade. It looked like a dark soda can with a handle protruding from the bottom. Or, from a distance and with only an instant to decide, it looked just like many of the other objects that were thrown at us. 

One day in Falluja, at the site of a previous attack, an Iraqi man threw a dark oblong object at one of the vehicles in my sister team. The Marine in the turret, believing it was an armor-piercing grenade, shot the man in the chest. The object turned out to be a shoe.

[. . . . .]

When I think about A.I. and weapons development, I don’t imagine Skynet, the Terminator, or some other Hollywood dream of killer robots. I picture the Marines I know patrolling Falluja with a heads-up display like HoloLens, tied to sensors and to an A.I. system that can process data faster and more precisely than humanly possible — an interface that helps them identify an object as a shoe, or an approaching truck as too light to be laden with explosives.

Dear Tech Workers, U.S. Service Members Need Your Help

Interview with John Shawe-Taylor, professor at University College London

I enjoyed this interview and especially the title: “Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair.”

What are some core problems or research areas you want to approach?

People are now solving problems just by throwing an enormous amount of computation and data at them and trying every possible way. You can afford to do that if you are a big company and have a lot of resources, but people in developing countries cannot afford the data or the computational resources. So the theoretical challenge, or the fundamental challenge, is how to develop methods that are better understood and therefore don’t need experiments with hundreds of variants to get things to work.

Another thing is that some of the problems with current datasets, especially in terms of the usefulness of these systems for different cultures, is that there is a cultural bias in the data that has been collected. It is Western data informed with the Western way of seeing and doing things, so to some extent having data from different cultures and different environments is going to help make things more useful. You need to learn from data that is more relevant to the task.

Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair

And of course:

“Solving” is probably too strong, but for addressing those problems, as I’ve said, the problem is that we don’t realise that they are the reflections of our own problems. We don’t realise how biased we are until we see an AI reproduce the same bias, and we see that it’s biased.

I chuckle a bit when I hear about biased humans going over biased data in the hopes of creating unbiased data. Bias is a really hard problem, and it’s always going to be with us in one form or another. Education and awareness are the most important tools for addressing it.

It is easy to be fooled if you do not understand how a model works

Benjamin Heinzerling writes in The Gradient that the “Clever Hans effect” is alive and well in natural language processing (NLP) deep learning models:

Of course, the problem of learners solving a task by learning the “wrong” thing has been known for a long time and is known as the Clever Hans effect, after the eponymous horse which appeared to be able to perform simple intellectual tasks, but in reality relied on involuntary cues given by its handler. Since the 1960s, versions of the tank anecdote tell of a neural network trained by the military to recognize tanks in images, but actually learning to recognize different levels of brightness due to one type of tank appearing only in bright photos and another type only in darker ones.

Less anecdotally, Viktoria Krakovna has collected a depressingly long list of agents following the letter, but not the spirit, of their reward function, with such gems as a video game agent learning to die at the end of the first level, since repeating that easy level gives a higher score than dying early in the harder second level. Two more recent, but already infamous, cases are an image classifier claimed to be able to distinguish faces of criminals from those of law-abiding citizens, but actually recognizing smiles, and a supposed “sexual orientation detector” which can be better explained as a detector of glasses, beards and eyeshadow.

NLP’s Clever Hans Moment has Arrived

BERT is a fantastic NLP model, but it’s not displaying deep understanding of the material. For certain tasks, at least, it is exploiting statistical correlations better than you can. And that makes it hard to see what it’s doing.
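To make the failure mode concrete, here is a toy sketch of my own (not from the article) in the spirit of the tank anecdote: a classifier that earns a near-perfect score by latching onto a spurious brightness cue, then collapses the moment the cue and the label come apart.

```python
# Toy Clever Hans classifier: the label is predictable from mean
# brightness in the training data, so the model never learns "tanks".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, bright_means_tank):
    # Each "image" is 64 noisy pixels; only mean brightness carries signal.
    labels = rng.integers(0, 2, n)              # 1 = tank, 0 = no tank
    brightness = np.where(labels == bright_means_tank, 0.7, 0.3)
    pixels = rng.normal(loc=brightness[:, None], scale=0.1, size=(n, 64))
    return pixels, labels

# Training set: tanks were photographed only on bright days.
X_train, y_train = make_images(2000, bright_means_tank=1)
# Test set: the correlation is reversed.
X_test, y_test = make_images(2000, bright_means_tank=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # near 1.0
print("test accuracy: ", clf.score(X_test, y_test))     # near 0.0
```

The model looks brilliant until the shortcut stops working, which is the pattern the article describes in NLP benchmarks.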

Reminds me of one of my favorite quotes, from Richard Feynman: “The first principle is that you must not fool yourself – and you are the easiest person to fool.”