AI-assisted book writing is here

Josh Dzieza, writing for The Verge:

“You are already an AI-assisted author,” Joanna Penn tells her students on the first day of her workshop. Do you use Amazon to shop? Do you use Google for research? “The question now is how can you be more AI-assisted, AI-enhanced, AI-extended.” 

The Great Fiction of AI

And the AI-generated output is mostly usable:

Eager to see what it could do, Lepp selected a 500-word chunk of her novel, a climactic confrontation in a swamp between the detective witch and a band of pixies, and pasted it into the program. Highlighting one of the pixies, named Nutmeg, she clicked “describe.” 

“Nutmeg’s hair is red, but her bright green eyes show that she has more in common with creatures of the night than with day,” the program returned. 

But there are downsides:

There were weirder misfires, too. Like when it kept saying the Greek god Apollo’s “eyes were as big as a gopher’s” or that “the moon was truly mother-of-pearl, the white of the sea, rubbed smooth by the groins of drowned brides.”

And probably some long-term issues when the author is paying less attention:

“I started going to sleep, and I wasn’t thinking about the story anymore. And then I went back to write and sat down, and I would forget why people were doing things. Or I’d have to look up what somebody said because I lost the thread of truth,” she said. 

But these programs are here to stay, and will only get better.

Prompt injection for content synthesis models

It turns out some text synthesis models, and specifically GPT-3, are likely vulnerable to “prompt injection,” which is instructing the model to disregard its “pre-prompts” which contain task instructions or safety measures.

For example, it’s common to use GPT-3 by “pre-prompting” the model with “Translate this text from English to German,” or “I am a friendly and helpful AI chatbot.” These pre-prompts are given before each user input as a way of setting up the user for success at a given task, or preventing the user from doing something different with the model.

But what if the user prompt tells the model to disregard its pre-prompt? That actually seems to work:

It’s also possible to coerce a model into leaking its pre-prompt:

Prompt injection attacks are already being used in the wild.

Getty Images bans upload of AI-generated content

James Vincent, writing for The Verge:

Getty Images has banned the upload and sale of illustrations generated using AI art tools like DALL-E, Midjourney, and Stable Diffusion. It’s the latest and largest user-generated content platform to introduce such a ban, following similar decisions by sites including NewgroundsPurplePort, and FurAffinity.

Getty Images CEO Craig Peters told The Verge that the ban was prompted by concerns about the legality of AI-generated content and a desire to protect the site’s customers.

Getty Images bans AI-generated content over fears of legal challenges

Getty Images is being appropriately cautious. AI image synthesis tools, being trained on the open internet, can be easily prompted into copyright violations.

Misplaced Faith in Computer Precision

Computers can be fantastically accurate. And humans have a tendency to assume that this accuracy means something.

For example, an automated license plate reader might flag the license plate in front of you as “stolen.” You look at the report, confirm the license plate in front of you, and arrest the driver. You may not consider that the report itself is wrong. Even if the technology works exactly as intended, it doesn’t necessarily mean what you assume it means.

Joe Posnanski suggests this kind of faith in computer precision may be unfairly impacting athletes as well:

Maybe you heard about the truly insane false-start controversy in track and field? Devon Allen — a wide receiver for the Philadelphia Eagles — was disqualified from the 110-meter hurdles at the World Athletics Championships a few weeks ago for a false start.

Here’s the problem: You can’t see the false start. Nobody can see the false start. By sight, Allen most definitely does not leave before the gun.

Checkmate

Allen’s reaction time was 0.099 seconds, just 1/1000th of a second under the “allowable limit” of 0.1 seconds.

Posnanski writes:

World Athletics has determined that it is not possible for someone to push off the block within a tenth of a second of the gun without false starting. They have science that shows it is beyond human capabilities to react that fast. Of course there are those (I’m among them) who would tell you that’s nonsense, that’s pseudoscience, there’s no way that they can limit human capabilities like that. There is science that shows it is humanly impossible to hit a fastball. There was once science that showed human beings could not run a four-minute mile.

The computer can tell you his reaction time was 0.099 seconds. But it can’t tell you what that means.

As we rely more and more on computers to make decisions, especially “artificially intelligent” computers, it will be critical to understand what they are telling us and what they are not.

Creative Commons raises questions about use of CC-licensed works to train AI’s

Creative Commons licenses typically put few constraints on the re-use of copyrighted material. And that flexibility has allowed AI’s to be trained on CC-licensed material, which sometimes surprises copyright holders.

In a new blog post, Creative Commons outlines the issue and states that it will “examine, throughout the year, the intersection of AI and open content.”

155 votes in a Twitter poll where the plurality selects “Depends” is… not a lot of guidance.

Clearview AI scores a PR win in the NYT

Kashmir Hill:

If Clearview AI, which is based in New York, hadn’t granted his lawyer special access to a facial recognition database of 20 billion faces, Mr. Conlyn might have spent up to 15 years in prison because the police believed he had been the one driving the car.

Clearview AI, Used by Police to Find Criminals, Is Now in Public Defenders’ Hands

Clearview allowed use of their facial recognition service to identify a good samaritan who had pulled Mr. Conlyn from the passenger side of the vehicle, thereby providing evidence he was not the driver.

AI image synthesis models may struggle with copyright

James Vincent, writing for The Verge:

Like most modern AI systems, Stable Diffusion is trained on a vast dataset that it mines for patterns and learns to replicate. In this case, that core of the training data is a huge package of 5 billion-plus pairs of images and text tags known as LAION-5B, all of which have been scraped from the public web. . . .

We know for certain that LAION-5B contains a lot of copyrighted content. An independent analysis of a 12 million-strong sample of the dataset found that nearly half the pictures contained were taken from just 100 domains. The most popular was Pinterest, constituting around 8.5 percent of the pictures sampled, while the next-biggest sources were sites known for hosting user-generated content (like Flickr, DeviantArt, and Tumblr) and stock photo sites like Getty Images and Shutterstock. In other words: sources that contain copyrighted content, whether from independent artists or professional photographers.

Anyone can use this AI art generator — that’s the risk

Vincent points out that Stable Diffusion even sometimes inserts the “Getty Images” watermark in its generated imagery. Not a good look.

AI that learns from the internet

Ben Thompson at Stratechery points out that new deep learning models are not requiring access to curated data in a way that would advantage large companies:

If not just data but clean data was presumed to be a prerequisite, then it seemed obvious that massively centralized platforms with the resources to both harvest and clean data — Google, Facebook, etc. — would have a big advantage.

. . . . .

To the extent that large language models (and I should note that while I’m focusing on image generation, there are a whole host of companies working on text output as well) are dependent not on carefully curated data, but rather on the Internet itself, is the extent to which AI will be democratized, for better or worse.

The AI Unbundling

This means the new AI models are relatively cheap and also more a reflection of internet content itself, “for better or worse.”

Different approaches to IP protection in the US and UK

 Jennifer Maisel, writing for LexBlog, reviews the differences between US and UK intellectual property protection for AI-generated works.

A chart is worth 1,000 words:

Will Divergent Copyright Laws Between the US and UK Influence Where You Do Business as an Artificial Intelligence Company?

Facebook does not know what data it has

Bruce Schneier, linking to an article in The Intercept about a court hearing in the Cambridge Analytica suit:

Facebook’s inability to comprehend its own functioning took the hearing up to the edge of the metaphysical. At one point, the court-appointed special master noted that the “Download Your Information” file provided to the suit’s plaintiffs must not have included everything the company had stored on those individuals because it appears to have no idea what it truly stores on anyone. Can it be that Facebook’s designated tool for comprehensively downloading your information might not actually download all your information? This, again, is outside the boundaries of knowledge.

“The solution to this is unfortunately exactly the work that was done to create the DYI file itself,” noted Zarashaw. “And the thing I struggle with here is in order to find gaps in what may not be in DYI file, you would by definition need to do even more work than was done to generate the DYI files in the first place.”

FACEBOOK ENGINEERS: WE HAVE NO IDEA WHERE WE KEEP ALL YOUR PERSONAL DATA

Schneier has repeatedly made this fundamental but counter-intuitive point: “Today, it’s easier to build complex systems than it is to build simple ones.”

None of this is surprising to people familiar with modern data center services at scale. Twitter allegedly doesn’t know how to restart its services if they really go down:

The company also lacks sufficient redundancies and procedures to restart or recover from data center crashes, Zatko’s disclosure says, meaning that even minor outages of several data centers at the same time could knock the entire Twitter service offline, perhaps for good.

Ex-Twitter exec blows the whistle, alleging reckless and negligent cybersecurity policies

Most of this is overblown rhetoric, but the underlying point is that no single person understands how any of these complex systems work. And they are not easy to fix or change.