Getty Images bans upload of AI-generated content

James Vincent, writing for The Verge:

Getty Images has banned the upload and sale of illustrations generated using AI art tools like DALL-E, Midjourney, and Stable Diffusion. It’s the latest and largest user-generated content platform to introduce such a ban, following similar decisions by sites including Newgrounds, PurplePort, and FurAffinity.

Getty Images CEO Craig Peters told The Verge that the ban was prompted by concerns about the legality of AI-generated content and a desire to protect the site’s customers.

Getty Images bans AI-generated content over fears of legal challenges

Getty Images is being appropriately cautious. AI image synthesis tools, trained as they are on the open internet, can easily be prompted into reproducing copyrighted material.

Creative Commons raises questions about use of CC-licensed works to train AIs

Creative Commons licenses typically put few constraints on the re-use of copyrighted material. And that flexibility has allowed AIs to be trained on CC-licensed material, which sometimes surprises copyright holders.

In a new blog post, Creative Commons outlines the issue and states that it will “examine, throughout the year, the intersection of AI and open content.”

155 votes in a Twitter poll in which the plurality selected “Depends” is… not a lot of guidance.

AI image synthesis models may struggle with copyright

James Vincent, writing for The Verge:

Like most modern AI systems, Stable Diffusion is trained on a vast dataset that it mines for patterns and learns to replicate. In this case, that core of the training data is a huge package of 5 billion-plus pairs of images and text tags known as LAION-5B, all of which have been scraped from the public web. . . .

We know for certain that LAION-5B contains a lot of copyrighted content. An independent analysis of a 12 million-strong sample of the dataset found that nearly half the pictures contained were taken from just 100 domains. The most popular was Pinterest, constituting around 8.5 percent of the pictures sampled, while the next-biggest sources were sites known for hosting user-generated content (like Flickr, DeviantArt, and Tumblr) and stock photo sites like Getty Images and Shutterstock. In other words: sources that contain copyrighted content, whether from independent artists or professional photographers.

Anyone can use this AI art generator — that’s the risk

Vincent points out that Stable Diffusion sometimes even inserts the “Getty Images” watermark in its generated imagery. Not a good look.
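The domain analysis Vincent cites is straightforward to approximate in principle: take a sample of the dataset’s image URLs and tally where they come from. A minimal sketch, with the caveat that the file name and column layout here are assumptions for illustration, not the actual LAION-5B metadata format:

```python
import csv
from collections import Counter
from urllib.parse import urlparse

# Hypothetical input: a CSV sample with one image URL per row in a "url" column.
counts = Counter()
with open("laion_sample.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        host = urlparse(row["url"]).hostname or ""
        # Rough heuristic: collapse subdomains, e.g. i.pinimg.com -> pinimg.com
        # (Pinterest's image CDN), so a site's many hosts count together.
        domain = ".".join(host.split(".")[-2:])
        counts[domain] += 1

total = sum(counts.values())
for domain, n in counts.most_common(10):
    print(f"{domain}: {n / total:.1%} of sampled images")
```

A real analysis would also have to map CDN hostnames back to their parent sites, but the basic tally is this simple, which is part of why the provenance of these datasets is so easy to scrutinize.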

UK IPO suggests copyright exception for text and data mining

The United Kingdom’s Intellectual Property Office has concluded a study on “how AI should be dealt with in the patent and copyright systems.”

For text and data mining, we plan to introduce a new copyright and database exception which allows TDM for any purpose. Rights holders will still have safeguards to protect their content, including a requirement for lawful access.

Consultation outcome / Artificial Intelligence and IP: copyright and patents

They also considered copyright protection for computer-generated works without a human author, and patent protection for AI-devised inventions, but suggest no changes to the law in either of those areas.

States can’t be sued for copyright infringement

In March, the U.S. Supreme Court decided Allen v. Cooper, Governor of North Carolina, and ruled that States cannot be hauled into federal court for copyright infringement.

The decision is basically an extension of the Court’s prior decision on whether States can be sued for patent infringement in federal court (also no), and Justice Kagan writes for the unanimous Court in saying, “Florida Prepaid all but prewrote our decision today.”

But one of the most interesting discussions in the opinion is about when, perhaps, States might be hauled into federal court for copyright infringement under the Fourteenth Amendment prohibition against deprivation of property without due process:

All this raises the question: When does the Fourteenth Amendment care about copyright infringement? Sometimes, no doubt. Copyrights are a form of property. See Fox Film Corp. v. Doyal, 286 U. S. 123, 128 (1932). And the Fourteenth Amendment bars the States from “depriv[ing]” a person of property “without due process of law.” But even if sometimes, by no means always. Under our precedent, a merely negligent act does not “deprive” a person of property. See Daniels v. Williams, 474 U. S. 327, 328 (1986). So an infringement must be intentional, or at least reckless, to come within the reach of the Due Process Clause. See id., at 334, n. 3 (reserving whether reckless conduct suffices). And more: A State cannot violate that Clause unless it fails to offer an adequate remedy for an infringement, because such a remedy itself satisfies the demand of “due process.” See Hudson v. Palmer, 468 U. S. 517, 533 (1984). That means within the broader world of state copyright infringement is a smaller one where the Due Process Clause comes into play.

Slip Op. at 11.

Presumably this means that if North Carolina set up a free radio streaming service with Taylor Swift songs and refused to pay any royalties, they might properly be hauled into federal court. But absent some egregiously intentional or reckless conduct, States remain sovereign in copyright disputes.

Copyrightability of AI creations

One of the many fascinating things about AI is whether AI creations can be copyrighted and, if so, by whom. Under traditional copyright analysis, the human(s) who made some contribution to the creative work own the copyright by default; if there is no human contribution, there is no copyright. See, for example, the so-called “monkey selfie” case, in which a monkey took a selfie and the photographer who owned the camera got no copyright in the photo.

But when an AI creates a work of art, is there human involvement? A human created the AI and might have fiddled with its knobs, so to speak. Is that sufficient? The U.S. Copyright Office is concerned about this. One of the questions it is asking:

2. Assuming involvement by a natural person is or should be required, what kind of involvement would or should be sufficient so that the work qualifies for copyright protection? For example, should it be sufficient if a person

(i) designed the AI algorithm or process that created the work;

(ii) contributed to the design of the algorithm or process;

(iii) chose data used by the algorithm for training or otherwise;

(iv) caused the AI algorithm or process to be used to yield the work;

or (v) engaged in some specific combination of the foregoing activities? Are there other contributions a person could make in a potentially copyrightable AI-generated work in order to be considered an ‘‘author’’?

Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation

No one really knows the answer to this because (1) it is going to be very fact-intensive (there are lots of different ways for humans to be involved or not involved); and (2) it feels weird to do a lot of work or spend a lot of money to build an AI and not be entitled to a copyright over its creations.

In any case, these issues are going to be litigated soon. A Reddit user recently used a widely available AI model called StyleGAN to create a music visualization. And although the underlying AI was not authored by the Reddit poster, the output was allegedly created by “transfer learning with a custom dataset of images curated by the artist.”

Does the Reddit poster (a.k.a. self-proclaimed “artist”) own a copyright on the output? Good question.
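It helps to see where, concretely, a human’s choices enter a “transfer learning with a custom dataset” workflow, because several of the Copyright Office’s factors above map onto specific steps. The sketch below is hypothetical and heavily simplified: it uses a generic pretrained torchvision classifier rather than StyleGAN, it is not the poster’s actual pipeline, and the dataset path and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Factor (iii): a human chooses and curates the training data.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
curated = datasets.ImageFolder("my_curated_images/", transform=transform)  # hypothetical path
loader = DataLoader(curated, batch_size=16, shuffle=True)

# Factors (i)/(ii): someone else designed the architecture and produced the
# pretrained weights; the person doing transfer learning merely starts from them.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained layers

# Arguably factor (ii): the human decides what to adapt, replacing the final layer.
model.fc = nn.Linear(model.fc.in_features, len(curated.classes))

# Factor (iv): the human causes the process to run and yield the output.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Even in this toy version, the human contribution is spread across several of the Office’s factors at once, which is part of why the answer is going to be so fact-intensive.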

You don’t own your tattoos

You might own your dance moves, but you definitely don’t own your tattoos. But can you effectively sub-license them?

Any creative illustration “fixed in a tangible medium” is eligible for copyright, and, according to the United States Copyright Office, that includes the ink displayed on someone’s skin. What many people don’t realize, legal experts said, is that the copyright is inherently owned by the tattoo artist, not the person with the tattoos.

For most people, that is not a cause for concern. Lawyers generally agree that an implied license allows people to freely display their tattoos in public, including on television broadcasts or magazine covers. But when tattoos are digitally recreated on avatars in sports video games, copyright infringement can become an issue.

Athletes Don’t Own Their Tattoos. That’s a Problem for Video Game Developers.

And this is not the first time this has been an issue.

Do Digital First Sales Exist?

If I have an iTunes song that I lawfully purchased and downloaded, can I sell that copy to anyone else? Ever? Not according to the Court of Appeals for the Second Circuit in Capitol Records v. ReDigi.

ReDigi was a company that made exactly that promise: sell your unwanted iTunes music! And they appear to have designed their software to make the transaction as close to a physical transfer as possible:

ReDigi’s system differs in that it effectuates a deletion of each packet from the user’s device immediately after the “transitory copy” of that packet arrives in the computer’s buffer (before the packet is forwarded to ReDigi’s server). In other words, as each packet “leaves the station,” ReDigi deletes it from the original purchaser’s device such that it “no longer exists” on that device. Id. As a result, the entire file never exists in two places at once. Id.

The problem is that the Copyright Act reserves the making of copies to the copyright holder. And even if ReDigi effectively deletes the original copy (putting aside that a user could fairly easily keep a hidden one), ReDigi still makes a new copy. The law is violated.
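The court’s point is easier to see at the file level: any digital “move” is really a copy followed by a delete. A minimal sketch of a ReDigi-style transfer, with the caveats that the paths are hypothetical and the source is removed at the end rather than packet-by-packet as ReDigi’s software did:

```python
import os

CHUNK = 64 * 1024  # transfer the file in small "packets"

def transfer_song(src: str, dst: str) -> None:
    """Move a music file by copying it packet-by-packet, then deleting the source."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            packet = fin.read(CHUNK)
            if not packet:
                break
            # Each write is a new fixation of those bytes at the destination.
            fout.write(packet)
    os.remove(src)  # the seller's copy "no longer exists" -- but a new copy was still made

# transfer_song("purchased/song.m4a", "buyer/song.m4a")  # hypothetical paths
```

However promptly the original is destroyed, the bytes arriving at the destination are a newly made reproduction, and that is the act the Second Circuit said the statute reaches.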

ReDigi makes the reasonable objection that users should not be forced to sell their entire computer hard drive to sell the iTunes song they downloaded. The Second Circuit shrugs:

A secondary market can readily be imagined for first purchasers who cost effectively place 50 or 100 (or more) songs on an inexpensive device such as a thumb drive and sell it. . . . Furthermore, other technology may exist or be developed that could lawfully effectuate a digital first sale.

Sure. Ok. Although I think perhaps the court doesn’t understand the market, users, or computers.

You just don’t own digital copies in the same way you own physical copies, absent some major change in the law.

Curating Content May Defeat DMCA Protections: Mavrix Photographs v. LiveJournal

The DMCA safe harbor (17 U.S.C. § 512) immunizes websites and other internet service providers from liability for copyright infringement by their users, provided the providers follow certain procedures. Generally this means removing infringing material promptly once they become aware of it.

But what if you have a bunch of volunteer moderators / curators determining which user submissions to post? Are you responsible for copyright infringement if the volunteers approve a copyrighted user submission? It seems likely, if those volunteers qualify as “agents” of your company.

In Mavrix Photographs v. LiveJournal, Inc. (April 2017), a Ninth Circuit panel concluded that volunteer moderators on a website could indeed be “agents” of the website, although the agency determination is a question of fact that cannot be resolved on summary judgment.

Importantly, the panel ruled that curating the posts (i.e., publishing only about one-third of submissions) probably didn’t qualify LiveJournal for the safe harbor, which covers material posted “at the direction of the user”. Previous decisions have allowed websites to take advantage of the safe harbor so long as they made only “accessibility-enhancing” changes to posts, such as reformatting or screening for offensive material. But curation may be too much control:

The question for the fact finder is whether the moderators’ acts were merely accessibility-enhancing activities or whether instead their extensive, manual, and substantive activities went beyond the automatic and limited manual activities we have approved as accessibility-enhancing.

This reading appears to be well within the statutory language, but it threatens to put additional burdens on websites that curate (and raises the question of what exactly counts as curation or moderation). If you curate, how can you also ensure the content is not copyrighted by someone else? It’s not obvious.

Update: A group of internet companies, including Google and Facebook, has urged en banc review. They argue that the decision gives companies an incentive to do no pre-screening of user-submitted material at all, which is bad policy and contrary to the intent of the DMCA.

This all boils down to whether some forms of pre-screening are extensive enough that it is reasonable to expect the company to police copyright infringement as well. And to be fair, the panel’s decision didn’t say one way or the other: it just said there are enough factual issues that the case can’t be decided on summary judgment.