Generative AI vs. Content Creators: The Battle Begins
Smart IP owners recognize that generative AI technology is here to stay. Rather than wish it away, more and more creators are taking both legal and nonlegal countermeasures to protect their assets. Here are the strategies most likely to succeed.
Words by Chris Wlach
Photo courtesy Getty Images
Last month several popular authors — including John Grisham, Jonathan Franzen, and George R.R. Martin — filed a class-action suit against OpenAI, maker of the AI chatbot ChatGPT. The authors announce their core grievance on the first page: that ChatGPT’s models “endanger fiction writers’ ability to make a living.”
While many of generative AI’s commercial effects will emerge over time, the tech is shaking up some industries already. Among the most affected are writers and others who trade in intellectual property (IP) — visual artists, musicians, photographers, and the companies that license or distribute their creations.
These players face a twofold threat. First, their IP is being used, largely without consent, to train the large language models (LLMs) underpinning generative AI technology. And if courts find that consent was not needed, that means lost licensing income for the IP owners. More concerning and enduring is the second threat: that AI-generated material, cheap and in infinite supply, could sop up consumer demand for art, texts, and other content previously made by humans.
Some see these changes as omens of a creative apocalypse, others as the familiar process of new industry supplanting old — what Austrian economist Joseph Schumpeter called capitalism’s “perennial gale of creative destruction.” Gale or not, the winds are picking up. And whatever one’s views on the new tech, the business threats to existing IP-based industries are real. Not surprisingly, these industries are moving swiftly to protect their assets.
Here are five approaches they’re taking:
1. Suing the AI companies.

The recent lawsuit by Grisham and others is one of a growing number of similar actions. Scores of creatives and content owners, including Getty Images, have taken to the courts to halt what they see as a massive infringement of their rights. While the suits assert various theories of liability, they share an underlying goal: enjoining the generative AI companies, some of which boast multibillion-dollar valuations, or extracting damages from them.
Most of these cases are in their early stages, but some plaintiffs have seen preliminary success. In September, a Delaware federal judge ruled that it’s a jury question whether an AI start-up violated copyright law by copying Thomson Reuters’ content without its consent. While US copyright law gives creators a bundle of exclusive rights over their works, it’s an open legal question whether the fair use doctrine or another legal exception allows others to use those works to train AI. If these and other suits succeed on their copyright infringement claims, many generative AI companies may find themselves following the path of Napster, a company whose legal troubles abruptly sidelined its business.
2. Lobbying for new legislation.

Traditional IP companies aren’t just running to the courthouse; they’re also heading to Capitol Hill. Last Thursday a bipartisan group of Senators introduced the NO FAKES Act. The draft bill, supported by the actors’ union SAG-AFTRA, would prohibit producing or distributing “digital replicas” of persons without their consent.
If industry groups successfully push for statutory protections like the NO FAKES Act or further amendments to the Copyright Act, then owners of IP and related rights will have another bulwark against generative AI’s rising tide. But passing bills is much harder than introducing them. Given that the US Congress is hardly a model of legislative efficiency, companies shouldn’t depend on the legislature to defend their interests, at least in the short term.
3. Contracting around the problem.

Litigating and lobbying aren’t the only ways to make law. With the stroke of a pen, parties can create private law — that is, contractual rules — governing AI.
And that’s exactly what they’re doing. As SAG-AFTRA negotiates its collective bargaining agreement with the studios, the union is reportedly pushing for protections against AI use. The Authors Guild has also issued sample contract language prohibiting publishers and platforms from training AI models on authors’ works.
Of the law-based strategies to protect IP, contract is the most immediate and practicable. IP owners can incorporate AI usage restrictions into their agreements now, without waiting on litigators or lawmakers to produce results.
4. Deploying technical countermeasures.

It’s natural to think of law-centric ways to safeguard IP. Intellectual property is itself a legal construct. But content-driven businesses are also employing technological measures to stem the AI threat. And as with contract, they can adopt these measures now.
Sure enough, they are. According to the AI detection company Originality.ai, roughly 15% of major websites, including several media and publishing companies, use robots.txt files to block AI companies’ bots from crawling their sites for model-training data. Tools that “watermark” AI-generated images and text may also help steer demand back toward human-made content — at least when that’s what the user wants.
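The blocking technique Originality.ai tracks relies on the web’s long-standing robots.txt convention. As a minimal sketch, Python’s standard urllib.robotparser shows how such rules play out; the robots.txt content below is hypothetical, though GPTBot is the user-agent string OpenAI publicly documents for its training crawler:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: shut out OpenAI's GPTBot crawler
# while leaving every other user agent unrestricted.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# GPTBot is barred from the whole site...
print(parser.can_fetch("GPTBot", "https://example.com/articles/"))     # False
# ...but an ordinary search crawler is not.
print(parser.can_fetch("Googlebot", "https://example.com/articles/"))  # True
```

Note that robots.txt is honor-system only: it works only if the crawler’s operator chooses to respect it, which is why sites treat it as a complement to, not a substitute for, the legal strategies above.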
5. Competing in the marketplace.

Smart IP businesses recognize that generative AI technology is here to stay. Rather than wish it away, many incumbents are seeking to retain their market hold by offering competitive tools. Stock asset licensors Shutterstock and Getty, for instance, have both debuted generative AI tools this year. Other companies with significant IP assets will likely take similar approaches.
Beating generative AI companies in the marketplace may be the most promising strategy of all. If an incumbent can make its own AI offering more attractive than a new entrant’s product, then other competitive countermeasures may prove unnecessary — consumer demand should naturally flow to the better product.
The industry shake-up from generative AI has only just begun. For traditional providers of content, the question is not whether but how much the technology will force them to defend their place in the market and rethink their competitive positioning. Whatever strategies they adopt, standing idly by is risky: the current generative AI storm may indeed blow past, but it will leave a different landscape in its wake.
Chris Wlach is General Counsel at Huge.