AI is dressing up greed as progress on creative rights

There are two narratives battling for the soul of the creative economy. In one, Silicon Valley venture capitalists cast themselves as the heirs of Prometheus, bringing the fire of generative AI to a backward creative class clinging to outmoded business models. In the other, artists and authors watch their life’s work being fed into a digital maw to produce competition that is “priced at the marginal cost of zero,” as the US Copyright Office recently put it.

For years, the tech lobby has successfully peddled the first narrative, framing copyright law as a dusty relic of the Gutenberg era that must be swept aside so progress can march on. But March 2026 has provided a reality check. Last week, the UK government—facing a blistering campaign from the creative industries and a damning report from the House of Lords—was forced to delay its plans for AI copyright reform, kicking a decision into 2027. Simultaneously, in a Munich courtroom, the music rights society GEMA began its pivotal case against the AI music generator Suno, while awaiting a ruling on its related victory against OpenAI from last November.

These are not signs of a legal system that is broken or unfit for purpose. They are signs of a legal system that is working—and that the tech industry would prefer to dismantle. The core thesis emerging from the courts, parliaments, and collecting societies of the Western world is this: AI is dressing up greed as progress on creative rights. The problem is not that the law is unfit for the 21st century but that it is being flouted.

The Myth of the Legal Vacuum

Listen closely to the AI developers, and you will hear a consistent refrain: we are innovating in a vacuum; the rules are unclear; we need a modernized framework. This is the lobbying equivalent of a land grab. The House of Lords Communications and Digital Committee, in its scorching report published March 6, saw right through it. They noted that the tech sector’s demand for a broad commercial text and data mining (TDM) exception is not a plea for clarity, but an attempt to “lower… litigation risk by weakening the current level of copyright protection”.

Let us be precise about what existing law actually says. Under UK law, and across most of Europe, copyright is engaged whenever the whole or a substantial part of a protected work is copied—including storing it in digital form. As the Lords report firmly states, “the large-scale making and processing of digital copies of protected works for model training may therefore be characterised as reproduction”. The US Copyright Office, in its pre-publication report from May 2025, similarly affirmed that downloading and processing copyrighted works for training constitutes prima facie infringement, subject only to defenses like fair use.

The industry knows this. They know that hoovering up 100 million images, as Midjourney’s founder casually admitted to doing, requires a defense, not a permission slip. They know that ingesting the “Pirate Library Mirror” and “Library Genesis”—shadowy online repositories of pirated books—to train models like Anthropic’s Claude is not an act of academic research, but of industrial-scale copying. This is not innovation operating in a grey area. This is innovation operating in the dead of night.

What the Courts Are Actually Saying

While Westminster dithers, the judiciary is moving. And contrary to the narrative that judges are helpless in the face of technology, they are proving perfectly capable of applying centuries of copyright principle to silicon.

The most significant ruling of the past year came out of the Munich Regional Court last November. In a case brought by GEMA against OpenAI, the court held that AI training constitutes “reproduction” under German law. Crucially, the court found that even the fixation of copyrighted works into a model’s numerical “probability values” qualifies as reproduction if the work can later be perceived. And because ChatGPT was found to “memorize” and reproduce complete training data (song lyrics), it fell outside the EU’s TDM exceptions. OpenAI is appealing, but the legal logic is sound: a copy is a copy, whether stored on a hard drive or distilled into a matrix of weights.

This is not an isolated European quirk. Across the Atlantic, the $1.5 billion settlement by Anthropic to resolve authors’ claims was a tacit admission of liability. While a US district judge in the Bartz case made a nuanced distinction—ruling that training itself could be fair use but that maintaining a permanent library of pirated books was not—the sheer scale of the payout reveals the underlying risk.

The legal scholar Jane Ginsburg once noted that “the right to read is the right to write.” The AI industry has inverted this: they claim the right to copy is the right to compute. But the Munich ruling reminds us that copying for computational purposes is still copying. The notion that ingesting a novel to “learn” style is the same as a human reading it was rightly dismissed by the US Copyright Office, which noted that a student reading a book cannot subsequently distribute millions of perfect paraphrases of it in seconds.

Recent Legal & Regulatory Actions (2025–2026)

| Date | Case / Event | Key Finding / Status |
| --- | --- | --- |
| Nov 2025 | GEMA v. OpenAI (Munich) | AI training = “reproduction”; lyrics memorization violates copyright |
| Aug 2025 | Anthropic Authors Settlement | $1.5bn class-action settlement over pirated book training |
| May 2025 | US Copyright Office Part 3 Report | Rejects “non-expressive use” defense; training requires case-by-case fair use analysis |
| Mar 2026 | UK Gov’t Copyright Reform | Delays decision to 2027 after creative-industry backlash |
| Mar 2026 | GEMA v. Suno (Munich) | Oral hearings held; ruling expected June 2026 |

The “Pirate and Delete” Defense

If the legal landscape is clarifying, why the urgency to legislate? Because the industry’s preferred solution is not compliance, but amnesty. The UK government’s now-delayed proposal was for an “opt-out” system—shifting the burden onto creators to police the entire internet and tell AI companies not to steal from them. As the former Labour minister Margaret Hodge reportedly told Parliament, this is like putting a sign on your front door asking burglars not to enter.

The technical term for this strategy is “asymmetric warfare.” AI companies argue they cannot possibly license every work because there are billions of them. But this is an argument of convenience. The EU’s AI Act, which came into force this year, mandates transparency. Its template for training data summaries, published in final form in late 2025, requires providers to list the top data sources and domains used. If they can summarize it for regulators, they can pay for it.

Furthermore, a disturbing legal strategy is emerging from the U.S. cases. As legal analysts at Arnall Golden Gregory noted after the Bartz case, the ruling creates a perverse incentive: if training is fair use but permanent storage is not, the optimal strategy for a company is to “pirate and delete”. Download the stolen library, train the model as fast as possible, delete the evidence, and claim protection under the “transformative” use doctrine. This is not a solution; it is a recipe for laundering copyright infringement on a global scale.

The New Robber Barons

We have been here before. In 18th-century Scotland, booksellers in London held a monopoly on “valuable” literature. Scottish “pirates” like Alexander Donaldson reproduced and sold cheaper editions, arguing that knowledge should be free and that the London booksellers were holding back the enlightenment. The resulting battle—Donaldson v. Beckett—helped forge modern copyright law, establishing that the right is limited and ultimately yields to the public domain. But crucially, the Scottish “pirates” did not pretend the books were not written by someone. They simply exploited a territorial loophole. They were businessmen, not revolutionaries.

Today’s AI companies are the heirs of Donaldson, but with a crucial difference: they have no intention of letting the copyright term expire. They want the raw material of human culture delivered to them, on tap, forever. They want the value without the cost, the reward without the risk.

When Disney and NBCUniversal sue Midjourney, calling it a “bottomless pit of plagiarism,” they are not merely defending Mickey Mouse. They are defending a principle that every studio, every musician, and every journalist relies upon: that you cannot take someone’s labor without consent or compensation. When Paul McCartney releases a “silent album” to protest proposed UK laws, he is making the same point: that the output of a lifetime of creative work is being scraped to build machines that will ultimately silence him.

The Only Way Forward

There is a path forward, but it does not run through weakening the law. It runs through enforcing it.

First, reject the “opt-out” framework. The House of Lords is right: the government should rule out any reform that removes the incentive to license. The default must be opt-in.

Second, mandate transparency. The EU has shown the way. The UK’s Data (Use and Access) Act provides a vehicle for this. We need to know what data was used, where it came from, and how it was processed. The Midjourney admission that it scraped 100 million images without any tracking of provenance should be illegal, not a badge of honor.

Third, let the courts work. The Munich ruling on OpenAI lyrics and the pending GEMA v. Suno decision will provide clarity. So will the New York Times case against OpenAI and the Scarlett Johansson voice cloning suit. These are not roadblocks to innovation; they are the guardrails of a functioning market.

The AI industry likes to quote the maxim that “information wants to be free.” But as Stewart Brand, who coined the phrase, also said, “information also wants to be expensive.” The tension between those two truths is what markets resolve. The attempt to collapse that tension by fiat—by declaring that all information is free for the taking by a handful of monopolists—is not progress. It is a heist dressed up as philosophy.

The law is fit for the 21st century. The question is whether we have the courage to use it.

Abdul Rahman
