
The great price deflator: why the AI boom could be the most disinflationary force in a generation


Northern Trust’s $1.4 trillion asset management arm says the AI boom is “massively disinflationary.” The evidence is building — but so are the near-term headwinds. Here is what the bulls are getting right, what they are glossing over, and what every central banker should be thinking about this week.

Analysis · 2,150 words · Cites: Northern Trust, IMF WEO April 2026, BIS Working Papers, OECD

There is a sentence making the rounds in macro circles this morning that deserves more than a tweet. Northern Trust Asset Management — custodian of $1.4 trillion in client assets — told the Financial Times that the AI boom is poised to be “massively disinflationary.” Two words, and an argument that, if it proves correct, will reshape monetary policy for the rest of this decade. If it proves wrong, it will look like the most expensive case of groupthink in asset management history.

The claim is bold, but it is not baseless. Across its 2026 Capital Market Assumptions, Northern Trust has laid the groundwork: nearly 40 percent of jobs worldwide — and 60 percent in advanced economies — are now exposed to AI, signalling what the firm calls “a major shift” in productivity and labor market dynamics. Add to that the IMF’s own January 2026 estimate that rapid AI adoption could lift global growth by as much as 0.3 percentage points this year alone, and up to 0.8 percentage points annually in the medium term, and suddenly “massively disinflationary” sounds less like a marketing line and more like a macroeconomic thesis worth taking seriously.

But serious theses deserve serious scrutiny. And when you peel back the optimism, you find a story with a considerably more complicated second act.

“AI today is still in its early innings. It is reshaping how we operate. It is reshaping how we work. Yet at the same time, we know there are going to be a number of missteps.” — Northern Trust Asset Management, February 2026

The disinflationary logic — and why it is compelling

The core argument runs as follows. AI raises the productive capacity of every worker, firm, and economy that adopts it. More output from the same inputs means falling unit costs. Falling unit costs mean downward pressure on prices. In a world still wrestling with inflation — the IMF’s April 2026 World Economic Outlook projects global headline inflation at 4.4 percent this year, elevated partly by a new Middle East conflict — that kind of structural supply-side boost could not arrive at a better moment.

The historical analogy is not perfect, but it is instructive. The internet and personal computing drove a productivity renaissance through the 1990s that helped the US run a decade of growth with unusually low inflation. The difference this time, optimists argue, is both speed and scope. Generative AI is being deployed across sectors — finance, law, medicine, logistics, software — simultaneously, rather than trickling through the economy over fifteen years. The IMF’s own research noted that investment in information-processing equipment and software grew 16.5 percent year-on-year in the third quarter of 2025 in the United States alone. That is not a technology cycle. That is a structural reorientation.

At the firm level, the mechanism is equally legible. AI-assisted coding reduces software development costs. AI-powered customer service reduces headcount requirements per unit of output. AI-accelerated drug discovery compresses R&D timelines. Each of these reduces costs for producers, and in competitive markets, cost reductions eventually become price reductions for consumers. The BIS, in its 2026 working paper on AI adoption among European firms, found measurable productivity gains at companies with higher AI adoption rates — gains that, if broad-based, translate directly into disinflationary pressure.
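The pass-through mechanism is simple enough to put in arithmetic. The sketch below uses deliberately hypothetical numbers (a $50 wage, 100 units of output per worker, a 20 percent AI productivity gain) purely to illustrate how a productivity improvement maps to a unit-cost decline and, in a competitive market, a price decline:

```python
# Illustrative arithmetic only: the wage, output, and 20% productivity
# gain are hypothetical numbers, not figures from Northern Trust or the BIS.

def unit_cost(wage: float, output_per_worker: float) -> float:
    """Labor cost embedded in one unit of output."""
    return wage / output_per_worker

baseline = unit_cost(wage=50.0, output_per_worker=100.0)  # $0.50 per unit
with_ai = unit_cost(wage=50.0, output_per_worker=120.0)   # after a 20% productivity gain

# If competition forces prices to track unit costs, the productivity
# gain shows up as a proportional price decline.
price_decline = 1 - with_ai / baseline
print(f"unit cost: ${baseline:.3f} -> ${with_ai:.3f}")
print(f"implied price decline: {price_decline:.1%}")  # -> 16.7%
```

Note the asymmetry the toy numbers reveal: a 20 percent productivity gain implies only a 16.7 percent price decline, because the cost saving is measured against the new, higher level of output.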

| Institution | AI growth uplift (medium-term) | 2026 inflation forecast | Key caveat |
| --- | --- | --- | --- |
| IMF (Jan 2026) | +0.1–0.8 pp/year | 3.8% | Adoption speed uncertain |
| IMF (Apr 2026) | Upside risk | 4.4% (conflict-driven) | Geopolitical shocks dominate near-term |
| Northern Trust CMA 2026 | Significant, decade-long | ~3% (US) | Near-term capex inflationary |
| OECD AI Papers 2026 | Variable by AI readiness | n/a | EME gaps constrain diffusion |
| BIS WP 1321 (2025) | Positive short-run impact | n/a | Labor market disruption risk |

The uncomfortable counterarguments

Now for the cold water. The hyperscalers — Alphabet, Microsoft, Amazon, Meta — are expected to spend upwards of $600 billion on data center capital expenditure in 2026 alone, according to Northern Trust’s own analysis. That is $600 billion of demand competing for semiconductors, specialised labor, land, electricity infrastructure, and cooling systems. In the near term, this is not disinflationary. It is, by any honest accounting, inflationary. It bids up the price of every input that AI infrastructure requires.

Energy is the most acute example. Northern Trust’s own economists have noted that data centers are expected to account for 20 percent of the increase in global electricity usage through 2030. The IMF’s recent research put it plainly: energy bottlenecks “could delay AI diffusion, anchor a higher level of core inflation, and generate local pricing pressures” in grid-constrained regions. This is not a theoretical risk. It is a live constraint in the US, the UK, Ireland, Singapore, and across northern Europe, where grid capacity has become a hard ceiling on data center expansion.

There is also the measurement problem — and it is a serious one. As the IMF’s own Finance & Development noted in its March 2026 issue, GDP accounting simultaneously overstates AI’s immediate contribution (by counting massive capital outlays as output) while understating its broader economic impact (by missing productivity spillovers that do not show up in standard national accounts). This is precisely the statistical paradox that masked the early productivity gains of the 1990s IT revolution — and it cuts in both directions for policymakers. If AI is quietly raising potential output, the economy may be running cooler than headline data implies. If the infrastructure surge is instead stoking a new floor for energy and construction costs, central banks may be tightening into a real supply shock.
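The double distortion is easy to see with stylized numbers. In the sketch below, every figure except the $600 billion capex estimate cited in this article is hypothetical, chosen only to show the direction of each bias:

```python
# Stylized national-accounts arithmetic. Only the $600bn capex figure comes
# from the article; the GDP base, deflators, and services slice are invented.

# Bias 1: AI capital outlays count as investment, lifting measured GDP now.
gdp_ex_ai_capex = 28_000.0   # $bn, hypothetical rest-of-economy output
ai_capex = 600.0             # $bn, the 2026 hyperscaler estimate cited above
capex_boost = ai_capex / gdp_ex_ai_capex
print(f"measured-GDP lift from capex alone: {capex_boost:.1%}")   # ~2.1%

# Bias 2: suppose AI cuts the true price of some services by 5%, but the
# official deflator records only a 1% decline. Real output in that slice
# is then understated.
nominal_services = 5_000.0   # $bn, hypothetical
true_deflator = 0.95
measured_deflator = 0.99
understatement = (nominal_services / true_deflator) / (nominal_services / measured_deflator) - 1
print(f"real-output understatement in that slice: {understatement:.1%}")  # ~4.2%
```

The first bias flatters today's GDP; the second hides tomorrow's productivity. A central bank reading the headline number sees both errors netted together, which is precisely why the current data is so hard to interpret.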

The IMF’s chief economist Pierre-Olivier Gourinchas put the dilemma with characteristic precision: the AI boom could lift global growth, but it also “poses risks for heightened inflation if it continues at its breakneck pace.” That is the paradox in miniature — the same technology that promises to lower prices over time is currently consuming enormous resources to build itself.

The geopolitical dimension: who wins, who lags, and who is locked out

The disinflationary thesis is not uniformly distributed across the global economy, and this is where the Northern Trust framing risks glossing over structural inequality. Advanced economies — the US, Japan, Australia, South Korea — are positioned to capture the productivity upside first. Their firms are adopting, their labor markets are adapting, and their capital markets are pricing in the gains. Northern Trust’s own forecasts identify the US, Japan, and Australia as likely leaders in equity returns over the next decade, precisely because of AI-driven productivity.

Europe sits in a more ambiguous position. The continent is not at the forefront of AI model development, and Northern Trust acknowledges it explicitly in its CMA 2026. The region offers a healthy dividend yield and attractive valuations — but if AI productivity is the driver of the next decade’s returns, Europe’s relative lag in AI infrastructure and frontier model development is a structural disadvantage, not a cyclical one. The ECB faces its own version of the monetary policy puzzle: if AI-driven disinflation arrives later and slower in Europe than in the US, it changes the rate path, the currency dynamics, and the comparative fiscal math.

Emerging markets face the starkest challenge. The IMF’s analysis of AI in developing economies is clear: AI preparedness — digital infrastructure, human capital, institutional capacity — is the binding constraint on whether productivity gains materialize or get captured entirely by technology importers. Many emerging economies are primarily consumers of AI built elsewhere. The disinflationary benefits they receive are mediated through imports; the inflationary effects of AI-driven energy demand and semiconductor scarcity are borne locally. The net result, without deliberate policy intervention, is a widening productivity gap rather than a convergence story.


China deserves a separate paragraph. Its AI investment is substantial and accelerating, even under the constraints of US semiconductor export controls. The China-US AI race is not merely a geopolitical contest — it is a race to determine which economy gets to define and monetize the next general-purpose technology. Beijing’s capacity to deploy AI at scale across manufacturing, logistics, and services could generate its own disinflationary dynamic, although its ability to export that technology — and the disinflation it carries — is constrained by the very geopolitical tensions that are simultaneously driving energy and defence inflation.

What central banks should actually do

The honest answer is: proceed carefully, communicate transparently, and resist the temptation to read AI’s structural effect through the noise of its near-term capex cycle. The IMF’s April 2026 World Economic Outlook makes the right call when it urges central banks to guard against “prolonged supply shocks destabilising inflation expectations” while reserving the right to “look through negative supply shocks” where inflation expectations remain anchored.

That is the narrow path. If AI is genuinely raising potential output, then central banks that tighten aggressively in response to near-term energy and infrastructure inflation are making a classic policy error: fighting tomorrow’s economy with yesterday’s models. The 1990s analogy is instructive again — the Federal Reserve’s willingness to allow growth to run above conventional estimates of potential, on the grounds that productivity was accelerating, helped produce the longest peacetime expansion in American history.

But the reverse error is equally dangerous. If the AI productivity jackpot takes longer to arrive than Northern Trust and its peers anticipate — and Daron Acemoglu’s careful 2025 work in Economic Policy gives serious reason for that caution — then central banks that ease prematurely, trusting in a disinflationary future that is still several years away, risk entrenching the very inflation they spent the early 2020s battling back.

The IMF is right to treat AI as what it called in its April 2026 research note “a macro-critical transition rather than a standard technology shock.” Human decisions — by managers, workers, regulators, and investors — will shape the pace of adoption, the distribution of gains, and the political sustainability of the disruption. Those decisions are not made yet. Which means the data, for now, is genuinely ambiguous.

The verdict: right thesis, wrong timeline

Northern Trust is probably correct that AI will be massively disinflationary. The logic is sound, the historical analogies are supportive, and the scale of investment being made is simply too large to yield no productivity dividend. The question is not whether, but when — and the “when” matters enormously for portfolio construction, monetary policy, and fiscal planning.

The near-term picture, stripped of AI optimism, is one of elevated global inflation shaped by geopolitical conflict, persistent services price stickiness, and a capex boom that is consuming rather than producing cheap goods. The medium-term picture, contingent on adoption rates and diffusion across the global economy, is one where AI-driven productivity could deliver a genuine and sustained disinflationary impulse — the kind that would allow central banks to run looser for longer, equity multiples to expand sustainably, and real wages to recover.

The investor who misidentifies the timeline — and treats the medium-term story as immediate reality — will find themselves long duration in a world where rates stay higher than expected, and long AI infrastructure capex in a world where the ROI question remains, as Northern Trust itself acknowledged in February, one of “many more questions than answers.”

The honest macro position, as of April 2026, is this: Northern Trust is pointing in the right direction. But they may be holding the map upside down with respect to the calendar. For investors, policymakers, and strategists, the discipline required is not deciding whether AI will be disinflationary — it will — but calibrating, with intellectual humility, exactly how long the world will have to wait before the price deflator actually arrives.


Discover more from The Monitor

Subscribe to get the latest posts sent to your email.

Opinion | When the Treasury Panics, Listen: Anthropic’s Mythos and the AI Threat Hiding Inside Your Bank


The most consequential financial-security meeting of 2026 happened Tuesday. Almost nobody was talking about it.

There is a particular quality to urgency in Washington — a calibrated, deliberate kind, stripped of drama precisely because the stakes are too high for theater. When Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell jointly summon the chiefs of America’s largest banks to a private session on a weekday morning, they are not performing concern. They are managing it.

That is what happened on Tuesday, April 8, 2026, in the marbled corridors of Treasury headquarters on Pennsylvania Avenue. Bessent and Powell assembled a group of Wall Street leaders to make sure banks are aware of possible future risks raised by Anthropic’s Mythos model and potential similar systems, and are taking precautions to defend their systems, Bloomberg reported. The CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs were present; JPMorgan’s Jamie Dimon was invited but unable to attend, according to AOL. The Treasury declined to comment. The Fed declined to comment. Anthropic had no immediate comment.

In Washington, silence of that particular texture is its own form of communication.

The Model That Spooked the Regulators

To understand why two of America’s most powerful financial stewards convened an emergency summit with the chiefs of institutions collectively managing trillions in assets, you need to understand what Anthropic’s Claude Mythos Preview actually does — and why it is genuinely different from the parade of large language models that have cycled through headlines since 2022.

Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company said the model is capable of identifying and exploiting weaknesses across “every major operating system and every major web browser,” RTÉ reported. Read that sentence again. Every major operating system. Every major web browser. This is not a chatbot that occasionally hallucinates. This is an autonomous vulnerability-hunting engine with the precision of an elite red team and the speed of software.

Unlike typical consumer-facing AI tools, Mythos is geared toward cybersecurity software engineering tasks. Its specialty is identifying critical software vulnerabilities and bugs, but it can also assemble sophisticated exploits, CoinDesk noted. The distinction matters enormously. Most AI models are generative — they produce text, images, code. Mythos is analytical and adversarial, capable of scanning codebases, identifying failure points invisible to human auditors, and constructing the exploits that could weaponize those failures. In the hands of a sophisticated actor — a state-sponsored hacking collective, a ransomware syndicate, a rogue insider — this capability is not a cybersecurity tool. It is a cybersecurity threat.

This marked the first time Anthropic had limited the launch of a new model, Investing.com reported. That fact alone should arrest attention. A company whose business model depends on broad adoption and API revenue made the deliberate, commercially costly decision to gate access. That restraint — unusual in a sector that tends to race toward release — signals something about how seriously Anthropic’s own researchers regard what they have built.

Project Glasswing: An Experiment in Controlled Power

Access to Mythos will be limited to about 40 technology companies, including Amazon, Apple, Google, and Microsoft, and Anthropic has been in ongoing talks with the U.S. government about the model’s capabilities, according to AOL and Investing.com. This restricted release program, referred to internally as Project Glasswing, is a deliberate inversion of how AI has historically been deployed: rather than releasing broadly and patching later, Anthropic gave dominant platform holders a head start on securing vulnerabilities — not to monetize first, but to defend first.


It is a genuinely novel approach, and one that deserves more credit than it will likely receive. The logic is sound: if a model can identify zero-day vulnerabilities at machine speed, the most responsible action is to arm defenders before the broader landscape of threat actors can replicate or steal the capability. But Glasswing also exposes a governance gap so wide you could park an aircraft carrier in it.

Who audits the 40 companies with access? What safeguards prevent Mythos from being fine-tuned, transferred, or reverse-engineered? If a Glasswing participant suffers a breach — and given that these are themselves high-value targets, the probability is non-trivial — what is the liability chain? What is the protocol? The answers to these questions do not exist in any regulatory framework currently operative in the United States, the European Union, or anywhere else.

The Systemic Risk Nobody Has Priced

The meeting at Treasury was not primarily about Anthropic. It was about what Anthropic represents: the arrival of AI capabilities that move faster than the regulatory, legal, and institutional machinery designed to contain them.

Consider the financial system’s exposure. Modern banking infrastructure is built on decades of accumulated code — legacy COBOL systems at regional lenders, middleware connecting trading platforms to clearing houses, authentication layers protecting retail deposits. Much of this code has never been audited by a sophisticated adversary because auditing at scale was prohibitively expensive. Mythos eliminates that constraint. A well-resourced actor with access to comparable capability could, in principle, systematically map the attack surface of an entire national banking system in the time it currently takes a human security team to review a single subsystem.

The episode highlights a fundamental change in how regulators are framing AI risk — not merely as a technological challenge, but as a potential catalyst for systemic events. This has already raised red flags in crypto, where experts worry that Mythos’s capacity to discover and exploit zero-day vulnerabilities in real time at low cost poses a risk to DeFi infrastructure, CoinDesk reported.

The systemic risk framing is the right one — and it is the framing that explains why Powell was in that room. The Federal Reserve’s mandate is financial stability. Historically, stability threats have come from credit cycles, liquidity crunches, and contagion. They are now coming from code. A successful AI-enabled attack on a major custodial bank — one that compromised transaction integrity, corrupted ledger data, or triggered a cascade of failed settlement — would represent a category of financial crisis that no existing playbook addresses. The bazooka of emergency liquidity provision is not particularly useful when the crisis is epistemic rather than financial: when the question is not whether there is enough money, but whether the numbers can be trusted at all.

Anthropic vs. the Pentagon: The Contradiction at the Heart of AI Policy

There is a peculiar irony shadowing this episode. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic’s request to pause the Pentagon’s designation, Bloomberg Law reported.

Anthropic proactively briefed senior U.S. government officials and key industry stakeholders on Mythos’s capabilities, RTÉ reported — engaging responsibly with the national security community — even as one branch of that same government has labeled the company a security liability. The left hand of the U.S. government calls in Anthropic’s most advanced model to warn bankers about cyber risk; the right hand designates its maker a supply-chain threat. This is not incoherence. It is the natural consequence of applying 20th-century institutional categories to 21st-century technology companies that are simultaneously strategic assets, potential vulnerabilities, and independent actors with their own governance philosophies.

The contradiction will not resolve itself. It requires a policy architecture that does not currently exist — one that can hold together the dual realities that Anthropic’s capabilities are a genuine national asset and that Anthropic’s capabilities require genuine national oversight. Neither a blanket clearance nor a blanket designation captures that complexity.


What Bessent and Powell Actually Did — and What It Implies

| What Happened | What It Means |
| --- | --- |
| Joint Bessent-Powell convening | AI cyber risk is now a financial stability issue, not just a tech policy issue |
| Bank CEOs summoned mid-week | Speed of response signals real urgency, not regulatory theater |
| Mythos limited to ~40 companies | Anthropic is self-governing in the absence of formal governance frameworks |
| Pentagon supply-chain designation | Executive branch is fractured in its AI risk assessment |
| No public statement from Treasury, Fed, or banks | The regulatory playbook does not yet exist |

The convening itself was a significant signal. Bessent and Powell do not share a conference room casually. The joint appearance invested the meeting with the authority of both the fiscal and the monetary sovereign — the message being that AI cyber risk is no longer a niche technology-sector concern but a macro-prudential one. Banks should be pricing this into their operational risk frameworks. Insurers will follow. Rating agencies will not be far behind.

But signals, however weighty, are not architecture. The meeting produced no public guidance, no regulatory proposal, no framework for how banks should report, manage, or disclose AI-enabled cyber exposures. The CEOs who left Treasury on Tuesday left with warnings — and no rulebook.

The Governance Gap and How to Begin Closing It

The Mythos episode crystallizes three failures that policymakers now have no excuse for ignoring.

First, the pre-release consultation gap. Anthropic did the right thing in briefing U.S. officials before releasing Mythos. But that consultation was informal, voluntary, and ad hoc. The EU AI Act’s tiered risk framework is imperfect, but it at least establishes mandatory pre-market assessment for high-risk systems. The United States has no equivalent. A model capable of autonomously discovering and exploiting zero-days across every major OS and browser is, by any reasonable definition, a high-risk system. Its release should trigger a formal, structured national security review — not a phone call.

Second, the systemic-risk classification vacuum. The Fed can designate non-bank financial institutions as systemically important. It cannot currently designate AI models as systemically risky. That gap is now visible and consequential. What is needed is not a new agency but a clear cross-agency mandate — Treasury, CISA, the Fed, the OCC — with authority to classify certain AI capabilities as requiring coordinated disclosure, pre-release review, and sector-specific defensive preparation.

Third, the liability architecture. If a bank suffers losses traceable to an AI-enabled attack using capabilities derived from or analogous to a commercially released model, who bears what responsibility? The current answer — whatever tort law eventually produces — is wholly inadequate for systemic risks. Liability frameworks that can price and allocate AI-era cyber risk are not a luxury. They are a precondition for insurability and, ultimately, for financial stability.

A New Era of Risk — and Responsibility

There is a version of this story that ends badly: a race between capability development and governance in which capability wins by a decisive margin, and the first major AI-enabled financial system attack comes before any of the above frameworks exist. That version is not inevitable, but preventing it requires active work.

The Tuesday meeting at Treasury was, in its way, a hopeful sign. It suggests that the United States’ most senior financial authorities understand, at least viscerally, that the risk is real and that the clock is running. It suggests that some version of public-private coordination is possible, even in a regulatory environment that remains deeply fragmented.

Anthropic has previously disclosed that it consulted with U.S. officials ahead of Mythos’s release regarding both its defensive and offensive cyber capabilities, according to CoinDesk. That consultation should become a standard, not an anomaly. The release of any AI system with demonstrated offensive cyber capabilities — the ability to identify and exploit zero-days at scale — should automatically trigger a mandatory interagency review, sectoral briefings for affected industries, and a public risk disclosure, however carefully worded.

What Bessent and Powell did on Tuesday was, in the truest sense, firefighting. The fire is real. But what the financial system needs is not better firefighters. It needs buildings that are harder to burn.

The Mythos moment is a clarifying one. It tells us, with unusual precision, that the era of AI as a productivity story is over. The era of AI as a security story — a national security story, a financial security story, a systemic stability story — has arrived. Policymakers who treat it otherwise are not being optimistic. They are being negligent.



The Private Firms Powering China’s Military AI Push


China’s private firms are winning its military AI bids — and Washington doesn’t seem to grasp the implications.

In February 2026, a routine penalty notice appeared on the People’s Liberation Army’s procurement platform. It named Shanxi 100 Trust Information Technology — a 266-person IT company based in Taiyuan, in China’s coal-scarred heartland — and barred it from all military procurement across every service branch for one year. The infraction was bid fraud: the firm had submitted falsified materials to win a contract. In the labyrinthine world of PLA procurement, such violations are not uncommon.

What was uncommon was the company itself.

As a Jamestown Foundation analysis identified, 100 Trust is the sole wholly privately-owned firm operating inside China’s xinchuang (信创) domestic IT innovation framework — a program originally designed to replace foreign technology in sensitive government systems. Despite its modest headcount, the firm holds classified-project clearance and had won some of the PLA’s largest contracts to integrate DeepSeek, China’s breakout open-weight AI model, into military command systems. Its products had reportedly been demonstrated to Xi Jinping himself. And yet, when the opportunity arose to inflate its credentials, someone at 100 Trust apparently couldn’t resist.

The penalty notice tells us almost everything we need to know about China’s military AI push in 2026 — both its ambition and its contradictions. It tells us that China’s private firms are winning military AI bids once reserved for state giants. It tells us that the structural conditions of Beijing’s civil-military fusion policy have made this outcome not accidental but inevitable. And it tells us that Washington, still operating on a mental model of “China Inc.” — a monolithic, state-directed industrial juggernaut — is watching the wrong companies.

The Data Is Unambiguous: Private Is the New Defense

The anecdote of Shanxi 100 Trust is not an outlier. It is the leading edge of a statistical pattern that, once you see it, is impossible to unsee.

In a landmark September 2025 study, Georgetown University’s Center for Security and Emerging Technology (CSET) analyzed 2,857 AI-related defense contract award notices published by the PLA between January 2023 and December 2024. The finding that should have set off alarms in every national security directorate from Langley to the Pentagon: of the 338 entities that won AI-related PLA contracts, close to three-quarters were nontraditional vendors (NTVs) — firms with no self-reported state ownership ties. These NTVs collectively won 764 contracts, more than any other category. Two-thirds of them were founded after 2010.

These are not shadowy front companies. They are nimble, technically sophisticated private firms that market themselves explicitly on dual-use capability — civilian agility deployed for military ends. They are the private-sector companies winning PLA AI procurement contracts that, by any conventional Washington risk framework, should not exist.

The legacy state-owned defense champions — China Electronics Technology Group (CETC), China Aerospace Science and Technology Corporation (CASC), NORINCO — still lead in sheer contract volume among top-tier entities. But the growth is concentrated in the private sector. The civil-military fusion strategy that Xi Jinping has championed for over a decade is, in the AI domain at least, delivering something its architects may not have fully anticipated: a market in which lean private operators consistently outrun the bureaucratic lumbering of the state-owned defense-industrial complex.

The DeepSeek Accelerant

No single development has turbocharged China’s military AI push more dramatically than DeepSeek’s January 2025 release of its R1 reasoning model as an open-weight system — meaning any entity, including the PLA and its contractor ecosystem, could download, modify, and deploy it without restriction.

The Jamestown Foundation, tracking hundreds of DeepSeek-specific PLA procurement tenders, found the same structural pattern: private companies, not SOEs, won a majority of contracts to build DeepSeek-integrated tools for the PLA. The Jamestown analysts note that this likely reflects private firms’ superior capacity to respond to rapidly shifting market dynamics — a competitive edge that bureaucratic SOEs, with their elongated procurement relationships and political dependencies, simply cannot match.

The capabilities being built are not incremental. Researchers at Xi’an Technological University demonstrated a DeepSeek-powered assessment system that processed 10,000 battlefield scenarios in 48 seconds — a task they estimated would require human military planners approximately 48 hours. The PLA’s Central Theatre Command (responsible for defending Beijing) has used DeepSeek in military hospital settings and personnel management. The Nanjing National Defense Mobilization Office has issued guidance documents on deploying it for emergency evacuation planning. State media outlet Guangming Daily has described DeepSeek as “playing an increasingly crucial role in the military intelligentization process.”
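Those reported figures imply a speedup worth making explicit. Taking the Xi’an team’s numbers at face value (10,000 scenarios in 48 seconds versus roughly 48 hours for human planners), the back-of-envelope arithmetic runs:

```python
# Back-of-envelope check on the figures reported by the Xi'an researchers;
# the 48-hour human estimate is theirs, not an independent measurement.
scenarios = 10_000
machine_seconds = 48
human_seconds = 48 * 3600   # 48 hours expressed in seconds

speedup = human_seconds / machine_seconds
machine_rate = scenarios / machine_seconds   # scenarios per second
human_rate = scenarios / human_seconds

print(f"speedup: {speedup:,.0f}x")                  # 3,600x
print(f"machine throughput: {machine_rate:.0f}/s")  # ~208 scenarios per second
print(f"human throughput: {human_rate:.3f}/s")      # ~0.058 scenarios per second
```

A 3,600-fold throughput gap is the kind of number that turns a staff planning function into a real-time capability, which is what makes the surrounding procurement pattern strategically significant.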

The most revealing data point: Norinco, China’s enormous state-owned weapons manufacturer, unveiled the P60 autonomous combat-support vehicle in February 2026 — explicitly powered by DeepSeek. But the integration contracts enabling such deployments across the PLA’s command architecture are being won by private firms from Taiyuan to Hefei, not by Norinco’s in-house engineers.

iFlytek Digital and the Art of Corporate Camouflage

One company illuminates the structural logic with particular clarity: iFlytek Digital, the top-awarded nontraditional vendor in CSET’s dataset, which won 20 contracts in 2023 and 2024 alone, including one for the development of AI-enabled decision support systems and translation software for the PLA. As CSET’s full report documents, iFlytek Digital has close ties to its parent company iFlytek — a speech recognition and natural language processing champion that helped build China’s mass automated voice surveillance infrastructure and played a documented role in the CCP’s surveillance programs in Xinjiang and Tibet. iFlytek was placed on the U.S. government’s Entity List in 2019.

But iFlytek Digital — which became formally independent of its parent in 2021, though its ultimate beneficial owners remained iFlytek executives — operates in a regulatory gray zone that the Entity List framework was never designed to address. This is not an accident. It is a deliberate structural feature: by creating arms-length subsidiaries, spinning off divisions, or establishing new entities that technically lack “state-reported ownership ties,” Chinese tech companies can maintain operational separation from sanctioned entities while preserving functional alignment with them.

For Washington, this matters enormously. The U.S. government’s primary tools — the Commerce Department’s Entity List, the Pentagon’s 1260H “Chinese military company” designations, and the Treasury’s investment restrictions — are built around the premise of identifying specific legal entities. When the PLA’s most consequential AI suppliers are structurally designed to be nontraditional, non-state-affiliated, and technically new, the entity-based framework becomes a sieve. You can list the parent; the subsidiary wins the contract.

The Top Private Winners: A Structural Snapshot

Based on CSET, Jamestown Foundation, and open-source procurement data, the following entities represent the emerging private tier of China’s military AI supplier ecosystem:

  • Shanxi 100 Trust Information Technology — xinchuang framework, DeepSeek integration contracts, classified-project clearance; 266 employees.
  • iFlytek Digital — NLP, translation, AI decision support; 20 PLA contracts in two years; arms-length separation from sanctioned iFlytek parent.
  • PIESAT — Satellite and geospatial analytics; delivering combat simulation platforms and automatic target recognition for the PLA; subsidiaries in Australia, Denmark, Singapore, Malaysia.
  • Sichuan Tengden — Drone manufacturer; produced autonomous systems deployed by the PLA on missions near Japan and Taiwan.
  • DeepSeek (Hangzhou High-Flyer AI) — Open-weight model appearing in 150+ PLA procurement records; U.S. lawmakers have requested its Pentagon designation as a Chinese military company.

What unites this cohort is not state ownership but structural alignment: dependence on state-controlled compute infrastructure, technical agility that SOEs lack, and an incentive architecture that rewards civil-military dual-use positioning.

The Export Control Paradox

Here is the geopolitical irony that Washington has not fully digested: U.S. export controls on advanced semiconductors — Nvidia A100s, H100s, and their successors — were designed to impede China’s military AI development. In the narrow technical sense, they impose real friction. But in the strategic sense, they have produced a second-order effect that cuts against their intended purpose.

By restricting access to Western computing hardware, the Biden and Trump administrations have deepened Chinese private firms’ dependence on state-controlled domestic alternatives — primarily Huawei’s Ascend AI chips and Kunpeng processors. The firms now winning PLA AI contracts are marketing themselves explicitly on Huawei Ascend stacks, partly because of U.S. export controls. Restrictions that force private firms to rely on state-favored compute simultaneously deepen those firms’ incentive to demonstrate loyalty through military work. The export control paradox: the policy meant to widen the capability gap may be accelerating the fusion between private innovation and PLA procurement.

A separate paradox is operational: DeepSeek’s R1 is open-weight. The Export Administration Regulations have no jurisdiction over Chinese-origin technology being used by Chinese military entities. As one former national security official noted in open-source analysis, “you can’t export-control a model that’s already been released.” The horse left the barn in January 2025.

Meanwhile, the February 2026 CSET report on China’s Military AI Wish List — drawing on over 9,000 unclassified PLA RFPs from 2023 and 2024 — documents that the PLA is pursuing AI-enabled capabilities across all domains simultaneously: decision support systems, autonomous drone swarms, deepfake generation for cognitive warfare, seaborne vessel tracking, cyberattack detection, and AI-enabled encryption stress-testing. The breadth alone should recalibrate any analyst who still views China’s military AI push as aspirational rather than operational.

Why Private Firms Are Outcompeting SOEs

Two structural conditions explain why private Chinese tech firms are winning military contracts at the expense of SOE incumbents — and why this trend will deepen.

First: speed. PLA AI procurement notices in the DeepSeek era feature compressed tender timelines, frequently under six months from solicitation to award. State-owned defense giants, with their multi-layered bureaucratic approval chains and established procurement relationships, are architecturally incapable of this tempo. A 266-person firm from Taiyuan, by contrast, can pivot its entire technical stack in weeks. The CSET data confirms that the majority of nontraditional vendors were founded relatively recently; they were built for agile deployment cycles, not Cold War-era production runs.

Second: the PLA’s own institutional crisis. Xi Jinping’s sweeping anti-corruption purge of the PLA Rocket Force leadership in 2023, and its subsequent extension into the Equipment Development Department and broader defense industrial apparatus, has hollowed out precisely the procurement networks on which SOE defense contractors depended. As Foreign Affairs documented in its March 2026 analysis, the PLA is “rapidly prototyping and experimenting” rather than engaging in traditional long-cycle procurement. In an environment where established bureaucratic relationships carry less weight than deployment speed and technical competence, private firms hold a structural advantage they did not engineer and may not fully appreciate.

The result, paradoxically, is that Xi’s anti-corruption campaign — designed to strengthen the PLA — may be reinforcing private firms’ dominance in its most strategically important procurement category.

The “China Inc.” Fallacy and Why Washington Is Flying Blind

For decades, Washington’s China threat framework has been organized around a relatively simple mental model: the Chinese state directs; Chinese companies obey. Export controls target state entities and their known subsidiaries. Sanctions lists name the champions. Defense authorizations restrict contracts with designated Chinese military companies.

This framework was always an approximation. It is now actively misleading.

The U.S. policy apparatus is structured to track the companies it already knows — CETC, CASC, Huawei, DJI. But as the CSET data on civil-military fusion makes clear, three-quarters of PLA AI contracts are going to entities that do not self-report state ownership ties. Most of these firms are not on any U.S. government list. Many operate in countries allied with the United States — PIESAT, for instance, claimed subsidiaries in Australia, Denmark, Singapore, and Malaysia as of 2023, as Foreign Policy reported.

The December 2025 letter from House Intelligence Committee Chairman Rick Crawford, House Select Committee on China Chairman John Moolenaar, and Senator Rick Scott to the Pentagon requesting that DeepSeek, Unitree Robotics, and thirteen other companies be designated as Chinese military companies is a belated, if welcome, recognition that the designations framework has fallen catastrophically behind the procurement reality. Designating DeepSeek in late 2025 — after its models had already been open-sourced, downloaded millions of times globally, and integrated into PLA command systems — is roughly analogous to sanctioning gunpowder.

Washington’s policy gap on China’s private-sector military AI is not a failure of intelligence. It is a failure of analytical framework. The question Washington keeps asking is: “Which Chinese companies are military?” The question it should be asking is: “Given China’s military-civil fusion architecture, which Chinese private technology companies aren’t potentially military?”

Implications for Washington: Three Uncomfortable Truths

The implications for Washington of China’s military AI bids going to private firms rather than state giants are neither abstract nor distant. They are operational, legal, and strategic.

First: the Entity List model is inadequate for the private-sector era. Effective technology controls now require tracking corporate structures — beneficial ownership, subsidiary relationships, executive continuity across spinoffs. The 100 Trust case demonstrates that a company can hold classified-project clearance, win the PLA’s largest DeepSeek integration contracts, and have demonstrated its products to the head of state while remaining, on paper, a 266-person private IT firm from Taiyuan that no U.S. government list has ever named. This requires a fundamental rethinking of how the Bureau of Industry and Security, Treasury’s OFAC, and the Pentagon’s designations process share data and coordinate designations.

Second: open-weight AI has broken the export control paradigm for foundation models. The U.S. framework for restricting technology transfer was designed for hardware and proprietary software — objects that can be tracked, licensed, and withheld. An open-weight model that any PLA researcher can fine-tune for battlefield scenario analysis on a domestic Huawei Ascend cluster requires a fundamentally different policy approach: one focused less on restricting Chinese access to existing models and more on maintaining the frontier gap through sustained domestic R&D investment. The 2026 National Defense Authorization Act took modest steps in this direction, but the pace of reform remains slower than the pace of PLA integration.

Third: the procurement volume is not the capability measure that matters. The 100 Trust penalty — a private firm with Xi-level visibility submitting falsified procurement documents — is evidence of a supply-demand gap in China’s military AI ecosystem. Private firms winning contracts they cannot fully execute, racing deployment timelines that exceed their genuine capabilities, is a signal of fragility as much as strength. Washington should be studying not just how many AI contracts the PLA is awarding to private firms, but how many of those contracts are producing operationally deployed capabilities versus prototype demonstrations or outright fraud. The answer, based on available open-source evidence, is considerably more ambiguous than Beijing’s official narrative suggests.

None of this diminishes the strategic imperative. As CSET’s February 2026 Military AI Wish List study documents, the breadth and speed of PLA AI experimentation — across autonomous systems, cognitive warfare, C5ISRT decision support, and space and maritime domain awareness — represents a genuine challenge to U.S. military advantages that is accelerating, not plateauing. The Foreign Affairs analysis published this month warns that “China is positioning itself to quickly and effectively adopt and deploy operational military AI, thus keeping the gap between the U.S. and Chinese militaries narrow.”

The private firms powering China’s military AI push are not a curiosity. They are the mechanism through which Beijing’s most consequential military modernization is being executed — and they are operating in a regulatory and analytical blind spot that Washington has not yet seriously committed to closing.


Citations Used

  1. “Center for Security and Emerging Technology (CSET) — Pulling Back the Curtain on China’s Military-Civil Fusion” — https://cset.georgetown.edu/publication/pulling-back-the-curtain-on-chinas-military-civil-fusion/
  2. “CSET full report (PDF)” — https://cset.georgetown.edu/wp-content/uploads/CSET-Pulling-Back-the-Curtain-on-Chinas-Military-Civil-Fusion.pdf
  3. “Jamestown Foundation — DeepSeek Use in PRC Military and Public Security Systems” — https://jamestown.org/program/deepseek-use-in-prc-military-and-public-security-systems/
  4. “CSET — China’s Military AI Wish List (February 2026)” — https://cset.georgetown.edu/publication/chinas-military-ai-wish-list/
  5. “Foreign Affairs — China’s AI Arsenal (March 2026)” — https://www.foreignaffairs.com/china/chinas-artificial-intelligence-arsenal
  6. “Foreign Policy — China: Under Xi, PLA Adopts More Civilian Tech” — https://foreignpolicy.com/2025/10/07/china-military-civil-fusion-defense-tech-us/
  7. “House Homeland Security Committee — Letter requesting Pentagon designations for DeepSeek et al.” — https://homeland.house.gov/2025/12/19/chairmen-garbarino-moolenaar-crawford-lead-letter-asking-pentagon-to-list-deepseek-gotion-unitree-and-wuxi-as-chinese-military-companies/
  8. “RealClearDefense — DeepSeek: PLA’s Intelligentized Warfare” — https://www.realcleardefense.com/articles/2025/11/18/deepseek_plas_intelligentized_warfare_1148009.html
  9. “South China Morning Post — China’s growing civilian-defence AI ties” — https://www.scmp.com/news/china/military/article/3324727/chinas-growing-civilian-defence-ai-ties-will-challenge-us-report-says
  10. “FDD — China’s Military Reportedly Deploys DeepSeek AI for Non-Combat Duties” — https://www.fdd.org/analysis/policy_briefs/2025/03/27/chinas-military-reportedly-deploys-deepseek-ai-for-non-combat-duties/
  11. “CSET — China Is Using the Private Sector to Advance Military AI” — https://cset.georgetown.edu/article/china-is-using-the-private-sector-to-advance-military-ai/
  12. “The Diplomat — The Private Firms Powering China’s Military AI Push (March 2026)” — https://thediplomat.com/2026/03/the-private-firms-powering-chinas-military-ai-push

AI is dressing up greed as progress on creative rights

There are two narratives battling for the soul of the creative economy. In one, Silicon Valley venture capitalists cast themselves as the heirs of Prometheus, bringing the fire of generative AI to a backward creative class clinging to outmoded business models. In the other, artists and authors watch their life’s work being fed into a digital maw to produce competition that is “priced at the marginal cost of zero,” as the US Copyright Office recently put it.

For years, the tech lobby has successfully peddled the first narrative, framing copyright law as a dusty relic of the Gutenberg era that must be swept aside so progress can march on. But March 2026 has provided a reality check. Last week, the UK government—facing a blistering campaign from the creative industries and a damning report from the House of Lords—was forced to delay its plans for AI copyright reform, kicking a decision into 2027. Simultaneously, in a Munich courtroom, the music rights society GEMA began its pivotal case against the AI music generator Suno, while awaiting a ruling on its related victory against OpenAI from last November.

These are not signs of a legal system that is broken or unfit for purpose. They are signs of a legal system that is working—and that the tech industry would prefer to dismantle. The core thesis emerging from the courts, parliaments, and collecting societies of the Western world is this: AI is dressing up greed as progress on creative rights. The problem is not that the law is unfit for the 21st century but that it is being flouted.

The Myth of the Legal Vacuum

Listen closely to the AI developers, and you will hear a consistent refrain: we are innovating in a vacuum; the rules are unclear; we need a modernized framework. This is the lobbying equivalent of a land grab. The House of Lords Communications and Digital Committee, in its scorching report published March 6, saw right through it. They noted that the tech sector’s demand for a broad commercial text and data mining (TDM) exception is not a plea for clarity, but an attempt to “lower… litigation risk by weakening the current level of copyright protection”.

Let us be precise about what existing law actually says. Under UK law, and across most of Europe, copyright is engaged whenever the whole or a substantial part of a protected work is copied—including storing it in digital form. As the Lords report firmly states, “the large-scale making and processing of digital copies of protected works for model training may therefore be characterised as reproduction”. The US Copyright Office, in its pre-publication report from May 2025, similarly affirmed that downloading and processing copyrighted works for training constitutes prima facie infringement, subject only to defenses like fair use.

The industry knows this. They know that hoovering up 100 million images, as Midjourney’s founder casually admitted to doing, requires a defense, not a permission slip. They know that ingesting the “Pirate Library Mirror” and “Library Genesis”—shadowy online repositories of pirated books—to train models like Anthropic’s Claude is not an act of academic research, but of industrial-scale copying. This is not innovation operating in a grey area. This is innovation operating in the dead of night.

What the Courts Are Actually Saying

While Westminster dithers, the judiciary is moving. And contrary to the narrative that judges are helpless in the face of technology, they are proving perfectly capable of applying centuries of copyright principle to silicon.

The most significant ruling of the past year came out of the Munich Regional Court last November. In a case brought by GEMA against OpenAI, the court held that AI training constitutes “reproduction” under German law. Crucially, the court found that even the fixation of copyrighted works into a model’s numerical “probability values” qualifies as reproduction if the work can later be perceived. And because ChatGPT was found to “memorize” and reproduce complete training data (song lyrics), it fell outside the EU’s TDM exceptions. OpenAI is appealing, but the legal logic is sound: a copy is a copy, whether stored on a hard drive or distilled into a matrix of weights.

This is not an isolated European quirk. Across the Atlantic, the $1.5 billion settlement by Anthropic to resolve authors’ claims was a tacit admission of liability. While a US district judge in the Bartz case made a nuanced distinction—ruling that training itself could be fair use but that maintaining a permanent library of pirated books was not—the sheer scale of the payout reveals the underlying risk.

The legal scholar Jane Ginsburg once noted that “the right to read is the right to write.” The AI industry has inverted this: they claim the right to copy is the right to compute. But the Munich ruling reminds us that copying for computational purposes is still copying. The notion that ingesting a novel to “learn” style is the same as a human reading it was rightly dismissed by the US Copyright Office, which noted that a student reading a book cannot subsequently distribute millions of perfect paraphrases of it in seconds.

Recent Legal & Regulatory Actions (2025–2026)

  • Nov 2025 — GEMA v. OpenAI (Munich): AI training = “reproduction”; lyrics memorization violates copyright.
  • Aug 2025 — Anthropic authors’ settlement: $1.5bn class-action settlement over pirated-book training.
  • May 2025 — US Copyright Office Part 3 report: rejects the “non-expressive use” defense; training requires case-by-case fair use analysis.
  • Mar 2026 — UK government copyright reform: decision delayed to 2027 after creative-industry backlash.
  • Mar 2026 — GEMA v. Suno (Munich): oral hearings held; ruling expected June 2026.

The “Pirate and Delete” Defense

If the legal landscape is clarifying, why the urgency to legislate? Because the industry’s preferred solution is not compliance, but amnesty. The UK government’s now-delayed proposal was for an “opt-out” system—shifting the burden onto creators to police the entire internet and tell AI companies not to steal from them. As the former Labour minister Margaret Hodge reportedly told Parliament, this is like putting a sign on your front door asking burglars not to enter.

The technical term for this strategy is “asymmetric warfare.” AI companies argue they cannot possibly license every work because there are billions of them. But this is an argument of convenience. The EU’s AI Act, which came into force this year, mandates transparency. Its template for training data summaries, published in final form in late 2025, requires providers to list the top data sources and domains used. If they can summarize it for regulators, they can pay for it.

Furthermore, a disturbing legal strategy is emerging from the U.S. cases. As legal analysts at Arnall Golden Gregory noted after the Bartz case, the ruling creates a perverse incentive: if training is fair use but permanent storage is not, the optimal strategy for a company is to “pirate and delete”. Download the stolen library, train the model as fast as possible, delete the evidence, and claim protection under the “transformative” use doctrine. This is not a solution; it is a recipe for laundering copyright infringement on a global scale.

The New Robber Barons

We have been here before. In 18th-century Scotland, booksellers in London held a monopoly on “valuable” literature. Scottish “pirates” like Alexander Donaldson reproduced and sold cheaper editions, arguing that knowledge should be free and that the London booksellers were holding back the enlightenment. The resulting battle—Donaldson v. Beckett—helped forge modern copyright law, establishing that the right is limited and ultimately yields to the public domain. But crucially, the Scottish “pirates” did not pretend the books were not written by someone. They simply exploited a territorial loophole. They were businessmen, not revolutionaries.

Today’s AI companies are the heirs of Donaldson, but with a crucial difference: they have no intention of letting the copyright term expire. They want the raw material of human culture delivered to them, on tap, forever. They want the value without the cost, the reward without the risk.

When Disney and NBCUniversal sue Midjourney, calling it a “bottomless pit of plagiarism,” they are not merely defending Mickey Mouse. They are defending a principle that every studio, every musician, and every journalist relies upon: that you cannot take someone’s labor without consent or compensation. When Paul McCartney releases a “silent album” to protest proposed UK laws, he is making the same point: that the output of a lifetime of creative work is being scraped to build machines that will ultimately silence him.

The Only Way Forward

There is a path forward, but it does not run through weakening the law. It runs through enforcing it.

First, reject the “opt-out” framework. The House of Lords is right: the government should rule out any reform that removes the incentive to license. The default must be opt-in.

Second, mandate transparency. The EU has shown the way. The UK’s Data (Use and Access) Act provides a vehicle for this. We need to know what data was used, where it came from, and how it was processed. That Midjourney scraped 100 million images without any tracking of provenance should be illegal, not a badge of honor.

Third, let the courts work. The Munich ruling on OpenAI lyrics and the pending GEMA v. Suno decision will provide clarity. So will the New York Times case against OpenAI and the Scarlett Johansson voice-cloning dispute. These are not roadblocks to innovation; they are the guardrails of a functioning market.

The AI industry likes to quote the maxim that “information wants to be free.” But as Stewart Brand, who coined the phrase, also said, “information also wants to be expensive.” The tension between those two truths is what markets resolve. The attempt to collapse that tension by fiat—by declaring that all information is free for the taking by a handful of monopolists—is not progress. It is a heist dressed up as philosophy.

The law is fit for the 21st century. The question is whether we have the courage to use it.


Copyright © 2019–2025, The Monitor. All Rights Reserved.
