AI Bubble: Understanding Economic Implications
The conversation around an AI bubble often conjures images of economic disaster—a sudden, catastrophic market collapse. However, framing it this way overlooks a more nuanced and ultimately more manageable reality. The AI boom isn’t an “all-or-nothing” bet; it’s a supply-and-demand mismatch rooted in two very different timelines.
Understanding the Economic Bubble
In plain economic terms, a bubble isn’t necessarily a total fraud or a worthless idea. It’s simply a bet that got too big.
When investment pours into a sector, driving valuations to extreme highs, it’s based on an expectation of future demand. If the resulting supply (the products, services, or infrastructure built) eventually outstrips the actual, immediate demand at those elevated prices, the air comes out. That’s the bubble deflating.
The key takeaway is this: even good bets can turn sour if they’re made with too much capital, too quickly. The underlying technology or idea might still be valuable. However, the market’s expectation of when that value will be realized was simply too aggressive.
The AI Timeline Paradox
What makes the current AI situation so tricky is the extraordinary difference in speed between its two core components:
- The Breakneck Pace of AI Software Development: AI models are improving at an exponential rate. New, more powerful foundation models, innovative applications, and software tools are emerging every few months. This is the software-driven supply of AI capabilities.
- The Slow Crawl of Data Centre Construction: The hardware required to train and run these massive models—the specialised chips (GPUs), the enormous data centres, and the vast amounts of power needed to run them—takes years to plan, finance, permit, build, and bring online. This represents the physical infrastructure supply.
The “bubble” risk here is that the rapid software advancement and resulting investor excitement (the demand for AI) are outpacing the physical infrastructure needed to deploy it at scale.
We may have already built an incredible amount of powerful software. However, if the energy and data centre capacity needed to actually use that software widely and profitably takes years to catch up, there will be a temporary glut. This is a classic supply-and-demand mismatch.
A Timing Correction, Not a Total Collapse
Therefore, instead of fearing an “AI apocalypse”, we should prepare for a timing correction.
This correction might mean:
- Temporary Devaluations: Companies whose valuations are based purely on future potential without the current infrastructure or power to execute may see their stock prices deflate.
- A Focus on Efficiency: The scarcity of data centre space and power will incentivise companies to develop smaller, more efficient models that can run on less hardware, driving the next wave of innovation.
- Infrastructure Wins: Companies focused on the slow-moving infrastructure—power generation, specialised cooling, and data centre construction—might see their value hold steady or rise as the world scrambles to catch up to the software’s needs.
The AI revolution is happening, but our investment timelines need to align with our construction timelines. The “bubble” isn’t a sign the technology is worthless; it’s a flashing warning sign that the market’s eagerness has outrun physical reality.
Opinion | When the Treasury Panics, Listen: Anthropic’s Mythos and the AI Threat Hiding Inside Your Bank
The most consequential financial-security meeting of 2026 happened Tuesday. Almost nobody was talking about it.
There is a particular quality to urgency in Washington — a calibrated, deliberate kind, stripped of drama precisely because the stakes are too high for theater. When Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell jointly summon the chiefs of America’s largest banks to a private session on a weekday morning, they are not performing concern. They are managing it.
That is what happened on Tuesday, April 8, 2026, in the marbled corridors of Treasury headquarters on Pennsylvania Avenue. Bessent and Powell assembled a group of Wall Street leaders to make sure banks are aware of possible future risks raised by Anthropic’s Mythos model and potential similar systems, and are taking precautions to defend their systems (Bloomberg). The CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs were present. JPMorgan’s Jamie Dimon was invited but unable to attend (AOL). The Treasury declined to comment. The Fed declined to comment. Anthropic had no immediate comment.
In Washington, silence of that particular texture is its own form of communication.
The Model That Spooked the Regulators
To understand why two of America’s most powerful financial stewards convened an emergency summit with the chiefs of institutions collectively managing trillions in assets, you need to understand what Anthropic’s Claude Mythos Preview actually does — and why it is genuinely different from the parade of large language models that have cycled through headlines since 2022.
Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company said the model is capable of identifying and exploiting weaknesses across “every major operating system and every major web browser” (RTÉ). Read that sentence again. Every major operating system. Every major web browser. This is not a chatbot that occasionally hallucinates. This is an autonomous vulnerability-hunting engine with the precision of an elite red team and the speed of software.
Unlike typical consumer-facing AI tools, Mythos is geared toward cybersecurity software engineering tasks. Its specialty is identifying critical software vulnerabilities and bugs, but it can also assemble sophisticated exploits (CoinDesk). The distinction matters enormously. Most AI models are generative — they produce text, images, code. Mythos is analytical and adversarial, capable of scanning codebases, identifying failure points invisible to human auditors, and constructing the exploits that could weaponize those failures. In the hands of a sophisticated actor — a state-sponsored hacking collective, a ransomware syndicate, a rogue insider — this capability is not a cybersecurity tool. It is a cybersecurity threat.
This marked the first time Anthropic had limited the launch of a new model (Investing.com). That fact alone should arrest attention. A company whose business model depends on broad adoption and API revenue made the deliberate, commercially costly decision to gate access. That restraint — unusual in a sector that tends to race toward release — signals something about how seriously Anthropic’s own researchers regard what they have built.
Project Glasswing: An Experiment in Controlled Power
Access to Mythos will be limited to about 40 technology companies, including Microsoft and Google, and Anthropic has been in ongoing talks with the U.S. government about the model’s capabilities (AOL). This restricted release program, referred to internally as Project Glasswing, is a deliberate inversion of how AI has historically been deployed: rather than releasing broadly and patching later, Anthropic gave dominant platform holders a head start — not to monetize first, but to defend first. Reported early partners include Amazon, Apple, and Microsoft, which receive the model so they can begin securing vulnerabilities ahead of wider exposure (Investing.com).
It is a genuinely novel approach, and one that deserves more credit than it will likely receive. The logic is sound: if a model can identify zero-day vulnerabilities at machine speed, the most responsible action is to arm defenders before the broader landscape of threat actors can replicate or steal the capability. But Glasswing also exposes a governance gap so wide you could park an aircraft carrier in it.
Who audits the 40 companies with access? What safeguards prevent Mythos from being fine-tuned, transferred, or reverse-engineered? If a Glasswing participant suffers a breach — and given that these are themselves high-value targets, the probability is non-trivial — what is the liability chain? What is the protocol? The answers to these questions do not exist in any regulatory framework currently operative in the United States, the European Union, or anywhere else.
The Systemic Risk Nobody Has Priced
The meeting at Treasury was not primarily about Anthropic. It was about what Anthropic represents: the arrival of AI capabilities that move faster than the regulatory, legal, and institutional machinery designed to contain them.
Consider the financial system’s exposure. Modern banking infrastructure is built on decades of accumulated code — legacy COBOL systems at regional lenders, middleware connecting trading platforms to clearing houses, authentication layers protecting retail deposits. Much of this code has never been audited by a sophisticated adversary because auditing at scale was prohibitively expensive. Mythos eliminates that constraint. A well-resourced actor with access to comparable capability could, in principle, systematically map the attack surface of an entire national banking system in the time it currently takes a human security team to review a single subsystem.
The episode highlights a fundamental change in how regulators are framing AI risk — not merely as a technological challenge, but as a potential catalyst for systemic events. It has already raised red flags in crypto, where experts worry that Mythos’s capability to discover and exploit zero-day vulnerabilities in real time at low cost poses a risk to DeFi infrastructure (CoinDesk).
The systemic risk framing is the right one — and it is the framing that explains why Powell was in that room. The Federal Reserve’s mandate is financial stability. Historically, stability threats have come from credit cycles, liquidity crunches, and contagion. They are now coming from code. A successful AI-enabled attack on a major custodial bank — one that compromised transaction integrity, corrupted ledger data, or triggered a cascade of failed settlement — would represent a category of financial crisis that no existing playbook addresses. The bazooka of emergency liquidity provision is not particularly useful when the crisis is epistemic rather than financial: when the question is not whether there is enough money, but whether the numbers can be trusted at all.
Anthropic vs. the Pentagon: The Contradiction at the Heart of AI Policy
There is a peculiar irony shadowing this episode. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company a supply-chain risk, a designation Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic’s request to pause the Pentagon’s designation (Bloomberg Law).
Anthropic proactively briefed senior U.S. government officials and key industry stakeholders on Mythos’s capabilities (RTÉ) — engaging responsibly with the national security community — even as one branch of that same government has labeled the company a security liability. The left hand of the U.S. government calls in Anthropic’s most advanced model to warn bankers about cyber risk; the right hand designates its maker a supply-chain threat. This is not incoherence. It is the natural consequence of applying 20th-century institutional categories to 21st-century technology companies that are simultaneously strategic assets, potential vulnerabilities, and independent actors with their own governance philosophies.
The contradiction will not resolve itself. It requires a policy architecture that does not currently exist — one that can hold together the dual realities that Anthropic’s capabilities are a genuine national asset and that Anthropic’s capabilities require genuine national oversight. Neither a blanket clearance nor a blanket designation captures that complexity.
What Bessent and Powell Actually Did — and What It Implies
| What Happened | What It Means |
|---|---|
| Joint Bessent-Powell convening | AI cyber risk is now a financial stability issue, not just a tech policy issue |
| Bank CEOs summoned mid-week | Speed of response signals real urgency, not regulatory theater |
| Mythos limited to ~40 companies | Anthropic is self-governing in the absence of formal governance frameworks |
| Pentagon supply-chain designation | Executive branch is fractured in its AI risk assessment |
| No public statement from Treasury, Fed, or banks | The regulatory playbook does not yet exist |
The convening itself was a significant signal. Bessent and Powell do not share a conference room casually. The joint appearance invested the meeting with the authority of both the fiscal and the monetary sovereign — the message being that AI cyber risk is no longer a niche technology-sector concern but a macro-prudential one. Banks should be pricing this into their operational risk frameworks. Insurers will follow. Rating agencies will not be far behind.
But signals, however weighty, are not architecture. The meeting produced no public guidance, no regulatory proposal, no framework for how banks should report, manage, or disclose AI-enabled cyber exposures. The CEOs who left Treasury on Tuesday left with warnings — and no rulebook.
The Governance Gap and How to Begin Closing It
The Mythos episode crystallizes three failures that policymakers now have no excuse for ignoring.
First, the pre-release consultation gap. Anthropic did the right thing in briefing U.S. officials before releasing Mythos. But that consultation was informal, voluntary, and ad hoc. The EU AI Act’s tiered risk framework is imperfect, but it at least establishes mandatory pre-market assessment for high-risk systems. The United States has no equivalent. A model capable of autonomously discovering and exploiting zero-days across every major OS and browser is, by any reasonable definition, a high-risk system. Its release should trigger a formal, structured national security review — not a phone call.
Second, the systemic-risk classification vacuum. The Fed can designate non-bank financial institutions as systemically important. It cannot currently designate AI models as systemically risky. That gap is now visible and consequential. What is needed is not a new agency but a clear cross-agency mandate — Treasury, CISA, the Fed, the OCC — with authority to classify certain AI capabilities as requiring coordinated disclosure, pre-release review, and sector-specific defensive preparation.
Third, the liability architecture. If a bank suffers losses traceable to an AI-enabled attack using capabilities derived from or analogous to a commercially released model, who bears what responsibility? The current answer — whatever tort law eventually produces — is wholly inadequate for systemic risks. Liability frameworks that can price and allocate AI-era cyber risk are not a luxury. They are a precondition for insurability and, ultimately, for financial stability.
A New Era of Risk — and Responsibility
There is a version of this story that ends badly: a race between capability development and governance in which capability wins by a decisive margin, and the first major AI-enabled financial system attack comes before any of the above frameworks exist. That version is not inevitable, but preventing it requires active work.
The Tuesday meeting at Treasury was, in its way, a hopeful sign. It suggests that the United States’ most senior financial authorities understand, at least viscerally, that the risk is real and that the clock is running. It suggests that some version of public-private coordination is possible, even in a regulatory environment that remains deeply fragmented.
Anthropic has previously disclosed that it consulted with U.S. officials ahead of Mythos’s release regarding both its defensive and offensive cyber capabilities (CoinDesk). That consultation should become a standard, not an anomaly. The release of any AI system with demonstrated offensive cyber capabilities — the ability to identify and exploit zero-days at scale — should automatically trigger a mandatory interagency review, sectoral briefings for affected industries, and a public risk disclosure, however carefully worded.
What Bessent and Powell did on Tuesday was, in the truest sense, firefighting. The fire is real. But what the financial system needs is not better firefighters. It needs buildings that are harder to burn.
The Mythos moment is a clarifying one. It tells us, with unusual precision, that the era of AI as a productivity story is over. The era of AI as a security story — a national security story, a financial security story, a systemic stability story — has arrived. Policymakers who treat it otherwise are not being optimistic. They are being negligent.
The Private Firms Powering China’s Military AI Push
China’s private firms are winning its military AI bids — and Washington doesn’t seem to grasp the implications.
In February 2026, a routine penalty notice appeared on the People’s Liberation Army’s procurement platform. It named Shanxi 100 Trust Information Technology — a 266-person IT company based in Taiyuan, in China’s coal-scarred heartland — and barred it from all military procurement across every service branch for one year. The infraction was bid fraud: the firm had submitted falsified materials to win a contract. In the labyrinthine world of PLA procurement, such violations are not uncommon.
What was uncommon was the company itself.
As a Jamestown Foundation analysis identified, 100 Trust is the sole wholly privately-owned firm operating inside China’s xinchuang (信创) domestic IT innovation framework — a program originally designed to replace foreign technology in sensitive government systems. Despite its modest headcount, the firm holds classified-project clearance and had won some of the PLA’s largest contracts to integrate DeepSeek, China’s breakout open-weight AI model, into military command systems. Its products had reportedly been demonstrated to Xi Jinping himself. And yet, when the opportunity arose to inflate its credentials, someone at 100 Trust apparently couldn’t resist.
The penalty notice tells us almost everything we need to know about China’s military AI push in 2026 — both its ambition and its contradictions. It tells us that China’s private firms are winning military AI bids once reserved for state giants. It tells us that the structural conditions of Beijing’s civil-military fusion policy have made this outcome not accidental but inevitable. And it tells us that Washington, still operating on a mental model of “China Inc.” — a monolithic, state-directed industrial juggernaut — is watching the wrong companies.
The Data Is Unambiguous: Private Is the New Defense
The anecdote of Shanxi 100 Trust is not an outlier. It is the leading edge of a statistical pattern that, once you see it, is impossible to unsee.
In a landmark September 2025 study, Georgetown University’s Center for Security and Emerging Technology (CSET) analyzed 2,857 AI-related defense contract award notices published by the PLA between January 2023 and December 2024. The finding should have set off alarms in every national security directorate from Langley to the Pentagon: of the 338 entities that won AI-related PLA contracts, close to three-quarters were nontraditional vendors (NTVs) — firms with no self-reported state ownership ties. These NTVs collectively won 764 contracts, more than any other category. Two-thirds of them were founded after 2010.
These are not shadowy front companies. They are nimble, technically sophisticated private firms that market themselves explicitly on dual-use capability — civilian agility deployed for military ends. They are the companies winning private-sector PLA AI procurement contracts that, by any conventional Washington risk framework, should not exist.
The legacy state-owned defense champions — China Electronics Technology Group (CETC), China Aerospace Science and Technology Corporation (CASC), NORINCO — still lead in sheer contract volume among top-tier entities. But the growth is concentrated in the private sector. The civil-military fusion AI China strategy that Xi Jinping has championed for over a decade is, in the AI domain at least, delivering something its architects may not have fully anticipated: a market in which lean private operators consistently outrun the bureaucratic lumbering of the state-owned defense-industrial complex.
The DeepSeek Accelerant
No single development has turbocharged China’s military AI push more dramatically than DeepSeek’s January 2025 release of its R1 reasoning model as an open-weight system — meaning any entity, including the PLA and its contractor ecosystem, could download, modify, and deploy it without restriction.
The Jamestown Foundation, tracking hundreds of DeepSeek-specific PLA procurement tenders, found the same structural pattern: private companies, not SOEs, won a majority of contracts to build DeepSeek-integrated tools for the PLA. The Jamestown analysts note that this likely reflects private firms’ superior capacity to respond to rapidly shifting market dynamics — a competitive edge that bureaucratic SOEs, with their elongated procurement relationships and political dependencies, simply cannot match.
The capabilities being built are not incremental. Researchers at Xi’an Technological University demonstrated a DeepSeek-powered assessment system that processed 10,000 battlefield scenarios in 48 seconds — a task they estimated would require human military planners approximately 48 hours. The PLA’s Central Theatre Command (responsible for defending Beijing) has used DeepSeek in military hospital settings and personnel management. The Nanjing National Defense Mobilization Office has issued guidance documents on deploying it for emergency evacuation planning. State media outlet Guangming Daily has described DeepSeek as “playing an increasingly crucial role in the military intelligentization process.”
The most revealing data point: Norinco, China’s enormous state-owned weapons manufacturer, unveiled the P60 autonomous combat-support vehicle in February 2026 — explicitly powered by DeepSeek. But the integration contracts enabling such deployments across the PLA’s command architecture are being won by private firms powering China’s military AI systems from Taiyuan to Hefei, not by Norinco’s in-house engineers.
iFlytek Digital and the Art of Corporate Camouflage
One company illuminates the structural logic with particular clarity: iFlytek Digital, the top-awarded nontraditional vendor in CSET’s dataset, which won 20 contracts in 2023 and 2024 alone, including one for the development of AI-enabled decision support systems and translation software for the PLA. As CSET’s full report documents, iFlytek Digital has close ties to its parent company iFlytek — a speech recognition and natural language processing champion that helped build China’s mass automated voice surveillance infrastructure and played a documented role in the CCP’s surveillance programs in Xinjiang and Tibet. iFlytek was placed on the U.S. government’s Entity List in 2019.
But iFlytek Digital — which became formally independent of its parent in 2021, though its ultimate beneficial owners remained iFlytek executives — operates in a regulatory gray zone that the Entity List framework was never designed to address. This is not an accident. It is a deliberate structural feature: by creating arms-length subsidiaries, spinning off divisions, or establishing new entities that technically lack “state-reported ownership ties,” Chinese tech companies can maintain operational separation from sanctioned entities while preserving functional alignment with them.
For Washington, this matters enormously. The U.S. government’s primary tools — the Commerce Department’s Entity List, the Pentagon’s 1260H “Chinese military company” designations, and the Treasury’s investment restrictions — are built around the premise of identifying specific legal entities. When the PLA’s most consequential AI suppliers are structurally designed to be nontraditional, non-state-affiliated, and technically new, the entity-based framework becomes a sieve. You can list the parent; the subsidiary wins the contract.
The Top Private Winners: A Structural Snapshot
Based on CSET, Jamestown Foundation, and open-source procurement data, the following entities represent the emerging private tier of China’s military AI supplier ecosystem:
- Shanxi 100 Trust Information Technology — xinchuang framework, DeepSeek integration contracts, classified-project clearance; 266 employees.
- iFlytek Digital — NLP, translation, AI decision support; 20 PLA contracts in two years; arms-length separation from sanctioned iFlytek parent.
- PIESAT — Satellite and geospatial analytics; delivering combat simulation platforms and automatic target recognition for the PLA; subsidiaries in Australia, Denmark, Singapore, Malaysia.
- Sichuan Tengden — Drone manufacturer; produced autonomous systems deployed by the PLA on missions near Japan and Taiwan.
- DeepSeek (Hangzhou High-Flyer AI) — Open-weight model appearing in 150+ PLA procurement records; U.S. lawmakers have requested its Pentagon designation as a Chinese military company.
What unites this cohort is not state ownership but structural alignment: dependence on state-controlled compute infrastructure, technical agility that SOEs lack, and an incentive architecture that rewards civil-military dual-use positioning.
The Export Control Paradox
Here is the geopolitical irony that Washington has not fully digested: U.S. export controls on advanced semiconductors — Nvidia A100s, H100s, and their successors — were designed to impede China’s military AI development. In the narrow technical sense, they impose real friction. But in the strategic sense, they have produced a second-order effect that cuts against their intended purpose.
By restricting access to Western computing hardware, the Biden and Trump administrations have deepened Chinese private firms’ dependence on state-controlled domestic alternatives — primarily Huawei’s Ascend AI chips and Kunpeng processors. The firms now winning PLA AI contracts are marketing themselves explicitly on Huawei Ascend stacks, partly because of U.S. export controls. Restrictions that force private firms to rely on state-favored compute simultaneously deepen those firms’ incentive to demonstrate loyalty through military work. The export control paradox: the policy meant to widen the capability gap may be accelerating the fusion between private innovation and PLA procurement.
A separate paradox is operational: DeepSeek’s R1 is open-weight. The Export Administration Regulations have no jurisdiction over Chinese-origin technology being used by Chinese military entities. As one former national security official noted in open-source analysis, “you can’t export-control a model that’s already been released.” The horse left the barn in January 2025.
Meanwhile, the February 2026 CSET report on China’s Military AI Wish List — drawing on over 9,000 unclassified PLA RFPs from 2023 and 2024 — documents that the PLA is pursuing AI-enabled capabilities across all domains simultaneously: decision support systems, autonomous drone swarms, deepfake generation for cognitive warfare, seaborne vessel tracking, cyberattack detection, and AI-enabled encryption stress-testing. The breadth alone should recalibrate any analyst who still views China’s military AI push as aspirational rather than operational.
Why Private Firms Are Outcompeting SOEs
Two structural conditions explain why private Chinese tech firms’ military contracts are growing at the expense of SOE incumbents — and why this trend will deepen.
First: speed. PLA AI procurement notices in the DeepSeek era feature compressed tender timelines, frequently under six months from solicitation to award. State-owned defense giants, with their multi-layered bureaucratic approval chains and established procurement relationships, are architecturally incapable of this tempo. A 266-person firm from Taiyuan, by contrast, can pivot its entire technical stack in weeks. The CSET data confirms that the majority of NTVs were founded relatively recently; they were built for agile deployment cycles, not Cold War-era production runs.
Second: the PLA’s own institutional crisis. Xi Jinping’s sweeping anti-corruption purge of the PLA Rocket Force leadership in 2023, and its subsequent extension into the Equipment Development Department and broader defense industrial apparatus, has hollowed out precisely the procurement networks on which SOE defense contractors depended. As Foreign Affairs documented in its March 2026 analysis, the PLA is “rapidly prototyping and experimenting” rather than engaging in traditional long-cycle procurement. In an environment where established bureaucratic relationships carry less weight than deployment speed and technical competence, private firms hold a structural advantage they did not engineer and may not fully appreciate.
The result, paradoxically, is that Xi’s anti-corruption campaign — designed to strengthen the PLA — may be reinforcing private firms’ dominance in its most strategically important procurement category.
The “China Inc.” Fallacy and Why Washington Is Flying Blind
For decades, Washington’s China threat framework has been organized around a relatively simple mental model: the Chinese state directs; Chinese companies obey. Export controls target state entities and their known subsidiaries. Sanctions lists name the champions. Defense authorizations restrict contracts with designated Chinese military companies.
This framework was always an approximation. It is now actively misleading.
The U.S. policy apparatus is structured to track the companies it already knows — CETC, CASC, Huawei, DJI. But as the CSET data on civil-military fusion makes clear, three-quarters of PLA AI contracts are going to entities that do not self-report state ownership ties. Most of these firms are not on any U.S. government list. Many operate in countries allied with the United States — PIESAT, for instance, claimed subsidiaries in Australia, Denmark, Singapore, and Malaysia as of 2023, as Foreign Policy reported.
The December 2025 letter from House Intelligence Committee Chairman Rick Crawford, House Select Committee on China Chairman John Moolenaar, and Senator Rick Scott to the Pentagon requesting that DeepSeek, Unitree Robotics, and thirteen other companies be designated as Chinese military companies is a belated, if welcome, recognition that the designations framework has fallen catastrophically behind the procurement reality. Designating DeepSeek in late 2025 — after its models had already been open-sourced, downloaded millions of times globally, and integrated into PLA command systems — is roughly analogous to sanctioning gunpowder.
The U.S. policy gap on China’s private-sector military AI is not a failure of intelligence. It is a failure of analytical framework. The question Washington keeps asking is: “Which Chinese companies are military?” The question it should be asking is: “Given China’s MCF architecture, which Chinese private technology companies aren’t potentially military?”
Implications for Washington: Three Uncomfortable Truths
The implications for Washington of China’s military AI bids being won by private firms rather than state giants are neither abstract nor distant. They are operational, legal, and strategic.
First: the Entity List model is inadequate for the private-sector era. Effective technology controls now require tracking corporate structures — beneficial ownership, subsidiary relationships, executive continuity across spinoffs. The 100 Trust case demonstrates that a company can hold classified-project clearance, win the PLA’s largest DeepSeek integration contracts, and have demonstrated its products to the head of state while remaining, on paper, a 266-person private IT firm from Taiyuan that no U.S. government list has ever named. This requires a fundamental rethinking of how the Bureau of Industry and Security, Treasury’s OFAC, and the Pentagon’s designations process share data and coordinate designations.
Second: open-weight AI has broken the export control paradigm for foundation models. The U.S. framework for restricting technology transfer was designed for hardware and proprietary software — objects that can be tracked, licensed, and withheld. An open-weight model that any PLA researcher can fine-tune for battlefield scenario analysis on a domestic Huawei Ascend cluster requires a fundamentally different policy approach: one focused less on restricting Chinese access to existing models and more on maintaining the frontier gap through sustained domestic R&D investment. The 2026 National Defense Authorization Act took modest steps in this direction, but the pace of reform remains slower than the pace of PLA integration.
Third: the procurement volume is not the capability measure that matters. The 100 Trust penalty — a private firm with Xi-level visibility submitting falsified procurement documents — is evidence of a supply-demand gap in China’s military AI ecosystem. Private firms winning contracts they cannot fully execute, racing deployment timelines that exceed their genuine capabilities, is a signal of fragility as much as strength. Washington should be studying not just how many AI contracts the PLA is awarding to private firms, but how many of those contracts are producing operationally deployed capabilities versus prototype demonstrations or outright fraud. The answer, based on available open-source evidence, is considerably more ambiguous than Beijing’s official narrative suggests.
None of this diminishes the strategic imperative. As CSET’s February 2026 Military AI Wish List study documents, the breadth and speed of PLA AI experimentation — across autonomous systems, cognitive warfare, C5ISRT decision support, and space and maritime domain awareness — represents a genuine challenge to U.S. military advantages that is accelerating, not plateauing. The Foreign Affairs analysis published this month warns that “China is positioning itself to quickly and effectively adopt and deploy operational military AI, thus keeping the gap between the U.S. and Chinese militaries narrow.”
The private firms powering China’s military AI push are not a curiosity. They are the mechanism through which Beijing’s most consequential military modernization is being executed — and they are operating in a regulatory and analytical blind spot that Washington has not yet seriously resolved to close.
Citations Used
- “Center for Security and Emerging Technology (CSET) — Pulling Back the Curtain on China’s Military-Civil Fusion” → https://cset.georgetown.edu/publication/pulling-back-the-curtain-on-chinas-military-civil-fusion/
- “CSET full report (PDF)” → https://cset.georgetown.edu/wp-content/uploads/CSET-Pulling-Back-the-Curtain-on-Chinas-Military-Civil-Fusion.pdf
- “Jamestown Foundation — DeepSeek Use in PRC Military and Public Security Systems” → https://jamestown.org/program/deepseek-use-in-prc-military-and-public-security-systems/
- “CSET — China’s Military AI Wish List (February 2026)” → https://cset.georgetown.edu/publication/chinas-military-ai-wish-list/
- “Foreign Affairs — China’s AI Arsenal (March 2026)” → https://www.foreignaffairs.com/china/chinas-artificial-intelligence-arsenal
- “Foreign Policy — China: Under Xi, PLA Adopts More Civilian Tech” → https://foreignpolicy.com/2025/10/07/china-military-civil-fusion-defense-tech-us/
- “House Homeland Security Committee — Letter requesting Pentagon designations for DeepSeek et al.” → https://homeland.house.gov/2025/12/19/chairmen-garbarino-moolenaar-crawford-lead-letter-asking-pentagon-to-list-deepseek-gotion-unitree-and-wuxi-as-chinese-military-companies/
- “RealClearDefense — DeepSeek: PLA’s Intelligentized Warfare” → https://www.realcleardefense.com/articles/2025/11/18/deepseek_plas_intelligentized_warfare_1148009.html
- “South China Morning Post — China’s growing civilian-defence AI ties” → https://www.scmp.com/news/china/military/article/3324727/chinas-growing-civilian-defence-ai-ties-will-challenge-us-report-says
- “FDD — China’s Military Reportedly Deploys DeepSeek AI for Non-Combat Duties” → https://www.fdd.org/analysis/policy_briefs/2025/03/27/chinas-military-reportedly-deploys-deepseek-ai-for-non-combat-duties/
- “CSET — China Is Using the Private Sector to Advance Military AI” → https://cset.georgetown.edu/article/china-is-using-the-private-sector-to-advance-military-ai/
- “The Diplomat — The Private Firms Powering China’s Military AI Push (March 2026)” → https://thediplomat.com/2026/03/the-private-firms-powering-chinas-military-ai-push
AI is dressing up greed as progress on creative rights
There are two narratives battling for the soul of the creative economy. In one, Silicon Valley venture capitalists cast themselves as the heirs of Prometheus, bringing the fire of generative AI to a backward creative class clinging to outmoded business models. In the other, artists and authors watch their life’s work being fed into a digital maw to produce competition that is “priced at the marginal cost of zero,” as the US Copyright Office recently put it.
For years, the tech lobby has successfully peddled the first narrative, framing copyright law as a dusty relic of the Gutenberg era that must be swept aside so progress can march on. But March 2026 has provided a reality check. Last week, the UK government—facing a blistering campaign from the creative industries and a damning report from the House of Lords—was forced to delay its plans for AI copyright reform, kicking a decision into 2027. Simultaneously, in a Munich courtroom, the music rights society GEMA began its pivotal case against the AI music generator Suno, while the appeal of its related victory against OpenAI from last November remains pending.
These are not signs of a legal system that is broken or unfit for purpose. They are signs of a legal system that is working—and that the tech industry would prefer to dismantle. The core thesis emerging from the courts, parliaments, and collecting societies of the Western world is this: AI is dressing up greed as progress on creative rights. The problem is not that the law is unfit for the 21st century but that it is being flouted.
The Myth of the Legal Vacuum
Listen closely to the AI developers, and you will hear a consistent refrain: we are innovating in a vacuum; the rules are unclear; we need a modernized framework. This is the lobbying equivalent of a land grab. The House of Lords Communications and Digital Committee, in its scorching report published March 6, saw right through it. They noted that the tech sector’s demand for a broad commercial text and data mining (TDM) exception is not a plea for clarity, but an attempt to “lower… litigation risk by weakening the current level of copyright protection”.
Let us be precise about what existing law actually says. Under UK law, and across most of Europe, copyright is engaged whenever the whole or a substantial part of a protected work is copied—including storing it in digital form. As the Lords report firmly states, “the large-scale making and processing of digital copies of protected works for model training may therefore be characterised as reproduction”. The US Copyright Office, in its pre-publication report from May 2025, similarly affirmed that downloading and processing copyrighted works for training constitutes prima facie infringement, subject only to defenses like fair use.
The industry knows this. They know that hoovering up 100 million images, as Midjourney’s founder casually admitted to doing, requires a defense, not a permission slip. They know that ingesting the “Pirate Library Mirror” and “Library Genesis”—shadowy online repositories of pirated books—to train models like Anthropic’s Claude is not an act of academic research, but of industrial-scale copying. This is not innovation operating in a grey area. This is innovation operating in the dead of night.
What the Courts Are Actually Saying
While Westminster dithers, the judiciary is moving. And contrary to the narrative that judges are helpless in the face of technology, they are proving perfectly capable of applying centuries of copyright principle to silicon.
The most significant ruling of the past year came out of the Munich Regional Court last November. In a case brought by GEMA against OpenAI, the court held that AI training constitutes “reproduction” under German law. Crucially, the court found that even the fixation of copyrighted works into a model’s numerical “probability values” qualifies as reproduction if the work can later be perceived. And because ChatGPT was found to “memorize” and reproduce complete training data (song lyrics), it fell outside the EU’s TDM exceptions. OpenAI is appealing, but the legal logic is sound: a copy is a copy, whether stored on a hard drive or distilled into a matrix of weights.
This is not an isolated European quirk. Across the Atlantic, the $1.5 billion settlement by Anthropic to resolve authors’ claims was a tacit admission of liability. While a US district judge in the Bartz case made a nuanced distinction—ruling that training itself could be fair use but that maintaining a permanent library of pirated books was not—the sheer scale of the payout reveals the underlying risk.
The legal scholar Jane Ginsburg once noted that “the right to read is the right to write.” The AI industry has inverted this: they claim the right to copy is the right to compute. But the Munich ruling reminds us that copying for computational purposes is still copying. The notion that ingesting a novel to “learn” style is the same as a human reading it was rightly dismissed by the US Copyright Office, which noted that a student reading a book cannot subsequently distribute millions of perfect paraphrases of it in seconds.
The “Pirate and Delete” Defense
If the legal landscape is clarifying, why the urgency to legislate? Because the industry’s preferred solution is not compliance, but amnesty. The UK government’s now-delayed proposal was for an “opt-out” system—shifting the burden onto creators to police the entire internet and tell AI companies not to steal from them. As the former Labour minister Margaret Hodge reportedly told Parliament, this is like putting a sign on your front door asking burglars not to enter.
The technical term for this strategy is “asymmetric warfare.” AI companies argue they cannot possibly license every work because there are billions of them. But this is an argument of convenience. The EU’s AI Act, which came into force this year, mandates transparency. Its template for training data summaries, published in final form in late 2025, requires providers to list the top data sources and domains used. If they can summarize it for regulators, they can pay for it.
Furthermore, a disturbing legal strategy is emerging from the U.S. cases. As legal analysts at Arnall Golden Gregory noted after the Bartz case, the ruling creates a perverse incentive: if training is fair use but permanent storage is not, the optimal strategy for a company is to “pirate and delete”. Download the stolen library, train the model as fast as possible, delete the evidence, and claim protection under the “transformative” use doctrine. This is not a solution; it is a recipe for laundering copyright infringement on a global scale.
The New Robber Barons
We have been here before. In 18th-century Britain, the London booksellers held a monopoly on “valuable” literature. Scottish “pirates” like Alexander Donaldson reproduced and sold cheaper editions, arguing that knowledge should be free and that the London booksellers were holding back the enlightenment. The resulting battle—Donaldson v. Beckett—helped forge modern copyright law, establishing that the right is limited and ultimately yields to the public domain. But crucially, the Scottish “pirates” did not pretend the books were not written by someone. They simply exploited a territorial loophole. They were businessmen, not revolutionaries.
Today’s AI companies are the heirs of Donaldson, but with a crucial difference: they have no intention of letting the copyright term expire. They want the raw material of human culture delivered to them, on tap, forever. They want the value without the cost, the reward without the risk.
When Disney and NBCUniversal sue Midjourney, calling it a “bottomless pit of plagiarism,” they are not merely defending Mickey Mouse. They are defending a principle that every studio, every musician, and every journalist relies upon: that you cannot take someone’s labor without consent or compensation. When Paul McCartney releases a “silent album” to protest proposed UK laws, he is making the same point: that the output of a lifetime of creative work is being scraped to build machines that will ultimately silence him.
The Only Way Forward
There is a path forward, but it does not run through weakening the law. It runs through enforcing it.
First, reject the “opt-out” framework. The House of Lords is right: the government should rule out any reform that removes the incentive to license. The default must be opt-in.
Second, mandate transparency. The EU has shown the way. The UK’s Data (Use and Access) Act provides a vehicle for this. We need to know what data was used, where it came from, and how it was processed. Midjourney’s admitted scraping of 100 million images without any tracking of provenance should be illegal, not a badge of honor.
Third, let the courts work. The Munich ruling on OpenAI lyrics and the pending GEMA v. Suno decision will provide clarity. So will the New York Times case against OpenAI and the Scarlett Johansson voice cloning suit. These are not roadblocks to innovation; they are the guardrails of a functioning market.
The AI industry likes to quote the maxim that “information wants to be free.” But as Stewart Brand, who coined the phrase, also said, “information also wants to be expensive.” The tension between those two truths is what markets resolve. The attempt to collapse that tension by fiat—by declaring that all information is free for the taking by a handful of monopolists—is not progress. It is a heist dressed up as philosophy.
The law is fit for the 21st century. The question is whether we have the courage to use it.