US-China Decoupling Could Jeopardize AI Governance: Insights from Henry Kissinger

Introduction

In the realm of global geopolitics, few figures have commanded as much respect and influence as Henry Kissinger. As a diplomat, strategist, and thinker, he has played an instrumental role in shaping the course of international relations for over half a century. His insights on international affairs have often transcended the boundaries of time and context, providing us with invaluable perspectives on the complex challenges that face the world today. One such challenge, which has been at the forefront of discussions in recent years, is the relationship between the United States and China, particularly in the context of artificial intelligence (AI) governance.

U.S. President Joe Biden shakes hands with Chinese President Xi Jinping as they meet on the sidelines of the G20 leaders’ summit in Bali, Indonesia, November 14, 2022. REUTERS/Kevin Lamarque

In this extensive opinion piece, we will delve into the views of Henry Kissinger on why a US-China decoupling in the field of AI would have detrimental consequences for global AI governance. We will examine the dynamics of this relationship, explore the significance of AI governance, and consider Kissinger’s insights on how these factors intersect and impact the world’s future.

Understanding the US-China Relationship

The relationship between the United States and China has evolved significantly over the past several decades. From the initial stages of opening diplomatic ties in the 1970s to becoming two of the world’s largest economies, the trajectory of their relationship has been marked by cooperation, competition, and complex interdependence.

During the early years of this relationship, cooperation was the dominant theme. Diplomatic efforts led by Kissinger himself resulted in the normalization of relations between the two countries. This paved the way for economic engagement and trade, which in turn, contributed to China’s rapid economic growth and transformation.

However, as China emerged as an economic powerhouse, the nature of the relationship began to shift. Competition in various domains, including trade, technology, and influence in global institutions, became more pronounced. The rise of China’s tech industry, particularly in AI, further intensified the competitive aspect of their relationship. Both countries began investing heavily in AI research and development, aiming to establish dominance in this crucial technology.

The Significance of AI Governance

Artificial intelligence holds tremendous promise and potential for humanity. It has the capacity to revolutionize industries, enhance healthcare, improve education, and address complex global challenges. However, it also presents a range of ethical, legal, and security concerns that require careful governance.

AI governance refers to the framework of rules, norms, and institutions that guide the development, deployment, and use of AI technologies. It encompasses issues such as data privacy, bias and fairness in AI algorithms, autonomous weapon systems, and the ethical implications of AI in decision-making processes. Effective AI governance is crucial to harness the benefits of AI while mitigating its risks.

In the context of US-China relations, AI governance is not merely a domestic concern but a global one. Both countries are at the forefront of AI research and development, and the decisions they make regarding AI governance will have far-reaching consequences for the entire world. As AI becomes increasingly integrated into various aspects of society, it is imperative that international norms and standards are established to ensure its responsible and ethical use.

Kissinger’s Perspective on US-China Decoupling and AI Governance

Henry Kissinger, in his thoughtful analysis of the US-China relationship, has consistently advocated for a pragmatic approach that prioritizes cooperation over confrontation. He argues that a US-China decoupling in the realm of AI would be detrimental to global AI governance for several reasons.

  1. Interconnectedness: Kissinger highlights the deep interconnectedness of the US and Chinese tech sectors. Companies from both countries collaborate, invest, and compete in the global tech ecosystem. A sudden and extensive decoupling would disrupt supply chains, research collaborations, and the flow of talent, harming technological progress and innovation.
  2. AI Development: China has made significant strides in AI development, often focusing on applications that differ from those in the West. A decoupling could lead to the parallel development of AI technologies with different values, standards, and goals, potentially creating a global divide in AI governance.
  3. Global Norms: Kissinger underscores the importance of establishing global norms for AI governance that reflect a consensus among major stakeholders, including the US and China. A decoupling could hinder the negotiation and implementation of such norms, leaving a void that could be filled by conflicting standards and regulations.
  4. Coordinated Responses: In the face of ethical dilemmas and challenges associated with AI, Kissinger argues that it is crucial for the US and China to work together to find common ground. Whether it’s addressing algorithmic bias or the use of AI in surveillance, coordinated responses are more likely to yield meaningful results.
  5. Global Leadership: As AI becomes increasingly central to technological advancement, global leadership in AI governance is a position of significant influence. Kissinger contends that the US and China should vie for leadership in shaping AI governance rather than isolating themselves from each other. Cooperation in this domain can help set the agenda for global AI norms.

Conclusion

Henry Kissinger’s perspective on the US-China decoupling and its impact on AI governance offers a valuable and nuanced view of the complex dynamics at play in the global arena. While competition between the US and China in the realm of AI is undeniable, Kissinger’s emphasis on cooperation and coordination is a timely reminder of the importance of finding common ground in addressing the challenges posed by artificial intelligence.

As AI continues to evolve and shape the future, the world must come together to develop ethical frameworks and governance structures that ensure its responsible use. A US-China decoupling would not only hinder progress in AI but also risk fragmenting the global AI governance landscape. To navigate this critical juncture in history successfully, policymakers, technologists, and diplomats must heed Kissinger’s wisdom and strive for collaborative solutions that prioritize the common good of humanity over individual interests.


Discover more from The Monitor

Subscribe to get the latest posts sent to your email.

The great price deflator: why the AI boom could be the most disinflationary force in a generation

Northern Trust’s $1.4 trillion asset management arm says the AI boom is “massively disinflationary.” The evidence is building — but so are the near-term headwinds. Here is what the bulls are getting right, what they are glossing over, and what every central banker should be thinking about this week.

Analysis · 2,150 words · Cites: Northern Trust, IMF WEO April 2026, BIS Working Papers, OECD

There is a sentence making the rounds in macro circles this morning that deserves more than a tweet. Northern Trust Asset Management — custodian of $1.4 trillion in client assets — told the Financial Times that the AI boom is poised to be “massively disinflationary.” Two words, and an argument that, if it proves correct, will reshape monetary policy for the rest of this decade. If it proves wrong, it will look like the most expensive case of groupthink in asset management history.

The claim is bold, but it is not baseless. Across its 2026 Capital Market Assumptions, Northern Trust has laid the groundwork: nearly 40 percent of jobs worldwide — and 60 percent in advanced economies — are now exposed to AI, signaling what the firm calls “a major shift” in productivity and labor market dynamics. Add to that the IMF’s own January 2026 estimate that rapid AI adoption could lift global growth by as much as 0.3 percentage points this year alone, and up to 0.8 percentage points annually in the medium term, and suddenly “massively disinflationary” sounds less like a marketing line and more like a macroeconomic thesis worth taking seriously.
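
To make those numbers tangible, here is a back-of-the-envelope compounding sketch in Python. The +0.8 percentage-point uplift is the IMF's upper-bound medium-term estimate quoted above; the roughly 3.1 percent baseline growth rate and the ten-year horizon are illustrative assumptions, not IMF figures.

```python
# Illustrative sketch: cumulative effect of the IMF's estimated AI growth
# uplift on the level of global GDP. The baseline growth rate and the
# 10-year horizon are assumptions for illustration, not IMF figures.

def gdp_level(initial, growth_rate, years):
    """Compound an annual growth rate over a number of years."""
    return initial * (1 + growth_rate) ** years

baseline_growth = 0.031   # assumed baseline global growth (~3.1%/year)
ai_uplift = 0.008         # IMF upper-bound medium-term uplift: +0.8 pp/year
years = 10

without_ai = gdp_level(100.0, baseline_growth, years)
with_ai = gdp_level(100.0, baseline_growth + ai_uplift, years)
extra = (with_ai / without_ai - 1) * 100

print(f"GDP level after {years} years, baseline: {without_ai:.1f}")
print(f"GDP level after {years} years, with AI uplift: {with_ai:.1f}")
print(f"Cumulative extra output: {extra:.1f}%")
```

Roughly an extra eight percent of global output after a decade: small in any single year, enormous in cumulative terms, which is why the medium-term framing matters.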

But serious theses deserve serious scrutiny. And when you peel back the optimism, you find a story with a considerably more complicated second act.

“AI today is still in its early innings. It is reshaping how we operate. It is reshaping how we work. Yet at the same time, we know there are going to be a number of missteps.” — Northern Trust Asset Management, February 2026

The disinflationary logic — and why it is compelling

The core argument runs as follows. AI raises the productive capacity of every worker, firm, and economy that adopts it. More output from the same inputs means falling unit costs. Falling unit costs mean downward pressure on prices. In a world still wrestling with inflation — the IMF’s April 2026 World Economic Outlook projects global headline inflation at 4.4 percent this year, elevated partly by a new Middle East conflict — that kind of structural supply-side boost could not arrive at a better moment.

The historical analogy is not perfect, but it is instructive. The internet and personal computing drove a productivity renaissance through the 1990s that helped the US run a decade of growth with unusually low inflation. The difference this time, optimists argue, is both speed and scope. Generative AI is being deployed across sectors — finance, law, medicine, logistics, software — simultaneously, rather than trickling through the economy over fifteen years. The IMF’s own research noted that investment in information-processing equipment and software grew 16.5 percent year-on-year in the third quarter of 2025 in the United States alone. That is not a technology cycle. That is a structural reorientation.

At the firm level, the mechanism is equally legible. AI-assisted coding reduces software development costs. AI-powered customer service reduces headcount requirements per unit of output. AI-accelerated drug discovery compresses R&D timelines. Each of these reduces costs for producers, and in competitive markets, cost reductions eventually become price reductions for consumers. The BIS, in its 2026 working paper on AI adoption among European firms, found measurable productivity gains at companies with higher AI adoption rates — gains that, if broad-based, translate directly into disinflationary pressure.
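
The firm-level mechanism can be reduced to simple unit-cost arithmetic. A minimal sketch, with all inputs as illustrative assumptions (these are not figures from the BIS paper):

```python
# Stylized unit-cost arithmetic behind the disinflation argument.
# All inputs below are illustrative assumptions.

def unit_cost(total_cost, output):
    """Cost per unit of output."""
    return total_cost / output

labor_cost = 1_000_000.0   # assumed annual labor cost for a firm
other_cost = 500_000.0     # assumed non-labor cost
output_before = 100_000    # units produced before AI adoption

# Suppose AI adoption lets the same workforce produce 15% more output.
productivity_gain = 0.15
output_after = output_before * (1 + productivity_gain)

before = unit_cost(labor_cost + other_cost, output_before)
after = unit_cost(labor_cost + other_cost, output_after)

print(f"Unit cost before: {before:.2f}")   # 15.00
print(f"Unit cost after:  {after:.2f}")    # 13.04
print(f"Implied price headroom: {(1 - after / before) * 100:.1f}%")  # 13.0%
```

In a competitive market, that roughly 13 percent of cost headroom is what eventually shows up as lower prices, which is the transmission channel the disinflation thesis relies on.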

| Institution | AI growth uplift (medium-term) | 2026 inflation forecast | Key caveat |
| --- | --- | --- | --- |
| IMF (Jan 2026) | +0.1–0.8 pp/year | 3.8% | Adoption speed uncertain |
| IMF (Apr 2026) | Upside risk | 4.4% (conflict-driven) | Geopolitical shocks dominate near-term |
| Northern Trust CMA 2026 | Significant, decade-long | ~3% (US) | Near-term capex inflationary |
| OECD AI Papers 2026 | Variable by AI readiness | — | EME gaps constrain diffusion |
| BIS WP 1321 (2025) | Positive short-run impact | — | Labor market disruption risk |

The uncomfortable counterarguments

Now for the cold water. The hyperscalers — Alphabet, Microsoft, Amazon, Meta — are expected to spend upwards of $600 billion on data center capital expenditure in 2026 alone, according to Northern Trust’s own analysis. That is $600 billion of demand competing for semiconductors, specialized labor, land, electricity infrastructure, and cooling systems. In the near term, this is not disinflationary. It is, by any honest accounting, inflationary. It bids up the price of every input that AI infrastructure requires.

Energy is the most acute example. Northern Trust’s own economists have noted that data centers are expected to account for 20 percent of the increase in global electricity usage through 2030. The IMF’s recent research put it plainly: energy bottlenecks “could delay AI diffusion, anchor a higher level of core inflation, and generate local pricing pressures” in grid-constrained regions. This is not a theoretical risk. It is a live constraint in the US, the UK, Ireland, Singapore, and across northern Europe, where grid capacity has become a hard ceiling on data center expansion.

There is also the measurement problem — and it is a serious one. As the IMF’s own Finance & Development noted in its March 2026 issue, GDP accounting simultaneously overstates AI’s immediate contribution (by counting massive capital outlays as output) while understating its broader economic impact (by missing productivity spillovers that do not show up in standard national accounts). This is precisely the statistical paradox that masked the early productivity gains of the 1990s IT revolution — and it cuts in both directions for policymakers. If AI is quietly raising potential output, the economy may be running cooler than headline data implies. If the infrastructure surge is instead stoking a new floor for energy and construction costs, central banks may be tightening into a real supply shock.

The IMF’s chief economist Pierre-Olivier Gourinchas put the dilemma with characteristic precision: the AI boom could lift global growth, but it also “poses risks for heightened inflation if it continues at its breakneck pace.” That is the paradox in miniature — the same technology that promises to lower prices over time is currently consuming enormous resources to build itself.

The geopolitical dimension: who wins, who lags, and who is locked out

The disinflationary thesis is not uniformly distributed across the global economy, and this is where the Northern Trust framing risks glossing over structural inequality. Advanced economies — the US, Japan, Australia, South Korea — are positioned to capture the productivity upside first. Their firms are adopting, their labor markets are adapting, and their capital markets are pricing in the gains. Northern Trust’s own forecasts identify the US, Japan, and Australia as likely leaders in equity returns over the next decade, precisely because of AI-driven productivity.

Europe sits in a more ambiguous position. The continent is not at the forefront of AI model development, and Northern Trust acknowledges it explicitly in its CMA 2026. The region offers a healthy dividend yield and attractive valuations — but if AI productivity is the driver of the next decade’s returns, Europe’s relative lag in AI infrastructure and frontier model development is a structural disadvantage, not a cyclical one. The ECB faces its own version of the monetary policy puzzle: if AI-driven disinflation arrives later and slower in Europe than in the US, it changes the rate path, the currency dynamics, and the comparative fiscal math.

Emerging markets face the starkest challenge. The IMF’s analysis of AI in developing economies is clear: AI preparedness — digital infrastructure, human capital, institutional capacity — is the binding constraint on whether productivity gains materialize or get captured entirely by technology importers. Many emerging economies are primarily consumers of AI built elsewhere. The disinflationary benefits they receive are mediated through imports; the inflationary effects of AI-driven energy demand and semiconductor scarcity are borne locally. The net result, without deliberate policy intervention, is a widening productivity gap rather than a convergence story.

China deserves a separate paragraph. Its AI investment is substantial and accelerating, even under the constraints of US semiconductor export controls. The China-US AI race is not merely a geopolitical contest — it is a race to determine which economy gets to define and monetize the next general-purpose technology. Beijing’s capacity to deploy AI at scale across manufacturing, logistics, and services could generate its own disinflationary dynamic, although its ability to export that technology — and the disinflation it carries — is constrained by the very geopolitical tensions that are simultaneously driving energy and defense inflation.

What central banks should actually do

The honest answer is: proceed carefully, communicate transparently, and resist the temptation to read AI’s structural effect through the noise of its near-term capex cycle. The IMF’s April 2026 World Economic Outlook makes the right call when it urges central banks to guard against “prolonged supply shocks destabilising inflation expectations” while reserving the right to “look through negative supply shocks” where inflation expectations remain anchored.

That is the narrow path. If AI is genuinely raising potential output, then central banks that tighten aggressively in response to near-term energy and infrastructure inflation are making a classic policy error: fighting tomorrow’s economy with yesterday’s models. The 1990s analogy is instructive again — the Federal Reserve’s willingness to allow growth to run above conventional estimates of potential, on the grounds that productivity was accelerating, helped produce the longest peacetime expansion in American history.

But the reverse error is equally dangerous. If the AI productivity jackpot takes longer to arrive than Northern Trust and its peers anticipate — and Daron Acemoglu’s careful 2025 work in Economic Policy gives serious reason for that caution — then central banks that ease prematurely, trusting in a disinflationary future that is still several years away, risk entrenching the very inflation they spent the early 2020s battling back.

The IMF is right to treat AI as what it called in its April 2026 research note “a macro-critical transition rather than a standard technology shock.” Human decisions — by managers, workers, regulators, and investors — will shape the pace of adoption, the distribution of gains, and the political sustainability of the disruption. Those decisions are not made yet. Which means the data, for now, is genuinely ambiguous.

The verdict: right thesis, wrong timeline

Northern Trust is probably correct that AI will be massively disinflationary. The logic is sound, the historical analogies are supportive, and the scale of investment being made is simply too large to yield no productivity dividend. The question is not whether, but when — and the “when” matters enormously for portfolio construction, monetary policy, and fiscal planning.

The near-term picture, stripped of AI optimism, is one of elevated global inflation shaped by geopolitical conflict, persistent services price stickiness, and a capex boom that is consuming rather than producing cheap goods. The medium-term picture, contingent on adoption rates and diffusion across the global economy, is one where AI-driven productivity could deliver a genuine and sustained disinflationary impulse — the kind that would allow central banks to run looser for longer, equity multiples to expand sustainably, and real wages to recover.

The investor who misidentifies the timeline — and treats the medium-term story as immediate reality — will find themselves long duration in a world where rates stay higher than expected, and long AI infrastructure capex in a world where the ROI question remains, as Northern Trust itself acknowledged in February, one of “many more questions than answers.”

The honest macro position, as of April 2026, is this: Northern Trust is pointing in the right direction. But they may be holding the map upside down with respect to the calendar. For investors, policymakers, and strategists, the discipline required is not deciding whether AI will be disinflationary — it will — but calibrating, with intellectual humility, exactly how long the world will have to wait before the price deflator actually arrives.


OPINION|When the Treasury Panics, Listen: Anthropic’s Mythos and the AI Threat Hiding Inside Your Bank

The most consequential financial-security meeting of 2026 happened Tuesday. Almost nobody was talking about it.

There is a particular quality to urgency in Washington — a calibrated, deliberate kind, stripped of drama precisely because the stakes are too high for theater. When Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell jointly summon the chiefs of America’s largest banks to a private session on a weekday morning, they are not performing concern. They are managing it.

That is what happened on Tuesday, April 8, 2026, in the marbled corridors of Treasury headquarters on Pennsylvania Avenue. Bessent and Powell assembled a group of Wall Street leaders to make sure banks are aware of possible future risks raised by Anthropic’s Mythos model and potential similar systems, and are taking precautions to defend their systems (Bloomberg). The CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs were present. JPMorgan’s Jamie Dimon was invited but unable to attend (AOL). The Treasury declined to comment. The Fed declined to comment. Anthropic had no immediate comment.

In Washington, silence of that particular texture is its own form of communication.

The Model That Spooked the Regulators

To understand why two of America’s most powerful financial stewards convened an emergency summit with the chiefs of institutions collectively managing trillions in assets, you need to understand what Anthropic’s Claude Mythos Preview actually does — and why it is genuinely different from the parade of large language models that have cycled through headlines since 2022.

Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company said the model is capable of identifying and exploiting weaknesses across “every major operating system and every major web browser” (RTÉ). Read that sentence again. Every major operating system. Every major web browser. This is not a chatbot that occasionally hallucinates. This is an autonomous vulnerability-hunting engine with the precision of an elite red team and the speed of software.

Unlike typical consumer-facing AI tools, Mythos is geared toward cybersecurity software engineering tasks. Its specialty is identifying critical software vulnerabilities and bugs, but it can also assemble sophisticated exploits (CoinDesk). The distinction matters enormously. Most AI models are generative — they produce text, images, code. Mythos is analytical and adversarial, capable of scanning codebases, identifying failure points invisible to human auditors, and constructing the exploits that could weaponize those failures. In the hands of a sophisticated actor — a state-sponsored hacking collective, a ransomware syndicate, a rogue insider — this capability is not a cybersecurity tool. It is a cybersecurity threat.

This marked the first time Anthropic had limited the launch of a new model (Investing.com). That fact alone should arrest attention. A company whose business model depends on broad adoption and API revenue made the deliberate, commercially costly decision to gate access. That restraint — unusual in a sector that tends to race toward release — signals something about how seriously Anthropic’s own researchers regard what they have built.

Project Glasswing: An Experiment in Controlled Power

Access to Mythos will be limited to about 40 technology companies, including Microsoft and Google, and Anthropic has been in ongoing talks with the U.S. government about the model’s capabilities (AOL). This restricted release program, referred to internally as Project Glasswing, is a deliberate inversion of how AI has historically been deployed: rather than releasing broadly and patching later, Anthropic gave dominant platform holders a head start — not to monetize first, but to defend first. Anthropic released the model to a select group of partners, including Amazon, Apple, and Microsoft, to give them a head start on securing vulnerabilities (Investing.com).

It is a genuinely novel approach, and one that deserves more credit than it will likely receive. The logic is sound: if a model can identify zero-day vulnerabilities at machine speed, the most responsible action is to arm defenders before the broader landscape of threat actors can replicate or steal the capability. But Glasswing also exposes a governance gap so wide you could park an aircraft carrier in it.

Who audits the 40 companies with access? What safeguards prevent Mythos from being fine-tuned, transferred, or reverse-engineered? If a Glasswing participant suffers a breach — and given that these are themselves high-value targets, the probability is non-trivial — what is the liability chain? What is the protocol? The answers to these questions do not exist in any regulatory framework currently operative in the United States, the European Union, or anywhere else.

The Systemic Risk Nobody Has Priced

The meeting at Treasury was not primarily about Anthropic. It was about what Anthropic represents: the arrival of AI capabilities that move faster than the regulatory, legal, and institutional machinery designed to contain them.

Consider the financial system’s exposure. Modern banking infrastructure is built on decades of accumulated code — legacy COBOL systems at regional lenders, middleware connecting trading platforms to clearing houses, authentication layers protecting retail deposits. Much of this code has never been audited by a sophisticated adversary because auditing at scale was prohibitively expensive. Mythos eliminates that constraint. A well-resourced actor with access to comparable capability could, in principle, systematically map the attack surface of an entire national banking system in the time it currently takes a human security team to review a single subsystem.

The episode highlights a fundamental change in how regulators are framing AI risk — not merely as a technological challenge, but as a potential catalyst for systemic events. This has already raised red flags in crypto, where experts worry that Mythos’s ability to discover and exploit zero-day vulnerabilities in real time at low cost poses a risk to DeFi infrastructure (CoinDesk).

The systemic risk framing is the right one — and it is the framing that explains why Powell was in that room. The Federal Reserve’s mandate is financial stability. Historically, stability threats have come from credit cycles, liquidity crunches, and contagion. They are now coming from code. A successful AI-enabled attack on a major custodial bank — one that compromised transaction integrity, corrupted ledger data, or triggered a cascade of failed settlement — would represent a category of financial crisis that no existing playbook addresses. The bazooka of emergency liquidity provision is not particularly useful when the crisis is epistemic rather than financial: when the question is not whether there is enough money, but whether the numbers can be trusted at all.

Anthropic vs. the Pentagon: The Contradiction at the Heart of AI Policy

There is a peculiar irony shadowing this episode. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic’s request to pause the Pentagon’s designation (Bloomberg Law).

Anthropic proactively briefed senior U.S. government officials and key industry stakeholders on Mythos’s capabilities (RTÉ) — engaging responsibly with the national security community — even as one branch of that same government has labeled the company a security liability. The left hand of the U.S. government calls in Anthropic’s most advanced model to warn bankers about cyber risk; the right hand designates its maker a supply-chain threat. This is not incoherence. It is the natural consequence of applying 20th-century institutional categories to 21st-century technology companies that are simultaneously strategic assets, potential vulnerabilities, and independent actors with their own governance philosophies.

The contradiction will not resolve itself. It requires a policy architecture that does not currently exist — one that can hold together the dual realities that Anthropic’s capabilities are a genuine national asset and that Anthropic’s capabilities require genuine national oversight. Neither a blanket clearance nor a blanket designation captures that complexity.

What Bessent and Powell Actually Did — and What It Implies

| What Happened | What It Means |
| --- | --- |
| Joint Bessent-Powell convening | AI cyber risk is now a financial stability issue, not just a tech policy issue |
| Bank CEOs summoned mid-week | Speed of response signals real urgency, not regulatory theater |
| Mythos limited to ~40 companies | Anthropic is self-governing in the absence of formal governance frameworks |
| Pentagon supply-chain designation | Executive branch is fractured in its AI risk assessment |
| No public statement from Treasury, Fed, or banks | The regulatory playbook does not yet exist |

The convening itself was a significant signal. Bessent and Powell do not share a conference room casually. The joint appearance invested the meeting with the authority of both fiscal and monetary sovereign — the message being that AI cyber risk is no longer a niche technology-sector concern but a macro-prudential one. Banks should be pricing this into their operational risk frameworks. Insurers will follow. Rating agencies will not be far behind.

But signals, however weighty, are not architecture. The meeting produced no public guidance, no regulatory proposal, no framework for how banks should report, manage, or disclose AI-enabled cyber exposures. The CEOs who left Treasury on Tuesday left with warnings — and no rulebook.

The Governance Gap and How to Begin Closing It

The Mythos episode crystallizes three failures that policymakers now have no excuse for ignoring.

First, the pre-release consultation gap. Anthropic did the right thing in briefing U.S. officials before releasing Mythos. But that consultation was informal, voluntary, and ad hoc. The EU AI Act’s tiered risk framework is imperfect, but it at least establishes mandatory pre-market assessment for high-risk systems. The United States has no equivalent. A model capable of autonomously discovering and exploiting zero-days across every major OS and browser is, by any reasonable definition, a high-risk system. Its release should trigger a formal, structured national security review — not a phone call.

Second, the systemic-risk classification vacuum. The Fed can designate non-bank financial institutions as systemically important. It cannot currently designate AI models as systemically risky. That gap is now visible and consequential. What is needed is not a new agency but a clear cross-agency mandate — Treasury, CISA, the Fed, the OCC — with authority to classify certain AI capabilities as requiring coordinated disclosure, pre-release review, and sector-specific defensive preparation.

Third, the liability architecture. If a bank suffers losses traceable to an AI-enabled attack using capabilities derived from or analogous to a commercially released model, who bears what responsibility? The current answer — whatever tort law eventually produces — is wholly inadequate for systemic risks. Liability frameworks that can price and allocate AI-era cyber risk are not a luxury. They are a precondition for insurability and, ultimately, for financial stability.

A New Era of Risk — and Responsibility

There is a version of this story that ends badly: a race between capability development and governance in which capability wins by a decisive margin, and the first major AI-enabled financial system attack comes before any of the above frameworks exist. That version is not inevitable, but preventing it requires active work.

The Tuesday meeting at Treasury was, in its way, a hopeful sign. It suggests that the United States’ most senior financial authorities understand, at least viscerally, that the risk is real and that the clock is running. It suggests that some version of public-private coordination is possible, even in a regulatory environment that remains deeply fragmented.

Anthropic has previously disclosed that it consulted with U.S. officials ahead of Mythos’ release regarding both its defensive and offensive cyber capabilities. That consultation should become a standard, not an anomaly. The release of any AI system with demonstrated offensive cyber capabilities — the ability to identify and exploit zero-days at scale — should automatically trigger a mandatory interagency review, sectoral briefings for affected industries, and a public risk disclosure, however carefully worded.

What Bessent and Powell did on Tuesday was, in the truest sense, firefighting. The fire is real. But what the financial system needs is not better firefighters. It needs buildings that are harder to burn.

The Mythos moment is a clarifying one. It tells us, with unusual precision, that the era of AI as a productivity story is over. The era of AI as a security story — a national security story, a financial security story, a systemic stability story — has arrived. Policymakers who treat it otherwise are not being optimistic. They are being negligent.



The Private Firms Powering China’s Military AI Push


China’s private firms are winning its military AI bids — and Washington doesn’t seem to grasp the implications.

In February 2026, a routine penalty notice appeared on the People’s Liberation Army’s procurement platform. It named Shanxi 100 Trust Information Technology — a 266-person IT company based in Taiyuan, in China’s coal-scarred heartland — and barred it from all military procurement across every service branch for one year. The infraction was bid fraud: the firm had submitted falsified materials to win a contract. In the labyrinthine world of PLA procurement, such violations are not uncommon.

What was uncommon was the company itself.

As a Jamestown Foundation analysis identified, 100 Trust is the sole wholly privately owned firm operating inside China’s xinchuang (信创) domestic IT innovation framework — a program originally designed to replace foreign technology in sensitive government systems. Despite its modest headcount, the firm holds classified-project clearance and had won some of the PLA’s largest contracts to integrate DeepSeek, China’s breakout open-weight AI model, into military command systems. Its products had reportedly been demonstrated to Xi Jinping himself. And yet, when the opportunity arose to inflate its credentials, someone at 100 Trust apparently couldn’t resist.

The penalty notice tells us almost everything we need to know about China’s military AI push in 2026 — both its ambition and its contradictions. It tells us that Chinese private firms are winning military AI bids once reserved for state giants. It tells us that the structural conditions of Beijing’s civil-military fusion policy have made this outcome not accidental but inevitable. And it tells us that Washington, still operating on a mental model of “China Inc.” — a monolithic, state-directed industrial juggernaut — is watching the wrong companies.

The Data Is Unambiguous: Private Is the New Defense

The anecdote of Shanxi 100 Trust is not an outlier. It is the leading edge of a statistical pattern that, once you see it, is impossible to unsee.

In a landmark September 2025 study, Georgetown University’s Center for Security and Emerging Technology (CSET) analyzed 2,857 AI-related defense contract award notices published by the PLA between January 2023 and December 2024. The finding that should have set off alarms in every national security directorate from Langley to the Pentagon: of the 338 entities that won AI-related PLA contracts, close to three-quarters were nontraditional vendors (NTVs) — firms with no self-reported state ownership ties. These NTVs collectively won 764 contracts, more than any other category. Two-thirds of them were founded after 2010.
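The CSET headline figures imply a striking concentration pattern. As a quick back-of-the-envelope pass (the per-firm average below is my own arithmetic from the numbers reported above, not a figure CSET publishes):

```python
# CSET headline figures as reported above
total_entities = 338    # entities that won AI-related PLA contracts
ntv_share = 0.75        # "close to three-quarters" were nontraditional vendors
ntv_contracts = 764     # contracts won collectively by NTVs

ntv_count = round(total_entities * ntv_share)   # roughly 254 private-sector winners
avg_contracts = ntv_contracts / ntv_count       # about 3 contracts per NTV on average

print(ntv_count, round(avg_contracts, 1))
```

Roughly 250 private firms averaging about three PLA AI contracts apiece is the signature of a broad supplier base, not a handful of designated champions — which is exactly what makes entity-by-entity tracking so difficult.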

These are not shadowy front companies. They are nimble, technically sophisticated private firms that market themselves explicitly on dual-use capability — civilian agility deployed for military ends. They are the companies winning PLA AI procurement contracts that, by any conventional Washington risk framework, should not exist.

The legacy state-owned defense champions — China Electronics Technology Group (CETC), China Aerospace Science and Technology Corporation (CASC), NORINCO — still lead in sheer contract volume among top-tier entities. But the growth is concentrated in the private sector. The civil-military fusion strategy that Xi Jinping has championed for over a decade is, in the AI domain at least, delivering something its architects may not have fully anticipated: a market in which lean private operators consistently outrun the bureaucratic lumbering of the state-owned defense-industrial complex.

The DeepSeek Accelerant

No single development has turbocharged China’s military AI push more dramatically than DeepSeek’s January 2025 release of its R1 reasoning model as an open-weight system — meaning any entity, including the PLA and its contractor ecosystem, could download, modify, and deploy it without restriction.

The Jamestown Foundation, tracking hundreds of DeepSeek-specific PLA procurement tenders, found the same structural pattern: private companies, not SOEs, won a majority of contracts to build DeepSeek-integrated tools for the PLA. The Jamestown analysts note that this likely reflects private firms’ superior capacity to respond to rapidly shifting market dynamics — a competitive edge that bureaucratic SOEs, with their elongated procurement relationships and political dependencies, simply cannot match.

The capabilities being built are not incremental. Researchers at Xi’an Technological University demonstrated a DeepSeek-powered assessment system that processed 10,000 battlefield scenarios in 48 seconds — a task they estimated would require human military planners approximately 48 hours. The PLA’s Central Theatre Command (responsible for defending Beijing) has used DeepSeek in military hospital settings and personnel management. The Nanjing National Defense Mobilization Office has issued guidance documents on deploying it for emergency evacuation planning. State media outlet Guangming Daily has described DeepSeek as “playing an increasingly crucial role in the military intelligentization process.”
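The compression claimed in the Xi’an demonstration is easier to grasp as a ratio (my own arithmetic from the figures reported above):

```python
# Figures reported for the Xi'an Technological University demonstration
scenarios = 10_000
machine_seconds = 48    # DeepSeek-powered assessment system
human_hours = 48        # estimated time for human military planners

speedup = (human_hours * 3600) / machine_seconds      # 3600.0: a 3,600x compression
per_scenario_ms = machine_seconds * 1000 / scenarios  # 4.8 ms per battlefield scenario

print(speedup, per_scenario_ms)
```

A 3,600-fold compression — under five milliseconds per scenario — is the kind of figure that turns AI-enabled planning from a staff augmentation tool into a qualitatively different operational tempo, assuming the demonstration holds up outside a lab setting.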

The most revealing data point: Norinco, China’s enormous state-owned weapons manufacturer, unveiled the P60 autonomous combat-support vehicle in February 2026 — explicitly powered by DeepSeek. But the integration contracts enabling such deployments across the PLA’s command architecture are being won by private firms from Taiyuan to Hefei, not by Norinco’s in-house engineers.

iFlytek Digital and the Art of Corporate Camouflage

One company illuminates the structural logic with particular clarity: iFlytek Digital, the top-awarded nontraditional vendor in CSET’s dataset, which won 20 contracts in 2023 and 2024 alone, including one for the development of AI-enabled decision support systems and translation software for the PLA. As CSET’s full report documents, iFlytek Digital has close ties to its parent company iFlytek — a speech recognition and natural language processing champion that helped build China’s mass automated voice surveillance infrastructure and played a documented role in the CCP’s surveillance programs in Xinjiang and Tibet. iFlytek was placed on the U.S. government’s Entity List in 2019.


But iFlytek Digital — which became formally independent of its parent in 2021, though its ultimate beneficial owners remained iFlytek executives — operates in a regulatory gray zone that the Entity List framework was never designed to address. This is not an accident. It is a deliberate structural feature: by creating arms-length subsidiaries, spinning off divisions, or establishing new entities that technically lack “state-reported ownership ties,” Chinese tech companies can maintain operational separation from sanctioned entities while preserving functional alignment with them.

For Washington, this matters enormously. The U.S. government’s primary tools — the Commerce Department’s Entity List, the Pentagon’s 1260H “Chinese military company” designations, and the Treasury’s investment restrictions — are built around the premise of identifying specific legal entities. When the PLA’s most consequential AI suppliers are structurally designed to be nontraditional, non-state-affiliated, and technically new, the entity-based framework becomes a sieve. You can list the parent; the subsidiary wins the contract.

The Top Private Winners: A Structural Snapshot

Based on CSET, Jamestown Foundation, and open-source procurement data, the following entities represent the emerging private tier of China’s military AI supplier ecosystem:

  • Shanxi 100 Trust Information Technology — xinchuang framework, DeepSeek integration contracts, classified-project clearance; 266 employees.
  • iFlytek Digital — NLP, translation, AI decision support; 20 PLA contracts in two years; arms-length separation from sanctioned iFlytek parent.
  • PIESAT — Satellite and geospatial analytics; delivering combat simulation platforms and automatic target recognition for the PLA; subsidiaries in Australia, Denmark, Singapore, Malaysia.
  • Sichuan Tengden — Drone manufacturer; produced autonomous systems deployed by the PLA on missions near Japan and Taiwan.
  • DeepSeek (Hangzhou High-Flyer AI) — Open-weight model appearing in 150+ PLA procurement records; U.S. lawmakers have requested its Pentagon designation as a Chinese military company.

What unites this cohort is not state ownership but structural alignment: dependence on state-controlled compute infrastructure, technical agility that SOEs lack, and an incentive architecture that rewards civil-military dual-use positioning.

The Export Control Paradox

Here is the geopolitical irony that Washington has not fully digested: U.S. export controls on advanced semiconductors — Nvidia A100s, H100s, and their successors — were designed to impede China’s military AI development. In the narrow technical sense, they impose real friction. But in the strategic sense, they have produced a second-order effect that cuts against their intended purpose.

By restricting access to Western computing hardware, the Biden and Trump administrations have deepened Chinese private firms’ dependence on state-controlled domestic alternatives — primarily Huawei’s Ascend AI chips and Kunpeng processors. The firms now winning PLA AI contracts are marketing themselves explicitly on Huawei Ascend stacks, partly because of U.S. export controls. Restrictions that force private firms to rely on state-favored compute simultaneously deepen those firms’ incentive to demonstrate loyalty through military work. The export control paradox: the policy meant to widen the capability gap may be accelerating the fusion between private innovation and PLA procurement.

A separate paradox is operational: DeepSeek’s R1 is open-weight. The Export Administration Regulations have no jurisdiction over Chinese-origin technology being used by Chinese military entities. As one former national security official noted in open-source analysis, “you can’t export-control a model that’s already been released.” The horse left the barn in January 2025.

Meanwhile, the February 2026 CSET report on China’s Military AI Wish List — drawing on over 9,000 unclassified PLA RFPs from 2023 and 2024 — documents that the PLA is pursuing AI-enabled capabilities across all domains simultaneously: decision support systems, autonomous drone swarms, deepfake generation for cognitive warfare, seaborne vessel tracking, cyberattack detection, and AI-enabled encryption stress-testing. The breadth alone should recalibrate any analyst who still views China’s military AI push as aspirational rather than operational.

Why Private Firms Are Outcompeting SOEs

Two structural conditions explain why Chinese private tech military contracts are growing at the expense of SOE incumbents — and why this trend will deepen.

First: speed. PLA AI procurement notices in the DeepSeek era feature compressed tender timelines, frequently under six months from solicitation to award. State-owned defense giants, with their multi-layered bureaucratic approval chains and established procurement relationships, are architecturally incapable of this tempo. A 266-person firm from Taiyuan, by contrast, can pivot its entire technical stack in weeks. The CSET data confirms that the majority of NTVs were founded relatively recently; they were built for agile deployment cycles, not Cold War-era production runs.

Second: the PLA’s own institutional crisis. Xi Jinping’s sweeping anti-corruption purge of the PLA Rocket Force leadership in 2023, and its subsequent extension into the Equipment Development Department and broader defense industrial apparatus, has hollowed out precisely the procurement networks on which SOE defense contractors depended. As Foreign Affairs documented in its March 2026 analysis, the PLA is “rapidly prototyping and experimenting” rather than engaging in traditional long-cycle procurement. In an environment where established bureaucratic relationships carry less weight than deployment speed and technical competence, private firms hold a structural advantage they did not engineer and may not fully appreciate.

The result, paradoxically, is that Xi’s anti-corruption campaign — designed to strengthen the PLA — may be reinforcing private firms’ dominance in its most strategically important procurement category.


The “China Inc.” Fallacy and Why Washington Is Flying Blind

For decades, Washington’s China threat framework has been organized around a relatively simple mental model: the Chinese state directs; Chinese companies obey. Export controls target state entities and their known subsidiaries. Sanctions lists name the champions. Defense authorizations restrict contracts with designated Chinese military companies.

This framework was always an approximation. It is now actively misleading.

The U.S. policy apparatus is structured to track the companies it already knows — CETC, CASC, Huawei, DJI. But as the CSET data on civil-military fusion makes clear, three-quarters of PLA AI contracts are going to entities that do not self-report state ownership ties. Most of these firms are not on any U.S. government list. Many operate in countries allied with the United States — PIESAT, for instance, claimed subsidiaries in Australia, Denmark, Singapore, and Malaysia as of 2023, as Foreign Policy reported.

The December 2025 letter from House Intelligence Committee Chairman Rick Crawford, House Select Committee on China Chairman John Moolenaar, and Senator Rick Scott to the Pentagon requesting that DeepSeek, Unitree Robotics, and thirteen other companies be designated as Chinese military companies is a belated, if welcome, recognition that the designations framework has fallen catastrophically behind the procurement reality. Designating DeepSeek in late 2025 — after its models had already been open-sourced, downloaded millions of times globally, and integrated into PLA command systems — is roughly analogous to sanctioning gunpowder.

The U.S. policy gap on China’s private-sector military AI is not a failure of intelligence. It is a failure of analytical framework. The question Washington keeps asking is: “Which Chinese companies are military?” The question it should be asking is: “Given China’s military-civil fusion architecture, which Chinese private technology companies aren’t potentially military?”

Implications for Washington: Three Uncomfortable Truths

The implications for Washington of China’s military AI bids being won by private firms rather than state giants are neither abstract nor distant. They are operational, legal, and strategic.

First: the Entity List model is inadequate for the private-sector era. Effective technology controls now require tracking corporate structures — beneficial ownership, subsidiary relationships, executive continuity across spinoffs. The 100 Trust case demonstrates that a company can hold classified-project clearance, win the PLA’s largest DeepSeek integration contracts, and have demonstrated its products to the head of state while remaining, on paper, a 266-person private IT firm from Taiyuan that no U.S. government list has ever named. This requires a fundamental rethinking of how the Bureau of Industry and Security, Treasury’s OFAC, and the Pentagon’s designations process share data and coordinate designations.

Second: open-weight AI has broken the export control paradigm for foundation models. The U.S. framework for restricting technology transfer was designed for hardware and proprietary software — objects that can be tracked, licensed, and withheld. An open-weight model that any PLA researcher can fine-tune for battlefield scenario analysis on a domestic Huawei Ascend cluster requires a fundamentally different policy approach: one focused less on restricting Chinese access to existing models and more on maintaining the frontier gap through sustained domestic R&D investment. The 2026 National Defense Authorization Act took modest steps in this direction, but the pace of reform remains slower than the pace of PLA integration.

Third: the procurement volume is not the capability measure that matters. The 100 Trust penalty — a private firm with Xi-level visibility submitting falsified procurement documents — is evidence of a supply-demand gap in China’s military AI ecosystem. Private firms winning contracts they cannot fully execute, racing deployment timelines that exceed their genuine capabilities, is a signal of fragility as much as strength. Washington should be studying not just how many AI contracts the PLA is awarding to private firms, but how many of those contracts are producing operationally deployed capabilities versus prototype demonstrations or outright fraud. The answer, based on available open-source evidence, is considerably more ambiguous than Beijing’s official narrative suggests.

None of this diminishes the strategic imperative. As CSET’s February 2026 Military AI Wish List study documents, the breadth and speed of PLA AI experimentation — across autonomous systems, cognitive warfare, C5ISRT decision support, and space and maritime domain awareness — represents a genuine challenge to U.S. military advantages that is accelerating, not plateauing. The Foreign Affairs analysis published this month warns that “China is positioning itself to quickly and effectively adopt and deploy operational military AI, thus keeping the gap between the U.S. and Chinese militaries narrow.”

The private firms powering China’s military AI push are not a curiosity. They are the mechanism through which Beijing’s most consequential military modernization is being executed — and they are operating in a regulatory and analytical blind spot that Washington has not yet seriously resolved to close.


Citations Used

  1. “Center for Security and Emerging Technology (CSET) — Pulling Back the Curtain on China’s Military-Civil Fusion” — https://cset.georgetown.edu/publication/pulling-back-the-curtain-on-chinas-military-civil-fusion/
  2. “CSET full report (PDF)” — https://cset.georgetown.edu/wp-content/uploads/CSET-Pulling-Back-the-Curtain-on-Chinas-Military-Civil-Fusion.pdf
  3. “Jamestown Foundation — DeepSeek Use in PRC Military and Public Security Systems” — https://jamestown.org/program/deepseek-use-in-prc-military-and-public-security-systems/
  4. “CSET — China’s Military AI Wish List (February 2026)” — https://cset.georgetown.edu/publication/chinas-military-ai-wish-list/
  5. “Foreign Affairs — China’s AI Arsenal (March 2026)” — https://www.foreignaffairs.com/china/chinas-artificial-intelligence-arsenal
  6. “Foreign Policy — China: Under Xi, PLA Adopts More Civilian Tech” — https://foreignpolicy.com/2025/10/07/china-military-civil-fusion-defense-tech-us/
  7. “House Homeland Security Committee — Letter requesting Pentagon designations for DeepSeek et al.” — https://homeland.house.gov/2025/12/19/chairmen-garbarino-moolenaar-crawford-lead-letter-asking-pentagon-to-list-deepseek-gotion-unitree-and-wuxi-as-chinese-military-companies/
  8. “RealClearDefense — DeepSeek: PLA’s Intelligentized Warfare” — https://www.realcleardefense.com/articles/2025/11/18/deepseek_plas_intelligentized_warfare_1148009.html
  9. “South China Morning Post — China’s growing civilian-defence AI ties” — https://www.scmp.com/news/china/military/article/3324727/chinas-growing-civilian-defence-ai-ties-will-challenge-us-report-says
  10. “FDD — China’s Military Reportedly Deploys DeepSeek AI for Non-Combat Duties” — https://www.fdd.org/analysis/policy_briefs/2025/03/27/chinas-military-reportedly-deploys-deepseek-ai-for-non-combat-duties/
  11. “CSET — China Is Using the Private Sector to Advance Military AI” — https://cset.georgetown.edu/article/china-is-using-the-private-sector-to-advance-military-ai/
  12. “The Diplomat — The Private Firms Powering China’s Military AI Push (March 2026)” — https://thediplomat.com/2026/03/the-private-firms-powering-chinas-military-ai-push

Copyright © 2019–2025, The Monitor. All Rights Reserved.
