Making Sense of the Citrini Debate

Citrini Research recently published a controversial report that rhetorically asked, “What if our AI bullishness continues to be right…and what if that’s actually bearish?”

Even though the authors qualified the report as “a scenario, not a prediction”, it ignited considerable debate among investors. Since the work is labeled as speculative fiction, I won’t address its points in detail, other than to note that it raises important questions about the interaction between AI-driven productivity and employment.

The market sold off after the report’s publication, but rebounded the next day. Software stocks, which had been under pressure from fears of AI-driven disruption, stabilized by the end of the week. Technology stocks have broken a rising relative uptrend and are testing a key relative support level.

A Net Benefit to Society?
Let’s start with the big-picture question: What are the social benefits of artificial intelligence?

An academic paper by Nobel laureate Daron Acemoglu and co-authors Dingwen Kong and Asuman Ozdaglar, “AI, Human Cognition and Knowledge Collapse”, asked whether AI is making us dumber. Here is the key quote from the paper’s abstract: “When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge vanishes ultimately, despite high-quality personalized advice.”
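
To make the tipping intuition concrete, here is a stylized toy simulation. It is emphatically not the paper’s actual model; every function name and parameter below is my own invention. The idea it illustrates: as long as the payoff of self-reliant effort beats the AI’s accuracy, agents keep exerting the effort that replenishes the shared knowledge stock; once accuracy crosses that threshold, effort stops and knowledge decays toward zero.

```python
# Stylized toy simulation of a "knowledge-collapse" tipping point.
# This is NOT the Acemoglu-Kong-Ozdaglar model -- just an illustration
# of how high-accuracy AI advice can crowd out the human effort that
# replenishes a shared knowledge stock. All parameters are invented.

def simulate(ai_accuracy, periods=200, k0=0.8,
             effort_cost=0.2, decay=0.05, replenish=0.08):
    """Evolve the general-knowledge stock K in [0, 1].

    Each period, agents exert learning effort only when the payoff of
    relying on their own knowledge-dependent judgment, K - effort_cost,
    beats simply following the AI's advice, ai_accuracy. Effort is what
    replenishes K; without it, K decays.
    """
    k = k0
    for _ in range(periods):
        effort = 1.0 if k - effort_cost > ai_accuracy else 0.0
        k = min(1.0, (1 - decay) * k + replenish * effort)
    return k

for q in (0.50, 0.55, 0.60, 0.65):
    print(f"AI accuracy {q:.2f} -> long-run knowledge stock {simulate(q):.2f}")
```

In this toy, accuracies of 0.50 and 0.55 leave knowledge fully sustained, while 0.60 and above tip the system into collapse, echoing the abstract’s threshold language.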

Former FT journalist Izabella Kaminska came to a similar conclusion in one of her posts: “Artificial intelligence risks dumb outcomes unless politicians act now”. She argued that “if AI cannot generate broader societal gains alongside productivity gains, it doesn’t justify the label ‘intelligence’ at all — it becomes self-cannibalising capital. A parasitical ‘anti-intelligence’ system.”

JPMorgan CEO Jamie Dimon addressed the audience at an AI investor event and expressed similar concerns:

What if, I think there are 2 million commercial truckers in the United States, and there are lots of other examples you can give. There’s a thought exercise, and you could push a button, eliminate all of them, and they make $120,000 on average. Save fuel, save lives, save time, a more efficient system, less disrupted highways, all that beautiful stuff. Would you do it if you put 2 million people on the street where even if there are jobs available, that next job is $25,000 a year, stocking shelves. I was saying, “That’s kind of really bad, kind of civilly, should we as society agree to that?” I don’t think so. I was talking about the business and government, and they should start thinking today, not when it happens, what would we do to deal with the [AI] issue? It’s got to be business and government.

Equally controversial is the recent standoff between the U.S. Department of War and Anthropic. Anthropic provides Claude to the DoW under two conditions. First, it cannot be used for the surveillance of U.S. citizens within the U.S. Second, autonomous targeting systems must include a human safeguard before an “attack” decision is made. Secretary of War Pete Hegseth has threatened Anthropic and wants the safeguards removed. Anthropic published a statement late last week rejecting the Pentagon’s demands.

Doesn’t Hegseth remember the Terminator movies, whose narrative is based on an AI named Skynet that, given unfettered control of U.S. defense systems, became self-aware, perceived humanity to be a threat, and tried to destroy it?

In a separate episode, TechSpot reported that Kenneth Payne at King’s College experimented with AI in war game simulations. In 95% of the scenarios, the AI resorted to the use of tactical nuclear weapons, a threshold that humans have strongly resisted crossing in the past.

Show Me the Money!
The other concern among AI skeptics, such as Michael Burry of “The Big Short” fame, is where the profits will come from.

Torsten Slok at Apollo pointed out that hyperscaler capital expenditures are estimated to be a massive $646 billion in 2026. How massive? That amount is “roughly equivalent to the size of GDP for Singapore, Sweden and Argentina”, and “more than the combined military spending of Germany, France, the U.K., Japan, Italy and Canada”. By comparison, “total U.S. bank loan growth in 2025 was around $700 billion” and “defense spending in 2025 was at $917 billion”.

Michael Burry recently asked, “When does all this data centre buildout actually end?” He contends that hyperscalers are using accounting tricks to boost their earnings. Specifically, GPU chips are depreciated on the income statement over five years, but NVIDIA produces a new generation of chips about every three years that renders the previous generation obsolete. Already, there are reports that H100 chips, which cost $40,000 new, are selling for $6,000 on eBay. The gap between the accounting depreciation rate and the economic life of GPU chips will eventually have to be reconciled, and the resulting restatement of hyperscaler earnings would present an enormous shock to market expectations.
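
To see the arithmetic behind this mismatch, consider a back-of-envelope sketch using the H100 figures cited above. The five-year straight-line schedule mirrors the depreciation policy the text describes; everything else is a simplifying assumption of mine.

```python
# Back-of-envelope: straight-line book value vs. observed resale value
# for a GPU, using the H100 figures cited above ($40,000 new, ~$6,000
# used). The five-year schedule mirrors the depreciation policy the
# text describes; the rest is a simplifying assumption.

PURCHASE_PRICE = 40_000   # H100 list price when new (per the text)
RESALE_AT_3YRS = 6_000    # reported eBay price roughly a generation later
BOOK_LIFE_YRS  = 5        # straight-line life used on income statements

for year in range(1, BOOK_LIFE_YRS + 1):
    book_value = PURCHASE_PRICE * max(0, BOOK_LIFE_YRS - year) / BOOK_LIFE_YRS
    print(f"Year {year}: book value ${book_value:,.0f}")

# At the ~3-year mark, the books still carry the chip at $16,000 while
# the market clears near $6,000 -- a $10,000-per-chip gap that, on this
# argument, understates depreciation expense and overstates earnings.
book_at_3 = PURCHASE_PRICE * (BOOK_LIFE_YRS - 3) / BOOK_LIFE_YRS
print(f"Book value at year 3: ${book_at_3:,.0f} vs. resale ~${RESALE_AT_3YRS:,.0f}")
```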

The enormous scale of hyperscaler capex is changing their business model from an asset-light, intellectual-property-heavy one to the more conventional asset-heavy, bricks-and-mortar model of old-economy companies. Burry argues that such a shift in capex will strain their finances and load their balance sheets with debt. Already, the market is starting to price in the increased default risk of technology companies.

Anxiety is also rising in private credit. JPMorgan CEO Jamie Dimon recently said that some of his rivals are doing “dumb things” in their lending practices, and credit market jitters are spreading.

AI and Productivity
Artificial intelligence boosters have trumpeted the potential productivity gains from AI adoption. A recent NBER working paper studied those effects. Researchers surveyed almost 6,000 CEOs, CFOs and other executives at firms that respond to various business outlook surveys in the U.S., U.K., Germany and Australia, and the vast majority see little impact from AI on their operations. Here is the abstract:

We present the first representative international data on firm-level AI use. We survey almost 6,000 CFOs, CEOs and executives from stratified firm samples across the US, UK, Germany and Australia. We find four key facts. First, around 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two thirds of top executives regularly use AI, their average use is only 1.5 hours a week, with one quarter reporting no AI use. Third, firms report little impact of AI over the last 3 years, with over 80% of firms reporting no impact on either employment or productivity. Fourth, firms predict sizable impacts over the next 3 years, forecasting AI will boost productivity by 1.4%, increase output by 0.8% and cut employment by 0.7%. We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. This contrast implies a sizable gap in expectations, with senior executives predicting reductions in employment from AI and employees predicting net job creation.

Adoption is growing, but productivity gains are not evident.

The debate over AI-driven productivity has implications for monetary policy. Doves like Fed Chair nominee Kevin Warsh highlight the likely productivity effects of AI adoption as a reason to lower interest rates, since productivity gains would boost non-inflationary growth. There is nonetheless considerable uncertainty about how AI might affect monetary policy. Former IMF chief economist Olivier Blanchard recently asked how AI will affect r*, the neutral rate of interest.
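
One textbook way to see why productivity matters for the neutral rate, offered here as a standard simplification rather than Blanchard’s specific analysis, is the Ramsey rule, under which r* rises with trend growth. The parameter values below are purely illustrative.

```python
# A textbook intuition for why AI-driven productivity growth matters
# for r*: the Ramsey rule, r* = rho + sigma * g. This is a standard
# simplification, not Blanchard's specific analysis; all parameter
# values are illustrative assumptions.

RHO   = 0.01  # households' rate of time preference (assumed)
SIGMA = 1.0   # inverse intertemporal elasticity of substitution (assumed)

def neutral_rate(g):
    """Ramsey-rule neutral real rate for trend per-capita growth g."""
    return RHO + SIGMA * g

for g in (0.015, 0.020, 0.025):  # pre-AI trend vs. hypothetical AI boosts
    print(f"Trend growth {g:.1%} -> implied r* of {neutral_rate(g):.1%}")
```

On this logic, faster AI-driven growth pushes r* up, which cuts against the dovish case; Lisa Cook’s remarks below show why the sign is ambiguous in practice.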

Fed Governor Lisa Cook addressed this issue in a recent speech:

In anticipation of future productivity gains, we already see soaring AI-related business investment in data centers and chips, despite interest rates broadly being elevated relative to levels over the past 20 years. With investment contributing to strong aggregate demand, it is possible that the current neutral rate is higher than before the pandemic. This could reverse when the AI productivity gains are more fully realized or if the labor market transition leads to a rise in income inequality, such that well-off consumers receive a larger share of income, which could lower the neutral rate, all else equal.

A Middle Ground
So where does that leave us? The Citrini report postulates a scenario where AI wipes out millions of jobs, while AI boosters assume widespread productivity gains without disruption.

The history of disruptive technology adoption indicates that outcomes are neither utopian nor apocalyptic. I therefore prefer to adopt a middle ground. Artificial intelligence is a very real technology, and its adoption will dramatically change society. There will be some degree of disruption to capital markets and return expectations.

The AI bubble never reached the excesses of the dot-com bubble, so the degree of price adjustment should be smaller. In all likelihood, its collapse will not cause a recession.

Nevertheless, investors need to consider how value-added is distributed along the value chain. During the build-out phase, hype and hope have pushed funding toward the builders of infrastructure, the hyperscalers, but as the technology matures and democratizes, value will accrue to other parts of the value chain. Consider the example of AOL, which charged for dial-up and email in much the same way that LLM providers charge for access. Today, email is ubiquitous, bundled with other services, and given away as a loss leader. On the other hand, I expect that AI companies will try to price their services based on their users’ switching costs, which can be inferred from social media and other data profiles.

4 thoughts on “Making Sense of the Citrini Debate”

  1. Some forces are very hard to resist, especially if they involve lower costs (aka increased productivity). Take for example the agricultural changes of the last 200 years where the number of people on farms went way down. Aside from the Amish and others like them, we don’t harvest wheat by hand. How can a farmer compete without modern technology? There are niche markets for sure, but in the aggregate what sells at the lowest price sells the most. That’s why most hamburger meat is not prepared at your local corner store any more. So businesses that don’t employ AI may not compete effectively and go out of business. Is the military going to say, “yeah we won’t use AI, we’ll leave that to the Russians or Chinese”? Not going to happen.
    There was a story I read in French class, 60 years ago. It could have been by de Maupassant (maybe I got the name wrong), but the gist of it was a miller decrying the use of motorized mills to grind wheat instead of the windmills that used the Mistral for power. Well, windmills are in paintings, but they are not grinding wheat.
    Whether we like it or not, AI is happening; use whatever analogy you wish: cats and bags, genies and bottles.
    At the individual level, AI will do to us what TV did. We used to be outside a lot and did things differently before TV; now we have couch potatoes. So with AI, some will use it to be more productive and learn, while others will have it do everything and never learn basic math.
    Is the world going to a better place? I dunno, but wherever it’s going, it’s going.

    1. Up till now, tech innovation has meant new sophisticated tools for us to control. But now AI is both the tool AND its own controller. Assuming AI and robots can fully replicate and/or surpass our intelligence and motor skills, humans won’t be needed. Even politicians won’t be needed and could be replaced by blockchain and AI, but they’ll probably keep scaremongering, using Skynet as an example, to stay relevant.

      1. I’ll make an analogy with flowers. You can make artificial flowers that look realistic, only they are not real flowers. When we talk about intelligence, we should include emotional intelligence. I suspect that AI could successfully pass the Turing test, but really it is just one big data/algorithmic compiler. We may not have to work like we used to, lose resilience, not be able to tie our shoes; who knows? But the day that robots have emotions is far off, in my opinion. The reason C-3PO and R2-D2 were cute was that they reflected human emotions. I suppose AI may mimic emotions like a mockingbird. Fear is something we “meat-based” intelligences learned over, say, a billion years in order to survive. It’s going to take a while to teach AI about emotions. Will some try? Of course they will…Pandora’s Box.
        Timeless story.

        1. AI can intuit real-world physics from videos alone, even if imperfectly. It’s not a stretch for it to intuit our emotional frameworks given enough data about our preferences in art, film, video games, etc. My personal opinion is that AI isn’t there yet and needs a few more innovations before it reaches AGI/ASI, namely in working, episodic, and other forms of memory.

          We in the investment and stock community love to talk about trends. So just as chip development tracked Moore’s law for decades, even though it took significant innovations and breakthroughs to sustain the trend, many today may risk underestimating the amount of innovation the industry can deliver to sustain the rate of growth in AI capabilities.
