Preface: Explaining our market timing models
We maintain several market timing models, each with differing time horizons. The "Ultimate Market Timing Model" is a long-term market timing model based on the research outlined in our post, Building the ultimate market timing model. This model tends to generate only a handful of signals each decade.
The Trend Asset Allocation Model is an asset allocation model that applies trend-following principles based on the inputs of global stock and commodity prices. This model has a shorter time horizon and tends to turn over about 4-6 times a year. The performance and full details of a model portfolio based on the out-of-sample signals of the Trend Model can be found here.
My inner trader uses a trading model, which is a blend of price momentum (is the Trend Model becoming more bullish, or bearish?) and overbought/oversold extremes (don't buy if the trend is overbought, and vice versa). Subscribers receive real-time alerts of model changes, and a hypothetical trading record of the email alerts is updated weekly here. The hypothetical trading record of the trading model, based on the real-time alerts that began in March 2016, is shown below.
The latest signals of each model are as follows:
- Ultimate market timing model: Sell equities (Last changed from “buy” on 26-Mar-2023)
- Trend Model signal: Neutral (Last changed from “bullish” on 17-Mar-2023)
- Trading model: Neutral (Last changed from “bearish” on 15-Jun-2023)
Update schedule: I generally update model readings on my site on weekends. I am also on Twitter at @humblestudent and on Mastodon at @humblestudent@toot.community.
Subscribers can access the latest signal in real time here.
AI mania
These days, it’s difficult to turn on financial TV without hearing mention of artificial intelligence and technology stocks. Indeed, popular AI-related plays like NVIDIA and C3.ai have been on a tear.
How sustainable is the move? Let’s examine the technical and fundamental underpinnings.
Time for a breather
The technical condition of the NASDAQ 100, which serves as a proxy for the AI and technology mania, looks extended in the short term. The relative performance of the NASDAQ 100 had been closely correlated to the 10-year Treasury yield, but a divergence has appeared (second panel). As well, relative breadth is showing signs of deterioration (bottom two panels). These are signs that tech stocks may be due for a breather.
Here is the good news. Technology leadership is evident net of the market cap effect. The top panel depicts the relative performance of large-cap technology to the S&P 500 (black line) and small-cap technology to the Russell 2000 (green line). Both are beating their respective benchmarks in a similar fashion. The bottom panel shows the relative performance of the Russell 2000 to the S&P 500 as an indication of the size effect (black line) and the relative performance of small- to large-cap technology (green line). Even though small-cap technology is outpacing the Russell 2000, the relative performance of small-cap to large-cap technology is similar to the overall market size effect. This is an indication of sector leadership resilience, net of the market cap effect.
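For readers who want to reproduce ratio charts like the ones described above, relative performance is simply one price series divided by another, rebased to a common starting value. The sketch below uses made-up index values, not the actual NASDAQ 100 or S&P 500 data behind these charts:

```python
def relative_performance(series, benchmark, base=100.0):
    """Ratio of `series` to `benchmark`, rebased so the first point equals `base`.

    A rising line means `series` is outperforming `benchmark`.
    """
    first_ratio = series[0] / benchmark[0]
    return [base * (s / b) / first_ratio for s, b in zip(series, benchmark)]

# Illustrative prices only, not real index data.
ndx = [15000, 15300, 15600, 16100]
spx = [4300, 4330, 4360, 4400]
print([round(x, 1) for x in relative_performance(ndx, spx)])
```

The same function applied to small-cap technology versus the Russell 2000, or the Russell 2000 versus the S&P 500, produces the other panels described above.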
However, the price momentum factor hasn’t performed well, which is a bit of a puzzle. The price momentum factor, which measures whether stocks that have outperformed continue to outperform, is usually dominant during market frenzies. The following chart shows the relative returns of different versions of momentum ETFs against the S&P 500, and none of them are showing signs of strength. One reader pointed out that the popular MTUM ETF rebalances its holdings once every six months, while FDMO (bottom panel) rebalances more frequently, every three months. Even then, the relative performance of FDMO can’t be described as exciting.
It’s possible that the AI and technology frenzy is only in its early stages and needs time to develop.
Fundamental opportunities and risks
No doubt, AI has the potential to be a radically disruptive technology, much like the internet was in the 1990s. Microsoft CEO Satya Nadella offered some perspective on how AI has transformed Microsoft’s workflows today:
So inside Microsoft, the means of production of software is changing. It’s a radical shift in the core workflow inside Microsoft and how we evangelize our output—and how it changes every school, every organization, and every household. A lot of knowledge work is drudgery, like email triage. Now, I don’t know how I would ever live without an AI copilot in my Outlook. Responding to an email is not just an English language composition, it can also be a customer support ticket. It interrogates my customer support system and brings back the relevant information. This moment is like when PCs first showed up at work. This feels like that to me, across the length and breadth of our products.
Notwithstanding any hype about “pie in the sky” technologies that could arrive in the future, the BofA Global Fund Manager Survey found respondents mostly believe the widespread adoption of AI in the next two years will boost profits.
What could stop the AI freight train in its tracks?
Soon after the release of ChatGPT, over 1,000 technology leaders and researchers signed an open letter calling for “a pause in giant AI experiments”. The letter warned that AI researchers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.” Signatories include luminaries such as Elon Musk and Apple co-founder Steve Wozniak. Soon after the open letter was published, the Association for the Advancement of Artificial Intelligence released its own letter warning of the risks of AI.
These warnings are reminiscent of Robert Oppenheimer’s warnings about the Bomb. Oppenheimer was a key figure in the Manhattan Project, which developed the atomic bomb, and he later voiced his regrets.
What are the risks of AI? The criticisms can be long and tortuous, but they fall into two categories.
The first is that AI is an extremely powerful technology, much like the Bomb. One risk is that a malevolent actor deploys it in a destructive way. One obvious use is employing pattern recognition and neural networks to create new pathogens for biological weapons. China already combines a vast surveillance network with facial recognition to keep tabs on Chinese residents. If you don’t consider government surveillance malevolent, consider the private sector. Even before the use of AI, the business model of Facebook, Google and Amazon was to know everything there is to know about you in order to sell you more things. The combination of AI pattern recognition and vast computing power makes that prospect even more intrusive. What if the data weren’t controlled by giant corporations, but by networks of unregulated data brokers who sell your information?
Here’s another example. Current versions of chatbots were trained on carefully curated data sets of vast size. AI researchers have grappled with the “data pollution” problem of what happens when bad training data alters chatbot results in unexpected ways. One early visible example of the “data pollution” problem was made evident when Microsoft released a chatbot but had to shut it down because users trained it to become a neo-Nazi.
The other risk is the fictional Skynet problem, known in academic circles as the “alignment problem”. The problem is that no one may be able to control an AI system that learns, because the objectives programmed into the system can have unintended consequences.
There are always a few bugs in the system. Consider the number of operating system updates you have seen from your software provider. Some were patches for simple bugs; others were responses to zero-day security holes. Nothing is perfect.
For investors, risks will become apparent once the lawyers get involved. There is a well-defined body of law on liability if a dog attacks someone and causes harm. But what happens if the “dog” is a self-learning AI neural network? Who bears the liability? Is it the “dog” owner? Is it the “dog” breeder or software provider? The “dog” trainer, or the people who trained the system?
These are all good questions, and the issues raised cry out for regulation. As the law catches up with these issues, the insurance industry will begin to price these risks, and the costs will become apparent in the deployment of these technologies. But that day is still several years in the future.
Another investor risk is a recession, when credit dries up. As an analogy, Bloomberg published an article, “Beyond Meat Wannabes Are Failing as Hype and Money Fade”, which detailed how a “shakeout in a once-hot sector is widening as funding dries up”. Just like AI, fake meat is a promising technology and industry, albeit on a smaller scale. Should we see a recession or credit squeeze, the cost of capital for unprofitable start-ups will rise to unsustainable levels, which could crater the promise of AI technology.
The week ahead
Looking to the week ahead, the stock market is facing further downside risk as signs of excessive bullishness are evident.
The S&P 500 reached an overbought extreme on the 5-week RSI and pulled back. Past instances of similar overbought readings have seen the market stall. Initial support is at 4320, and a secondary support zone can be found at about 4200. As well, the VIX Index fell to a multi-year low, which is a sign of complacency.
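For readers unfamiliar with the indicator, the RSI mentioned above is Wilder’s Relative Strength Index, which compares the average size of up-moves to down-moves over a lookback window. The sketch below uses a 5-period RSI on an illustrative series of weekly closes, not actual S&P 500 data:

```python
def rsi(closes, period=5):
    """Wilder's Relative Strength Index over `period` bars (0-100 scale)."""
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages, then apply Wilder's smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no down weeks in the window: maximum overbought reading
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# A run of steadily rising weekly closes pins the RSI at an overbought extreme.
weekly_closes = [4100, 4150, 4205, 4280, 4330, 4390, 4425, 4450]
print(rsi(weekly_closes))  # -> 100.0 (all gains, no losses in the window)
```

Readings above 70 are conventionally treated as overbought and below 30 as oversold, which is the sense in which the 5-week RSI above signaled an extreme.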
Sentiment readings are becoming a little giddy. The Citi Panic/Euphoria Model is in euphoric territory, at levels last seen at the February top.
Similarly, the AAII bull-bear spread, which measures individual investor sentiment, and the NAAIM Exposure Index, which measures the attitudes of RIAs who manage individual investor accounts, are elevated.
As well, the put/call ratio has reached levels seen at recent tops and shows no signs of fear, which is a worrisome sign.
Lastly, liquidity has been highly correlated with stock prices and the historical evidence shows that it is coincident or slightly leads the S&P 500. The latest reading shows a contraction in liquidity and a growing divergence between liquidity and stocks.
In conclusion, the adoption of AI promises to be highly disruptive and has the potential to improve the profitability of companies that adopt the technology. It faces regulatory hurdles and risk-pricing challenges as insurance companies learn to price AI risk, but those problems are a few years away. In the near term, the technology rally appears extended and may need a breather.
Tactically, the S&P 500 may face short-term headwinds, as sentiment readings are complacent while the market pulls back from an overbought condition.
Professional money managers must show A.I.-leading stocks in their mid-year June 30 client portfolios. That is called window dressing.
I once heard it said that a “bubble is what you get fired for not owning.”
With a two-day settlement, that means no selling of A.I. stocks and only buying until June 28. Like year-end tax-loss selling, the effect could peak earlier if they have filled their portfolios sooner.
Sorry, I should have concluded by saying a correction in A.I. stocks could occur after this one-way buying/no-selling period. This jibes with Cam’s thinking.
In the late 1970s, I had lunch with a client who worked in computer software. He had bought Microsoft and told me to simply buy it and never sell it, because the new MS-DOS plus the new small computers was a huge revolution. Now Bill Gates and others are saying generative A.I. is a similar leap forward.
Here is a podcast that talks about the revenue that MSFT is generating now and going forward. It talks about their products and how A.I. upgrades them remarkably. After listening, I became a big believer. This isn’t hype.
After tech investors (ARKK, Cathie Wood et al.) got burned by hype over the last couple of years, it’s easy to shy away from and warn against this new technology. And there will be oceans of meme pump-and-dump schemes around it. But we are early.
A.I. is not new. The new thing is LARGE LANGUAGE GENERATIVE A.I. This new subset of A.I. learns and gets better (or nastier) as it learns to correctly guess the next word (or paragraph) in response to a query, given that it has the knowledge of the world in its cloud database.
Old A.I. learned about our habits and sent us adverts that matched our preferences. Or it was programmed for a task like chess or stock trading. This is baby stuff compared to large language generative A.I.
With billions of future users to learn from, the knowledge and quality of query responses will grow exponentially. In early days like now, we will hear of stupid query answers and think this isn’t a big deal. We would like to think that, because envisioning an all-knowing, sentient being in your pocket is freaky.
This is disruptive for sure. But also eruptive to your portfolio if Bill Gates is right.
Note: This is not a recommendation to buy since I don’t know your situation. It’s an observation.
Sorry, here is the podcast:
https://www.bloomberg.com/news/articles/2023-06-22/microsoft-s-big-investment-in-ai-is-paying-off-big-take-podcast
The smarter machines get, the better they work for us. They will continue with AI whether or not people say we should pause.
Will AI get Iran to drop its nuclear ambitions? Will AI get the Dems and Republicans to be friends? Will it make for a balanced budget?
I could go on, but I think I made my point.
The economy is about people. Everything, since before the industrial revolution, has been about people.
That AI will produce some winners and losers, I don’t doubt, but I doubt it will change us. There will always be fear and desire in all their manifestations.