Why behind AI: This article will lead to a market selloff
Squeaky bum time
“Squeaky bum time” refers to the sound made by shifting restlessly on a plastic stadium seat due to anxiety. Anybody who’s watched a soccer game in a stadium would’ve experienced this, hopefully followed by an intense relief of pressure due to your team scoring a goal.
We find ourselves in a similar time today, as technology, the economy, and the art of daily living are all meshed together. Those who pay attention have become spectators and occasional players in this new type of game.
The nervousness in the air is hard to miss. If this were only low-stakes entertainment, it would be easy to laugh about. Unfortunately, just like in sports, the feelings are more intense when we gamble.
Panic at the disco
In 2025, a press release from OpenAI about working with a company on AI infrastructure was sufficient for a stock rally. In 2026, we are witnessing the reverse. Anthropic launching new vertical products, no matter how vague, is sufficient to lead to a selloff in entire sectors. Squeaky bum time, indeed.
When Anthropic announces that their models will now do security reviews (something that OpenAI actually already did the week before with the launch of ChatGPT 5.3), the reflex across both the professional and retail investor community is to dump the cybersecurity stocks.
It doesn’t matter who the company is or how mission critical they are. Dump it.
It doesn’t matter if it’s running a well-managed business with a massive growth opportunity due to AI. Dump it.
It’s squeaky bum time and most of us are very much out of our depth.
Public market forces
Excluding Saudi Aramco, the top 10 public companies today by market cap are all tightly intertwined with tech. Apple leads in consumer tech, Tesla in physical AI, and TSMC in the production of the silicon that powers everything. While retail investors have been influential in driving algorithmic sentiment, the reality is that the large day-to-day movements of tech stocks can be traced back to the large financial institutions that are the biggest holders of each public company.
The funny thing about these financial institutions is that many of their investment decisions are driven by regulations and processes, rather than actual insight into what makes a tech company great or poor. This is why their ownership often tracks market caps proportionally, rather than being weighted according to, you know, actual insider knowledge.
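To make that concrete, here is a minimal sketch of market-cap-proportional weighting, the mechanical allocation described above. The tickers and valuations are hypothetical placeholders for illustration, not real figures:

```python
# Hypothetical market caps in billions of dollars (illustrative only).
market_caps = {"AAPL": 3500, "MSFT": 3300, "NVDA": 3200, "CRWD": 90, "FTNT": 75}

total = sum(market_caps.values())

# An index-style fund simply holds each company in proportion to its size,
# regardless of how well or poorly the underlying business is run.
weights = {ticker: cap / total for ticker, cap in market_caps.items()}

for ticker, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{ticker}: {weight:.1%}")
```

Note that nothing in this allocation rewards operational quality; a vendor with a larger market cap gets a larger weight, full stop, which is exactly the process-driven behavior described above.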
Fortinet is one of the competitors of CrowdStrike in many areas of cybersecurity. It is an objectively bad software vendor, known to acquire poorly performing companies in order to "cover gaps in their portfolio." It has repeatedly been exploited by nation-state attackers and is widely avoided by technically competent security teams. Yet, it has higher revenue than CrowdStrike.
This is made possible by a business model in cybersecurity where 90% of technology is acquired by Managed Service Providers (MSPs), who operate it on behalf of their end customers. Those MSPs are mostly interested in maintaining a margin and outcompeting other cheaply priced rivals, rather than doing the actual job of securing their customers.
While you would expect this to be common knowledge for the curious and informed investor, it doesn't appear to be the case. It also doesn't seem to be obvious to the hundreds of thousands of otherwise talented individuals who work in cybersecurity companies today, who often lack even basic understanding of the sector.
Reasoning through the panic
“How you do one thing is how you do everything” is one of the most important observations when it comes to high-performance outcomes in sports. There is a certain level of mental discipline required at all times if you would like to be part of the elite performers. And how most people today are approaching the potential impact of AI across the economy shows a lot of reasoning sloppiness, even if they’ve been directly exposed to the inner workings of tech companies.
I think that the last decade of an evergreen “number goes up” tech market has had some profound negative consequences for your average tech worker, who often happens to be an active retail investor.
The vast majority of them have never bothered to learn how to deploy a container to Kubernetes, think through a MITRE ATT&CK pattern, or run an agentic evaluation. The day-to-day of deeply technical activity behind the most important tech layer today, cloud infrastructure software, is a mystery to them.
The same mental laziness can be seen across all parts of cloud infrastructure software today. Salesforce struggling to convince customers to use their system of record as a productized data mesh is a completely different situation from companies rapidly adopting Databricks for essentially the same use case. If they could, most investors would've probably dumped Databricks as well, but the reality today is that if you are running a tech company that matters, you should avoid going public for as long as possible.
SaaS apocalypse and other disasters
There is some merit to the “SaaS apocalypse.” The main one comes back to the evergreen “number goes up” tech market. Because those businesses grew aggressively over the last decade, most of them never learned to run an efficient organization (or actively avoided doing so). When growth slowed down in 2022, many made an attempt at efficiency, actively promoting how well they benchmarked against the rule of forty. “We are rule of 50! 60! 70!”, CEOs would proudly brag on earnings calls, while approving expensive stock-based compensation grants across the whole company. Funnily enough, the only company that actually benchmarks as profitable by those criteria is Palantir, one of the most divisive and “overpriced” companies on the market.
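For readers unfamiliar with the benchmark: the rule of forty says a healthy software business's revenue growth rate plus its profit margin should sum to at least 40%. A minimal sketch, with hypothetical figures (real analyses typically use free-cash-flow margin, and the score looks very different once stock-based compensation is expensed):

```python
def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> float:
    """Return the rule-of-40 score: growth rate plus profit margin, in percent."""
    return revenue_growth_pct + profit_margin_pct

# Hypothetical SaaS vendor: 35% revenue growth and a 10% margin scores 45.
score = rule_of_40(35.0, 10.0)
print(f"Rule-of-40 score: {score:.0f} ({'pass' if score >= 40 else 'fail'})")
```

The catch, as the paragraph above notes, is that a company can "pass" on growth alone while burning cash and diluting shareholders with stock grants.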
Which brings us to one of the audiences with the most negative sentiment: those working in tech. After a decade of being in the top percentile of earners, partly thanks to generous stock compensation packages, tech workers are hurting. They face layoffs, increased accountability, and a slow realization that most tech incumbents are not adapting well to AI. Despite having worked in the industry for years, their own understanding of how things work is often very limited, which makes them uncomfortable. Those who kept their company stock have likely lost a significant portion of their net worth in the last twelve months. Many fueled the selloffs themselves, dumping shares every time they vested, as shown by the awkward statistic that the majority of tech company management have never invested in their own company, but have actively been selling whatever they hold.
The view on the inside is not too pretty either. Since generous stock grants have always been framed as a motivational bond (”we are all shareholders”), CEOs have spent the last year inventing excuses for why the market is punishing their specific company, while optimizing for better optics through layoffs and performance improvement plans (the silent layoffs). The stock market became a topic of company meetings and internal emails, but that did little to pause the intense selling once the internal trading window opened.
So if the tech investors and tech employees are bearish on the future of SaaS, if not the economy at large, then should we consider AI to be the black plague of our generation?
Creative destruction
Recent viral articles claimed that “everything is about to change” and that “the efficiency gains alone will kill the economy.” New product launches, together with some helpful social media virality driven by those articles, resulted in significant tech sector selloffs.
Probably the closest way to describe what’s happening is creative destruction. Creative destruction, coined by Joseph Schumpeter, refers to the incessant, necessary, and often painful process where industrial innovation destroys old economic structures to create new ones. It is considered the fundamental engine of capitalism, driving progress by rendering outdated methods, products, and jobs obsolete to foster growth.
While we’ve seen some recent examples of it (digital media vs physical; e-commerce vs brick-and-mortar), the enterprise software industry has largely been spared. Cloud, the most valuable technology transformation prior to AI, took many years to scale and companies have been slowly transitioning applications and workflows to the hyperscalers. Even today, there is a motion of moving back from cloud, due to costs and desire for “digital sovereignty.”
When LLMs first started being integrated into enterprise applications, most companies found that they were poorly prepared to actually productize AI. After the last generation of developers spent their careers mostly connecting APIs and deploying existing frameworks, the high bar of productizing an LLM revealed that many companies lacked both the vision and the technical talent to succeed. The opportunity led to the largest push of private capital into the new “AI-native” companies, which attracted many of the talented individuals currently working in large software vendors. Why stay at Salesforce, when you could raise $10M and work on problems you actually cared about, leveraging the best frontier models as soon as they become available?
The results of this dynamic were not obvious at first. X, the main platform where the AI conversation was happening, was full of launch videos, but offered nothing useful on the real-world value of these applications. Playing with the products of these new companies, the general feeling was often that “this is just an LLM wrapper.” This was a bit ironic, since the majority of SaaS software can at best be described as “an AWS wrapper.”
Things started to change towards the end of the year, as the models powering these applications took a clear step up in capability. This was already obvious to early adopters earlier in the year (if you happened to use a model like o3 Pro), but it took a while until LLM reasoning was being used by an early majority of users.
The most obvious change of tone was in the developer community, where the use of AI became strongly associated with Claude Code. Anthropic offering their own developer-focused product was something that most did not expect. Typically when an infrastructure provider (which the frontier labs were considered to be) has been able to establish themselves in the market, moving towards the application layer would be seen as risky, particularly since that meant they would now compete with their own customers.
The adoption of Anthropic models for coding in enterprise was so explosive (the company finished last year at $9B ARR, becoming one of the largest tech companies ever created), that the move towards vertical software felt very much justified for the management team. Anthropic did not stop here, launching Cowork and Security Reviews, two products that were built on the foundation of Claude Code and a logical extension of that motion. In enterprise, they also offered custom products for financial services and life sciences.
The reports of software dying are greatly exaggerated
The current trajectory of AI being a force of change in the software industry is both predictable and positive.
Yes, low value software will see significant margin compression and go out of business.
Yes, the majority of the value will accrue at the bottom of the stack, with cloud infrastructure software (data+AI, cybersecurity, developer tools) being the primary beneficiary.
Yes, most people currently working in the industry will have to actually start delivering outcomes, rather than behave as if they are going to an adult daycare.
Still, this will be a process where “human-in-the-loop” remains critical. Until we can access Artificial General Intelligence at production scale, the highest return from AI implementations would be in augmenting the top 20% of employees.
Scenarios where we don’t have AGI level intelligence, but AI agents have overtaken the productive economy do not make a lot of sense, partly due to efficiency (token price vs speed and quality of outcome), and partly due to in-group preferences. This might sound surprising, but outside of a small group of jaded social outcasts, most people would still prefer to build companies and work together with other humans. The difference is that the quality bar to be hired will be much higher.
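The efficiency point (token price vs. speed and quality of outcome) can be made concrete with a back-of-the-envelope calculation. All prices, token counts, and retry counts below are assumptions for illustration, not quotes from any real provider:

```python
# Hypothetical per-token pricing, in dollars per million tokens (assumed).
PRICE_PER_1M_INPUT_TOKENS = 3.00
PRICE_PER_1M_OUTPUT_TOKENS = 15.00

def task_cost(input_tokens: int, output_tokens: int, attempts: int = 1) -> float:
    """Dollar cost of an agentic task, including retries for failed attempts."""
    per_attempt = (input_tokens / 1e6) * PRICE_PER_1M_INPUT_TOKENS \
                + (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT_TOKENS
    return attempts * per_attempt

# A long-context task that needs several attempts to reach acceptable quality
# stops being cheap relative to an augmented expert who gets it right once.
print(f"1 attempt:  ${task_cost(200_000, 20_000):.2f}")
print(f"5 attempts: ${task_cost(200_000, 20_000, attempts=5):.2f}")
```

The retry multiplier is the part most projections ignore: fully autonomous agents pay for every failed attempt, while a skilled human augmented by the same model usually converges in far fewer iterations.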
Physical AI
One of the strangest glimpses into an alternative future can be seen on the battlefront in Ukraine. The past and the future have fused together, pushing hundreds of thousands into trench warfare reminiscent of World War I. This is where the parallels to the past end, as the majority of what war used to look like has been replaced with flying robots scouting, delivering supplies, transporting and ultimately killing the humans trying to survive.
The frontlines of eastern Ukraine are a window into a dangerous future, where autonomous weapons control every outcome. Drones fly with a fiber optic cable still attached to them in order to reduce interference, while AI-guided systems destroy any target that made the mistake of revealing itself. The humans hide, alone, often for months. Food and supplies arrive from the same type of machines that are hunting them.
Most cities, however small, are being covered with durable nets. If possible, they extend this protection to the road network. Humans walk around with drone detectors, shotguns and their own robot helpers, trying to navigate this new environment. Gathering in groups is incredibly risky and only done as a last resort.
This week, the executive leadership of Anthropic is trying to convince the Department of Defense that they should have the right to refuse the usage of Claude for autonomous weapons. The reporting on this is a bit patchy, but it has already led to the term “WarClaude” emerging into the tech community consciousness. If Anthropic and the DoD do not reach an agreement, Anthropic will lose its federal business (and likely face other consequences). On a long enough timeline, we should assume that all frontier models will end up being used in hardware aimed at killing other humans.
I’m not writing this to scare you, but rather highlight how unpredictable things can get, once physical AI becomes a real thing.
The vision of Tesla for Robotaxi ends with humans ultimately not being allowed to drive, as traffic becomes a fully automated cluster of machines tasked with deliveries.
The vision of Amazon for their warehouses is for no human labour to be needed in order to get products from the factory to your doorstep.
The vision of many pharmaceutical companies is a future where they can rapidly discover new drugs that accelerate their time to market through AI simulations and verifications at each stage of a traditional clinical trial.
The current progress of AI in software is fascinating to observe, but it hardly constitutes a “fast takeoff.” Outside of enthusiasts gathering for meetups and wearing crab outfits, we are witnessing the creative destruction of legacy software. Still, humans are very firmly in control, even if wealth and influence shift towards AI-native software companies.
Physical AI is much more difficult to predict. The infamous “AI 2027” report forecasted a negative scenario where we will be controlled, farmed and ultimately removed from existence. Others see physical AI as the ultimate accelerator for human success and the stepping stone to interplanetary species.
The first time that I negotiated with a large financial institution on a software deal, I was faced with a grumpy Scottish gentleman. He didn’t seem interested in the software we were selling or how things would work, but he kept repeating the same phrase, over and over again.
“Whatever happens, there should be no loss of value for the bank. No loss of value for the bank!”
Most of the stock market participants today seem to show the same mindset, overfocusing on perceived loss of value in software. That’s the squeaky bum talking. I firmly believe that software has never been as exciting or relevant as it is today. The difference is that in order to understand where value is accruing, you’ll have to push yourself harder than ever before.