Across three decades of enterprise investing, Lightspeed has learned that markets often bifurcate around trust. The headlines track one kind of progress. The contracts track another.
In AI and specifically at Anthropic, we’ve seen milestones that match this pattern: Claude Code hitting $500 million run-rate in 90 days. A $200 million government contract. Scaled enterprise partnerships with firms like Deloitte. New frameworks for evaluation, auditability, and observability in agentic workloads. Claude Code plugins deepening integration with development workflows. Evidence that Sonnet 4.5 is a tipping point for cybersecurity defense. Research breakthroughs in mechanistic interpretability.
These events rarely dominate the headlines, but they are critical indicators that AI is now being scaled to unlock some of the biggest (trillion-dollar) and most impactful market opportunities. And they point to the fundamental market bifurcation that’s underway.
The current debate centers on speed versus safety: does focusing on reliability constrain velocity and market competitiveness? It’s a reasonable concern given global competition dynamics.
But the enterprise data tells a different story. In markets requiring frontier-level AI capabilities, safety at the architectural level enables and speeds deployment, rather than slowing it. Companies that build trust into their core systems are closing the largest contracts faster.
Cybersecurity’s Lesson
For nearly two decades, Lightspeed has invested in cybersecurity platforms, including Zscaler, Netskope, Wiz, Rubrik, Exabeam, and Cato Networks. These were bets on enablement: that enterprises would demand security, reliability, and control before committing critical applications to new platforms, and would pay substantial premiums for the best protective solutions.
That pattern became undeniable on May 7, 2021.
The ransomware group DarkSide attacked Colonial Pipeline, the largest refined products pipeline in the United States. Within hours, the company made the devastating decision to shut down the entire pipeline system to prevent the ransomware from compromising its operational systems. Fuel shortages spread across the southeastern United States. Gas stations ran dry. Panic buying began. The government briefly declared a national emergency. Colonial paid $4.4 million in ransom, and the overall negative impact was assessed to be in the billions.
Rubrik understood this risk years earlier and made the strategic decision to deeply integrate data security into its enterprise data management platform. While incumbent backup companies treated it as an afterthought, Rubrik was one of the first enterprise data management platforms to prioritize security, vaulting, encryption, and data resilience, building them directly into its core architecture. When attacks hit, companies using Rubrik restored operations in minutes. Those without it faced weeks of downtime or total shutdown.
The market bifurcated overnight. Providers that natively integrated security captured the market. Those that treated it as optional failed to catch up and were left scrambling to retrofit security into systems never designed for it.
Rubrik’s revenue surged past $1 billion. Security built-in from day one yielded “must have” capabilities that competitors simply couldn’t replicate. Enterprises needed protection they could trust for critical systems and paid premiums for it. With hindsight, the early investment in security didn’t slow cloud digital transformation – it actually enabled it. AI is following the same trajectory, but at exponential speed we’ve never witnessed before. Today, the gaps between frontier AI capabilities and enterprise and hyperscaler requirements are already widening.
The Gaps Are Already Here
A deep understanding of LLM infrastructure, both current and future, reveals there are gaps around safety and reliability that must be addressed for enterprises to adopt AI at scale. While we have not seen an acute downside event yet, AI capabilities are advancing at exponential speed. As systems become more autonomous and capable, the attack surface expands. Research shows AI task completion in offensive cybersecurity doubles every five months – faster than general capabilities advance.
More capable AI means more ways things can go wrong. AI deployed in more mission-critical contexts means that when things do go wrong, the impact could be significantly more damaging. And many of today’s most accessible AI platforms are ripe for misuse by bad actors. The latest research also empirically confirms that AI misalignment resulting in “loss of control” is no longer just theoretical. In effect, the blast radius for compromised or misbehaving AI systems is growing fast, and in ways whose outcomes are impossible to fully predict.
With those risks, here’s what we believe enterprises actually need to deploy AI at scale:
Compliance and auditability require visibility. When regulators ask how an AI system made a decision, “the model said so” isn’t an answer. Mechanistic interpretability – understanding what’s happening inside AI models – will become the foundation of trust for regulated industries. Among the frontier labs, Anthropic is a leader here. Lightspeed-backed Goodfire is also building and productizing these core capabilities independently.
Monitoring and misuse prevention demand sophisticated detection. Red teaming and prompt-response classifiers are becoming table stakes as attack surfaces expand. Anthropic has invested deeply in these capabilities, and Virtue AI is also independently developing these monitoring systems because enterprises will demand them before deployment.
Steerability means reliable control of advanced AI systems. This is still an emerging science, and a critical one if AI is to be successfully leveraged in high-stakes environments where verifiable, measurable precision and finesse are mandatory. As AI becomes increasingly autonomous, humans must possess robust technology that stays ahead of frontier intelligence to remain in ultimate control.
Defense against AI-enabled threats is the next evolution. AI-powered cyber weaponization is an emerging reality as capabilities accelerate. Defense must evolve in parallel, and with more robustness than offensive AI threats, to protect against critical downside. Lightspeed portfolio company Cyera has developed highly innovative solutions in this area.
When you combine AI’s exponential advancement with potential for simultaneous large-scale incidents, governance frameworks also become critical. Whether it’s through voluntary, state, or federal standards, we believe open, rigorous discussion about AI safety and arriving at a uniform set of baseline AI safety principles must happen in the broader ecosystem.
AI safety R&D (as with Rubrik in data security) is a NOW imperative, before there’s a critical outage. The exponential improvement in AI intelligence, combined with the digital rails underlying vast amounts of our economic and societal activity, makes it impossible to predict the severity of AI-enabled cyber events that lie ahead. Companies with the mission and business alignment to make heavy upfront investments in these critical technologies will be doing all of us a service, maximizing our defensive capabilities in advance of a severe AI-enabled “zero day” event, in addition to delivering strong returns to investors.
The Market Is Bifurcating
The head of engineering won’t use AI that might corrupt the codebase. The CFO won’t let AI close the books if it might cook them. The Chief Compliance Officer demands explainability before signing off on anything touching regulated data.
The cost of getting it wrong compounds quickly: incorrect financial reports trigger audits or penalties; misaligned recruiting or hiring decisions open legal exposure; an unexplainable model output puts regulatory approval at risk. Enterprise buyers are thinking through these scenarios well before deployment, because the downside potential (notwithstanding the upside opportunity) is obvious even if these events have yet to materialize.
Capabilities get you in the room. Reliability will get you the contract. Large organizations pay substantial premiums for solutions they can trust. Healthcare, finance, industrials, government, defense – these markets demand safety guarantees before deployment.
Lightspeed believes scaling intelligence yields the ultimate horizontal enterprise product. Higher performance unlocks more domains, more tipping points, more trillion-dollar markets. But here’s the uncomfortable truth: capabilities without reliability aren’t products. They’re prototypes.
The market is bifurcating in real-time between trusted, frontier systems that can operate at scale and more limited systems that might show well in a demo but will never be deployed in the heart of a sophisticated organization. And it’s choosing trust.
Anthropic Proves the Thesis
At Lightspeed we’ve actively invested in and want to help drive the entire AI security and reliability ecosystem, with partners like Goodfire for mechanistic interpretability, Virtue AI for monitoring and red teaming, Cyera for frontier level AI cyber defense, and Anthropic for integrating safety into frontier models from the outset.
Anthropic understood this fundamental truth from the beginning and is now being validated with concrete market results. They didn’t rush features to market; instead, they committed early to key research in areas like mechanistic interpretability, alignment, constitutional AI, and responsible scaling standards. And they’ve since shared much of this openly with the rest of the industry. Their steadfast dedication to scaling trustworthy and safe intelligence is paying off. Safety and speed are not in opposition – in fact, they are mutually reinforcing.
- Claude Code went from research preview to over $500 million in run-rate revenue in just three months. The fastest enterprise adoption we’ve seen in three decades of investing. Why? Because enterprises trust it won’t corrupt their codebases.
- Anthropic secured a contract with the United States Government with a $200 million ceiling that wouldn’t have been possible without Claude’s ironclad reliability guarantees.
- Claude for Financial Services and Claude for Healthcare are deployed in systems where mistakes cost millions and compliance is non-negotiable.
Anthropic is building capable models that enterprises can actually deploy. And by publishing research and evaluations that establish industry standards and create transparency they promote competition and enable progress rather than blocking it.
Safety Scales
Companies racing ahead without safety will hit a ceiling before they hit the market. Impressive demos will sit unused in enterprise evaluation environments, waiting for guarantees that were never architected in.
The noise around consumer AI – viral demos, benchmark wars, app store buzz – is only one part of the story. Anthropic doesn’t compete for press attention; they compete for enterprise trust. And they’re delivering results.
They optimize for deployment, not media coverage. They race to earn trust, not just to ship features. That’s not a slower path to market, it’s a faster path to revenue. Safety is an accelerant, a commercial strategy informed by technical reality. Companies scaling both capability and safety architecturally are capturing premium enterprise contracts. Intelligence scales. Features commoditize. The economy of bits is exploding, and enterprises are choosing the platforms they can deploy with confidence – a prize that accrues only to companies that earn trust at the frontier.
Our agenda: ensure AI’s transformative potential reaches the largest set of markets in the most beneficially impactful way. Safety is the unlock that guarantees the world will benefit from AI.
Anthropic understood this from the beginning. The market is proving them right. That’s not a dichotomy. That’s just good business.
The content here should not be viewed as investment advice, nor does it constitute an offer to sell, or a solicitation of an offer to buy, any securities. The views expressed here are those of the individual Lightspeed Management Company, L.L.C. (“Lightspeed”) personnel and are not the views of Lightspeed or its affiliates; other market participants could take different views.
Unless otherwise indicated, the inclusion of any third-party firm and/or company names, brands and/or logos does not imply any affiliation with these firms or companies.