Substrate: Building Compound AI Systems

We at Lightspeed are excited to announce our investment in Substrate, a developer API for composing and running multi-model and multi-modal AI systems at scale.

The most interesting and magical AI experiences will be enabled by “compound AI systems” that combine multiple models into a broader system. To quote a recent Berkeley AI Research blog on the topic: “state-of-the-art AI results are increasingly obtained by compound systems with multiple components, not just monolithic models.” Already, many of the exciting results that we associate with AI are compound in nature. When you chat with Anthropic’s Claude or OpenAI’s ChatGPT, you aren’t just talking to one model. There are in fact multiple models in play: complementary components that together produce the best outcome for end users.

This will soon be the norm — the AI community is effectively speedrunning the shift from monoliths to microservices that took decades in traditional software development. Yet these systems are inherently complex, and that complexity prevents most engineers from taking advantage of AI in its fullest form. You could call it the “Twitter/X demo effect” — AI demos tend to leave users excited, but AI tooling itself leaves engineers in the lurch.

In addition to the vast sums they spend on GPUs, large AI labs make massive investments in AI-specific internal tooling to help their engineers be most productive. Popular frameworks like TensorFlow and PyTorch are really only the tip of the iceberg — most of what engineers need never leaves the building. Traditional software engineers are left to cobble together a messy string of tooling and infrastructure, combined with brittle prompt engineering and lots of trial and error. As more models and tools are released, the decision space that developers must optimize over only gets more complex.

When it comes time to run an AI workload, developers realize most inference providers are optimized for single-model workloads. Further, most inference APIs are effectively offered on a “best-effort” basis, so any downtime or reliability issues are left for engineers to handle themselves. These providers also don’t know the shape of your workload in advance, so they can’t optimize on your behalf at the individual level — they can only optimize at the aggregate level. Don’t even ask about agentic workflows, where the costs quickly stack up and these providers become cost-prohibitive.

It’s no wonder so few organizations have made it to production with their AI use cases. Thankfully, Substrate is addressing these challenges with some old-fashioned, traditional software development ideas — concepts like atomicity, separation of concerns, parallelism, and most importantly, computation graphs.

Substrate Co-Founders Rob Cheung and Ben Guo.

Founders Rob Cheung and Ben Guo are well-attuned to developer needs. Rob has been a founding engineer multiple times over at startups like Substack and Fin.com. He’s a thoughtful, independent-minded engineer, known for doing the work of multiple developers to solve interesting and complex architectural challenges. Ben is a talented and respected engineer with strong product and design instincts and great taste for developer experience, having spent eight years at Stripe as the founding engineer on Stripe Terminal. Rob and Ben have known each other for over a decade since meeting at Venmo, and Substrate is the company they’ve always wanted to start together.

Substrate lets developers define computation graphs for multi-model AI workflows with its flexible and ergonomic SDK. Nodes of the graph represent atomic units of compute — each does one and only one thing, whether it be generating text, generating an image, or executing sandboxed code. Those nodes can then be linked together into graphs defining arbitrarily complex workflows, with the outputs of each node passed as inputs to the next in only a few milliseconds.
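To make the idea concrete, here is a minimal, self-contained sketch of a computation graph built from atomic nodes. The class names, the stubbed node functions, and the graph shape are all hypothetical illustrations of the pattern described above, not Substrate’s actual SDK:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    """One atomic unit of compute: does one and only one thing."""
    name: str
    fn: Callable[..., object]
    deps: List["Node"] = field(default_factory=list)

def run(outputs: List[Node]) -> Dict[str, object]:
    """Resolve each node after its dependencies, passing outputs downstream."""
    results: Dict[str, object] = {}

    def resolve(node: Node) -> object:
        if node.name not in results:
            inputs = [resolve(d) for d in node.deps]
            results[node.name] = node.fn(*inputs)
        return results[node.name]

    for out in outputs:
        resolve(out)
    return results

# Example workflow: "generate" text, then "summarize" it, with each model
# call stubbed out as a plain function for illustration.
draft = Node("draft", lambda: "a long passage of generated text")
summary = Node("summary", lambda text: text[:13] + "...", deps=[draft])
print(run([summary])["summary"])  # → "a long passag..."
```

Because each node is atomic and its dependencies are explicit, the workflow can grow arbitrarily complex while staying easy to reason about piece by piece.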

At runtime, Substrate’s backend operates over these graphs, taking advantage of their formal properties and benefits — everything from colocating computation based on network topology, to rewriting graphs to optimize for throughput or latency, to parallelization and bin packing. In this way, the same graph developers use to define their AI workflows is used by Substrate to optimize those workloads on the backend.
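One small example of what a graph-aware scheduler can do with these formal properties: a variant of Kahn’s topological sort groups a DAG’s nodes into “levels,” where every node in a level has all its dependencies satisfied and the whole level can run in parallel. The graph shape below is a made-up fan-out/fan-in workflow, not taken from Substrate:

```python
from collections import defaultdict
from typing import Dict, List

def parallel_levels(deps: Dict[str, List[str]]) -> List[List[str]]:
    """Group DAG nodes into levels that can each execute in parallel."""
    indegree = {node: len(ds) for node, ds in deps.items()}
    dependents = defaultdict(list)
    for node, ds in deps.items():
        for d in ds:
            dependents[d].append(node)

    ready = sorted(n for n, k in indegree.items() if k == 0)
    levels = []
    while ready:
        levels.append(ready)
        next_ready = []
        for node in ready:
            for child in dependents[node]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    next_ready.append(child)
        ready = sorted(next_ready)
    return levels

# Fan-out/fan-in: text and image generation share a prompt, then merge.
graph = {"prompt": [], "text": ["prompt"], "image": ["prompt"],
         "merge": ["text", "image"]}
print(parallel_levels(graph))
# → [['prompt'], ['image', 'text'], ['merge']]
```

Reading the workflow as a graph is what makes this kind of scheduling decision possible at all; a system that only sees one inference call at a time has no way to know that two calls could have run side by side.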

Substrate connects performant systems to elegant abstractions, seamlessly. Customers like Substack and Maven are already leveraging the platform, and Substrate today is processing billions of tokens per month across complex graphs.

Abstractions have always been critical in software development. It’s why we no longer write raw assembly code, and it’s why you don’t need a PhD in information theory to spin up a simple web application. Unfortunately, much of the discourse around AI centers on arcane technical details, almost for their own sake. It’s easy to lose sight of the “jobs to be done” — the product experiences developers yearn to create, which are the whole point in the first place.

We’re excited to lead Substrate’s $8M seed round alongside South Park Commons, Craft Ventures, Vercel founder Guillermo Rauch, Mercury CEO Immad Akhund, and Will Gaybrick of Stripe.

Substrate is bringing high-quality API experiences to the world of AI engineering. Interested? Substrate is hiring across engineering and developer relations for their New York-based team.

Lightspeed Possibility grows the deeper you go. Serving bold builders of the future.