News
Founder stories,
partner perspectives,
and industry insights.

The Sophistication Era of Consumer Fintech
Consumer fintech is in a transitional phase. Consolidation is taking place as value accrues to larger incumbents, and we’ve reached peak fragmentation: there is an app for every financial task.
Banking access has largely been solved for younger generations in the US. So we think the next phase will be defined by AI-enabled applications in wealth and spend. And these are increasingly consolidated in vertical superapps. We are tentatively calling this 3rd era of consumer fintech the sophistication era (name suggestions welcome!).
Today there is a relatively small group of VC-backed consumer fintech companies that are worth over $10B in market cap. Just 7 on US exchanges as of this article’s writing: Intuit, PayPal, Block, Nubank, Coinbase, Grab, and Zillow. Eight if you include Mercado Libre (~50% fintech revenue). Intuit is mostly B2B revenue and Zillow isn’t exactly a fintech. So maybe 6.
On one hand, 5 of those IPO’d during the 2nd era of personal finance, the mobile era, so you could argue it was the most successful era of personal finance yet.
On the other hand, many fintech investors feel burned by capital-intensive models and CAC wars, so much so that there’s a narrative that consumer fintech is dead. We strongly disagree.
We believe the opportunity for tech-enabled personal finance has never been stronger, and we expect the 3rd era of consumer fintech to continue to advance toward the promise of a personalized finance future.
What might the 3rd era of personal finance look like?
In its most idealistic form, the goal of personal finance is for it to be completely, well, personalized. The complete personalization of finance means each user receives the specific financial help they need at that moment, tailored to their individual financial situation.
We’re not there yet. Today’s rising consumer products all point toward a step before that; the fastest growing categories of consumer fintech activity today revolve around more sophisticated spending and intelligent optimization. Savvy consumers are optimizing their marginal dollar.
Consumer finance’s 3.0 era will be dominated by sophisticated optimization of marginal dollars around 1) wealth management and 2) consumer spending, both of which are 3) increasingly accessed through vertical superapps.
There are large macro trends driving these categories.
In wealth, we are facing the largest wealth transfer ever seen. You all know the details, so we’ll spare a long description, but more than $60 trillion of wealth is expected to change hands in the coming years as baby boomers pass on, and more than 80% of recipients (millennials) are expected to change advisors. There are over 25 million millionaires in the US, so this is not an insignificant number of people. We are also seeing this play out from a bottom-up perspective: the wealth management startups growing the most right now are tackling several problems serving this need:
- Retirement Plan Administration
- Wealth Transfer
- Estate Planning
- Alternative Asset Access
- Sophisticated Tax Strategies
- Global Accounts as Volatility Hedges
- Yield-Bearing Products & Fixed-Income Tech
Consumers are becoming more savvy about how they spend their marginal dollar, optimizing for personal efficiency. The macro backdrop for these increased spend choices is that payment rails and payment interfaces are undergoing huge changes: the emergence of new government-backed rails like Pix, UPI, and FedNow, plus the growing popularity of stablecoins and crypto payments, means the card networks are on watch.
We’ve unbundled the bank; now it’s time to unbundle the card networks. We are also seeing this play out from a bottom-up perspective: the spend startups growing the most right now are tackling several problems serving this need:
- Multiplayer Finance
- Asset-Backed Cards for Lower APR
- Savings, Refunds, & Robo Claims
- Pay by Bank
- Digital Currencies
- Climate Alignment
- Top-of-Wallet Spend Optimization
- Debit Rewards + Loyalty
- Global Accounts for Borderless Spend
- Verticalized Spend
In vertical superapps, the macro backdrop is rebundling, taking place after 15 years of bull-market-fueled consumer fintech mania. The drivers: overfunding and underbuilt startup economic models exposed by the macro downturn; consumer fear following bank collapses (SVB/FRB); and the US’s unique financial market structure, which has not been compatible with the birth of financial superapps of the kind we’ve seen in other countries (WeChat Pay, Alipay, Mercado Libre, Grab, etc.).
Acquisitions are underway, startups are dying, and financial products are increasingly embedded in workflows and commerce experiences. We are seeing this play out from a bottom-up perspective: startups we’re meeting that are benefiting from consolidation pressures include:
- Vertical SaaS fintech (hospitality, restaurant, construction, education)
- Embedded fintech
- BaaS
- Data aggregators
- Growth-stage consumer fintechs via M&A and cross-selling
Peak Fragmentation
It makes sense that this third era is unfolding now. US consumers already have a plethora of digital banking solutions. The average American has 5-7 debit cards and 6-8 credit cards. Consumers have an overabundance of cards.
In fact, we think we’ve already reached the peak unbundling phase of consumer fintech. There are only 2 business models – unbundling and re-bundling – and after decades of unbundling the bank, we are already starting to see more consolidation.
For example, every mobile banking app of the last decade is now cross-selling its users additional banking products (see part 1 of this series). The US market is saturated with banking options.
Consumers have their digital banking foundation set. So what do consumers need now?
They need a way to make sense of all their financial activity across institutions. Not just aggregation, but an intelligence layer that optimizes spend. That finds sophisticated wealth and tax solutions. That delivers insights about how to best allocate that consumer’s marginal dollar.
The need for an intelligent solution to optimize consumer spend couldn’t have been timed better; it is arriving just as generative AI sweeps tech. Similar to our identification of SaaS entering its 4th era, the Era of Cognition, personal finances are entering their 3rd era, the Sophistication Era.
Past eras
The second era of consumer fintech, the mobile or neobanking era, is ending. The first two eras of consumer fintech in the US were dominated by digital banking and brokerage activity moving first online, then to mobile.
The first era saw PayPal, eTrade, TurboTax (Intuit), Capital One, Mint, etc. grow during the 90s and 2000s as primarily desktop solutions, digitizing previously offline activities (PayPal – spend, eTrade – invest, TurboTax – taxes, Capital One – borrow).
The second wave was mobile-first and saw CashApp, Robinhood, Affirm, Coinbase, SoFi, etc., take market share by transitioning those personal finance activities – spending, investing, and banking – into a seamless, on-the-go experience.
The third era, unfolding now, is all about sophisticated spending and wealth management. Despite having all the fintech apps we need, we still can’t answer basic personal finance questions like, “What two things should I do to most improve my financial situation?” or “How much did I spend across all accounts this month?”
As one Twitter user pithily put it, “there is an app for everything but you still cannot do everything in one app.”
AI’s impact on Sophisticated Wealth + Spend Choices
AI will impact both wealth and spend. On the wealth side, the focus is sophisticated planning; on the spend side, it is sophisticated allocation of the marginal dollar.
Sophisticated Wealth
We’re seeing AI applied to wealth management strategies in limited contexts today: roboadvising has been around for a long time and is, in its most basic form, automation of savings and investing. We are also seeing more AI-directed portfolio strategies roll out and reach strong product-market fit with tech-savvy users, but they still require a lot of education for the average consumer.
Instead, we are seeing consumers opt for more sophisticated wealth management choices through tech-enabled tax-advantaged strategies, estate planning, alternatives access, global accounts as volatility hedges, and much more.
Asset management is also a large line item in JPMorgan’s revenue breakdown. While we believe it will take longer for consumers to trust a fully automated wealth solution, there is a large profit pool up for grabs. 80% of millennials are expected to change advisors when inheriting wealth from their parents: that’s over $50 trillion up for grabs for new financial institutions serving millennials.
We believe it will require a human-in-the-loop for the majority of users for several more years as consumers gain comfort with AI touching their nest egg.
Sophisticated Spend
We’re already seeing AI applied to “found money” opportunities, especially liabilities, as users become more savvy about sophisticated spend.
Today, we are already seeing consumers gain comfort with delegating responsibility for select areas of the liabilities side of their balance sheet to AI. Over the next few years, we should expect consumers to gradually increase comfort level to include all liabilities. Several years after that, trust and education levels should increase such that assets (checking, savings, retirement) can begin to be automated for the masses through AI.
After both of those are achieved, the vision of AI as the consumer fintech interface laid out above can begin to unfold.
As we meet founders today in mid-2023, there are only a few areas where consumers reliably trust AI to fully handle tasks end to end: 1) debt removal, 2) subscription cancellation, and 3) finding savings and new money.
This is for a fairly straightforward reason: most consumers are happy to sign up for a service that can help them decrease their liabilities and increase their assets without risk of losing their nest-egg. It’s unclear whether the 3 product categories listed above will create enough touch points with the user to be a strong wedge into a longer banking relationship.
AI will take profits from incumbents
Highly valued consumer fintechs of earlier eras (Nubank, PayPal, Block, etc.) succeeded by attacking structurally high-margin incumbent businesses (Brazilian banking, international money transfer, external bank transfers, acquiring), which created space for lower-friction digital products to emerge.
Consumers’ personal balance sheets – deposits and liabilities – are a bank’s revenue opportunity. Continued advancement in AI will enable consumers to get a better deal.
Just look at JPMorgan’s 2022 revenue breakdown: over 50% of its $128B in revenue came from interest income, or revenue earned from interest-bearing assets – a combination of yield-bearing assets, lending, etc.
Some areas where we think there are large remaining profit pools in the US:
- 6% buy and sell commission for home sales
- Wire fees
- Interchange fees from card networks: acquiring + issuing
- 1% AUM fees for wealth management
- 20-25%+ APRs for consumer credit (earned wage access is even higher)
Where else can tech remove friction from a big profit pool business for incumbents now that AI is here? Feel free to write to us with your thoughts.
AI will improve the consumer fintech business model
We outlined some of the common complaints about consumer fintech business models at the start of this piece. There are really four areas where a consumer fintech can have a cost advantage: cost to acquire, cost to underwrite, cost to service, and cost of capital.
AI has been used to improve the cost of underwriting for a very long time in lending (see our fintech AI blog post). We only expect this to improve.
The area we’re seeing the greatest change in is generative AI’s ability to reduce the cost to serve users. From robo debt recovery calls, to AI-enabled capital markets benchmarking, to extending the number of users a wealth manager can service, to automatic refund cancellation, to sophisticated tax strategies for the masses, automation is reducing COGS by 30 to 80% across companies we’ve met.
Incumbent banks continue to have a massive advantage in operating with a lower cost of capital. We are starting to see fintechs securing better debt terms using AI-driven software.
Acquisition costs are becoming a more even playing field. We’ve seen consumer fintechs using AI chatbots as acquisition tools. We’d love to hear from founders on even more ways you’re improving costs with AI.
Where incumbent banks’ advantage remains
The two areas where incumbents continue to have a large advantage over consumer fintechs are 1) distribution – a lower cost of acquisition for subsequent products – and 2) cost of capital – they can fund their working capital cycles at nearly the Fed Funds rate, the lowest cost available.
The counterattacks to big banks’ distribution and cost-of-capital advantages are 1) turning commerce experiences into banking experiences – enabling anyone to become a bank (the consolidation theme) – and 2) encouraging users to fund their own risk – incentivizing the right type of behavior and risk alignment by introducing fee-based products and subscriptions tied to greater financial freedom on platform, such as collateralized assets that bring down the cost of capital (the spend theme).
We think there is still a large opportunity to innovate in these two remaining bank strongholds as the 3rd era of sophisticated finance unfolds.
If you’re looking around, seeing the last era of neobanks still early in their journey and wondering if consumer fintech is dead, think again. The category is evolving into one where sophisticated spending, sophisticated investing tools, and vertical superapps will dominate. This shift has created a very uncertain time in consumer fintech. We are in the midst of a paradigm shift from the second era of consumer fintech to the third.
There is so much opportunity ahead.
If you enjoyed this, don’t hesitate to reach out. We can be reached at fintech_contact@lsvp.com. Check out other related articles, including The Successful Consumer Fintech Path, Fintech x AI, and Lightspeed’s 2023 Fintech Trends.

Building With Startups Leading The AI-Based Transformation
Truly transformative technologies build new markets, upend existing ones, and impact every aspect of how we live, work, and play. At Lightspeed, we’ve been fortunate to have witnessed and participated in the rise of multiple transformative technologies over the last 25 years – from the rise of the internet to the rise of mobile, and now the rise of AI – specifically generative AI.
These transformative cycles are powerful and chaotic at the same time. And yet, we’ve seen incredible Founders time and again connect the dots between the emergence of a technical innovation and the massive opportunity it unlocks. This happens up and down the stack, across sectors and geographies. A similar narrative is playing out in AI right now. We have been working with extraordinary companies building AI-native applications across sectors such as enterprise SaaS, robotics, consumer, healthcare, and fintech, as well as engineering the underlying infrastructure to help develop and run AI-based applications. The following image highlights some of these amazing teams and the sectors they are building in. You can also read more about each team’s ambitious vision for the future on our companies page.
There’s no pre-determined sequence in which transformative technologies play out in the broader technology ecosystem. Often, things start with the emergence of one new infrastructure capability that catches the attention of developers, who then start building on top of it. This virtuous cycle of applications pushes the bounds of the infrastructure, accelerating further innovation and leading to ever broader application of that technology within applications, and so on.
A similar cycle of infra and apps making each other better is playing out in AI. OpenAI’s introduction of transformer-based models (GPT-2/3) led to the emergence of applications in areas such as copywriting, blog writing, etc. The increased developer interest led to continued innovation by OpenAI (GPT-3/3.5/4, ChatGPT, Whisper), but also to a Cambrian explosion of open source models from Meta, Stability, Mosaic, Databricks, and many more. This, in turn, feeds rapid incorporation of generative AI within applications, as well as the rise of generative AI operational, security, and observability platforms.
In subsequent articles, we’ll expand on the major markets outlined in the above map, detailing the areas we are actively investing in and the themes we are excited about. Our focus at Lightspeed is on having prepared minds, so we can connect the dots and dream together with our Founders. And we aren’t done yet – we’d love to talk to you if you have a strong point of view on how AI will create opportunities in this rapidly changing economy.
More Reading from Lightspeed
- Report
- Eight AI Startups Winning The Race For Tech Talent – Meet the Generative Eight — and The Next Gen of AI companies building the future
- Enterprise Applications
- SaaS 4.0: Say Hello to The Era of Cognition – Generative AI will eventually be a native part of every enterprise software company. Here’s how we see the winners emerging.
- AI Infrastructure
- Will Enterprise AI Models be “Winner Take All”? – How a Cambrian explosion of models creates a broad set of opportunities in AI Infrastructure
- Meet Europe’s Next Great Generative AI Startup: Mistral AI – Founded by key members of Meta’s and Alphabet’s AI research teams, Mistral AI has closed a seed round of over €105 million, led by Lightspeed.
- Contextual AI: Making Enterprise-Ready LLMs Available For All – Announcing Lightspeed’s latest investment in the AI space
- FinTech
- Fintech x AI: The Lightspeed View – The future of finance: Not all AI is created equal
- Consumer
- Startups v. Incumbents—The Battle For AI’s Application Layer – If your AI product is going head to head with an incumbent, their distribution advantage will probably kill your startup…unless you fight back with a different game.
- Fictional Reality—Why the Real Metaverse Doesn’t Require a Headset – AI is letting us live out alternative realities on the digital platforms where we already spend our time
- Gaming
- Gaming x AI Market Map: The Infinite Power of Play – An (inevitably incomplete) overview of the emerging landscape of artificial intelligence and procedural generation in interactive media
- Generative Event Recaps
- Scott Belsky on Creativity in Generative AI – An AI-enhanced transcript of the conversation with Adobe’s Chief Strategy Officer and EVP of Design and Emerging Products
- How Founders Can Spot the Next Big Opportunity in AI – Spotify’s VP of Personalization believes there is one underrated trait that all founders need for success.
- Fintech in AI: Five Lessons For Founders – Takeaways from our conversation with Grace Liu, Alloy’s Director of Product; Sasha Orloff, founder and CEO of Puzzle.io; and Leif Abraham, co-founder and co-CEO of Public.com.
- Lightspeed AI Reading List
Companies on our Portfolio Map (57 and counting)
- AI/ML Ops
- Consumer, Commerce
- B2B Enterprise
- Productivity
- Security, Observability, GRC, Safety
- Vertical Applications
- Healthcare
- Gaming, Media
- Autonomy & Robotics
- AI Infra and Foundation Models

Will Enterprise AI Models Be “Winner Take All?”
A Cambrian explosion of models creates a broad set of opportunities in AI Infrastructure
Over the past decade, we at Lightspeed have had a front row seat to the incredible innovations in AI/ML thanks to the amazing founders we’ve had the privilege to partner with. We’ve been working with their companies, the platforms they’ve built, and the customers they serve, to better understand how enterprises are thinking through GenAI. Specifically, we have investigated the foundation model ecosystem with questions like, “Will the best performing model have winner-take-all dynamics?” and, “Will enterprise use cases all default to calling, say, OpenAI APIs, or will actual usage be more diverse?” The answers will determine the future growth of this ecosystem, and in what direction energy, talent, and dollars will flow.
Categorizing the Model Ecosystem
Based on our learnings, our belief is that there’s a Cambrian explosion of models coming in AI. Developers and enterprises will pick models best suited for the “job to be done,” even though the usage at exploratory stages might look a lot more concentrated. A likely path for enterprise adoption could be the use of big models for exploration, gradually moving to smaller specialized (tuned + distilled) models for production as they learn more about their use case. The following visual helps outline how we see the Foundation Model ecosystem evolving.
The AI model landscape can be divided into 3 primary, though somewhat overlapping, buckets:
Bucket 1: “Big Brain”
These are the best of the best, the leading edge of models. This is where the exciting magical demos that have captivated us all are coming from. They are often the default starting point for developers when trying to explore the limits of what AI can do for their apps. These models are expensive to train and complex to maintain and scale. But the same model can take the LSAT or the MCAT, write your high school essay, and engage with you as a chatbot friend. These models are where developers are currently running experiments and evaluating AI usage within enterprise applications.
But they are expensive to use, inference latency is high, and they can be overkill for well-defined, constrained use cases. A second issue is that these models are generalists that can be less accurate on specialized tasks. (For instance, see meta-studies such as this one from Cornell.) Lastly, today, in almost every case, they are also black boxes that can present privacy and security challenges for enterprises grappling with how to utilize them without giving away the farm (their data!). OpenAI, Anthropic, and Cohere are examples of companies in this bucket.
Bucket 2: “Challenger”
These are also high-capability models, with skills and abilities just below the most cutting edge. Models such as Llama 2 and Falcon are the best representations of this category. They are often as good as some of the Gen “N-1” or “N-2” models from the companies building the Bucket 1 models. By some benchmarks, Llama 2, for instance, is as good as GPT-3.5-turbo. Tuning these models on enterprise data can bring their abilities up to those of Bucket 1 models on specific tasks.
Many of these models are open source (or close enough) and once released have led to immediate improvements and optimizations by the open source community.
Bucket 3: “Long Tail”
These are “expert” models. They are built to serve a narrow purpose, like classifying documents, identifying a specific property in an image or video, identifying patterns in business data, etc. These are nimble, inexpensive to train and use, and can be run in data centers or on the edge.
A quick look at Hugging Face is enough to get a sense for how vast this ecosystem already is and will grow to be in the future, thanks to the breadth of use cases it serves!
Matching Use Cases to Models
While it’s early, we’ve seen some of the leading development teams and enterprises already thinking about the ecosystem in this nuanced way. There’s a desire to match the use case to the best possible model – perhaps even to use multiple models to serve a more complex use case.
The factors used in evaluating which model(s) to use often include the following:
- Data privacy and compliance requirements, which impact whether the model needs to run in enterprise infrastructure, or if data can be sent to an external hosted inference endpoint.
- Whether the ability to fine tune a model is critical or strongly desired for this use case.
- What level of inference ‘performance’ is desired (latency, precision, expense, etc.).
The actual list is often much longer than just the above and is reflective of the tremendous diversity in use cases that developers would like to use AI for.
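As a rough illustration of how these factors might translate into a selection policy, here is a minimal Python sketch. The use-case fields, thresholds, and bucket labels are hypothetical, chosen only to show the shape of the decision, not to prescribe real cutoffs.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Hypothetical description of an enterprise AI use case."""
    name: str
    data_must_stay_in_vpc: bool      # privacy / compliance requirement
    needs_fine_tuning: bool          # is tuning on enterprise data critical?
    max_latency_ms: int              # inference latency budget
    max_cost_per_1k_tokens: float    # inference cost budget

def pick_model_bucket(uc: UseCase) -> str:
    """Map a use case to one of the three buckets described above.
    Thresholds are illustrative, not prescriptive."""
    # Strict privacy or fine-tuning needs push toward models the
    # enterprise can host and tune itself (Bucket 2 or 3).
    if uc.data_must_stay_in_vpc or uc.needs_fine_tuning:
        # Tight latency / cost budgets favor small expert models.
        if uc.max_latency_ms < 200 or uc.max_cost_per_1k_tokens < 0.001:
            return "Bucket 3: long-tail expert model (self-hosted)"
        return "Bucket 2: challenger model, tuned on enterprise data"
    # Otherwise a hosted frontier model is the simplest starting point.
    return "Bucket 1: hosted big-brain model via API"

if __name__ == "__main__":
    print(pick_model_bucket(UseCase("contract clause tagging", True, True, 150, 0.0005)))
    print(pick_model_bucket(UseCase("open-ended research copilot", False, False, 2000, 0.03)))
```

In practice, such a policy would also weigh accuracy on task-specific evaluations, fine-tuning cost, and the other factors listed above.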
Where are the opportunities?
There are several implications of this emerging ecosystem:
- Evaluation Frameworks: Enterprises will need access to tooling and expertise that can help evaluate which model to use for which use case. Developers will need to decide how best to evaluate the suitability of a specific model for the ‘job to be done.’ The evaluation would need to be multi-factor and include not just the performance of the model, but also the cost, the level of control that can be exercised, etc.
- Running and Maintaining Models: Platforms to help enterprises train, fine-tune, and run models, especially the Bucket 3 long-tail models, will emerge. Traditionally, these have broadly been referred to as ML Ops platforms; we expect that definition will expand to include generative AI as well. Platforms such as Databricks, Weights & Biases, Tecton, and others are rapidly building towards this.
- Augmentation Systems: Models, particularly hosted LLMs, need retrieval-augmented generation (RAG) to deliver superior results. This requires a secondary set of decisions to be made, including:
- Data and metadata ingestion: How to connect to structured and unstructured enterprise data sources, and then ingest both the data and metadata such as access policies.
- Generating and storing embeddings: Which model to use to generate embeddings for the data, and then how to store them – which vector database to use, based on the performance, scale, and functionality desired.
Opportunities exist here to build enterprise-class RAG platforms that can take away the complexity associated with selecting and stitching together these components; a minimal sketch of the underlying flow follows this list.
- Operational Tooling: Enterprise IT will need to build guardrails for engineering teams, manage costs, etc. All the tasks they handle today for software development will need to expand to include AI usage. Areas of interest for IT include:
- Observability: How are the models doing in production? Is their performance improving/degrading with time? Are there usage patterns that might impact the choice of model in future versions of the application?
- Security: How to keep AI-native applications secure. Are these applications vulnerable to a new class of attack vectors that need new platforms?
- Compliance: We expect AI-native applications and LLM usage will need to be compliant with frameworks that relevant governing bodies are already beginning to work on. That’s in addition to the existing compliance regimes around privacy, security, consumer protection, fairness, etc. Enterprises will need platforms that help them stay compliant, run audits, generate proof of compliance, and handle associated tasks.
- Data: Platforms to help understand the data assets an enterprise has, and how to use those assets to extract the maximal value from new(er) AI models, will see rapid adoption. As one of the largest software companies on the planet once said to us, “our data is our moat, our core IP, our competitive advantage.” Monetizing this data using AI, in a way that drives additional “differentiation without diluting defensibility” will be key. Platforms such as Snorkel play a critical role in this.
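Following the pointer in the Augmentation Systems item above, here is a minimal, self-contained sketch of the retrieval-augmented generation flow. The `embed` function and the in-memory store are deliberately simplistic stand-ins (assumptions for illustration only) for whichever embedding model and vector database an enterprise actually selects; the access-policy metadata captured at ingestion is enforced at retrieval time.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: replace with the embedding model you select
    (hosted API or self-hosted). Characters are hashed into a fixed vector
    purely so the sketch runs end to end."""
    vec = np.zeros(256)
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) * (i + 1)) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class InMemoryVectorStore:
    """Stand-in for a real vector database (the real choice is driven by the
    performance, scale, and functionality requirements discussed above)."""
    def __init__(self):
        self.items = []  # list of (embedding, chunk, metadata)

    def add(self, chunk: str, metadata: dict):
        # Ingestion step: store the chunk alongside metadata such as
        # access policies pulled from the source system.
        self.items.append((embed(chunk), chunk, metadata))

    def search(self, query: str, allowed_groups: set, k: int = 2):
        q = embed(query)
        scored = [
            (float(np.dot(q, e)), chunk)
            for e, chunk, meta in self.items
            if meta.get("access_group") in allowed_groups  # enforce policy at retrieval time
        ]
        return [chunk for _, chunk in sorted(scored, reverse=True)[:k]]

def build_prompt(question: str, store: InMemoryVectorStore, allowed_groups: set) -> str:
    """RAG step: retrieved chunks are prepended to the question before it is
    sent to whichever LLM was selected."""
    context = "\n".join(store.search(question, allowed_groups))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = InMemoryVectorStore()
store.add("Q3 travel policy: economy class for flights under 6 hours.", {"access_group": "employees"})
store.add("M&A pipeline: acquisition of Acme Corp planned for Q4.", {"access_group": "executives"})
print(build_prompt("What is the travel policy?", store, {"employees"}))
```

A production system would replace each stand-in with a managed component and add the observability, security, and compliance layers described in the Operational Tooling item above.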
This is an amazing time to be building platforms in AI infrastructure. Adoption of AI will continue to transform entire industries, but it will require supporting infrastructure, middleware, security, observability, and operations platforms to enable every enterprise on the planet to adopt this powerful technology. If you are building to make this vision become a reality, we’d love to speak with you!

Gaming x AI Market Map: The Infinite Power of Play
An (inevitably incomplete) overview of the emerging landscape of artificial intelligence and procedural generation in interactive media
Learn more about our investment in Inworld AI, the leading character engine for AI NPCs.

Eight AI Startups Winning the Race for Tech Talent
Meet the Generative Eight — and The Next Gen of AI companies building the future
Generative AI has taken the world by storm, and the companies building the future are in a race to hire the best available tech talent.
No technology has more rapidly changed humans’ relationship to their tools than the sci-fi fantasy brought to reality by OpenAI’s ChatGPT: Talking to a machine the way you’d talk to a colleague or friend, and getting valuable output from the interaction. Suddenly, it seems the only code one needs to know to make a computer do something novel is as simple as the English language. That’s true — almost magically so — but only to a point.
For the coders building the large language models, infrastructure, and supporting technology for AI applications, a new and specific set of hardcore technical skills and capabilities is necessary. And of course, experience counts — it takes deep knowledge and creativity to dream up some of the products reaching the market today. The engineers who know how to push the bounds of today’s AI technologies are the ones who’ve spent years perfecting them at some of the largest tech companies in the world.
But those large companies are, for the most part, playing catch-up as their top talent gets poached. There is a new crop of nimble, innovative AI companies — pure-play startups with AI at their core. They are in a race for tech talent. At Lightspeed, we’ve been investing in AI for over seven years, and have been tracking this space for even longer.
This report is based on proprietary research and public data that identifies which AI firms are winning the race to hire the best technical talent, and breaks down where that talent is coming from. Let’s dive in.
AI Excitement
The pace of technological change keeps accelerating. From the automobile to the plane to the semiconductor, we’ve seen the power of innovation unleashed upon the globe. Generative AI is the kind of meta-technology that will, over time, reorganize the way nearly everything in our world functions and evolves, from farming to education to enterprise marketing to finance.
The companies that will define our future are being built today, in front of our very eyes, with technologies whose source code anyone can read.
Exploiting this sea change requires one thing above all else: talent. Despite overblown fears that AI will replace all jobs with robots, there’s more demand than ever from generative AI startups for the kind of specialized skill sets needed to build new features and products. Not to mention managing all of the associated technology like running data centers holding thousands of GPUs crunching terabytes of training data. The work around LLMs has only just begun, as companies race to train bespoke models based on this technology, and refine those already out in the wild.
For that reason, stars on the GitHub projects where AI code lives are a useful proxy for tracking interest. And interest in AI projects — where anyone can contribute and learn how to create AI applications — has exploded, as seen in the chart below. The curve for OpenAI in particular gets steep right around the time of ChatGPT’s release to the public on November 30, 2022.
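For readers who want to reproduce this kind of signal, current star counts can be pulled from GitHub’s public REST API. Below is a minimal sketch; the repositories listed are examples only, and full star history over time (rather than a point-in-time count) requires the paginated stargazer endpoint, which is omitted here for brevity.

```python
import requests

# Example repositories; swap in whichever AI projects you want to track.
REPOS = ["openai/whisper", "huggingface/transformers", "Stability-AI/stablediffusion"]

def star_count(repo: str) -> int:
    """Fetch the current stargazer count from GitHub's public REST API."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}",
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["stargazers_count"]

if __name__ == "__main__":
    for repo in REPOS:
        print(f"{repo}: {star_count(repo):,} stars")
```

Unauthenticated requests are rate-limited, so larger-scale tracking would use an API token and store daily snapshots to build the adoption curves discussed here.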
We can’t compare GitHub adoption curves to the transition from, say, stagecoaches to railroads, but by one account, the first passengers in the UK were carried on rail in 1825. By 1840, long-distance, horse-drawn coaches, which had dominated intercity travel since 1790, were obliterated. An industry that took 50 years to build took only 15 to creatively destroy and replace with superior technology.
It’s hard to imagine what computing will look like in 15 years, but the AI revolution is likely to be the fastest technology boom and transformation ever in human history.
How We Made This Report
At Lightspeed, we stack-rank hundreds of millions of individuals and millions of companies based on publicly available, self-reported employee information such as schooling, advanced degrees, prior companies, years of experience, patents, publications, and more, in order to identify companies with top technical talent.
For this analysis, we isolated data from over 3,500 companies working in the Generative AI space today. We filtered for companies with AI at their core, rather than companies layering AI onto existing applications — the companies creating the AI ecosystem. We also filtered for companies that have had notable funding rounds (with one exception). The data was normalized and equally weighted to generate our final rankings. We worked only with publicly available data — companies in stealth, or recently emerged, may have fit our criteria had their data been accessible. Geographical and headcount data is also affected by this factor, and all figures below, including headcount, are specific to a company’s technical/engineering employees only.
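As a rough illustration of what “normalized and equally weighted” can mean in practice, here is a minimal sketch. The companies, metrics, and numbers are entirely hypothetical and are not drawn from Lightspeed’s dataset.

```python
import numpy as np

# Hypothetical per-company talent metrics (rows: companies, columns: metrics),
# e.g. share of engineers with advanced degrees, patents per engineer,
# prior experience at leading labs. Values are made up for illustration.
companies = ["Startup A", "Startup B", "Startup C"]
metrics = np.array([
    [0.40, 1.2, 0.30],
    [0.65, 0.8, 0.55],
    [0.25, 2.1, 0.20],
])

# Min-max normalize each metric to [0, 1] so no single metric dominates.
mins, maxs = metrics.min(axis=0), metrics.max(axis=0)
normalized = (metrics - mins) / (maxs - mins)

# Equal weighting: the final score is the mean of the normalized metrics.
scores = normalized.mean(axis=1)

for name, score in sorted(zip(companies, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```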
The Generative Eight
So, where are the leading purpose-built generative AI companies finding their talent? And who are those companies, exactly?
Our analysis led us to break the companies down into two distinct groups. First, The Generative Eight: The top eight pure-play AI companies when it comes to hiring talent.
These are the eight startups winning the race for tech talent. They’ve hired the highest-caliber engineers and AI experts amongst their peers. They’ve found funding success and some level of product-market fit. And they will be the FAANGs of the future — proper nouns that inhabit our conversations and change our patterns of living.
Here are The Generative Eight — the top AI companies for talent, today:
- TOME — AI that helps generate presentations and visual narrative storytelling
- CHARACTER.AI — Create and chat with a variety of AI-generated personalities
- ANTHROPIC — An AI safety and research company, building chatbots for work environments
- OPENAI — The research firm that released ChatGPT, which demonstrates the power of large language models, and DALL-E for image generation
- HUGGING FACE — Building models to perform specific tasks in enterprise
- JASPER — Models that create marketing and brand level creative content
- STABILITY AI — Creators of Stable Diffusion, one of the first image generation models released to the public
- MIDJOURNEY — A self-funded research lab that has released a powerful image generation tool that runs entirely on Discord
These are the companies that are winning the race for tech talent, which is only intensifying. Here’s how their race is shaping up so far:
The Generative Eight’s tech talent is coming from leading companies across many technology sectors — which includes some employees who have already moved on from OpenAI, one of the oldest firms in the group.
Additionally, many Gen Eighters have come from Google, which was taken by surprise by the wide release of ChatGPT and is known to have invested heavily in AI research for many years — but with few visible applications or consumer-facing products or services, until recently.
The education background of tech talent at The Generative Eight is heavily weighted towards many recognizable names, but it also reflects an international aspect, with Canada’s University of Waterloo’s Artificial Intelligence Institute claiming fourth place and Belgium’s KU Leuven Institute for Artificial Intelligence taking the eighth spot.
Much has been made of “Cerebral Valley” — San Francisco’s Hayes Valley neighborhood, rechristened to reflect its status as a locus of AI events, meetups, and hacker houses. At the same time, San Francisco has had the worst COVID recovery of any major metropolitan area in the US. It appears that talent is flooding back into the city and the Bay Area (or at least finding new jobs with AI companies).
We weren’t able to track which employees in a given geographic area are remote workers (other than the 33 who specifically indicated so in the data), but it’s safe to presume that Generative Eight employees abroad or in cities with low head counts are Zooming into a lot of calls. This data is also underreported, as some employees do not disclose their location.
The Next Gen
While the Generative Eight are claiming the outsize share of public attention, there’s another cohort Lightspeed identified in its research that is also attracting top-quality talent. These 11 companies — The Next Gen — share a common attribute: while AI is at their core, like the Generative Eight, more of them are tackling problems in specific sectors or are focused on particular use cases tailored to a real customer.
While many of the Generative Eight are poised to shape the future and the leading edge of AI, the Next Gen could be the companies that actually deliver the practical, applied AI tools whose outputs are used in industries from healthcare to entertainment. We filtered our data set to include only companies with at least 10 employees that had taken some external funding.
Here are The Next Gen companies pushing AI technology to practical applications:
- PICTOR LABS — Digital pathology (AI-powered histopathology) to accelerate clinical research on diseases
- ENDPOINT HEALTH — AI-assisted precision therapies for immune-related conditions
- OTTER.AI — AI-powered audio transcription, with a new chatbot meeting assistant
- BRIA — Image and video content with commercial use controls
- WELLSAID LABS — Text to speech voice overs
- SUBTLE MEDICAL — AI-powered image processing for radiology
- COPY.AI — Copywriting for marketing, via chatbot
- LEXION — AI contract assistant for sales, procurement, HR, legal, and more
- YOU — Search engine with a chatbot interface
- OCTOML — Cloud-based tuning and running of LLMs
- RUNWAY — Video, image, and other creative tools
These 11 companies are The Next Gen, and here is how their race for talent looks so far:
In 2021 the University of Washington was chosen to lead a new AI Institute in Dynamic Systems by the National Science Foundation, in addition to being the home of the Paul G. Allen School of Computer Science & Engineering, one of the leading AI research centers in the world, hence its place as a feeder for The Next Gen cohort:
When it comes to tracking prior employment, UW makes another appearance, along with Microsoft and the Allen Institute for AI, a research institute founded by the late Microsoft co-founder, Paul Allen. The Next Gen list companies Wellsaid, You, and Runway are all headquartered in Seattle:
Perhaps it’s no surprise that Seattle dominates The Next Gen list, given it’s home to several top tier companies and institutions that also covet AI talent.
In 2021 a Brookings Institution report listed “early-adopter” AI metro areas, naming “eight large tech hubs — New York; Boston; Seattle; Los Angeles; Washington, D.C.; San Diego; Austin, Texas; and Raleigh, N.C.” (This data is underreported, as some employees do not disclose their location.)
What’s next
According to Pitchbook, roughly $1.7 billion in deal value was generated in the AI space in Q1 2023. (That figure does not include Microsoft’s $10 billion investment in OpenAI, which was announced in January.)
Lightspeed continues to monitor and invest in companies at the forefront of the AI ecosystem, including recent investments in Tome and Stability AI (both on The Generative Eight), Mistral, Contextual, Glean, Typeface, and many more. Our partners have published manifestos about the future of SaaS and Fintech in a world of AI, and there will be more to come.
Lightspeed is also hosting regular Generative meetups in New York, San Francisco, and Los Angeles, where we invite everyone who is building products that incorporate AI and wants to share their experiences with colleagues to join us. Sign up to be notified about the next events in this series.
Finally, if you’re building in the AI space, we’d love to hear from you. We’re a global, multi-stage, multi-sector firm investing in everything from seed to secondary and beyond. Learn more about us at lsvp.com.
Lightspeed is a VC firm focused on accelerating disruptive innovations and trends in the enterprise, consumer, and health sectors. Lightspeed has backed 600+ companies globally in the past two decades including Nutanix, Affirm, AppDynamics, MuleSoft, Snap and Nest.
Data: Radhika Mardikar, Eric Wayman, Jerry Ye
Graphics: Elsa Jenna / Palette
Text: Paul Smalera