02/19/2019

Enterprise

Fiddler Labs and the Future of AI

Machine Learning (ML) and Artificial Intelligence (AI) are already a huge part of our everyday lives. They shape what you see, what you buy, whether you get hired, and whether you can get a mortgage for your house. And this is just the beginning: as ML models become better and easier to build and deploy, they will influence and dictate ever more of the decisions that affect our lives. Organizations will be incentivized to replace or supplement human decision-making with AI to achieve scale and self-improvement that wasn't possible before. Those who do not adapt will offer inferior services or products, and at lower margins, than those who do.

So how will organizations that plan to use ML to replace human decisions understand what their software is doing? How will they ensure ML doesn't discriminate on race, religion, or sex? How will they be able to look into a black-box ML model and truly understand what it's doing? Are we building software that's fair, ethical, and free of bias, the same way we've trained our existing workforce to be? These questions pose significant business risks for organizations.

Those are some of the key questions that led us to our seed investment in Fiddler Labs, a company building an AI platform made for the real world.

Fiddler recently announced a product strategy focused on a key question that enterprises are increasingly asking themselves: “How do I build and deploy an ML model in a way that lets me clearly understand exactly how it makes decisions?” This is a question organizations are willing to invest in meaningful hiring to solve today, not only to help them debug and improve their software, but also to release ethical software that avoids compliance risk. However, many organizations are finding that they can't hire this talent, let alone scale it to keep up with the number of commodity ML models being built and deployed across their organization.

Fiddler Labs was started by former Facebook and Samsung engineers and product managers Krishna Gade and Amit Paka to solve this problem. They are working with industry and academic experts to create the world's first “Explainable AI” engine, which will enable any organization to easily build a trustable AI solution that works with its existing AI/ML toolkit, and should accelerate the adoption of AI and ML in the process. Fiddler's Explainable AI Engine is geared toward the business risks of deploying AI: model bias, compliance, the AI black box, and data privacy.

In my early conversations with the founders, I could immediately see how big and clear their shared vision is. As organizations use the platform to build, debug, and deploy trustable AI, Fiddler plans to become a key component of the ML operationalization pipeline. This area will only become more important as the number of ML models deployed within organizations grows.

We couldn’t be more excited to partner with Krishna, Amit, and the rest of the Fiddler team as they unfurl their vision of AI on the world, helping the world innovate faster while upholding fairness and ethics.
