07/31/2023

Consumer

Scott Belsky on Creativity in Generative AI

Scott Belsky and Michael Mignano at Generative NYC summer edition

Our Generative series of meetups brings together the AI community of founders, builders, and investors to talk about the shape of the future. Recently, we held the summer edition of Generative NYC, where Scott Belsky, Adobe’s Chief Strategy Officer and EVP of Design and Emerging Products, sat down with Lightspeed Partner Michael Mignano to discuss the influence of generative AI on creativity, work, startups, and healthcare.

In the spirit of the subject matter, we ran the 6,400-word transcript through ChatGPT with instructions to remove filler words, correct grammar, and reorder sentences and thoughts for flow and coherence, while preserving anecdotes and speaking style. We got back 2,400 efficient words capturing nearly all of the nuance of the discussion. With some minor edits and re-additions from the original transcript, we’re publishing it here.
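For the curious, here is a minimal sketch of what that editing pass could look like if scripted against the OpenAI API rather than pasted into ChatGPT. It assumes the 2023-era openai Python client; the model choice, chunk size, and file name are all illustrative:

```python
# A minimal sketch of the transcript-condensing pass described above.
# Assumes the pre-v1 openai Python client and OPENAI_API_KEY in the environment.
# Chunking keeps each request within the model's context window.
import openai

EDITING_INSTRUCTIONS = (
    "Remove filler words, correct grammar, and reorder sentences and thoughts "
    "for flow and coherence, while preserving anecdotes and speaking style."
)

def condense(transcript: str, chunk_words: int = 1500) -> str:
    """Edit a long transcript chunk by chunk and stitch the results together."""
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    edited = []
    for chunk in chunks:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": EDITING_INSTRUCTIONS},
                {"role": "user", "content": chunk},
            ],
        )
        edited.append(response.choices[0].message.content)
    return "\n\n".join(edited)

if __name__ == "__main__":
    with open("generative_nyc_transcript.txt") as f:  # hypothetical file name
        print(condense(f.read()))
```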

The next edition of Generative NYC will be back at our Union Square offices on August 15, focused on Fintech in AI. To apply to attend, sign up here.

Michael Mignano: About a decade or so ago in New York City, during the mobile computing revolution, we used to have these exciting meetups. People from the community gathered to connect, hear each other’s stories, hire talent, secure funding, and showcase their products. It was an incredible time of innovation and collaboration.

When AI exploded on the scene about a year ago, it felt like we were experiencing one of those transformative moments again. It was clear that we needed an event to bring people together just like the old days. And that’s precisely what Generative NYC is all about. Tonight, we have a very special guest who holds a prominent position not only in the tech industry but also in the AI community. His name is Scott Belsky. He serves as the Chief Strategy Officer and EVP of Design and Emerging Products at Adobe, a company that has been a pioneer in nurturing creativity and empowering creators and media generation for decades.

Scott is not only an influential figure at Adobe; he is also a prolific angel investor. Over the years, he has invested in game-changing companies like Uber, Pinterest, and Carta. Personally, I have had the privilege of knowing and working with Scott for a long time. Back when I was at Aviary, a company later acquired by Adobe, I admired Scott’s insights as a product visionary. When I joined Adobe and found out he would be my boss, I was thrilled to learn from him.

Although I left Adobe after nine months to start my own company, Scott’s support remained unwavering. When I asked him to invest in my venture, he didn’t hesitate to commit. I will forever be grateful for his trust and guidance, especially during high-pressure moments, such as when we were selling our company, Anchor, to Spotify.

Scott’s experience goes beyond his role at Adobe and his investment ventures. He co-founded Behance, which was also acquired by Adobe, so he knows firsthand what it’s like to be a founder in the startup arena. Now, as we see numerous founders in this room building products that leverage AI, Scott undoubtedly has a front-row seat to everything happening in the world of generative AI.

Scott Belsky: Thanks for having me.

Michael Mignano: Let’s dive into generative AI. It feels like you’ve been at the forefront of this field. For me, the “aha” moment happened around nine or ten months ago when DALL-E 2 was launched. But what about you? When did you have that “aha” moment, both personally and for Adobe?

Scott Belsky: Well, one of the ideas that brought me back to Adobe, about five and a half years ago, as the Chief Product Officer, was the idea of “creativity for all.” I was passionate about envisioning what humans would be doing in a future where much of the work is achieved through computing. Creativity, in my opinion, needed to become the next form of productivity, enabling people to stand out at work and in school.

Returning to Adobe, I realized that the creative world faced significant friction. Many people lacked the necessary skills, and creative confidence seemed to peak during kindergarten, only to decline as we faced critics and challenges. I found it unfortunate that creativity tends to decrease over time rather than flourish. Additionally, I was intrigued by the future of digital experiences, exploring what’s next, such as 3D and immersive technologies.

The challenge was clear: creativity was a high-walled box. The entry barrier was daunting due to the skills required and the cost of tools. The ceiling represented the limits of what one could achieve within the constraints of time and ability. I aimed to make that box bigger, thus expanding Adobe’s total addressable market (TAM) while fostering creativity for everyone. We tried various approaches, from improving first-mile experiences to introducing new products and web tools, but none of them truly transformed the landscape.

Then we saw DALL-E, along with other generative AI capabilities brewing in our own labs, and suddenly it all clicked. These innovations effectively lowered the floor of the creativity box. People could now prompt the AI with natural language and watch their ideas come to life. Animations, 3D designs, and immersive experiences that used to be the realm of skilled artists became accessible to all through prompts and algorithms. The ceiling soared as people’s creative potential expanded with AI assistance. That was the “aha” moment when we knew we had to be fully invested in generative AI.

Michael Mignano: So, did this realization lead to the birth of Firefly, or was that a separate journey?

Scott Belsky: The concept of Firefly came to my attention through a white paper in 2019, presented by someone on my team. While I found it intriguing, it seemed distant and not yet fully developed. However, as the industry progressed, with significant milestones and open-source contributions from companies like NVIDIA and OpenAI, the potential of generative AI and Firefly as a product became more evident. The tide was rising, elevating the potential for various companies, including Adobe.

Michael Mignano: And when did Firefly evolve from an idea to a full-fledged product?

Scott Belsky: Firefly’s journey began back in 2019, long before most people were actively considering these possibilities. It took time and development, but eventually, it evolved into a comprehensive product with a series of models and powerful capabilities.

Every company must make serious decisions about what to build internally and what to outsource. We faced this dilemma at Adobe when considering generative AI. Initially, I wasn’t sure if we could be market leaders in this area by doing it all internally. It seemed like we might need a partner. However, as we delved deeper, it became clear that certain aspects, like LLMs, were best outsourced due to their complexity and cost. On the other hand, for imaging, video, and 3D, we possessed the necessary expertise and patents to excel. Moreover, customers were concerned about how some generative AI models were trained, especially when they used copyrighted material without proper permission. This realization fueled our determination to approach generative AI differently and set the stage for Firefly.

Michael Mignano: For those unfamiliar, can you provide an overview of what Firefly is and what features it offers? Also, where do you see it heading in the future?

Scott Belsky: Firefly is a family of generative AI models that began with text-to-image and text effects capabilities. This allowed users to create images with unique styles, for example, using palm trees or sake bottles to spell “hello.” As we progressed, we realized that customers wanted even more flexibility. They desired the ability to prompt on a layer-by-layer basis and use powerful tools like Photoshop for more in-depth creations. The launch of Generative Fill was especially remarkable, and integrating it into Photoshop by default proved to be a successful approach, facilitating product-led growth.

Michael Mignano: It’s incredible to see the impact of Firefly and what people can achieve with it. However, for startups, especially when established players like Adobe possess significant data and distribution advantages, how should they approach AI?

Scott Belsky: Startups need to approach AI with empathy and a data advantage. It’s crucial to understand the actual problems and frictions users face rather than being solely passionate about a solution. Empathy-driven solutions resonate better with customers. Additionally, startups should focus on building unique data sets that incumbents lack – building their moat. Competing based on interface innovation can be risky, but having a disruptive interface approach that challenges the status quo can be advantageous.

Michael Mignano: Big companies like Apple are also venturing into AI, potentially causing disruption. You’ve talked about the personalization wave and its intersection with AI. Can you elaborate on that?

Scott Belsky: Personalization will revolutionize digital experiences. In the future, generalized e-commerce and media experiences will seem outdated. Every website and media platform should be hyper-personalized for each user. The model needs to shift from a small group of people programming content for the masses to having the masses generate content personalized for each individual. The potential for hyper-personalization is enormous, and startups should explore this uncharted territory.

Michael Mignano: Let’s discuss the media side of things. Everything you said makes total sense, especially regarding commerce and experiences. When it comes to media, particularly content discovery and personalization, it seems like we’re heading towards content that’s not just programmed but created in real-time for each individual. While this offers exciting possibilities, there are concerns about how today’s incentives, such as maximizing ad revenue, could lead to a negative outcome. How do you see AI personalization avoiding a dystopian future?

Scott Belsky: I believe we secretly long for the way things were, but with more scale and efficiency. We miss the personalized experiences we had in small towns where people knew us, and that was wonderful. AI could enable a similar personalized experience, but it has to be done in a way that respects privacy and comfort. Trust and authenticity will become crucial. Brands will matter more as people seek reliable sources and verification before trusting information. The Content Authenticity Initiative, an open-source framework for content verification that my team has been working on for years, is especially relevant in the generative AI era.

Michael Mignano: It seems like we’re witnessing a transition in media. Established institutions may gain more trust again, given the rise of deepfakes and misinformation. People will look for verification before trusting content, and this shift could benefit long-standing brands and institutions.

Scott Belsky: Absolutely, trust and authenticity will become paramount. It’s a time when we can no longer believe our eyes without verification. Moving forward, we’ll have to verify content first and then trust it.

Michael Mignano: Let’s talk about AI’s impact on the creative process. Does generative AI enable people to be creators or creative directors? What’s the difference between the two?

Scott Belsky: Great creatives desire creative control, so we’ll move beyond the prompt era where generative AI offers limited control. In the creative professional world, people will still want to be creative directors of their work. Social media marketers will need to act in real-time, requiring creative director skills to execute using brand assets and AI capabilities. At the ceiling level, granular tools will still be valuable.

Michael Mignano: We’ve recently seen something akin to a “Napster moment” in AI: a deepfake South Park episode was released, nearly indistinguishable from the show’s original style. What are the implications of such developments, and what’s the likely regulatory response?

Scott Belsky: We’re entering the era of unauthorized sequels and IP nightmares, where anyone can create content using actors’ likenesses and voices. The situation poses legal and ethical challenges. We may see a Napster-like response with more stringent regulation due to the potential economic detriment to original IP owners. I’m not a lawyer, but one of the things I’ve learned about IP law is it’s very focused on the economic detriment of whoever’s IP you’re violating. And so if I’m inspired by your style, and I put it in my mood board, and I make something sort of like it, that’s typically fine. But if I literally use your voice and likeness, and you’re an actor, or a voice actor, and then I put something out there that people use at the expense of buying your material or paying you to do your job, that’s traditionally not fine.

Michael Mignano: I can see how the latter example gets shut down. The former sounds complicated, right? So if you’re inspired by me, do I get to participate in the commercialization of that in any way? And how does that happen?

Scott Belsky: Yeah, part of me is inspired by your former company, Spotify. They built an attribution and compensation model that, at first, no artist was happy with, but it became better than the alternative. I wonder, why don’t we let Behance members, with amazing portfolios and unique styles, train a model and make it licensable by others through generative AI tools? They could monetize even when they’re sleeping.

Michael Mignano: It’s funny because decades ago, there was a royalties infrastructure built for music that allowed songwriters to get paid no matter how big or small their contribution. It’s a completely antiquated system, but it might serve as a model for what could come next with AI.

Scott Belsky: I think that’s interesting. It reminds me of another product principle, especially in the enterprise: people are lazy, vain, and selfish. So if we make it easier to use great styles in a licensed fashion rather than relying on rogue generative AI tools with potential liability issues, people may prefer it and pay for it, just like with music. This could create a marketplace opportunity.

Michael Mignano: Let’s talk about the enterprise, and the impact of large language models (LLMs) in the workplace. How do you think they will transform the way we work, from organizational design to streamlining meetings?

Scott Belsky: It’s going to change everything in some ways. LLMs are being integrated into various products, and they can answer questions, perform tasks, and even suggest improvements based on data analytics. There’s a huge opportunity to use LLMs to assist with management tasks, which is something most of us struggle with.

If you look at what an assistant within a product can do, I sort of see it as a pyramid. It starts with helping you answer your questions. And then a little bit higher up in the pyramid, it can actually do things for you. So in Photoshop, you can say, “remove the background,” in natural language. And then the top of the pyramid is that it can actually suggest things to you, like, “hey, designer, that color does not actually perform well in that market,” or, “based on your analytics data, you should actually move that three inches to the left,” and you’re like, “What? Wow, OK.”
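Here is a toy sketch of that three-tier pyramid, with each tier as a function; the routing heuristics and analytics fields are illustrative assumptions on our part, not how any Adobe product actually works:

```python
# A toy sketch of the assistant "pyramid": tier 1 answers questions, tier 2
# performs actions for the user, tier 3 proactively suggests changes.
# All routing rules and analytics thresholds below are illustrative.
def assistant_respond(user_input: str) -> str:
    text = user_input.strip().lower()
    if text.endswith("?"):
        return f"[answer] Looking that up: {user_input}"    # tier 1: Q&A
    if text.startswith(("remove", "add", "resize", "move")):
        return f"[action] Doing it for you: {user_input}"   # tier 2: act
    return "[noop] No matching capability yet."

def assistant_suggest(analytics: dict) -> list:
    """Tier 3: unprompted, data-driven suggestions (top of the pyramid)."""
    suggestions = []
    if analytics.get("cta_click_rate", 1.0) < 0.02:
        suggestions.append("That color underperforms in this market; try a variant.")
    if analytics.get("scroll_depth", 1.0) < 0.4:
        suggestions.append("Based on analytics, move the hero element up.")
    return suggestions

print(assistant_respond("Remove the background"))           # tier 2 example
print(assistant_suggest({"cta_click_rate": 0.01, "scroll_depth": 0.35}))
```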

One of the things you mentioned is that there are parts of every day that are really antiquated, where LLMs can now help, and startups should build for them. One idea I’m passionate about, and hopefully one of you [in the audience] is building this, is management. I think most of us are bad managers. We don’t know what to talk about in our one-on-ones. If you could have cues for each of your direct reports – what they’re struggling with, whether they’re hiring people using Glassdoor or not, what their sentiment scores are, and so on – all of it analyzed by an LLM and presented to you during your one-on-one, would we all become more capable managers? I think so. Things like that still need to be invented.
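A hypothetical sketch of that management idea: collect a few signals per direct report and turn them into a prompt an LLM could expand into talking points. Every field, threshold, and helper name here is our own illustrative assumption:

```python
# A hypothetical sketch of the "management cues" idea above: gather signals
# about a direct report and build a prompt for an LLM to turn into one-on-one
# talking points. All data fields and sources here are illustrative.
from dataclasses import dataclass, field

@dataclass
class ReportSignals:
    name: str
    struggles: list = field(default_factory=list)  # e.g. from project trackers
    open_roles: int = 0                            # current hiring load
    sentiment: float = 0.5                         # e.g. pulse-survey score, 0-1

def one_on_one_prompt(r: ReportSignals) -> str:
    """Build a prompt an LLM could expand into 3-5 talking points."""
    return (
        f"You are an executive coach. Prepare talking points for a one-on-one "
        f"with {r.name}.\n"
        f"- Struggles: {', '.join(r.struggles) or 'none reported'}\n"
        f"- Open roles they are hiring for: {r.open_roles}\n"
        f"- Recent sentiment score (0-1): {r.sentiment:.2f}\n"
        f"Flag anything that suggests burnout or a stalled project."
    )

# Usage: send this prompt to whichever LLM you use, alongside your own notes.
print(one_on_one_prompt(
    ReportSignals("Alex", struggles=["migration slipping"], open_roles=2, sentiment=0.62)
))
```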

Michael Mignano: You’ve worked with several LLM companies. Do you think we’ll experience a long tail of models or a concentration of power among the big players?

Scott Belsky: Long-tail human tuning is happening under the hood to make LLMs more capable and human-like. This is not just about technology, but about the hard work behind it. While big players are integrating LLMs into their products, there’s also room for highly specialized LLMs for individual use cases. This specialization could create a diverse ecosystem.

Michael Mignano: I remember your advice about doing things that don’t scale. For instance, when we were building our company, Anchor, we hired dozens of college students to manually distribute podcasts to various platforms like Spotify and Apple Podcasts. It was highly unscalable but very impactful, and changed the trajectory of the company.

Scott Belsky: Exactly! Startups should focus on non-scalable tasks that big companies won’t bother with initially. It can give them a competitive edge.

Michael Mignano: As we wrap up, aside from Adobe’s work, what generative AI products would you like to see in the world?

Scott Belsky: Well, we talked about the management idea. One debate in my head about where the future of that space lies is whether we should have one general AI assistant or many highly specialized ones. The latter seems more plausible to me. Specialized AI assistants, like those focused on health, could leverage personal data to provide unique insights and patterns about daily activities and health trends.

For example, one of the reasons why I started wearing my Whoop again is because while this data may only be partially helpful to me today, in five years, when I can sync all my data with a health-tuned LLM that knows other things about me as well, I could start getting insights about my daily activities and health that I never even imagined. So I think that we should all start collecting some of the data now, so that we can kind of leverage it when technology catches up.

Michael Mignano: Scott Belsky, thank you so much for being here with us!


To apply to attend Generative NYC: Fintech in AI, sign up here. To be notified about future Generative events in NYC, SF, LA, and London, fill out this form.

Lightspeed Possibility grows the deeper you go. Serving bold builders of the future.