a16z Podcast - Embedding AI: The Questions Every CEO is Asking
Episode Date: May 15, 2023

2022 was a breakout year for AI. While machine learning had already been integrated into applications for millions of users, for many, these tools still felt like their first real-world encounter with... AI.

As AI continues to revolutionize industries, CEOs are discussing how to integrate this new superpower. They are also considering important questions around data privacy, competition, cost, accuracy, and speed.

In today's episode, we talk with Cresta, Hex, and Sourcegraph, three companies at the forefront of integrating AI into their existing products. From navigating data privacy concerns to optimizing accuracy and managing costs, these leaders are navigating the complexities of this new superpower.

Topics Covered:
00:00 - Introduction
02:51 - How AI can enhance customer service
08:26 - Using AI to shape data and analytics
09:33 - Solving the challenges of contextual understanding
12:01 - Giving AI the right information and context
13:31 - Tools that help build large language models (LLMs)
15:39 - Building open source tools
18:40 - Constructing prompts
22:26 - How do you differentiate?
23:48 - Customization as a moat
25:26 - Privacy challenges
29:14 - Language models and search engines
30:41 - Cost and pricing of models
32:48 - What does the contact center look like in 2028?

Resources:
Find Barry McCardel on Twitter: https://twitter.com/barrald
Find Beyang Liu on Twitter: https://twitter.com/beyang
Find Zayd Enam on Twitter: https://twitter.com/zaydenam

Recent AI episodes:
From Promise to Reality: Inside a16z's Data and AI Forum
Beyond Avatars: How AI is Reshaping Online Identity
Unlocking Creativity with Prompt Engineering

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business,
tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. For more details please see a16z.com/disclosures.
Transcript
It is not true that language models make search engines unnecessary.
If anything, they make the search engines more valuable because now all that data that you can search becomes like 10x more powerful.
2022 was a breakout year for AI, as several new tools gained mass adoption.
In fact, many have even claimed that ChatGPT is the fastest growing app of all time.
And despite machine learning being embedded in many applications already,
For millions of users, these tools felt like their first real-world encounter with AI.
And that's because tools like ChatGPT or Midjourney have put AI at the forefront.
But looking ahead, that may not be the case.
The same way that your users don't care if your web app was built with Angular or React,
or if it happens to be running on AWS or Heroku, the use of AI alone will not be enough to win over users in the long run.
Instead, there will be a whole host of ways that companies differentiate as they cleverly embed
AI with a nod towards solving their customer's core problems.
And that's precisely what seems to be the topic of conversation in every boardroom.
CEOs are asking how to best integrate this new superpower, but they're also asking important
questions around data privacy, competition, cost, accuracy, and also doing all of this really
quickly. How do you structure the data? How do you train them up? How do we make them more
efficient, more insightful, more impactful? And also, why haven't we shipped this yet? And in today's
episode, we speak with three different companies tackling exactly this, exploring the unique
challenges and considerations of implementing AI into their existing products and what they've
learned so far from this fast-moving platform shift. As a reminder, the content here is for
informational purposes only, should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security, and is not directed at any investors or potential
investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments
in the companies discussed in this podcast. For more details, including a link to our investments,
please see a16z.com/disclosures.
Now, before we get into the weeds of how each company is implementing AI,
we wanted to start our conversation with Zayd Enam, co-founder and CEO of Cresta.
Cresta is bringing real-time intelligence and AI to the contact center.
And here, Zayd reminds us of the upside of AI and what it can truly unlock.
So I read that contact centers have 30 to 45 percent
attrition in the first year with agents. And in some cases, that can even be as high as 80%.
So what is going on in these contact centers? Why is attrition so high? The employee net
promoter score for contact centers is often less than zero. So generally, it's not a role that
folks sort of end up sticking with for a long time for a few different factors, sort of seasonal
demand for increased contact center volume, you have relatively low wages and the sort of high
inflation environment, and then you have overall sort of a pretty high-stress job.
It's, like, sort of taking phone calls from frustrated customers in that kind of
environment.
Those things add up and lead to, like, a relatively higher turnover environment.
Absolutely.
I feel like most people don't need a reminder of how brutal that job can get because they've
been on the other side.
Now, many people jump to thinking that AI should replace the contact center and customer service
agents.
But Cresta instead has its eye on transforming a historically
low-NPS job into one of mastery and creativity.
When you look at artificial intelligence, I think there's really two ways to look at it.
There's one way to look at it, which is lazy artificial intelligence.
And that's basically, I have an existing process that my business does right now.
I'm going to take that process end to end, and I'm going to automate it.
And AI is really good at that.
There's another way to leverage AI, which is more like in a creative approach, which is understanding
my job or my role as a business is to deliver the best possible customer experience or
the best possible product experience.
And so, like, the information and the knowledge that's in my conversations with my
customers is, one, just a really great way to build strong relationships with my
customers, but two, it's just, like, a goldmine of data and information about my product, my
market, my competitors, what's happening, how the world is changing.
Can you give a couple examples of that?
Like, what can we do with this technology that a human alone could not effectuate?
Yeah.
So, like, a great example of it is when you have, like, thousands and hundreds of thousands of conversations
with your customers, the nuggets of information about product feedback that you can use to
sort of improve your product, or improve how you position your products in the market,
or how competition is changing. Specifically, like, we work with companies that are large
telecommunications companies that are constantly looking at and evaluating how they are pricing
and packaging their various phone bundles or various cable bundles and health bundles.
And there's just a lot of nuance and data in understanding, okay, how does my customer perceive
it when I make this offer, what is the context that they're coming in with, and that really informs
their market strategy in terms of what's the new product that they should launch, what's the new
package or pricing that they should bring to market. That's a way that a company can accelerate
its development based on just having a very, very close, like, ear to the ground and, like, a real
strong pulse on the language that customers are using. And that's ultimately the best way to build
companies and products. And artificial intelligence is amazing at that because like what's possible
now with large language models, what's possible now with like sort of very advanced deep learning
is that you can summarize, you can sort of synthesize, you can pull together information and context
in ways that they can take huge amounts of data, make it super simple to understand, and get to
insight and get to understanding really quickly. Previously, those conversations were unstructured,
not parsable; they were sitting in legacy, on-premise audio files that you'd have to
listen to for 20 hours to figure out what's going on. And now you have
this super advanced, human-level speech transcription, human-level summarization,
human-level question answering, that helps these companies get to just the next level of
iteration as a business. What you're saying is actually really fascinating. So when I thought of
customer service, I thought, okay, this AI will actually be able to help converse with the customer.
But what it sounds like you're saying is it also provides the business with an additional
layer of understanding because it can basically go through all of this unstructured data and
gain insights from it. Like you said, if some cohort of customers is on one plan and they notice something
about the conversations, they can now parse through, again, this large bucket of unstructured
data and basically go back to the company and say, hey, maybe you should start a plan
of this nature. Maybe you should offer this cohort of customers this deal. Maybe you should
talk to them in this way. Is that basically what you're saying? Yeah. And that's like the
missing piece of all this is that it's a bidirectional thing, right? It's like you're serving the customer,
but you're also serving the business. And there's a lot of information in all of this.
For me, like, the most inspiring example of this is actually Andy Grove. Intel was originally
a memory business, and it was a memory business for a very long time. And then there was
what Andy calls a strategic inflection point, where there was a 10x difference in the cost of
memory production. And so all these Japanese manufacturers were able to produce memory
at that 10x lower cost, and it completely changed the dynamics of that market.
And it's funny, the way he phrased it: the salespeople at Intel in Japan came back to
headquarters, and they would say the customers are no longer as
respectful, as it were. That was the first signal that he got
that something was different in the market. But that was an input to him that ultimately
led to him dramatically pivoting and changing the strategy of the company to become a microprocessor
company, because they realized that the market was completely changing. Customer reception
to their products and to their salespeople was changing, and they needed
to hard pivot the company to become a microprocessor company. It's a hard decision to make.
But when you have the data, when you understand what's going on with your customers,
you're informed to make the right product and company strategy decisions
that ultimately save the company.
So it turns out that these AIs won't just support your customers,
but they can actually be an ear or a thousand ears to the ground,
feeding data back to your company, informing how you can better serve them.
Now, we'll come back to the contact center of the future,
but first let's introduce Barry McCardel, co-founder and CEO of Hex.
Hex is like a sculptor's tool for data: molding, shaping, and refining data
into valuable insights and visualizations in various formats.
Or, in Barry's words, for collaborative data science and analytics.
Let's hear a little bit more about what the Hex team is building into their product.
Earlier this year, we launched the closed beta of what we call Hex Magic.
Magic is basically a set of AI tools built right into the Hex UI.
And so it lets you generate and edit code.
For example, this morning I asked it, you know,
what's our account of paying customers broken down by pricing tier with sum of revenue?
And it wrote me a SQL query that did that.
Or you can say refactor this Python code to be a function, and it will do that for you.
It also has features to document your code, which is really useful if you're staring at, like, a super complex query or something that someone else wrote.
It's like, what's going on with this? You can ask it, and it'll tell you.
Sounds pretty magical if you ask me.
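Hex hasn't published Magic's internals on the show, so the details below are assumptions, but the flow Barry describes (a natural-language question plus the user's database schema, handed to a model that emits SQL) follows a common pattern. A minimal sketch in Python, with a hypothetical `build_sql_prompt` helper and schema:

```python
def build_sql_prompt(question: str, schema: dict[str, list[str]]) -> str:
    """Pair the user's question with the tables and columns the
    model is allowed to reference, so it doesn't invent names."""
    schema_lines = "\n".join(
        f"- {table}({', '.join(cols)})" for table, cols in schema.items()
    )
    return (
        "You are a SQL assistant. Use only these tables:\n"
        f"{schema_lines}\n\n"
        f"Question: {question}\n"
        "Answer with a single SQL query."
    )

# Hypothetical schema mirroring Barry's example question.
schema = {
    "customers": ["id", "pricing_tier", "is_paying"],
    "invoices": ["customer_id", "revenue"],
}
prompt = build_sql_prompt(
    "Count of paying customers by pricing tier, with sum of revenue", schema
)
# The prompt would then go to a model API, and the returned SQL
# would be shown to the user for review before it is run.
```

Keeping the human in the loop, as Barry describes later, then amounts to displaying the generated query rather than executing it automatically.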
But it's one thing to dream up these new AI features and another to successfully roll them out.
And as companies like Hex, Cresta, and Sourcegraph, who you'll hear from soon, do exactly that,
they're inevitably running into challenges.
One issue we've probably all experienced at some point is when technology just does not have the right
contextual understanding. Like being caught in a customer chatbot loop that just is not addressing
your query, something that Cresta knows all too well. We've all been there when we're interfacing
with a virtual agent and it's asking you the same question over and over and you're like, that
is not my problem. Clearly, my problem has not been coded into the potential set of solutions.
And I find myself in those cases just writing agent, agent, agent, because I need someone with more of that
broad context or the ability to interface in a more dynamic way. But I could also see how
that could introduce maybe some unexpected responses. I mean, I actually saw someone on
LinkedIn today post that he was talking to a virtual service agent and they called him
beautiful soul. And he's like, who would do that? What human would do that? Soon, we'll hear from
Barry and also our third guest, Beyang, about how we can actually improve the outputs of these
models. But first, here's Zayd commenting on how the utilization of these models, if implemented
incorrectly, could really impact your brand. What are you seeing in terms of in the field how
different companies are gut checking what's coming out of these LLMs and interfacing with their
customers? And I'll also tack on a question, what are people doing in terms of ascertaining
whether they should actually disclose whether it's a bot or not? Because as we are getting closer to that idea
of human level parity,
should we be telling people it's still a bot?
Or is it actually good enough
where we don't have to anymore?
I think at the end of the day,
if you are a brand,
you are fundamentally building trust
with your subscriber or customer base.
And your brand value is like sort of,
hey, I can call up this company
and like they'll take care of me or like I can trust them.
And so like in that case, like I do think
leaning towards transparency is the best sort of thing there
because long term you want to sort of
be known as a company that doesn't mess around, or, like, you're direct, you tell the customers
how it is. And it becomes an interesting topic as well on the topic of accent masking,
right? Because like that's another use case of the technology that's sort of coming to market
where folks can like use this technology to sort of mask accents or change accents or these
kinds of things. And it's the same thing, right? Where like if you're not upfront about it and
like there's like some edge case that comes up and it becomes like clear, do you want to take
that risk to your brand reputation or not? So in addition to the right disclosures, we can actually
improve the performance of these models relative to competitors, even if we're using the same
underlying models. How do we do that? Through fetching the right data. Because it turns out that
if AI doesn't have access to the right data, it'll just make something up. Garbage in, garbage out.
Value in, value out. Now here is Beyang Liu, co-founder and CTO of Sourcegraph, a code search
and navigation tool for dev teams. It's this kind of, like, general-purpose source code understanding
engine. Here is how Sourcegraph is thinking about the importance of locating the right
information, a problem that they've been working on since inception.
So our first kind of major push, I would say, is this editor extension called Cody.
And essentially, what it does is it's a chat-based interface, but also allows you to
search for stuff in context in the code. And the idea is that, like, we wanted something in our
editors that took full advantage of the power of language models, but also kind of addressed a lot of the
challenges that people have encountered with large language models, you know, namely the tendency
to hallucinate facts when they don't really know the answer. And so that's a place where we thought
we could be uniquely positioned to help, because Sourcegraph, you know, with all the pieces
of context that we have around searching for code and finding references and verifying things
actually exist, we are kind of like the perfect fact checker, if you will, for the language model
and perfect like relevant context provider to the language model. And I mean, let's double click on that
because there are other tools that help you build code using these language models.
I'll just throw out a couple.
A lot of people are familiar with GitHub copilot.
A lot of people are familiar with what Replit is doing with Ghost Rider.
But maybe you could actually speak to this idea of fetching the right information.
Like how would something like a co-pilot do that?
And how would something like a Cody actually differentiate?
I think the way we think about it is, as far as I know,
Cody is the only AI-enabled editor assistant or coding tool today that
fetches context as broadly as we do. You can ask Cody a question like, hey, you know,
where is, like, the SAML auth provider defined in my codebase, or where's, like, the GraphQL search
API defined? And Cody will actually go and convert that to a couple of search queries
against Sourcegraph, and surface the relevant snippets of both code and documentation,
and use those as, like, concrete references to answer the user's question. And that's in contrast to
the way that Copilot works today.
It's purely kind of like autocomplete-driven.
And the context that they fetched to do that autocompletion is kind of like recent files
that you've opened in your editor.
So it's kind of like this very local context, which works amazingly well.
I mean, like huge credit to that team.
We think that the next evolution of that is providing more relevant context.
And essentially emulating like what a human kind of does when you're trying to write code, right?
Like you as a human, you might go back through some recent history in your editor to see like,
okay, how does that code work?
And use that as, like, a pattern-matching reference point
for the thing that you're currently writing.
But more often than not,
I think you're doing stuff like,
go to definition, find references.
Let me see a couple of examples
of how to use this particular API that I just imported.
I think that that's going to lead to much better results.
I think it's also going to lead to much more kind of introspectable results.
So getting beyond this, like, oh, LLMs are magic.
How do they work?
Is it AGI, you know, whatnot?
It's like, Cody will actually tell you like,
hey, I read these files and these are the files I'm using
to generate an answer.
And if it completely returns a lie or is wrong,
you can usually tell by looking at the context that it read.
Like, why are you reading that file, Cody?
That's dumb.
And you can, like, thumbs down that
and we'll take that as a reference point
to improve the product later.
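Beyang's description of Cody's flow (convert the question into search queries, surface relevant snippets, then answer from those concrete references) can be sketched in miniature. The keyword-overlap `search` below is a toy stand-in for Sourcegraph's real code search, and all file names and contents are hypothetical:

```python
from collections import Counter

def search(query: str, files: dict[str, str], k: int = 2) -> list[str]:
    """Rank files by naive keyword overlap with the query; a toy
    stand-in for Sourcegraph's actual code search engine."""
    terms = query.lower().split()
    scores = {
        path: sum(Counter(text.lower().split())[t] for t in terms)
        for path, text in files.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [p for p in ranked[:k] if scores[p] > 0]

def build_context(question: str, files: dict[str, str]) -> tuple[str, list[str]]:
    """Return (prompt, files_read) so the user can inspect exactly
    which snippets were used to ground the model's answer."""
    hits = search(question, files)
    snippets = "\n\n".join(f"# {path}\n{files[path]}" for path in hits)
    prompt = (
        f"{snippets}\n\n"
        f"Question: {question}\n"
        "Answer using only the code above."
    )
    return prompt, hits

# Hypothetical repository contents.
files = {
    "auth/saml.py": "class SamlAuthProvider: ...  # saml auth provider",
    "api/graphql.py": "def search_api(): ...  # graphql search api",
    "README.md": "project readme",
}
prompt, files_read = build_context("where is the saml auth provider defined", files)
```

The returned `files_read` list is what makes the result introspectable: if the answer is wrong, the user can see which files were fed to the model and thumbs-down the bad retrieval.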
Another key question that founders are asking
is how they'll differentiate.
And Sourcegraph thinks that open source
is part of that answer.
The other major distinction I would draw
from a lot of other offerings on the market
is we are trying to build this as much in the open
is possible. So, like, Cody, the editor extension, we've released as open source under the
Apache 2 license. And we think that's kind of like the right mentality to build the standard
AI editor assistant that every dev should use. We think that there's like a natural inclination
among developers. I'm a developer myself. I prefer to use open tools. And I think it will make it
possible for us to build like a much more like plugable ecosystem. Like when you think about like
the wide, wide world of like context that you might want to pull in to answer a question in
your mind as a developer. It's not just the source code or the markdown documentation.
Like, maybe you're searching through your issue tracker. Maybe you're searching through
like chat messages. Maybe you're searching through like Google Docs or notion for like the latest
product spec. And by virtue of being open source, we make it a much better and friendly kind
of ecosystem into which we can plug in like other developer tools and other pieces of context that
aren't necessarily tied to a proprietary compute platform. Yeah. I mean, something that's jumping out to me is
if people aren't developers, like even think about writing an article, right?
Imagine writing that article based on just like your last five tabs that you had open.
That's going to be very different to being able to actually channel your goal and search the web yourself,
search your own notes that are specifically likely relevant to the task at hand.
And so I think this is like a really interesting perspective as to we know what goes into the model matters
in terms of what you get out of the model and basically what you're establishing.
is two different approaches, both the open source approach,
which allows more people to help you extend it,
but then also this approach of really finding the most relevant information
that's going to help the model give you the best answer.
So we learned from Beyang that what you feed models really matters,
and that may be an opening for differentiation relative to competitors.
But part of that equation is just the data that you have access to.
Here's Zayd commenting on the value of creating proprietary
datasets. The opportunity I think that exists in the market is that you can do that exact same
thing if you can collect proprietary data sets that are unique and at that same scale. And you can
actually sort of train foundation models that are able to do even more. So like the internet or
the web pages are like a fairly static sort of language modeling task where you're sort of doing
a task and you're trying to complete the next token or the next word. But the kind of things we
interact with in the contact center are both dialogue and action: you're
interacting with someone, and then you're working with enterprise software and systems of record
to fill out basic things.
It's this intersection of two sets of data
that don't really exist together,
and that sort of enables us to then train over time
these large foundation models that use the intersection of those two data sets
to sort of do things that aren't possible just with the web pages.
Another wedge may be customization.
How can you actually use the information from past user behavior
to support the user in creating better prompts?
Here's Barry's take.
Under the hood, we are doing
a ton in terms of constructing the right prompts and parsing responses back from the model APIs we're
using. And so, again, we have thousands of people already writing SQL and writing Python and doing
data work in Hex every day. So we have a ton of information: we're connected to their
database schemas, so we see the structure of their data. We see past queries and past code they've written,
so we know which tables and columns are most frequently referenced. We have information about
the project they're building, so we know, like, oh, this project is already referencing this part of the
schema, and that's probably the relevant data for this. You could even look at things like
this is the typical way this organization is formatting their charts and infer how they might
want their visualizations to look. And so there's a ton of context we get because we're incorporating
this in an existing set of workflows that can help us create the right context and create the right
prompt to pass to the model. And I think with all of this right now, again, our focus really is
on augmenting the data professionals. Like this is not like, you know, trying to build some black box
thing. It's actually like we have a rule as we're building these features. It's like we don't run
the code for you. We will generate the code. We'll show it to you, but we want to keep the human
in the loop. AI sure can feel like a black box. So designing a UI that actually guides the user
is also becoming a differentiator, at least for the time being, where you're effectively
nudging users to want to interface with these new tools, but also come back. Any learnings in terms
of what is increasing completion rates? What is getting people to actually interface
with this new, in some ways, superpower within the app
because there is the flip side
where if it's hallucinating or if it's not helping them
or in some cases it may even be hindering their workflow,
they're not going to return to it.
So how are you thinking about designing that?
There's a bunch of things here.
I think there's, like, some UI things we've done,
as an example, like in Hex, you know,
we kind of have these code cells that you use.
And so we put a lot of thought into it, like,
what's the right way to interact with that?
It's like a comment completion.
I think giving people
feedback on what's happening. Some of these models, like, the latency is still really high.
So even something small, like just being really thoughtful on, like, how you're exposing
to the user, what's going on and why they're waiting. I think one of the things that we've
observed sort of more on the back end in terms of increasing completion rates is when you're
building prompts, I think we were tempted early on to try to shove as much context as we could
in. You kind of figure, like, the more I can tell this model, the better.
Then you realize you can pretty easily confuse a model in terms of the amount of
context you're passing. And so we've been really thoughtful on iterating through and finding
what's the right level of context to pass through between what else is going on in the project
or the underlying data schemas. It's everything you're doing before you're sending
requests to that API: building the right context, understanding
how different models are going to respond to different types of prompts and different amounts
of context, how you're potentially even chaining together different types of models or different
modalities of models in one sequence, and then how
you're giving that feedback to the user.
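Barry's point that too much context can confuse a model implies a selection step before each API call. Hex's actual heuristics aren't public; the sketch below shows one simple budget-based approach, where snippets are assumed to be pre-ranked by relevance and a crude word count stands in for token counting:

```python
def fit_context(snippets: list[str], budget: int) -> list[str]:
    """Greedily keep the highest-priority snippets (assumed to be
    pre-ranked by relevance) until an approximate token budget is
    spent, rather than stuffing everything into the prompt."""
    kept: list[str] = []
    used = 0
    for snippet in snippets:
        cost = len(snippet.split())  # crude stand-in for a tokenizer
        if used + cost > budget:
            continue  # skip anything that would overflow the budget
        kept.append(snippet)
        used += cost
    return kept

# Hypothetical, pre-ranked context pieces for a data question.
ranked = [
    "schema: orders(id, total, created_at)",
    "recent query: SELECT * FROM orders",
    "org chart style guide " + "x " * 200,  # low-value filler
]
kept = fit_context(ranked, budget=20)
```

A real system would also tune the ranking itself per model, since, as Barry notes, different models respond differently to different amounts of context.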
Yeah. I mean, I spoke to someone yesterday, and what you're saying really resonates with
what they said. They started off by creating, like, a Lensa-like product. But at the end of the day,
there wasn't really a moat there, because anyone could kind of hook up to DreamBooth and create
the same thing. Yeah, right. But today, I asked him, like, how he's thinking about differentiating.
And at least in his scenario, he was saying, you know, I now link 15 different models.
And so maybe that isn't much of a moat either, but the point is it's not as simple as just
hooking into one API.
So we've touched on different aspects of differentiation from customization to context
fetching.
But is any of that truly a moat?
Where does value really accrue when your competitors can copy your UI or your latest feature?
Let's attack the question head on.
Where is the competitive advantage?
Where is the moat?
Is it in the UI?
It feels like that can be replicated very easily.
Is it in the cleaning of the data, the linking of the data?
Is it something else? Like, how are you thinking about the fact that you can basically depend on all your competitors to replicate what you do if it's working?
Yeah, we've already seen that, down to some UI elements that look very familiar. Yeah, the models themselves are commodities. And that's going to accelerate even in the next couple of years. We're going to see a ton of different types of models emerge. We're going to see the costs plunge even further down. I think the ability to plug these in will become as ubiquitous as, like, using cloud services. It's just like everyone
does it. Being hosted on the cloud is not a differentiator in any way these days. I think there's
a few places where potential moats emerge. One is we do have a pretty great data advantage.
You know, already having hundreds of customers, thousands of users, writing millions of lines
worth of SQL and Python, we can use that sort of rich information. We're not using it to train
models, I should be very clear. That's one thing we've been very upfront with our customers about:
we're not training models where they would expect to ever have some code they've written
be a completion for someone else, which is a problem in other places. But it's more like using
that actually to personalize for that person and their team. Like, you know, again, like which
schemas you're using. What is their code style been like in the past? Like having all of that
information and people who are already doing that work in Hex gives us a big advantage.
Thing two is, and I talk about it with the team all the time, it's kind of what we've already been
doing, which is building really thoughtful, well-constructed user
experiences and UI. You could say that there's very little moat for a lot of things we or a lot of
other SaaS companies do inherently in their products. It's really how do you put these pieces together
and how do you build a really great user experience, both from, you know, the pixels on the screen
to thinking about performance behind the scenes to things like docs. Like that all sort of combines
to being a really superior product experience, and it's stuff that we're, you know, three and a half
years into taking super seriously. And I do think that long-term, generative AI, large language
models, they will change a lot of, like, fundamental assumptions. And I think that there will be an
advantage to the companies that can be the first ones to figure out what those things are.
Just like social apps benefit from network effects, it's actually not crazy to imagine that
even if certain products are built on the same models, first movers that incorporate customization
may maintain an advantage. Because if you spend months or even
years personalizing a model to your specific needs, would you want to do that again?
But as these platforms collect more information about your preferences and potentially even
train models on your data, certain companies may build a data moat. But that's also where
privacy comes into play. The future of privacy and security in AI is a complex and evolving
issue, especially since your competitors may be on the other side of that API call.
Companies are rightfully asking questions about data collection and data storage.
A lot of these companies that are integrating AI are building off of just a few models,
right? A lot of people are familiar with OpenAI's API that came out recently. But there's also
that very interesting dynamic that a lot of the same companies that may even consider themselves
competitors are using similar models. And so how did you think about that? And also there's this
kind of layered question as it relates to security and privacy, because depending on the company that
you are, your code is actually potentially anywhere from somewhat to extremely proprietary,
right? If you're talking about like a self-driving car company. It's especially pertinent to us
because we have a lot of enterprise customers that are very security and privacy sensitive
to the point where, you know, one of the reasons we made it self-hostable is because we wanted
to enable companies that didn't want to put their code bases in the cloud to still have,
like, awesome code understanding and code search tools. So the space is fast evolving.
And our mentality is like, look, we have a wide range of customers from like very conservative
large enterprises to like fast moving startups that have different risk and security profiles.
The language model in our like overall architecture is just one component.
The other, you know, components being the source graph code graph and the various other
developer tools that we want to integrate in kind of an open way.
And so we want to make it possible to kind of like bring your own language model to the table.
So some of our customers have negotiated separate deals with model providers, and they want
to use that model. Some have in-house models that they've built
that they want to use. And some are like, hey, can you give us something that we can self-host?
Our desire as a business is to do what's best for our customers and users. And right now,
that just means, like, whatever is the latest and greatest on the market, like, let's try to
plug it in and see how it does. So you're basically saying that you give them the selection or
the option. Am I understanding that correctly? We'll give you the option. So right now,
you can use Claude, which is Anthropic's flagship model.
You can use ChatGPT, which is, kind of, the OpenAI model.
And we're looking to integrate additional models, too.
And there's also kind of like different models that we plug in in different pieces of Cody, right?
So there's kind of like the chat-based models, which are often like the largest ones.
But there's also things like the embeddings model, right?
That's what we use to generate the embedding vectors that we use to do really good, kind of like
fuzzy, semantic-level code search. And we have an open source model that we fine-tuned, and that's
actually, like, the best embeddings model that we have. But I think our mentality is just, like,
the language model aspect of this, we want to make as pluggable as possible.
That's amazing because something that also relates to is cost, right? Like, each of these
different models has a different cost. I think a couple weeks ago Bing, like, 5x'd their pricing
overnight, right? Like, you have a dependency as well, both Sourcegraph, but also, like, that ends
up filtering down to your customers.
And so every one of these models, I mean, I think we're still in the early innings and there's going to be so many more developed.
And each one will, to your point, it'll have a different security posture.
It'll have different pricing scheme.
It'll probably, you know, there will be a range in terms of its efficacy or specialty in certain areas.
And so, you know, it never dawned on me that actually, you know, you could offer the access across the board to all these models, but also kind of relay the transparent pros and cons to the customer base.
That's exactly how we're thinking about it.
And for us, it's kind of like there's so much innovation happening in that space.
We don't want to be kind of tied to any one provider.
And so I think a lot of the value that we can provide is really about combining the language
model with the pieces of context and the structured understanding of code that we have.
And it's funny that you mentioned the kind of Bing price hike.
I thought that was, like, a big proof point, and people noticed, because when ChatGPT first came
out, I think a lot of people said, like, hey, you know, this kind of replaces search engines,
right? Like, I could just chat with this thing and it would tell me the answer instead of me having to go and click through a bunch of different results and figure out the answer myself. But then, as people started to use language models a bit more, they started to run into more hallucinations. And I think it was, like, the release of Bing where people finally realized... Bing released a version that integrated ChatGPT, or GPT-4, you know, one of those awesome OpenAI models. But they didn't just ship a white-label ChatGPT. They combined that with Bing search on the back end. And
I think it's combining the language model as sort of, like, the reasoning engine
with an information retrieval engine; you still need that to make it truly powerful.
And it's the unison that really is valuable.
And that's maybe, I'm speculating here, but like maybe had something to do with the Bing price hike.
Like it is not true that language models make search engines unnecessary.
If anything, they make the search engines more valuable because now all that data that you can search
becomes like 10x more powerful
because you can use that to get to your answer
with like one-tenth the effort
or in one-tenth of time.
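The pattern Beyang describes here, a retrieval engine finding the relevant context and a language model reasoning over it, can be sketched in a few lines. This is a hypothetical toy, not Sourcegraph's or Bing's actual pipeline: it uses word-count vectors as a stand-in for a real embeddings model, and the final "hand context to the LLM" step is represented by a plain print.

```python
# Toy sketch of the "reasoning engine + retrieval engine" pattern:
# embed documents, retrieve the closest match for a query, then pass that
# context to a language model. A real system would call a learned
# embeddings model instead of using bag-of-words vectors.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector (stand-in for a learned model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = [
    "def parse_config(path): read YAML settings from disk",
    "def send_email(to, body): deliver a message over SMTP",
]
context = retrieve("how do we load settings", docs)
# In a full pipeline, this retrieved snippet would be placed into the
# LLM prompt so the model answers from real data instead of hallucinating.
print(context)
```

The point of the sketch is the division of labor: retrieval narrows the world down to relevant data, and the language model only has to reason over that slice, which is exactly why the search index stays valuable.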
Yeah, I mean, that's such a great point
that the search engine itself
still has utility to find all the information
that at the end of the day feeds into the language model.
Beyang brings up a really important point around price.
These models are not free
and we're still in the early innings
of figuring out the right business models for LLMs.
How much do they cost?
What value-add would allow certain providers to charge way more?
And is it worth integrating with multiple models to reduce price or platform risk?
But something else that feeds this equation is the cost to maintain data, which is also not free.
Here is Barry reflecting on the deep relationship between dollars and data.
I think one interesting facet of all this is that there's kind of this realization of the value of data.
And I think we're going to see a lot more companies retain their data for longer,
collect more data from their products or from their users.
And so there's this, again, this privacy, security posture of like,
okay, we want to have this data.
We're probably going to keep it for longer.
How do we keep it safe?
But then there's also a cost element, right?
It's not necessarily free to retain data.
It's also not free to run these models.
And so how do you kind of-
It's also not zero-risk.
A lot of companies intentionally delete their data.
Like, people will have retention policies on email and Slack.
But by doing that, you're eliminating knowledge.
And that knowledge could be useful, sort of, in the first instance, just like,
I want to go back and search, which I think a lot of big companies deal with
as they start to, like, have record retention policies.
But there's also, like, second order value to that in terms of could that inform a model
or, you know, potentially different applications for how you can learn from what your organization
has done historically.
So I don't know where we're going to wind up on that.
I do think you're right that a lot of companies are realizing the value of their data and their IP and thinking about that in a deeper way.
And once again, I'll just say, I think already operating in the data space, both at Hex and previously, we already have this appreciation for people being really paranoid about their data, people really caring about where their data goes.
And so in some ways, there's nothing new for us here.
But I think there's all sorts of really exciting opportunities on how you could use data that customers are trusting you with, making sure you're continuing to earn that trust.
With all these considerations, it's easy to get caught in the weeds.
So let's take a step back and imagine a future where companies have successfully deployed this technology.
Not as a gimmick, but in a way that successfully gets their customers closer to their goals.
Like, what differentiates software is like, can you build the workflows that get to the business outcomes?
And as we close out, let's return back to where we started, the contact center.
Let's say, you know, we're in 2023. Let's jump to 2028, five years from now. Given the
technology that we have today, and also the change in the cost curve that you mentioned,
which sounds really fundamental, what does the contact center look like in 2028?
What I think it looks like... I remember, like, when we started the company, Tim and I,
one of our first customers sent us a video of what their vision for the contact center
of the future looked like. They basically had this idea of a contact center of, like, people that
didn't sit at computers anymore. They would just, like, walk around and have, like, an earpiece
that they were using to talk to customers, and all they were focusing on was the interaction with the
customer and the relationship building aspect. And in the back end, the system is like doing
the data entry, the filling of the form, all these things. It's like taking care of all the
stuff on the back end. And like, the agent's not typing anything. They're not doing any data
entry. They're not doing any summarization. They're not looking up the customer record. That stuff is
like handled by the machine. And it's handled in a way that the human can just like focus on
how do I build a good relationship and resolve this customer's problem, or, like,
find the right product for their situation.
And that's naturally what humans are great at,
which is that sort of empathy and connection
and sort of relationship building.
And then for the stuff that people want,
like,
frictionless experiences,
like there will be things that they don't want to interact with a human about at all.
And like those are like,
we can provide those in a fully digital sort of frictionless manner
that they can deal with if they want to deal with the bot.
And I think that's where it's headed.
Because once you have systems that handle the repetitive,
like, reactive things, you move to, like, sort of becoming strategic and proactive.
Yeah.
And on the customer side, I could see how it could actually be, you know, you could even say fun or entertaining with the integration of technology.
Imagine that, you know, you're talking to a bot and not only does it serve your problem, but it's also in the voice of like your favorite cartoon character.
And it's telling you answers in a way that you're actually like following a story and actually, oh, because they've done all this back-end research based on these customer conversations that actually you leave the conversation with a better deal than you had before.
And you didn't have to spend three hours on the phone to get there.
Like, actually, as we're talking about this, I'm like, well, you know, you go from, as you said, a negative NPS industry, like, you could actually imagine that these things could be fun and frictionless.
Yep. Negative to positive.
Negative to positive. What a great place to end off.
AI's rapid evolution is constantly challenging companies to stay ahead.
And navigating the implementation of AI is bringing up issues of personalization, design, cost, and privacy to center stage.
A big thanks to Hex, Sourcegraph, and Cresta for sharing their insights and experiences in this ever-changing landscape.
And if you'd like to learn more about these companies or what they're doing with AI, links can be found in the show notes,
or you can also find links to a bunch of our recent AI coverage.
Thank you so much for listening, and I'll see you next time.
Thanks for listening to the a16z Podcast.
If you like this episode, don't forget to subscribe, leave a review, or
tell a friend. We also recently launched on YouTube at youtube.com slash a16z underscore video,
where you'll find exclusive video content. We'll see you next time.