No Priors: Artificial Intelligence | Technology | Startups - What is the future of search? With Neeva’s Sridhar Ramaswamy
Episode Date: March 16, 2023

For the first time in decades, web search might be at risk for disruption. Bing is allied with OpenAI to integrate LLMs. Google has committed to launching new products. New startups are emerging. Sridhar Ramaswamy co-founded the challenger AI-powered, private search platform Neeva in 2019. He is a former 16-year Google veteran who most recently led the internet's most profitable business as SVP in charge of Google Ads, Commerce and Privacy. Sridhar, Elad and Sarah talk about the challenge of building search, how LLMs have changed the landscape, and how chatbots and "answer services" will affect web publishers.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links: LinkedIn | Neeva Search | Neeva Gist | Poe by Quora

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @RamaswmySridhar

Show Notes:
[1:32] - Why Sridhar started a private search engine after leaving Google
[11:11] - Information retrieval problems, mapping search queries and LLMs
[15:25] - Google and Bing's approach to search with LLMs
[19:06] - Scale challenges when building a search engine startup
[22:26] - Distribution challenges and why they released Neeva Gist
[24:11] - Why Neeva is a privacy-centric subscription service
[28:25] - The relationship between search and publishers/content creators
[30:16] - Sridhar's predictions on how AI will disrupt current ecosystems
Transcript
Even in terms of what can search engines do, we are very much at the beginning.
I think we're going to expect a lot more from these kinds of interfaces.
And the difference between, like, a chatbot and a search engine that combines a chatbot
and retrieval is going to just look more and more blurry going forward.
This is the No Priors podcast. I'm Sarah Guo. I'm Elad Gil. We invest in, advise, and help startup
technology companies. In this podcast, we're talking with the leading founders and researchers
in AI about the biggest questions.
For the first time in decades, one of the Internet's most important products, web search,
feels like it might be at risk for disruption. Bing has allied with OpenAI to integrate LLMs. Google
has committed to launching new products, and new startups are emerging. A former 16-year Google
veteran who most recently led the internet's most profitable business as SVP in charge of Google Ads,
commerce and privacy. Sridhar Ramaswamy co-founded the challenger AI-powered search platform
Neeva in 2019. Sridhar, I've learned so much from you as an investing partner, founder, and friend.
Welcome to the podcast. Thank you. Very excited to be here. Same. I've learned so much about
companies and investing in tech from you. Let's start with the background. Tell us about the
motivation to start Neeva when you were already part of creating the dominant search product.
Yeah. So Neeva was a little bit of back-to-basics thinking. When I left Google, I knew I wanted
to start a company. I spent a lot of time with Vivek about what we wanted to work on. And we ultimately
came to the conclusion that we're actually really excited about search. There's the geek in us that
like to help people find information that they needed. And we were also ambitious enough to think
that 20 years in, we could rethink the search product and create a better one.
Our aha moment, it's a little bit of an abstract aha moment, was that, as we said, if we didn't have
to deal with ads, if we didn't have to worry about monetizing, we truly could start from
back to basics.
As both of you know, in startups, it's as much about taking advantage of opportunity as it
is the original direction that you set.
So the first three years of Neeva were really about building a better private search engine.
And honestly, it also taught us a lot of pretty harsh lessons about consumers and, you know, whether they were ready for change or not.
And really, what we saw happen with AI and large language models last year was that aha moment when we realized, wait, we can have the great principles that we started Neeva with and create a much, much better experience.
And so that's a little bit of the journey to where we were.
But at our core, Neeva was like, there must be a better search product.
It cannot be that there's one company, one religion, one product for the whole world.
So I think many people who use Google every day would say, like, it's actually pretty good.
And as somebody who was working on this, you could see, I think sometimes users are blind when they have a default that's this strong.
What were the things you thought could be better?
If I could add to that, like, how does that factor into the Neeva mission?
Yeah.
So, I mean, an important part, at least early on, was the private and the ads-free.
And, you know, we have to say that we underestimated how much people, especially in the U.S., would care about it.
As you know, figuring out consumers is a very tricky thing.
People will often not do what they say they will do or will not even admit to things that, like, they will or will not do.
That's just the nature of the game.
For us, for example, we were surprised that we did so much better in Europe compared to the United States.
You don't really think of them as being that different, but in practice, in terms of how many people care, it is actually very different.
So a lot of the early Neeva was really about how do we use the power of being privacy-focused and ads-free to truly create a better experience.
So we've tried a number of things.
They have achieved varying degrees of success: for example, the integration of things like personal data and personal preferences.
But I would say the fundamental challenge of Neeva, especially in the United States, has been how do you get people to take that initial step of caring enough to want to change their search engine?
Once you actually get people to do that, the job gets considerably easier and they begin to see all of the things that were not really that great about that experience.
Again, as a startup founder, as a consumer startup founder, I think these are pretty harsh lessons in consumer psychology, but one that, you know, one has to learn.
So more recently, you guys had a big breakthrough in terms of experience and consumer openness to AI summaries, which look very different from traditional search.
Can you just talk about how this product came about and what you had to build to enable it?
Yeah.
So in some sense, AI summaries, I am sure there are many Google engineers and execs that will tell you, wait, we've been doing this for
15 years. It's kind of true. Google launched something called featured snippets, I think it was
a long time ago, 2010, 11. Google's always known that, you know, an answer right in the main
search experience trumps all. Google actually knows this really well. Elad will remember this.
Google knocked out live.com, Bing's predecessor, as the top image search product in the world
by integrating image search right into the search experience.
Turns out Bing, you know, live.com back then, was the one that had the best image search experience.
Google knocked it out by putting it into the search experience.
Same thing happened with Yelp and with Local.
It didn't matter how good Yelp was.
If you could show an answer right in the search experience, that basically won.
Similarly, these featured snippets, which really pick out the two or three lines from a website
that are exactly the answer the user is looking for, were always a big win.
People love the product. It goes back to essentially, like, Occam's razor:
anything that minimizes work, people are going to love.
And so if you give an answer instead of letting people click on something,
of course they're going to like it. This is the reason why, you know,
the currency conversion widget on Google is wildly popular.
It's not that you and I can't click and go to a currency site, but it's like, ah, why? It's there.
And so answers in that sense are old,
but the fundamentals of search have always been that you got back a set of opaque links.
And of course, Google's entire business, the trillion dollar business, is built on this, again,
obvious fact that you and I cannot tell really between a good link and a bad link. We can say a
little bit, if it's the New York Times, our brain, it basically tells us, ah, that's a good site.
For most sites, we don't really know we click, we find out.
But the opacity and the linear scanning order is always an important part of how search has worked.
And so this consistent desire on the part of users, whether they state it or not, to get to the answer in the fastest possible way is an important thing to remember.
But things like featured snippets were never deployable at scale.
You know, the technology simply was not there. Even if Google put the full might of its mighty machine against the problem,
the coverage never really extended beyond like 5, 6, 7%,
and it would make website owners really unhappy.
They're like, you're taking away my clicks.
And so it was always like this edgy feature that Google would be like,
you know, yes, we can show this, but not really too much.
Our aha moment with large language models was when we were like,
wait a minute. For the first time, you know, you have these models that can take like any content
and come up with a summary that gets to the heart of what the page is saying. And oftentimes,
you have to do it in the context of the query. If you have a blog, for example, that has six sections
and your query is really about one of those sections, then you better find out the right section to
summarize. And so a lot of it was just realizing that what was previously essentially unsolvable now was solvable. And
summaries in particular are this frustratingly vague concept.
You and I can do a reasonable job if given a bunch of different kinds of content
to summarize, but actually making a machine learning model do that in general is a tough
thing.
So a lot of last year was really like understanding that, but also trying to make it work
at scale, which is a big effort on our part.
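The query-conditioned summarization he describes (find the right section of a page for the query, then summarize it) can be sketched in miniature. The following is a toy relevance heuristic based on term overlap, purely illustrative and not Neeva's actual pipeline, which would use embeddings or an LLM:

```python
def pick_section(query: str, sections: list[str]) -> str:
    """Return the section most relevant to the query.

    Toy heuristic: count how many of the section's words appear in the
    query. A real system would use embeddings or an LLM for this step.
    """
    query_terms = set(query.lower().split())

    def overlap(section: str) -> int:
        # Strip basic punctuation so "Porto." still matches "porto".
        words = [w.strip(".,!?") for w in section.lower().split()]
        return sum(w in query_terms for w in words)

    return max(sections, key=overlap)

# A blog with several sections; only one is relevant to the query.
blog_sections = [
    "Our trip began in Lisbon with amazing pastries.",
    "The weather in Porto was rainy but mild all week.",
    "Train tickets between the cities cost about 25 euros.",
]
print(pick_section("porto weather this week", blog_sections))
# → "The weather in Porto was rainy but mild all week."
```

The point of conditioning on the query first is that summarizing the whole page would dilute the answer with the irrelevant sections.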
We decided that we didn't really want to be beholden to, say, using OpenAI's
API for doing things like summarizing a 4-billion-page index.
We built a lot of the technology in-house, but the final cumulative product is these
cited summaries, which really is one fluid answer when you ask a pretty complicated question
or query.
Obviously, you know, many people are doing this now, but for us, that was this aha moment
of, wait, we can write answers, a single authoritative answer for 50, 60, 70 percent of queries
and large language models, as you folks well know, are also general purpose learners.
The exact same tech that can summarize a piece of text can also be used now to pull out
structured information.
We realized that we were basically sitting on a goldmine beyond compare in terms of a better
search experience.
You know, most of what you see for cited summaries is in the context of information-seeking queries.
But there's a whole lot of work coming that can tackle different kinds of commercial queries.
So this is the beginning of a lot of work that can be done to make the search experience better.
But the core really is: if you can provide a believable answer to a question,
people are always going to prefer that over any number of links that you can give them.
People don't like clicking on links.
Yeah, it's really interesting because, you know,
I overlapped with you at Google,
and one of the things I worked on was mobile search,
and I remember, to your point,
we tried to surface every single one of what at the time we were calling OneBoxes, you know,
that would trigger with images or trigger with location information.
And it's pretty amazing that you're able to get to such high amounts of coverage just using
the LLM side.
How do you think about, because I remember when we were building those individual pieces,
there was a lot of custom work.
There were custom indices for news and crawls, and then there were custom ranking algorithms,
you know, everything, you had sort of specialization.
How do you think about the other 30 or 40% that you're not covering? Or is the idea eventually
to do everything via LLMs, and is that prohibitive from a cost perspective? I guess more generally,
how do you think about information retrieval related problems in this new world and how you
map the different types of search queries and the different types of results against that?
It's a great question. So, for example, in like the 55, 60% that I'm talking about,
I'm actually excluding the OneBoxes that we already fire. So it doesn't include like
the stock cards or the weather cards and stuff like that. In fact, we were working
on a Poe integration, and part of what the Poe team is saying is like, wait, wait,
if somebody asks for the weather, just give it back. You have the information already.
It's not that hard.
For clarity, Poe is the Quora app.
Yeah, Poe is the Quora app.
It's like, I don't know, what's the right way to put it?
It's like a chatbot aggregator.
It's a pretty cool app.
You can take some of the one box in.
And even there, by the way, this code for triggering, as you point out, Elad, used to be like
really annoying code.
Sometimes it would be regular expressions.
It's basically like a giant, you know, ball of hacks when it comes to figuring out how to trigger right.
LLMs actually make some of that stuff easier if you want to extract structured information even from user-typed queries.
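To make that "ball of hacks" concrete, here is a hypothetical OneBox trigger in the old style, with invented patterns (not Google's or Neeva's actual code). Every new phrasing needs a new pattern, which is exactly the brittleness that LLM-based structured extraction removes:

```python
import re

# Hypothetical hand-written trigger: brittle patterns that must
# anticipate every phrasing a user might type.
WEATHER_PATTERNS = [
    re.compile(r"^(?:what(?:'s| is) the )?weather (?:in|for) (?P<place>.+?)\??$", re.I),
    re.compile(r"^(?P<place>.+?) weather(?: forecast)?\??$", re.I),
]

def trigger_weather_onebox(query: str):
    """Return {'intent': 'weather', 'place': ...} if a pattern fires, else None."""
    for pattern in WEATHER_PATTERNS:
        match = pattern.match(query.strip())
        if match:
            return {"intent": "weather", "place": match.group("place")}
    return None

print(trigger_weather_onebox("weather in San Francisco"))
# → {'intent': 'weather', 'place': 'San Francisco'}
print(trigger_weather_onebox("is it going to rain in Porto?"))
# → None
```

A query like "is it going to rain in Porto?" slips through both patterns, while an LLM asked to extract the same {intent, place} structure would handle the rephrasing without new code.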
And I'm sure, like, you know, most tech people have dealt with this at some point in their lives.
All of us have nightmares about writing Beautiful Soup code in order to parse web pages.
It's basically regular expression parsing over ever-changing websites, and it
is horrible. We have done a bunch of it in the first two-ish years of Neeva. That stuff is also
easily generalizable with the smallest model that there is. At this point, I don't feel that there's
like a natural limit to how much LLMs can be used with search. I do feel, however, that there's
a very strong limit to how many questions can be usefully answered. And you realize with a shock
that search engines are actually pretty terrible at a lot of tail queries that you and I will now no longer think twice about putting into a chat bot.
I mean, what do I mean by that?
The other day, you know, Jason Calacanis, who, like you folks, has a big podcast, just typed in, how are the Knicks doing this year, into Neeva and a bunch of other search engines.
And it was like, ah, this AI stuff does not work.
But the real answer is no one in their right mind is going to think of typing, how are the Knicks
doing this year, into Google search, because it just never gave great answers for stuff like
this. Tail queries have always been served poorly. I don't think that is going to change instantly,
but queries that can be meaningfully answered, I think a lot of them can be answered with LLMs.
For what it's worth, the approach that we are taking, which is very much like the beginning
of how large language models can be applied to retrieval problems, is this technique called
retrieval augmented generation. Again, a lot of your listeners know this. It's basically how do you
combine a search engine as a tool that a large language model uses. And even there, there's
going to be generalization. There is zero reason why we can't recognize that you actually
typed in an arithmetic expression and fired off a Python interpreter for doing this or some other
API. So again, even in terms of what can search engines do, we are very much at the beginning. I think
we're going to expect a lot more from these kinds of interfaces and the difference between
like a chatbot and a search engine that combines a chatbot and retrieval is going
to just look more and more blurry going forward. So hard questions will continue to be hard,
but a lot of questions that we expect answers for I think will be eminently answerable
with LLMs as one of the tools that go in.
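A minimal sketch of retrieval augmented generation with tool routing, in the spirit of what Sridhar describes: toy keyword retrieval and a canned answer template stand in for a real index and a real LLM, and an arithmetic expression gets routed to an interpreter instead:

```python
import ast
import operator

# Stand-in corpus for the search index a real system would query.
DOCS = [
    "Neeva was founded in 2019 as an ads-free search engine.",
    "Retrieval augmented generation grounds model answers in fetched documents.",
]

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str):
    """Safely evaluate +, -, *, / expressions via the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def retrieve(query: str) -> str:
    """Toy retrieval: rank documents by query-term overlap."""
    terms = set(query.lower().split())
    return max(DOCS, key=lambda d: len(terms & set(d.lower().split())))

def answer(query: str) -> str:
    # Tool routing: arithmetic goes to the interpreter...
    try:
        return str(eval_arithmetic(query))
    except (ValueError, SyntaxError):
        pass
    # ...everything else is grounded in a retrieved document, which a
    # real system would hand to an LLM to summarize with citations.
    return f"Based on one source: {retrieve(query)}"

print(answer("12 * (3 + 4)"))  # → 84
print(answer("when was neeva founded"))  # cites the Neeva document
```

The design point is the router: the LLM (or a classifier in front of it) decides which tool, a calculator, a search index, some other API, should handle the query, rather than one model trying to answer everything from its weights.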
Super exciting. Relatedly, when I've seen people model out the costs of using LLMs versus more
traditional IR approaches, LLMs seem to be more expensive per query. And, you know, I know that
when Satya Nadella was talking about integrating these things into Bing, he almost had this
your margin is my opportunity style perspective relative to Google, right? I don't know if it's true or not
in terms of how that would substantiate over time, but it almost felt like the claim was that,
you know, Bing was okay, almost subsidizing LLMs integrated into search to try and draw or sort of
hurt the margin on the Google side. How do you think about, you know, the potentially cost-prohibitive nature
of LLMs for search? Is it really a thing? Do you deal with it with semiconductors or small models or other
things, or is it not really that important of a consideration? Well, first of all, his comment
might have meant two things. There are two ways to think about margin. One is the cost of serving
and the other is the margin that Google makes, say, on an Apple deal. Not clear which one he was
talking about. But this is a topic that you've written a lot on when it comes to, like, just
LLMs and cost.
We saw something dramatic happen where OpenAI reduced the cost of its API by a factor
of 10.
That's a little insane this early on.
But if you go back to the basics of your question and think roughly like, you know,
an average very large model call costs about five cents,
that is actually astronomical, because you're talking $50, you know, CPMs for serving
a thousand queries.
Now, the average RPM for U.S. queries is about $40 to $50, and clearly, that will be a very high cost.
The rest of the world is a lot lower, by the way, like my memory is on the order of $20 if you average over the whole world.
So I'm sure you folks also know that Sydney, for example, will issue up to three queries for every question that you ask.
I mean, it's an arbitrary limit, but there are like sometimes we need to ask more than one question in order to answer it well.
Put that way, yes, this is an astronomical cost.
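Using the rough figures from the conversation (about five cents per very large model call, up to three calls per question, and roughly $40 to $50 revenue per thousand U.S. queries, all assumptions rather than measurements), the arithmetic looks like this:

```python
# Back-of-the-envelope serving cost vs. revenue, using the rough
# figures from the conversation. All of these are assumptions.
COST_PER_LLM_CALL = 0.05   # ~5 cents per very large model call
CALLS_PER_QUESTION = 3     # e.g., Sydney issuing up to 3 queries
US_RPM = 45.0              # ~$40-50 revenue per 1,000 US queries

cost_per_1000 = COST_PER_LLM_CALL * CALLS_PER_QUESTION * 1000
print(f"serving cost per 1,000 questions: ${cost_per_1000:.0f}")  # $150
print(f"revenue per 1,000 US queries:     ${US_RPM:.0f}")         # $45
print(f"cost exceeds revenue by {cost_per_1000 / US_RPM:.1f}x")
```

On these numbers, serving cost is several times U.S. revenue per query, which is why the rest of the answer turns to smaller fine-tuned models.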
But personally, I feel that there is more and more evidence that says that you don't need
like the full power of the largest, biggest model to get most things done.
Certainly the way we think about cost: for summarization, for example, we're very comfortable
with using models that are in the 5 to 10 billion parameter range.
we are very good at fine-tuning them.
There's a human feedback loop that is about to kick off as well.
So whatever can be done with very large models for large classes of problems,
our attitude is we'll do them all day long for the kinds of problems that we care about.
And we are fine running six models for six kinds of problems instead of running one model that is going to conquer them all.
And so I do feel like for a lot of, like, known
problems, model size is not really going to be an issue, and there's going to be an ongoing
reduction both in the size and therefore the cost to serve them. Satya, of course, might be
referring to the margin that Apple pays out. And if I were them, I would offer, you know, Apple
100% of rev share in order to get at that traffic. It's a way to establish a beachhead. By the way,
there's precedent. Google gave more than 100% to AOL and close to 100% to Yahoo in its
early years. That's how you make markets. They obviously will be trying everything.
You're saying that we should expect these players, or that it'd be rational for them, to play even more
aggressively from an economics perspective than we've seen so far. Oh, absolutely. Absolutely.
You know, part of the problem with Bing's growth has been that Google has fought it off
very effectively on the business side. Of course, it hasn't helped that it is common perception,
deserved or not is a different story, that Bing's search quality
is not as good as Google's.
For what it's worth, there are very few people on the planet
that can objectively judge search engine quality.
And so they need a way to break through
and establish meaningful presence.
And so it is perfectly rational for them to start with a better product,
but then go out of their way to establish a beachhead,
establish a market, because that is going to pay off
in a pretty big way for them down the line.
Every part of this game feels like an expensive game to play.
And I wanted to ask you about just the building of search, even aside from training LMs.
I remember there was a lot of skepticism when Neeva first started, including from yourself,
about how any startup could afford to build a new search engine, from an engineering talent,
technical ambition, and infrastructure cost perspective.
You've built an all-star team, but obviously can't spend a billion dollars as a startup.
Can you talk a little bit about what's been most challenging to build?
Yeah, search is one of these things where you need a fair amount of scale before you have any kind of meaningful product.
With like an ad system, for example, I can tell you how to build one with a three-person team because it's like limited data.
Or if you're building a new mail client, it's a small problem.
Yes, you'll have scale problems, but only after you have a million users, not on like day one.
Search, like setting up a new mobile network, let's say, where you have to start from scratch, is problematic from that perspective,
simply because you have to do a lot of work to be seen as even vaguely competitive.
And so everything from how we went about doing our crawl to how we built our index has been a
struggle.
I won't deny it.
And it's one of these problems where, like, you know, grown men and women, sane ones,
will just run away after a while.
They'll work on it for three months and be like,
I can't deal with this.
I just need to like go.
And it's disconcerting to, you know, kind of watch that.
But having said that, you know, we do have an amazing team.
Awesome, for example,
was just brilliant at engineering a system that ran completely on flash,
in which we could do things like super-rapid iteration,
replace the entire index in the space of two days,
or put in arbitrary amounts of information for experimentation
in a much more flexible way. Problems that took Google like 15 years to solve,
we had solved out of the gate, simply because he had run into many, many of these problems.
We're also opportunistic, you know, to the point of LLMs being these, you know,
universal input-output machines, we realize that a lot of problems that Google solved with
massive scale and user data, they could in fact solve with LLMs.
So we use a lot of them for things like query rewriting.
Similarly, extracting structured information:
turns out it's weather that people will ask about in, like, many wondrous ways.
We're in the process of actually replacing a hard-coded system with one that's based on an LLM
to extract structure.
So we have taken shortcuts wherever we can in order to do this.
It is a daunting problem, but I'll tell you, the single biggest positive thing for the team
was actually launching answers, because up until then, they sort of had this feeling of,
even if we were to be as good as, if not better than, Google, no one will care. People can't tell
between, like, you know, lists of links anyway. Once you turn that into, yes, here is an
actual answer that my mom can take a look at and say is way better than a bunch of
links, all of a sudden there's excitement. And so there's the actual psychology, all of you
deal with teams, of what excites the team. And really, it's been over the past few quarters
where people have realized, oh, wait, this can be a transformational experience. That just is like
a big jolt of electricity through everybody just in terms of how excited they are, how hard they
work and things like that. Yeah, it's very exciting progress. I guess one question related to that
is when you look at distribution, because you mentioned, you know, consumer habits are
quite sticky on the distribution side. And I remember, even back when I was at Google many years
ago, like over a decade ago, probably more than that now, 15 years ago or something,
hundreds of millions of dollars a year were being spent on distribution. And obviously,
that number has grown with the Apple deal and other things. And so do you view it as, like, distribution
through superior product? Is it specific integrations or partnerships? Or how do you think about
getting that consumer interest? Distribution is hard. There's just no question about it. Habits are
hard to change. You can dislodge some of this with a superior product. You can dislodge some of it
with the dollars. Part of the reason why we released this app called Gist, which was a very different
take on search, is we very deliberately said, if we wanted search to look like Instagram stories,
what should it look like? It's an experiment. We hope it'll do well. And so sometimes you have
to look for change, sort of the locus of change. The other thing that we are also actively looking
at is, you know, in this moment where there's going to be enormous amounts of uncertainty
about things like: is search engine traffic basically going to disappear for websites?
Are LLMs going to disrupt the aggregator-publisher relationship in a fundamental way?
We are now realizing that we can offer a superior search experience to lots of publishers.
Whether it's a Reddit or a Boston.com or anyone else,
we can give them conversational search on their corpus.
So we are going to try a set of different things.
We've actually had a fair amount of success working with privacy products like Dashlane
and obviously other folks that we are talking to, like ProtonMail,
about how we could work better together.
Distribution continues to be, like,
easily my top worry for how Neeva gets scale.
I guess related to distribution and business model,
you opted for a privacy-centric subscription service
without ads quite early.
And I think at the time, that was very innovative thinking, right?
I think now that other products, chat GPT, etc.,
are all sort of coming out with these subscription-based approaches,
I was just sort of curious how you thought about it.
Like, when do you think a product should be supported by subscriptions?
When should it be supported by ads?
and how do you think about it in the context of this type of product?
I mean, for us, it was a way to stand out.
It was to give us a clear runway.
Thoughtfully done, ads monetization is an incredible juggernaut,
as everyone that's on this podcast knows,
in terms of the kinds of scale that it can bring
and how it can disconnect monetization from the product.
So it's almost like a separate team that is working on it.
You know, when it's very successful,
it can actually kind of get out of hand.
I'm sure like none of us likes watching broadcast TV anymore, like sports broadcasts drive me
crazy. When I think about like how many ads that I have to sit through, ads sort of come
with elements of self-destruction built in. It's part for the course. When you're doing it,
it's always attractive to do things like show more ads. In some ways, you know, hybrid approaches
of starting ads free and maybe using ads as an additional mechanism might be more.
sustainable, even though, you know, reasonable people will argue that most people that come
to ads later tend to be even more discriminant about how many ads they show and ads
quality than the people that I'd been working on it for the first time.
You know, I worked on it,
and it's also the team, but Google Search Ads actually tried very hard to hang on to quality
bars, to hang on to user metrics for a very, very long time.
Compare them to somebody like Amazon today.
I find the Amazon search experience a joke because it is so full of ads, and actually misleading ads, where it's really hard to find what is going on.
I think there are viable options.
There are structural elements that then come into what should you adopt.
If you're in the business of providing answers like ChatGPT is, ads just become a whole lot harder to do.
You're betting on the quality of answers.
But for many other products that are about more casual consumption, whether it's social media,
or even where search might go,
I think it's an open question
where ultimately it'll settle.
I like to point out to people
that in something like a Gist experience,
which is a summary followed by a series of cards,
you can stick ads in there.
We're not planning to do that,
but there are many different ways to solve problems.
In the early days of Google,
one of the arguments that was being made for ads
was that the signal in terms of willingness to pay
was a way to actually surface a meaningful link to somebody.
In other words,
if there is somebody who's willing to promote a link,
that in and of itself was a signal on the potential quality of that link
relative to the potential user.
Do you think that's a true statement, or do you think it used to be,
or are commerce signals, like, good boosts for actual ranking?
They can be, but I think the bigger truth is that smart people
will come up with great explanations for everything that they do
as long as it's convenient to them.
The best religions to have are the ones that are aligned with your business interests.
And of course the ads team is going to say that.
There's some amount of truth in it, but that clearly is not an explanation for like two screenfuls of ads when you're searching on your phone.
I find this whole thing of ads enable Google to make free products, or ads enable Facebook to be available for Ecuadorian people, arguments made by billionaires sitting in Palo Alto, to be entirely self-serving.
My attitude is like, yep, we can make money with ads, it works pretty well, we're rich, it's okay.
If we just sort of project out a little bit and say these summaries cited or not, chatbot experiences, answers are really compelling to consumers.
How do you see the relationship between search and content producers changing in the long term, right?
If these summaries take traffic from publishers, do we lose the incentive to publish content on the internet?
I think that's one of the big unknowns.
I think what is going to happen is that some of the larger content creators, you know, I would put people like Reddit and Quora, these are some of the forward-thinking ones very much in that bucket.
They're going to say we want to be part of search, but we don't really want to be part of your answers.
Like, you know, taking our data and sticking it into LLMs is not really allowed by our crawl policy,
but smaller publishers are not really going to be able to do this.
The bigger ones are going to have things like their own chatbots
so that you can browse Reddit content or Quora content.
So I liken the current moment to, you know, basically we're going to be dropping a bomb,
or a giant impulse, into the center of how a lot of us get at information.
This is going to radiate out from here to a whole bunch of sites, to the content
ecosystem. It's a little hard in my mind to predict.
It does feel like there might be more centralization or more consolidation when it comes to
content creation. Your average small blog which could subsidize itself or which could monetize
itself with advertising is going to find it hard to compete in this answer world, especially
if the expected experience for everybody is going to be: I don't really want to read giant
pages, I want to be, like, talking to you. Give me a bit of a summary of what you're going
to say, then I'll ask follow-up questions. All those experiences are possible, but not for every
blog that there is. So I think that is potentially a very different platform that is going to evolve
for how content is going to be created. That looks a little bit different from how it is today.
You know, when you were at Google, your team was doing machine learning and AI at a scale that
I think roughly didn't exist anywhere else.
And you were very forward-thinking in terms of then applying really interesting
cutting-edge technologies at Neeva and creating one of the really first and most interesting
LLM-based search engines, right, which I think is super exciting work.
What else are you predicting gets most disrupted within the AI world beyond search?
Or what are some areas that you think are coming over the next coming years?
I mean, we talked about content, how it's going to get disrupted.
I'm not even talking about synthetic content.
Yes, there will be synthetic content, but I think there will be techniques for detecting it.
That's a cat-and-mouse game.
But an obvious place where content generation applies, actually, ironically, is going to be advertising.
I can see how personalized advertising actually plays a pretty big role, especially when it gets to be multimodal.
I joke to people that, like, Michael Jordan is going to be telling you to buy his Air Jordans, like, you know, look at you in the eye and speak your name and so on and so forth.
So advertising with its closed loop for optimization and the relentless focus on efficiency
actually is a natural area.
I'm not saying there's not going to be, but obviously there are a lot of companies that
are saying things like, oh, we can apply LLM technology to every other information function,
whether it's mail or how we consume documents.
But what I find, you know, interesting is that we have
a set of incumbent technology companies that are actually very smart and very driven.
Think about it: for Microsoft to be this innovative, this late into the game. You don't hear about
stuff like that from IBM, not at the scale of, like, consumers and the whole world.
So I think they're all going to react pretty quickly, incorporate a lot of it.
So I don't know how, like, how much there is going to be pure SaaS innovation on products
that we take for granted.
I'm not saying there's not going to be, but...
It's a little bit harder.
One of the areas I'm personally very excited about is the generalization concept that I spoke
about earlier, which is if you think of LLMs as like machine language, then the natural thing
is how do you combine them with the various tools that we use in terms of search engines,
calculators, APIs, programs, other websites.
So I think like Action Transformers is going to be an incredibly powerful area.
The technology is very nascent.
So unlike, say, you know, OpenAI's ability to crank out new generations of LLMs, I don't think that tech is yet at a point where people can build lots of applications on top of it.
But to me, that is potentially a big breakthrough, not just for things like RPAs, but also potentially for, hey, can you create an AI SRE?
Can you create an AI code reviewer?
Can you create like fill in the blanks?
I think that's incredibly exciting, but I think the technology is also quite a bit more nascent
than what we have just come to expect will happen with language models.
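[Editor's note: the tool-combination pattern Sridhar describes, an LLM proposing structured actions that a thin runtime routes to a calculator, a search engine, or other APIs, can be sketched in a few lines. Everything below, including the tool names and the action format, is a hypothetical illustration, not any specific product's or model's API.]

```python
def calculator(expression: str) -> str:
    # Toy arithmetic tool; a real system would sandbox or parse this properly.
    return str(eval(expression, {"__builtins__": {}}, {}))

def search(query: str) -> str:
    # Stand-in for a retrieval backend (a real one would call a search index).
    return f"[top results for: {query}]"

# Registry of tools the model is allowed to invoke.
TOOLS = {"calculator": calculator, "search": search}

def dispatch(action: dict) -> str:
    """Route a model-proposed action like {'tool': 'calculator', 'input': '21 * 2'}
    to the matching tool and return its result as text for the model to read."""
    tool = TOOLS.get(action["tool"])
    if tool is None:
        return f"unknown tool: {action['tool']}"
    return tool(action["input"])

# In a full agent loop, the LLM would emit `action` and then see the result;
# here we just exercise the dispatcher directly.
print(dispatch({"tool": "calculator", "input": "21 * 2"}))  # → 42
```

The point of the sketch is the division of labor: the language model only has to emit a small structured request, and deterministic tools handle the parts (arithmetic, retrieval, API calls) that models are unreliable at.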
Yeah, the agentization of the world is a very exciting future.
So we'll await it with bated breath.
As we wrap up, is there anything else you'd like to talk about that we didn't touch on?
You know, it's trite.
It is repeated.
But as a technologist, this is a really exciting moment, where I do think that this is powerful new technology.
It's also getting democratized very rapidly.
You know, my take is that WhatsApp was the seminal moment of, like, mobile computing.
Here, a team of 30 people could create a product for the whole world.
To me, that represented the power of mobile platforms.
And if, two years from now, three college kids, you know, 20 years old,
are able to build a brand new application that uses the things that we
know for sure, whether it's web servers or databases, but also language models in a fundamental
way, and we say, like, wow, we never thought of that. You know, that feels very possible. That is what
is really exciting about where we are. Yeah, in the meanwhile, super excited for where we are able
to take search with Neeva, and appreciate all your wisdom and support. I'm counting on that to
happen, actually, and I think Elad thinks it will too. Sridhar, what an incredible conversation.
As always, thank you for joining us on the podcast. We appreciate it.
Thank you, Sarah. Thank you, Elad. Thanks for joining us.
Thank you for listening to this week's episode of No Priors.
Follow No Priors for a new guest each week, and let us know online what you think and who in AI you want to hear from.
You can keep in touch with me and Conviction by following @Saranormous.
You can follow me on Twitter at @EladGil. Thanks for listening.
No Priors is produced in partnership with Pod People.
Special thanks to our team, Cynthia Geldea and Pranav Reddy, and the production team at
Pod People: Alex McManus, Matt Saab, Amy Machado, Ashton Carter, Danielle Roth, Carter Wogan, and
Billy Libby. Also, our parents, our children, the Academy, and Open Google Soft AI, the future
employer of all of mankind.