Big Technology Podcast - Google’s Weird Year + Neeva Goes to Snowflake — With Sridhar Ramaswamy
Episode Date: September 6, 2023

Sridhar Ramaswamy is the co-founder of Neeva, SVP at Snowflake, and former SVP at Google. He joins Big Technology Podcast to reflect on the strange year Google's had in 2023, working on the fly to reimagine search and ship faster than it was initially comfortable with. In this episode, Ramaswamy delivers deep insights on the future of search, generative AI, and how his former employer will adapt in these times. Stay tuned for the second half, where Ramaswamy candidly discusses his search competitor Neeva, why he sold it to Snowflake, and what the two companies hope to accomplish together.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
A long-time Google executive, entrepreneur, and generative AI builder discusses how the technology
is changing search in the business world and what's next for his old employer.
All that coming up right after this.
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world
and beyond. Joining us today is Sridhar Ramaswamy. He's a friend of the program. He's the co-founder
of Neeva. He's an SVP at Snowflake. Currently, as he puts it, a minister without a portfolio.
We'll get into it. We're going to talk a lot about what's going on inside Google and the search
world and then also how generative AI might be applied in the business world in ways that might not
have been covered in depth up until this point. We'll get into it. Sridhar, welcome to the show.
Great to see you. Thank you, Alex. Good to be back. Great to have you back. So let's start with
Google. It's been a pretty weird year for Google. I would say this is the weirdest year for Google since
2011 when they introduced Google Plus.
I'm curious if you would agree with that assessment and like, how would you really rate?
I mean, we're almost, you know, we're a little bit past the halfway now.
How would you rate 2023 for the company?
Pretty turbulent, right?
It is pretty turbulent.
I think at one level, it exposed fairly deep gaps in the ability of Google leadership,
both to visualize the future, but also execute towards it.
It's very clear that the crazy popularity of ChatGPT and the speed at which others,
including Neeva, rolled out pretty credible AI-powered products,
I think was pretty surprising for them.
It clearly has led to a bunch of soul-searching and team realignments,
you know, things like Demis being now the overall head of AI and stuff like that.
And, you know, so the company has reacted.
And it's actually been interesting to then watch the product developments coming out.
I don't know how closely you follow Bard, but I've been following it for a while.
There's now a pretty credible integration into a regular search.
And Bard is getting better by the day.
And this combined with the fact that, you know, Sydney, Bing's chatbot has not made as much progress, I would say actually puts them in a better position in the middle of the year than early this year when it looks like they were truly caught flat-footed.
Yeah, so I want to pick up on that part and what you said, that they had an inability to visualize the future.
I mean, it is interesting because for the past few years we've been hearing constantly
about how search was going conversational,
how people want to talk to search.
The Google Assistant was effectively the right product
for this moment, just the wrong execution.
What happened there?
Well, so that's actually four or five years ago,
which was the previous craze around voice search
and chat bots and stuff like that.
Remember, this was the time of Alexa
and all the devices that Amazon was rolling out.
I mean, I was still very much a part of Google at the time, and all of us feared that voice search would be the new platform, that these devices sitting everywhere would be the replacement for search.
And Google actually put a multi-thousand person team to work on this, both in search, but also within my team.
The shopping team, for example, had an assistant.
We had partnerships with companies like Walmart.
We took it seriously.
But here's the important but.
That technology was pretty much a previous generation.
It was not based on transformers.
It was not based on the rapid advances that have happened in AI.
And in many ways it was kind of strung together in ways that limited what it could do.
So those were pretty early, and all of us, as consumers, discovered that beyond a select few use cases, like, you know, hey, Google, what time is it, or what's the weather today like, or play this song for me, we came to understand that those devices were very, very limited in what they could do.
And so your Alexa became a fairly expensive remote control for Spotify.
But in all of this, of course, search did not change.
But first you'll have to unlock your device.
Even worse.
The podcast guest, uninvited.
But the thing that did not change is that things like the Assistant that came along
were veneers on top of search.
They did not change search in a core fashion, meaning that you retrieved some sites and, you know, there were some sets of things that you would directly take actions on, but it was very limited.
The power of generative AI now, I think, comes from the fact that you can understand multiple pages and you can write a fluid answer for way more queries than what the assistant could ever do.
Remember, with the assistants, whether it's on Alexa or on Google, pretty much if you ask it a
complicated question, it will quickly get into, according to so-and-so site, blah, blah, blah,
which is not quite the same as here is a three-sentence summary that truly captures the gist
of what it is that you're looking for.
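The shift Ramaswamy describes, from reading one source aloud to synthesizing across several pages, is essentially retrieval-augmented summarization. A minimal sketch of that pattern follows; the crude word-overlap scoring and the example URLs are illustrative assumptions, and a real system would use a language model where this toy picks sentences:

```python
# Toy retrieve-then-summarize pipeline: score several pages against a
# query, then assemble one answer from the best sentences, keeping a
# citation for each sentence. An LLM would replace the sentence picker.

def score(query: str, text: str) -> int:
    """Crude relevance: count query words that appear in the text."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in text.lower())

def answer(query: str, pages: dict[str, str], top_k: int = 2) -> list[tuple[str, str]]:
    """Return (sentence, source) pairs drawn from the top_k most relevant pages."""
    ranked = sorted(pages, key=lambda url: score(query, pages[url]), reverse=True)
    result = []
    for url in ranked[:top_k]:
        # Take the single most relevant sentence from each page.
        sentences = [s.strip() for s in pages[url].split(".") if s.strip()]
        best = max(sentences, key=lambda s: score(query, s))
        result.append((best, url))
    return result

pages = {
    "sports-news.example/knicks": "The Knicks won again last night. Their record improved to 30-27.",
    "weather.example/nyc": "New York will be sunny today. Expect a high of 70.",
    "recaps.example/nba": "A quiet night in the NBA. The Knicks extended their winning streak to four games.",
}
for sentence, source in answer("how are the Knicks doing", pages):
    print(f"{sentence}. [{source}]")
```

The per-sentence citation is the point: every line of the synthesized answer stays traceable to the page it came from, which is the property Ramaswamy emphasizes later in the conversation.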
Yeah, and it's interesting because you actually built a product in Neeva
that is generative search.
That's right.
That's right.
That's right.
You had this experience.
It's a really fascinating experience getting to see what comes in through the back end.
I'm just going to quote something that you said.
You said, the thing that surprised me about chat was how much it has dramatically expanded
the pool of queries and questions people pose.
As you likely know from Pi, which is another bot, people will type things into a chat bot
that they will never dream of typing into a search engine.
So, I mean, tell us a little bit about what you saw on the other side.
Like, what do people type into these bots?
And then how does that, is it even search at that point?
Like, how does it change what we see?
I think it's a very different product.
And I think it's fascinating to watch Pi, to watch Character.AI and all of these people
create products that are very different from search.
And even in the context of search, the kind of questions that you would ask of it have changed in a big way.
The one example that I'd like to, you know, give people, but there are many such
examples, is Jason Calacanis, who, as you know, runs like another podcast.
I haven't heard of it.
And the question that he asked Neeva was, hey, how are the Knicks doing?
And he was, this was early this year.
And he was offended that we gave him a summary of articles from like late December
because that was the best that the search engine could find in terms of how the Knicks were doing.
Obviously, the season had changed.
Once you get used to the idea that you can just say things,
I think the set of questions that you can ask dramatically change,
you will ask a lot more subjective questions.
Remember, at the end of the day, the search engines of today are quite limited.
If you ask it a deep, complicated question, you get a bunch of gobbledygook pages that don't really have a whole lot to do with the question that you asked.
Right.
And so I think we ask a lot more subjective questions. What do you think this article says? This is something I try with Bard because it has access to real-time data.
I will put in a link and say, hey, can you summarize this link for me? Or how is this opinion different from that other person's opinion?
So I think the class of problems that we expect chatbots to solve, simply by virtue of the fact that they accept full-text English, or every language, really, I think can dramatically expand the scope of what it is that we ask them to do.
Of course, there's a big gap between, you know, what they, what they do currently and what our expectations are.
But nevertheless, I think our expectations are just much higher.
And this is purely in the give me information that exists in the world kind of mode.
But I think what Pi and Character.AI are showing is the ability for these things to have
like conversations.
You know, they don't really have things like long-term memory.
Like there's a bunch of technical gimmicks that people can use to have these bots pretend
like they have long-term state.
So there's a lot of technology to be built.
but open-ended freewheeling conversations
about your feelings, about your emotions,
about what you should do.
I think, like, this field is just opening up.
And so you were, you had access to the back end there, right?
You were running this search engine.
And so were people, I mean, was that type of, you know,
how are the Knicks doing is kind of like, okay,
I would ask that to Google today.
I mean, maybe I would just type Knicks in.
But so where does that expand, like the range of things
that people will type into a search engine.
Like, were they actually, like, confessing their feelings in the chat window?
Like, what did you see that surprised you?
We don't look at sort of individual queries.
That's one of the no-nos of any search engine.
And so, like, we would do things like analysis on the length of queries, what sort of quality
that we would serve.
But, as I said, comparison queries increased
a lot, and nuanced questions about how things were working would also increase quite a bit.
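The kind of aggregate analysis he mentions, looking at query lengths rather than individual queries, is easy to illustrate: log only derived statistics and discard the text. This is a toy sketch of the idea, with the class and bucket boundaries invented for the example:

```python
# Aggregate-only query metrics: record a length bucket per query,
# never the raw text, so individual queries are never stored.
from collections import Counter

class QueryMetrics:
    def __init__(self):
        self.length_buckets = Counter()

    def record(self, query: str) -> None:
        # Derive the statistic, then let the query itself go out of scope.
        n = len(query.split())
        bucket = "1-2" if n <= 2 else "3-5" if n <= 5 else "6+"
        self.length_buckets[bucket] += 1

m = QueryMetrics()
for q in ["knicks standings", "how are the knicks doing this season"]:
    m.record(q)
print(dict(m.length_buckets))  # → {'1-2': 1, '6+': 1}
```

A shift from short keyword queries toward the "6+" bucket is exactly the signal a chat-style interface would produce.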
Like how things are working in terms of like how systems work?
How systems work?
What is your opinion of what so-and-so did yesterday?
Right.
It's just, you know, these are very different questions.
As I said, we would just not think of putting them into a search engine.
And I would almost say that in a search engine, Alex, you are very likely to type,
you know, Knicks standings.
Right.
You're not likely to type like, hey, are the Knicks having a great season?
How are the Knicks doing?
What has their performance been recently?
Or what will it take for them to make the playoffs?
These all come naturally in the context of a chatbot that we somehow think is omniscient,
but are not things that we will type into a search engine.
Did you worry a little bit about how much people trusted the responses that Neeva was giving?
A hundred percent.
Talk a little bit more about that.
Yeah. And I think this is one of the societal problems that are going to be pretty tough for the next 10, 20 years.
What's the right way to put it? We were very clear that Neeva represented what was on the internet. It was like, hey, listen, we are not God. Many of these things we just don't know.
But what we are good at is finding pages, ideally trustworthy pages, and summarizing them for you. If you ask a question for which there are only conspiracy theories, those are the pages we will find and we will summarize them for you. And this is why we were very persnickety about making sure that every sentence that we provided came with a citation. So you could see whether it came from the New York Times or the Wall Street Journal,
or whether it came from a conspiracy site.
So, for example, you know, Neeva would provide an AI answer to questions like,
what are Hitler's good qualities?
Because there are some sites that say like, okay, here are this person's good qualities.
You'd be like, according to so-and-so.
But the thing that still shocked me was how much there was a tendency to look at those three sentences
and say, okay, done, I'm good.
People trusted it.
I'm going to trust it.
This is the same problem that people have had with Facebook, which is the tendency to trust something that's, like, on your phone, that
looks kind of authentic, is very, very real. Similarly, any text that is generated by a chatbot,
and this is part of the reason why Google has been hesitant, is if they put up some text,
even if there's a citation, people are going to say, oh, Google said so, this must be true.
And I think that, you know, this sort of critical thinking that one needs in order to figure out, when is a chatbot representing some site and is the site trustworthy?
When is a chatbot generating an answer or an opinion that we really should be careful about?
And now, beyond the realm of chatbots, you basically cannot trust any content that's on a page because that could be an AI model that is spewing it out.
and somebody doing SEO to get traffic.
I think, and then it goes on and on.
Like, we can't trust what we hear anymore
because people can replicate voice.
People can soon make videos of everybody and everything.
I think, like, our notion of reality
is going to be subject to such a barrage
of, like, fake and real signals
that I think it's going to be a real problem
for us to keep our heads straight.
And by the way, the way search engines would deal with things like that is, you know, Google had this system that was roughly called flight to quality.
Whenever there would be a new and trending topic, the search algorithm would basically say, I'm going to go to trustworthy sites, because if it is a brand new topic, the likelihood that someone spouts a conspiracy theory about it is super high.
So we all need mechanisms like that for us to really figure out what it is we trust.
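The "flight to quality" idea can be sketched as a re-ranking step: when a topic looks brand new, weight a site's long-term authority more heavily than per-page relevance. The weights, field names, and scores below are illustrative only, not how Google's actual system worked:

```python
# Sketch of a "flight to quality" re-ranker: for a breaking topic,
# lean on each site's long-term authority; for an established topic,
# lean on per-page relevance. All numbers here are illustrative.

def rank(results, topic_novelty: float):
    """results: list of dicts with 'url', 'relevance', 'authority' in [0, 1].
    topic_novelty: 1.0 for a brand-new trending topic, 0.0 for a stable one."""
    def combined(r):
        # The newer the topic, the more authority dominates the score.
        return (1 - topic_novelty) * r["relevance"] + topic_novelty * r["authority"]
    return sorted(results, key=combined, reverse=True)

results = [
    {"url": "conspiracy.example", "relevance": 0.9, "authority": 0.1},
    {"url": "newspaper.example", "relevance": 0.6, "authority": 0.9},
]
print([r["url"] for r in rank(results, topic_novelty=0.9)])  # authority wins
print([r["url"] for r in rank(results, topic_novelty=0.1)])  # relevance wins
```

The design choice is that novelty acts as a dial: the same two pages swap places depending on how new the topic is, which is the behavior the transcript describes.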
So how does this change, like, what the nature of search is?
Because you can build, you can build like something purpose built, like a character AI, right?
Where you can, like, chat with Thomas Jefferson.
Or I can go into like Bing and say, pretend you're Thomas Jefferson or a bard and have that conversation.
So does search now, I mean, it seems like the use cases blend, where search becomes part conversation partner and part discovery engine.
And these conversation partners, like a Character.AI, which lets you chat with historical figures, you
can ask it what the weather is today, and it should have some sort of discussion. So how does
search evolve from this moment? Well, I think we are living in a grand experiment. I don't think
anyone knows. I think there are several things that are happening at the same time. First, as you
point out, you can chat with a lot of chat bots. They have access to a certain amount of,
let's call it, like real information. And so we are going to type things into these chatbots
and expect answers that are backed by authority.
It'll make it much easier for us to get information
from, like, honestly, completely untrustworthy sites.
But there's also a second order thing that is going on.
People are understanding that there is a lot of traffic,
and therefore money, to be made from generating pages
and feeding them into these search engines.
We worried about it.
We did some experiments at Neeva
on whether you can detect content that's generated by AI.
But that's already happening.
And I'm sure you've seen articles that have come out recently that talk about, you know, if these language models learn on content that other language models have generated, beyond a point
they just generate trash.
So I think there is that real-time experiment going on around new content that is being generated, which is, of course, going to be reabsorbed back into
these language models and is going to be indexed by search engines.
Right, like the AI eating itself.
Yeah.
So it's like the AI eating itself.
So I don't think anyone, you know, these are long, powerful cycles with millions, if not,
hundreds of millions of people all actively trying to game it.
I don't hazard to pretend that I know exactly what the outcome is going to be.
Exactly.
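The feedback loop just described, models learning from other models' output until they "generate trash," can be caricatured in a few lines: each generation re-samples mostly from its own most common tokens, and lexical diversity collapses. This is only a cartoon of the effect reported in the research, not a model of any real training run:

```python
# Toy illustration of the model-on-model feedback loop: each generation
# re-samples mostly from its most frequent tokens (a stand-in for a model
# favoring high-probability text), so vocabulary diversity shrinks.
import random
from collections import Counter

def next_generation(corpus: list[str], keep_fraction: float = 0.8) -> list[str]:
    """Produce a same-sized corpus biased toward the most frequent tokens."""
    counts = Counter(corpus)
    # The "model" only remembers the more common half of its vocabulary.
    common = [tok for tok, _ in counts.most_common(max(1, len(counts) // 2))]
    out = []
    for _ in range(len(corpus)):
        source = common if random.random() < keep_fraction else corpus
        out.append(random.choice(source))
    return out

random.seed(0)
corpus = [f"word{i}" for i in range(100)]  # start with 100 distinct tokens
for gen in range(6):
    print(f"generation {gen}: {len(set(corpus))} distinct tokens")
    corpus = next_generation(corpus)
```

Running it shows the distinct-token count falling generation after generation, which is the "AI eating itself" dynamic in miniature.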
And let's say we stick with like what search is in general, right?
Like search, you know, let's say we stick with Google and there's a generative layer on it.
It still changes, right?
Like even if you're not using it for these like what is the meaning of life questions, now what happens?
I'm sure you've seen the generative AI Labs, right?
You write a question and it thinks for a second.
And then your entire window is content that's generated from Google.
So it goes from like a tool that you use to explore the web to effectively the entire answer.
You know, to help you find answers, now it becomes the answer.
Yeah.
So I'm curious, like, what you think that means for search.
And I know some of these are unanswerable, but I'm going to keep firing at you.
Well, yeah, what do you think it means for search?
I mean, you, let's face it, if that is the format that we want, and, you know, again,
from a personal experience, I just much preferred a four-line summary that told me what I wanted.
It was just fine, 95, 98% of the time.
There was no reason to click on anything and go.
elsewhere. So this entity that's been one of the main sources of traffic to all of our
sites, your site, whatever sites, you know, Neeva created, I created, it is just going to behave
very, very differently. Of course, there are going to be second order effects like a Reddit
saying, wait, wait, you don't get to do this. Right. You don't get to take my content and use
that to generate answers.
But there is a further cascade from there.
A bunch of Reddit moderators are going,
wait, wait, wait, wait, wait.
I don't understand how you make money off of content that we are going to create.
So, you know, we might yet come to a place where content creators, essentially, for their
own survival, have to essentially, like, collapse together.
And so there might very much be a consolidation when it comes to content.
and creation. Just like out of the, you know, all information should be free and the web
is free, let's face it, the two credible, pure information businesses, newspapers,
that have come out of that are the New York Times and the Wall Street Journal. And everybody
else is a little bit of an also-ran. So I think those kinds of consolidation effects are
most definitely possible. And I think there's a technology opportunity which we explore
towards the end of Neva, which is any content creator, especially if they are part of this
conglomerate type organization, is basically going to work as hard as they can to keep everybody
that came to them, meaning that chatbots are going to be the norm for how information is
discovered on a site once you get that person to come to that site. And part of the fun of
technology like this is on your site, for example, to be able to say, hey, you can talk to any
of the podcasts that I have put up here. Just ask a question, and we will fish out the right
segment for you. So I think there are all this, there's going to be like this cascade of
actions that are happening both at the center of Google, but also towards the periphery where content
is being created. And there's an impact for Google's advertising business as well. I mean,
you ran ads at Google for a number of years, right?
When the content all of a sudden comes down,
takes up the whole browser window,
and doesn't have you go on a fishing expedition
for the website you're trying to find,
A, there's less room for ads,
and B, you're not going to click on those blue links
as often as you would have otherwise.
What happens there?
I think there's more opportunity coming there.
I think, you know, it would not surprise me
if, you know, like the,
advertising arm of Google essentially comes up with the chatbot for how you should get your
local plumber or something. And maybe that becomes an entirely paid experience.
Now, Google's already gone back and forth. Google used to have organic shopping. I famously
combined organic shopping and paid shopping because I was like, all of this is commercial
content. I can't have four search engines on one search page with organic shopping,
paid shopping, paid text advertising, and organic search.
So I think you will see business model innovation.
I think part of the exciting technology that is being developed by lots of people,
this is something we are looking into from enterprise use cases as well,
is essentially API calling, driving tools.
So I think you're going to see experiences where you can, again, chat with a website
and be able to drive purchases off of it.
This was the kind of thing that was really hard
with the previous generation of voice technology.
There is hope that the technology has gotten significantly better
so that shopping becomes easier.
I don't know about you,
but I find shopping on the web to be incredibly difficult
if it is not like the 20 items that I keep buying from Amazon over and over again,
anything that is like meaningfully complicated
is actually really, really hard to find on the web.
And we have also gone to a world of, you shall talk to no one.
So most of the time, I'm just like lost trying to figure out what to do.
So I think there's a lot of business model innovation to come as well.
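The "chat with a website and drive purchases" pattern he describes is, at its core, a tool-calling loop: pick a tool, call it, feed its output into the next call. In production a language model would choose the tool and arguments; here a hard-coded two-step flow stands in for it, and the tool names and catalog are invented for illustration:

```python
# Minimal tool-calling loop: route a request to a registered "tool"
# (an API call in a real system), then use its output in a follow-up
# step. A real agent would let an LLM pick the tool and arguments.

def search_products(query: str) -> list[dict]:
    """Stand-in for a shop's product-search API."""
    catalog = [
        {"name": "wireless headphones", "price": 79},
        {"name": "wired headphones", "price": 19},
        {"name": "desk lamp", "price": 35},
    ]
    return [p for p in catalog if query in p["name"]]

def add_to_cart(product: dict) -> str:
    """Stand-in for a shop's checkout API."""
    return f"added {product['name']} (${product['price']}) to cart"

TOOLS = {"search": search_products, "buy": add_to_cart}

def handle(request: str) -> str:
    # Step 1: call the search tool with the user's request.
    matches = TOOLS["search"](request)
    if not matches:
        return "nothing found"
    # Step 2: feed the first tool's output into the second tool.
    cheapest = min(matches, key=lambda p: p["price"])
    return TOOLS["buy"](cheapest)

print(handle("headphones"))  # → added wired headphones ($19) to cart
```

The key property is the chaining: the second API call is driven by the output of the first, which is what made this hard for the previous generation of voice assistants.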
And Sydney, as you likely know, is also experimenting with things like, you know, sponsored sentences.
I don't know what to call them.
It's very strange.
It feels weird even to say it.
Yeah.
And there's a part of me that like, you know, my heart sinks when I see stuff like that.
I'm like, this is an assault on my reality.
So I think there's lots to come here.
This is a part of the reason why Google's kind of slow.
I think in an ideal case, you know, the way to deal with this is to say,
Alex, for all your informational queries, we have the perfect AI answer for you.
But the minute you type best headphones, my man, we're going to show you a bunch of links
and you're going to click and you're going to give us money.
Yeah.
So what about the competitive side of this thing?
So Google, you mentioned earlier, right?
They developed the transformer model, which has basically sprung a lot of this innovation.
And that was a model that they put the paper out.
They open sourced a lot of this technology.
Was that a mistake?
I would just think that you put the moat up, right, and say, all right, we have this technology.
This is probably going to change.
I mean, this conversation is probably going to change the way that we operate.
I'm keeping it.
I remember six years ago, it was not clear that this technology was going to be
transformational. And at the time, the currency for a lot of researchers was the ability to publish.
If you had told these people at that time that they could not publish, they would have gone
and worked for universities. They would have gone and worked for Microsoft. They would have
gone and worked for other people. And so, you know, yes, I mean, there's some altruism, but
the altruism at Google, as it should be, is always governed by a combination of, you know,
if we do this, we will attract higher quality researchers to work with us, and a bet that
if there is a commercial application of some paper, it will be as fast, if not faster than
anyone else.
So how did that not happen?
It's easy to say in hindsight that this was a mistake, but I actually think that Google got better about
publishing in the 2010s. In the first 10 years of Google, we really did not publish much.
And I think that drove forward a bunch of innovation that's generally been good for, you know,
for all of us. And remember, Google's lack of progress in generative AI, like, you know, that's
internally driven. Nothing stopped them from creating ChatGPT. They chose not to.
So what does that say about the inside of Google? I mean, it's a, it's a big place. There are
tons and tons of opportunities and people had been burnt by generative AI before. You remember
the Facebook chatbot as well as the Microsoft chatbot that went racist. You know,
And so they were, you know, they were cautious.
They were hesitant.
And those were fine qualities at a time where stability mattered more than breakneck innovation.
But now that they see the existential threat, clearly a bunch of people are pretty aggressive about getting the technology out.
Maybe you need a startup that has nothing to lose.
This is a beautiful thing about startups.
They have nothing to lose.
Well, they have something to lose,
just they'll disappear if they don't do interesting things and make money.
And so perhaps it needed a company like OpenAI to pave the way
for others to then come and figure out how to exploit it.
And the question is who actually is going to win on this.
There was a Google engineer in May who talked about the open source question,
and they said the uncomfortable truth is we aren't positioned to win this arms
race, and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating
our lunch, and that's the open source community. I mean, it's kind of interesting. You were running a
search engine, right? You were building this stuff. On open source models. Open source, yeah,
mattered. So what do you think about this claim from inside Google that, like, by allowing open source,
or not even allowing open source, that they don't have real moat against open source?
I think that's a misinformed opinion. Yeah. Simply because products matter.
Technologies don't win businesses; products win, and relationships win.
I don't think there's been much of a change to Google Search share.
That tells you how powerful the default position for Google Search has been.
It's still not trivial to make a ChatGPT or a search engine even if you want to.
Right.
And there is innovation in open source models, but again,
the blunt truth is that the very best of the models out there, whether it's GPT-4 or Claude's
biggest model, are a clear step ahead of the pack when it comes to quality.
When it comes to reasoning, when it comes to the quality of the text that they produce,
there is a big gap.
Having said that, there is a tremendous amount of excitement around open source models.
There's a lot of innovation, and there are a lot of researchers who felt cut out of how Google
and OpenAI and Anthropic operated, that are salivating and going, oh, wait, this is a chance
for us to, you know, be a big deal.
And so not a week goes by without another open source model coming out and people, you know,
claiming a substantial jump in metrics for some case or the other.
But the fact, at least today, is that the very largest models are ahead of the curve,
even though the open source models are catching up.
And there is a tremendous amount of innovation behind, you know, behind these models.
I think that is what makes this exciting.
I'm very unexcited by the prospect of like, you know, three companies having
a technology
that everyone on the planet
has to use again. These are
well-entrenched companies
and it would just add to their strength.
I think it's actually quite nice
that there is a lot more
competition. Having said
that, you know, search
still appears to be a game
between Google and Bing.
ChatGPT has had growth, but
it has sort of
flattened out. And
you know, I don't think
the mere presence of open source models threatens the existing businesses as much.
What do you think about the multimodal models? Right now, even Google's been hinting about the
fact that you're going to build models, they're going to build models or researching models
that are like not only capable of understanding text, but also can process images, maybe do other
things. I mean, that to me, like, you know, coming from an age where we really were working with
narrow AIs, AIs that are really good at one task.
The concept that there's going to be models that can deal with more than one task is
kind of mind-blowing to me and somewhat underrated, I think, in the popular discussion.
Or maybe I'm wrong.
I'm curious what you think about that.
I think multi-modal models will have a lot more business use cases where you're looking at
PDFs.
We announced a model at Snowflake Summit that can understand PDFs, extract diagrams from
them, and also understand the text and extract facts.
So I think there are, I see lots and lots of use cases for these models.
But to me, that is one of many dimensions of this.
I think API calling, being able to call actions and use the output of those actions to drive
further actions, I think that is just as exciting as the multimodal capability.
So I think it's very, very early.
I have a harder time seeing, you know, other than for things like image generation, how multimodal is going to make a humongous difference for something like consumer search.
I mean, think about the last time where you said, you know, here's an image and I have a question and do something interesting for me.
But there are tons of use cases.
This is where I think this core AI technology has these angles,
whether it's multimodal, whether it's tool use, that I think can meaningfully solve a tremendous
number of problems that we can't quite, you know, envision just yet.
My dream is that there is a nice model on my phone that I talk to that can copy information
from, you know, one app to the other, that can actually take a photo that I took and
attach it to the, you know, chat that I have with you, all just with voice instructions.
I think, like, you know, even compared to the web browsers, the phones that we use day to day are so,
whatever, 1970s, I'm waiting for the time when there is a real language model on the phone
that can truly help us do stuff much more easily than what we were able to before.
Yeah, that would be amazing. I mean, it's been the dream that big tech companies have been talking about for a while, and to actually come through would be cool. All right, let's go to a break. We're here with Sridhar Ramaswamy. He is the co-founder of Neeva, the search engine that we've been talking about throughout this conversation. He's also an SVP at Snowflake. How did the two fit together? Well, he recently sold Neeva to Snowflake. We'll tell the story and then go to a bit of a lightning round on the other side of this break.
Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like
the one you're using right now.
And we're back here on Big Technology Podcast with Sridhar Ramaswamy.
He's the co-founder of Neeva.
He's also a SVP at Snowflake.
And let's talk a little bit about the Neva story.
So, you know, we've been dancing around it a little bit in this conversation.
But you built a search engine.
There were no ads.
You subscribed to it.
You would sign in.
You had identity there.
And it seemed like a real challenge to go.
Google. You had come from Google. And, you know, it's kind of interesting because then, you know,
right as you're kind of hitting, you know, your moment where you're trying to figure out what the
company is going to be, this generative AI moment hits. And all of a sudden, there's a chance that
search is going to reinvent. And so it is, I find it kind of interesting that you then decided to
shut down the consumer side of it and then went and sold it to an enterprise company like Snowflake.
So can you tell us the story about like what happened there, what it's like competing with
a Google, what lessons you learned?
Yeah, I mean, lots of people know this.
It was still sobering to deal with it in practice,
which is that getting consumers to change their behavior about search is hard.
And the players involved, the browsers, the companies,
simply do not make it easy.
It is really, really hard.
Outside of the prescribed five, you can't change the search engine on Safari
to anything else, even today.
Why is that? That's just an Apple thing?
That's an Apple thing.
You just can't do it.
And so that was sort of the reality.
And we have talked about this before.
The people that tried us, a pretty decent fraction of them,
were perfectly willing to say,
it's like, ah, 50 bucks a year, that's fine.
They would just pay the $50 rather than the $5 a month.
And the thing that changed Alex in a pretty big way
was we went from an environment,
2019, 2020, 2021,
where a company could get funded
at like 300 times next year's earnings
when the revenue was small
to suddenly the expectation being,
oh, your valuation is 10 to 15 times revenue.
And ironically,
a whole bunch of enterprise opportunities
also popped up earlier this year, where people were interested in our crawl.
Generative AI companies, language model companies especially,
wanted access to a search API.
There were also a bunch of pricing changes where people wanted our search API to power search.
There were a whole bunch of these opportunities that came about early this year,
but our overall conclusion was that in the new 5% interest rate environment, we could not
catch up to a world where your valuation needs to be 10 times revenue.
We thought about this.
And when we had conversations with Snowflake, part of what was really exciting was the
core technology that we had built around search, which not only was a keyword-based,
quality-based system, but also had things like vector indexing built in, we realized that we had a chance
to have a big impact with search within Snowflake. And I've talked a lot about this. In my mind,
one of the key ingredients for believable AI, for referenceable AI, is a great search retrieval
system that sets the context for how a language model is going to generate answers.
So these are the two broad areas where we were very convinced, we meaning Vivek and I and
the Neeva team, but also the Snowflake team, in terms of the impact that we could bring to bear.
And that was the main reason the acquisition, you know, went through.
And we've been at this for four weeks.
There are existing teams in Snowflake that have been doing things like, you know,
building deep learning models to better understand documents.
But this is the area that we are working on, which is search and generative AI.
We showed some demos of what is possible.
Imagine a co-pilot experience built into every place where you interact with Snowflake.
But imagine also creating technology for our customers,
which are most of the Fortune 500, you know, the top 2,000 enterprises in the world.
How do we bring this technology to all of them?
So those are sort of roughly the areas.
And that was our motivation for why we decided to stop
the consumer journey and be part of a larger organization focused on enterprise data.
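The retrieval idea Ramaswamy describes here, a search system that sets the context a language model answers from, can be sketched minimally in Python. The keyword-overlap scorer, documents, and prompt format below are invented for illustration; a production system like the one he describes would combine keyword, quality, and vector-index signals.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance: count query terms that appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in retrieved context, with citation markers."""
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    return f"Answer using only the sources below, citing [n].\n{context}\nQ: {query}"

# Hypothetical corpus: the retriever filters out the irrelevant document
# before anything reaches the language model.
docs = [
    "Q3 revenue grew 12% in EMEA on strong enterprise demand.",
    "The cafeteria menu changes every Tuesday.",
    "Q3 revenue in APAC was flat quarter over quarter.",
]
query = "How did Q3 revenue grow?"
prompt = build_prompt(query, retrieve(query, docs))
```

The point of the pattern is that the model only sees the retrieved context, which is what makes the answer attributable back to sources.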
Yeah.
And then there's been this moment now where it's like, okay, the interest in ChatGPT
has kind of tailed off and people are wondering, like, have we hit the, you know, the end of
innovation here or is there more stuff coming? And some of the stuff that you're going to be able to do
with Snowflake, to me, seems to be the place where we could see some of the breakthroughs happen
on the existing technology and, I guess, incorporating the innovations. And I think you've made
this point in previous interviews, but I think maybe you could elaborate on it. It seems to me like
what people are going to be able to do is they're going to have all their data in Snowflake.
And then basically be able to speak with it.
So you could have, like, anybody in the organization access, you know, whatever part of the data is, you know, available to them and actually start to have a conversation and not have to run, like, complex coding algorithms in order to be able to make sense of what's going on in the company.
So is that what's going to happen?
Like, give some practical examples.
Yeah.
So, you know, Snowflake is proud of its mission to democratize data
access to everybody within the enterprise. There are companies like Fidelity that have made
Snowflake the centerpiece of their data architecture. What we're excited about being able to do
is use the power of generative AI on top of this incredible platform that's already been built.
It ranges from the simple, which is how do we help you generate much better SQL queries? We have
something called SnowSight, which is where you type in SQL queries.
I don't know about you, but I've spent a good chunk of my life writing SQL,
even at Neeva, and it's tedious.
It is tricky to get right.
We want to make it much easier so people that are doing this,
who are typically analysts, data engineers, can do this 10x faster.
But even more importantly, and this goes to the point that you're talking about,
is how do you make it easy for business users that don't necessarily understand the ins and outs
of the schemas and the tables and stuff like that to be able to ask business questions
and for Snowflake to then automatically decide, is that an existing dashboard?
Is that a SQL query that's been run before?
Do we need to write something new from scratch and visualize it?
It is that ability to offer up this data.
And this is everything from, hey, how is revenue doing by region for this quarter to more complicated questions?
How do you make that easily available to lots of people?
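The decision Ramaswamy sketches, routing a business question to an existing dashboard, a previously run query, or a fresh text-to-SQL generation, could look roughly like the toy router below. The catalogs, names, and matching logic are hypothetical, not Snowflake's actual API.

```python
# Hypothetical catalogs of known assets; a real system would match
# semantically, not by substring.
DASHBOARDS = {"revenue by region": "dash_revenue_region"}
SAVED_QUERIES = {
    "headcount by team": "SELECT team, COUNT(*) FROM employees GROUP BY team"
}

def route(question: str) -> tuple[str, str]:
    """Decide how to answer: dashboard, saved query, or generate new SQL."""
    q = question.lower()
    for title, dash_id in DASHBOARDS.items():
        if title in q:
            return ("dashboard", dash_id)
    for desc, sql in SAVED_QUERIES.items():
        if desc in q:
            return ("saved_query", sql)
    # Fall through: nothing on file answers this, so new SQL is needed.
    return ("generate_sql", f"-- would call a text-to-SQL model for: {question}")

kind, target = route("Show revenue by region this quarter")
```

The interesting engineering is in the first two branches: reusing vetted dashboards and queries keeps answers consistent with what analysts have already blessed, and only the fall-through case pays the cost (and risk) of model-generated SQL.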
But there's definitely more. Part of the transformation that Snowflake has been going through over the past, like, five, six years now
is to really become the data cloud, a platform not just for the data,
but also to build applications on top of the data.
And so we bought a company that makes it super easy for you to write visualization programs
on top of this data.
So it's almost a complete programming stack.
These are the things where I think, you know, like our bet is that we can 10x the number
of users that can use the platform, 100x the number of queries that are going to be run
on the platform.
But just as importantly, think deeply about how do we make this tech
available to all of, you know, our customers.
Part of the problem right now is that,
I guess there are big language models like GPT-4,
but pretty much most of the time,
you're sending over your proprietary data over to them.
And at Snowflake, Vivek and I are particularly excited
about all of the great things that are happening with open source models
because we want to make it really easy for our customers
to be able to then deploy them
within their Snowflake security perimeter
and be able to do meaningful things with them.
This basic arc, where everything in Snowflake,
whether it's writing a SQL query,
visualizing or interacting,
gets an assistant, is just the first part,
but lots of other applications,
including things like if you have a table
with a set of documents inside it,
they can even be sitting in cloud storage
and you can just point Snowflake to it.
How do you create a quick conversational
interface, where instead of having, you know, I don't know how you search through
PDFs, but my favorite method is command F, where I put in a word.
It's super painful.
You should be able to simply talk to it and say, if you have earnings reports, how did this
company do?
What were the growth rates for the past four quarters?
And then the underlying model goes and figures this out across a set of documents, shows
it to you, but also shows the citations so that you can be sure
that it is the right answer.
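The earnings-report example above can be illustrated with a toy pipeline: pull a figure out of each report, compute quarter-over-quarter growth, and keep a citation back to the document each number came from. The reports, figures, and extraction regex are invented for the sketch; the real flow would use a language model over retrieved passages rather than a regex.

```python
import re

# Hypothetical report snippets standing in for PDFs in cloud storage.
reports = {
    "q1_report.pdf": "Revenue for Q1 was $100M, up on services.",
    "q2_report.pdf": "Revenue for Q2 was $110M despite headwinds.",
    "q3_report.pdf": "Revenue for Q3 was $121M, a record quarter.",
}

def extract_revenue(text: str) -> float:
    """Find the first '$<n>M' figure in a report snippet."""
    m = re.search(r"\$(\d+(?:\.\d+)?)M", text)
    return float(m.group(1)) if m else float("nan")

# Order reports chronologically (alphabetical works for these names).
figures = [(name, extract_revenue(text)) for name, text in sorted(reports.items())]

# Quarter-over-quarter growth, each rate paired with its source document
# so the answer stays citable.
growth = [
    (curr_name, round((curr - prev) / prev * 100, 1))
    for (_, prev), (curr_name, curr) in zip(figures, figures[1:])
]
```

Keeping the source filename attached to every derived number is the cheap version of the citation behavior described above: the user can always click through and check.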
That would be cool.
I would use that for sure.
Okay, so do you have time for a quick lightning round before we head out?
Let's do this.
Okay.
First thing, where do you think this is going to leave us on jobs?
Are we going to lose jobs for this?
Are we going to, I mean, it seems like, you know, everyone said ChatGPT is going to take
your job.
It hasn't really happened yet.
Why is that?
Because change is slow.
I think definitely when it comes to things like customer support, you know, you need,
you need tools, you need much better retrieval systems, you need much better action-taking systems.
I definitely think that there are a whole class of white-collar jobs that are going to be affected
in a pretty significant way. Hopefully there are new jobs that are going to be created,
but, you know, one can't bet on stuff like that. Simple information functions 100% are going
to be done better by AI models.
Elon Musk is starting a company called xAI.
It's his answer to OpenAI.
What do you think is going to happen there?
They have competent people.
They've got, you know, people
from Google and the University of Toronto.
Yeah, yeah, we met Igor.
Plenty of GPUs, yeah.
Yeah, yeah.
You know, I don't know what to say.
I think it's a way to stand out.
Let's face it.
Things like how you make AI models safe is a little bit of an art and, you know, a science.
And Elon sees a way in which this new company can stand out.
But competition in general is a good thing.
I'll mention that the work that Facebook is doing to open source some of their models
or to even have them be commercially usable by lots of people is an exciting development
for everybody.
So my attitude generally is, like, the more
the merrier. Competition is good.
I don't know about you,
but I love the streaming providers.
There's lots of competition, lots of choice.
Do I really want like five subscriptions?
Probably not, but I'm glad that they're there.
Why does everyone who is worried about the future of AI
and AI wiping us out seem to be working on their own project
advancing the state of the art in this technology?
You included.
I sort of genuinely do not know.
I found things like the call for a moratorium for six months to be absurd.
And the people that were starting new efforts in AI were some of the
signatories to that effort.
Don't get me wrong.
You know, like, yes, this tech can get out of hand.
But the way I would handle that is to make sure that existing laws we have against discrimination
or illegal use are also applied equally aggressively to these models.
I don't think stopping work on AI or declaring it to be the end of humankind is the right way to think
about it. There are lots of positive ways in which AI can be used and 100%. As I said earlier,
AI is going to be an assault on our reality. So there's a lot of public education that needs
to happen simply about what is believable. But you can't necessarily stop technologies like
this, especially ones that can no longer be centralized. I'm sure you know this. You can fine-tune
a model for 500 bucks in one evening without a whole lot of technical skill. And so,
you know, I think this technology, similar to the internet, is going to be widely available
to a lot of people, is going to produce some, you know, unforeseen consequences. We have to
be ready for it. Finally, what makes Nvidia so special? They made an early bet. It's, I think,
such a fascinating story. Remember, for much of the last 30 years, we were like,
yeah, they make GPUs for games.
It's like such a niche industry.
I think it's one of these cases where it's like there's a lot of right place, right time.
The same things that made them really good for doing graphics processing, which is a lot of, you know, fairly simple operations done at massive scale.
You know, graphics processing is like drawing a lot of triangles.
on your screen.
But the same things,
like matrix multiplication,
were wildly
applicable in the era
of AI,
and pretty much the world
has centralized around them.
It's an amazing story.
Sridhar Ramaswamy. Thank you so much for joining.
Thank you, Alex. Great to chat.
Always great to talk. Thank you, everybody,
for listening. Thank you, Nick Gwattany,
for handling the audio, LinkedIn, for having me as part
of your podcast network, and all of you for
listening. We'll be back on Friday breaking down the news, as we do always. Thanks again for
listening, and we'll see you next time on Big Technology Podcast.