Moonshots with Peter Diamandis - Should AI Be Open-Sourced? The Debate That Will Shape Everything w/ Mark Surman | EP #136
Episode Date: December 12, 2024. In this episode, Mark and Peter discuss why Open-Source is the future of AI, and how established companies should be thinking of AI. Recorded on Oct 15th, 2024. Views are my own thoughts; not Financial, Medical, or Legal Advice. Mark Surman serves as the President of the Mozilla Foundation. As President of Mozilla, he leads efforts to build a more open, equitable, and trustworthy internet, focusing on advancing ethical AI through initiatives like Mozilla.ai, a commercial AI R&D lab, and Mozilla Ventures, an impact venture fund. Previously, he spent 15 years as Executive Director of the Mozilla Foundation, growing it into a global force for digital rights, open-source advocacy, and internet health. A recipient of the prestigious Shuttleworth Fellowship, Surman has delivered keynotes on five continents and is regularly featured in major media outlets discussing the future of AI, open-source technology, and internet privacy. White paper on Public AI: https://blog.mozilla.org/en/mozilla/ai/public-ai-counterpoint/ Creating Trustworthy AI: https://foundation.mozilla.org/en/insights/trustworthy-ai-whitepaper/ Status Update on Creating Trustworthy AI White Paper: https://foundation.mozilla.org/en/research/library/accelerating-progress-toward-trustworthy-ai/whitepaper/ Pre-order my Longevity Guidebook here: https://longevityguidebook.com/ ____________ I only endorse products and services I personally use.
To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter Get 15% off OneSkin with the code PETER at  https://www.oneskin.co/ #oneskinpod Get real-time feedback on how diet impacts your health with https://join.levelshealth.com/peter/ _____________ I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
Transcript
For someone who hasn't really captured the idea of open source, what does that mean?
Most of the things you use today, whether that's made by Google, made by Amazon, made
by Meta, or made by some startup, or made by some weird artist, or made by your cousin,
in most cases there's a bunch of open source underneath it.
What are the motivations for someone to build open source software?
The mythological early open source motivation
was scratch my own itch.
I'm using a piece of software.
It doesn't do the thing I want it to do.
So I'm gonna modify it or I'm gonna build something.
And like, I'm a nerd, I'm gonna build a thing
I wanna share with other nerds.
And I think that is still a part of it.
Do you believe we actually have privacy?
Oh, it's such a tricky question.
We struggle with, what is privacy now?
We think what privacy means has got to evolve.
Welcome to Moonshots.
Today we're gonna be talking about
the open source movement with Mark Surman,
who's the president of Mozilla Foundation.
As president, he leads an effort to drive
more open, equitable, and trustworthy internet,
focusing on advancing ethical AI.
What does that mean?
What does open source mean?
That's our conversation.
We're going to be diving into a paper they just released
called Public AI, Making AI Work for Everyone.
I'm going to put the link in the show notes.
And he also put out a paper recently on trustworthy AI.
I'll put that in the show notes as well.
So if you've wondered what open source means,
if you've wondered how it's going to impact AI,
is Llama truly open source? Dive in with me here.
All right, let's jump into the conversation with Mark.
And if you like this podcast and the people I'm bringing to you,
please subscribe.
All right, welcome Mark Surman.
Before we get started, I want to share with you the fact that there are incredible breakthroughs coming on the health span and longevity front.
These technologies are enabling us to extend how long we live, how long we're healthy.
The truth is, a lot of the approaches are in fact cheap or even free.
And I want to share this with you.
I just wrote a book called Longevity Guidebook that outlines what I've been doing to reverse
my biological age, what I've been doing to increase my health, my strength, my energy,
and I want to make this available to my community at cost.
So LongevityGuidebook.com, you can get information or check out the link below.
Alright, let's jump into this episode.
Mark, good to meet you in person, sort of, in this virtual world we live in.
Sort of in person, yeah.
Yeah, and I'm in...
This is as in-person as most life gets these days.
It's crazy, right? We forget how convenient it is that we can live this virtualized life. And for
me to have met you and have this conversation in the past, I would have had to literally jump on
an airplane, fly to Toronto, in this case New York where you are right now. But we're living in this extraordinary world.
So this conversation for me is an important one.
And I think I want to dive deep to understand what you think of as the open source movement.
What does it mean?
Why does it exist?
What are its advantages and what are the trades? We hear about open source a lot.
I've had conversations in the past with Emad Mostaque about this, with Elon Musk, with others.
And as president of Mozilla Foundation, it's an area that you are advocating for.
Let's jump straight in. For someone who hasn't really captured the
idea of open source, what does that mean? I think at the very top level, it means
that for the last 20 years, 25 years, however you want to talk about it, as we've
started to build out like billions of us this digital world, that there is a set of
Lego blocks that anybody who's capable of using them or wants to learn how to
use them can build something out of. I think that's a, I'll define open source
in a second, but I think that's really the key piece is that underpinning most
of the things you use today, whether that's you know made by Google, made by
Amazon, made by Meta, or made by some startup,
or made by some weird artist, or made by your cousin,
in most cases there's a bunch of open source underneath it.
And that open source underneath it is the Lego kit
that has let so many creative people, so many startups
like go fast because they've got a bunch of stuff
they can take for free to start building something.
And then on top of it, they might build something that's closed or they sell to you or whatever.
So I think in that Lego kit, I mean, there was a recent Harvard study that talks about
open source over the last 20 years, creating $8 trillion in value.
And so a critical part of the digital economy we're living in with the creativity part and
the money part of it come from the fact that we have this Lego kit.
And so, like, what is... well, go ahead, Peter.
Yeah, I was gonna say, so we've got this, I think Lego kit's a great description of
it, chunks of software that are available for anyone to use
without cost for free? Is it always for free?
Yeah, there's four things that it means. And it's not always, like, always free as in beer, to use an old open source trope.
It's free as in speech. And so the four things that make a piece of software
open source are: you can use it without any encumbrance,
you can use it without paying for it. You can study it, so that means you can look inside, it's transparent,
you can see how it works.
You can modify it, so you can turn something that you pick up into something else. And you can share it again.
And being able to share it again also means you can make money on top of it, as long as you're not charging for the thing itself.
And so that's what makes something open source underneath.
There's a lot of how does that play out in AI
debate going on right now, and we can get to that too.
But that's the basics.
Mark, thanks.
But let me dig a little bit deeper here
because I really want to understand
the core motivations and the core principles of open source
before we dive into AI, which is equally important there.
What are the motivations for someone to build open source software?
Is it just curiosity and desire to help?
Then, one other part of the question I have: even the open source software that someone is building, those Lego blocks right now,
those are built using
non-open-source elements, right?
I mean, at some point you go down and you're saying someone's paying for the bandwidth, someone's paying for the compute,
someone's paying for the stuff that DARPA
spun out and then Microsoft took over. So it's not,
as the old principle goes, turtles all the way down. It's not open source
all the way down. So there's some open source layers, right? So help me
understand that, and then the core motivations for people playing.
Well, I think it's important, and we'll get into this, I'm sure, to distinguish between it costs money
and it's open source.
So all the way down, there's stuff that costs money,
including our time is worth something,
even if we're not getting compensated in cash
or the bandwidth that can do the DARPA, all this stuff.
And in fact, in some ways, that cost of building it
can either get privatized or can turn into public goods.
And so that's the nice thing about the history of DARPA and the internet, right?
Is this big public investment, which turns into these open protocols, not open source,
but open protocols, which run the internet, which are public goods.
And so they actually take public dollars, tax dollars,
turn into something that actually is open that everybody gets the benefit from.
So I think we want to just be careful there. In terms of what are some of their motivations,
they really vary. So if you think about what are some of those building blocks,
Linux or Apache or Firefox or Wikipedia, which is not software but let's call it open source,
I mean, it is open source, it's just not software. Or an open source AI model like
OLMo from the Allen Institute. There's a lot of motivations. I think the mythological early
open source motivation was scratch my own itch, right? I'm using a piece of software, it doesn't
do the thing I want it to do, so I'm gonna modify it or I'm gonna build something.
And like, I'm a nerd, I'm gonna build the thing I want
and share it with other nerds.
And I think that is still a part of it, right?
Is like, I just need to make a thing for myself
is really the heart of it.
I think Linus Torvalds, when he started Linux,
it was some combination of: I want a version of Unix
that works the way that I want it to work,
and I'm going to make it.
But then, as you go on, it absolutely is maybe a more collective or corporate or even commercially
driven utility.
It's the scratch-my-own-itch, but at the level of a Meta or a Google or an AWS, which is: I
need a software stack that's going to run my cloud
computing platform, that I can build a social network on.
And lowering the cost and increasing the reliability of that is my absolute goal.
I don't want to care about that thing.
I just want to make sure it works and it costs me as little as possible and that it's as
flexible as possible. So that's why, say, something like Linux, which is really under most of
the things we use on the internet in terms of cloud computing. All of those
companies I just mentioned put huge amounts of money or engineers back into
Linux, because it's the plumbing. And if they all basically collectivize the
cost of the plumbing, it becomes the standard. It just works. It's much cheaper than, you know,
buying Solaris or Windows NT, which don't exist anymore, the
proprietary things from the past, or than building it themselves. So I think there's
a next level of scratch-the-itch, which is: get the thing I want for cheaper, in a
way that works for me. And collective effort is the way that that works.
And then, you know, there is an end where somebody's really building open source as
a business strategy on its own.
And so, you know, I'm Red Hat and I take a lot of stuff that's already been built,
leveraging everybody else scratching the itch.
I'm building some other open source to glue it together so it's easier to use Linux.
And then I'm not charging for software;
I'm charging for support, I'm charging for other kinds of services that allow people to use what's
free in a way that's more effective. So I think you kind of have the personal itch, you have the
collective itch, or the kind of infrastructure play, and then you have an open source business.
So it's going to be interesting as we dive into the conversation around AI, and I know
I'm excited to hear your thoughts.
You recently wrote a paper called Public AI, Making AI Work for Everyone, and I want to
dive into that, because there's been a lot of advocacy for open source and transparency in AI, and at the
same time there's been a lot of, how do I put it, giga-dollars put into the field, a
lot of capital flowing in. It's an expensive and energy-intensive field, so
I want to understand how the two can co-exist.
How do you make it happen fast enough, safe enough
with the proper motivations?
And how do you get the smartest people,
sort of the meritocracy to work on it
with the right incentive?
So this is gonna be our conversation
for those listening who wanna understand where we're gonna go.
And I'm gonna play
both the fan and a skeptic on both sides of the equation, Mark, if you don't mind.
Yeah, so, you know, I've started, I don't know,
seven or eight nonprofits over the years and I've sworn that I'm never gonna start
another nonprofit.
And the reason for that is the inefficiency
of what I see as nonprofits.
And I run, I'm executive chairman of the X-Prize,
which I think is a highly efficient, leveraged model.
We've put up, since we launched, 30 prizes over 30 years, $600 million in competitions.
But we've got to struggle day in and day out to raise the money, to convince somebody,
either with in-kind or time or donations, to fund us,
versus, you know, building a business that is churning revenues that I reinvest.
I've had an argument for years that Google, even though it's a for-profit,
has done such extraordinary benefits for humanity, making knowledge,
leveling the playing field, making it accessible, democratized and demonetized around the world, that if you tried to create that kind of capability in a nonprofit, I don't think it would have been possible, given the amount of capital needed. So how do you...
You can make the point first off that open source doesn't mean nonprofit just to begin there.
You made it for me, Peter.
You made it for me.
Yeah.
So I mean, open source doesn't mean nonprofit.
So we go back to your original question of like, you know, how does, maybe I'll take
as implicit, open source compete or provide value or whatnot in a world where there's
so much money on compute.
It's so expensive to do this.
We need the brightest minds, all of that.
And I think the answer is twofold.
One, our vision of open source and certainly public AI,
which is this broader concept,
which we can get to in a little bit,
isn't that it's exclusive from proprietary or closed things.
Open source has always done well in a world
where it is a counterpoint, a complement, right? Or something actually that comes a
little bit later and replaces some commercial innovation. So Linux really is
the server operating system and you don't have the Solaris's and the Windows
NTs and that's because it just becomes such a commonplace thing that it kind of
commoditizes as open source. So it's, you know, it's not to say open source
exclusively. And then on that question, like, how does that get financed, or how
does it come to be? Because it's not going to just be somebody in the basement
who's going to create the Linux of the AI era, although lots of those people are
doing cool stuff. You already see with Llama, for example,
that Meta sees a very clear interest, and NVIDIA, who's helping them, and lots of other companies
who are in the open source AI space, and IBM, and us, the idea of moving early to commoditize
and collectivize what is effectively becoming commoditized infrastructure.
A lot of it, I mean, I think you see, the core innovations of transformers and LLMs are going to become pretty commonplace.
They become commoditized.
Isn't it crazy that the world's most powerful technology is effectively free?
Yeah, I mean, that blows my mind.
I mean, if you'd gone back a decade and said, listen, you're going to have these things
called large language models and you're going to be able to ask any questions, ask it to
create video clips, images, and it's perfect.
And it's like the world's knowledge is at your fingertips.
And it's got...
And then a company like Meta is actually putting it out there for free, where the others aren't.
How much would you have guessed a decade ago that a license to that for an individual would cost, right?
I mean, it's insane. It's free.
Well, what's interesting actually is, a decade ago, if I'd imagined this technology,
I would have imagined it was free. Two years ago, I would not have.
So I think we were actually in a moment, as ChatGPT exploded, where it looked like all this stuff was going to be thrown
behind APIs.
Nobody was going to release it at all. Because there's a lot of open source in AI all the way along, there still is today, and
that's how the innovation moves so fast: you have researchers, scientists
talking to each other, writing papers, sharing their code.
So 10 years ago, five years ago, I would have imagined there's a lot.
And I think we got to a moment where really stuff got enclosed and locked down.
You think about OpenAI taking the transformer paper, innovating, really boxing everything
up, productizing it, which is good in terms
of end user value, and moving really quickly. Then Llama was able to get out there and provide
basically the same thing, because I think you see that Meta is playing the Linux version
of that, or the Linux play.
I was just saying, what's Zuck's motivation here?
Is it catching up?
Is it just taking a sort of adjacency that gives him a larger fan base or a different
user base?
Why did he do that?
I think Zuck's motivation is the same motivation
for putting money into Linux, which is:
if I'm Meta, in the LLM business,
I'm not trying to be OpenAI.
I'm in the metaverse business or the social media business
or whatever business they're gonna be in.
And I need this current generation of technology
to work as reliably and at as low a cost as possible.
So they put a ton of money in upfront.
It's a big risk to, you know,
try to have Llama be the Linux of the LLM era.
But they're not doing it alone; you saw the thing with Jensen and Zuck, where, you know,
NVIDIA is supposedly putting 200 engineers into Llama.
And so I think the idea is, if it becomes the standard,
everybody pitches in, and it's a kind of Linux play.
And so that's the motivation.
I think that the tricky thing about Llama
and Meta with that is they've put in this thing
that makes Llama in our view, not open source,
which is this weird license piece that says
at 700 million users, it's no longer free for you anymore.
It's no longer open source.
And open source licenses don't have those caps.
It kind of breaks the covenant of open source.
Imagine if Zuck had built Facebook on Linux,
and when he hit 700 million users,
Linus Torvalds shows up in Palo Alto,
knocks on his door,
and says, sorry.
With a bag.
With a giant bag.
It doesn't work that way.
And so I actually do think other things will emerge, and this actually gets to your question
about nonprofits in a second, that will be pure play open source and that will actually
become the dominant infrastructure.
Did you see the movie Oppenheimer? If you did, did you know that besides building the atomic bomb at Los Alamos National Labs,
they spent billions on bio-defense, the ability to accurately detect viruses and
microbes by reading their RNA?
Well, a company called Viome exclusively licensed the technology from Los Alamos Labs to build
a platform that can measure your microbiome and the RNA in your blood.
Now, Viome has a product that I've personally used for years called Full Body Intelligence,
which collects a few drops of your blood, spit, and stool, and can tell you so much
about your health.
They've tested over 700,000 individuals and used their AI models to deliver members critical health guidance like what foods you
should eat, what foods you shouldn't eat, as well as your supplements and
probiotics, your biological age and other deep health insights. And the results of
the recommendations are nothing short of stellar. As reported in the
American Journal of Lifestyle Medicine, after just six months of following
Viome's recommendations, members reported the following.
A 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and
a 48% reduction in IBS.
Listen, I've been using Viome for three years.
I know that my oral and gut health is one of my highest priorities.
Best of all, Viome is affordable, which is part of my mission to democratize health.
If you want to join me on this journey, go to Viome.com slash Peter.
I've asked Naveen Jain, a friend of mine who's the founder and CEO of Viome, to give my listeners
a special discount.
You'll find it at Viome.com slash Peter.
Before we go any further,
I'd love you to give us a little bit of a background
on Mozilla Foundation.
It's the context in which you come,
it's the work that you're doing,
and I think folks should have a little background here.
Please.
Well, it's interesting,
because we get back to that question of motivation.
Mozilla started in 1998. I wasn't around;
I was a fanboy from the outside, as
the open source project on top of the Netscape browser source code.
So, you know, Netscape was losing to Internet Explorer, and it'd been the first big browser after Mosaic.
I remember well, and so they thought well if we put it out there maybe other
people will run with it and you did have Red Hat and Sun and IBM, a bunch of other
people contributing to the open-source version of Netscape which was called
Mozilla but what really drove it was for about five years before Mozilla
Foundation even existed it was just really a bunch of hackers around the world
trying to beat Microsoft. And so actually the motivation question was,
these guys stole the web from us. Bill Gates didn't even want to have the web. He didn't believe in it.
And now they had 98% of browser market share, they've got ActiveX, so you can only use web pages really on Windows or
inside Internet Explorer. And you know, so there was an army of very angry geeks that
said, Microsoft, you don't own the internet, we're gonna show you. And
eventually, it takes them five years, they get from this clunky Mozilla browser
that was just a kind of slightly evolved version of Netscape to Firefox.
And Firefox is the kind of breakout of like, let's make it small, let's make it sexy for
people, let's put in pop-up blockers, and let's make sure it does JavaScript really
well, which sounds boring today, but was radical because what it became was the thing that
allowed people to develop interactive web apps
instead of dumb web pages.
And the joke kind of goes, like, what's the best version of Firefox ever released?
Internet Explorer 7.
Because that's the one that had JavaScript in it.
And, you know, until you had all the browsers working with JavaScript and Ajax,
you couldn't do Gmail, you couldn't do Facebook, you couldn't
do Twitter.
So that was the set of people that really wanted the web to be open.
So Firefox makes an emergence and it takes on market share from Internet Explorer, but
then Chrome comes along and begins to dominate.
What was it that... why didn't Firefox dominate,
and what allowed Chrome to come in with such a fury?
Well, the cheeky second answer to the joke about Firefox
is the best version of Firefox ever is Chrome.
Okay.
Because really the goal of Mozilla
is that the whole system is open.
And of course, we need to have enough market share,
which we don't today with Firefox, if you ask me,
to be influential in our values.
So the other thing about Mozilla is,
it wasn't just that kind of passionate set of people
who wanted to counteract Microsoft,
that only focus on a browser.
They had a bigger dream, they had the Mozilla Manifesto,
and that was really about the internet
being in service of all humanity.
And so, the idea is we've got enough openness
to shift the market towards being open
and towards the kind of tech working for people.
And I would say, Chrome came as a third real platform after Internet Explorer
and Firefox.
That was a boon.
There was a period where we really lost the ball in terms of keeping up with the tech.
And it's a lesson that is if you want to have this mission, and it's going to be true now
in AI, of making sure that the tech has certain values,
you also have to be on top of the tech being great.
And we didn't always stay on top of that.
Isn't it true you also have to be ready
for verticalization?
I mean, Chrome has succeeded as has Gmail,
as has Android, and a thousand other things, Maps,
because Google was large enough to verticalize and build
interdependencies on these things that made it super convenient for folks to use.
And I can imagine we're going to see even more of that with AI in that verticalization,
that deep stack up and down the user experience.
Yeah, it's an interesting question about
where verticalization, which is a natural tendency,
and ultimately monopolization,
which is something we don't want
and is illegal in our society.
That's a tendency that will emerge from companies
that are trying to get as much market share
and as many things as possible.
And disruption interacts with that, because Microsoft, also found
to be a monopolist in its era, had totally verticalized, right?
They owned the server room, the databases,
they were trying to own the content,
they certainly owned the browser,
they owned the office suite, all of that stuff.
And they got disrupted by the web.
And so one of the interesting questions is, you know, the web era and the smartphone era are verticalized as
well. There's real tendencies towards that verticalization in, you know, in the AI
era. I mean Gemini actually, even being late to the game, I think has a real
advantage being built into all of the Google suite. What will come to disrupt that?
And is it, you know, have we gotten so far along
that disrupting the verticalization becomes so hard
or almost impossible?
It's a really critical question right now.
You know, I, call me cynical,
but I think every single company
and every single product eventually becomes disrupted
because they become comfortable.
I'll never forget, about six years ago, Jeff Bezos goes on an investor call and says, yes,
in 30 years, Amazon will not exist anymore.
Some quote like that. I don't know if he was trying to scare his employees or just
dock his stock price, but everything gets disrupted, right? FedEx, I mean, the dominant, you know, overnight carrier,
as we see what Amazon has built now.
One of the questions is: is there a situation where
a company, a for-profit company, with a great leader, motivated employees, just does a better
job, and they can reinvest, and they can, as a meritocracy, just continue to increase their
capabilities?
And, I mean, there's a difference between monopolistic behavior and being a
monopoly, right? If you have the best product in the world,
and everyone loves it,
and guess what, you know, you've got 99% market share,
is that monopolistic behavior, or is it a monopoly, or are you just providing a fantastic product and service?
This is the tricky question before the courts on a number of topics, including, right now, search,
right? It's a great product. So, I mean, that's, I guess, as we evolve our monopoly laws,
for us to figure out. Ultimately, what you want from antitrust regulation is
competition and the opportunity for people to come in and disrupt. Sure. And that's where, I know you said you're being cynical; I think you're being hopeful when you say
ultimately every company is gonna... No, I believe it, because
every company gets fat, dumb, and happy to some degree, and new technologies constantly, I mean, that's the laws of
physics or technology, we just have constant, you know, we're in a super
exponential period. And I'd say the day before something is truly a breakthrough,
it's a crazy idea and we get disrupted by crazy ideas that a company that's
reporting on a quarterly basis is unwilling
to take, but some entrepreneur someplace is willing to, they have nothing to lose, so
we'll take the bet and oh my god, that's incredible.
Well, probably maybe five or ten years before, not the day before. Think of AI
from 2015 to...
Yeah, I'm being...
But I think your point is right.
One of the things I want to pull at, and it goes back to a
question I didn't answer earlier, but also on this, like, companies are going to want to grow in this
way, there also is a question of disruption to what end, right? So some people will come in and
disrupt because they've got a great idea, they want to build a company. And like, capitalism and
companies are great at solving certain problems and creating public good and even public goods, you know, things that
are shared in common, contributing to Linux, all that stuff. But there also are
things, to your question about not starting a non-profit, that companies are
just never gonna be good at. And so that's where, when we talk about public AI
as a
counterpoint, I don't think that a company is ever going to be
good at what has happened with Linux, which is a collective
public good that all the companies who build
on it, and researchers, and governments, build on
top of. And for all the Linux Foundation's pain-in-the-ass
struggles with getting enough members,
that is a form of social organization that lends itself to being an independent third
party.
And unless you want to get rid of governments altogether, I mean, that's another form of
social innovation that has its purpose.
I would call myself a libertarian capitalist, so that's where I bend towards.
Yeah, and I'm an old punk anarchist.
And so, you know, sometimes we'll have some common cause there.
But I think that the thing is, you know,
what social forms or social organization
are helpful to what innovation and accelerating what innovation.
So, you know, are we happy that,
in addition to NBC, ABC, CBS, we also had PBS in the broadcast era?
I think it took on a role that the commercial players were never going to take on.
And so, to me, that's the question at any point, including this point.
But innovation comes in and YouTube comes online.
Great.
And then we don't need PBS or BBC or CBC.
Exactly.
And so when we have a government formed provider,
I would rather have complete and total open access for anybody
to provide whatever they want.
But that comes later in the process. And I think the question to always be asking is:
what's not going to happen if you just leave it to the market?
I think that's a very fair and a very important question for humanity's benefit.
Everybody, I want to take a short break from our episode to talk about a company that's
very important to me and could actually save your life or the life of someone that you love. Company is
called Fountain Life and it's a company I started years ago with Tony Robbins and
a group of very talented physicians. You know, most of us don't actually know
what's going on inside our body. We're all optimists, until that day when you have a pain in your side, you go to the physician or the emergency room, and they say, listen, I'm sorry to tell you this, but you have this stage three or four going on.
And you know, it didn't start that morning.
It probably was a problem that's been going on for some time.
But because we never look, we don't find out.
So what we built at Fountain Life was the world's most advanced diagnostic centers.
We have four across the US today and we're building 20 around the world.
These centers give you a full-body MRI, a brain and brain-vasculature MRI, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a Grail blood cancer test, and a full executive blood workup.
It's the most advanced workup you'll ever receive.
150 gigabytes of data that then go to our AIs and our physicians to find any disease
at the very beginning when it's solvable.
You're going to find out eventually.
You might as well find out when you can take action.
Fountain Life also has an entire side of therapeutics.
We look around the world for the most advanced therapeutics
that can add 10, 20 healthy years to your life.
And we provide them to you at our centers.
So if this is of interest to you,
please go and check it out.
Go to fountainlife.com slash Peter.
When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out to us for Fountain Life memberships. If you go to fountainlife.com slash Peter, we'll put you at the top of the list.
Really it's something that is for me one of the most important things I offer my entire
family, the CEOs of my companies, my friends,
it's a chance to really add decades onto our healthy lifespans.
Go to fountainlife.com slash Peter.
It's one of the most important things I can offer to you as one of my listeners.
All right, let's go back to our episode.
So I want to jump into your public AI, making AI work for everyone.
And I pulled out five points, and I'd love to dive into them a little bit.
I think that entrepreneurs listening to our conversation here, there are a few different
debates going on in the AI world.
One is, will digital intelligence destroy humanity?
That's a great debate. Not going to have that conversation right now. Will it take all the jobs?
Thank God. I'm happy to have it, but thank God we're not having it right now.
Yeah, will AI take all our jobs? Will it help us create longevity and fusion? The answer is yes, but we'll get back to that later. But the question of how do we assure
safety and
transparency and who's responsible for that? I mean these are fundamental questions. So here's the first point
I've written down and let's discuss what this means. So AI development is dominated by commercial interests.
Why is that a bad thing or is that a good thing?
Well, I think commercial interests being a part of driving the innovation is a great thing. And you often have these dances, if you think about the internet overall, between non-commercial deep research and innovation, whether that's DARPA and the internet or CERN and the web, where you then figure out what to actually do with it, and there's a lot of commercial drivers of innovation from there. And then often afterwards, there's a role for what a commercial player isn't doing, where you take Firefox or Wikipedia or Linux as examples, where the nonprofit players come in and play a hugely socially and economically beneficial role.
So fast forward to today in terms of AI, I think it's great the amount of commercial
innovation that is happening.
Commercial dominance is a different thing than commercial innovation.
And so what we see, and want to accelerate, is that there also is a public option that complements it. Not just as a kind of motherhood-and-apple-pie thing, but because it is really important in doing the things we talked about before. What won't the market do on its own?
And so some of the things the market won't do on its own
is I don't think it'll pay attention to safety
in a way that is actually broad enough for us to be safe.
I think people trying to corner the market on safety is actually dangerous. I mean, you might get some good stuff out of it, right?
But if we actually want to protect humanity, the idea that there's one or two vendors or
10 vendors, like, cornering the market on safety is a dangerous game for humanity.
Having it open where a lot of different players can pitch in on safety, see how stuff works
under their hood, I don't think the market is going to drive that on its own.
And that's a key piece. But isn't that the...
I agree with you it's a key piece, but isn't that the role of the government versus open
source or any particular company?
When I think about why governments should exist and where they are not overreaching, and this is a delicate balance: safety for the population it represents. For me, whether it's safety from armed forces or police or regulation, it's like the most fundamental thing I think a government should provide. Do you agree with that?
Absolutely. And we haven't figured it out, although I think we're actually zooming towards it; if the American political system worked at all, it would be easier. But having that regulation is the role of government. What are the guardrails? You take something like the evolution of transportation: I'm happy there are traffic laws and safety laws, and that is the proper role of government.
And building cars is the proper role of the private sector.
Public transportation is a thing, you know, that kind of sits in between.
So I agree, providing safety absolutely, you know, regulating the guardrails on safety,
absolutely the role of government.
Jump to AI: one of the critical things I believe, and we believe, in those regulations working is transparency, and the ability for people to collectively tackle the safety problems in order to comply with those regulations. And so the whole history of open science, of everybody looking together to drive AI forward ("given enough eyeballs, all bugs are shallow," that old open source principle), to me is actually more likely to produce an outcome that lines up with that regulation.
It's not the government's job to do the implementation.
On the flip side, having a few people
trying to corner the market in compliance
with those regulations and what safety is
and lock it all down, some of those could be useful players,
but I don't think that's enough to keep us safe.
My bent, again, is towards entrepreneurs to solve problems. You mentioned transportation; I think about the space industry, which was the earlier part of my life. It was Lockheed and Boeing, sort of the large defense contractors, that were launching humanity into space, and they were dominant by far, with the government building the Space Shuttle through contractors. And then here comes Elon as a disruptor and captures 99% of the market, right? And the other companies, well, the only reason those companies still exist is that the government likes having a second supplier in place. But we'll see, we'll see Relativity Space and Bezos with Blue Origin come in, and those will become the second suppliers. But I don't think you could have ever gotten that innovation and brilliance in a governmental program, or a governmentally driven program. It was like just getting the very best people on the planet and forcing them, or focusing them, to take huge risks and innovate. So the question is, can you get that speed and energy in open source? Or is it some segments of open source and not others?
Well, I think open source and governments are different.
So I think the role of governments
is either to create guardrails or to fund public goods.
And so, you know, certainly you think about DARPA funding public goods. You got a lot of innovation, and to some degree speed, although it was a long game, out of the government playing that funding-public-goods role. You can get that in open source. I do think, though, that the role of open source often comes after some of the high-speed entrepreneurial innovation happens. So, you know, Linux comes like 10 years, half a generation, after Solaris or Windows NT, and it's like, we want a different thing, we want to collectivize this and make it infrastructure. Firefox comes 10 years after Netscape. Wikipedia comes 10 years after Encarta, and if you still have one of those CD-ROMs, you know, you get a prize. But I think the role of open source is often to create the more malleable public good version of what has been driven by commercial innovation.
So on this first point, that AI development is dominated by commercial interests: that is a truth, and it's continuing. Should that be dissuaded? So the answer is: so what, and what do we do about it? That's the question.
Right.
Yeah.
And the so what is: you won't get the public goods you need, in terms of keeping the market open for smaller players, in terms of researchers, and in terms of safety. I think you need open source, and truly open source, that operates in a way that we all have access to and doesn't get cut off.
But if the open source would be...
But let me say why.
Yeah, please.
So I think in the so what...
And how, if you would. And how that should go about.
So the why is things like, and maybe I already talked about them: we have the Lego box for this era, we've got the ability to have transparency for safety, and, you know, maybe corporate players actually even drive that. I have a little two-by-two matrix, which is open versus closed, and private or commercial versus public, and you see different people in different quadrants. Down in the commercial-but-open quadrant, you have Meta trying to play. I think that's good. In the long run, I also want to make sure there's stuff that is kind of owned in common. So you have people like the Allen Institute for AI, which was set up by Paul Allen before he died.
Which, he made the money to fund that from a monopolistic activity.
Awesome. You have a tax system that supports philanthropy in America, so that's what is supposed to happen. You know, he really believed in open AI, he really believed in open source AI. And you have an amazing guy, Ali Farhadi, leading that. I think that could become the Linux, or the Linux Foundation, of this era. And so, you know, both of those are useful players to kind of have out there. I do think one of the critical things that we're missing right now is that in every Western country you're seeing huge amounts of money being thrown at compute, or at how we're gonna deal with energy and AI, and to me, we really should make sure that those government dollars go to public goods. So if I'm giving you huge amounts of compute as a researcher, or even as a company, that should produce open source at the other end of the process. If public dollars pay for it, public goods
should come out at the other end. Yeah, Emad Mostaque has made a statement which I really liked, which is: this is infrastructure. The compute is fundamental infrastructure for every country, and every country should own its own models and its own infrastructure. I mean, it's going to become oxygen, electricity for a nation.
I agree with him 100% that it's infrastructure, and it's not just the compute, it's the models, it's the whole stack. We did a big paper on that with a bunch of other people, including Yann LeCun, at an event last year or earlier this year. And one of the things I would be really careful of
is not to think that it's national sovereignty,
but actually, I think the democracies of the world
building the system that is open and controlled by them.
And you can have the Spanish large language model, or the Italian or the French or whatever, on top of a shared pool of infrastructure, which is effectively AI built to be open source and for democracy.
But I actually think that infrastructure is something that we, as a set of countries in the world with a certain set of values, want collectively, that we can all lean on.
Where do you come out in the discussion that we need to go as rapidly as we can because we're in a fundamental race with China, and that "as fast as we can" is going to be, you know, government and investors pumping money into for-profit companies that have employed the smartest people on the planet? And this is a race for the principles of freedom.
So I agree, we got to go as fast as we can to build a technological society, including
an AI stack that is driven by freedom and pluralism and values that I hold dear.
And I think private companies are a part of it.
I also think public AI is central to it.
I mean, we talk about public orientation,
public use, public goods,
and that public orientation is,
how do you put the intent of pluralism and democracy
into the design of these products
and test against it over time,
which is about safety,
which is about who gets to contribute, all those things.
There's a great book, or I think it's only a digital book, which maybe is all that matters right now, by Audrey Tang, who was the digital minister for Taiwan, talking about how you actually can build this. I mean, you basically have people who are very focused on private wealth, and China, which is very focused on a particular totalitarian approach, shaping AI in their own image. And what we don't have is a high-speed, fast approach to how we build democratic, pluralistic AI. And I kind of buy into that.
Yes, go fast, and go fast not just to back somebody owning the market in the West, but go fast towards a set of players building something that supports democracy and pluralism, and you make money off of it.
Real quick, I've been getting the most unusual compliments lately on my skin.
Truth is, I use a lotion every morning and every night, religiously, called OneSkin. It was developed by four PhD women who determined a 10-amino-acid sequence that is a senolytic, that kills senescent cells in your skin. This literally reverses the age of your skin, and I think it's one of the most incredible products. I use it all the time.
If you're interested, check out the show notes. I've asked my team to link to it below. All right, let's get back to
the episode. I mean, there is a scenario where companies, the best meritocracies attracting the very best people with the most capital and the most compute, are building the best systems, and then the government becomes a user of those systems to support its people. In the same way that NASA didn't build Starship, kudos to Elon for Flight 5, but is gonna become one of the largest users of Starship.
So, can we deliver on the public good with private models and private companies?
Yes, and I think there's always going to be stuff that private companies don't do. And there are private companies who play totally in a black box, like OpenAI, and private companies that play in a way that, for their own interest, also benefits the public good and creates public goods, like Meta is doing with Llama. And so I think you have to take a nuanced view of how you get to that stuff.
And our view is you want both commercial and government
and nonprofit or open source community players
all pushing towards this kind of pluralistic open
and public AI option alongside everything else, not to be exclusive.
Alongside everything else, it's not exclusive.
I get it and I agree.
And as long as, you know, again, the government is not
mandating, but it's enabling the emergence of those resources,
of those open source teams and perspectives and so forth.
I mean, that makes complete sense.
Can I ask another question?
Because I think privacy is one of the main,
is a big driver for Mozilla, yes?
Absolutely.
Do you believe we actually have privacy?
Oh, it's such a tricky question.
So privacy is core. You know, the Mozilla Manifesto says something like: an individual's privacy and security is sacrosanct. It doesn't quite say sacrosanct, but it's in there. It's one of the core tenets.
I know.
And we struggle with what privacy is now, because for us, privacy was: make a browser that collects no data about anybody, and minimize data as much as possible.
We all know you can't make digital things now without data.
It's more fundamental than code.
That's what AI is.
What is privacy in that context?
That's a thing to work through.
I do think, to go back to the question of public orientation, it's about building in values and looking at how you build AI technology that doesn't unduly expose information about you, or that lets you opt into things that are more private. So you see that with Apple Intelligence trying to do more stuff on device and lean in that direction, where it doesn't mean I'm a completely private individual, but there's some stuff I want to keep close to myself. And we've actually funded, through Mozilla Ventures, a company that's building basically the open source equivalent of Apple Intelligence, called Flower AI. So we think what privacy means has got to evolve. It's got to have a lot more to do with how you think about privacy in your physical life, which is: oh, I'm going to close the blinds, or I'm going to talk a little quieter. There have to be ways that we can express a desire to be seen, and less seen, in the digital world we're building.
You know, it hits me: I've got, whispered so she doesn't come alive, Alexa listening. And she's listening all the time, right? I've got Siri here listening all the time.
You can have an AI with a camera read my lips from, you know,
a hundred meters away. You can shake my hand, grab a couple of skin cells and sequence them.
So, you know, I think to some degree, you know, privacy is an illusion that we like to believe in.
And the question is...
Well, it's one of the reasons we talk about trust and trustworthy AI as well. You can build into the technology that the stuff is local, or that the camera is turned off, or whatever, and then: do you trust, or have enough control yourself, to believe that that's true? So there are constraints that can be put on all the things you just talked about, and a lot of it is either that I fully control it, which is pretty hard in today's connected world unless you're super technical and willing to kind of put yourself on an island, or that you trust the parties that are providing it. And that's what Apple is good at. It's what Mozilla has been good at historically, and what I think we want to be good at in the AI era.
We haven't talked a lot about where we're going with our own AI work, but it really
is in the trustworthy space and in the open source space because we think there's a lot
of desire for those things.
Let's say the next president of the United States comes to you and says, what are the
policies you'd like the White House to enact? Do you have a
clear set of recommendations?
Yeah, I mean, at the high level they're pretty clear. And what's interesting, actually, is that despite the fact that you can't get anything done, and I'm Canadian so I can make fun of the US political system, despite the fact that you can't pass laws, I do think you have a lot of bipartisan commonality on some of these topics. And so I think one is how antitrust law works in today's era. We haven't figured that out yet, but it matters so there's space for entrepreneurs, so there's space for innovation. And I do think that's a critical thing to figure out for this era.
The second is we figure out what are the right AI and privacy guardrails.
And that's where Europe hasn't gotten it perfect, but they have a little bit of it right, in that they focused AI regulation on uses of AI. It's not, let's regulate all AI; it's, if I'm going to use this for something that might be sensitive or dangerous or harmful, let's have guardrails on that. So we've got to do a better version of that, and that's, you know, for humanity to figure out, but I think regulating that matters. And then the third is making sure that innovation funding is going to public goods, is going to stuff that everybody can use, and frankly, also that it's going to regions other than just California and Washington State. Because, you know, really, one of the other things about open source is the idea that people can innovate from anywhere, and I think you see not just corporate concentration in a few companies, but geographic concentration in a few places. And I think it is an opportunity for government, if it's putting resources out there, to spread them widely.
Yeah, I mean, but again, I go to my aerospace roots. So I remember NASA's space projects would be distributed among 30 congressional districts, and companies' strategic policy, you know, was to move into a congressional district that had no aerospace suppliers so they could get the contract, which makes for a lot of inefficiencies.
Yeah, for sure. The good news is that we're talking about, you know, bits, not atoms.
Yes, yeah, for sure. Lots has changed from there. What other policies? So the question ultimately is
how do we regulate it to make sure we don't have, you know, disastrous outcomes, and how do we, you know, sneak up on AGI, whatever that is, and maybe we've passed it already, or digital superintelligence, that disrupts our way of life so rapidly that it leaves our heads spinning.
Do you have any recommendations?
There's two answers to that. I mean, I think one is just keeping our eye on the ball. And you saw this in the debates around SB 1047, the AI safety law that Newsom vetoed recently: there's really a push towards building an evidence-based way to look at where the real risks are and where they're not. And it gets back to, we should be regulating against risks, and not just generally against amorphous fears. So it's urgent, it's important to have regulation that can do that, and it needs to be grounded in evidence. The thing that is actually much trickier to fix, and I don't know how to fix it, is that we have a bureaucracy, we have political systems, designed for the industrial or even the pre-industrial age.
Let's, you know, wrap with a conversation about your prediction on open source. What is your hope on where this is going to go?
Well, my prediction and my hope are the same and I hope my prediction comes true.
So my prediction is that there will be a public option, an open source option, and that you'll
see both coexisting.
But I also think that the infrastructure layer,
that the people who are just trying to do
the commoditized fundamental stuff
will not be the ones who win commercially.
Netscape doesn't exist anymore.
Sun, I don't know if they exist anymore,
but certainly they're not the definer
of the ecosystem that they were.
So I do think that the infrastructure,
the building blocks to get back to that Lego kit
will be open and I think that will benefit us all
if it can be true from a safety perspective,
from an entrepreneurship perspective,
from a general creativity innovation perspective.
So that is both my hope and my prediction.
I guess the hope I layer on top of it is that we can be smart enough that, if we're spending public dollars anyway on things like compute, the government plays a fueling role in this, as it did in previous eras, and that it's something it does with intent, as opposed to playing a kind of constraining role and trying to micromanage things.
One last aside here. I just got back from India, which is a nation of 1.41 billion people, the largest on the planet, and a nation that I think needs AI for its survival. You cannot provide the education and healthcare needed; there are a hundred million people that are the tax base for the nation, and the rest are in some degree of poverty. And the only way you possibly provide health and education to them at scale is gonna be AI on top of the Jio 5G network that's there. And then I go to Greece and I meet with some leadership there, and they're like, help us get AI going. So for a lot of nations that are not AI centric today, that are looking and feel and know that they need AI in their nation to compete and survive and thrive, how does the open source movement support them?
Well, I think that is actually the core of public AI
and why we were talking about that
beyond just open source, right?
The three things are public use, public orientation, public goods.
Public use is like use this stuff to deliver healthcare and education.
We have to figure out how to do that well.
And that's where the public orientation comes in.
Like build it in a way that is enabling of humans.
And Daron Acemoglu, I never get his name right, who just won the Nobel Prize in Economics, talks about machine usefulness. So how do we actually make this helpful to humans, and keep it in human hands?
And then the third piece is public goods,
and that's where open source comes in.
It's like, as we do these things, let's build into a commons, and certainly governments should do that, so that what India does helps Greece, and what America does helps India, and that virtuous cycle begins. And, you know, that's something where the best kind of human progress, I think, has come from those things all heading up together.
Mark, before we sign off, where can people find you, follow you, and find the Mozilla Foundation? How can they be involved? How can they support your work?
Easy to find: Mozilla is @mozilla on X, Twitter, whatever you call it these days. You can find us on LinkedIn. You can find us at mozilla.org, where we've been for 25 years. And, you know, I'm just @msurman on Twitter and LinkedIn. I'm kind of not in either of those places often, but you can find me there.
And I think how you can get involved,
it is really just looking at how does public AI,
this concept of AI and service to humanity benefit you,
even figuring it out selfishly,
and then how do you give back?
If you're building software, can it be open source?
If you're a policymaker,
can you make sure that public dollars
go to public goods and so on.
Amazing.
Mark Surman, an open source warrior to benefit humanity.
That's my new brand for you.
All right, well, I'll get the t-shirt printed right now.
Thank you for your work, Mark.
A pleasure to meet you.
Thanks for the conversation.
Likewise, thanks for having me on Peter.