Digital Social Hour - AI's Future: Open Source or Closed Control? | Dr. Travis Oliphant DSH #1317
Episode Date: April 11, 2025
AI's Future: Open Source or Closed Control? 🤖 Join Sean Kelly on the Digital Social Hour as he sits down with Dr. Travis Oliphant, a trailblazing AI expert, to tackle one of the most pressing questions of our time. Is open source the key to AI's potential, or will closed control dominate its future? 🌐 This episode is packed with valuable insights on AI's rapid evolution, the role of big tech, and how open source could revolutionize industries, education, and even YOUR daily life. From the power of personalized learning to the ethics of AI governance, we're covering it all. 💡 Discover how AI is reshaping industries like healthcare, gaming, and education, and why owning your own AI might be the game-changer you didn't know you needed. Plus, hear fascinating stories about quantum computing, the rise of AI in chess and poker, and what open source really means for innovation. ♟️🎮 Don't miss out on this engaging and eye-opening conversation! Watch now and subscribe for more insider secrets. 📺 Hit that subscribe button and stay tuned for more thought-provoking episodes on the Digital Social Hour with Sean Kelly! 🚀
CHAPTERS:
00:00 - Intro
00:26 - Travis' Concerns with AI
03:17 - Closed Source vs Open Source AI
08:36 - Most Advanced AI Model
10:58 - Education and AI
12:02 - Benefits of Open Source
15:42 - Full Body MRI Technology
23:45 - Quantum Computing Insights
24:59 - Is AI Overhyped?
26:04 - Open Source AI Discussion
26:47 - Closing Remarks
APPLY TO BE ON THE PODCAST: https://www.digitalsocialhour.com/application
BUSINESS INQUIRIES/SPONSORS: jenna@digitalsocialhour.com
GUEST: Dr. Travis Oliphant
https://x.com/teoliphant
https://www.linkedin.com/in/teoliphant/
LISTEN ON:
Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015
Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759
Sean Kelly Instagram: https://www.instagram.com/seanmikekelly/
#ainews #generativeai #openai #aitrends #airesearch
Transcript
to help improve our job prospects.
Think: you can learn to play chess really well, right?
Really well.
What if you can learn to trade really well?
What if you can learn to...?
I totally agree.
So that's exciting to me, but to do that, what we need are millions of professionals,
millions of people, tens of millions of people, hundreds of millions of people,
billions of people all using AI for their purposes.
That's amazing.
Okay guys, got Travis here today. We're going to talk AI, with one of the pioneers in the space.
Thanks for hopping on today.
Absolutely.
Great to be here, Sean.
Yeah.
The space is evolving so fast.
Does it concern you at all?
Yeah, it concerns me for a number of reasons, but probably not the same reasons other people
think.
I think there's a lot of things happening quickly,
and a lot of people trying to make sense of it quickly, even though
there's not a lot of understanding of how it actually works.
And so there's a lot of uncertainty that can lead to confusion.
That probably concerns me more than anything:
uncertainty leading to rapid action and not thoughtful action.
Yeah.
What are the biggest concerns and red flags you're seeing right now?
So overreaction by governments is one that concerns me, you
know, people trying to pass laws, make regulations, where they don't really
understand what the implications of those are.
So you end up with rules and patterns that don't really fit
what emerges.
Yeah.
So that concerns me.
I think the other thing that concerns me is a lot of closed
source companies just trying to own the space. It's really like a land grab.
Oh, here's this AI space. Let's grab all the attention. Whereas I'm a really big proponent of
people learning from AI and making it part of their toolbox, ultimately letting us become better
agents for ourselves by having AI as a
tool that we all can use. So there's this land grab going on where a lot of information flow
is happening to a few companies. So that concerns me too. I want to see AI knowledge diffuse and
disperse and have lots of people use it effectively. But there's a lot of money, advertising,
promoting. It's amazing how quickly people can be informed by narratives.
Right.
We're sort of driven by narratives.
We seek out narratives and worldviews and ways to think.
And without critical thinking, without background, you can easily
be persuaded by something that just isn't true, especially with social media
these days.
Yeah.
Exactly.
Exactly.
And so AI could be used to actually amplify that capability.
You know, people are good at it, but what if you had AI be even better at it?
Right.
That's in one sense why social media has been challenging: even
without any intent at all, just trying to get eyeballs,
AI algorithms have already been feeding people information that they want to hear.
And so it reinforces cognitive bias, confirmation bias, and the cognitive
dissonance that happens all the time.
So you're basically just being fed what you want to hear.
And it's creating polarization in our society.
People are creating enemies out of those who could be friends.
And that's one of the things I think, I don't want AI to amplify that.
I want to see how we can use AI to understand each other better, and actually
maybe show a little more empathy to each other, and understand, hey, you know,
we're not that different.
We have our differences, and that could be beautiful, but let's not emphasize
them, because that can lead to conflict.
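The feedback loop described above, where an engagement-maximizing system keeps feeding people what they already clicked on, can be sketched in a few lines. This is a hedged toy model, not any real platform's algorithm; the topic names and scoring rule are made up for illustration:

```python
# Toy engagement-maximizing feed: always recommend the topic most similar
# to what the user already clicked, and watch the feed narrow over time.
from collections import Counter

topics = ["politics-left", "politics-right", "sports", "science", "music"]

def recommend(click_history):
    # Score each topic by how often the user already engaged with it.
    # No intent here at all, just "eyeballs": pick the highest-scoring topic.
    counts = Counter(click_history)
    return max(topics, key=lambda t: counts[t])

feed = []
clicks = ["politics-left"]      # one initial click...
for _ in range(10):
    item = recommend(clicks)
    feed.append(item)
    clicks.append(item)         # ...and the loop reinforces it forever after

print(set(feed))                # {'politics-left'}: the feed has collapsed
```

One initial click is enough to lock the loop in, which is the confirmation-bias amplification being described.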
Yeah.
You said closed source earlier. Could you explain what that is, and which companies are closed
source?
Absolutely.
So closed source has actually
been the norm for a long, long time.
Well, if you go way, way back, when software first came around,
when hardware was introduced, the software
came with the hardware.
Because basically people competed on, here's new hardware,
here's a new machine to run your business.
And the software was all open source.
They didn't have the term then.
It was just you could use the software.
Then in the 80s and early 90s, people would say, oh wait, these are valuable.
Microsoft was actually one company that said that, and Apple, and those companies emerged.
They went, hey, there's value in the software.
We can't give it away, so we can't show the code.
Because if you show the code, then people can potentially take it, derive from it, build
from it.
So they would close the code, and you'd only be able to run it as closed source.
Now it's still a lot of software gets built as closed source.
And that's fine.
It's not like that's some kind of moral evil to close the source, but it is a,
it does create challenges for innovation.
Open source as a movement started around the time
Linux came around. You've heard of Linux? It's an
operating system that is essentially why we have cloud computing today.
Wow.
It's this massive operating system that now runs all the servers.
Yeah.
You know, it's a pretty impressive movement, and it's the reason AWS exists.
It's the reason GCP exists.
Absolutely.
It's hugely impactful.
So open source has been
an extremely impactful social movement.
That's probably the way to describe it.
I started participating in open source in the late nineties,
when I was a graduate student. I'm kind of a geek at heart.
I'm a science geek who loves physics and loves math and loves to make
things. And I need software to do it.
So I got wrapped into this open source movement because I liked how, when I did the work, I could share it with others.
And that's essentially how a lot of us, you know, millions of people, have
been pulled into this open source ecosystem, sharing their code with each other.
So it's this interesting world that's emerged over the past 30
years where people share code. There's places that code can
be seen and people can build from it. There's lots of movements around that
code. So open source is just this phenomenon of sharing your code. Everyone can
use it. Closed source, you've got to license the code to use it. But with open
source, there are lots and lots of angles. We could have long conversations
about what open source means, how it
derives value, how do you make money from it.
In fact, my story and what I'm doing now really starts there.
I loved open source.
I love the engagement that it created.
I love the fact that I could share.
People could comment, people could work with me and I'd build a community.
Love that.
That's cool.
Because all of us need community and tribe.
In fact, I think that's a critical thing to understand about human
behaviors. You want to have your tribe. You want to have your community.
Open source gave a place for people to have community.
Yeah. So is ChatGPT open source?
No. So is that the whole dilemma with them and Elon?
Yes, that's part of it.
That's part of it. I mean, some part of it is just egos, right?
But a big part of it is the fact that Elon gave them money
to build open source AI.
Got it.
That's why they started: Elon was concerned
about Google having all the knowledge of AI.
So some of the same concerns that I'm expressing,
Elon expressed years ago, where he was saying, look,
we need to make sure that AI, as it emerges,
isn't just controlled by a few hands.
We have to have lots of people aware of how to use this.
And he was worried that Google was actually consolidating all the AI experts.
And with their DeepMind, they were advancing very, very rapidly.
So OpenAI was basically the initial tranche of, hey, let's go give some money,
create a foundation, and have open source AI.
But then, you know, things changed.
There were some different opinions, and I don't know Sam well enough;
I don't know quite what drove those decisions.
I can understand there's probably some good reasons, and reasons that I wouldn't agree with.
But they pushed for a kind of closed AI, and then, you know, came the release of ChatGPT
that had this phenomenal explosion in the world of people going,
oh, these models that scientists have been working on for decades can do interesting things: predict words reliably, predict
phrases that sound realistic, and then going beyond that, from just
words to music and to audio and to video and to images.
We're using NotebookLM now, the Google one.
Yes, exactly.
You can actually produce a podcast.
And it sounds decent too.
It does.
That's scary.
It is.
No, there's a company I've been consulting with called Zyphra.
They're out in Palo Alto and they have a mechanism to produce speech from text
that produces realistic voice models, just like somebody. You clone yourself.
Yeah. And that model is cool to me because I'm an audio learner.
Like, I love podcasts and audiobooks. So when you can do that,
you can learn really fast. Absolutely.
So I'm more of a visual guy, but I love audio books
and I love podcasts too.
So I understand, I like to listen at 2X speeds.
Same, yeah.
Sometimes 2.5.
Sometimes 2.5.
I know the recent one can go up to three.
Some people I can listen to at 3X speed.
Same, but not in a straight line.
He talks too fast.
Even 2X at bend is tough.
It's true.
But you're sitting there going,
wait, can I process all this information that quickly?
Yeah, but sometimes when it's in seven hour Rogan episode,
I'll do like 2X for sure.
Yes. Right. And that's, oh, that was only three hours.
Yeah. He's had some long ones lately, man. So true.
So there's AI companies coming out of China now.
Who do you think has the most advanced one?
We're filming this in March 2025.
Yeah. Well, right now, China seems to have some really cool advanced
models. DeepSeek showed that it's very advanced. But Gemini,
Google, is actually showing some advanced models, Anthropic is showing
advanced models, and actually some of the open source models are also getting
to where they're comparable.
So fortunately, it's no longer just a matter of who has the best model.
You really have to start asking: for what purpose?
Got it.
Right, it's no longer that there's one best model.
It's okay, what are you trying to do with this?
What's your goal?
Do you want to summarize text?
Do you want to clone a voice?
Do you want to run a podcast?
I think that's the future I'm excited about
is we're getting away from this race towards the God AI.
Right. Now, that's still there, and there's still a lot of messaging about that.
I'm definitely in the camp that we're not going to incrementally get
to artificial general intelligence or human-like intelligence.
What we have is definitely a clear intelligence system that may be a part of
how the human mind works, but it's not the complete thing.
And so that's cool.
But really, any value coming out of that is coming from a system that's produced.
So you take the model, you take some other hardware,
you take some computing capability,
and you stitch it together into a system.
Right now, as we're speaking, this Manus is all the rage.
It just came out, like, this week, and everybody's kind of going,
whoa, this is amazing, because it can run my business.
It can do my research report.
It can run a stock report.
It can file my taxes, they think.
I mean, it's making games.
Grok has a great model too, actually.
Grok 3 was just released, and it's beating, in a lot of measurements,
a lot of the other models.
Wow.
So Grok is also really a fantastic base model, and they have a deep search, and they have
kind of additional modules around the model that they're starting to release as well, that
people are going to experiment with.
But honestly, Sean, it's really early.
So it's easy to have these F1-race comparisons,
but that's not really the model that works, because everyone has to
ask the question: what am I trying to use this for?
And what, for me, is going to be a valuable tool?
That's going to be the most productive question.
Like for me, like on the side, I'm a chess player.
AI has revolutionized the chess space.
It's caused players to become a lot better. For example, I played Andrew Tate in chess yesterday.
And I beat him because think about this.
He played chess his whole childhood, but there was no AI or computers back then.
So to get better at chess was really hard.
Now, when I play on the chess.com app, AI analyzes every single game, and I can
see where I messed up, so I can get better way quicker.
So I love that.
I think that's a fantastic use case of AI.
I think it's an important one too.
It's about helping humans get better.
Like I'm a big advocate for natural intelligence.
Like we have not optimized how humans learn.
Right.
In fact, I think our education system,
at least in the United States is really, really bad.
Really bad.
It's terrible.
And a lot of it's systemic.
And schools are banning AI. That's completely a mistake. Yeah. Because AI needs to be used to help
exactly this. It can make personalized education more possible. It can help you
take an interest you have and, in that moment of interest, amplify your
capability. And the ability to iterate as you learn, that's powerful. So good.
Actually, there's a guy, Gerald Chan; he might be a little annoyed that
I talked about it on this podcast.
And he's an investor.
He's somebody that invested in Anaconda.
But he gave a talk at Berkeley just a few weeks ago about the role AI can have in improving
education.
It was actually quite inspiring.
I need to watch that.
Yeah.
I don't think the video is out there, but I can send you the paper.
Anybody interested, I know he's willing to let the paper be spread.
It is a phenomenal discussion of something
I think is a critical question.
Because one of the things people are worried about appropriately
is how will AI disrupt my work?
People are worried about unemployment,
they're worried about jobs, they're worried,
what if AI takes away my job?
I'm not a fear-based person.
I think that kind of commentary is useful,
but it can be paralyzing.
It's usually better to turn it into,
okay, what do I need to use AI for to help?
And I think
we can use AI to help improve our job prospects.
Think: you can learn to play chess really well, right?
Really well. What if you can learn to trade really well?
What if you can learn to...?
I totally agree.
So that's exciting to me, but to do that, what we need are millions of professionals,
millions of people, tens of millions of people, hundreds of millions of people,
billions of people all using AI for their purposes.
So we need to convert AI from being a thing somebody else does to us
into a tool that we all use to better ourselves and improve our lives.
That's what I'm about.
That's what this open source AI foundation that I recently started working with and joining
is all about.
It's recognizing, like I just said, that Linux as an open source operating
system gave rise to the cloud.
That same phenomenon with open source AI will give rise to a future we can't predict,
if we keep people in charge of it.
And a lot of people in charge of it. Not just one or two people.
Not just a few thousands of people, but millions and billions of people having access to similar tools.
Just level the playing field and help people engage with each other.
Now, a lot of people go, wait, that's going to change everything.
Yeah, it could.
And I'm not all for rapid disruption.
How do we do this in a measured way where people are accountable and people
have ways to work together and people do it in their communities and do it in their
families and do it in their tribes and do it in their, their virtual groups.
Like that's, that's how we, we already are organized as humans and all these little different governance groups. Like that's how we already are organized as humans
and all these little different governance groups.
AI can help us each organize better,
help us relate to each other better.
And it can bring about this incredible world, I believe.
So that's what I'm about.
That's what I love to try to promote.
Open source is how we got here.
I've been involved in open source for a long, long time.
I started as a scientist.
Wow.
Really, it was during my master's degree, using satellite images
to measure backscatter off the earth.
Holy crap.
Yeah.
It's intense.
It was intense, but it's also, you know, math.
I mean, I know math is not for everybody, but I love math, and I loved
learning as much math as I could.
And to me, math is just a tool.
It's a tool that lets you get insight from data.
And we did that with satellite data.
Backscatter, you basically have electromagnetic radiation.
So you like beam a radar down to the earth.
You measure what comes back.
And then you try to infer what that means about the ice field,
about the wind speed and direction of the ocean,
about the plant vegetation.
So that was my first experience with large scale data
processing.
But then I went to the medical area to try to do the same with images, with MRI, with ultrasound. And that industry could progress faster.
It's a little more regulated.
And so
progress is slower.
Yeah.
That's another topic we could go into, but probably on a different
day. But go ahead.
Yeah.
Have you heard of Prenuvo?
I have not.
Full body MRI. They use AI to analyze the MRI.
The problem is it's expensive, so most people can't afford this.
Um, but yeah, I got it. They used AI to analyze my results.
I learned a lot about my body.
Really?
And, uh, that's where I hope the future like medicine goes to.
And that's the same with my dentist. So there's holistic dentists now.
Yeah.
We'll take photos of your teeth, throw it into AI.
It was finding my cavities and it was finding my gum infections.
Sean, I love that.
Yeah.
That's actually why I went for a PhD: to make imaging better.
Because I think that's possible.
If you've looked into why things are so expensive,
there are some reasons for it, but these can be made less expensive.
We could easily have MRI technology at least as pervasive as dental imaging.
Your local doctor could have one. I hope we get there. Yeah, I think so.
Yeah, it was like
$2,500, which is a lot for a full body MRI, you know.
It is. And some of that's the magnet; it's expensive.
But some of it's the processing. You can actually save money if you don't put as much effort into building
a very homogeneous magnetic field.
But that requires better data processing.
And so, to your point, if AI can help us process data better,
then we can have MRI more ubiquitous for less money.
Yeah, and so for that, you had to get a doctorate
to manually review every result.
It is.
And there's also a lot.
It was expensive to make the field
so that the processing was simple.
That's the big thing.
Because right now, that's how MRIs work:
the processing is relatively simple
from a mathematical point of view.
If the field is slightly inhomogeneous,
then the processing is a lot harder,
but potentially still possible.
And with AI, hey, maybe we can get there.
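The "simple processing" being described here is that, with a homogeneous field, MRI reconstruction is essentially an inverse Fourier transform of the measured k-space data. A minimal 1-D sketch of that idea, using a toy signal rather than a real scanner pipeline:

```python
import cmath

def dft(signal):
    # Forward discrete Fourier transform: roughly what the scanner measures
    # (k-space samples) when the magnetic field is perfectly homogeneous.
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(kspace):
    # Inverse DFT: with a homogeneous field, image reconstruction really is
    # just this one linear step.
    N = len(kspace)
    return [sum(kspace[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A toy 1-D "image": signal intensity along one line through the body.
image = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.5, 0.0]
kspace = dft(image)          # simulated measurement
recon = idft(kspace)         # reconstruction recovers the image
print([round(v.real, 6) + 0.0 for v in recon])
```

With an inhomogeneous field, the measurement no longer fits this clean Fourier model, so the inversion becomes a harder, ill-posed problem; that is the gap better data processing (and possibly AI) would have to close.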
I'm also excited about AI-assisted design, AI helping scientists iterate faster.
Just like you said with chess, you learn quickly.
What if scientists learn more quickly about how to, you know, what does this mean?
What if I make this change?
What does that mean?
Wow.
There's a saying I've come to repeat all the time, which is: innovation is iteration.
The speed of iteration determines your speed of innovation.
Yeah. Yes, you need creativity. Yes, you need people who pull that off. But iterating is really the key to progress.
Yeah. I'm also a big poker fan. AI has revolutionized poker.
I bet it has.
They call them solvers. It shows you the best strategy: the best hand, and when and what to bet.
Texas hold'em?
Yeah.
Well, it has all the different poker variants, but
people have just gotten so much better at poker now.
I agree.
It's actually a corollary of something I always say, which is: for
your job, it's not about being replaced by AI.
It's about being replaced by someone who knows how to use AI better.
Exactly.
Right.
So if you're worried about your job and AI,
just turn that into motivation to learn to use AI better.
Yeah.
Same with my video editors.
A lot of them are using AI now to find clips.
And it's like, I love that.
Like, I don't want to replace you.
I want you to be able to give me a ton of clips.
Right.
We're still going to need the human connection.
I really am a promoter of accountability with people.
Like, you're not going to have AI be accountable.
In fact, that's kind of
the root of it.
Like, oh, well even the Tesla cars can drive you now.
But you still have to sit in the driver's seat.
I know there are self-driving cars going around the city.
The Waymo.
The Waymo's are showing up.
But a big part of that is actually liability.
Who's liable if something goes wrong, right?
What if it crashes into something?
What if there's a problem?
And so ultimately, that's the real question that has to be resolved.
And it will be resolved through accountability layers.
Right?
So my answer is, well, accountability is with individuals.
If you have a tool that's AI, then you're still accountable.
At the few companies I've worked at, I have
developers that work with me,
and I tell them, look, use AI all you want,
but the code you commit to a repository
and ship to a customer, you're accountable for that code.
You're responsible for the code.
You can't say the AI made me do it.
It's fine if the AI helped you; I'm totally behind that.
Do that all day long, but at the end of the day,
you're accountable for your choices. You can't sue AI yet.
Right.
Yes.
And that's a different question.
But we're not even close to having that conversation.
Yeah.
Let's give that about 10 years.
Yeah.
We're nowhere near that yet.
Right.
Right.
Do you have fears that some models can go haywire without proper regulation?
I think, yes, models could definitely go haywire.
I think they already have, in a way, in terms of how they've disrupted our social
contract with each other, our social connectivity.
Yes, they can go haywire.
On the regulation question: I'm all for governance.
I'm all for people governing, learning those principles of governance.
Every community has governance.
You don't have community without some amount of governance. I'd rather have it
be at that level rather than at a huge scale.
So you don't want the federal government?
I don't want it all at the federal government.
I'm not saying they're not involved at all.
I want them to be restricted to the things they need to care about.
Right.
Not just lay out all AI policy.
That would be, I think, an ill-suited idea right now,
because it's evolving so fast. If they lay out policies, so much could change that
they'd have to keep updating them.
But individual departments could have policies about how they use AI in their department.
For example, like Health and Human Services could have a strategy for AI adoption and
how they use it. I think that's true. But just having some law about AI? Because,
anyway, what do we mean by AI?
You know, AI is just a math program.
It really is just arrays:
multiplying numbers together
and then summing them up,
with a nonlinearity in the middle.
You end up with, well, it's just math.
So we're gonna regulate math, okay?
How are we gonna do that exactly?
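The "just math" description above can be shown in a few lines: a neural-network layer is multiplying numbers together, summing them up, and applying a nonlinearity. This is a bare-bones sketch with made-up sizes and random weights, not any particular model:

```python
import random

random.seed(0)

def matmul(x, W):
    # "Multiplying numbers together and then summing them up":
    # one input row times each column of the weight matrix.
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*W)]

def layer(x, W, b):
    # Linear step followed by the nonlinearity "in the middle" (ReLU here).
    return [max(0.0, v + bi) for v, bi in zip(matmul(x, W), b)]

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 2 outputs.
W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
b1 = [0.0] * 8
W2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(8)]
b2 = [0.0] * 2

x = [0.5, -1.0, 2.0, 0.1]                       # one input example
hidden = layer(x, W1, b1)                       # nonlinearity in the middle
out = [v + bi for v, bi in zip(matmul(hidden, W2), b2)]
print(len(out))                                 # 2
```

Stack enough of these layers with learned weights and you get the models being discussed; the point stands that each step is ordinary arithmetic.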
I agree. The SEC has been trying to regulate crypto for years, and they mess it up.
It's a mess. They actually do more harm than good.
I think they do too. So, when I was younger,
I was quite libertarian, quite open: get rid of all regulation.
And you know, in some future world,
I might imagine that experiment being interesting. But ultimately,
I recognize that a value of regulation is to avoid suing each other.
That's the thing
you're trying to avoid.
You're avoiding the problem of,
hey, we're debating this, and I'm mad at you
because you did this.
You'd just have lawsuits everywhere.
We can slow down the whole economy
because you got people suing each other
with a very inefficient judicial system.
Or a very unjust one too, where sometimes it's not a judicial system, and then it goes into
arbitration.
Anyway, there's real problems there.
So I could see the value of regulation.
I see the value of good rules.
What are the good rules?
How do we know what those rules are?
We can know, but only if we have enough context, enough understanding, enough experience.
Right now, AI is just so new, and the rules might be different.
It might be different when it comes to this industry versus that industry.
Right.
From one industry to another.
It's a great point.
So I think we ought to just let people... I'm not saying we just
throw caution to the wind and let people hurt each other.
Not saying that, right?
If you have a claim against somebody because they used AI, you have that claim.
Those rules already exist.
Yeah.
It's going to be interesting to see how it plays out.
Because let's say you ask AI for stock advice.
Yeah.
It just gave financial advice, but can you go after them for that,
you know, if you lose money?
Well, I think most of the general AI systems will have terms
that say, you know, you can't sue us for stuff you did with this.
Right.
And that's pretty fair.
But if somebody came out with a financial advisor, right,
and offered stock advice, and you have to register, you already have
to register to do that, then yeah, you potentially could.
But most of those financial advisors have all kinds of, you know, things
you sign saying, I recognize this is your advice and I'm responsible.
So again, I see it as: we've already got systems. Those systems can be improved, and maybe AI can help us improve them. But let's not panic over AI.
I think the thing to be concerned about is, is AI open?
Is AI available?
And can people actually use it for their accountability?
Right.
That's really, we want to make AI as distributed as possible.
Agreed. As someone in the crypto space, I'm hearing a lot about quantum computing.
It seems to be advancing rapidly. They're actually saying it's going to be so advanced,
it could hack into wallets in a few years. That's what they're saying.
I tend to be skeptical of those statements. I've been on record as being a quantum skeptic for a long time.
Not that there isn't something there. There is.
There's some really cool things that happen.
But we have a really hard time organizing a bunch of quantum bits together
and understanding what even that means.
Quantum is one of those areas where we're still trying to figure out what...
Nobody knows what it means.
Quantum mechanics is a description of nature
that just gives us a way to predict what nature will do.
But what does it mean?
We don't know.
And so it's easy to get hyped up.
The other metaphor is I'm an electromagnetics guy,
and we had optical computers back in the day.
Optical computers can do really fast things,
like take the Fourier transform really, really quickly
just by propagating light.
But we don't have optical computers today.
They could be useful for some things,
like MRI and image reconstruction.
We could do that in an optical computer very fast.
But the infrastructure of optical computers,
actually building them and the whole ecosystem around it is really expensive.
So I understand why quantum computers are exciting,
but quite often they're kind of overhyped.
They're an interesting research topic and an interesting idea,
and I'm not saying nobody should ever invest in them.
I think they're worth investing in.
But for most of us, it's going to be a non-issue.
We'll not even realize what's happening with them.
I remember when I was in high school,
everyone said 3D printing was the future.
You're going to print houses with 3D printing.
And then it flopped.
And then it flopped, right?
I wonder if quantum is going to be like that.
Quantum is kind of like that, in the sense that it's really cool tech and really
cool science. And, you know, honestly, the only thing is, okay,
just make your cryptographic hash longer. Then you're fine.
Seed phrases are 16 words now; maybe they should make it like...
Exactly.
So I think it's worth thinking about,
but most of what I've seen is people getting a little overhyped about it, because they believe all the rest of it.
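The "make your hash longer" remedy can be made concrete. Grover's algorithm gives roughly a quadratic speedup on brute-force search, so the ~2^256 work of a preimage attack on a 256-bit hash drops to about 2^128; doubling the output length restores the classical margin. A quick sketch with Python's standard hashlib (the message is just a placeholder):

```python
import hashlib

msg = b"quantum-resistant enough?"

# SHA-256: ~2^256 classical preimage work; Grover would cut the exponent
# roughly in half, to about 2^128.
h256 = hashlib.sha256(msg).hexdigest()

# SHA-512: doubling the output length keeps roughly a 2^256 preimage margin
# even against a Grover-style attack.
h512 = hashlib.sha512(msg).hexdigest()

print(len(h256), len(h512))  # 64 128 (hex characters: 256 and 512 bits)
```

This is the sense in which the defense is mostly an engineering knob rather than a crisis, at least for hashes; public-key signatures are a separate, harder story.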
Again, I still love science. I like the work that people are doing. I don't want to dismiss the great work of the scientists. I think they're doing amazing things.
I just think, commercially, it's not something on our horizon in the next 15 years.
Makes sense. Travis, anything else you're working on,
or want to close off with here?
Yeah, well, I'm working on helping
to make sure open source AI exists, through the Open Source AI Foundation.
Yes, exactly.
We have a phrase actually, make AI open source again.
Oh, I love it.
Make AI open source again.
The whole institution behind AI can be better, can be awesome,
if we make it open and help people own their own AI.
That's a big one. People need to own their own AI.
Rather than send all your data to somebody else and use a closed model,
own your own AI and have the model serve you and your data. Keep your data your own.
I love it. We'll link all your companies below and your social media handles. Thanks for coming on.
Sean, great to be with you.
Thanks.
Check them out guys, and I'll see you next time.
All right, take care.