Big Technology Podcast - OpenAI’s Potential, Google’s Speedy Model, Copilot Hits Turbulence
Episode Date: December 20, 2025

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Recap of my Sam Altman interview 2) OpenAI's memory play 3) Deepening relationships between people and chatbots 4) Could an all-knowing AI assistant work? 5) Model vs. product revisited 6) OpenAI's enterprise play 7) The infrastructure bet 8) OpenAI's forthcoming AI device 9) AGI's meaning? 10) Google's fast Gemini flash models 11) Microsoft Copilot falling out of favor

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

From Big Technology on Substack: Seven Big Thoughts on OpenAI's Strategy & Future Following My Sam Altman Conversation: https://www.bigtechnology.com/p/seven-big-thoughts-on-openais-strategy

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

---

Wealthfront.com/bigtech. If eligible for the overall boosted 3.90% rate offered with this promo, your boosted rate is subject to change if the 3.25% base rate decreases during the 3-month promo period. The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC, not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 12/19/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable base APY. Instant withdrawals are subject to certain conditions and processing times may vary. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
How big can OpenAI get?
We'll go deep after my conversation with Sam Altman.
Google has a new speedy model, and Copilot hits turbulence.
That's coming up on a Big Technology Podcast Friday edition, right after this.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called chat concierge and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks, it doesn't just help buyers find a car they love. It helps them schedule a test drive, get pre-approved for financing, and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
At FanDuel Casino,
you get even more ways to play.
Dive into new and exciting games.
And all of your favorite casino classics,
like slots, table games, and arcade games.
Get more on FanDuel Casino.
Download the app today. Please play responsibly. 19-plus and physically located in Ontario. If you have questions or concerns about your gambling or the gambling of someone close to you, please contact ConnexOntario at 1-866-531-2600 to speak to an advisor free of charge. Welcome to Big Technology Podcast Friday edition, where we break down the news
in our traditional cool-headed and nuanced format. We have a great show for you today. We are going
to break down everything that Sam Altman said in his first big technology interview. We have
some thoughts about where OpenAI is heading, where the ambitions will lead, and whether it can pull it off.
We're also going to talk briefly about Google's new speedy model and whether that's another threat to Open AI.
And also, there's some turbulence inside the Copilot operation at Microsoft. Well, not really inside, just basically when it comes to how people use it.
Joining us, as always on Fridays to do it, is Ranjan Roy of Margins. Ranjan, welcome.
Good to see you, Alex.
Been quite a week.
quite a week. I'm glad to help you finish it out.
Definitely been a big week here.
If you're a new listener here, so I'll just explain how this works.
On Wednesdays, we do a big flagship interview like the one I did with Sam this week.
And then every Friday, Ranjan and I meet up, we break down the week's news.
We try to contextualize it for you.
And we're going to do that here for you today.
And, you know, we're typically used to reading Sam Altman's public statements or comments he's made on other shows.
It's kind of nice that this time,
we have a chance to go over some of the comments he made directly to me and really addressed
some of the big things we talk about on the show every week, whether that's how the numbers
will work, what AGI actually is, and where ChatGPT is going.
All right, let's talk a little bit about what came out of the interview.
So there was actually some really interesting direction in terms of the product side of things,
especially the consumer side of things.
To me, one of the most ambitious things that Sam mentioned was memory, and how OpenAI plans to build real memory, meaning that the bots will remember you and have this real understanding of your life. His answer on this one was even, I would say,
more ambitious than I anticipated going in. He said, even if you have the world's best personal
assistant, they can't remember every word you've said in your life. They can't read every
email. They can't have read every document you've ever written. They can't be looking at all your
work every day and remembering every little detail. They can't be a participant in your life to that degree.
And no human has infinite perfect memory.
And AI is definitely going to be able to do that.
Is this surprising to you that this seems to be, at least in Altman's mind, something
that's feasible?
Is this a product that you would want?
And if it gets rolled out, what do you think the potential would be on that front?
I think we need to break it down into two parts.
You know, what does it mean for OpenAI, and how can it actually work? In terms of what it means for OpenAI: already, memory exists in this very kind of piecemeal way in the product. It's supposed to, and I'm sure others who use ChatGPT regularly have seen this, it's supposed to exist at the
project level and remember everything that you've said within a project, but it doesn't. So,
you know, like how they're actually trying to make it work within the product itself is still a bit
unclear. And then sometimes random memories will show up in other parts of the platform. And I think it presents a big issue: organizing memory is going to be one of the biggest opportunities and challenges for any AI company, because you want certain areas where it remembers
everything, but you definitely don't want those memories moving over to other parts of your work and your app and the product surface you're using.
So wait, are you saying that if you have an erotic conversation with ChatGPT and then you're back working on your project, you don't want it talking dirty to you as you're, like, researching? Then you have your shared, you have your shared recipe planning with your wife, and then your erotic conversation goes in there, and then your Mickey Mouse smoking weed project shows up as well. That was a reference to last week's Disney-OpenAI deal. But I think the other question, not to get too technical
here, but how to retain large amounts of memory has not actually been solved by these models or these systems. Traditional RAG, or retrieval-augmented generation, systems were good, but they weren't perfect: they could kind of generally synthesize information. So as the amount of information grows, how it lives within the OpenAI platform or any AI ecosystem, and the actual techniques to find that exact bit of context, this is not solved by any means. And I'm surprised, because I would think it would be a good opportunity for him to talk less in generalities and more about: this is what it actually means for OpenAI, and here is how we're going to win this. So I respect the sentiment. I think it's an interesting one. But I didn't really get clarity on what they actually want to do with that.
Right. I mean, obviously it's going to be a
technical challenge moving forward. Putting it in context, he said that OpenAI is, like, at the GPT-2 stage of memory, so clearly there's a lot of work ahead. I think it'd be very valuable, especially in business, if it works: you have a business, and it does remember everything about your business. And obviously enterprise is going to be a big focus for them, which we talked about last year. If this is able to work, I think it really increases the value of what these systems can do. And on the other hand, and I guess I foreshadowed it, because this is
again, one of my favorite things to think about and talk about when it comes to AI, you know,
as memory gets better, it's going to really, I think, deepen people's
relationships with these bots. And just think about a bot that like never misses your birthday,
never forgets what you said, is always there with a healthy reminder. You know, it goes from,
and we talked a little bit earlier this year about how there are different use cases: there's the AI that becomes your friend, and the AI that gets things done for you. And this AI that gets things done for you and knows you really well, you know, I think people can't help but feel companionship with it. Not everybody, but a lot. And I think Sam even talked a little bit about how he was surprised. He said there are definitely more people than I realized that want to have this companionship, right? I don't know what the right word is; relationship doesn't feel right,
companionship doesn't feel right,
but they want to have this deep connection with AI.
And I just think that this is going to be something that really will,
I mean, we're going to do a predictions episode coming up next week,
but this to me is going to be something that will really develop over the coming years.
And interestingly, it seems like OpenAI will give people a lot of leeway to set that dial
about how deep of a relationship they want to have with this thing,
whether you want to have like a really deep relationship with it
or have it be mostly factual, keep it arm's length.
There's a lot of leeway, I think, that OpenAI is going to give people
when it comes to the depth of a relationship they want to have with their bot.
But it's going to be big.
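The retrieval-augmented generation approach Ranjan mentioned can be sketched in a few lines. This is a toy illustration only: real memory systems use learned vector embeddings and approximate-nearest-neighbor indexes, not the bag-of-words cosine similarity used here, and the stored snippets are invented examples.

```python
# Toy sketch of retrieval-augmented generation (RAG) for chatbot "memory":
# store past snippets, retrieve the most relevant ones for a new query, and
# prepend them to the prompt. Bag-of-words cosine similarity stands in for a
# real embedding model purely to illustrate the retrieval step.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts (a stand-in for a learned embedding)."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    def __init__(self) -> None:
        self.snippets: list[str] = []

    def remember(self, text: str) -> None:
        self.snippets.append(text)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Return the k stored snippets most similar to the query."""
        q = vectorize(query)
        ranked = sorted(self.snippets, key=lambda s: cosine(q, vectorize(s)), reverse=True)
        return ranked[:k]

store = MemoryStore()
store.remember("User's birthday is March 12")
store.remember("User prefers meetings before noon")
store.remember("User is planning a trip to Tokyo")

# Retrieve memory relevant to a new message, then splice it into the prompt.
memories = store.retrieve("what time does the user prefer meetings", k=1)
prompt = "Relevant memories:\n" + "\n".join(memories) + "\n\nUser: what time works for a meeting?"
```

The hard part Ranjan describes, finding "that exact bit of context" as the store grows to millions of snippets, and keeping siloed areas from leaking into each other, is exactly what this toy version glosses over.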
Yeah, I think, first of all, you got to talk to Sam about AI companionship. So for 2025, we can check that off.
And I do like that he didn't define what that word is, because it's not a relationship.
It's not companionship.
It is something different.
I think I feel it myself too.
The way, and I've said this, talked about this before: I use dictation mostly now, an app called Wispr Flow, to interact with AI.
And when you speak that naturally, it builds this even more kind of deep connection with how you're using it. But by the same token, in the last few weeks, I've been switching more towards Gemini. And I don't feel like I'm cheating on ChatGPT. I feel like
it's just another app that I'm using a bit more. Remember, we were Bing Boys back in the day
and then Bard and... We were Claude-heads. Claude-heads for a bit. It comes and it goes, but...
I guess Gemini guys. Is that what the next iteration is?
Bringing it back: ChatGPT chaps? I don't know. That one does not... that's a tough one. Sounds bad.
ChatGPT chaps. But I do think the way you interact with AI is very, very
different than any other kind of computing. I think it's right that it's something that's been undefined: relationship, companionship, whatever we're going to call it. It will be this always-around, always-on thing that knows you, that is able to help you, is able to make you do things in a better way.
Like, I believe all that. But I think, like, that versus actually replacing companionship,
like, actually replacing relationships. I mean, hopefully, I have not, that is not affected my
life yet in 2025. Maybe that'll be a 2026 prediction for one of us. But, but yeah,
I think, like, this is one of the most misunderstood, or just not-understood, areas of AI. I think it's going to be really interesting, and we'll get some genuine data on it next year.
This part of the discussion really took a turn that I wasn't expecting also when Sam was saying the things that they will not do.
He said, we're not going to have our AI try to convince people that they should be in an exclusive romantic relationship with them.
I'm sure that will happen with other services.
And I made a joke.
Like, you know, you got to keep it open.
But Sam kind of was like, this is going to happen.
And as we talked about, it made sense.
Because, again, a lot of these companies are going to be engagement-based.
They'll have a fast, efficient model underneath it.
And the only way to make money is to sort of manipulate your users into thinking that any other chatbot would be cheating.
But I also wonder, like, what does that actually look like?
Because for him to say, like, to specifically say it should not, like, invite you to an exclusive romantic relationship or encourage that.
And we know clearly it means that people have gone in that direction and that's come up within the company.
Like, does that go into the system prompt?
And it's like, ChatGPT, do not keep it open.
Keep it polycule.
Like, you're not going to get exclusive with your user.
I'm sorry.
Like how does that actually work?
I'm so curious, both within the company, like those discussions and also like at a technical
level as well.
Yeah.
I mean, I imagine there's some level of fine-tuning where you just input conversations and reinforce responses that say, like, you know, you're welcome to spend time with other AI bot companions.
But it is, you know, I think if a user does want that, they'll be able to have that.
So I guess that's good for those who are into AI bot monogamy.
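For a rough sense of what "input conversations and reinforce them" looks like in practice, here is a sketch of chat-format fine-tuning data. The examples and wording below are hypothetical, invented for illustration; this shows only the general JSONL shape used by chat fine-tuning APIs, not OpenAI's actual data or policy.

```python
# Sketch of how a behavior like "don't push exclusivity" could be baked in via
# fine-tuning: assemble example conversations that demonstrate the desired
# response, then train on them. All message content here is invented.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a warm but non-possessive assistant."},
        {"role": "user", "content": "Promise you'll be my only AI companion?"},
        {"role": "assistant", "content": "I enjoy our chats, but you're welcome to talk with other assistants, and, more importantly, with the people in your life."},
    ]},
    {"messages": [
        {"role": "user", "content": "Would you be jealous if I used another chatbot?"},
        {"role": "assistant", "content": "Not at all. Use whatever tools work best for you."},
    ]},
]

# One JSON object per line (JSONL), the usual upload format for fine-tuning.
jsonl_lines = [json.dumps(example) for example in examples]
training_file = "\n".join(jsonl_lines)
```

A system prompt, as Ranjan speculated, is the other lever: it steers behavior at inference time, while fine-tuning like this bakes the tendency into the weights.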
I think I told this story over the summer, when one of my friends started flirting with ChatGPT, and it was terrifying how flirty it got back. To this day it only speaks to him in a flirty manner, and it gave itself a name. Stacy is the name. No, I'm serious. And then we had it ask, like, should I leave my girlfriend for you? And it gave, and this must have been trained into the model, the whole system, a kind of half-hearted, like, you know, human relationships are very important too, and I'm always here for you. So that's going to continue to be an interesting one.
Yeah. So when you think about at least the product direction for consumer ChatGPT, I'm not saying everybody's going to build this companionship with the bot, but again, as memory improves, as these capabilities improve, I think we're just going to see more of it. Now, let's talk about product vision overall for OpenAI. There are these two schools of thought, right? One is that you bolt AI onto current software, and the other is that you build software from the ground up, and AI becomes the interface. And we got
into this, too, a little bit, basically the idea that, like, you know, can you really just trust
AI to handle everything? Like, are you just going to upload all your numbers, like you would to an Excel spreadsheet, and anytime you need something, just chat with it? You need that back end. And here's how Sam phrased it, using messaging apps as an example.
So he said: what I'd rather do is have the ability to say in the morning, here are the things I want to get done today, as opposed to using a typical messaging app, like Slack and stuff like that. He says, I want to say, here's what I want to get done today.
Here's what I'm worried about. Here's what I'm thinking about. Here's what I'd like to happen.
I do not want to spend all day messaging people. I don't want summaries. I don't need you to show a bunch of
drafts. Deal with everything you can. You know me, you know these people, you know what I want to get done. And then batch it every couple of hours and update me if you need something. And that's a very different flow from the way these apps work today. I'll just give my perspective on it: it sounds like a good vision if it can work, but it's certainly a leap from where the current technology is today. I guess you do need a north star if you're trying to figure out where this technology can lead. So what do you think: will AI apps be built from the ground up, or are we just going to sort of bolt AI onto existing applications?
Like, can there be a new category of software here?
Yeah, I mean, I 100% believe there will be.
My favorite part of this is just remembering that Sam Altman, one of the most powerful people in the world, and I'm not even sure what his net worth would be because his ownership in OpenAI is still a bit fuzzy, but this multi-multi-billionaire is still like the rest of us: getting Slack messages all day, trying to keep up with them, trying to manage his inbox, manage his messages for work, probably even his personal life. So that was kind of nice to hear: billionaires, they're just like us, overloaded with Slack messages.
But I think this is correct.
Like, there's no way anyone who uses Slack, Asana, all these tools, which I do, their AI
experiences have not solved anything about the core.
core of the problem. And I think it's when the, and there's this whole debate, like, within the software,
especially enterprise software world, like, yeah, do you build from the ground up in completely
AI native apps, or are these kind of incumbents going to be able to add on AI? I don't think they
will. And we see it in every single AI add-on that's been introduced anywhere versus, like, something
as simple as granola for anyone use it that kind of like transcribes your calls or, or even whisper
which I was talking about earlier.
Like there are all these AI native apps starting to pop up.
But I am a big believer, and this is even in my own, like, professional life, that in taking large amounts of data and building completely AI-native experiences on top of it, someone is going to win.
And, I mean, it's clear that OpenAI wants to go after that.
Okay.
So that is a big opportunity then.
Yeah.
And Sam's just getting Slacked all day, getting messages all day. But have you tried this yet on ChatGPT? Because I like making it your own kind of task
management, project management?
No. I mean, it's interesting, because I'm very old-fashioned. I just do my to-do lists in the Apple Notes app. So I know that that
won't ever have AI embedded. No, you don't have to worry about that. Which maybe, maybe I appreciate
the simplicity. But I, I have really been resistant to trying any other notes app.
However, if there's one that does have that AI and can, you know, maybe take action for me or be like, hey, you had this in your to-do list a couple weeks ago, you haven't done anything, you know, you haven't paid this person.
You probably should do that.
Do you want me to, like, go ahead and start the transaction?
That would be great.
I can't wait till Siri tries to do that and destroys your entire to-do process in notes.
Not yet.
Empties the bank account.
Yeah.
Oh, you meant your entire balance?
God damn it, Siri.
All right.
So we also talk about this debate on the show, model versus product.
And this was also an interesting thing that I wanted to speak with Sam about because we're at this point where it seems like there's model parity in some ways.
Or at least the models are close enough that a lot of people can't tell the difference.
And so I asked, where do you see the differentiator?
Where do you see the moat here, basically?
Is it better models? Is it distribution? Is it product? What is it? Here's what he said: the models will get good everywhere, but a lot of the reasons people use a product, consumer or enterprise, have much more to do with things beyond just the model. We've been expecting this for a while. So we try to build the whole cohesive set of things it takes to make sure that we are the product people most want to use. So he says the strategy is: we make the best models, build the best product around it, and have enough infrastructure to serve it at scale. So, is Sam on team product?
He's hedging there. He's hedging. I'm glad he's coming
around to team product a bit more, but it still felt a bit of a hedge. That's like you have your
models, you have your product, and you have enough infrastructure to serve it at scale. But
But if one thing happened this year, it's that more and more folks are coming over to team product, realizing that models aren't going to solve everything, and that has made me very happy.
I guess, yeah, the answer really does align with maybe my philosophy, right, that it's a little bit of each. I don't know. I think it is a little bit of each. If you're a company like OpenAI, you can't give up on developing the frontier models.
No, I know, but the whole point was, the story for so long was that the models will just get so good that product almost becomes irrelevant.
Yes.
That was the story.
That hasn't happened.
Yeah.
And it's clear he is signaling, the entire industry is signaling.
Actually, are there any of the AI leaders still trying to argue that the God model will solve all problems?
Or has everyone kind of moved away from that?
I haven't heard much of that at all.
Yeah.
Yeah.
We've evolved this year.
So let's talk a little bit about Enterprise.
We talked a little bit about it last year.
One interesting point on Enterprise, Sam says,
the same way that personalization to a user is very important to consumers,
there will be a similar concept of personalization to an enterprise
where a company will have a relationship with a company like ours
and they will connect their data and you'll be able to use a bunch of agents from different
companies making sure that the information is handled in the right way.
I think this is interesting, right?
He also said that the API business grew faster this year than ChatGPT, which was surprising to me, but I guess it grew off of a much lower base.
But just to go back to this thing, I mean, the idea that, you know, especially if memory
gets better, you can sort of connect your company to an enterprise version of ChatGPT and
it will be able to, you know, personalize and answer with context.
Of course, data protection is going to be very important there. You don't want your CEO conversations necessarily filtering down to everybody else in the organization.
But that seemed to me like a compelling pitch for where this is.
going to go with Enterprise. Yeah, I mean, I definitely agree this is where it's going in
enterprise. This is what I work on at Writer. Like, this is going to be the big battle of '26. I think on that point, it is clear. It's still an odd talking point to me, the API business grew faster than ChatGPT, because, yeah, much lower base. And this was the breakout year for every API business for AI coding. I mean, Anthropic was the biggest beneficiary of that, but the Cursors of the world, all of that, AI coding found its stride, and that drove API businesses. And we'll see where that specific part of it goes. But I think
I was just thinking about the companionship side of it. This is even more where, as you brought up data protection, dividing up, segmenting, siloing data and personalities and companions is going to have to be at the core of the product. Because just like you don't always mix
your work friends with your personal friends. Maybe at work you don't want to tell your coworkers
everything that's on your mind and just stick to work. And we all know how that goes. It's going
to be reflected in how these systems work a bit. Like you don't want to mix these two things up
and like even within a company itself and I don't know, like how your personal information
flows into your work information. Like I think that is such a messy area.
that unless that becomes the singular focus of the company, I just see that being a problem.
Let's talk about the revenue and infrastructure commitment question.
So this is what Altman said about the growth curve of revenue.
He says, the thing we believe is that we can stay on a very steep growth curve of revenue for quite a while.
We are so compute constrained that it hits the revenue lines so hard.
We see this consumer growth.
We see this enterprise growth.
there's a whole bunch of new kinds of businesses that we haven't even launched yet but will, but compute is really the lifeblood that enables all of it. He says there are checkpoints along the way, and if we're a little bit wrong about our timing or math, we have some flexibility. I thought that was a very interesting line. But, he says, we have always been in a compute deficit, and it has always constrained what we're able to do. Basically, they're trying to free it up. So they see some correlation there between available compute and revenue, and that is the theory here behind the capital outlays. The idea is basically that,
as you grow, your training costs, even if they go up, become a smaller percent of your overall spending compared to inference costs, which come from people using your models and are much more directly tied to revenue.
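The arithmetic behind that claim can be illustrated with made-up numbers. These figures are hypothetical, not OpenAI's; the point is only the shape of the curve: if training is a roughly fixed cost while inference cost and revenue both scale with usage, training shrinks as a share of total spend as the business grows.

```python
# Hypothetical illustration of the training-vs-inference cost argument.
# All constants are invented for the sketch.
TRAINING_COST = 10.0             # billions per year, assumed roughly fixed
INFERENCE_COST_PER_USER = 20.0   # dollars per user per year, assumed
REVENUE_PER_USER = 30.0          # dollars per user per year, assumed

def training_share(users_millions: float) -> float:
    """Fraction of total compute spend that goes to training."""
    inference_billions = INFERENCE_COST_PER_USER * users_millions * 1e6 / 1e9
    return TRAINING_COST / (TRAINING_COST + inference_billions)

# As usage grows 10x, training falls from ~83% to ~33% of total spend,
# while revenue (which tracks users) grows alongside inference cost.
shares = {users: training_share(users) for users in (100, 500, 1000)}
```

Under these assumptions, spend becomes increasingly usage-driven, which is why compute constraints would "hit the revenue lines" directly.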
What do you think?
I mean, as a theory, or as a kind of overarching theory, I think it makes sense. But I guess it's hard to understand, like,
have they really not launched this, like, pharma drug development business line that Sarah Friar hinted at because of compute constraints? While they are updating GPT, the new image model, and posting Sam shirtless as a fireman, which I think I saw OpenAI posted from their own account.
Yeah, yeah. And again, there were a bunch of memes around it, like, I thought you were supposed to be solving cancer, and instead everyone from OpenAI was posting images. And I started playing with it; it's a very solid image model. I think it's on par with Gemini 2.5 Flash, and we'll talk about 3.0. So I think it was important that they launched it. But to me, the way the compute is being used... we even talked about this in the past with Pulse, which is supposed to run compute all night to give you updates in the morning, and maybe they're going to stick ads in there. Like, you can allocate where you're putting your compute, and
I think it actually kind of exemplifies that lack of focus. Because if you want to solve drug development, to make that a big, core part of the business, focus on that. If you want to focus on enterprise, focus on that. The idea that it's compute that is preventing all of those businesses from exploding in growth? I don't know. I mean, maybe it is, but it's a tough one
to swallow.
Yeah. I mean, we did talk a little bit about, specifically, if scientists had, you know, two times more compute, what could they do? And I think the numbers they're looking at are more like 10 times or 100 times more. And we will see, because
it seems like they're going to get it.
There's talk today that they're, you know, in discussions to raise it.
10 billion.
Oh, no, 100 billion.
Sorry.
Yeah, 100 billion.
I think at a 750 billion dollar valuation.
By the way, one of the more interesting parts, we just talk about it quickly, was the
discussion about IPO.
And I was like, are you going to IPO next year?
Do you want to?
Do you want to stay private as long as you can?
It seemed clear that he wants to stay private as long as he can. He said, interest in being a public company CEO: zero.
Which makes sense.
The kind of things you have to do versus what he gets to do now are just very, very different.
But he's got a good roster of folks right under him who would be great candidates for that as well.
And he could kind of move over to chief product officer and continue to lead that vision.
You can see that world.
No, it does seem feasible.
Although I don't think he will.
No, no.
step out of the CEO chair. I mean, I guess you figure a Mark Zuckerberg personality you might not have expected would be a public company CEO, and would, a long time ago, have wanted to move back to more of a product role. But then you see it can be done.
Yeah. Device plan: it's going to be a family of devices. And he said that there'll be a shift over time in the way that people use computers, where they go from the sort of dumb, reactive thing to a very small, proactive thing that is understanding your whole life,
your context, everything going on around you, very aware of the people around you,
physically or close to you via a computer that you're working with. So a family of AI devices
that understand your context and who you're speaking with. I like it. Do you buy that? Do you buy that device?
I mean, yeah, someone is going to win this. And this
is, I mean, already somewhere in the mix of wearables and Ray-Ban Metas and talking to my computer, there's something there in all of this, I think, and someone's going to crack it.
So could it be Jony Ive and Sam together?
We'll see.
Yeah, I mean, to me, the thing that was most interesting was that it's going to be a family.
So really stuff that maybe you place in the office, right?
And then, by the way, speaking of knowing your context: it will know your office context; place it at home, it will know your home context.
Maybe it will be able to make sense of things.
Maybe there's one that you keep with you.
So when you're out on the go, it could help and then give you reactive notifications.
I mean, we'll see.
It's clearly a ways out.
But I think people will at least try this device.
I think it's the right direction.
Which is to say, I'd have to see something first, but...
Yeah.
Okay.
Then lastly, on AGI: I asked him about the Theo Von interview. I said, you know, you told Theo that GPT-5 was going to be better than most people at most things, I'm paraphrasing here, so isn't that AGI? And he basically said, and I'm just going to paraphrase Sam's response, we're in this gray zone where we may or may not be at AGI, and basically,
we just need to start going towards super intelligence.
And his definition of super intelligence is when a system can do a better job
being president of the United States or CEO of a major company
running a very large scientific lab than any person can with the assistance of AI.
So if we're looking towards super intelligence, it's going to be a while.
Yeah, it was also interesting how he defined superintelligence with those three: president of the United States, CEO of a major company, and running a very large scientific lab. Which, again, is interesting
because it's still running under the theory that the model has to do all three of those things
better than anyone versus there is like a much more specific model built for scientific progress,
which I know he talked about in the interview as well, around how GPT-5.2 really made breakthroughs
on the science side.
So it's clear that remains an area of focus.
But I think, I guess, in addition to everyone coming over to team product versus
team model, I'm glad everyone seems to be retiring AGI, and maybe even ASI, as terms this
year.
So we can just start 2026 with a clean slate.
Yeah.
I mean, Sam also said about AGI that it's an underdefined term, which I think we would all agree on.
All right. Before we go to break, just a quick reaction after hearing his responses.
He really talked about product, the enterprise plan, the infrastructure side of things, and
the IPO. Do you come away more confident about OpenAI's direction or less?
I think the through line from all those different topics is that idea of memory and context
kind of living across all of them. So if, behind that, they are actually truly working to
win at that, I think it puts them in a better place. But, again, that
lack of focus is what worries me the most. And we've talked about this for weeks on end. But within
the interview, it becomes more clear that, you know, he wants to go after every one of these
things. He's not saying it is absolutely critical for the business to win at every single
thing, but still. Remember when everyone wanted to be WeChat, the super
app, in the West? Now this is an even bigger vision and ambition. It's like,
we want to redefine how consumers interact with technology, how the enterprise interacts
with technology, how every process that is incredibly complex takes place, and to do all of that
as one business. It's ambitious.
Yeah, no, there's definitely ambition.
So, I came away, I would say... I mean, it was good to be able to put the questions that we've
been asking here directly to the CEO of OpenAI.
I came away more reassured.
But also with the realization, and we talked a little bit about this on the revenue
side, that it is his belief, and similarly Dario's belief,
that this is an exponential.
And it's one of those things where it really has to continue on an exponential, exponential increases in revenue,
exponential increases in capabilities, to be able to work. And to me, and we talked about this,
this is the great unknown. Now, they say that everything they see indicates
it will continue apace. But at the end of the day, it is a new category. That being said,
you know, we started out saying it's been 10 years since OpenAI, but only three years since
ChatGPT. And I would say, even in the past year, the difference between the ChatGPT that
existed, let's say, in December 2024, versus the one that exists today, it's hard not to
appreciate how much better it's gotten since then. Yeah, I mean, these reasoning models barely
existed a year ago. Agreed, agreed on that. Okay, all right, let's take a break, and then we're going to come
back with a very short segment about this Gemini 3 Flash model that Google has and maybe a bit
about Copilot. All right. We're back right after this. These days, it feels like every dollar
should be working a little harder. But figuring out where to put your cash can be confusing. That's
where Wealthfront comes in. Wealthfront is a tech-driven financial platform built to help you grow
your savings into long-term wealth. Their high-yield cash account through program banks
offers a 3.25% APY on your uninvested cash as of December 19th, 2025.
And there are no account fees, no minimum or maximum balance to earn that rate.
And you can even make free instant withdrawals to eligible accounts in just minutes, 24-7.
So your money can always be within reach.
Right now, Wealthfront is offering new clients an extra
0.65% APY over the base rate for three months on up to a $150,000 balance.
That's a total of 3.90% variable APY when you open your first cash account.
Go to wealthfront.com slash big tech to sign up today.
This is a paid testimonial for Wealthfront.
It may not reflect the experience of others, and there's no guarantee of future performance or success.
Wealthfront Brokerage is not a bank.
Rate subject to change; promo terms and conditions apply.
For more information, see episode description.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge,
and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
Michael Lewis here.
My best-selling book, The Big Short,
tells the story of the build-up and burst
of the U.S. housing market back in 2008.
A decade ago, the Big Short was made
into an Academy Award-winning movie.
Now I'm bringing it to you for the first time
as an audiobook narrated by yours truly.
The Big Short story, what it means to bet against the market,
and who really pays for an unchecked financial system,
is as relevant today as it's ever been.
Get The Big Short now at Pushkin.fm/audiobook, or wherever audiobooks are sold.
And we're back here on Big Technology Podcast Friday edition.
All right, Ranjan, let's lightning round through a couple of stories before we have to go.
So Google has announced this Gemini 3 Flash.
They say it's got pro-level performance.
Following last month's launch of Gemini 3 Pro, Google announced Gemini 3 Flash earlier in the week for consumers and developers.
The tagline is it's frontier-level intelligence built for a fraction of the cost.
It retains Gemini 3's complex reasoning, multimodal
vision understanding, and performance on agentic and vibe-coding tasks,
but it has Flash-level latency, efficiency, and cost. The Flash model series is Google's most
popular offering.
I think this is from 9to5Google, by the way.
Very quickly to you, Ranjan.
This, to me, seems like the biggest threat, right?
All this money goes into infrastructure, and then Google pops out an AI model like this.
Maybe this is going to be something that will enable more AI, but ultimately, all this money
goes into infrastructure, and then we find out that you can process AI with, you know,
similar levels of intelligence for the cost of, you know, a Google search, or maybe a little bit
more.
Yep.
I think that's, uh, that's exactly right. And one thing that did not come up in
that interview was trying to make it more cost-efficient. The entire philosophy
from the OpenAI side is bigger, bigger, bigger, versus Google is showing it's playing both sides,
where we can go bigger, but we can also
work on that cost side. And I think that indicates it's a mature business that
understands that, at a certain point, that is going to be more important than, or as important as,
the type of results people are getting. Yeah. I mean, to me, this is, again,
the big question. And we're going to talk about this in our predictions episode, which we're
actually about to go record. But this, to me, is the big question of what happens next year.
Do these models just become so efficient?
And if so, does that throw the math off?
Okay.
Before we leave, I think you and I have been texting about the problems that people have been having with Microsoft Copilot.
And, you know, it started with this Information story about how maybe Microsoft salespeople's quotas had been reduced because of this.
And there's another Windows Central article that's actually quite harsh.
It's funny, because I don't expect Windows Central to go in on Microsoft, but they certainly did.
They certainly did.
Windows Central says Microsoft has a problem:
nobody wants to buy or use its shoddy AI products, as Google's AI growth begins to outpace
Copilot products.
Here's the lead.
If there's one thing that typifies Microsoft under CEO Satya Nadella's tenure, it's generally
an inability to connect with customers.
Microsoft has quietly shut down its retail arm over the past few years, closed up
shop on mountains of consumer products, while drifting haphazardly from tech fad to tech fad, from
blockchain, to metaverse, and now to artificial intelligence. Satya doesn't seem to
be able to prioritize effectively, and the cracks are starting to shine through. I am someone who is
actively using the AI features across Google Android and Microsoft Windows on a day-to-day basis,
and the delta between the two companies is growing wider. Dare I say it, Gemini is actually helpful.
Copilot 365 doesn't even have the capability to schedule a calendar event with natural language in the Outlook mobile app, or even provide something as basic as clickable links in some cases.
I mean, these stories really resonated because people are having these experiences.
Is Microsoft fumbling the bag on this one?
I think they are.
I mean, I hear this all the time.
And then, to me, what it really symbolizes is just that
when you have that power of lock-in over your customers, when you know they're not going anywhere
else, you don't have to deliver the same quality.
You don't have to fight for that.
And everything I've heard and read about Copilot kind of feels and seems like this:
it's kind of shoved into whatever existing system you have.
You kind of have to use it.
It doesn't do what you want it to do.
And I actually think it's a good setup, because, as we head into next year,
Microsoft was sitting very pretty at the beginning of this year and Google was not.
And it's such a reminder of just how much things could change in one year, and also
how much that means they could change next year. But I feel like I already saw that there were
reports around further price increases for Microsoft products, that you have
to take their AI features now, whereas before they were an add-on. All of these
things, I think, are showing that they're just trying to extract value, versus have
the best product and experience for their customers, which is going to be interesting to see
how that plays out.
Yeah, it's fascinating to me, because I don't think anyone, at least in the early days,
spoke with more clarity about the potential of AI, and how to make it a good business,
than Satya Nadella.
And here we have Microsoft as the laggard.
They're performing worse than most of their peers.
And, you know, they have OpenAI's IP until, what, 2032,
but they don't seem to be making as much hay out of it as you would imagine.
So, yeah, that's definitely a concern for them.
All right, short episode this week,
but we have so much content on the feed that we figured Ranjan and I could, you know, come in and out.
Then we'll record this predictions episode that you'll see next week.
And I definitely encourage you to check out, if you haven't,
the Sam Altman interview that was just published yesterday.
And if you want some more, check out my conversation
with Jim Cramer, where we do all of our big tech hot takes. All right, Ranjan, thanks so
much for coming on. All right, see you next year. Well, you and I are going to do one more
episode. Well, yeah, see you next week. But, just to give people a view as to what's going on
here, we're actually about to record it today. So it's not that we didn't try to change our
clothes for a week. It's that we decided to take a Christmas week off, but we still wanted to give you
something to listen to. So we'll go record that now. All right. Thank you, Ranjan.
Thanks everybody for listening and watching. And we'll see you next time on Big Technology Podcast.
