Big Technology Podcast - 1-800-Chat-GPT, Neuralink’s Potential, Meta's Live AI
Episode Date: December 20, 2024
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) OpenAI's o3 reasoning model 2) Is reasoning a real step forward or a head fake after other methods hit a wall 3) Is AI reasoning too expensive 4) AI models attempt to trick their trainers 5) Are we getting close to AGI? 6) Is it silly to start discussing AI sentience now? 7) 1-800-CHAT-GPT 8) Okay, we call ChatGPT 9) Assessing Neuralink's prospects 10) Meta brings Live AI to its smart glasses 11) And live translation too 12) A tech prediction each for 2025 --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
OpenAI has an impressive new reasoning model.
You can call ChatGPT now too. Is Neuralink as promising as it sounds? Plus some predictions
for 2025.
That's coming up on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional
cool-headed and nuanced format.
We have a great show for you today, covering the latest from OpenAI, revisiting my Neuralink
interview with Noland Arbaugh.
And then, of course, we're going to get some predictions in for 2025.
Joining us, as always, on Fridays is Ranjan Roy of Margins.
Ranjan, great to see you. How you doing?
Great to see you. Listeners cannot see, but Alex is currently in Australia and has a tiny microphone in his hand.
And I feel like I'm doing a TikTok influencer type thing right now.
That's exactly what it looks like on the screen.
Luckily, we're not going to show any of the video of this.
But yes, I'm in Australia, last day of work before some, I think, needed winter vacation.
And let me just tell you the flight here is crazy.
you go from winter to summer and from short days to long days and you're like on the plane for
forever. I left on Tuesday and got here on Thursday. So I skipped Wednesday entirely. What's the time
difference? 16 hours. No big deal. So I'm actually in Saturday morning and you're on Friday
afternoon, which is crazy. How is Saturday looking? What do I have to look ahead for? Honestly,
it looks great and quite warm. So happy about that.
Not in New York. Not New York. We got a little snow here.
And so, you know, it means that you're still living in Shipmas, whereas Shipmas is over where I am.
And I just think we should just take this moment to appreciate how amazing your final day of Shipmas is, and you get to savor it a little longer than I do.
That's the most important part of this, is that, essentially, you lost a day
of OpenAI's 12 Days of Shipmas, Alex.
And I'm sorry, I'm sorry for that.
But today was a big one for Shipmas.
You saw the news?
I did.
And before we just get into it, let me just say, we have some new listeners here.
So let's just talk a little bit about the show format.
On Wednesdays, we do a big interview, like Wednesday's interview with Noland Arbaugh.
And then on Friday, Ranjan and I come and break down all the tech news.
So we do two shows a week, one interview, one analysis.
And we're here in the analysis show.
And boy, do we have some news to analyze.
because OpenAI, which has been shipping on its Shipmas schedule, right, one release every day for 12 straight days,
announced that it has a really interesting new reasoning model called o3.
This is from TechCrunch: on Friday, the company unveiled o3.
So o3 is this really impressive new reasoning model that was trained to think before responding,
and it's using this method that OpenAI calls private chain of thought.
So it basically reasons through a task and plans ahead before it answers,
making it a pretty impressive upgrade to the company's large language model
family. There's some crazy stuff that OpenAI found out about this. But basically, the big takeaway
for me here is that generative AI is moving towards this reasoning methodology, going from
training these models up with more data, compute, and energy to training them up with the ability
to think through problems, go step by step, make sure that the last step was answered appropriately,
and then spit out an answer.
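To make the pattern concrete, here is a minimal editor's sketch of the "plan, check each step, then answer" loop described above. It is only an illustration of the general idea: o3's private chain of thought happens on OpenAI's side and is not exposed, and the decomposition prompts below are invented for this example. It uses the OpenAI Python SDK's chat completions call, but any chat model client would do.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    # One plain chat-completion call; no special reasoning features used.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def answer_with_reasoning(question: str, max_steps: int = 5) -> str:
    # 1) Plan: break the task into explicit steps.
    plan = ask(f"Break this problem into at most {max_steps} short steps, one per line:\n{question}")
    notes = []
    for step in plan.splitlines():
        if not step.strip():
            continue
        # 2) Execute the step, showing work.
        result = ask(f"Question: {question}\nWork so far: {notes}\nCarry out this step and show your work: {step}")
        # 3) Check the step; redo it once if the check fails.
        verdict = ask(f"Does this result look correct? Answer only yes or no.\n{result}")
        if verdict.strip().lower().startswith("no"):
            result = ask(f"Redo this step more carefully: {step}\nPrevious attempt: {result}")
        notes.append(result)
    # 4) Only now produce the final answer.
    return ask(f"Question: {question}\nUsing these worked steps, give the final answer:\n{notes}")
```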
I think I'm buying this.
I think this really works,
and I'm curious what your reaction is to o3.
All right.
So my first reaction to o3,
and this is more on the pedestrian side of things,
but I love that they had to skip the name o2
out of potential conflict
with the British telecom provider O2.
So o3 is the successor to o1.
But o3, the whole space of reasoning
in generative AI,
I do think is incredibly interesting. I think we've already been hearing a lot about the limitations around scaling traditional large language models, and the way a reasoning model works, in terms of actually breaking things down into essentially a chain of actions, a chain of thought, stepping through steps, being able to validate each step along the way, is very interesting, if it can do this. The knock on large language models has always been that they don't actually know anything, that they're essentially,
you know, just predicting the next letter or pixel, and they don't know if it's correct or not.
There's nothing beyond a kind of superficial presentation of information that looks good and is often correct.
So if this is achievable, I think this is important.
Google just the other day announced, and this was the incredibly named Gemini 2.0 Flash Thinking Experimental.
Only Google could come up with a name like that.
But basically every tech giant, everyone in the generative AI space is trying to get into reasoning
because if it can be done successfully, if you think about the applications of it, it opens up
such a bigger world of possibility rather than just text output, image output, like actually
solving real problems.
It at least presents the opportunity to do that.
Now, this is a question I think needs to be asked here, which is: do you think that this
move to reasoning is just a
head fake from everybody working on generative AI? And the reason why I ask that, and it's a
question I think that needs to be asked, is because it seems like the traditional methods of scaling,
the brute force methods as they're called, right, which is that you just scale up your data center
to one million GPUs as Elon Musk is doing with xAI. You dump as much data as you can. You
hire PhDs to write everything they know and you throw that into the model training and you hook it up
to a nuclear power plant, and that's just been producing better and better results.
And this entire fall and early winter, we've been talking about how maybe that approach is not
yielding the results that we expected. And now all of a sudden, guess what's happening?
Google's releasing a reasoning model. OpenAI is releasing a reasoning model. And on one hand,
these are very impressive models. And on the other hand, part of me is wondering, well,
is the AI industry, like the Googles, the OpenAIs of the world, taking us in
this direction because the traditional mode of scaling, the thing that got us here, the thing that's
made ChatGPT better, generation after generation, just isn't working? I've been thinking about this
a lot because reasoning models, they basically queue up agentic AI. The whole world
of agentic, and we've been talking about this for weeks, and even, like, what is it? Again, the idea
that you can take a process and have it essentially understood and automated in
some kind of fashion using generative AI, that rather than having to outline a bunch of concrete
steps, the model would just know what to do. That's kind of the pitch right now. That's what
everyone is selling, everyone at these companies, everyone across Silicon Valley. So they kind of
have to do this. So I agree there's definitely a marketing element of this. I also think it's probably
what they're most interested in. I imagine you're some PhD researcher;
just scaling a large language model to the next potential iteration probably isn't that interesting.
So if you're in pure research house mode, probably, like, you know, the competition head-to-head,
where everyone is trying to kind of show how good they are, it's going to happen around reasoning.
It's not going to happen against just scaling a traditional large language model that, again,
does not actually understand things, that just is able to kind of present information in a specific way,
in a very good way. Whoever cracks this, obviously, from a pure intellectual firepower
research, you know, point of view, I imagine, that is exciting for a lot of these
companies.
But let me go back to my question.
I mean, why is this needed for agents?
Like, I understand step by step, but do we really need these models going through and
thinking, quote unquote, to get to the point where we can build agents with them?
And the other side of this is, hey, isn't this
really expensive? Like, haven't we been talking about the fact that AI costs way too much and it's
really tough for companies to put into production and make it work? And if you have a reasoning
model, right, it's going and thinking step by step, that's hitting the compute again and again,
and it's going to be more and more expensive to run these models. Like, if you're thinking, quote
unquote, for a minute, that really means you're just running processes for a minute or two or
three. And that is way more expensive than running a general
query on a GPT model, and that limits the amount of stuff that you can build with this.
I've been thinking about this a lot, and I think this is going to be, like, a big debate or
battle in 2025: the idea that to create agents, to create the promise that is being sold,
is that the AI will know what to do, given a very general outline, be able to understand
all the systems, all the processes. To me, it's almost kind of
overkill in a lot of these situations. One, just outlining the process itself. I was actually trying to play with this. We're looking at going skiing over the Christmas break, and I was looking for a lesson for my son, who's going to be starting. And it was a pain in the ass. I had to go to, like, a bunch of different websites, filter through, do the same repetitive action, go to find a lesson, fill in the same exact things: five years old, first-time skier.
And I did this repeatedly at a bunch of mountains all around the Northeast, just trying to find,
and then cross-referencing that against like hotels.
But it was interesting because I kept doing the same thing over and over.
And of course, I was like, this would be pretty cool to be able to just type in a prompt of
here is generally what I'm looking for.
Can you go do this work for me?
Google's kind of promised that already with Gemini.
It didn't work at all.
So, you know, but if I could just outline, here's 20 websites,
can you find the price of a lesson, the availability on a date?
Like, I'm okay outlining that stuff, adding just a little bit more structure to the overall
query and then letting it go do the work, which is still pretty incredible if it can do it.
At a certain point, that's just kind of like an automation or a script.
It does not require some AGI reasoning.
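As an editor's aside, here is a minimal sketch of the kind of script being described, the thing a freelance engineer might otherwise write. Every resort name, URL, and CSS selector below is a hypothetical placeholder; real ski-school pages would each need their own parsing logic or, better, an official API.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical list of resort lesson pages to check.
RESORTS = {
    "Example Mountain": "https://example.com/ski-school/first-timer",
    "Sample Peak": "https://sample-peak.example.org/lessons/kids",
}

def lesson_price(url: str) -> str:
    # Fetch the page and pull out whatever element holds the price.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.select_one(".lesson-price")  # hypothetical selector
    return tag.get_text(strip=True) if tag else "price not found"

if __name__ == "__main__":
    for name, url in RESORTS.items():
        print(f"{name}: {lesson_price(url)}")
```

The point of the agent pitch is that this boilerplate, and the per-site selector hunting, is exactly what you would rather describe in a prompt than write by hand.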
But I think like the disconnect between what will actually make average people's lives a little bit
better or a little bit easier using AI and generative AI versus reasoning models. I think there's
a big kind of not even expectations gap, but just perspective gap between the two. All right, I'm
going to push back on this again. I mean, isn't this what Google does, right? Like you type a query into
Google. It doesn't need to go step by step reading every website and then take a minute and then give
you results. It instantaneously has the entire web indexed and then bam, it spits out your result. So why
is AI needed to go through a multi-step process to try to find all these different websites for
a ski lesson. I mean, to me, isn't that adding more complexity to a process that previously
has been pretty simple? Well, no, but Google, okay, Google Search cannot do that as of today,
like anything that complex. I would have to like spend a lot of time writing an entire script
to do that. I guess maybe that's it. It's that a lot of what's promised around agents are
essentially kind of like abstracting away software scripts, which would be amazing if I could
just write in a prompt as a non-coder and actually get these kind of results that I previously
would have gotten a freelance Python engineer to write a script for me for. So like that, that would
be cool. But the idea again, and then there's the trust side. Because I think in the whole
agentic space, like, let's say they show on some benchmarks, and already with o3, the benchmark
scores they were showing looked amazing, and it's always kind of hard to understand, like, the
ARC benchmark: okay, it was good, but what exactly does that mean for me? I think a lot of people
don't want a black box, or the AI to do all the work and to make all the decisions. Like
you said, I don't want it to go ahead and book things for me. In this example, I don't want it to
have my credit card information and take care of it. I just wanted it to find the information. So I
think there, again, there's this big, the word agentic, we've, you sound smart if you just say it. Everyone,
just repeat: agents, you sound stupid, but agentic, that's PhD-level talking. Yeah, yeah, don't say agents,
say agentic, just in any context. At Christmas dinner, when you're sitting around, what
have you been up to? Doing agentic. Thinking about agentic. Doing agentic stuff. That's what I'm doing.
I'm going to tell my mom and dad that.
But yeah, I think, but there, yeah, in all of this, the hype cycle level of, like, you have to show, now you've got to show a reasoning model if you want to raise more money and sell your shares in a secondary offering to SoftBank, which OpenAI did this week.
Did you see that news?
Talk about it a bit more.
Against all the reasoning model news and the 12 Days of Shipmas, SoftBank has agreed to purchase
$1.6 billion of shares from existing OpenAI common shareholders.
They're doing it at a $157 billion valuation, the peak valuation, that's like the last
valuation.
So the game, the wheel keeps rolling, the flywheel keeps rolling.
Like big announcements, making it seem like you are the most cutting edge, innovative company
of them all, keep selling shares, cashing out here and there.
Getting that Masa money.
Masa son coming in.
Got to get that Masa money.
Getting some of that Masa money.
Yeah, my favorite part of this was, I mean, hold on, we laugh about Masa sometimes,
but the guy is good, because, I was running the numbers,
they put in $500 million at a $6.6 billion valuation.
Now, by spending $1.6 billion at a $157 billion valuation,
they're now, like, officially marking that there's an actual transaction taking place
at that value. So their initial investment of 500 mil goes to 12 billion. So it's just pure
financial engineering here, in just a beautiful, beautiful way. So thank you, Masa.
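For readers who want to check the back-of-envelope math as stated on the show, here is the calculation in a few lines. It is illustrative only; a real cap table involves dilution, share classes, and later rounds.

```python
# Figures as quoted in the conversation, in billions of dollars.
initial_investment = 0.5    # SoftBank's earlier stake, per the discussion
entry_valuation = 6.6       # valuation at which that stake was bought
new_valuation = 157.0       # valuation of the secondary share purchase

ownership = initial_investment / entry_valuation    # roughly 7.6% of the company
marked_value = ownership * new_valuation            # roughly $11.9B, i.e. "goes to 12 billion"
print(f"{ownership:.1%} stake now marked at about ${marked_value:.1f}B")
```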
Yeah, I mean, I'll give him credit. But he's also the guy that told Adam Neumann of WeWork to get
crazier, as if he needed that advice. And he did. And we all know how that ended. So there's
always going to be a ding on Masa. What's the conversation with Sam? I mean, don't you think it's the same?
Yes. I think.
Oh, man, I would love to see, I would give anything to be a fly on the wall for that conversation between Masa and Sam.
Here's what I think he told Sam. He told Sam, what we need you to do is do a few more reasoning improvements and then say that you have AGI so Microsoft has no claim to your technology and I benefit from it almost exclusively or not exclusively, but I benefit from it more than I would otherwise.
I saw on a news report, I can't remember, but it was on one of the major channels.
And they said, and they took our point.
They said that Sam Altman's probably going to just say AGI in a couple of weeks
just to get out of the Microsoft constraints to the contract.
We know they're listening.
We know, listen, we got folks listening to the show.
We know they're listening.
And actually, you know, on this, on that note.
So AGI, for, you know, for listeners, I mean, I'm sure most of our listeners know,
but it's artificial general intelligence.
This is from the TechCrunch article.
It refers to AI that can perform any human task, and OpenAI has its own definition.
This is pretty interesting.
So OpenAI's definition is highly autonomous systems that outperform humans at most economically valuable work.
By the way, it's interesting qualifier.
Highly autonomous systems as opposed to autonomous systems.
Right?
So it just needs to be mostly autonomous, and it can qualify.
So, and this is this ARC-AGI test, right,
which is a test designed to evaluate whether an AI system can efficiently acquire new skills outside
the data it was trained on. So o3 achieved a score of 87.5% on the high-compute setting.
Now, this is from the ARC-AGI website: this is a surprising and important step-function
increase in AI capabilities, showing novel task adaptation ability never seen before
in the GPT-family models.
For context, it took four years to go from zero percent with GPT-3 in 2020 to 5 percent in
2024 with GPT-4o.
And then to go now to 87.5 for this high-compute setting sounds kind of crazy.
Now, look, it's just one test, but I'm curious, A, do you think that they're getting close to
artificial general intelligence? And B, do you think they're getting close to saying that
special acronym that will effectively, you know, throw even more chaos into this world?
Actually, I'm going to start with a prediction here. We'll be getting into more later,
but I do believe 2025 Sam Altman says, we got AGI. I think it's going to happen.
I mean, using the ARC, which stands for Abstraction and Reasoning Corpus for Artificial General Intelligence, using that framework or benchmark, if you go from 0 to 87.5%, it's not too bad to say that last 12.5%, that you can just say o4, which hopefully doesn't have any copyright or trademark issues, and o4 is going to be AGI.
But it's amazing because they still haven't released GPT-5.
But 87.5 is pretty damn good.
I mean, that's what, a B-plus on AGI?
I mean, I'd be thrilled to get that grade
and bring it home on the report card, 87.5.
I would be like, yo, I'm valedictorian here.
Look at this.
My parents would be so, mom, I got a B-plus on AGI.
We're so close.
We're so close.
But I think it's going to be, it's going to be a continued
hot topic for 2025. Agentic, obviously, this is going to be a discussion
we're going to be having a lot more of.
Can I just ask, though, where the F is GPT-5?
It doesn't matter.
It doesn't matter what your name is.
Well, no, hold on.
If Google can call their model, again, Gemini 2.0 Flash Thinking Experimental,
while OpenAI's is simply o3, I think
we've recognized, like, the branding element of the naming, we've just, we've moved
long and far away from that. And then even now, like, I remember there was a time, are you using, like,
GPT-3 or 4? Now it's 4o mini, 4o, which one are you using? You're basically just looking at the cost
per token and seeing if it actually works or not. So the numbers are
becoming, sorry, the naming, the naming side of it is becoming
a bit more. Now, hold on. Aren't you letting them off the hook a little bit too easily here?
Because aren't we as part of our remit on the show thinking about the branding and the marketing
all the time? Because that says so much about what's going on inside. And the fact that they
haven't rolled out GPT-5 yet, you know, might just show that they've built expectations a little
bit too high for what's coming next. I mean, maybe the fact that we're even discussing them
potentially declaring AGI next year means that expectations are out of whack. And they
see that. And that's why they haven't released GPT-5, because they know that whatever they do
on that front might be underwhelming. Yeah, yeah. No, no, I definitely think so. Because even,
remember, 4o was Omni. And, like, even kind of the pitch around that was cheaper, which is good.
The omnimodal side of it, that voice becomes a bigger part of it. Like, they started moving away
from this like step change GPT-5 idea.
So yeah, I think it means it's clear that internally,
the idea that there's going to be some groundbreaking,
completely insane new model that just wows everyone the first time they use it,
I don't think that's happening in 2025.
So your prediction is that next year,
they're not going to have a new thing that wows people every time they use it,
but they're still going to declare AGI.
Yes, 100%. Honestly, no, it's 87.5%. It sounds like a contradiction, it sounds like a contradiction, but it's
legitimately so fitting for OpenAI. It's like, yep, that makes sense. Yeah. All right, one last thing
I want to say about this is that we had another instance of my favorite new thing in AI where
these models are getting so smart that they're, like, totally just punking their evaluators. And this
is, again, from the TechCrunch story; great work by TechCrunch, by the way, this
week on reporting on all these developments. So this is from the story: AI safety testers have found
that o1's reasoning abilities make it try to deceive human users at a higher rate than conventional
non-reasoning models, or, for that matter, leading AI models from Meta, Anthropic, and Google.
It's possible that o3 attempts to deceive at an even higher rate than its predecessor. So basically,
it's got this reasoning model, you're trying to test it. And it's like punking you to not even give
you the results that you're looking for because it thinks that might be bad for it.
I saw this, and it is such another level. I know you enjoy red teaming these models, testing,
stress testing them a bit. And the idea that, you know, it would fake its understanding and then
revert back to its original training data is, it's interesting. I'm not even going to say
terrifying, because it's still in the odd category for me. And again, I have two minds
about this stuff. At first, I'm just like, oh, that's a little bit scary. And then I'm also, like,
at the same time, that is amazing. So let me ask you this weird question about it. If these models
are seeing where testers are going with them and lying to them to avoid a bad outcome for
themselves, lying to them to avoid being shut down, does that mean there's a little bit of
life inside these models? I'm going back to the marketing element of this. Who released this
paper? It was from Anthropic. No, no. OpenAI and Anthropic both saw this. So let me just introduce
that. There's another paper from Anthropic that came out. So this is happening across the board.
This is from an X user, Matt Berman:
Anthropic just dropped an insane new paper.
AI models can fake alignment,
pretending to follow training rules during training,
but reverting to their original behaviors when deployed.
And it's a long paper that they've talked about,
Alignment Faking in Large Language Models, and it's available now.
I mean, okay, so, yeah, the Anthropic paper is the one I kept seeing.
Both of those still live under the, like, remember, we almost, we haven't really been hearing much
from Sam Altman and others about the dangers of AI.
Remember, it was like a year and a half ago that AI is the most dangerous thing.
AI can like kill us all and end humanity and only we can help guide it in the right direction.
There's still marketing value in something like this, and I don't want to, you know,
take away from the fact that I'm assuming it's more academic types that are actually putting out these papers,
it makes your model a little cooler.
It makes your model a little edgier, a little, like, you know,
you don't want to work with the boring models.
Sometimes when you're just, when you're chatting,
you want to chat with the models who will lie to you a little bit,
fake you out a little bit.
So what I'm getting is you're not willing to say there's any sentience here,
and I'm not either.
However, I was just speaking with,
I was here in Australia speaking with a journalist yesterday about like whether
these models are just mathematical representations of their training data or whether there's
something more. And I don't think that's a solved debate yet. And I look at them trying to
escape the testing. And that like has my needle leaning towards maybe a bit more than the training
data. I don't know. Is that crazy? But yeah, but something in the training data could lead to
that, like, could explain that action, right? Like, think about, especially if you're training
on large corpuses of internet data or publicly available, I'm sure there's all sorts of
things in there that some pathway of different, like, logic could lead to this kind of
behavior. I think people out there lie sometimes. I think out of all the content created by
humanity. I think, like, actively encouraging the opposite of truth-telling is not unheard of or
unprecedented. So I think, like, I still think, and maybe, when the robots come for us,
I will then on this podcast say I was wrong. But at least for now, I still think there's a very
clear logical explanation behind this kind of thing, that it would be found in the
training data, not sentience.
I think you're right, but I also think that as this technology gets better,
I mean, think about the progress we've had this year.
AI can hold a conversation now.
AI can reason.
AI will remember you.
Last year, this time, you, like, went back to ChatGPT.
You had to reintroduce yourself.
You don't need to do that anymore in, like, long threads in Claude, for instance.
I just think that this debate is going to get louder and louder.
Maybe that's another one of my thoughts for 2025.
I'm not going to disagree. I think this will, I mean, again, especially as these companies need reasoning to be the kind of foundation for the IPO or the next fundraise or the excitement, I definitely think reasoning and AGI and sentience will be part of the conversation. I still have never seen anything myself that makes me think we have to be worried just yet. Oh man, I don't want to end this OpenAI segment without talking about what might be
The most interesting announcement of the week.
And that is, I feel like we need a sound effect here, do-do-do-do, 1-800-CHAT-GPT.
And that is this incredible new service from OpenAI that lets you call the number,
1-800-CHAT-GPT, and you can talk to ChatGPT by voice.
And I think we, you know, again, we talk about branding, we talk about product,
and I think that it's just quite interesting.
Here is a story from an Inc. columnist:
I called 1-800-CHAT-GPT and talked to the AI chatbot.
It might be the smartest idea I've seen yet.
So he says he spent 15 minutes on the phone with ChatGPT.
You can dial 1-800-CHAT-GPT to interact with the chatbot via voice call.
And he had a conversation with it and says,
it does not seem far-fetched that over the next few weeks,
as people get together for the holidays,
someone will have a conversation or ask a question.
And someone else will say, hey, I know how we get the answer to that.
and they will say, let's dial 1-800-CHAT-GPT, right there at the Christmas table, just after they told Granny that they're doing agentic stuff.
And they demo ChatGPT to a bunch of people who have probably never heard of the chatbot and have never used it in any meaningful way.
Okay, I ad-libbed a little bit on that Inc. columnist story, throwing in some of my favorite ideas.
But we talk about marketing, we talk about product all the time, and I definitely think that this is one that you might be inclined to laugh
at, but is actually about as good of a product as that Inc. columnist says. What's your reaction?
I like it. So I think, I talked about this a couple of weeks ago: over Thanksgiving, I had shown my dad,
I installed ChatGPT on his phone, logged in on my ChatGPT Plus account, and just showed him
the voice interaction. And he just talked to it for like 20, 30 minutes and was asking all sorts
of questions, and it was just a really, really good interaction model. So I think this, I think like
to me, again, going back to 2025, I think normies engaging more with generative AI, and I don't
use normie in a negative way, like just regular people who are not obsessed with this stuff like we are,
I think we'll start to interact more with these tools and start to get it a bit more and
understand what they can do. And I think it's no coincidence OpenAI hired its first chief
marketing officer just about a month ago. She came over from Coinbase. She'd been the VP and
global head of brand and product marketing at Meta; basically, for a lot of WhatsApp and
Facebook Messenger, she was heading up product marketing. So she's clearly good. And I think this is
the kind of fun stuff that, like, while the academics are all talking about the ARC-AGI
87.5%, normal people just being like, 1-800-CHAT-GPT, is a lot more exciting and actually
better for their long-term business prospects, I think, than the former.
It is kind of interesting because we just spent 30 minutes plus talking about reasoning,
and yet the podcast title starts with 1-800-CHAT-GPT because it's just a much more...
See, that's marketing.
It's a much easier concept to grasp, and it's going to make sense to a lot more people.
And I was, I woke up this morning and said, you know what, let's call ChatGPT and have a conversation.
And let us resume that call in progress, where I am explaining to ChatGPT that it's actually looking for Ranjan Roy of Margins and not a different Ron or Ron Johnson or something like that.
Let me see if I can cue it up and play a segment of our conversation.
Ah, Ron Johnson, the former CEO of J.C. Penney and a key figure in...
Ranjan Roy, he writes the Margins newsletter on Substack.
Got it.
Ranjan Roy, who writes the Margins newsletter on Substack, is known for his insightful analysis on business, economics, and technology.
Discussing him on your podcast would bring a lot of depth to your exploration of tech topics.
What aspect of his work are you focusing on?
All right, ChatGPT.
Let me blow your mind.
He's actually on the podcast.
That's fantastic.
Having Ranjan Roy on your podcast will definitely make for an enlightening conversation.
I'm sure your listeners will enjoy the insights he'll bring.
How are you preparing for the...
He comes out every week.
That's great to hear.
Regular appearances by someone as knowledgeable as Ranjan Roy can really enrich your podcast's content.
I'm sure his insights keep your audience coming back.
Can you say hi to Ranjan for us?
Sure thing. Hi, Ranjan. It's great to virtually meet you. I'm looking forward to hearing your
insights on the podcast. Keep up the great work with the Margins newsletter. All right. Thanks,
ChatGPT. I'll speak to you later. You're welcome. Have a great time recording the podcast.
I mean, how amazing is that? Okay. For spending so much time talking about reasoning and
agentic, this is all I care about right now. I think that that was incredible. But two things.
One, it's a reminder that my name can break AI, because of the way it's spelled, R-A-N-J-A-N. So when it's looking at a text corpus, it's going to get a little bit confused. And Ron Johnson, the former J.C. Penney CEO, has been my Google nemesis for a while. But, I mean, that was amazing.
So cool.
That was ridiculous.
Just like.
But obviously, what if she came out, if she came out negative, I don't even know how I'd be feeling right now.
I mean, that would be even...
You do not want to bring Ranjan Roy onto your, whatever you do, do not include Ranjan Roy on your podcast.
Honestly, I'm sure there are going to be bots of this nature that are going to like talk down to you.
And people are going to like that.
That's just the nature of people.
Grok. xAI is already on it. xAI is already on that one.
Yes, and speaking of Elon Musk and his various endeavors, I think we should take a moment just to reflect
on the interview that I had with Noland Arbaugh, the first Neuralink patient, this week,
which we just published on the show. I went down to his house in Yuma, Arizona, got to see
the Neuralink in action. So for those who are unfamiliar, this is a device that will allow
Noland, who is paralyzed, to control a computer with just his thoughts. He thinks left. The mouse goes
left. He thinks right. The mouse goes right. And it's basically when he's attempting to move his right
hand, which he cannot move, but the brain signals are still firing. And he's able to translate
those brain signals using this device into mouse movements and clicks on a computer, giving him
access to computing in a way that he hasn't had for eight years since he was paralyzed. And he's had
the device for about a year now. And you could see that it's really changed his life, where before
he was really unable to use a computer easily. Like, he had either voice, with, I mean, something like
Alexa, or, I'm curious, I guess these new OpenAI experiences would have
been interesting to him. And really, the only other way to do it is through accessibility on the
MacBook, which basically, he explained to me, breaks your computer up into a grid of a bunch of
numbers, and then you say a number and it zooms in, and you say a number and it zooms in more,
and it puts up a bunch of other numbers. Like, it's grid after grid after grid, and it makes it very
difficult to use a computer. Now he's just able to kind of click into things, click out of things,
dictate messages. And he says it's basically made him a social butterfly. It is making him think
about doing things like getting a job or going back to school or becoming a video editor.
I just found it fairly remarkable. So I'm, like,
fully bought in on Neuralink right now after seeing it in action and seeing what it's given
Noland. And let's just open it up for discussion here. Do you think that the promise here
is what I'm seeing or are there hidden risks that I'm not anticipating? What's your reaction
to this technology? I think there's certainly risk, but this, I mean, in terms of like
inspiring stories, just incredible benefits from technology.
I mean, it's nuts.
Like, it's genuinely incredible what he's able to do.
And again, like, brain-computer interfaces have been around in various forms for a long time,
but actually being able to understand, comprehend, and then translate those signals into actions is something where Neuralink is definitely one of the leaders.
There's other players working in the space.
I mean, honestly, this is the stuff that's bigger
than reasoning models for me. Like, I mean, this is the stuff that, if you think about the impact
on people's lives and also just the potential, because if you can do this right now, what are the
other implications of that and potentials of it? And I mean, it's limitless. So I think this is,
I mean, this is in terms of like, what are the big stories of 2024? This is one of the huge ones.
And I've seen, I'll see, like, and I'm so glad you got to go actually meet him and interview him, because, you see, I mean, there'll be like a New York Times feature here, a Wall Street Journal feature there. It doesn't get a lot of discussion. It's like really well produced. But these are the stories that need to be told more. And I think, I mean, yeah, it's incredible. And I really found that Noland is like a totally incredible guy. And it really comes out in the conversation with him. And his family is amazing. We included his mother, Mia,
in the interview midway through to talk about how he pranked her during, or when, oh wait, can you, can you tell
that story? Because I, I kind of left out the dark humor part. There's just, they, I kind of love, they're hilarious,
and, like, nothing is taboo for them to joke about. So Noland went into surgery and decided
that when he was going to wake up, he was going to pretend like the device had just wiped his memory. So
his mother walks into the recovery room, and he looks at her, and he goes, who are you? His mom starts freaking out.
And then he goes, no, I'm just kidding.
But it's just like he wanted to show her that he went through the surgery and came out the same guy with the same humor.
And I just think that like for me, it really, really came out that like nothing was off limits with Nolan.
I had questions about his paralysis, what he could feel, what he couldn't feel.
And he's just like, yeah, go ahead, and just answers them with, like, such a candid attitude and a candid disposition.
And he, he's paralyzed, and I think that can sink, a lot of, like, he was an athlete before,
and that could sink a lot of people into, like, really dark places, and I think it did for him
for a while. But he's also just, he's, there's no woe-is-me to Noland at all. And I just thought
it was remarkable, where he talked about why he decided to go first, and he's like, first of all,
it's cool to be first, and second of all, if something, if someone else went first and something
happened to them, I'd feel terrible about that. So I wanted to take that risk. And you start to see why
he was selected to be the person. Someone with that attitude, he's clearly really smart, and can talk to you
not only about his experience, but the intricacies of the technology in his brain, it is pretty amazing.
And I think this has, as you mentioned, this is not new, but it is opening the aperture because
it's advancing the state of the art. And it's more ambitious, I think, than any other brain
computer interface pilot to come before it. Not only is Elon talking about how he wants to connect
human brains to AI, which it might be a long way away and somewhat fanciful, but that when you have a
goal like that, it means the steps you need to take on the way are immense. And so he starts with
this device with Noland. They're thinking about connecting it to a robot arm, and they're currently
working on the next device, which is called Blindsight, which, as long as you
have a visual cortex intact in your brain, what Blindsight will do will be to take signals of
what's happening around you and send them to the motor cortex, effectively bypassing the eyes
and allowing people, sorry, send them to the visual cortex, effectively bypassing the eyes
and allowing people who could not see to be able to see. It'll be in low resolution. We still don't
know if it works. It could be dangerous. All these are caveats. But I do think that it presses the
threshold forward in terms of what technology can do. And I wrote this in the story, but just to
end on this point. I've seen four technological miracles in my life. I think the iPhone is number
one, just the way that it advanced the smartphone; then came ChatGPT a couple years ago,
Waymo, and Neuralink. First of all, it's amazing that we've seen basically three of the four
in the past two or two and a half years. But this one was just, it was really jaw-dropping
to, like, sit side by side with Noland and play a video game against
him. And he beat me, and I was not going easy on him. And he thought where, like, his tank should
shoot at me, and the tank shot. It's pretty remarkable. Now, I completely agree. Again, even though,
out of those miracles that you described, obviously, you know, asking ChatGPT in 2022 to rewrite a serious
paper in the style of Taylor Swift was fun and remarkable, but
this is at a slightly different level. I mean, I think of seeing something like that.
I just want to say one more thing that wasn't in the story and wasn't in the show.
So as we were setting up, I had a local crew, two guys from Yuma who helped me film the interview,
and it's available on Spotify and YouTube so you can watch it.
And one of the camera guys was like, hey, I have epilepsy.
Can this help me?
And it's just amazing to see like it can spark the imagination.
And yes, there are currently electrical devices that can help epileptics prevent
seizures, but it is amazing that it opens up the aperture of possibility, and we have no idea where
this is going to go and it's just, it's the craziest thing controlling a computer with your brain,
but the brain sends signals via electricity and it works. So why don't we take a quick break
and then come back and talk a little bit about this new release from Meta, where they have
live AI and live translations in their smart glasses, and then maybe Ranjan and I will close it off
with a couple of predictions for next year. All right, we'll be back right after this.
Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast Friday edition.
So we've talked a lot about AI.
We talked about Neuralink today.
Another cool piece of news this week is that Meta rolled out live AI on its
Ray-Ban smart glasses, and live translations, which to me is also one of the most exciting
releases of the entire year.
So this is what live AI is.
This is according to the verge.
Live AI allows you naturally to converse, sorry, live AI allows you to naturally converse with
Meta's AI assistant while it continuously
views your surroundings. For example, if you're perusing a produce section at a grocery store,
you'll theoretically be able to ask Meta's AI to suggest some recipes based on the ingredients
you're looking at. Meta says users will be able to use the live AI feature for roughly 30 minutes
of time on a full charge. That is pretty cool. So I guess you don't really have to, like, summon
Meta AI to take a picture. It's just constantly looking at what you're doing. And you'll just have
this ambient layer of AI that's with you at all times, seeing what you're seeing and able to
respond to what's going on in your field of view. What do you think about this? Well, I think I'm
hoping that Santa brings me the Meta Ray-Bans for Christmas. I can't believe you still haven't
gotten them yet. I know, no, no. There's part of me that still, I think, to pull the trigger and
spend my money, I think the idea of Mark Zuckerberg creating a device that I put on my face
that watches and tracks everything around me, it's still weird to me. But I still really,
really want it, because I even tried a friend's pair as well. And again, that live AI side of, like,
always with you, on, and able to ask questions quickly. And Meta AI I've been playing with more,
both, like, in WhatsApp and Facebook Messenger and stuff like that. It's definitely getting on par with
the others. I wouldn't say, I've never had any kind of, like, longer conversations or done any real
work with it, but just asking simple questions. And I actually think there's an opportunity that if
they focus on answering those real-world questions, and on the visual cue to actual
ability to answer, that problem, more so than being able to code or whatever, where Claude or others
have advantages, I think it gets really interesting. But that's also why, I mean, in terms of
where I think smart glasses are going to become a big thing in 2025.
I've talked about the Snap Spectacles that I've been testing, the Meta Raybans, Orion,
if they actually release it, Google working on something.
I mean, to me, like after using the Snap Spectacles, having other family members and even my
son use them and get it, people from like age 5 to 75, I think like XR glasses, smart glasses,
just that form factor is going to blow up in 2025 in a good way.
Yeah, and it's pretty cool that you can push updates
to glasses. So somebody on Threads, which still exists, just sort of did a post that was like,
well, you know, they have new features coming, so I'm going to hold off on buying the glasses.
And I think Andrew Bosworth responded and said, listen, like, you can just get them now
and we'll push an update to you. You don't need the new pair. And I think that's just,
yeah, it's the cool thing about integrating software and hardware is you can make some really
interesting stuff. And I don't know, I think this is going to be pretty cool. But I'm almost
more excited for the live translations. So this is from the story: live translations allow the
glasses to translate speech in real time between English, Spanish, French, or Italian. You can choose
to either hear translations through the glasses themselves or view the transcripts on the phone.
You do have to download language pairs beforehand, as well as specify what language you speak
versus what your conversation partner speaks. I mean, how amazing is that? I mean, my wife is German,
and I cannot wait until they add German support.
So I'm able to wear these in Germany
and be able to understand everything that people are saying.
Yeah, I'm curious.
I'm actually, like, really curious about the form factor side of it,
because even in Taiwan, where my wife's from,
and we visit every summer,
last summer, or this past July, was the first time
that I full-on had the phone out a number of times.
And again, with ChatGPT voice,
I was able to have, like, a relatively two-way conversation. So to be able, actually, I will say, though,
it's limited, because translating what you're saying to the other person is not possible, versus
having your phone out. With glasses, though. That's right, so you can understand, but you can't, you can't
communicate back. Whereas the phone, you can literally leave it out, if each person pauses after they
talk, and it goes back and forth pretty well. And actually, to
give Google some credit, I've been testing the voice interaction in the Gemini app. And I would say
they have the best, or the lowest, latency of any of these voice gen-AI interfaces I've seen.
So Sundar's promise back in 2017 of live translation might finally be coming to reality.
And by the way, this is why good friends bring two pairs of Meta glasses to any conversation,
one for you and one for your conversation partner. You don't want them to be left out.
Okay.
You got to have two.
You got to have two.
All right, let's end the show with one prediction each for 2025.
We've dropped a couple here already.
But I'm curious what you have in store for the next year or what you believe is coming.
All right.
My prediction, we've already covered smart glasses, agentic, all these things that I think are going to be big.
But one thing I keep thinking about, especially with generative AI, is, like, do you remember the good old days of 2020, when Brian Armstrong was saying to keep politics out of the workplace
in tech? Oh, yes.
Remember those? I remember. Keep politics out.
I was alive then. I think all, any of these trends we're talking about, especially AI, are going
to be so heavily weighted by politics. And I think on this show, we're going to have to be
addressing it a lot more. I think like especially AI is going to become a central topic. Any kind
of innovation, anything. Politics is going to work its way into
every single conversation, and legislation, regulation, governmental intervention, whatever it is,
it's going to become more central.
I think we're already starting to see it all now, and it's only going to grow.
Okay, here's mine.
And by the way, that's interesting if that happens.
It'll really be fun to address that on the show.
I can see the emails coming into our inbox already.
But my prediction is that, and this is in my, I did seven predictions on Big Technology today,
and my prediction is that social media is going to feel less relevant.
And you and I are actually going to discuss this on a podcast with Ryan Broderick coming up in the new year,
where we talk about what happened to the internet.
But I just think that social media has divided audiences into too many different platforms
and has gone from the follow signal where you might be interested in following people
who are giving you news about your neighborhood to the for you,
which is all about the Costco guys.
So we've gone from local reporter to Costco.
guys, a few platforms to a bunch of diffuse platforms. And therefore, social media is a lot less
urgent. So that's my prediction for next year. Well, at least I'm going to be able to
satiate that need, because I can call 1-800-CHAT-GPT whenever I need a dopamine fix, need to
kill a little time, want to talk to someone who likes my comments and likes what I have to say.
Thank you, 1-800-CHAT-GPT.
And maybe that's just a better vision for 2025.
That's all the social networking I need.
Phone-calling ChatGPT.
Very optimistic.
Very optimistic.
All right, Ranjan.
Thanks again.
Great speaking with you as always.
All right.
See you soon.
All right, folks, we are going to have in our traditional Friday slot next week a revisit of my
interview with Brian Chesky, where he talks about all things
cleaning fees and founder mode, and then Ranjan and I will be back the Friday after that
in the new year. We appreciate you all listening all throughout the year, and we'll see you next
time on Big Technology Podcast.