Big Technology Podcast - Sam Altman’s Reflections, NVIDIA’s Robotics Play, Zuckerberg’s Moderation
Episode Date: January 11, 2025. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Sam Altman declares the path to AGI is clear 2) Could AGI come before GPT-5? 3) Up next: Superintelligence... 4) Anthropic raising $2 billion 5) NVIDIA says robotics is a multi-trillion opportunity 6) NVIDIA has a personal 'supercomputer' 7) Smarter NPCs are here 8) Meta's AI training copyright issues 9) Zuckerberg's fact check reality check 10) Motives of Zuckerberg's moderation moves 11) TikTok ban might actually happen 12) Alex's visit to China --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Sam Altman says OpenAI now knows how to build AGI.
Anthropic is raising another $2 billion.
NVIDIA looks to a robot future, and Mark Zuckerberg says to hell with fact checkers.
That's coming up on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format.
We have a great show for you today.
We're talking about Sam Altman's new bold statement about AGI.
We're also going to cover the latest NVIDIA news, the latest Anthropic fundraising.
Mark Zuckerberg talking about how content moderation is no longer as important to Facebook as it has been in the past.
And what that all means, whether it's just a play to get in the good graces of the Trump administration.
And of course, a little discussion of my visit to China and where the country stands today and where it's heading.
Joining us as always on Friday is Ranjan Roy of Margins.
Ranjan, great to see you.
What better way to continue our 2025 Thrive than having Sam Altman tell us AGI is here, at least AGI-ish.
I don't know about you, but I've gotten a lot of 2025 Thrive texts over the past week.
So I want to thank you and I want to thank you for introducing that to big technology and my life.
Thank you, Ranjan.
My wife and family, I think, are definitely over me saying 2025 Thrive, but I'm sticking with it.
I'm sticking with it. I will admit I've been saying it as well when someone talks
about how this year is getting off to a bad start.
I'm like, just 2025, thrive.
Don't worry.
'25, thrive.
It's so easy.
Manifest.
All right.
So someone who's manifesting is Sam Altman.
He's talking about AGI.
He wrote a very interesting post called Reflections,
just looking back at the last two years since ChatGPT was released,
basically saying, all right, it's the new year and I've got to look back.
We're going to talk about a few things that he's written,
but to me, the most interesting statement,
one that got the most attention this past week, is
this: "We are now confident we know how to build AGI as we have traditionally understood it. We believe
that, in 2025, we may see the first AI agents join the workforce and materially change the output of
companies. We continue to believe that iteratively putting great tools in the hands of people
leads to great, broadly distributed outcomes." I think that's a way of him saying that they're going
to declare AGI this year. It got a lot of pushback in the popular press.
Yes. I'm curious what you thought seeing that from Sam.
I mean, it got a lot of pushback from me as well, in my mind, because it was just ridiculous.
It's, again, at least I think we were correct that at a certain point, he's just going to say AGI, he's going to move the goalposts to define it however he wants.
To say that AGI is the first AI agents joining the workforce, in quotes, to me, makes absolutely no sense.
because agentic AI, and we have discussed this, we can definitely dig more into it,
but it means something completely different, at least in my mind, and at least in the way
most of the industry defines it, than how we've thought about artificial general intelligence
for years. Because, again, AI agents are simply using large language models to take some
autonomous decisions in some kind of existing workflow or business process. It's not that
revolutionary or complicated. It's just some kind of like taking something that used to be
rules-based and letting an LLM try to apply a bit of logic or reason to it. That is not, at least in my
mind, the robots are taking over or we have developed some kind of superintelligence.
If he starts saying superintelligence is now different than AGI, maybe I'm okay with it.
But overall, this read like "we need to raise more funds" to me.
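The "agentic" pattern described above, letting an LLM make a decision that used to be rules-based inside an existing workflow, can be sketched roughly like this. This is a hypothetical illustration: `call_llm` is a stub standing in for a real model API call, and the ticket-routing scenario is invented.

```python
# Hypothetical sketch of the "agentic" pattern discussed above:
# a step that used to be rules-based is handed to a model call.

def route_ticket_rules(text: str) -> str:
    # The old rules-based workflow: brittle keyword matching.
    if "refund" in text.lower():
        return "billing"
    if "crash" in text.lower():
        return "engineering"
    return "general"

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call; a production system
    # would query an LLM API here and parse its reply.
    text = prompt.lower()
    if "money back" in text or "refund" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "engineering"
    return "general"

def route_ticket_agentic(text: str) -> str:
    # The "agentic" version: let the model decide the routing step.
    return call_llm(f"Route this support ticket: {text}")

print(route_ticket_rules("I want my money back"))    # rules miss the intent
print(route_ticket_agentic("I want my money back"))  # model catches it
```

The point is that the LLM applies a bit of judgment inside an existing business process, which is useful, but a long way from the traditional meaning of artificial general intelligence.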
Well, okay, so I think there's something that I agree with in what you said
and something I disagree with in what you said.
I agree that he's moving the goalposts.
I don't necessarily think that he's saying that agents are going to be AGI.
It might have just been that he's like stringing these statements together.
But he did talk to Bloomberg about what he thinks AGI is.
And this is the way that he defined it.
And to me, it seemed like kind of a lower
bar. He said, the rough way I try to think about it is when an AI system can do what very skilled
humans in important jobs can do. I'd call that AGI. He says, then there's a bunch of follow-on
questions like, well, is it a full part of a job or only part of it? Sorry, is it a full job or
only part of it? Can it start as a computer program and decide it wants to become a doctor? Can it do
what the best people in the field do or the 98th percentile? How autonomous is it? He says,
I don't have deep, precise answers there yet, but if you could hire an AI as a remote employee
to be a great software engineer, a lot of people would say, okay, that's AGI-ish.
So to me, it seems like he's saying, okay, basically we're, if we're not quite there yet,
we're almost there.
A-G-I-ish, that is the term of the week, of the month, of the year, I think.
Like, to be able to say that it's kind of there, it's kind of, you know,
what we've always promised, but we're just going to check off that box. I think that kind of captures
this perfectly. Again, like replacing specific functions in the workplace, we're already there.
That's what I don't understand. Like, GPT-4o is good. Gemini 2.0 Flash Experimental is good.
Most of these other foundation models can do a lot of what people already do.
So, you know, this idea, can it start as a computer program and decide it wants to become a doctor?
It's really interesting how he sprinkles in these kind of fantastical statements within more just kind of monotonous, like mundane things.
Like, yes, it can do a human's job slightly better than them or a skilled human's job or it could suddenly decide it wants to become a doctor.
I think this is, it's so tough to me that, I don't know, how he's approaching this.
It was a nice post.
I like this idea of like taking some time to reflect.
I think it was, honestly, a genuine, well-intentioned post, that it's been two incredibly
crazy years and I want to look back on it.
But still, just this whole, again, it's very Sam-ish in the way he's approaching this.
This is from AI skeptic Gary Marcus.
At a conference yesterday, someone with very good knowledge of OpenAI said something fascinating.
More for what was not said than what was said.
What was said is that we should expect to see GPT-4.5 soon.
What was not said by the well-informed source was that we should expect to see GPT-5 anytime soon.
Is it possible that OpenAI says we've reached AGI before GPT-5 comes out?
I mean, how?
Yes, 100%.
A thousand percent.
He's setting it up for that.
I think, again, like, we had somewhat said it in jest about him just kind of saying, okay, AGI's here, Microsoft contracts are null and void.
I think they're completely setting it up for that.
I think they recognize that to release GPT-5, if it's not some massive step change in terms of, like, ability would be a huge problem.
So I completely believe they're going to say AGI.
I'm going to say by the spring time, by spring, maybe April, May.
As we leave the cold months, we will have AGI.
That sounds right to me, maybe sooner.
And then I think the discussion is really going to turn towards superintelligence.
So I think the past two years, people have been talking about AGI.
I think that they're going to declare, OpenAI is going to declare, we've reached it, or they've reached it.
And then superintelligence is going to become the new buzzword. And Altman also previews this. He says: "We are beginning to turn our aim beyond AGI to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance
and prosperity." This sounds like science fiction right now, and somewhat crazy to even talk about
it. That's all right. We've been there before, and we are okay being there again.
Abundance and prosperity. Our super intelligent future is just one of, you know, greatness for all.
I think we're seeing it happening in real time right now. This distinction between AGI and super
intelligence, which were conflated, and now lowering the bar of AGI, and then I think that
fantastical futuristic vision will be labeled superintelligence going forward. And it's savvy, it's
smart. It'll help on the fundraising side. It'll help keep, uh, keep just kind of the general overall
narrative and dream around AI in the overall media. But I think we're very clearly seeing it happen
right now.
Ranjan's like, just give me an intelligent dashboard.
And Sam's like, we're building superintelligence.
Yeah, well, no, I mean, okay, so I said it last week.
I'm going to say it again.
My theme for 2025 Thrive is there is such a disconnect between what the industry is saying
and what actual customers want.
Because the idea that any Fortune 500 business that has infinite complexity and
systems and not clean data and relatively unstructured data is going to allow autonomous processes
into their actual business anytime soon and allow it to be completely autonomous is a pipe dream in my
mind. I think there's a lot of companies and a lot of steps along the way that will be able to
take existing LLM technology and start doing incredible things. But I think like there's still such a
disconnect. Everyone I talk to, like, normal, even business technology people are not asking for
totally autonomous, agentic AI. They're just asking for stuff that makes life a little bit easier.
So I think, like, there is one thing in the reflections I also liked: he did mention that they
were a research house who kind of, like, almost backed into a business, which is something we've
talked about a lot. And it's even
more clear to me that OpenAI is still a research house that has a business tacked on to it.
And I don't think that's the company that's going to win.
Interesting. Yeah. Even in the Bloomberg conversation, he talks about how they've had to
protect research from like the standard wants of Silicon Valley style company and they have them
in a separate location. So I think that's true. And one more thing about this before we move on.
So I want to just go back to this statement. We are confident we know how to build AGI
as we have traditionally understood that.
What do you think that path is?
Is it just to continue scaling this stuff?
Or, I mean, it's kind of interesting that he winks at it,
but he doesn't exactly say what to do.
This is, again, where I think they've been incredibly savvy,
but again, the ARC-AGI test
that supposedly gave an 87.5% score
to the latest o3 model
that's still private but being tested, that gave them enough credibility in the conversation
to start saying we are close to AGI.
But it's one of those things that even within the industry, a lot of people say that this
is not actually an accurate test for kind of the traditional understanding of what AGI
would be.
But they have been able to say, look at the leaps and bounds of development we've done over
the last just two years based on this one test and this one benchmark.
So, we're there, we
know how to do it, it's done, we're good, check. And that's amazing. We'll see if this continues.
One of the greatest salesmen of all time, I'll give him that. I don't want to spend too much more
time on his reflections, but I did think that, as you mentioned, it was like a pretty good post,
talking a little bit about what's happened and the messy parts of it. And he talked about
his firing in this post, which I don't think is really worth rehashing, but I think he really
just sums it up at the beginning: "Building a company at such high velocity with so little
training is a messy process. It's often two steps forward, one step back, and sometimes
one step forward and two steps back.
Mistakes get corrected as you go along,
but there aren't really any handbooks or guideposts
when you're doing original work.
Moving at speed in uncharted waters
is an incredible experience,
but it's also immensely stressful
for all the players, conflicts and misunderstandings abound.
I mean, maybe this is also just clever marketing from Sam,
but even if it is, I just don't think you see this
from high profile CEOs enough,
like being truly honest about what it's like to build a company
and not just being like, you know,
all systems go and we are, like, taking over the world. Yeah, which is why I liked the post. I thought
it was genuinely, I thought it was reflective. So being called reflections was accurate. I thought
he looked at it and it is. This is why I don't want to take away the technological marvel that
they have delivered and kind of unleashed into the world over the last two years. It is something
incredible. So to stop and take a moment to recognize that, that's the part I like. I wish we could
just take a little more time to reflect on that and use the technology we have currently, rather than
having to switch instantly into what's coming next. So speaking of which, Anthropic is back
raising more funds. They just raised a bunch last year, I think it was $4 billion last year.
And they're going to try to raise another $2 billion. This is from the Wall Street Journal.
Anthropic is in advance talks to raise $2 billion in a deal that would value it at $60 billion, more than triple its valuation from a year ago.
It's being led by Lightspeed Venture Partners, and it would make Anthropic the fifth most valuable U.S. startup after SpaceX, OpenAI, Stripe, and Databricks.
This is from the journal story.
Investors are excited about the potential of generative AI to transform how people work and live, and are largely unconcerned that most AI startups are losing money because of the high cost
of the technology and intense competition.
I mean, sort of an interesting statement in there about, like, the concerns about
being profitable from investors on this.
I guess that's how it always works, but still, like, just to have this blanket statement
that investors are unconcerned seems wrong to me.
That being said, Anthropic hasn't really made a lot of noise recently.
I think it's just been quietly building, and we don't see these posts from their CEO,
Dario Amodei, about, you know, reaching AGI the way that you see it from Sam Altman.
He does work a little bit more quietly than the counterparts at Open AI.
But I'm curious what you think this means that Anthropic is going to raise another $2 billion can be valued so highly.
It does seem like that amount today is something that an AI research house will blow through in about a quarter.
But it is significant nonetheless.
Well, the $60 billion is the part
that's almost terrifying to me, because how do you, like, work your way into that valuation?
How fast do they have to grow their revenue when you're tripling your valuation from
just a year ago?
I think this is where, I will say, like, what you said around how Anthropic is basically
doing this quietly, I respect it. In a way, it's almost nicely strategic that they're
kind of riding on the coattails, letting
Sam make all the noise and be the hype man of the industry, and then quietly building.
And Claude, other than the usage limits that you hit even as a Pro subscriber, is an incredible
product. I think it's probably my most used generative AI product. So, I think the way
they're approaching it is very smart. I think all of these companies, because what do you do at
$60 billion? Are you IPOing? I mean, when you're trying to put together
an S-1 and public-ready financials for a company that probably is not going to look too good,
you're not IPOing anytime soon. I think, are you, do you just raise more money at a higher
valuation? Do you hope for an acquisition at that scale is getting more and more difficult?
Remember, when Anthropic was at $8 to $10 billion, there was plenty of conversation around a lot of buyers;
at $60 billion, that number starts to dwindle a little bit. Yeah. And I think that there's
going to have to be a couple of really large flameouts, given the amount of money that's been spent
and raised and the amount of money you need to make to justify that. And I'm starting to worry a
little bit about Anthropic. And I wasn't worried last year, but this year I am. I mean,
you think about OpenAI. They, as we've said after looking at their financials,
are planning to make most of their money from ChatGPT. ChatGPT, as Sam Altman noted in his
post, hovered at 100 million users for a while and didn't seem like it was going to increase
that much. And then all of a sudden, it triples. It goes from 100 million to 300 million users
and has all this new interesting functionality, including the voice stuff. And I think that
that's important. But Claude is not catching on with consumers in the same way, despite being
what I would argue is a better product outside of the fact that it lacks the voice capabilities.
And so where does it go from here?
Is it able to build a $60 billion business on API alone,
despite the fact that some of its best features are the fact that it can talk to you
in a more human-like way and a warmer way,
in a way that people have built these relationships with Claude,
as I spoke about with Casey Newton a couple weeks ago?
So I do worry a little bit about anthropic.
Is that unfounded?
While I said I worry about the valuation,
I actually am not worried about that breakdown.
Again, so revenue estimates,
OpenAI is estimated to get around 27% of its revenue from the API slash enterprise side,
and 73% from the ChatGPT side, which would include ChatGPT Plus,
whereas Anthropic, 85% of its revenue is on the API side,
and only 15% is you and me paying for Claude Pro.
So I think they are taking the bet that people building on Claude,
building on their foundation models is where they're going to make money.
And I actually think that's a smarter bet because I do think at a certain point,
the consumer side of this gets more and more commoditized.
I think Gemini and Google and kind of entrenched Microsoft and co-pilot on the,
what is my chat bot that will help me and be up on my screen throughout the day for the average
consumer, that's going to get kind of commoditized away into.
existing products into existing ecosystems like Google or Microsoft.
So I think Anthropic actually is taking the smarter bet versus Open AI in this case.
I would not be shocked if Anthropic comes out with a $1,000-a-month unlimited-use version of
Claude, the same way that OpenAI came out with its $200-a-month tier.
Make it a hundred, I'm in; 150, maybe, because I hit that Claude usage limit way too much and I get so
mad. Right, but Altman said that they're losing money on the ChatGPT unlimited subscription even at
$200 a month. That was such an interesting thing to say. Again, ChatGPT, what is it, Pro or Plus
or whatever, the $200, whatever the branding they put on it, when you're saying on a like consumer
facing product, and again, if you're paying 200 bucks, obviously it's still going to be more business
usage, but still, like on an individual per seat product, you're losing money at $200 a
month. And to say that out loud to investors, when you're going to be asking them for more
money, it's baffling to me. But it almost, again, it still adds to this mystique that
the amount of compute and power required for these services, there's something magical that
none of you or us understand. That's the only reason I can see someone ever saying that,
when you're running a business that's trying to raise money.
I think one of the things he might have been trying to touch on
is the fact that this is so valuable to some people,
and maybe that means there's room to raise the price even more.
That's why I think Anthropic is going to quietly take notes
and release the $1,000 a month version of Claude.
But we'll see.
What would you call it?
We got Claude Pro.
Claude 1000, man.
Claude 1000.
And then Claude 2000 coming soon.
For super special usage.
Yeah.
Okay.
So speaking of major opportunities, we had NVIDIA at CES this week talking about robotics and calling it a multi-trillion dollar opportunity.
And Jensen Huang was out at CES, which is like the Super Bowl for NVIDIA.
And he's talking about how robotics is going to be the next stage for the company's growth.
According to the Financial Times,
he announced a new range of products and partnerships in the physical AI space, including AI models for humanoid robots and a major deal with Toyota
to use NVIDIA's self-driving car technology, during a keynote speech in Vegas.
So to me, I think this is important because we've been talking about how AI can hit a wall
with LLMs and what comes next.
And one of the potential things that comes next is real world understanding.
So instead of just trying to understand the world through text and maybe video,
it's AI models getting out there in the world and understanding how they interact with the world
that we live in.
And that can be done through humanoid robotics, which, you know, can understand and perceive and plan, you know, effectively a way through the world in a way that you just simply cannot do with a text model.
And I found it very interesting, in this moment where we're talking about AI finally hitting AGI, or maybe finally hitting a wall with scaling outside of that, that NVIDIA potentially has this new way forward in robotics.
What did you think about the announcement?
Man, can we appreciate what we have today for a moment? Again, now even Jensen is going and having to go to the next big thing. And I get it, you've got to go to the next big thing. But to me, I do think if someone's going to crack it, I think NVIDIA would potentially be the one. I also have to say, I love NVIDIA's branding on this. We just came up with Claude 1000; maybe it's good, maybe it's not.
But so they have, they're going to release a new computer called Jetson Thor.
Jetson Thor.
I mean, just like a mishmash of futuristic things.
And then the robots will be powered by GR00T, which stands for Generalist Robot 00 Technology.
I mean, basically Jetsons and Groot, just pulling in all these kind of, like, sci-fi references, and Thor as well.
And they're very, so I got to give it to NVIDIA for just making it a little fun, making it exciting, making it memorable.
I do think, yes, in the medium term, this kind of idea of like robotic progress, real world AI, physical AI, how you get things to interact in the physical world, it's going to be huge.
Self-driving is kind of the first real manifestation of that.
That's real. We've both ridden in Waymo. It's very real. What are the other applications of that? Again, like in warehouses, Amazon for years has been able to use robots. There's lots of, you know, examples of, like, really special-purpose robots already doing things in lots of situations. So I think it's a massive opportunity. I just wish it was not what we were talking about right now.
We could all just, you know, try to build our first agent or two.
Right.
But Nvidia, like, the whole reason why Nvidia has gotten to the place it is is because
it has been thinking two steps forward and everybody else has been stuck.
Yeah, but they never said it before.
Like when their genius was we have a good video game graphics card.
Oh, by the way, we're not going to tell you.
And suddenly you're going to all realize that the entire AI revolution is going to be
powered by something that only gamers would get excited about before.
Like, they didn't go out and just say it.
They just did it.
So I think that that's the part to me that feels different now than before.
You know, what's interesting for me was to hear Jensen talk about it as a multi-trillion
dollar opportunity as opposed to something scientific, right?
To me, is that like, is this just a play to the stock market?
Like, do you think that's what's happening?
I mean, reading about it in the Financial Times, pretty interesting.
Yes, 100%. But also, like we're talking about how OpenAI is a research house with a business tacked on, Jensen Huang is a business person, I mean, at his core. So he knows what he's doing. So I think on that side, and also, I think he's very savvy about signaling on this kind of thing, introducing this into the conversation. But I do have the confidence
that there's a lot of real stuff happening, if he is saying it, that really could make this a reality.
Yeah. I'm pumped about it. I mean, we're going to start to see so much more advances here,
especially because with robotics, you can now just like run simulations with robots and then
do that in like the virtual world and then bring that to the physical world. And we're starting to see
some pretty cool stuff with robotics. You know, speaking of the present, there's also some
interesting news that NVIDIA is also releasing a personal AI supercomputer with its latest and
most powerful AI chip, Blackwell, which is going to allow researchers and students to run
multi-billion-parameter AI models locally rather than through the cloud. This is from the Financial Times.
And it's going to be available at the initial price tag of three grand, which, I
mean, the Blackwell chips, what do they usually go for, $40,000? So the fact that you can get your
own personal supercomputer for $3,000 and run your own experiments on it, I think, is pretty cool.
Yeah, I actually, I thought this was more interesting than the humanoid robot stuff,
because one, I saw it and I kind of want it, and I don't even know why I want it.
So it looks kind of like a souped up Mac mini and just somehow it'll just empower you to do
amazing things. But also, NVIDIA, to me, $3,000 is still a consumer-ish product.
They say it's for students and researchers, but like, I mean, it's really, I mean,
the Vision Pro was 3,000 and that's a pure consumer product.
Like, so to me, this could be quietly, I mean, obviously graphics cards were very consumer
focused, but like they might be entering the consumer market again.
Yeah, I did a story recently about how universities are like all out of chips and basically
cannot compete at all.
And to me, this seems like a bit of an antidote to that.
I mean, I think that NVIDIA really saw the need there and decided to go for it.
And I think this is really positive news.
Yeah.
I think anything NVIDIA does, they still have, you know, kind of the magic touch that until they really flop on something, I think you have to just buy into whatever they're selling right now.
Last bit of NVIDIA news that I'm actually personally pretty excited about.
Basically, NVIDIA is building these souped-up
NPCs, or non-player characters, non-playable characters, in video games.
And to me, this has always been like one of the dreams, and I really regret not writing
about it earlier because it's something that's been so obvious in development.
And that is that when you're playing video games, there are these non-playable characters
that basically just kind of stand there and they're just like clearly like dumb video
game stand-ins.
And the promise with AI is they can actually become much more human-like, and that there's
no such thing as an NPC anymore, because every, you know, bot within a video game can be intelligent.
And it looks like NVIDIA is also working to power some of these new characters.
This is according to The Verge: characters that can use AI to perceive, plan, and act like human players.
This is according to an NVIDIA blog post: powered by agentic AI, this new technology will
enable living, dynamic game worlds with companions that comprehend and support
player goals, and enemies that adapt dynamically to player tactics.
The characters are powered by small language models that are
capable of planning at human-like frequencies required for realistic decision-making,
as well as multimodal small language models for vision and audio that allow AI characters
to hear audio cues and perceive their environments. I mean, I'm just thinking about running around
in a video game and realizing that every character there is, like, you know, quote-unquote,
intelligent and has its own personality, and it feels a lot more like the real world.
I think this could be really big for video games.
I think this could be big for virtual reality.
Heck, maybe it even brings back the metaverse, where it's populated with human-like
AIs that sit side by side with actual human players.
I think this was pretty cool and it was very interesting.
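The perceive-plan-act loop described in that blog post can be sketched roughly like this. This is a hypothetical illustration: `small_language_model` is a stub standing in for a real on-device SLM, and the cues and actions are invented, not NVIDIA's actual API.

```python
# Hypothetical sketch of an NPC's perceive -> plan -> act loop.
# `small_language_model` is a stub; a real game would call an
# on-device SLM for planning and multimodal models for perception.

def small_language_model(prompt: str) -> str:
    # Stub "planner": picks an action based on cues in the prompt.
    if "footsteps" in prompt:
        return "take cover"
    if "player is low on health" in prompt:
        return "press the attack"
    return "patrol"

def npc_tick(perceived_cues: list[str]) -> str:
    # One frame of the loop: perceive (cues), plan (model call), act.
    prompt = "You are an enemy NPC. Cues: " + "; ".join(perceived_cues)
    return small_language_model(prompt)

print(npc_tick(["footsteps", "door creaks"]))  # -> take cover
print(npc_tick(["player is low on health"]))   # -> press the attack
```

The "human-like frequency" requirement mentioned in the post is why small models matter here: the loop has to run every few frames, which rules out round-trips to a large cloud model.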
Yeah, see, this, versus the medium-term futuristic humanoid robots,
this is the kind of stuff I like to see.
Because this really, I have no doubt, is happening
and will actually just like unleash really cool things and experiences just in the next like
year or two.
I also noted my other, I think, 2025 prediction: so they use small language models, SLMs.
I've been seeing a lot of people in the agentic space using LAMs, large action models.
I think everyone's going to start rebranding LLMs into other kinds of terminology,
realizing LLM branding is kind of played out and a little restricted to "when's GPT-5 coming."
So I think you're going to see everyone in these companies start to come up with these alternative terms that are a little more focused: SLM, LAM, things like that.
Yeah, the branding and AI.
It's just, it's always on point.
It's all branding.
Okay, so one last bit of AI news here and sort of the dark side of AI, which again,
we're constantly reminded that so much of this AI revolution is built on copyrighted materials.
There's a TechCrunch story that says Mark Zuckerberg gave Meta's Llama team the OK to train on copyrighted works.
This is coming out in court filings: that Zuckerberg, Mark Zuckerberg, cleared his team to train on LibGen,
which is a data set, this is according to internal messages,
a data set we know to be pirated,
and that may undermine Meta's negotiating position with regulators.
And when that concern was brought up,
this is according to the filing,
the decision makers within Meta
said that after escalation to MZ, Mark Zuckerberg,
the AI team was approved to use LibGen.
I get the argument that this is a transformation
of people's work.
I do not fully agree with this idea that therefore companies can go ahead and ingest copyrighted works and use them freely for their purposes without a license.
I don't know.
It just seems wrong to me.
What do you think, Ranjan?
Well, we're going to be talking about Mark Zuckerberg's announcements regarding the Meta platforms in just a moment.
And I think there is going to be some significant copyright lawsuit resolution of some sort in 2025, like, you know, the New York Times suing OpenAI already.
When stuff like this comes out, it's one of those where I really think, I like that this whole episode is basically AI is all branding and PR. But I genuinely believe that, like, you need to have the public on your side when this stuff comes out, because, one, it's so crystal clear that obviously there's copyright violations in all of this. I think everyone, it's hard to pretend that that's not the case.
However, these products are so valuable to everyday users. It's like that when everyone
supports you and is on your side, I think it's just okay enough that these things will resolve
themselves. But I think like if the general public is turning against you, I think this is
where there can be some real issues going into the next year. Because I think we're going to see
more and more of these lawsuits. And at some point, there has to be precedent set because this is
completely uncharted territory. And there's going to be some kind of court resolution that
establishes some kind of precedent. Yeah. No, I'm with you 100%. I mean, ultimately, I get the
argument that the AI companies are making and why they'd say it's fine to train on copyrighted
material. But there's just something not right about it. And so far there's been, you know,
a small public backlash, not really much of any.
And I just think that that's going to come to a head eventually.
All right.
So speaking of Meta, there's some new moderation policies that, when we speak about them,
we're sure to make at least half of our listeners mad, maybe all of them mad,
but we will not be daunted.
We will discuss them right after this.
Hey, everyone.
Let me tell you about the Hustle Daily Show,
a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines in 15 minutes or less
and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now.
And we're back here on Big Technology Podcast.
First half, we talked all about the latest in AI.
Now we're going to talk about the second biggest story, I think, in tech this week, which is Meta's full about-face on its fact-checking and content moderation policies.
So basically, Mark Zuckerberg says they're going to take a whole new approach to speech on Facebook.
He basically says there are five changes they're going to make.
One, they're going to get rid of fact checkers and replace them with community notes like X.
Second, they're going to simplify content policies and get rid of a bunch of restrictions on topics like immigration and gender, which Zuckerberg says are just out of touch with mainstream
discourse. Third, they've had filters scanning for any content policy violation. They're going
to tune the filters to focus on tackling illegal and high severity violations, leaving the rest
to user reports. Fourth, they're bringing back civic content, which is a way to say that there's
going to be a lot more news and politics on Facebook. And fifth, they're moving their trust and
safety, content moderation teams out of California and to Texas.
And they're also, this is like really a sixth thing.
They're going to work with President Trump to push back governments around the world
that are going after American companies and pushing them to censor more.
Basically, they've had other countries lean on them to censor,
and they're going to try to work with the Trump administration to push back on that.
So those are the big changes coming out of Facebook.
There's, like, more detailed stuff that we're going to get into.
But just on its face, what do you think about these changes, Ranjan?
I think Mark Zuckerberg has had an incredible PR turnaround over the last few years from back in the 2017, 18 Cambridge Analytica days.
I think he's definitely risking all of that work with this, to make this public statement.
because there didn't need to be him saying this.
That was interesting to me, like, from a policy perspective,
from an actual product perspective,
they could have just done this quietly, obviously.
So saying it publicly is kind of, you know,
presenting this to Trump very clearly,
kind of like presenting this to the world.
But I think, and I'm curious your take,
my possibly counterintuitive opinion is,
I think it's good in the sense
that I have long been wary of misinformation on Meta platforms.
I think fact-checking, just at that scale, or any of these efforts that have launched
over the last six to eight years, have been kind of terrible and a fool's errand anyways.
So I kind of like just let it rip as awful as that sounds.
Like it just, this is what this platform is.
when you have an algorithm based content platform like this, this is the way it's always going to go.
So to try to hide it and make editorial decisions or value-based decisions was always going to be a
problem.
And now they're just saying, sorry, we're not, it's not our, not our problem.
Yeah, no, I think going from fact checking to community notes is a good move.
I think community notes really works very well on Twitter.
In fact, I'm impressed with how well it works.
and go ahead
okay sorry
I disagree on that
I don't think from an actual
like efficacy standpoint
that any of this is going to work
well in terms of like creating
a better town square
conversation
I just think it's the
the only real decision
and it's actually the most honest one
all right but I'm yes and on this
I just think community notes is good
I mean it's amazing how many viral tweets
that you used to see
that would just go viral without any context,
now go viral with fact checks.
Even ads on Twitter,
when the advertiser's lying about their product,
they get community noted.
I mean, and it's not like,
the community notes generally seem like pretty level-headed
and oftentimes we'll have links to like share further context.
So why is this not a great solution here?
Well, because, okay, so we have to separate.
There's the kind of, there's the fact checking side,
which to me was always kind of ridiculous anyways.
Then there's the actual kind of like abuse side, which also they've, like, severely relaxed the overall restrictions on. Which, again, if you say something awful that's not wrong or right factually, that is just something that is very, like, and we'll get into things like "immigrants are grubby, filthy pieces of shit," which was one of the examples of now newly permissible speech. I mean, do we need a fact check? You know, like, so, so to me, the community note side, it does answer the misinformation slash fact-checking side. I do agree, it's kind of, I love seeing an advertiser get called out with community notes, but to me that's almost a small part of this.
Well, I think that's a big part of it, because it's on the fact-checking. But then I think that you're right in pointing out that the abuse stuff is a completely different side of it. And that's, again, going towards where Zuckerberg talks about how, like, we're going to simplify our content policies and get rid of restrictions on topics like immigration and gender that are just
out of touch with mainstream discourse. He says what started as a movement to be more inclusive
has increasingly been used to shut down opinions and shut out people with different ideas and
it's gone too far. And like while I think that's possible, it's kind of interesting to see what's
now permissible on Facebook. I mean, you read like a few things and there are some other things
about trans people. Trans people aren't real. They're mentally ill. Trans people are freaks. It's very
interesting to me how, like, where Zuckerberg says broaden out the discussion, really it's just, like, let's get into, like, the most vile parts of our society and let them say the things that they want to say. Which, I guess, like, yeah, you don't want to limit everybody's personal, you know, expression. Of course, I don't think any political opinion should be restricted, and I'm definitely not on, like, the pro-heavy-content-moderation side on social networks. On the other hand, like, you are building a platform that does amplify stuff algorithmically. And, like, you get to kind of set the parameters
of like the type of world you want to build inside. And it was just kind of interesting from like,
you know, it's like Zuckerberg says, let's broaden out the political debate. And this is from
Platformer. This is from Meta's chief marketing executive: I feel that the actual shock
of friends and family seeing me receive abuse as a gay man help them understand that hatred exists
and hardened support.
Most of our progress on rights happened during these periods without mass censorship
and pushing it underground has coincided with reversals.
Like that is an interesting idea, but the question is like how much of like the
negative stuff do you want to amplify, right?
And I think Casey Newton who wrote that, wrote the story with this quote, pointed out
pretty well, which is basically just, like, Meta's moving to more of a For You feed
and less of a, you know, people-you-follow feed.
So the real question isn't exactly like how much of this do you want to allow?
It's how much of it do you want to amplify?
Well, but I also think that in the end, up until now, with Facebook and Instagram
having relative kind of like social network monopoly positioning, this is the only thing that
the average person has. Like, you and I are very online.
I've dealt with plenty of abuse, especially on Twitter over the years, and it's fine.
I'm online. I deal with it. I know how to deal with it.
Like, for just the average Facebook user, which I'm guessing Facebook Blue has gone more normy than ever,
how are these people going to respond when they start getting flamed with just vile remarks and just feeling horrible?
Do they stop using the platform? Do they use it less?
So I actually think to me, and I kind of hope this resolves itself from an actual kind of
product and business perspective. Maybe that's like overly capitalistic of me. But I do think that
these platforms will become just miserable and unusable. The more, again, for me, I know you, you're still anti-Bluesky and still strong on X.
But, like, I mean, I would have called myself anti-Bluesky. I just don't think there's a big, you know, bright future there.
Well, no, but I, like, when I go on X now, it's wild. Like, it's crazy. I mean, like, the UK grooming stuff, and just, like, I've been going on it less and less and less. And then when I go on, just the outright, like, insane racism, not even, like, kind of subtle or even funny or whatever. I mean, just, like... and so, but the more that happens, the less I'm inclined to use it. And I mean, that's why I even think Bluesky has a chance, that, for my own behavior, I use it more and more and more now, because the other alternative becomes less and less usable.
So I do think it's going to be interesting to see where
this goes from an actual business and product perspective, because it all sounds good right now,
but is this going to make these things unusable for just a normal person who's just like
posting about their kids or whatever else on Facebook and then just starts dealing with just
insane, insane comments? Well, I did have an observation from the business side of this.
as well, because it's interesting to me that they're bringing back politics.
I mean, the civic content thing, I think, to me, is the most undercover part of this
entire thing.
And I think they know that news and politics gives their products engagement they just
can't find elsewhere.
And so I sort of question whether, like, people flaming each other and blowing
each other up in the comments is bad for Facebook's business.
In fact, it seems like Facebook blue was at its height when the flame wars existed and people
would go after each other.
And they'd be talking about news and politics and whatever it is today,
it doesn't have the same urgency as it did before.
So I think that, like, on its face, this is absolutely,
and I think in truth, this is absolutely a move of Zuckerberg to align Facebook
with the current political administration, as it has always done.
It always aligns itself with the political conventional wisdom in the country and I think
across the globe.
And Zuckerberg has said, point blank, our job here is to just reflect what people want. Like, we don't have any higher purpose than that. What people want, we give them. People said something in the election, Trump won, and therefore he's aligning himself with Trump. However, second order, right now, I think it's largely a business thing as well. I think he realizes that to give Threads a chance against Twitter and to restore urgency to Facebook, you need news. News equals engagement. And I've said it a thousand times on the show: when you have news, when you have politics, you have engagement, you have urgency. And I think what Zuckerberg is doing is bringing that back into the platform, and, you know, all the horrible things that come with it, in order to revive what is, you know, a platform that seems to be on its way to irrelevance in Facebook, and certainly there in Threads.
Well, I also think what he's doing,
which is cynically savvy, is announcing this just a few days before the January 19th deadline for TikTok needing to be divested from ByteDance. And listening today to the Supreme Court hearings about the TikTok ban, suddenly it went from Trump, you know, supporting TikTok again, being against a ban, to now Mark Zuckerberg is fully aligned with me. He sees it. He knows it's happening.
happening. And I mean, you can start to see suddenly, and I was listening to the hearings, like,
even the conservative justices seem to be edging towards a ban. So if Trump,
does not step in on this, I think this could actually be the thing that pushes
the ban into reality.
Do you still see the ban happening?
Oh, yeah.
Okay.
So today, listening to the Supreme Court hearing, the most fascinating thing, I mean,
in this day and age, the nine justices on the Supreme Court seemingly being aligned
on anything, I think is almost non-existent.
They all seem to be leaning towards, and again, it's not a ban. And actually, funnily, Amy Coney Barrett kept saying it, but then also Justice Jackson on the liberal side was saying it. They're all like, this is not a ban, it's a divestiture. Like, all you have to do is divest from ByteDance. Why can you not, from a, you know, a foreign adversary of a company, why can you not divest?
And then the solicitor on the ByteDance slash TikTok side kept saying this is a matter of free speech.
And they're like, no, it's not speech, just divest.
And it's the fact that everyone from both sides, even Clarence Thomas, is like, what exactly is TikTok's speech here?
I don't understand why a restriction on ByteDance, a Chinese company, represents a limit on TikTok.
So I think Zuckerberg's move might actually be the defining, like, push where Trump suddenly is like, okay, I don't have to worry about Meta and Instagram right now. You know what? I can now go after China. Let it rip.
Let it rip. Yeah, that's really
interesting. And I think, again, like, we've talked about this on the show, but if TikTok ends up
just shutting down and not divesting, then that's kind of the game right there. In fact, this is probably
a good test to be like, you know, are you going to try to divest or are you going to shut down?
I still, heart of hearts, don't believe that this thing is going to get shut down or sold, or, you know, maybe there'll be some declaration of divestment and that'll be that.
However, if it goes through and TikTok says, all right, we're done, it's like, okay, you are a tool of the Chinese Communist Party.
But they've already said that.
All they had to do, like, was say, we are not going to use the ByteDance algorithm and completely divest our data, our algorithm, the business from ByteDance.
That's not a completely impossible, unreasonable thing.
Like, that's the part to me that still is almost like confounding.
Like, why can't they just do that?
And that's what you heard it even in the justices when they were talking.
Again, across the aisle, everyone's like, why can't you just do this?
Like you're saying it's free speech.
You're doing all that. Like, it's not unreasonable that a foreign adversary, like an opaque company where we don't know exactly the relationship with the Chinese government, you should not be controlled by them when you control the narrative of American media. And there's no good answer to that question: why can't they divest? Again, we're only nine days away. It's going to be a fun, crazy nine days on this one.
Yeah, I think, I think we're going to get, all it's going to take is one Trump truth. Truth Social, I don't even, what are they called again? Truth Social, uh, to say that. Uh, but what's the actual post called? If it's a tweet, I'm not, I'm not that far down. I still call X posts tweets, even though they call them posts. So, uh, okay.
They're called truths, I don't know.
Truths? Truths. Truths, that sounds good. Um, all it's going to take is one truth saying let TikTok, let, let, let it shut down, and then it's over. And I think he's going to be leaning that way.
It could be leaning in that direction given Zuckerberg's announcement, given just, overall, anything that makes him look weak on China is always going to be problematic. And so this was always going to be kind of a delicate situation anyways, versus cleanly saying, I don't want China involved in anything, shut it down.
Yeah. So speaking of China, should we close on the fact that
uh, I spent 15 hours in Beijing on Tuesday on a layover back home and just found it very interesting
to be in China. You know, you've been there as well. To me, the thing that really stood out was the
fact that it really is a surveillance state. I mean, there are cameras everywhere. And we were driving
myself, my wife, and a guide, we were driving out of the airport on our way to the Great Wall
to start the day pre-dawn. And it just felt like every few seconds there was another flash. And I was like,
oh, like in Australia, there's lots of speed cameras. And of course, I knew about surveillance in China.
I was like, is that like a speed camera?
It's like, nope, that is what they call eagle eye,
or sort of the nickname for it.
That just tracks everybody's movement,
does facial recognition to see who's in the car,
and if you do things that are wrong,
they will get you.
And in some places like in Tiananmen Square,
every light fixture seemed to have like a dozen cameras
or more affixed to it.
And it's just this incredible thing
that I don't think, you know,
you fully grasp until you see it,
that like wherever you go, wherever you go, you're being watched.
It was very interesting to see in person.
Well, I, so I spent three months in 2009 in Beijing,
and I, like, lived there.
I was taking a language class.
And that just reminded me of the surveillance.
So 2009, not quite digital,
but it's such a different mindset in the sense of,
so it was around the time of swine flu.
And so, like, already there was like a lot of,
pandemic-y behaviors going on.
And I went out one night, and I was a bit hung over the next day.
And I didn't feel like going to my class.
So I emailed the teacher, like, I'm feeling a little sick.
I don't think I can make it today.
Two health officials showed up at my door.
Oh my God.
Yeah.
And, like, literally, they did not speak any English.
My Chinese still is not very good,
and at the time was non-existent.
and literally start asking me and yelling at me,
yelling at me,
asking me all these questions.
And that entire mentality of surveillance
and like kind of collective watching,
even in the pre-fully digital era was there,
so I can't even fathom what it is like right now.
Yeah, it's pretty incredible.
Like I, at one point I was like,
oh, where's my phone?
Of course, it was in my pocket.
And the guy was like,
don't worry, if you left your phone like a few steps back,
you know, it'll still be there because the cameras will make sure that nobody took it.
And if somebody takes it, we'll be able to find out where they went with it.
Thank you. Thank you.
Oh, my goodness.
But, you know, I mean, outside of that, I just think that it was like a really great visit, honestly,
to be able to see China in person.
And it's just like, it was pretty cool to see, you know, just a culture that has been, you know,
ongoing for so many years, just reflected in the modern day.
As you, like, walk around Beijing, you could really see it.
um, with the architecture and sort of, like, the dress. Uh, it was just super cool. And
obviously, like, getting a chance to see the Great Wall, which is just, like, one of the wonders of the world, was really special. So I got a 10-year visa to go, and, uh, so 15 hours of those 10 years are in the books, and I definitely got to go back.
Did you download, uh, Douyin, the TikTok, uh, the original TikTok based out of China?
I should have. Um, yeah, the thing is, I tried to download Alipay, and I did get it to work, but it still had some problems. Like, you can literally not pay with...
You need a local phone number, right? Probably, I think that's what it is. Like, yeah, in Taiwan, I have to deal with the same thing.
Yeah, yeah, yeah. So could not use a credit card. Uh, everything is Alipay. There are so many EVs all around,
like, you see the EVs with the green license plates, and there were so many brands I'd never seen before that are just driving around. It was, uh, almost like sensory overload to, like, see all these things that we had, like, you know, read about, that I had read about in the press: the surveillance, the EVs, the mobile payments, uh, you know, the Soviet influence even. Like, you'd go from the Forbidden City, which is all ancient Chinese architecture, into Tiananmen Square, and then you, like, look around, and there's, like, the Great Hall of the People, uh, which is the most important building, I would say, in the country, where all the government meetings are held, and it's like,
oh, that's just like straight up Soviet architecture.
And so it's like, you're there.
You're there.
It's real.
Yeah, there are real Soviet themes in the flag.
You know, you go to the market.
There are like sculptures of Ayatollah Khomeini.
You're like, okay, that bond is real.
You know, it's just a very, very interesting trip.
And I think, like, I wrote about this in Big Technology this week, but there have been some debates on, like, whether, you know, it's worthwhile to travel. And I just find those debates, like, so absurd, because not only do you get a chance to meet people from different backgrounds when
you go, but you also are left with so many more questions and a little bit more context of
all these things that you see. And so I'm strongly on the affirmative side of go out and see
the things that you want to understand. And I'm not going to claim to be a China expert
after 15 hours in the country. No, you should. You should. I think it's time. But it certainly
opened a lot of new questions for me. And it was really, really helpful, I think, to see this in action.
Are you going back?
Yes. You got the 10 years, so I will go back, uh, as long as, like, things stay kind of calm between the U.S. and China. I hope they do. Um, but yeah, we'll see. I might have to do Big Technology Live in Beijing.
Not sure how that would play.
Yeah, no, maybe, maybe it'd be, no, we're not coming back. I don't think Xi is going to allow that. But it's a good idea. All right, Ranjan.
Maybe, no, maybe he's a listener. Maybe he's a listener.
I do know.
Yeah, that's one of the things
that Xi Jinping enjoys.
If you're out there,
Xi, come on the podcast.
Yeah.
I would love to have you on it.
All right.
I mean,
can you imagine
our first head of state
as Xi Jinping?
I don't know.
I think we're not the people to be asking questions.
All right, Ranjan,
great speaking with you as always.
We'll see you next week.
See you next week.
All right, everybody.
Thank you for listening.
We're going to be talking Replika on Wednesday,
so that'll be a really fascinating conversation.
And then Ron John and I will be back next Friday.
Thanks for listening.
I'll see you next time on Big Technology Podcast.