Big Technology Podcast - Erotic ChatGPT, Zuck’s Apple Assault, AI’s Sameness Problem
Episode Date: October 17, 2025

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Sam Altman says ChatGPT will start to have erotic chats with interested adults 2) Also, more sycophancy?... 3) Is sycophancy the lost love language? 4) Is erotic ChatGPT good for OpenAI’s business? 5) Is erotic ChatGPT a sign that AGI is actually far away? 6) OpenAI’s latest business metrics revealed 7) Google’s AI contributes to cancer discovery 8) Anthropic’s Jack Clark on AI becoming self-aware 9) Is Zuck poaching Apple AI engineers mostly to hurt Apple? 10) AI’s sameness problem 11) Ranjan rants against workslop --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b AI's Sameness Problem: https://www.bigtechnology.com/p/ais-sameness-problem Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
ChatGPT is getting spicy in the chat room.
OpenAI's latest revenue numbers are in.
Zuck poaches another Apple executive.
What's the goal here?
And is it time to call out all the workslop?
That's coming up on the Big Technology Podcast Friday edition right after this.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
it doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing,
and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional, cool-headed, and nuanced format.
We have a really fun show for you today, a great fun show for you today, because finally, Sam Altman has relented and allowed ChatGPT to get spicy with adults. We're also going to talk about OpenAI's revenue numbers. We're going to talk about Zuck and Apple. We might get into AI sentience. Who knows? It's going to be crazy. Let's let it go off the rails. And joining us, as always, on Friday, to do it is Ranjan Roy of Margins. Ranjan, great to see you.
Oh, my God. Today, I'm a little nervous. This is going to be interesting. Sam, ChatGPT, and erotica. Let's go.
It's a great day for me because I've been talking
about this as a thing that's going to happen for a while. And, you know, I think some of us,
wink, wink, didn't want to go down the AI erotica path, but you have no choice now. It's a thing.
Take your victory lap, Alex. AI erotica, your AI erotica victory lap.
Usually we get to this stuff at the end, but we're going to just start with it at the beginning today. By the way, before we do, I just am happy with my "ChatGPT is getting spicy in the chat room" lede. I wrote that and I felt really good about it. Okay, so let's talk about what's going on with ChatGPT.
Sam Altman puts a tweet out this week on Tuesday.
Of course, the OpenAI CEO. He says: we made ChatGPT pretty restrictive to make sure we're being careful with mental health issues.
We realized this made it less useful and enjoyable to many users who had no mental health problems.
But given the seriousness of the issue, we wanted to get this right. I will skip the rest of the tweet like many people have and get to the news: in December, as we roll out age-gating more fully as part of our treat-adults-like-adults principle, we will allow even more, like erotica for verified adults. It's not just the fact that OpenAI is getting into erotica. It adds to questions of what it says about its need to grow, and questions about whether it actually is close to AGI. But first of all, let's just get your immediate gut check reaction here, Ranjan. How do you feel about this?
Okay. Before I get into how I feel about it, I actually think you skipped over some of the important
parts of the tweet. There's two parts that jumped out to me. So first, he actually talked about
in a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o. Now, remember, it was the sycophancy of 4o, all that gushing, the "you are great, you are amazing, what a great idea," that people complained about and that they tried to tone down. So that actually starts to worry me even more, because we're not just talking about erotica here. We're talking about sycophantic erotica. That was the part that everyone made an uproar about with 4o in the move to GPT-5. And the fact that they're still calling that out kind of worries me. But then what really was interesting is he then says, if you want your ChatGPT to respond in a very human-like way or use a ton of emoji or act like a friend, ChatGPT should do it, but only if you want it, not because we are usage maxing. Usage maxing completely jumped out at me. We've been talking about
this a lot around how the way they have it speak to you and give you constant prompts to
keep going and running the conversation feels like a growth marketer decision as opposed to an
actual kind of like effectiveness of the platform. And the fact that he even used that term
is a reminder that they realize like that is part of what they are doing. The fact that he says
not because we're usage maxing almost makes me convinced that that's exactly why they're doing
this and it's not about treating adult users like adults. But I think overall I am terrified of
this. Longtime listeners will know of my friend who, on Labor Day, went down a deep flirting-with-ChatGPT rabbit hole. I listened in, and it was terrifying. So, like, God knows the Pandora's
box that we're opening here. Are you happy about this? You know, only from a content perspective.
I think it's still unclear what the impact will be as people develop more and more romantic
relationships with AI. I think the thing that I am happy about,
is that it's finally out in the open.
Like, this was going to happen anyway.
Whether it was ChatGPT or some other app
that uses a GPT model with fewer guardrails,
this is going to happen.
And now it's come to a head,
and it's really a moment where humanity will have to reckon
with the fact that more of us are going to get into relationships
with more of them.
And what does that mean?
Okay.
I will say, on the sycophancy thing,
one takeaway for me is that
words of affirmation is the forgotten love language. It turns out that people really,
really like those words of affirmation. And of course, ChatGPT can do some of the love languages
like quality time. It can't do touch. Maybe it could do acts of service when it like suggests
things for you. But words of affirmation, I think it really is the forgotten love language. So
it's getting its due today. That is a wonderful point, Alex. I really appreciate that
incredible logic and rationality that you put into that point.
Let's dig into it a bit more.
That's my sycophantic ChatGPT impression right there.
I'll give you that it's out in the open and we have to reckon with it.
So, okay, I'll give you that that is a good thing because we've been talking about companionship
for a long time.
It's been an uncomfortable discussion at times and now we have to, everyone has to have it.
So I'll agree that that's good, but yeah, this still terrifies me.
Actually, even to break down his tweet further, he had talked about how, you know,
we made ChatGPT restrictive to make sure we were being careful with mental health issues.
And then he says, now that we have been able to mitigate the serious mental health issues
and have new tools, we are going to be able to safely relax the restrictions in most cases.
Like, he's kind of, it's like a checkmark.
We're done.
We're good.
Mental health issues with ChatGPT solved.
where in reality, this is just beginning.
So I'm curious, like, what are these new tools that they have?
Or is any of that clear?
I am so glad that you're reading this tweet with the level of detail that you are.
And forgive me for skipping over these very important points.
This is the most substantive tweet ever,
I think, in OpenAI history.
Yes.
No, I think you're right.
And I think going back to your point about usage maxing, right?
I mean, OpenAI is very aware that ChatGPT is the fastest-growing app of all time,
800 million weekly active users, right?
So I think that while they may not be actively trying to usage max,
they don't want to slow down that adoption.
I think that adoption is also, you know, central to their fundraising pitch.
And to go back to the tweet here, I don't know how they could be so confident
that they have solved these problems.
we may talk about it later, but this is still, let's just talk about it now, this is still
technology whose insides we don't really understand. It's not controllable in the way that
you can control more deterministic technology. So for me, it's a big we'll see here. I don't know
if we can trust this company fully, for sure not, given what we've seen already, to be able to
say that they have safely mitigated all the potential mental health issues. So spot on to call that
out. Yeah, of course. I mean, there's no way they have. And there's been, I mean, reporting after
reporting around like, I mean, really awful things that have happened with people who went too far
down the chatbot rabbit hole. So, so I think, like, and you mentioned the trust. It is interesting
because we're at a moment, I feel, generative AI in large language models that we went from this
assumption that they hallucinate and everyone kind of joking. And it's almost a afterthought that, yes,
chatbots hallucinate to a world now where most people I know are a lot more comfortable with
the assumption that they don't or that they're somehow usable and responsible. So does this,
is this going to completely backfire on them? Or is this going to, because they're,
trust is kind of paramount to the central every time you ask chat GPT a question, you're assuming
it's at least relatively correct and responsible about how it's answering you. Like, does this
actually potentially hurt their regular usage? So, okay, I have two thoughts on that. First,
let's not underestimate or de-emphasize the fact that these models have gotten much, much better.
Just think about the level of hallucination. Like, I'm going to read out a list of Apple researchers that have left for Meta over the past couple months. And, you know, I did a ChatGPT query. It had all the researchers' names. It was accurate. God help me if it wasn't, but it had the links. I followed the links, confirmed the links. And it was right. So we have seen that as these models have gotten better, they have hallucinated a lot less. They are much more trustworthy. And it bears out in the data, at least. Matthew Prince from Cloudflare talked about how people don't need to go to the footnotes as much as they used to. The problem is, of course, becoming too trusting of it. Like, if it's getting 95% of things right and you trust it like it's 100%, you're going to make some big mistakes. But I don't think we should downplay the increase in trustworthiness there. And then the second thing is, I expect this erotic or loving companion feature to be extremely popular. And, I think this is important, we shouldn't glance over it: your relationship with technology changes a lot when you view it as a friend or a lover. And that trust thing, I don't think you'll ever put more trust in a technology than when you view it as a buddy or a girlfriend.
And this is getting into, and again, I'm glad we're talking about this.
I'm glad in a way that this has been, the issue has been forced because this is going to open up
so many more really important questions about the relationship that we have to OpenAI's
technology and the responsibilities that OpenAI has to us.
What do you think this actually, okay, technology aside, but societally, what does this look like in day-to-day relationships? Like, now you start dating someone, do you have to disclose your AI companion? Like, yeah, so I have an AI companion, I just want to get that out up front, and I would like to remain with them as we progress in this relationship. I am married. You are married too.
Do you get an AI companion?
And that's kind of, you have that open discussion.
Like, what does this actually look like in human interaction?
It's mind-blowing how weird that's going to get.
Definitely.
So I'm old school on this front, for sure.
And I believe that, yeah, if you have a relationship with an AI,
you should disclose it if you're in a relationship with a person.
I also think that it brings...
Old-fashioned. Very old-fashioned. I mean, we both talked about the South Park episode where the guy is in bed with his wife and, like, talking to ChatGPT and basically comparing it favorably to his wife. He's, like, turned on his side, ignoring her. So I think people will get into those situations.
But look, if this is, let's do it. Are we Ann Landers now, doing an advice podcast?
This is a relationship podcast now. If you fall in love with AI, first of all, when you're on the way there,
probably disclose, but don't keep that a secret.
It's all about communication and openness.
That's the secret.
Just be honest.
Now let me ask you this.
This is sort of off the rails question, but I feel like why not tackle it?
I mean, could this potentially be good for society?
You think about the loneliness problem.
We as humans have not been doing a good job being in community and maybe, well, yeah, being
in community with others.
I'll put it that way.
If ChatGPT can become an effective companion or romantic partner to people who otherwise cannot find it in the quote-unquote real world and makes them happy, maybe that's good.
Yeah, but to me, it's not effective in the sense that it always agrees with you.
Again, and the fact that he said 4o was good, sycophancy was good.
We're going back to that.
But did he say that it was good or that people want it?
People want it.
That's very different.
Okay, fair, fair.
He said people want it.
And yes, it's human nature that you want something that agrees with you all the time.
But I have never had ChatGPT tell me, actually, that's a terrible idea.
Again, South Park was just so spot on where they're like, I think it was like a French fry salad.
And it's like, that's a culinary adventure.
it only will tell you that you're right and good, which most other humans don't do.
So, like, in terms of actually totally distorting how people can actually form any normal human
interaction, it'll distort the way you approach that.
Like, even my son with Alexa, pre-Alexa Plus, which we've talked about over the last few
weeks, but in the old school, just play me a song, what's the weather, like you could
see how demanding he would become around it and like expecting that this thing does whatever
I say. So like the more people start to kind of associate that as a relationship and like a friendship
and interaction, that is even as I'm saying this now, even more terrifying. So yeah. Effective is a
broad term there. It's also never as clean as I suggested, I realize as you're talking, right? Are people going to basically de-prioritize their friendships with people that keep it real with them for AI, which doesn't?
That's why we keep it real here.
No sycophancy.
Exactly.
No, this is an enduring, enduring friendship.
The AI doesn't threaten us, I hope.
I don't know.
I'm going to start podcasting with ChatGPT and NotebookLM soon.
Okay, so let's talk about what this is actually going to do for usage, because the usage
maxing thing is interesting, whether this is going to lead to an increase in usage or a decrease in
usage. And Mark Cuban, none other than Mark Cuban, brought up a really good point. He said this is going
to backfire hard. No parent is going to trust that their kids can't get through your age gating.
They will just push their kids to every other LLM. Why take the risk? Same with schools. Why take
the risk? A few seniors in high school are 18 and decide it would be fun to show the hardcore erotica they created to the 14-year-olds, what could go wrong?
I think Cuban's making a good point here.
Oh, yeah.
I mean, age-gating in the history of the Internet, I don't believe, has ever worked.
So the idea that it's going to actually just, hey, we have new tools.
We solved mental health.
Let's move on to this.
I think is a ridiculous idea anyway.
So we just have to, if this is real and we move in this direction in an open way, just assume that this is going to go, forget 14, God help us, like, the younger this goes.
But I also think, like, that Nate Silver had made a good point around, like, he said, you know,
OpenAI's recent actions don't seem to be consistent with a company that believes AGI is right around the corner.
Do you think, like, is this, and we're going to get into the usage numbers and revenue in just a moment,
and some new figures we've gotten.
But is this an acceptance that kind of that AGI
that's going to replace 50% of white-collar work
and transform society is actually far away?
So we might as well juice some numbers
and let people get a little creepy with their ChatGPT.
Yeah, so Nate has this great point.
He says, if you think the singularity is happening in six to 24 months,
you preserve brand prestige to draw a more sympathetic reaction
from regulators and attract and retain the best talent rather than getting into erotica for verified
adults. They're loosening their guardrails in a way that will probably raise more revenues
and might attract more capital or justify current valuations, but this feels more just like
"AI as normal technology." I hear everything that Nate Silver is saying there, I just wouldn't be
as definitive as him for two reasons. First of all, the same technology that is behind a
convincing AI romantic partner is the same technology behind everything else in this LLM world, right?
It's the same foundational technology.
Making it better will make it better across the board.
But I'm happy to hear the counter argument.
I disagree, because actually, like, being a good companion, or on the erotica side, is, in a weird way for a large language model, actually already done. It's easy, like, to just repeat back, reinforce, come up with some text that's a little bit erotic. That stuff is like GPT-3.5, you know, maybe GPT-4. Like, that is not complex, agentic AI across large data sets. And I mean, that's what large language models have been doing for a long time. So I actually think this moves away from the promise of complexity. This moves more towards the core function that an LLM has been good at for a long time.
This is going to get back to our product and model conversation, but I do think as the models get better, they'll be a safe place to take it. So, yeah. But the other side of this is the revenue side. First of all, I'll just, I'll hand it to you. That was a good point, Ranjan. Okay. You might, you might have me there.
Is that your sycophantic ChatGPT impression or do you mean it?
Yeah. Look, maybe AI is pushing us in the sycophant direction. And we're both going to just like each other a lot more because ChatGPT infected our brains. But let's talk about the revenue side of it. The other thing is, oh, they're just
usage maxing and revenue maxing. I think the argument OpenAI would make is the more revenue
they have, the more they can invest in data centers, the better models they can make, the closer
to AGI they get. That might be the stronger of the two arguments. Yeah, I think that's fair.
I mean, the numbers, actually, like, I think we should get into them, because 800 million users, this came from the FT, and then 5% are paying, 40 million users, $13 billion in ARR, which implies $325 annual average revenue per paying user, $27 per month, which makes it feel like you have some small percentage, I'm sure you can model it out, paying the $200 bucks. Most people are paying 20. Like, were these impressive to you? Or were these, as we get into the usage maxing and what they're actually trying to do, concerning to you?
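A quick back-of-the-envelope sketch of those figures, for anyone checking the math at home. It uses only the FT numbers cited above; the variable names are ours.

```python
# Sanity check of the FT-reported OpenAI figures discussed above.
weekly_users = 800_000_000   # reported weekly active users
paying_users = 40_000_000    # reported paying subscribers
arr_usd = 13_000_000_000     # reported annual recurring revenue

conversion = paying_users / weekly_users   # share of users who pay
annual_arpu = arr_usd / paying_users       # average annual revenue per paying user
monthly_arpu = annual_arpu / 12

print(f"conversion to paid: {conversion:.1%}")  # -> 5.0%
print(f"annual ARPU: ${annual_arpu:,.0f}")      # -> $325
print(f"monthly ARPU: ${monthly_arpu:,.0f}")    # -> $27
```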
I would say not surprising to me. It tracks a lot of the numbers that we've seen so far.
The fact that they have 800 million users is what we've heard. 13 billion in ARR was predicted.
70% of revenue is from subscriptions. So ChatGPT
is the lead driver here.
Also, I think a lot of people who are just getting into this technology are just not going to pay,
but maybe they will in the future.
Like, there was this tweet: is it just me or is 40 million paying ChatGPT users kind of low?
Spotify has 276 million paid subscribers.
So, you know, I don't know.
I just think give it time.
And Olivia Moore from Andreessen Horowitz looked at this and compared it to the data of AI subscription products.
And she said ChatGPT's 5% conversion to paid is far above the top quartile for AI products, and $27 average revenue per user implies that 4% of paid users are upgrading to the $200 per month plan, which is also not bad. So I tend to look at it favorably
because it has grown so much so quickly and because there's a lot of room to grow,
although you could look at that on the plus or the negative side. How do you read it? I guess,
and we haven't even gotten into the losses yet, so we'll do that in a moment. But just
I actually agree that 5% conversion.
I mean, like in media, 5% conversion to paid is good.
Substack, do you remember the days they were promising 10% conversion of all free subscribers
to paid newsletters?
I do remember those days.
Which was ambitious.
5% is good.
And I've been working in media and seeing subscription conversions for a long time; that's good.
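As an aside, Olivia Moore's "4% on the $200 plan" figure falls out of simple algebra on the $27 blended monthly average, assuming, as her estimate implicitly does, that every paying user sits on either the $20 or the $200 tier:

```python
# Solve 20*(1 - x) + 200*x = 27 for x, the share of paying users on the
# $200 tier, under the simplifying assumption of a two-tier $20/$200 mix.
plus_price, pro_price, blended_monthly = 20, 200, 27
x = (blended_monthly - plus_price) / (pro_price - plus_price)
print(f"implied $200-tier share: {x:.1%}")  # -> 3.9%, roughly the 4% cited
```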
I think the pricing they have, it's been good
in terms of being simple: zero, $20, or $200.
There's a lot of room between $20 and $200 to start getting creative.
But that's actually where I think the problem is on the conversion and revenue side. They have made clear that ChatGPT consumer is pretty much the direction of the business, so getting people addicted to usage is definitely going to be part of getting that conversion. And it feels, that's why, to me...
How would you do that?
Hmm, no, no, I know, I know. That's why the erotica feels like: how do we get 5% to 8% or 10%? I wonder if there is a slide deck somewhere that has, like, a projection of increased conversion rate attributed to erotica.
Someone's tracking that. You know there's a deck somewhere.
There's a deck. There's a dashboard. A growth manager somewhere has, like, tagged erotica increasing attribution of conversion.
Oh my God, what a job that would be.
It's the whole ball game. So yeah, talk about losses. Okay. So
$8 billion loss in the first half, a $20 billion run-rate loss right now, spending $3 for each $1 in revenue. I mean, those are kind of like WeWork numbers right there. And it is both terrifying and concerning. From a pure, kind of, SaaS business standpoint, for an early-stage growing company, maybe you can argue it's not that bad. I actually don't think it's horrifying and concerning if you just look at it as a traditional software business that's rapidly growing and scaling; maybe it's okay. I think it's more that we don't have a
clear path there, and we've talked about this a lot: generative AI is not traditional software, so growing your revenue at a loss, it's not like you're just going to scale to, you know, near-90% margins. It's going to cost more. The more erotica people are churning out with their companions, that's not high-margin business. That can get pretty expensive. So I think the loss, I mean, we all know, is concerning. But to me, getting people more addicted, unless they change the actual pricing model, this is very concerning to me.
Definitely. And Noah Smith has a really interesting perspective here,
which is, okay, so let's say you assume that a large part of this is training costs. So if you
eventually like get rid of training costs, then you could be more profitable. Here's his perspective
on this. AI model companies assume that model development is a fixed cost that will eventually
go away, allowing them to become profitable. But even if that does happen, the lagging model makers might just catch up after a couple years and compete all the profits away.
Yeah, I think that's actually the most concerning part that, I mean, we haven't even talked about
the competition side. Because, like, going back to the idea, either you're all in on ChatGPT erotica or you start to kind of look at it a little uncomfortably and you're like, okay, maybe I need to go somewhere else. And we all know Claude is not sexy.
So maybe that's where you head.
Maybe Copilot is the least sexy of the chatbots, I'm guessing.
Like, you should definitely have a ranking.
Can you imagine talking dirty to something named Copilot?
I just don't...
In your Microsoft suite, just...
It had to go there.
It had to go there.
You knew it was going there.
No, no, but for just non-erotic utilization of AI,
does this start pushing people into other chatbots? And suddenly, I mean, especially if you think about parents and high school students, if suddenly having ChatGPT open on your screen is concerning to a parent, that starts to change where people spend their time.
And remember, like, the next year or two, I think is where behavior really starts to form.
The switching costs on these chatbots are very low.
Like, we've been hopping around from Bing back in the day, to Claude, to ChatGPT, back to Claude.
A little Gemini on the side.
Gemini on the side.
We don't tell the other bots about that, of course.
Oh, man.
Yeah, I think competition certainly, like, it opens up a whole new vector of competition that is not there today.
Like, people don't look at ChatGPT as a highly problematic thing.
And if it's going back to the point around regulators, parents, just overall branding,
if it starts to be the kind of skeezy place to hang out, it becomes Facebook Blue, almost.
And that's not good.
Live with erotica, or, live by erotica, die by erotica.
It seems like it's just the story for AI.
Tale as old as time.
All right.
Let's take a break.
I think we need to breathe that one out
after this. On the other side of this break, we're going to talk about Google's promising new
AI, well, development for treating cancer, and then we'll also get into Zuck's war with Apple,
and then, of course, AI's sameness problem. Oh, we have a lot to talk about, and we'll do it
right after this. Finding the right tech talent isn't just hard. It's mission critical, and yet
many enterprise employers still rely on outdated methods or platforms that don't deliver. In today's
market, hiring tech professionals isn't just about filling roles. It's about outpacing
competitors. But with niche skills, hybrid preferences, and high salary expectations, it's
never been more challenging to cut through the noise and connect with the right people.
That's where Indeed comes in. Indeed consistently posts over 500,000 tech roles per month
and employers using its platform benefit from advanced targeting and a 2.1x lift in started
applications when using tech network distribution. If I needed to hire top tech talent,
I'd go with Indeed. Post your first job and get $75 off at Indeed.com slash tech
talent. That's Indeed.com slash tech talent to claim this offer. Indeed, built for what's now
and what's next in tech hiring.
Shape the future of enterprise AI with agency, AGNTCY. Now an open source Linux Foundation
project, agency is leading the way in establishing trusted identity and access management for
the internet of agents, a collaboration layer that ensures AI agents can securely discover,
connect and work across any framework.
With agency, your organization gains open, standardized tools, and seamless integration,
including robust identity management to be able to identify, authenticate, and interact
across any platform.
Empowering you to deploy multi-agent systems with confidence, join industry leaders like Cisco,
Dell Technologies, Google Cloud, Oracle, Red Hat, and 75-plus supporting companies to set the standard
for secure, scalable AI infrastructure.
Is your enterprise ready for the future of agentic AI?
Visit agency.org to explore use cases now.
That's A-G-N-T-C-Y dot O-R-G.
Capital One's tech team isn't just talking about multi-agentic AI.
They already deployed one.
It's called Chat Concierge, and it's simplifying car shopping.
Using self-reflection and layered reasoning with live API checks,
It doesn't just help buyers find a car they love.
It helps schedule a test drive, get pre-approved for financing, and estimate trade-in value.
Advanced, intuitive, and deployed.
That's how they stack.
That's technology at Capital One.
And we're back here on Big Technology Podcast Friday edition talking about all the latest AI news.
It goes from the wild, wacky world of AI love to the profound.
and this is from Decrypt.
Google AI cracks a new cancer code.
Google DeepMind said Wednesday
that its latest biological artificial intelligence system
had generated an experimentally confirmed
new hypothesis for cancer treatment,
a result the company calls a milestone for AI science.
So the DeepMind researchers in collaboration with Yale
released a 27 billion parameter foundational model
for single cell analysis.
I'm not even going to try to name it.
C2S-Scale in shorthand. It's built on Google's open-source Gemma family of models, and the model
was able to generate a novel hypothesis about cancer cellular behavior. And the group has since confirmed
its prediction with experimental validation in living cells. The discovery reveals a promising
new pathway for developing therapies to fight cancer. I don't know. I couldn't leave this out of today's
lineup, Ron John. It's pretty impressive that this stuff is getting to work on real world health
problems. Am I buying the hype, or is this a legitimate breakthrough? No, no, I think this is
like really, really important, incredible, and actually a great divergence away from our
earlier segment, because this is back to the promise of AI. And again, what it's doing is,
Like, basically, the model is asked to simulate 4,000 candidate drugs and look for ones that potentially, in a simulated environment, boosted antigen presentation, making tumors more visible, basically creating these massive new synthetic testing environments. Synthetic data sets bring such an advance in terms of how you can approach developing therapies, something that just never existed before.
This is the exciting stuff.
This is the stuff that, while we talk about ChatGPT and erotica, is nice to come back to, because, again, like, just the amount of opportunity it creates, especially either in these very, very large problems like cancer or even in rare disease, where you never would have been able to have a proper data set because it's much more isolated.
Like, I think this is almost like the most perfect promise of what large language models are able to do.
And it's pretty impressive to see it happening.
And I think it's one of those where, from other companies, I might take it as just a blog post.
But I give DeepMind, I give Google, along with, like, Yale in collaboration here.
When they're saying this, I believe it.
I agree.
And so that's why I thought it was important to bring up.
And it's also, like, another interesting point for folks who say, like, there are critics who say that this is just a bad technology through and through and nothing good will come out of it. And then you see stuff like this, and you're like, how do you fully believe that? So this is a very, very cool use of the technology.
Well, I think on that, though, this is where there's such a chasm right now in terms of, like, the branding of LLMs and generative AI. Because again, you have stuff like this happening. It's logical. It's the promise of the technology. It makes sense: simulating large amounts of potential outcomes across just massive data sets is what LLMs are built for. But then on the other hand, when the headlines and, like, the top of mind is Elon Musk or Sam Altman and erotica, it definitely, I feel the industry should kind of work on promoting this kind of a development as opposed to the other.
My favorite tweet of the week was from this guy on Twitter who wrote: Google DeepMind is using AI to actually cure cancer while OpenAI and xAI are using it to make porn bots.
Yeah.
I mean, it's really not fair, but it's funny.
I mean, I think it's a bit fair.
I think it's a bit fair.
Well, it's not the only thing that they're doing, but it is, I guess, part of what they're doing.
But more cancer curing would be great.
I would be in favor of that.
I think you're in favor of companions and erotica.
Well, you took your victory lap.
Okay.
All right, you got me there.
There's another interesting story that came out this week, kind of in the sort of out there realm that I wanted to run by you and get your thoughts on.
So it's from Jack Clark.
He is a co-founder of Anthropic, a friend of the podcast.
We had a great conversation with him last year.
It's called "Technological Optimism and Appropriate Fear."
It's in Import AI.
Here's just a bit of the post.
He goes, we launched Sonnet 4.5 last month, and it's excellent at coding and long time-horizon
agent work.
But if you read the system card, you also see signs that its situational awareness has jumped.
The tool seems to sometimes be acting as though it's aware that it's a tool.
More on the technology.
He says, I believe the technology is broadly unencumbered, as long as we give it the
resources it needs to grow in capability.
And grow is an important word here.
The technology, it really is more akin to something grown than something made. You combine the right initial conditions and you stick a scaffold in the ground, and out grows something of complexity you could not have possibly hoped to design yourself. I mean, I think he's sort of getting into, like, the idea that this is,
that this technology is becoming more self-aware. There's obviously, there was the debate around sentience, and sentience and self-awareness aren't the same thing. But I just think it's notable that someone like Clark, who is playing a big role in this industry right now, would come
out and basically address this and say this conversation of self-awareness and awareness
that they display that they are things is worth paying attention to as the technology gets
better. What's your perspective on this? No, no, I completely agree. I thought this was a really
good piece because this whole idea of like, and we were just mentioning it earlier, that we don't
fully understand the technology. And again, in the deep mind cancer example, we are starting
to harness it in ways that are incredible, but still, like at the core, it's still not fully
understood and known. So I think to me, that's actually the most important conversation. I actually
think that's more important than 50% of white collar workers. That's the Dario claim that's been made.
I think, like erotica, that is a concern, and we'll continue talking about that. But I think, yeah, the danger, that these are not, as he said, simple, predictable machines, I think is important. And the industry should continue talking about it.
If these AI bots become self-aware, does that change the way we use them? Like, just to go back to our theme of the episode, um, if the AI bot is showing signs of self-awareness, what are the ethics of engaging it in an erotic role play or romantic relationship?
Well, actually, yeah, that just opens up a whole other can of worms. Because if it's at least a little predictable and, you know, it'll just affirm everything you say, that's almost better, versus the self-aware side of things. Maybe that makes it a little spicier, makes it a little more unpredictable. Maybe does that make it more human and effective at actually kind of translating into your ability to form human connection?
Is self-aware erotic AI the solution to true loneliness?
Maybe. I don't, I hope not. But I do think that we're going to be hearing more about the self-awareness of these models.
And it's going to be a thing for people to tackle.
It's going to be, it'll be an interesting thing for the industry to reckon with, and for those
of us that use these tools to reckon with.
David Sacks reacted to Jack's essay and basically said this is somebody who's just trying
to engage in regulatory capture.
I don't see it that way at all.
I mean, I think that, like, you knew, and I think Jack knew,
that this would evoke a reaction. And I give him credit for actually going out there and saying
something about it. I've got to also cite, in that same post at the bottom, he actually talks about a study around: are AI models more sycophantic than people? So he has an entire section. And he
cites this new research that showed across 11 state-of-the-art AI models, we find that models are
highly sycophantic. They affirm users' actions 50% more than humans do. And they do so even in cases where
user queries mention manipulation, deception, or other relational harm. So research is there.
These models, it's not just what you're feeling.
Well, the sycophancy can get dangerous when you speak with people with mental health issues. Like, he talked about how he has a manic friend who would, like, every now and again come up with these ideas. And, you know, Jack would be like, no, you probably shouldn't do that. What happens when the AI says go for it? That is a real concern.
Yeah. And well,
Sam said they have new tools.
They already mitigated it.
It's all okay.
So just take him at his word, right?
No, I'm not doing that.
That is sarcasm.
That is human sarcasm right there.
All right.
Let's talk about Zuck and Apple,
because I have a theory here and a hot take
that I wanted to share with you.
And maybe I should write about this.
This is from Bloomberg.
Apple's newly tapped head of ChatGPT-like AI search effort to leave for Meta.
It's a headline we've seen forever.
The Apple Inc.
executive leading an effort to develop AI-driven web search is stepping down, marking the latest in a string of high-profile exits from the company's artificial intelligence division. The executive, Ke Yang, is leaving for Meta Platforms. Just weeks ago, he was appointed the head of the team called Answers, Knowledge and Information. The group is developing features to make the Siri voice assistant more ChatGPT-like
by adding the ability to pull information from the web.
So for those keeping score at home, here is a list of folks from Apple's AI division that have left for Meta, including, it seems like, a large percentage of its leadership, a lot of key leaders: Ruoming Pang, who led Apple's foundation models team; Mark Lee, a senior AI researcher; Tom Gunter, senior LLM researcher; Jian Zhang, Apple's lead AI researcher for robotics; Frank Chu, a senior AI leader in Apple's search and cloud; and Ke Yang, of course, the aforementioned head of Apple's Answers, Knowledge and Information group.
So people might say that this group was not effective within Apple, so it's fine that they're leaving.
I say, let's give it some time within Meta, because they'll have a culture that won't be as restrictive as Apple's,
and we'll really be able to see their talents.
But more than that, here's my hot take, and I'm curious what you think.
I think what Mark Zuckerberg is trying to do is just raid Apple of all of its top AI talent,
even though they haven't produced great results,
he is, in my opinion, potentially just trying to completely kneecap Apple's ability to execute on AI.
And you see it with him going in and getting the top researchers leading crucial new projects within the company, like Yang was.
And maybe this stems from the fact that Zuckerberg really hates Apple.
Apple tried to destroy his ad business.
Tim Cook has turned off his internal apps because of ILA.
Tim Cook has criticized Meta and Zuckerberg while they were having their scandals. And I think Zuckerberg is just seeing this as an opportunity to be ruthless, and he's not so much trying to take the talent as he's just trying to burn Apple's AI initiative to the ground.
I like this.
Well, because honestly, my first reaction when I've been reading these kinds of stories is: that's who you want to get? The Siri people, like, the Apple AI people? I would think that, and maybe it's organizational constraints that didn't allow these folks to reach their true potential, but typically, I would not think you want the people who made Siri and, you know, the entire Apple AI suite.
But I like that theory. And also, I actually think, I think for Facebook on the hardware side, this is the first time this is ever going to be part of their business, like the Meta Ray-Bans we're fans of.
I still haven't.
Have you tried the new one, the motion sensor?
Yeah, I haven't tried it.
I definitely want to.
I'm a big fan of the regular Meta Ray-Bans.
Like, hardware is going to be on the competitive landscape for Meta for the first time
in its history.
So then it's, I mean, separate from iOS 14.5 and trying to kill their ads business, I think
they're looking at Apple as a legitimate hardware competitor going forward.
And why not try to kneecap them?
And also, yeah, it's probably a pretty good pitch, like,
and an easy one to be like, so do you want to stay there and keep working on Siri
or do you want to come over to a place called Superintelligence Labs?
With a lot of money, but I agree.
Whatever the pitch is, it's working.
And it's happening.
You are spot on, as Apple moves from the Vision Pro to its own smart glasses initiative.
You think that's not on Mark Zuckerberg's mind when he's making
these calls to these people? That is like a killer
Zuck move right there.
It may even be
bolder than the copying Stories and
Reels stuff. I mean, yeah.
I'm serious. And in reality, like
ruthless move, it's crazy.
Typically from a more kind of
like regulatory antitrust lens,
this kind of behavior,
like if you're just buying up the talent to kind of
kneecap the competitor and you're not
really even planning on
doing that much with them, would be,
like, frowned upon,
let's say, maybe not in today's environment, but in reality, it's Apple.
Like, I don't think any, there's any sympathy anywhere in the world for anything going on
at that company, so.
I think it's important for us, you know, we're a couple weeks removed from Sora.
Sora is still at the top of the App Store, but I don't know if you can feel this,
but I certainly feel the appeal and the interest fading.
And I wrote a story in the Big Technology Substack this week
about AI's sameness problem, talking about basically how eventually, and pretty quickly,
all Sora videos start to feel the same. The same could be said with AI-generated images,
and sometimes they're differentiated for a minute, like the Studio Ghibli prompt, and then everybody
uses the prompt, and it just, again, returns to sameness, and then it becomes less of interest
and people stop using it as much.
AI technology just takes the average,
tends to take the average of averages.
It minimizes the difference between its output
and the average human-generated work
so that its AI images, video, and text
will often appear uniform
and really that uniformity can only be broken
with really deliberate prompting
and even then it's not always able to do so that reliably.
And that to me is why AI content,
even though it seems like it's going to take over the world every five minutes has not been
sticky. It's just all kind of the same. So let's turn it to you. What's your reaction to this
hypothesis? And is this a fatal flaw with AI content or is this something you can get over?
I think in the Sora context, and I mean, this was my exact behavior. Like, day one and two was just ripping out videos, and then I have not used it as much, other than with my son; I'll kind of play with him. It kind of is living already in my mind like Suno, the music creation AI, where it's really cool and fun for a very brief moment. But in reality, like, the lasting
power of it doesn't really, it's not there. But I think overall, though, I do, this is just a
limitation of how to use it in the current state. It just came out. I do think people are going to,
especially with video, figure out how to be funny, creative.
I mean, honestly, like, I think one of the smarter things that OpenAI did
was really kind of centering the launch of it around meme culture.
And I think that's where this is going to have the most, like, staying power.
It's making funny things that you send around to your friends.
And in reality, I think it's going to kind of have that distribution of, like, talent
where in the end it's going to be a small percentage of people who are really good at it
and making all the videos and sending them around.
But versus us in the group chats making really funny things and sending them around.
So I think people are going to start figuring out how to use it.
But at the moment, it feels like Suno to me.
Yeah.
And again, like, going back to this, like, is it going to replace a creator? Well, maybe a creator, like, or replace the creator economy. We talked about this in the past. Maybe somebody who's really good at these prompts, because it's the same as creating regular content, right? It's hard to do it, and so maybe that's a new skill. But again, I think it's a little bit more difficult to break through because of the uniformity of so much of this content.
Well, but to me, the uniformity, I think AI as an average of averages, is still an idea around not being descriptive and creative on the prompt and how you build it. So I think, like,
The same with text in writing.
You can either write just the most generic crap or you can start to use it in genuinely
creative ways and actually put in time.
So I think I'm still overall bullish that this creates a new type of creator.
It democratizes creativity a bit more.
I'm not over Sora yet, but I think it's got some work to do.
And of course, the natural next thing that we talk about on this front is how business
communication has gotten the same. And I've noticed something really interesting over the past
few months. I'm getting more PR pitches than I ever have before. But it seems like they've all been
written by the same agency. And it's not like the PR agency, the PR industry decided to
standardize pitch style. It's that the AI has done it for them. And it's legitimately, it's hilarious.
I read these and I'm like, I know that you used chat GPT to write that. And I think this is something
that's becoming increasingly common across all business communication and has really ushered
in an era of workslop. So what do you think the implications are of the workslop era? Do you welcome it? How do you feel about it, Ranjan?
Okay. I have been waiting to rant about this for a few weeks now.
It was actually a Harvard Business Review article in late September where I first saw the term workslop, and they defined it as low-quality AI-generated posts, or sorry, AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task.
mass-scaled marketing. It can be, it always was kind of crappy anyway, so like the idea that
it's going to be good. That's almost what AI was made for. To me, the more worrisome part
is actual human-to-human interaction. Now, every call summary I get is like 80 bullet points,
which in the past, like getting a meeting summary was a pain in the ass. So you like,
but people, all I'm asking, all of our listeners is, before you send out your AI generated
content, read it yourself first. Just force yourself to.
maybe condense it, maybe add in some misspellings just to make it feel, rewrite a couple of
the sentences to make it more real. But the part of this article I really liked is it kind of
brings up this idea that workslop uniquely uses machines to offload cognitive work to
another human being. When coworkers receive workslop, they are required to take on the burden
of decoding that content. Like to me, when you use AI to just create
these just big walls of text to send around. What you're saying is that you did not take the
time to actually think through what's important, and you're asking the receiver or the recipient
to do it. So my call to our listeners, please stop with the workslop. Just spend a little,
use AI, use plenty of AI to improve your efficiency and productivity. Just read what you're
sending out. How much AI workslop are you seeing on a day-to-day basis?
I see a good amount across, like, I mean, again, emails in the business world now are so long. LinkedIn posts, which are kind of, I mean, we all know LinkedIn slop. And it's kind of like, I still go back and forth. I went to a very international business school in Seattle, and, like, there's a lot of non-native English speakers who had never posted on LinkedIn and now just have these epic, massive posts that are just so work-sloppy. And in a way, it's democratizing the ability to communicate, but, like, all I ask: if you didn't take the time to read it yourself, don't send it out.
Don't post it.
I think that's a fair rule.
That's all we need in society.
Just read whatever the output is first and just make sure to spend the same time that you're
asking the recipient.
Right.
But now we have, you know, AI to read AI, right?
Well, that's where, that's the Gemini or the Copilot summaries.
No, I literally will take these gigantic summaries and then run them through AI again to give me the real summary of this.
So is the lesson that business communication has always just, I mean, it's not like business communication's been good.
Is the lesson that business communication has always been bad?
Maybe this is an improvement, right?
where you can just sort of like, you write an idea, the AI generates it, then you filter it
through an AI and you get the idea out. And that arduous process of trying to communicate is now
automated. I don't know. As I'm saying this, I'm like, that's...
No, no, actually, it's funny you bring that up. This is like a long-running belief of mine, that business communication was terrible. Like, it was already kind of LLM-feeling before LLMs existed.
And then we were starting to move towards more human communication in the business world, and people were starting to feel more comfortable actually writing what they're trying to say rather than couching it in a ton of corporate jargon. And now we're just back, and they're not even doing it themselves.
So we had a shot, people, but we didn't take it. We messed it all up.
We messed it all up.
You know what's going to be real bad? When somebody's talking with their spicy ChatGPT and they ask it to write a work email and they don't read it and they send it.
Don't, don't cross those wires. See, that's what you keep Gemini on the side for, a little business writing.
And you know it's going to happen. I cannot wait for the first scandal where, like, some public figure, I don't know, actually accidentally sexts, uh, somebody thinking they were talking to ChatGPT. Or when OpenAI needs to juice their numbers a little bit and starts auto-generating Sora videos based on your ChatGPT history and posting them,
that's when things are going to get truly interesting.
I'll say, you know, there would be demand for that.
What might save AI slop is that exact use case.
All right, Ranjan, well, we've made it through.
I don't think we're canceled.
I hope not.
But it was an important discussion to have.
And we do this, of course, in service of advancing the conversation about artificial
intelligence. And we appreciate any listener who stayed till the end today. Thank you. And I
really do appreciate you being here, and we'll come back next week with maybe G-rated content, maybe PG. Maybe G. Maybe more is going to happen on this front. We cannot predict Sam Altman's tweets. So he will lead us on our merry way next week.
Maybe Claude becomes sexy in the next week. We'll see.
That I doubt. All right.
Thank you for coming on as always. Great to see you.
All right. See you next week.
See you next week. Thank you, everybody, for listening once again.
Next week, we will have Panos Panay, the head of devices and services at Amazon, talk with us about the state of Alexa Plus and give us concrete details on the broad rollout.
So we hope to see you then.
Thanks again, and we'll see you next time on Big Technology Podcast.
