Big Technology Podcast - New Era Of Tech Accountability? + The Impending AI Ethics War
Episode Date: January 13, 2023
Ranjan Roy of Margins joins us for another Big Technology Podcast: Friday Edition covering the week's news. This week, we check in on the following stories: 1) JPMorgan's failed acquisition of Frank, a college financial planning platform with an exaggerated userbase 2) SBF's new Substack 3) The SEC vs. crypto 4) AI's impact on the tech giants 5) The impending 'AI ethics' war 6) Is anyone still using Mastodon? For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/
Transcript
Welcome to Big Technology Podcast Friday edition, a show for cool-headed, nuanced conversation of the tech world and beyond.
And we're here once again to break down the day's news, the week's news, as we will do every Friday.
I'm joined, as always, by Ranjan Roy of Margins.
Ron John, welcome.
Thanks for having me back here.
It's great to have you back.
So we're going to start this week's show.
We have so much to cover.
We're going to talk about accountability.
We're going to talk about AI ethics and the fraught nature of what that field might bring,
and maybe the battlefield that we're going to see with that field.
And then we're going to talk a little bit about some of the big stories with AI,
the competitiveness with the tech giants and, of course, Microsoft's investment, or potentially pending investment, in OpenAI.
But last week we ended on a fun story, so this week, let's start on a fun story. There's this company, Frank. I'd never heard of Frank. Had you heard of Frank?
I had not heard of Frank, but I also have not been a student in a long time.
Exactly. Okay, so Frank is a company that sold to J.P. Morgan for $175 million. Essentially, it was lead gen for J.P. Morgan to reach younger customers, students, and it exaggerated its customer base, at least according to the lawsuit J.P. Morgan is bringing, by an order of magnitude of maybe 10. It really had a few hundred thousand customers, but J.P. Morgan thought it had four million. And the lawsuit has some unbelievable facts inside of it that Matt Levine did a great job breaking down this week.
The overarching theme, though, here is that we're in an era of accountability that potentially
we weren't in previously, where, like, in the past, it might have been a situation where
it's overlooked or whatever, and now it's front and center.
And this person, the founder of this company that sold it to JP Morgan, is now getting sued.
So what do you think about this?
What do you think about the story?
What do you think about it in terms of what it heralds?
Yeah, when we say that the numbers were exaggerated by a factor of 10, I think that's
understating just how crazy this story is. So for a bit more background, the platform Frank,
they said they had 4.3 million customers when they were sold to J.P. Morgan. J.P. Morgan had pressed them during the due diligence to provide more information about this. At first, the founder had resisted under the guise of data privacy and user privacy, but then J.P. Morgan still pushed, so they had to come up with a list of 4.3 million users. So what they did, they actually went out, first they tried to have someone internally do this. That person ended up leaving the firm
because they refused to. Then they went to a data scientist, and actually, we'll probably be talking about generative AI and ChatGPT later. They were early adopters on this, because they actually worked with a professor, basically using synthetic data to take their data set of 300,000 actual users and create an entire new world of 3.1 million users. And that's what they submitted to J.P. Morgan to actually sell for $175 million. It's a pretty freaking crazy story.
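[Editor's note: to make concrete what "synthetic data" means here, below is a minimal, purely hypothetical sketch of generating fake user records at scale with the open-source Faker library. It illustrates the general technique only; the field names, counts, and approach are invented and are not how it was actually done in the Frank case.]

```python
# Hypothetical sketch: bulk-generating synthetic "user" records.
# All fields and numbers are illustrative, not from the actual case.
import csv
from faker import Faker

fake = Faker()

with open("synthetic_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email", "date_of_birth", "school"])
    for _ in range(1000):  # scale the loop up to reach millions of rows
        writer.writerow([
            fake.name(),
            fake.email(),  # fabricated addresses bounce when actually emailed,
                           # which is reportedly how the scheme unraveled
            fake.date_of_birth(minimum_age=18, maximum_age=24),
            fake.company(),  # stand-in for a school name
        ])
```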
Yeah. There's so many questions to be asked here. How did this even get past J.P. Morgan's due diligence department?
I mean, how did J.P. Morgan not look at Frank's email addresses beforehand, if that was really something they were interested in?
I mean, obviously, my first answer will always be ZIRP, because this still happened in the headier days.
Oh, it did happen during ZIRP.
Yeah, this happened, I believe it was 2019.
Right.
But even then, it's still, it's such a reminder that when an acquiree fits a very specific profile, again, J.P. Morgan,
trying to do things that are more innovative, trying to reach a younger user base, trying to
reach the next generation of financial consumers, clearly will get a bit ahead of itself and probably let certain things slide. Also, they did have an outsourced consultant that apparently did vet this and approve this during the diligence. Maybe they should sue them. I mean, I guess in terms of intent, of who really tried to defraud whom, it's pretty clear from everything that has come out here.
Yeah.
And I think it's the idea that they thought they could get away with it and they did is actually
the most fascinating part to me, that something that's so over the top and clearly fraudulent,
the fact that a founder still went ahead and did this and tried to get away with it, when clearly you will be caught at some point.
And I mean, the story of how they actually were caught: J.P. Morgan did try to email 400,000 of the clients, of the emails on the list. I think it was about three quarters of them bounced, and there was an incredibly low click rate and open rate, and from there, that's when they started to dig a little deeper.
And the real question here is, are we going to start to enter this age of accountability,
which I teased at the top here?
There's a great story about this.
First of all, the journalism on this has been unbelievable.
It is always wild when, A, a company buys something that really isn't very real,
or allegedly not very real for so much money,
$175 million.
That's crazy.
But then typically,
and this is something that Eric Newcomer brought out in his newsletter,
typically what happens is founders will just distance,
or investors,
acquirers will just distance themselves from the fraudulent asset.
Think about, you know, Quibi.
Not fraudulent, but not really real.
Think about Quibi, right?
It wasn't like, you know,
the investors.
I did not think Quibi would be coming up on this Friday, but I'm glad to hear the name.
But like when stuff fails really poorly or it doesn't really hit the way that investors think, usually the investors just kind of shirk away. But in this case, J.P. Morgan is suing. And this is
from Eric Newcomer. He says, these kind of lawsuits are so rare. I have to believe there are many cases
where a public acquirer simply writes down their investment and tries to prevent shareholders
from figuring out that the acquisition was an obvious mistake from the beginning. I wonder,
and this is the key point, if founder accountability might get a jumpstart this year, as the
downturn and private tech continues to play out. That's a really interesting thought. I hadn't
thought about it that way. And maybe Eric is on to something here. Yeah, I will admit it's not
off the top of my head. I did just Google it, but JP Morgan had $78 billion in free cash flow last
year. So the idea that they would just normally write off some investment like this where clearly
things didn't work out, clearly they were fleeced, I think in the last few years, or five years ago, that would have been the case. I do completely agree that right now, I think there is a complete mindset shift where businesses realize that they have to start holding this type of behavior accountable. And it's because this was allowed to go on for so long that this is happening.
And again, I think it's clear from everything that's coming out just how obvious this fraud was.
And I think that's why they chose to go after it. But I think we're seeing that in a lot of different
arenas in technology now, where things were allowed to be slipped under the rug. Clearly, Theranos was the big story a few years ago, but there wasn't much else. I think, between this, we're going to see more and more instances where, if it's just complete fraud, I mean, they're going to go after you.
So, I mean, one of the interesting things that people are bringing up with this Frank story, again, so many accounts that weren't real then getting effectively packaged into a product that J.P. Morgan bought, is:
didn't they know that this day was going to come eventually?
I mean, is it indicative of the days that we've come from
and the days that we're going to, that this type of stuff was allowed to happen?
It's pretty interesting to me.
What do you think?
Yeah, I think that, so in this case, from what I saw in the coverage,
she had actually sued J.P. Morgan two days before they filed their lawsuit,
so it appeared to be a bit preemptive.
And clearly, the more that came out, there was a lot of bad blood. She had come in as the head of student solutions, and very quickly things went downhill once they started uncovering all of these fake email addresses. And it became essentially a head-to-head battle. So I think in this case, again, there appears to be a bit of that back and forth, where they may have just written it off if this did not get personal. But at a certain point, when the fraudster tries to come after you, once they actually fight back in some way... I mean, J.P. Morgan is not going to take this
lying down. And my favorite part of this is, in her complaint, the founder, Charlie Javice,
about J.P. Morgan, first of all, her lawyer is Alex Spiro, who is Elon Musk's lawyer, was, or is
Elon Musk's lawyer, heavily involved in the Twitter, in the entire Twitter drama, was
reportedly a big part of the transition team when Elon took over. And let's remember, he was
on the other side of actually filing the lawsuit saying that Twitter was having fake users last
year. So, I mean, clearly a litigator who is happy to play both sides. But when you see all this
coming up. Good point. I hadn't even thought about that. Yeah. I mean, yeah, Alex Spiro. I feel
there's going to be more that comes out that's interesting around just him as a character and
all of this, everything that's going on in tech right now.
But, but, yeah, I think the fact that J.P. Morgan is going after this publicly and aggressively,
I mean, it's a huge sign that founders will be held accountable.
And, I mean, again, SBF, Gemini, today, all of these other areas, we're seeing people
being held to account.
So let's move to those couple stories.
First of all, SBF and Substack.
SBF is obviously not keeping quiet about the allegations against
him. He spoke to one of my friends, Teddy Schleifer at Puck News. Teddy wrote a terrific story
about it. And he also started a substack. I mean, come on. I can't. That was astonishing.
So, full disclosure. I didn't read it. I understand that he has a story to tell. I've heard
enough about it at this point. And I didn't personally want to hear any more about it from him.
I found it interesting. Okay, he's doing a Substack. It was when he initially was charging people, or like had a paid option. And then he's like, oh, that was a mistake. He turned it off. What do you make of the fact that he's in this position and, I don't know, maybe he should be talking, maybe he shouldn't be talking? It's interesting. The traditional corporate playbook is, if you're in a place of being prosecuted, don't talk. But he has this view that he wants to talk and talk more. It's very interesting.
Well, full disclosure, I did read it. I tried to trudge and power through it, and it was the same thing over and over. And again, you can try to argue that, I guess, the most generous interpretation
I'll give is he has been consistent. He has with a straight face told George Stephanopoulos, told,
I can't even remember who, Andrew Ross Sorkin, whoever else, that I did not steal money. This was
just a bet gone wrong. This was a leveraged exchange gone wrong. That happens. The London Metal Exchange, a few years ago. Like, over-levered exchanges can go bankrupt.
hedge funds can make bad bets. He just keeps saying it over and over. He keeps saying that there were
letters of intent to fund FTX up to the tune of $4 billion right up to the bankruptcy. He keeps saying
these same things over and over, even though everything has shown that they're just completely
untrue. I mean, Caroline Ellison, Gary Wang, Nishad Singh, everyone has been coming out.
And it's clear in all the filings that are coming out that this was just fraud from the
beginning. This was funneling money from FTX to Alameda and personally making political donations,
buying luxury real estate, having, what is it, like $100,000 catering bills. None of it was,
you know, just a bet gone wrong. I'm sorry, this happens. This was just outright fraud in terms of
actually, you know, misrepresenting to customers, saying your account is safe with FTX, when you're just funneling the money over to Alameda. And he's still just not answering that
simple fact.
Okay, so the investor, Bill Ackman, I want to get your take on this, had a thread about
this, and he said he previously had been accused of something, and it came out that he wasn't actually guilty of it.
And we know that Caroline Ellison is guilty because she's pleaded guilty.
We don't know about Sam.
When you hear that, what do you think?
So the Ackman thread, I did read, the one bit of empathy,
sympathy, I might feel for him is, I do think the more I learned after the fact, I remember living
through it but being younger, Eliot Spitzer, in terms of kind of overzealous, politically motivated regulation, I do think, you know, he definitely pushed the envelope in the other direction.
And so when you see that name, maybe Bill Ackman was the target of something that might have been
a bit overzealous. However, still comparing that era of, you know, like hyper-regulation,
really aggressive policing of Wall Street, Henry Blodget being barred from the industry, you know, whomever else, there were any number of cases that were going on. I mean,
trying to compare that to today, this is what we're talking about. There has been no accountability
over the last few years. So when regulators are going after you, when the SEC, when there's
criminal complaints. I mean, right now, I think it's very clear that there's a reason.
The envelope is not pushed so far in the other direction where they're going after absolutely
everyone. I mean, you are the biggest marquee fraudster of the moment. You're going to be a
target. Now, speaking of targets, let's just do one more crypto story and then we can move on to
some of the other stuff that we want to talk about. The Winklevoss twins, by the way,
I didn't realize, but it has become common lingo that people just call them the Winklevii, which is hilarious. Like, I heard them called the Winklevii on CNBC, and I was like, oh my God, that meme from The Social Network is now reality.
They are in a bit of an issue.
The SEC is coming after them for their earn program.
And it's this convoluted relationship between them and this company called Genesis, where people were effectively, imagine they loan their crypto out.
They get 8% on the money.
I mean, you have to ask, why am I getting this 8%?
I think that's one of the lessons we're learning with these crypto firms.
And ultimately, they haven't been able to get the money out.
And now the SEC is coming after them saying this was a security.
What is your read on what's happening here?
And is this sort of, are we at the end here of these type of things with crypto or is this still, are we still in the middle?
I guess we haven't had Binance yet.
I was talking about that with Kate Rooney from CNBC who was on earlier this week.
But what's your read, Ranjan?
I mean, we're certainly in the middle because Sam Bankman-Fried in that Teddy Schleifer piece
is still living a pretty comfortable existence in a large Palo Alto home.
The Winklevii are still tweeting, like, I mean, we're definitely still well in the middle of
whatever is going to happen.
I do think maybe we're approaching the end game because this is the point where everyone
is pointing the fingers at each other.
Everyone, you know, the good vibes of "good morning, we're going to make it" are long, long gone, and now it's everyone else is a crook. And the Gemini story, I think, is very interesting because, as you said, Gemini, and it gets a bit confusing because there's Gemini and Genesis, it's really an annoying story to follow.
I know. Even reading through every article and trying to track which one is which is tough.
So Gemini, they had their Earn program, promised exorbitant yield. To get that yield,
they lent that money to Genesis, which is under Digital Currency Group, Barry Silbert.
And then Genesis has had some trouble from the FTX fallout. They're not able to give that
money back. And this is where you're getting letters between Tyler Winklevoss and Barry Silbert
and everyone's saying whose money is whose.
And that is, I mean, that's one of... In all of this, Sam Bankman-Fried has $450 million of Robinhood shares. Does BlockFi get it in their bankruptcy? He's claiming that it's his and it had nothing to do with FTX. There's all these pockets of money that are left, of fiat money or Robinhood shares, which theoretically are liquid.
So everyone is fighting over them.
And the biggest thing about today, the SEC, or yesterday, once they filed this enforcement action against Gemini. The thing, in terms of accountability, that made my blood boil, in terms of Tyler Winklevoss: he was tweeting, and I copied this down, that this action does nothing to further our efforts and help Earn users get their assets back. Earn is the Gemini program. Their behavior is totally counterproductive. And he says, super lame. It's unfortunate that they're optimizing for political points.
This is exactly what SBF is doing. Somehow you lost everyone's money, and now you're saying you are the one who should be entrusted to get back their money. And then the last one I noted in this thread of his: we look forward to defending ourselves against this manufactured parking ticket. Like, I mean, the condescension towards the largest regulatory body of the United States government, it's so palpable there. But it is... You have not had any pressure. You just lost, theoretically, $900 million of your customers' money.
And still, they're just not treating this as a huge deal, as a criminal deal, potentially.
They're treating this as it's an issue.
We're the ones who can get it back, and you guys are screwing up our efforts to do well by our customers.
I mean, the way that they're talking, and I also copied down the point where they were like, the super lame. I mean, talking about the SEC like that, it's residual from a moment where there was no accountability. But maybe what we're seeing right now is accountability making a comeback, at least in some way. That's the hope. Do you think that that's the case?
I think so.
Matt Stoller, who, uh... of antitrust fame.
Yeah, we've got to get Matt on with us on one of these Friday episodes.
Yeah, that would be a fun one. One thing I really have taken away from things
he's written is that government needs to govern and has forgotten how to govern. And this is something, for me, over the last few years, I really think about: the whole attitude of deregulation. I mean, and obviously this extends to everything. Airlines, we're seeing issues right now, whatever else. But especially around financial markets, this attitude of, like, you see it again, super lame, manufactured parking ticket. They're calling out the regulator who has just filed an enforcement action against you,
and, like, making fun of them, essentially, because they're so used to not being held to any type of accountability. And I do think, again: SBF, if he goes to jail; with the Winklevii, who is responsible financially; does more stuff come out? We'll wait to see
with Binance. And that's all in this area. But again, going back to Frank and J.P. Morgan,
the fact that their founder is still aggressively coming out against J.P. Morgan.
And saying that there's something around, like, J.P. Morgan just wasn't ready to help us attract a young, diverse, new audience, that they weren't ready to do that. Like, just saying ridiculous things when you are under the microscope and, like, caught red-handed
around doing very bad things. I think once we see more action, this, I mean, this era of
accountability will definitely be here to stay.
We're here on a new Friday edition of Big Technology podcast.
Ranjan Roy is with us.
We started doing these last week.
It's a new thing we're doing in 2023.
The flagship episode on Wednesday, where I interview newsmakers, talking deeply about
what's going on in their world, is still going to continue.
And we're also going to do these shorter ones on Friday, where we break down the week's news,
which is what we've been doing in the first half.
If you like it, please rate us five stars on your podcast app of choice.
And if you really like it, please stay tuned, because we'll be back right after this break.
Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business,
tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and
informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers
break down the biggest business headlines in 15 minutes or less and explain why you should
care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back on Big Technology Podcast Friday edition. Ranjan Roy is here with us.
Second week in a row.
Great to have you, Ranjan.
Oh, you're muted.
It's Friday again.
Let's do this.
Let's do this.
So, by the way, we're going to do these live on LinkedIn every single week.
But I messed up the technology this week.
So we're not live.
But I will post the video.
Not this week.
No, we got some questions last week.
We did get questions.
So next week I'll make sure to get it done.
Join us. We'll be live at 11 a.m. Pacific, 2 p.m. Eastern, 8 p.m. Central European Time, if you roll that way like I am at the moment.
And let's get into the second half.
So obviously artificial intelligence is a huge topic of interest for anyone focusing in on the technology world right now.
There was a really excellent piece by Ben Thompson in Stratechery talking about how AI is going to impact the five big players. And there are obviously interesting things about apps on Apple being powered with new technology that can lead to things like image generation with an app like Lensa, things like Meta using AI to do better targeting of ads now that Apple's cut off its ability to target precisely.
One of the things that I found interesting reading through this article is Ben thinks this is a real shift change, a shift change in the way that getting the PC was, going to the internet was, moving to cloud was, moving to mobile was, and now there's this new shift. And of course, it feels big as we're watching it happen. If you've used ChatGPT, if you've used DALL-E, if you've used anything of this nature, you realize that we're in a new
world right now. But my question is, how much is it going to shake up the business world?
Because we did have Aaron Levie from Box on a couple weeks ago who really thinks that this is
a sea change. And I know that you read Ben's piece, Ranjan. Did you think that it put the changes
that we're going to see with this wave of AI into the appropriate context?
And it's also interesting that this was all happening all along the way.
And right now we're talking about it because we have these generative use cases.
But it's more than just the generative AI.
So what's your read on the magnitude of the shift here?
Yeah, this is one area where, I mean, Ben's piece, I think, is fantastic at really laying out the concrete use cases for where artificial intelligence is going to change these businesses.
As you said, each company, each of the Big Five, Microsoft, Meta, Alphabet, Amazon, everyone has their own specific use cases.
And I do think this is going to completely change the way companies operate, business is done.
The Meta one, again, Business Insider actually just had a really good long piece around it. It was like, Meta's advertising business is back after the Apple tracking efforts in iOS 14, I think it's now a year and a half ago, and losing the tracking abilities. Their, like, advertisers and agencies are starting to say that it feels like it's back, and that has been very good.
Oh, I was waiting for that story.
Yeah.
Yeah.
It was inevitable. It's going to take back a little bit, and then there's inevitably going to be a story where someone writes, oh, everything's hunky-dory again, because advertisers are moving a little bit of their spend there.
But I think that's overblown. But go ahead.
You think? All right, all right. No, I think, but theoretically, like, let's take Meta. Like, just kind of heavy-handed tracking, like cookie-based tracking, app- or device-based tracking, that is going away; we're moving away from that world. So instead, leveraging AI to actually create models where you're still able to create user profiles and still able to target in new, innovative ways, it seems theoretically possible.
And if anyone is positioned to do it, I think they are. Obviously, I think the one everyone is
excited about and interested in is Google search versus, I mean, if Bing becomes a name and a player in the 2020s, that would not have been on my bingo card a year ago. But, I mean, this idea that Microsoft, and we can definitely get into their relationship with OpenAI, I mean, Satya Nadella is the GOAT CEO right now, because of carefully fostering this relationship and having firsthand access to GPT-3, OpenAI, all their technology. And again, ChatGPT, we talked about this last week. Search will change. Search already has changed. Google search is a mess right now, but they've already started, Google is the famous one, with snippets, trying to, without you going out to another page, extract what is the relevant answer to your query and give it to you.
I mean, large language models create an entirely new way of doing this.
I still am skeptical about Casey Newton on your show saying that he will find a new pair of shoes via ChatGPT, but I do think, I mean, this will change search. It's inevitable. For anyone who's used the technology, being able to synthesize a large number of websites into a coherent, simple answer, I mean, that's what the promise of search always was.
And it's going to change a lot of the internet, because the advertising-based internet,
click here, use Google to go to my website so I can run an ad against it.
I think that's going to change and everyone needs to start preparing for it.
Right. And Ben seemed to be most pessimistic about the future of Google.
But let me now stand on the side of Google and talk about why Google is actually going to be better. Of course, it's more that the old Google search might be better, just for the sake of argument.
Of course, it is going to be fun to talk to these things.
But there is definitely a utility in being able to poke around different websites and do your own research.
Because whether we understand it or not, part of our way of interacting on the internet is looking at different sources and
making a determination of what we believe after we triangulate.
That's, I think, an important part of using the Internet.
And that's going on for everybody.
And if you have a chatbot that is going to get things wrong, and it inevitably will, then you're going to end up... or, it will present one perspective, because it can't sit there and talk you through all these different perspectives for every question you ask it.
So I do think that it's going to be, it's going to be tougher than a lot of people imagine
for that type of experience to replace the classic search.
I think classic search is pretty evolved.
However, on the other side, generative AI, I do think its, like, most potentially malicious impact, call it, is going to be just millions and billions of websites and content created, trying to game SEO. And Google completely needs to change the entire way it approaches search in order to deal with the coming influx of just mass-generated content.
They've said in the past, like Danny Sullivan, their search liaison, we will penalize AI-generated content. That was before ChatGPT was public. That was before, I mean... Having played with this technology a lot,
I mean, the idea that there's going to be very simple ways or even complex ways of being
able to perfectly tell what is AI generated and what is not, I don't think is going to be
feasible.
And I actually think there's going to be a lot of good AI generated content, and that
should not be penalized in any kind of search.
So I think Google is going to have to rethink the simple idea it was built on, the foundation that a website that is on the internet has inherent value: someone put effort into it, and now we are going to rank these. And obviously there's always been black hat SEO and just content farm-y things.
But I think the scale of that is going to change massively.
So, yeah, I do think from a search perspective,
this creates challenges on both sides for Google.
On one side, you have this idea that now your search query is returned to you in this highly personalized narrative form.
And then on the other, the actual way Google search works, I think is going to be impacted.
Well, you used the one argument that hurts the most for me, and sort of backed me into a corner, because I've been the victim of this this week, where I have a story up on Big Technology about a publication on Substack that was started up a week ago and, on Saturday, published a story that was not a carbon copy, but a plagiarized copy of the story that I wrote about the creator economy the week prior, using exact sentences, but also having remixed a lot of the content and spitting out this new article.
And the interesting thing about it was the writer who put it together,
who was anonymous and called themselves Petra, ended up admitting in the comments after this,
after this plagiarized post went to the front page of Hacker News,
which is worth thousands of views to any publication,
this person admitted in the comments that they had used AI to remix
or to quote unquote improve readability.
And actually, I mean, you know,
they never cited the fact that the original content was coming from me.
So on that point, yeah, I think that there's some serious,
there's a serious issue with it.
I just would... I also wouldn't want ChatGPT to be like, oh, you want to learn about the creator economy? You know, here's a summary of what Alex just wrote, and then not having anybody come to my page. If that happens, I'm in trouble, and so is the entire... I mean, those types of applications really hurt the whole web. But yeah, it's changing everything, I really think. I mean, as writers, we have to think about this; as Google, how they rank web pages, they have to think about it.
Right.
And no one's going to raise their hand and be like, oh, by the way, I used AI to write this and Google, please derank me.
Yeah, yeah, yeah, exactly.
Substack wouldn't take the post down.
The AI generators had no way to find the person.
So the traceability of all this, because at the end of the day it's just text, is really difficult.
No, no, and we're going to have to live with it.
Who is to say that a person did not take your post and just rewrite it?
Exactly.
I mean, the plagiarism has happened since the beginning of time.
So yeah, I think it is really important, and there needs to be, like, open, honest conversations
around this right now because it is going to change how everyone approaches just content in
general. And I do think, like, I mean, the Sam Altmans of the world, whoever else, they need to be leading these discussions. And I think the ethical considerations around this are huge. And I'm actually incredibly bullish on all this technology, because as someone who has written very boring content projects, I can tell you there's a lot of copy produced in the world that is not exciting for someone to produce. And the more that gets automated,
great. However, what are the other implications of this plagiarism, someone rewriting your post? I mean,
this stuff, I agree, there needs to be, we need to at least start thinking about what the guardrails
look like.
Did you see that CNET is now having AI write posts for it?
Straight up, write articles for CNET.
Yeah, I mean, so I thought, so I had looked at, I've been like following the space very closely.
Even years ago, there was, I think it was like 2015, there was a company called Narrative Science. And they, I think it was Reuters or the Associated Press.
Yeah.
And again, Forbes also used them.
Okay, yeah, but, but, but, and it made sense.
So earnings reports, sports scores, but that was basically, essentially, a fill-in-the-blank type of service. It's like, we have these, essentially, what are templates: an earnings report comes out, give the headline. There are some if-then statements around, if, you know, the earnings per share are lower than the estimate, you frame it in a more negative way. That stuff's been around for a long time. And in some ways you could argue that, you know, ChatGPT is essentially a smarter version of that.
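[Editor's note: for contrast with ChatGPT, here is a rough, hypothetical sketch of the fill-in-the-blank, if-then style of automated article writing described above. The function, field names, and thresholds are invented for illustration and are not taken from Narrative Science, Reuters, or the AP's actual systems.]

```python
# Hypothetical template-based earnings headline generator, in the spirit of
# the fill-in-the-blank services described above.

def earnings_headline(company: str, eps_actual: float, eps_estimate: float) -> str:
    # Simple if-then framing: beating the estimate gets positive wording,
    # missing it gets negative wording.
    if eps_actual >= eps_estimate:
        frame = "beats expectations"
    else:
        frame = "falls short of expectations"
    return (
        f"{company} {frame}, reporting EPS of ${eps_actual:.2f} "
        f"versus an estimate of ${eps_estimate:.2f}."
    )

print(earnings_headline("ExampleCorp", 1.42, 1.35))
print(earnings_headline("ExampleCorp", 0.98, 1.35))
```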
But yeah, I do think journalism and content providers will be incorporating
more and more of this technology. And I think we just have to figure out what that means.
I'm still a little skeptical, though, whenever anyone says, there was definitely a while where, like, you know, this article was written by GPT. And, you know, like, no, you've spun out a draft and then you edited the entire thing.
Yeah. Okay, the conclusion of this is, I do think:
Yes, Satya Nadella made some very smart bets with his closeness to OpenAI.
And I'm about to speak with Todd Bishop about it.
Todd is the founder and editor-in-chief of GeekWire. And it's, like, the Seattle trade publication that covers the tech out there.
And I'm very eager to hear what his take is on the whole Microsoft situation,
especially when it comes to Open AI.
So listeners, if you're interested, check that out this upcoming Wednesday,
Todd Bishop coming onto the podcast.
But, Ranjan, I agree with you. The Satya thing is, he's obviously the GOAT at this point.
Yeah.
So to put it into context, it was one of the most brilliant maneuvers. I mean, I remember when Microsoft invested a billion dollars in OpenAI in July 2019, everyone was interested: where are they going with this? Apparently, what happened was a good amount of that was Azure credits. We touched on this last week. It's brilliant, because ChatGPT is the greatest marketing tool that has ever touched the space of large language models. I mean, apparently Google has a highly functional comparable model.
Well, that's LaMDA.
Yeah.
And I mean, they invented the transformer model, which this is all based on.
So they should be pretty far ahead.
But instead, OpenAI suddenly making ChatGPT essentially free, exposing everyone to this and branding yourself around it, that has been an incredibly, incredibly expensive thing to do. But what happens with Microsoft, the brilliance of this investment: they invest the credits in OpenAI. OpenAI uses those credits to build its brand. I mean, it's almost like, okay, this will be a stretch, but like a crypto firm giving tokens to a service as the investment, instead of dollars.
Yes, but.
Web 3 was real.
This was Web 3.
Microsoft gave OpenAI tokens.
They used those tokens smartly.
And that's the reason everyone is talking about chat GPT and not any other open source service.
Google's still private, quiet with whatever it is doing.
So I think that that move alone was brilliant.
You know that every cloud conference that's going to happen for the next two years is going to have a big Azure advertisement that says use the tools that are behind ChatGPT, or something like that.
Yeah.
And apparently, apparently OpenAI, before that investment, was already spending $120 million a year with Google Cloud. And then, because of that investment, yeah, it was in The Information, because of that investment they switched completely over to Azure. And now, I mean, from a branding standpoint, just overall, again: Satya, GOAT CEO.
So for our final topic, we don't have a lot of time, but, and we could probably spend a full show on this.
Maybe we will in the future.
But let's talk a little bit about AI ethics, because there is going to be a battle that goes on pretty blatantly about what happens with the ethics of these models. And ethics, in some ways, of course, it's a good-natured way to make sure that they're not causing harm, but the undertone, or the undercurrent, of all these conversations is that people want to control what the models do. And there's going to be a battle, just like there has been in content moderation, about what ethics these AI models should have. It's not just, should they be ethical, it's, what ethics should they have? And there's this post that Sam Lessin put together about how AI ethics is the most dangerous part of 2023 AI. He says, I find the morality limits that companies like OpenAI place on GPT-3 so deeply troubling. It's a free speech problem, but on massive steroids, where a small group of people are taking a technology, slash a fundamental power, and then centralizing the morality power of what it will and will not allow the tech to do.
This is just a preview of what the big debate is going to be.
I think content moderation debate is going away.
I think the AI ethics debate is coming.
And I'll just say what I think about it and I'm curious to hear what you think.
But my perspective on this is that it is a little bit naive because when you create a model like
this, you ultimately are making decisions about the content, just like when you create an
algorithm and you build in functions like the retweet and the like, you are making decisions
about what to elevate or not. So you do have an influence on content. And the fact that you
are the company that's making the product means that you are a small group of people that are
influencing it. So this idea that we shouldn't put limits or we shouldn't have a discussion about
it or there should be no content moderation, there should be no AI ethics, I mean, obviously,
it's naive. There's no should or should not; it exists, and that's the debate that's going to happen. So I'm curious what you think about it, Ranjan.
Yeah, Sam, first of all, Sam, I think, has perfected the art of, like, fitting a blog post into a tweet with a screenshot.
I wouldn't call it perfected. I hate reading those things.
I like it. I like it whenever I come across one of those.
Yeah, go ahead.
I'm a fan. It's the Apple Notes, uh, extension. But okay. First of all, I'm so happy that this is a conversation right now, because if you think about it, you know, like six years ago, this wouldn't have even been anywhere on anyone's radar. We would have just built, built, built, and then, whatever happened, we would try to clean up the mess after. So I think it is a good thing that, you know, there's a lot of smart, responsible people who are tackling this. The other thing is, I'm actually not that concerned, because ChatGPT, sometimes it could feel a little arbitrary as to what is blocked,
what's not. DALL-E definitely sometimes feels like, you know, you try to type something and, you know, this isn't that bad, and it's just like, sorry, we cannot produce this image. However, and this is like a bigger view of where this whole world is going, I don't think these fairly generic large language models are the end solution and the end goal, or how we will interact with them.
I think, already, with OpenAI, the business case and the value is going to be their models like Davinci, Babbage, and Curie, that you can actually fine-tune and train for your own use cases.
That's where businesses are going to go.
That's who's going to use them.
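[Editor's note: a rough sketch of what that fine-tuning workflow looked like with OpenAI's Python library as it existed around this time, the pre-1.0 SDK. The file name, training data, and prompt are placeholders, not anything from the episode.]

```python
# Sketch: fine-tuning a base OpenAI model (e.g. davinci) on your own data,
# using the early-2023 (pre-1.0) openai Python SDK. Placeholders throughout.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# 1) Upload a JSONL file of {"prompt": ..., "completion": ...} training pairs.
training_file = openai.File.create(
    file=open("my_use_case.jsonl", "rb"),
    purpose="fine-tune",
)

# 2) Kick off a fine-tune job against a base model.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
)

# 3) Once the job completes, the fine-tuned model can be queried like any other.
#    (job.fine_tuned_model is only populated after the job finishes.)
response = openai.Completion.create(
    model=job.fine_tuned_model,
    prompt="Summarize this support ticket:\n...",
    max_tokens=100,
)
print(response.choices[0].text)
```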
And again, I think there's going to be all types of fine-tuned models.
There's probably going to be a white supremacist-based fine-tuned model. There's going to be, if Sam's worried about woke speech, he doesn't have to interact with that type of model. I think it's going to get narrowed down for specific use cases, and we will move away from this idea that there's going to be just one overarching model that controls all speech. Because I do think, right now, that is what ChatGPT is
for most people interacting. And I do think it's crazy that I don't know who at OpenAI is making
these decisions. I don't know how they approached it. Clearly they're concerned about the topic of content moderation or societal harm. And I guess that's a good thing. But it being done behind closed doors, I don't think is good either. But I think in the medium term, that's not as much of an issue. I think all of this stuff is going to become much, much more customized, fine-tuned, and I think all types of models will be available, services will be available. But just because it's going to be distributed, I don't think we've seen the end
or really the beginning of this AI ethics fight,
because the people that control the technology
will inevitably set the ethics
or be held accountable for the ethics.
And that's going to be really interesting.
I don't know.
Maybe the government will be involved
and the government will govern somehow
as terrifying as that will sound to...
But do you really want the government coming in
and setting rules for what AI can and can't do?
I mean, do I want Elon Musk governing
what can and cannot be tweeted?
Is that a better solution?
Well, I mean, his governance is basically tweet whatever.
Let's just end on this.
Last thing, I'll say, just let's think about Twitter for a moment.
I mean, there was all this hubbub about it.
I tweeted earlier that Twitter right now really kind of feels exactly the same as it was before Elon, except with bigger losses than it had previously.
We made it till the end before the name Elon Musk came up.
Right, right.
Well, you dropped it.
Yes.
Yeah, I know.
Hey, I've muted the, I've muted Elon. I've muted Musk. It's made the experience a lot better.
Not that I don't care what he's doing, but, like, at a certain point, Twitter just became Elon Central.
That being said, what do you think about the thesis? I mean, it doesn't really seem that different.
And I know you have a Mastodon URL in your, in your handle. How long is that going to stay?
I log on to Mastodon a number of times a day. I think, like, I've limited it. I still am on Twitter a fair amount.
I will say I tweet less.
Mastodon has been getting more and more interesting for me
as a way of finding information,
interacting with some of the people that I would interact on with Twitter before.
But I do think, at this exact moment, I can't think of anything crazy or main-character-ish that Elon's done for a couple of weeks now. And whatever the cause of that is, maybe Q4 was so bad that even he realized, maybe I should tone it down, maybe Tesla's stock dropping however many percent kind of forced his hand on this.
But at this exact moment, from a user standpoint, I think everything might be quiet.
However, I don't know if you saw, Tweetbot, apparently, the API was cut off. There's glitches. I mean, if he cuts off Tweetbot, if they start making aggressive API decisions like that, I think he'll be right back in the center of conversation. And I don't even know what the most recent scandal is, there's been plenty, they'll fade away and he'll be right back.
That's right. Well, I think that will bring us home for this edition, this Friday edition of Big Technology Podcast. It's been fun. Ranjan, thanks for coming on.
Thank you. We can do a whole episode next week about the future of Elon versus Mastodon.
But I'm sure we'll have plenty to talk about. Maybe we get Sam on, we talk about AI ethics.
That'll be sweet.
All right, everybody, thanks for listening.
Great having you, as always.
Again, Todd Bishop from GeekWire is coming on Wednesday.
And then Ranjan and I will be back next week.