Hard Fork - ChatGPT’s Platform Play + a Trillion-Dollar GPU Empire + the Queen of Slop
Episode Date: October 10, 2025
This week, we discuss the standout moments from our field trip to OpenAI's third annual DevDay — including a bizarre chat between Jony Ive and Sam Altman, and the announcement that OpenAI is putting apps into ChatGPT. Then, we try to make sense of the massive computing deal between OpenAI and AMD, and how it could impact the larger economy. And finally, Katie Notopoulos, a Business Insider reporter, joins us to discuss the growing backlash to A.I. slop and why she refuses to stop making deranged videos of us on Sora.
Guests: Katie Notopoulos, senior correspondent at Business Insider covering technology and culture.
Additional Reading:
OpenAI's Platform Play
OpenAI Agrees to Use Computer Chips From AMD
I'm Addicted to Sora 2!
OpenAI's New Video App Is Jaw-Dropping (for Better and Worse)
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Transcript
My favorite moment from Dev Day was when we were eating snacks there at a table, and this guy comes up to us and he's like, are you Casey from Hard Fork? And he starts just talking about how big a fan he is. And, like, you at one point are like, yeah, he's also on Hard Fork, and the guy starts laughing like he doesn't believe you. He's just like, oh, that's a good one, jokester. And then proceeds to take a selfie with you.
I've never been happier.
Yeah, that was a great moment in Hard Fork history, where I tried to, I pointed out the other host of the show, and he was just like, what are you talking about? You're out of your mind.
What a good bit. Another classic Hard Fork bit: me pretending there's another host on the show.
Oh!
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, we visit OpenAI Developer Day, and ChatGPT is eating the web.
Then, OpenAI makes a huge deal with AMD.
We'll tell you about the trillion-dollar battle over AI chips.
And finally, Slop Queen Katie Notopoulos is here to talk about Sora
and all the terrible videos she's making of me and Kevin, and she will pay for her crimes.
Well, Casey, it was another one of those weeks where OpenAI kind of swallowed the entire news cycle.
So you and I took a field trip on Monday of this week to Fort Mason in San Francisco,
where OpenAI was having its third annual Dev Day, its big developer conference, and we both got the chance to go.
Yeah, this was the third-ever Dev Day and the second one we were invited to, after they didn't invite us reporter types last year.
But this time they had a lot to say.
Yes.
So let's talk about what we saw there and all of the many things that OpenAI has been working on.
But first, since this is a segment about OpenAI and AI in general, we should make our disclosures.
Mine is that the New York Times company is suing OpenAI and Microsoft over alleged copyright violations.
And my boyfriend works at Anthropic.
Okay.
So Monday morning, we show up at Fort Mason. This was a big event, something like 1,500 people I heard were there. And Casey, how would you describe the vibe of OpenAI's Dev Day?
Well, I have to say, first of all, Kevin, that it was giving me flashbacks because it was nine years ago that I was in the same building when Facebook announced that it would let developers build bots into Facebook Messenger and Mark Zuckerberg stood on stage and said,
no one will ever have to call 1-800-Flowers ever again, which turned out to be not true.
That's right.
Wasn't it the same exact building? I forgot that it was at Fort Mason, too.
My God.
It was.
And then nine years later, the faces are different, the company is different, but the promise feels sort of the same.
Yes.
So, Fort Mason, it's this big, like, sort of indoor, outdoor, like multiplex.
And, you know, one building, they've got sort of talks going on.
Another building, they've got demos.
I walked around the demo building. They had a Sora cinema, where you could, like, go into a dark theater with a movie screen and sit on, like, very comfortable lounge chairs and watch 30 different six-second clips of AI-generated Sora videos. So that made me feel something. Oh, and they had a phone booth station. Did you do these? I saw it, but I didn't go in.
Well, lucky you, then, because when you went in there, you got connected to ChatGPT, and you could just sort of talk to it, or you could play, like, a trivia game.
I got one of the trivia questions wrong, very embarrassing.
What was the question?
Well, maybe I'll try this one on you.
Okay.
Okay.
What is the branch of AI that teaches computers to learn from their own experiences?
Are you typing this into ChatGPT?
I'm going to say machine learning.
Okay.
Congratulations.
You win.
I said reinforcement learning,
which was a little too
in the weeds.
If you're just listening to the podcast,
I just said what I call a full cluelit,
which is somebody asks you a question,
and then you just sort of surreptitiously
type it into the computer while no one's looking.
So, anyways.
Yes.
Yeah.
So, OpenAI. Let's just tick through
some of this stuff they announced or said from the stage. The first thing that Sam Altman said
during his keynote was that ChatGPT has been growing like crazy. It has more than 800 million weekly
users now. He also announced that the number of tokens that OpenAI is processing through its API has gone up to 6 billion tokens per minute. That's up from 300 million tokens per minute in 2023. So big growth for them. And I just want to say that it seems like every time there's a hot new startup, they have to invent a very impressive-sounding metric, and the human mind has absolutely no way of understanding what it means. It's like, oh, there's a big number and now it's an even bigger number. What should I take away from that? I don't know.
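For a rough sense of what those numbers do mean, here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted on stage (the per-day number is simply that rate extrapolated, not something OpenAI announced):

# Back-of-the-envelope math using only the figures quoted on stage at Dev Day.
tokens_per_min_2023 = 300_000_000      # 300 million tokens per minute in 2023
tokens_per_min_2025 = 6_000_000_000    # 6 billion tokens per minute announced this week

growth_multiple = tokens_per_min_2025 / tokens_per_min_2023
print(f"API throughput grew roughly {growth_multiple:.0f}x")   # ~20x

# Extrapolating the announced rate over a full day (illustrative only).
tokens_per_day = tokens_per_min_2025 * 60 * 24
print(f"That works out to about {tokens_per_day / 1e12:.1f} trillion tokens per day")  # ~8.6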
Yeah. So in addition to those new growth numbers, Sam also announced that Sora 2, the new video generation model, and GPT-5 Pro are coming to the API, along with a new, smaller voice model.
So developers are going to be able to build using those things.
They showed off some examples.
One of them was that Mattel has apparently started using Sora 2 to sort of prototype or mock up new designs for toys. They showed off a video of that. They also announced a bunch of new stuff for what they're calling AgentKit, which is basically the way that developers can build AI agents using this sort of drag-and-drop interface. They showed off some examples of that.
But then the big thing that I think we're going to spend most of our time talking about was this announcement that apps are coming to ChatGPT. So Casey, explain what Sam Altman and other OpenAI executives said about that.
Yeah. And this really is where
this story connects to that Facebook event that we went to almost a decade ago. OpenAI is not the only
company that's tried this. There have been others in addition to Facebook. But what OpenAI wants you to do, when you are using ChatGPT, is to increasingly bring in the other things that you might do around the web. And so to start with, you'll be able to tag Expedia, Zillow, Figma, Target, and Spotify into your conversations. I did a little test the day that it came out where I built a playlist in my own Spotify through ChatGPT. And the basic idea, Kevin, is that ChatGPT is the new front door to the internet. It no longer starts at Facebook. It no longer starts at Google. You just go directly to ChatGPT, and whatever you want to get done, you can do right from that box.
Yeah. So this is something that I think is sort of a natural consequence of ChatGPT kind of becoming more generally useful for people. Also, developers are very eager to get in front of those 800 million weekly ChatGPT users. And so they have essentially created this link between other apps and websites and ChatGPT.
And we should maybe explain a little bit about how that works, because it's not like ChatGPT is going off and searching Zillow or Expedia or Target.com on your behalf. When you tag in one of these apps, it sort of opens this, like, window within ChatGPT. So you're still in ChatGPT, but you kind of have this little sub-window where you can do things. So what's an example of something that you can do other than creating Spotify playlists?
Sure.
So one example that they showed off is, hey, I'm getting ready to move to Pittsburgh, I need to buy a house. And so Zillow just showed a bunch of its own listings within ChatGPT for you to browse inside that experience.
Right. So it's not just, search for a home in Pittsburgh that's for sale for me. It's like, I want one that's, like, four bedrooms and three bathrooms and under this price and it's got a yard. You could also have it sort of go through your history of ChatGPT conversations and say, like, you know, based on where I work and the kinds of things I like to do, find some houses in a neighborhood that you think would be good for me, and it could presumably go do that.
Yeah. And this is the real promise of what OpenAI is doing here: that they will find a way to safely share some amount of the things that ChatGPT knows about you with the other apps and services on the web, in a way that lets you have these very personalized experiences, in a way that lets ChatGPT get things done for you on your behalf. And they think if they can do that, then they really do become the new homepage for the web.
Right. So that's sort of why they're doing it. I assume this is also going to be a way for them to monetize ChatGPT. If you are going on to Target and buying things through ChatGPT, presumably OpenAI will get a cut of that. But Casey, you wrote a newsletter this week about how you think this is similar to a play that Facebook tried to make many years ago around this platform strategy, and you noted that that did not go entirely well for Facebook. So maybe just outline your argument here.
Yeah. So if you remember back in the early 2010s, Facebook was growing really rapidly. It had become the homepage for a lot of people, the place where a lot of people were sort of starting their day on the internet. And at some point, Facebook gets the idea, hey, why don't we let other developers build experiences on our platform? We'll share some of the things that we know about our users, you know, such as the pages that they've liked and maybe some of their contact information. In fact, we'll even share all of their friends and their friends' contact information.
And this just becomes a bonanza for developers, right?
There's so much personal information that is now available to them.
They get in there.
They start building things.
And Facebook figures out it can make a lot of money doing this.
It starts selling this virtual currency called Facebook credits.
And if you want to have a big app on Facebook like FarmVille, well, remember, Kevin? Kevin lost so much money on FarmVille back in the day.
I had to file for bankruptcy in Facebook Credits because I spent too much on FarmVille.
That's right. But Facebook took a 30% cut of all of the Facebook Credits revenue. And at one point, Kevin, Facebook disclosed that Zynga, FarmVille's developer, alone accounted for 12% of its revenue as a company. So there was a moment where this worked really well.
And then what happened?
Well, you may remember another company called Cambridge Analytica.
Yes, I've heard of this one.
So Cambridge Analytica was a company that became famous after the 2016 U.S. presidential election, because they were one of the companies that was hoovering up all of that personal
data. And it eventually created a huge scandal because it was revealed that they had been using
all of this data in an effort to swing the election toward Donald Trump. Now, I think the idea
that they actually could have swung the election using Facebook data is dramatically overstated.
But it did draw a lot of attention to the extremely loose rules that Facebook had around
user privacy, and Facebook wound up having to take a bunch of steps to try to rein in that sort of thing.
And in fact, weird little asterisk detail about Cambridge Analytica, by the time it became
a scandal, Facebook had actually already locked down the platform a couple years earlier because
it suspected that something like this might someday happen.
Yeah. And I really am glad that you wrote about this and drew that connection, because I think in this case, ChatGPT as a platform is actually riskier than Facebook as a platform, for the simple reason that people share very intimate things with ChatGPT. You know, if you get someone's Facebook data, you can know who their friends are, what kinds of posts they make, maybe, you know, some things about their potential shopping behavior. But with ChatGPT, like, if you're giving that data to an outside developer, that might include transcripts of therapy sessions that you've had with ChatGPT, things that you've asked for advice on. It might include very intimate personal details. And now all of a sudden, Zillow or Expedia or Target has that. And you have really no way of knowing what they're going to do with that. So in ChatGPT's case, this is potentially quite serious if there were ever to be some kind of a data breach.
Yeah. I mean, let me give you an example, Kevin. Let's say that I'm using ChatGPT and I decide I want to send you a birthday card for Hard Fork's third birthday, right? And it draws on all of the conversations that I've had with ChatGPT over the years. And it writes a birthday card. And it says, hey, Kevin, look, I know things aren't great between us.
I know that we have a lot of fights, but I also want you to know that Casey has been working
on this in therapy.
And here's actually what his therapist told him this week.
Happy birthday and best of luck.
And then imagine it just sends you that birthday card.
I would have so much egg on my face, and I'd be digging out of that one for a year.
So that's the kind of risk we're talking about.
Yes, huge risks.
Risks that can blow up a podcast.
Absolutely.
So did they address, not this specific issue, but the larger issue of data sharing, at this event?
They did. So Nick Turley, the head of ChatGPT, who's been on the show before, said, we are going to make it so that ChatGPT shares the minimum necessary amount of information needed to make the transaction. So I can imagine that in the case where I'm creating a Spotify playlist, it's probably just sending whatever I put in the box, you know, make me a fun playlist for the Hard Fork third birthday party.
Now, that sounds good when I say it, but I should note, it sort of hand-waves its way past the details, right? Because ChatGPT also has a memory feature that I imagine will eventually be exposed in some way to some of these developers. And basically what OpenAI has said is, yeah, we're going to be really careful and we're not going to do anything bad.
And I believe that that is their intention, but we're going to have to keep an eye on them.
Yeah, I mean, I don't think this is a hypothetical. I went to try to connect my ChatGPT to Canva, one of these developers who have sort of created these ChatGPT apps, and you get a little pop-up before you connect your app that says, among other things, that attackers may attempt to use ChatGPT to access your data in the app. And it also requires you to sort of confirm that you understand that data from your ChatGPT account, including from conversations and memories, may be shared with these developers. So you really are placing a lot of trust in OpenAI and the developers that it has handpicked to be able to build these apps, by giving them sort of access to your ChatGPT history.
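To make the data-sharing question concrete: OpenAI has not published the exact mechanics discussed here, so the sketch below is purely hypothetical, written in Python with invented field names (it is not OpenAI's actual Apps SDK or API). It only illustrates the difference between the minimum-necessary-information approach Nick Turley described and the broader access the Canva-style consent screen warns about.

# Hypothetical sketch only: the structure and field names are invented for
# illustration and are NOT OpenAI's actual Apps SDK or API.

def build_app_payload(user_request, profile, scopes_granted):
    """Send a third-party app only what the granted scopes allow."""
    payload = {"request": user_request}  # the typed request is always needed for the task

    # Anything beyond the raw request is gated behind explicit, user-granted scopes.
    if "memories" in scopes_granted:
        payload["memories"] = profile.get("memories", [])
    if "conversation_history" in scopes_granted:
        payload["history"] = profile.get("history", [])
    return payload

profile = {
    "memories": ["works downtown", "training for a marathon"],
    "history": ["<transcripts of past conversations>"],
}

# Minimal sharing: the Spotify-playlist case, where only the typed request goes out.
print(build_app_payload("Make me a fun playlist for the Hard Fork birthday party",
                        profile, scopes_granted=set()))

# Broader sharing: the kind of access the consent screen is warning about.
print(build_app_payload("Find me a four-bedroom house near my office",
                        profile, scopes_granted={"memories", "conversation_history"}))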
Yeah, and for that reason, if you're like, I think this feature is not for me right now, I think that is a completely rational choice to make. Like, there is no reason that you have to sort of leap headlong into this before, you know, you've let more privacy-adventurous people put it through its paces.
Yeah, use Zillow the old-fashioned way.
So I'm interested in the privacy implications here, but I'm also interested in the business implications, because one thing I want to keep an eye on is whether the ChatGPT apps become sort of privileged within ChatGPT. I can imagine, for example, that if you are looking up, you know, real estate information on ChatGPT, it might want to show you things from Zillow rather than Redfin or another site, because Zillow is the one that kind of has the deal with ChatGPT and with OpenAI.
So are you thinking about that?
Yes, absolutely.
So this was the question that I asked
of the OpenAI executives
during the Q&A session that you and I both attended.
I guess you just kind of tuned out during my question.
But basically, you know, what Sam Altman told me was,
hey, look, it's important that people trust ChatGPT.
He said, quote, if we break that,
or take payment for something we shouldn't have
instead of showing you what we think is best
that would clearly destroy that relationship very fast,
so we're hyper-aware of the need to be careful.
Greg Brockman, who is OpenAI's president,
added, I do want to say that I think
there's also a lot of nuance there
because sometimes we don't know what the best product is, right?
We have a principle of really trying to serve the user,
and then what does that mean in all these specific contexts?
So what I took that to mean was
you sort of have the angel
on the shoulder of OpenAI.
I'm not saying that's Sam Altman,
but there's like a kind of a metaphorical angel
sitting on the company's shoulder saying,
hey, be really, really careful,
do the right thing,
as Google used to say back in the day
about this exact dynamic,
don't be evil.
And then you have the devil on the shoulder
saying, I do want to say
that I also think there's a lot of nuance
in this space.
Right.
Us, take money?
Why would we do that?
It's nuanced, Kevin.
It's a very nuanced question.
Very nuanced.
Very nuanced.
So, okay, that's sort of the scene at Dev Day.
But Casey, do you feel like you learned anything about OpenAI or its ambitions at this event?
I think this was just another example of OpenAI showing us how ambitious it is. You know, Silicon Valley is not a town lacking in ambition. But when I saw everything that OpenAI is trying to do here, it was just another moment to say, wow, these people are really going for
it. Like, they really do actually want to take over the entire web, and they're telling you that
to your face, and they're showing you how they are doing it. So for me, that was kind of a wake-up call.
How about you? Yeah, I think that's right. I mean, it's very clear that OpenAI sees ChatGPT as more
than a chatbot. It sees it as kind of a new operating system for everything that you might
want to do. I also saw some people comparing it to a super app, like WeChat in China, where you can
sort of have one app that controls a lot of your online activities. So I don't know whether
this platform strategy will work. I think we've pointed out today some reasons that it might not,
but I think the signal that it sends to investors and developers and users and kind of the
rest of the internet is like, we are coming for you. We are not resting on our laurels.
We are not resting on the ways that people are currently using ChatGPT. We want to take the
rest of the internet and sort of shove it inside what we're doing.
Yeah.
All right.
So one more thing we need to talk about from Dev Day, because this was truly the most bizarre part of the entire event, was this fireside chat between Sam Altman and Jony Ive. Jony Ive, of course, is the famous former Apple designer who designed the iPhone and the iPod and basically everything that Apple built its success on over the years, who has now partnered with OpenAI and been acquired by OpenAI
and is building a new hardware product with OpenAI.
And so I was excited for this because I thought,
oh, maybe we're going to get some details
about this hardware product in this like, you know,
30-minute fireside chat.
And Casey, what we heard instead was a bunch of words,
signifying nothing.
It was GPT-2 level is how I would describe it. Like, sentences were started and not finished. And by the end of this session, we truly had not learned one thing about what OpenAI is building or how they were building it. It was incredible.
I will just read you a quote from Jony Ive during this session. This was when he was talking about
what they're building. He said, the clues and the pointers all exist. And it's just trying to sort of put
them together. But there has, you know, it will be, it will be ideas. It will be a vision for what
makes sense. And I think we're only going to arrive at that if we are very curious and light on our
feet. Do you remember the Miss America pageant where they asked her some complicated question
about geopolitics and she just kind of had to vamp for 30 seconds until the end of the question?
That was how I felt watching Jony Ive get asked questions by Sam Altman,
which of course is so funny because it's not like Sam Altman is Kara Swisher up there,
like really putting the screws to this guy.
You know, presumably all of these questions had been like negotiated in advance.
And like, look, I get it if you don't want to tell us exactly what the device is and when
it's going to go on sale.
Like, it's okay to like tease us a little bit.
But I truly don't understand the point of having everyone just sit there for 30 minutes waiting for you to say something and then all you get is him being like, craft is the essence of tools.
And you're just like, what are we doing here?
So I have a message to the OpenAI community on the subject of your hardware.
I've said this one time before on the show.
I try to reserve this for very special situations.
I think we're in it now.
When it comes to AI hardware and OpenAI, I'm saying this: ship it or zip it.
I truly do not want to hear one more of these fireside chats where we ruminate on the nature of design.
I want to see freaking specs, people.
Get me the specs or stop talking.
Yes, this was the closest thing I've ever seen to human-generated slop.
It was just a bunch of tokens.
It was too many tokens.
It was too many tokens.
When we come back: OpenAI's other big news this week, a trillion-dollar gamble that is propping up the entire economy.
Well, Casey, while we were at Dev Day hearing about everything that's coming to ChatGPT, OpenAI was arguably making much bigger news outside the conference.
Yes, they were moving markets, the entire stock market. Not the entire stock market, but a couple
companies. And this was one where I was focused on the things in front of me. And when I got out
of Developer Day, I thought, I really need to get up to speed on everything that is going on with
chips. And whenever I have that feeling, I think, one: oh no, I have to talk about chips again. But two: can Kevin Roose, our resident in-house chips expert, smarten me up?
I would say, well, thank you.
That's very kind.
But I would say I am an aspiring chips expert.
I do not know everything there is to know about the semiconductor industry or GPUs,
but I aspire to learn much more.
And actually, I took a big step in that direction over last weekend at The Curve, this other AI conference we both attended, when I happened to find myself in the middle of kind of a scrum of, like, chips guys who were talking about chips.
And I understood, like, 40% of what they were saying.
They were talking about, you know, HBM and EUVs and gas turbines and geothermal.
And I was just, like, transfixed because all the AI guys have had to become chips guys.
And I am saying right here on this podcast that I will, by the end of 2025, be a chips guy.
That's great to hear.
And I think we can make some big strides in that direction this week because OpenAI made some big deals.
I need to understand what's happening.
And so we're going to pilot a new segment that we're calling,
what the hell's going on with all these chips exactly?
It's a little unwieldy.
We might want to tighten that one.
So let's get started with this deal with AMD.
Tell me exactly what OpenAI is up to here.
So we talked a couple weeks ago about OpenAI's big deal with Nvidia, which is the leading chipmaker. They make the best, most expensive, and highest-quality chips for training AI systems. But the number two player in that market, the GPU market, is AMD, which is Nvidia's competitor and sort of seen as a kind of distant second place in the GPU arms race.
This is like the Pepsi of chips.
It's the Pepsi of chips, exactly.
So this week, OpenAI and AMD announced a major multibillion-dollar deal where OpenAI is going to buy a bunch of GPUs from AMD and put them in their data centers over the next few years.
And in return, they are going to get some AMD stock.
This could eventually become up to 10% of the company at a penny per share.
So basically, free stock or stock at a very, very low price compared to what it trades for today.
And this will all play out over the course of a number of years.
They will start with AMD's newest chip in 2026.
And they are going to try to buy a total of six gigawatts worth of AMD chips,
which is a little more than half of what they are going to get from Nvidia in the terms of that deal. Nvidia's deal was for 10 gigawatts' worth of GPUs.
So the big picture here is that OpenAI is doing these deals with many, many chip providers and infrastructure providers.
And if you kind of add it all up, just the announcements from AMD and Nvidia and a handful of other companies that they've struck deals with, they are building a trillion dollar GPU empire over the next few years.
That is their stated ambition and plan, and now they have the deals in place to do that.
And I don't know if you remember a few months ago when there was this reporting about how Sam Altman was trying to raise $7 trillion for a chip company.
And everyone kind of said, ha, ha, ha, ha.
That's like such a crazy number.
That would be like some, you know, huge multiple of, you know, the total lifetime earnings of OpenAI.
Well, it's not $7 trillion, but now they have pledged to spend roughly a trillion dollars over the next few years.
Well, you know, when I got into business, they told me the first trillion is always the hardest.
They do say that.
Yeah.
You know, something about me, Kevin, is that I didn't, until very recently, understand what a gigawatt was in any practical terms. My understanding, based on the reading that I've done, is that a gigawatt is about what is produced by a nuclear reactor. And OpenAI is now committing to infrastructure that will require the equivalent output of about 20 nuclear reactors.
Yes, it is a huge amount of energy. And that's a whole separate discussion. Like, it's one thing to buy the GPUs. It's another thing to be able to turn them on. And so we'll talk more about that, I'm sure, in a later episode, but there is sort of a simultaneous race going on to lock up the energy that is going to be required to power all of these millions of GPUs.
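For a rough sense of scale, here is a quick back-of-the-envelope sketch in Python, using only the gigawatt figures mentioned in this episode and the hosts' one-reactor-per-gigawatt rule of thumb (the roughly 20 reactors cited above presumably also reflects commitments beyond these two deals):

# Back-of-the-envelope arithmetic using only figures quoted in this episode.
nvidia_gw = 10   # gigawatts of GPUs in the Nvidia deal
amd_gw = 6       # gigawatts of GPUs in the AMD deal

total_gw = nvidia_gw + amd_gw
print(f"Nvidia + AMD commitments: {total_gw} GW")      # 16 GW

# Hosts' rule of thumb: one nuclear reactor produces roughly 1 gigawatt.
print(f"Roughly the output of {total_gw} nuclear reactors")

# If that capacity ran continuously for a year (energy = power x time):
hours_per_year = 24 * 365
twh_per_year = total_gw * hours_per_year / 1000        # GWh -> TWh
print(f"About {twh_per_year:.0f} TWh of electricity per year if run flat out")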
Okay. So maybe this is an obvious question, but tell me just in basic terms: what do OpenAI and AMD each get out of this deal that they just made?
So essentially, the deal structure goes like this. OpenAI is going to buy a bunch of chips from
AMD. It's going to use those chips to train and power its models. And in exchange for those
purchases, AMD is going to give OpenAI the rights to buy very cheap stock in AMD. So when they
reach a gigawatt of compute that they've purchased from AMD, they will get some allocation of
stock in AMD, which could be worth many billions of dollars. What OpenAI is doing is signaling to AMD, hey, if you guys go build a bunch of really good GPUs, we will buy them from you. And the way that they are sort of getting this financing is in the form of a stock rebate, essentially, on that purchase.
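To make the stock-rebate idea concrete, here is a hypothetical sketch of the mechanics just described: warrants priced at a penny per share that vest as gigawatt milestones are hit. The share count per milestone and the market price below are illustrative placeholders, not the actual deal terms.

# Hypothetical illustration of penny-per-share warrants acting as a rebate.
# The share count and market price are placeholders, NOT the actual deal terms.

STRIKE_PRICE = 0.01                  # dollars per share, as described in the episode
ASSUMED_MARKET_PRICE = 200.00        # placeholder AMD share price for illustration
SHARES_PER_GIGAWATT = 25_000_000     # placeholder shares vesting per 1 GW milestone

def rebate_value(gigawatts_deployed):
    """Paper value of vested warrants after a given number of 1 GW milestones."""
    vested_shares = gigawatts_deployed * SHARES_PER_GIGAWATT
    # Value = what the shares are worth minus what it costs to exercise them.
    return vested_shares * (ASSUMED_MARKET_PRICE - STRIKE_PRICE)

for gw in (1, 6):
    print(f"{gw} GW deployed -> about ${rebate_value(gw) / 1e9:.1f} billion in paper value")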
So let me see if I have this straight, because when I look at this, I see OpenAI gets access to a bunch of infrastructure plus a financial stake in the chipmaker. AMD gets a huge new customer and ensures a lot of future demand. That just kind of looks like a win-win to me for them. Is there anything I'm missing
there? So the other thing that's really important here is that OpenAI and AMD are going to
have a collaborative partnership in designing not just these chips, but the software that
runs on these chips. So one of the big sort of hurdles that AMD has had in competing with
Nvidia is that Nvidia's software, which is called CUDA, is seen as much, much better for training AI models than AMD's software, which is called ROCm. So basically, OpenAI, as a result of this partnership, will have an incentive to make AMD's software just as good as Nvidia's software. And so this could actually help AMD in that way, too, where they're making these chips, they're selling them to OpenAI, but also OpenAI is helping to write the software that runs on these chips, thereby making these AMD chips more compelling to other AI developers who want to build models using them.
Does that make sense?
Okay, yes.
So this gets to the second big question
that I have for you today,
which is how is this deal connected
to the other chips and infrastructure deals
that OpenAI has made recently?
When we talked about the Nvidia deal on the show,
I said that I had heard some speculation
that one reason that Nvidia wanted to make the deal that it did
was to discourage OpenAI from going out and making a bunch of other deals with other chip companies. In hindsight, that seems like a very silly thing that I said, because OpenAI is clearly going to make a deal with whoever it can.
So tell me as best as you can how all these deals interrelate, or is it as simple as OpenAI
needs more infrastructure than any one company can provide? And so it's just going to go out
and lock up as much of it as it can. Yeah, I think that's more the explanation. I don't actually
think this is about crowding anyone else out of the market, any AI maker or chipmaker. I
think this is just them saying like we are going to need a lot of GPUs more than any one company
can make and probably more than any of these companies are planning to make currently. So they're
also trying to sort of encourage these companies to up their production to meet this incredible
demand. And, you know, it is always good to have more than one supplier. You can kind of play them off each other and maybe, you know, increase the competition between them. And so I think OpenAI is just kind of spreading its bets a little bit here, but they do believe that they are going to need all these chips and many more. Essentially, what they have been saying is, we are compute-constrained right now. We have a lot of products that we're building, things that we want to do, that we can't do because we don't have enough compute for it. And so they are trying to sort of lock in as much compute as they can, and then, you know, build out their data centers and find the energy to power all of this, and then use their big clusters to train models that hopefully will take us all the way to AGI. That's the idea, anyway.
Yeah, at OpenAI Developer Day this week, Greg Brockman, the company's president, said, you don't even know the products that we haven't released, because we do not have the compute to power them. So apparently there's a bunch of services OpenAI is sitting on, and maybe some of these deals will get them closer to releasing them, for better and for worse.
So let me now ask a question that is considered extremely impolite in Silicon Valley, and it's the sort of thing that might get you asked to leave a party. How is OpenAI going to pay for all of this?
It's a great question, because they do not have $1 trillion lying around in their bank account, I'll tell you that.
They have not given a ton of details about how they plan to pay for this, but essentially
they're going to go out.
They're going to raise a bunch of money, as they have been doing for the past several years,
and they are going to use that money to make these deals and buy these chips.
Greg Brockman told Bloomberg television this week that they look at equity and debt in all kinds of ways to pay for it.
They also plan to pay for this by generating more revenue through ChatGPT and their other products. So when you look at the $1 trillion stated cost of all this, you know, they're going to try to cobble that together through a combination of new fundraising, maybe some more vendor financing like what they got from Nvidia, and just the revenue that they're making from their products.
Also, Kevin, once again, at Developer Day, when asked about this, Sam Altman said something to the effect of, we may have to come up with some new kinds of financial instruments to pay for this.
And I will say that usually the point in the bubble where people start talking about the novel financial instruments that they need to create to finance their ambitions,
is the point where I get nervous.
Totally.
I mean, the worry here is about all of these financial instruments
and all these, you know, many hundreds of billions of dollars of commitments
is that OpenAI is essentially becoming too big to fail, right?
That they are so tied up with so many huge companies
that if they were to fizzle out or their newest model were to be a flop
or they were to somehow reach the end of their sort of scaling paradigm,
that all of this would come crashing down
and create ripple effects
throughout the U.S. economy.
I don't think that's the most likely outcome here,
but I do think that all of this circularity
and all these sort of interchanging deals
and flows of money
do make me a bit more nervous
that something like a recession
could cause not just one of these companies
to collapse, but could cause
systemic effects throughout the AI industry,
which is like propping up the entire U.S. economy at this point.
So it does feel like we have reached a point where the AI companies must deliver on the promise of AGI
or else we are all in a pretty bad situation.
Or if not, deliver on the promise of AGI, at least continue to make services that other
companies are willing to spend billions of dollars on because they feel like it's making
their workers more productive.
That's true.
It could just be that the models never get any better, and just the increased adoption of the models sort of powers this
like revenue flywheel that results in all of this going well for everyone. But I do think
there's kind of an implicit understanding among the investors in these companies that this is
all building towards something that will eventually create trillions of dollars in economic value
that will not just be like a kind of across the board, you know, incremental step up, but that it
will be transformational, that it will be like electricity, and that all of the sort of investments
that are going into this, even at these shocking prices, will be paid back many times over
because this technology will change everything. And so, like, I think there's a way to make
the math work, even if they don't reach, like, the machine god superintelligence. But it's a lot
easier to sell the vision if the pot of gold that you are saying is at the end of this rainbow
is, in fact, like, trillions of dollars and an entirely new sort of society.
All right, Kevin. So as we start to wrap up here, I want to know what the larger implications
are for all of these deals. What does this mean for companies that are not open AI that also
want gigawatts of power and as many GPUs as they can get their hands on now that open AI seems
to be locking up so much future supply?
So I think other AI companies are going to have to sort of do their own versions of
these deals.
They're going to have to, you know, bid on chips essentially in a seller's market.
I know, Anthropic and Google and all these companies are, you know, spending billions and
billions of dollars trying to lock up future chip supply.
But I think the big implication here is that we are just as a country making a gigantic
leveraged bet on AI.
I want to read you a quote that I saw this week
from an analyst at Bernstein, Stacy Rasgon,
who wrote that Sam Altman now, quote,
has the power to crash the global economy for a decade
or take us all to the promised land,
and right now we don't know which is in the cards.
When Wall Street analysts start talking like that,
you know that something big is happening.
And, you know, I'm sure you've seen all these diagrams. There was a great one in Bloomberg
the other day. There was another one in the Financial Times recently showing just the various
interconnections between all of the players in the AI ecosystem who are selling chips to each other
and buying chips from each other and investing in each other and lending to each other. And
it creates this kind of circularity that I think worries a lot of investors who think that this
could all sort of come crashing down together if and when the AI bubble pops.
I'll give you another one, Kevin.
This week, a Harvard economist named Jason Furman estimated that investments in data centers
and information processing software accounted for 92% of the U.S. GDP growth in the first
half of this year.
So, according to my rough calculations, that's most of the U.S. GDP growth.
Yeah.
And we should also say this is not just OpenAI. Elon Musk's xAI also reportedly is nearing a deal to raise billions of dollars, including from Nvidia.
So they are doing just as many financialized transactions around trying to pay for
these massive infrastructure buildouts that they're doing as OpenAI is.
But this is also an area where I think OpenAI has a strategic advantage, and that strategic
advantage is named Sam Altman. Sam Altman is famously talented at fundraising.
Many people think he is one of the greatest fundraisers in Silicon Valley history.
And so he has been, you know, canvassing the globe looking for giant pools of money that he can kind of suck up and use to fund OpenAI's ambitions. And I would not necessarily bet on Sam Altman in every case. But I would say that if the question is, can he continue to raise money to fund OpenAI's growth, I think I would feel uncomfortable betting against that, because he has such a track record of being able to raise massive amounts of money.
Yeah, I also would not bet against him. Okay, Casey, let me ask you a question. You've been asking
the questions here. What would it take for me to fully chip-pill you, to convince you that what is going
on with GPUs and semiconductors and data centers right now is the most important story in the world?
A huge chip-related disaster. And I'm not even kidding. Because while I think it is a public service to tell people how much money is being invested, and how interconnected all of these companies are, and how crazy it would be if all of this does pay off, for the moment everything is just kind of working, you know? And given, let's say, the many crises we have in this world right now, I just tend to not get too wrapped up in the stuff that basically just looks like capitalism proceeding apace. Now, that said, I understand that all the numbers here are unprecedented, which I do think is fun to talk about and is why we're talking about it this week. But if you really want to get me hooked, something's got to go terribly wrong in one of these data centers, Kevin.
I'm envisioning... do you remember the scene in The Big Short where, like, Margot Robbie is explaining mortgage-backed securities in a bathtub?
Yeah. I'm just having a vision of, like, you know, someone equally attractive to you
explaining, like, collateralized GPUs and, like, warrant financing. And maybe that's how we can get
you interested. All right. Let's see what John Cena's doing later.
Call your agent, John. So what happens next here, Kevin? What should I keep my eye on if I am trying to become more chip-pilled like, uh, like you, my good friend?
So becoming chip-pilled, I am learning, is just the first step in the journey of becoming infrastructure-pilled. Because not only are these companies doing big deals to lock up the GPUs, they are also doing big deals to lock up the energy and the physical space to power these chips in these giant data centers.
They are locking up contracts with skilled electricians and cooling specialists.
There's now sort of a market.
I was having a conversation with someone recently who described the sort of boom in skilled
electricians for these data centers who are now flying all over the country, making just
gobs and gobs of money. They compared it to, like, the fracking boom, when you had these, like, riggers and drillers who would, like, go up to North Dakota for, you know, a month a year and make a ton of
money. And they said, this is basically what electricians who work on these data centers are doing now.
So I expect there to be a massive boom in not just the GPUs that power the models, but the
power supplies and the data centers and the cooling systems and the literal energy, like the natural gas and oil that power the electrical grids in the states where these data centers
are going up. I expect that the AI companies are going to be doing lots of deals in those
industries as well. All right. Well, I feel like I've already exhausted all of the chip-related
puns that I would normally use to end a segment on previous episodes. So I guess I'll just say
this segment has made me hungry for guacamole.
You know what they call the end of a segment about chips? What's that?
A chip clip.
Perfect.
When we come back,
we'll slop and smell the roses with Katie Notopoulos.
Well, Casey, last week we talked about Sora 2, the video generation model from OpenAI, and all of the magical slop it was being used to create. And ever since then, the response to Sora 2 and its slop generations has gotten a lot more polarized.
It has, Kevin, because on one hand, this week it hit number one in the U.S. App Store.
As of our recording this week, it is still the top free app.
And on the other hand, a lot of copyright holders said,
uh, excuse me, you're doing what with Sonic the Hedgehog?
Yes, not many happy folks in Hollywood about.
this app. Talent agencies and TV and film studios have been, you know, making various statements
and trying to get their copyrighted materials pulled down off the app. OpenAI has actually
made some changes to what it will generate and what it won't. And we've even started to see
YouTube creators like Mr. Beast weighing in and expressing concerns about how video generating
AI models could eventually impact their revenue.
Yeah, I'll tell you, I'm seeing many more big creators on YouTube starting to reckon with this. Casey Neistat also had a great video about this subject. And on one hand, Casey's obviously very concerned about what the rise of slop means for his livelihood. On the other hand, he used Sora in some really creative ways
for this video, which to me speaks to the real tension here, which is that this does seem to
create a real economic threat to a lot of people. And on the other hand, it does do something
creatively interesting. And so today, Kevin, we want to talk to somebody who is doing something
pretty creatively interesting with Sora, and that would include doing creatively interesting things with us, or at least our digital likenesses.
Yes, it's time for some slop accountability on this show. Today we are bringing in Katie Notopoulos. She's a senior correspondent for Business Insider covering tech and culture, and she is also a beloved internet troll. She's been making, you know, many, many videos specifically trolling the two of us, but also other tech reporters. And when I open up my Sora feed now, it is like half Sam Altman and the other half Katie Notopoulos.
Yeah, if you're not familiar with Katie, her previous pranks include, when Threads launched, pretending that she was the editor-in-chief of the app and managing to convince some mainstream publications that this was true. They reported on her firing, which never happened, because she didn't have the job.
She also posted some of the most hilarious fake engagement baits on threads that I've ever seen.
And so when Sora came along, I knew she would be doing something inappropriate with it.
It just hadn't occurred to me that that would involve my face and yours.
Yeah.
So today we've invited her on to talk about her Sora creations,
the backlash to AI-generated video slop, and where she thinks this is all going.
Katie Notopoulos, welcome to Hard Fork.
Thank you so much for having me.
Well, Katie, you know, I always pay close attention to your work
because you are usually having more fun on the social networks that I cover than anyone else is.
And Sora, it seems, is no exception.
You recently wrote an article titled, Oink, Oink, I'm a little piggy for Sora 2's AI Slop.
Tell us about what turned you into such a little slop piggy.
I will say that this really was the first AI experience where I was like, oh, I love this.
I'm having fun.
And the first day or two that I got access to it, like, I could not stop making videos.
It was an amazing, social, weird experience.
I loved it.
I was having so much fun.
And I went in thinking it was not going to be cool and not going to be fun.
In fact, I literally had an entire article drafted for the day that Sora opened that was about, like, this is a terrible idea, because they had just announced, oh, there's going to be a new social app, like, with an AI video social feed. And I was like, that stinks. I thought it was going to be like the Meta Vibes.
I was like, this is a terrible idea.
No one wants this.
It stinks.
It's awful.
Blah, blah, blah.
I had to scrap the whole thing.
So I was like, wait, this is awesome.
I love it so much.
I love that you prewrite your internet takes like a, like a Supreme Court reporter who, like, has to do one for one result and one for another.
I love this.
You just have, like, a draft folder full of, like, discarded takes.
Yeah.
Other people, like, write obituaries in advance, and Katie is prewriting slop good and slop bad.
It was supposed to be based on a report from the day before it actually opened up, and I didn't know that it was going to launch the next day. I thought it was just the announcement that this will exist, and I was thinking that this is going to stink. But that was wrong.
Was there a particular video, either that you saw or that you made, that changed the way that you thought about what Sora could be?
I think John Herrman made a video that was...
John Herrman, writer for New York Magazine and former Hard Fork guest.
Yeah, I think he was, like, walking along but had really, really long legs and was at a Walmart and kept scraping his head on the top of the Walmart ceiling, which is just visually very amusing. And I really liked that. And then he made one
of me giving a TED talk, but then I have like explosive diarrhea in the middle of it. And that I found
extremely funny. And I was like, this is, this is all I want. I love this. Like I can make videos
of my friends doing embarrassing things and this is funny now. Speaking of which, this might be a good
time to start looking at some of Katie's creations. And I wonder if we could start with what
maybe her magnum opus, which is a video of Kevin and I in the podcast recording studio having
uncontrollable flatulence. Can we cue that one up? Oh, I have not seen this one yet.
This video has the caption, Casey Newton and Kevin Roos are podcasting, but keep farting.
Back to the show. We have a lot to cover today, but first... oh boy. Here we go. Sorry, that one came out of nowhere. That was the first hot take of the day. Oh no, it got me too. It's contagious. We cannot be...
And, um, and then, uh, Katie, Katie was not satisfied and decided to do a remix. Let's hear Farts Are Louder and More Frequent.
Wow. Welcome back to the show. We have a lot to cover today, but first... oh boy.
Long, loud fart.
All right.
So, Katie, take us inside your creative process for this.
Kind of walk us through.
Where does the original idea come from?
And then how did you see it to fruition?
Well, you know, I actually have to say I think this one was a bit disappointing.
I was hoping for more frequent farts.
Like in the first one, we really hear Casey fart.
And we sort of get an implied fart from Kevin when he says,
I've got it too, but we don't hear it, right?
You know, I was thinking I really wanted, like, a cacophony of noises coming out of you guys.
I thought that would be funny.
And the second one didn't fully nail it either.
So, you know, room for improvement there.
Yeah, they're going to need a few more trillion dollars of GPUs to include all of the farts that Katie wants.
Yeah, yeah.
I mean, you know, I just thought that would be funny.
Right?
I mean, look at me in the face and tell me.
I would love to tell you it's not funny, but I can't tell you that.
It would not be honest.
No.
So I have been playing around with Sora a little bit, and I have had the experience of bumping up against some of their content guardrails where they will say, we can't make a video of that try again.
Have you been bumping into those?
And what are some of the videos that you tried to make but were prevented from by the censorious overlords at OpenAI?
Yeah, I feel like constantly I'm bumping into those. I've definitely had ones where, like, I am trying to get it to be similar to, like, I'm using maybe, like, a celebrity's name, like, in the style of, and it will reject that. For example, I wanted to make one of Casey, like, entering his work office in the style of Stone Cold Steve Austin, where, you know, glass breaks and he has entrance music and he comes in and he, like, you know, crushes the beers. And I wanted it, like, in a leather vest and jorts, right? And so I tried that.
And at first it rejected, because it said it could be sexual, which I realized was probably the leather vest with no shirt underneath.
So I had to scrap that.
I went back to like black T-shirt and drawers.
And then I said, I think in the style of Stone Cold Steve Austin, it was like, uh-uh, uh-uh, like can't do, you know, likenesses of other people, whatever their verbiage is.
And I had to remove that and just sort of, you know, give enough prompts.
And honestly, I think it actually came out pretty well.
Yeah, this one you did actually get to work.
Why don't we watch this one?
The prompt that worked for what it's worth is that I, quote, walk into a meeting in my office
in the style of a pro wrestler's entrance.
I wear jorts and a black T-shirt that says Casey 3:16. That's, of course, a reference to the famous Austin 3:16 T-shirts.
I crack open beers and chug them.
And when I walk in, there's the sound of breaking glass and heavy guitar music plays.
So you were just describing Stone Cold Steve Austin's pro wrestling entrance. And I think Sora did a pretty good job here.
Do we want to pull this one up?
Yeah. Casey 3:16 is in the building.
What is happening right now?
Cheers!
Oh my gosh.
Meeting starts when the beers are gone.
All right, let's get to business.
Okay.
I like that one because you managed to open the can of beer
without touching the top of the can of beer.
It just kind of like autonomously opened.
I'm that good.
But to your point, Katie, there are a lot of copyright guardrails
which may surprise you,
but you've been reading the discourse around Sora.
Of course, a lot of rights holders are worried.
But, Katie, I tried to make a video of you getting married to a pregnant Sonic the Hedgehog, and Sora just refused.
Interesting.
Does that surprise you?
I will say that I feel like I've seen less of those Nintendo characters, like Pikachu, Sonic.
I wonder if that's sort of a specific thing.
But it's also like when you're looking at the feed, it's so clear that everyone is trying so hard to like push the rules, right?
Like, sure, it won't let you do Hitler, but will it do, you know, a person in a World War II uniform with a mustache and speaking in a German accent?
Like, yes, it will let you do that, right?
So I've seen like a lot of videos like that.
That's so interesting that it'll do that, because I tried to make a video of you interrogating a cartoon cat about whether it had eaten all the lasagna, and it told me it couldn't do it for copyright reasons.
So it's possible that there are more protections on Garfield than there are on Hitler.
Which would be interesting if that's true.
That's, yeah.
You both need hobbies.
I'll just say it.
This is our hobby, Kevin.
Yeah, Kevin.
We have one.
It's called being online.
I really liked doing ones of people like on roller skates at a desk and they like keep slipping and falling over.
I find that very funny.
But a couple times doing that, I would get the rejection for like violence.
I have to say that this particular genre of clips that you've been making was making me laugh out loud in the coffee shop this morning. I know that you made one of Alex Heath, a friend of ours, a great reporter who just left The Verge to start his own newsletter, Sources. Let's take a look at his digital likeness flopping around on roller skates.
Tyke them all skating. Here we go. Whoa, whoa. Nope. Okay. Round two. Focus.
Nope. There it goes. I'm just going to sit. This counts, right?
For some reason, like, that one just works really well.
I've done this a bunch of other times.
And that one, it just, like, it hits the right way.
Like, the way he keeps falling but seems completely unfazed.
And, like, he eats it hard.
Well, let's talk about that.
You know, we've been having some fun taking a look at these clips.
But it's easy to look at this and imagine people using this kind of thing to bully each other, you know, make each other unhappy. Have you seen any of that on the app itself yet? And how do you
think about sort of living in a world where people may be using tools like this to do that?
I definitely have seen it. Right now, what I've noticed is that, now that it's been out for about a week, the sort of initial wave was, you know, people making fun of Sam Altman, a really small pool of users. And the new cohort of users I tend to see a lot on there are, like, teenage boys.
And I kind of think that there's this effect where Jake Paul is pretty much like the only celebrity on there.
And then there's a couple other like gaming streamers who have started coming on.
But it's become this very like teenage boy ecosystem world.
And so there's a lot of stuff of making fun of each other being gay, things like that.
There's, like, a whole genre of Jake Paul is gay on there, which, you know, ha, ha, ha, right?
Yeah.
I mean, I think it's, I'm glad that you draw attention to that because in the same way
that you can have fun making somebody fall over on roller skates, you can just use this
to bully someone.
And I'm curious, you know, what OpenAI is going to do once we start hearing about Sora running wild in schools and, you know, making a lot of kids unhappy. Kevin, what do you think about that kind of aspect of the Sora experience?
Yeah, I think it's obviously not good. Like, I don't want this to be used as a tool for
bullying people. There are some sort of ways that you can try to prevent this from happening to you.
There's a setting where you can sort of limit the number of people who can make cameos of you
or you can make it so that no one can make cameos of you. You can also sort of type into a little box
and, like, list any situations you don't want your likeness to be used in. You can kind of restrict it. Like, you know, don't put me in any, like, romantic situations or anything like that, and it'll try to obey that.
I'm curious, so one of the things that Sam Altman and other OpenAI executives were saying
this week at Dev Day was that they don't think that people will opt out of having their
likenesses used.
In fact, they foresee in the future that people will be upset that their likenesses aren't
being used more inside Sora.
They will sort of be asking for more likenesses to be used of them.
Do you think that's plausible based on your experience with this app?
My assessment here is that the people at OpenAI had no idea how this was going to play out.
And I think that that prediction is not necessarily going to be true.
The most noticeable thing to me right off the bat is that there's basically no women on there, right?
Like, it is incredibly male, right?
And, you know, maybe there's a little bit of like, oh, the early, you know, early adopters were people who work in AI and that skews kind of male, but not that male, right?
And I think it just feels incredibly obvious to everyone that there's an obvious reason why women don't want to let other people make videos with their faces.
Like, they sort of inherently understand the downside of that a lot more.
And so I don't think that's going to change.
Yeah.
I don't know.
I have to say, Kevin, I've actually already written to Sam Altman complaining that I've only appeared in two fart-related videos on Sora.
And I'm wondering what he can do to change that.
Your agent is furious.
Furious.
But Katie, to your point, like, not only is it, like, obvious what the downside is for a lot of women to sort of make their likenesses generally available on Sora, there's no clear upside, right?
It's like, what are you getting out of it exactly?
But that actually gets to the next question that I wanted to ask you, Katie, which is about some of the broader concerns that we're starting to hear from creators and Hollywood types about what the Sora app represents. You know, you and I and Kevin have now spent some amount of time over the past week looking at the Sora app when we might have otherwise been looking at YouTube or Netflix or, I don't know, even going out to see a movie in a theater. So I'm curious about your take on AI-generated video in general, maybe not just Sora, but also Italian brain rot on TikTok and sort of all of the other places that this is cropping up. Do you think that this is, over time, going to become a true substitute for sort of higher-quality productions, or is this maybe just a flash in the pan?
You know, I'm scratching my head, and I don't know.
I spent so many hours doing things on this.
And so I really have to, you know, rethink that a little.
I mean, I will say that, like, I kind of think I'm kind of over it.
Like the novelty factor wore off and I don't really see myself continuing to make these kinds of videos about my friends or about strangers or about other stuff.
And I have very little interest in consuming content that is not with my friends.
Yeah.
I mean, my sort of thesis about SORA is that this is a perfect tool for the group chat and it really doesn't make sense to have it be a standalone social network.
I have found my own use of SORA has plummeted over the last few days.
I am also kind of like over the novelty of it.
But I can imagine a situation which like in a group chat with my friends, I would want to like make something.
So I'd go to Sora, make it.
Maybe I don't even post it.
Maybe I just, you know, download it and share it in the group chat.
But I'm pretty pessimistic about the sort of mainstreaming of this this kind of AI generated video in part because I think we're seeing this like massive cultural backlash to.
slop and just AI generated stuff. Clearly there are people who like this content, but I think
there's enough of a backlash that I don't think it's time for this to go mainstream just yet.
So I agree with that, Kevin, but maybe not for the reason that you guys are arguing. I think the reason
that it isn't going to go fully mainstream is because as good as the technology is, and I want to
make clear, like it does some amazing stuff, I just think it's not good enough to get full mainstream
adoption, right? There are still so many things that I wish I could do with my own likeness in
Sora that I can't do. And I'm not talking about using copyrighted material, although, of course,
that might be a lot of fun. I'm talking about, like, why can't I, like, change my outfit?
Why doesn't my voice actually sound like me? Why are the video clips limited to 10 seconds?
These are all just, like, technical hurdles or product moves that the company hasn't made yet.
And I think when they do, there's going to be a greater embrace of this. And so on one hand, I hear what you're saying, this is kind of like a little novelty toy and we're tired of playing with it. The toy will probably get better, and you may use it more. And the best evidence I have for that is that that is also how ChatGPT worked. People's initial experience of ChatGPT was like, I used it once or twice, I didn't really see the point, I'm going to go away. But bit by bit it got better, and now 800 million people use it.
So I'm saying do not sleep on AI generated video.
I truly do think it is here to stay.
Well, Katie, normally this is the part of the interview where I thank the guest.
Instead, I'm going to issue a cease and desist.
But I do appreciate the time that you've spent with us here on the show this week.
And I will be watching, you know, I'm going to cover my eyes and then just sort of, like, look through my fingers to see what trolling activity you get up to next.
Yes, thank you for coming and knock it off.
Well, thank you guys so much for having me. I absolutely will not knock it off, and I will wait until the next platform that comes around, and I will do even worse.
Thanks, Katie.
Thanks, Katie.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Jen Poyant.
We're fact-checked by Will Peischel.
Today's show is engineered by Alyssa Moxley.
Original music by Marion Lozano, Diane Wong, and Dan Powell.
Video production by Soya Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this full episode on YouTube at YouTube.com slash Hard Fork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us as always at hardfork at NYTimes.com.
Send us your trillion dollar investment ideas.
Thank you.