TBPN Live - Big Tech to Pay for Power, Anthropic Abandons Safety, the Adoption Paradox | Diet TBPN
Episode Date: February 26, 2026

Diet TBPN delivers the best of today's TBPN episode in 30 minutes. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays 11–2 PT on X and YouTube, with each episode posted to podcast platforms right after. Described by The New York Times as "Silicon Valley's newest obsession," the show has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

TBPN is made possible by:
Ramp - https://Ramp.com
AppLovin - https://axon.ai
Cisco - https://www.cisco.com
Cognition - https://cognition.ai
Console - https://console.com
CrowdStrike - https://crowdstrike.com
ElevenLabs - https://elevenlabs.io
Figma - https://figma.com
Fin - https://fin.ai
Gemini - https://gemini.google.com
Graphite - https://graphite.com
Gusto - https://gusto.com/tbpn
Kalshi - https://kalshi.com
Labelbox - https://labelbox.com
Lambda - https://lambda.ai
Linear - https://linear.app
MongoDB - https://mongodb.com
NYSE - https://nyse.com
Okta - https://www.okta.com
Phantom - https://phantom.com/cash
Plaid - https://plaid.com
Public - https://public.com
Railway - https://railway.com
Restream - https://restream.io
Sentry - https://sentry.io
Shopify - https://shopify.com/tbpn
Turbopuffer - https://turbopuffer.com
Vanta - https://vanta.com
Vibe - https://vibe.co

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
So I was nerded out about this Fed paper because when you told John Collison 80% of
businesses are getting no value from AI, I'm glad he wasn't here in person because he was
about to throw down. He was about to open up a can of whoop-ass. It was about to be a bar fight in
the Cheeky Pint pub. No, seriously, it was a great question because I think
we all agree that like AI adoption is real, it's valuable, it's happening, but it is a very
interesting statistic, and I think it's a mistake for tech people to dismiss this
stat because of where it's coming from. It's not coming from some doomer,
anti-AI blogger who's going for clicks. This is the National Bureau of
Economic Research. This is a research paper that could be circulated, probably will
be circulated, within the Fed. And it's already getting quoted by the New
York Times in that dot-com bubble, AI bubble piece. And I'm just thinking it
through: this could be something where you see Fed policy or government legislation
that's sort of mismatched with what is actually happening in reality. And so we
should go through some of the stats to actually break this down, because
the headline is that 80% of firms reported that AI was having no impact on their
productivity or employment. And that's actually a misquote. What they
mean by that is that it's not shaping
their hiring plans yet. They actually are using AI. And so basically this stat comes from this
survey from the National Bureau of Economic Research. And it's pretty interesting because a lot of
the polls that you see online are online surveys. They run some digital ads and they say,
are you a CFO of a company? We don't really care what company; we'll pay you $10 to take this
quick survey. And what kind of people want to make $10? A lot of liars. There's a lot of liars out
there who say, I am absolutely a CFO, and please send that Amazon gift card right my way.
And so for this one, they actually did the work.
They called up and ID-verified, and then also reality-checked the position.
So if you say, yeah, I'm the chief pirate officer, I'm the ninja hero, whatever, you've got
some fake title, you're out. So they did some reality checking, and they pulled together
six thousand of these business leaders across firms that are domiciled in the U.S., UK,
Germany, and Australia. The line from John Collison that has been sort of going viral, he dropped it on sources; he said it to us too. It's a good line: no one wants a refund on their tokens. Everyone is using AI. Their spend is increasing. Although I'm sure some CEOs heard that and thought, I kind of do want a refund. I had one team member go absolutely haywire and spend 50 grand. He claims that he one-shotted a rebuild of our entire ERP, but I fired it up and
It didn't even have HTTPS.
What's going on?
The Mac Mini wasn't even plugged in.
Yeah, the Mac Mini wasn't even plugged in.
He was just chatting.
But clearly, there is a disconnect. The Stripe data is very real.
The value creation is very real.
The revenue is very real at the labs.
But when just random Joe Schmo CFOs and CEOs get a call from the feds, they say, yeah,
we're not really getting that much value out of AI.
And so you need to dig into the questions here. There are actually
four key findings. The one headline that the New York Times is pushing is
this 80% number: 80% report little or no impact on employment or productivity.
But there's actually a bunch of positive signals, a bunch of mixed signals,
in here. So, first, 70% of firms actively use AI, particularly younger, more
productive firms. Second, while over two-thirds of top executives regularly use AI, their
average use is only 1.5 hours per week, and one quarter of executives report no AI use at all.
Not for-
Why would I need that? I have a telephone.
The last major finding that we should touch on is firms predict sizable impacts over the
next three years, forecasting AI will boost productivity. Sizable impacts: a productivity increase of
1.4%, which is very sizable if you're an economic researcher, but not particularly
sizable if you're in the fast-takeoff scenario. Measuring AI adoption is a mess. Many
people use AI without even knowing that they're using it, because it's buried deep in
SaaS products that they already daily drive. If I run a coffee shop and I'm using
Toast for payment processing, there are probably some AI features in there already.
And when you go to type in, okay, we're adding a new cinnamon roll to
the menu, there's probably a button now that just says, do you want to just generate an image
of a cinnamon roll? You could upload one still. That's probably a feature that already exists.
But we could also just generate one for you. And you can probably click that, but you're
not like, oh yeah, I'm an AI power user, just because you happen to use Toast and Toast
happened to have implemented some gen AI feature that you haven't really dug into yet.
So some AI isn't even detectable. You could be talking to a customer support agent on the phone
that is AI-generated and not be able to tell. We talked about that airline interaction
that got something like 100,000 likes.
Grace, the woman that had the interaction,
came into the chat yesterday and said it was real.
Yeah, it was real.
Yeah, she out-maneuvered the clanker.
Yeah, but still, think about it:
she's clearly on X, in tech, very AI-aware.
There are probably tons of people out there that are saying,
oh, yeah, my job, you know, every once in a while,
I have to call this service,
and now the person that picks up is, like,
responding pretty quickly.
They haven't noticed that they're actually interacting with AI
or using AI in some capacity.
Yeah, I still think there's room for a research firm focused entirely on diffusion.
So if you had a group of 10 to 20 people that were spending all their time talking to business owners and executives, operators, and getting a sense of how they're actually using this stuff, I think you could put together some really compelling reports around it that would be pretty useful to everyone from AI companies to Wall Street.
Yeah, Adoption Max, after ClusterMax and InferenceMax.
They had to rename it.
Apparently, SemiAnalysis can't use Max for some reason.
So InferenceMax is now InferenceX.
And everyone was saying, you need to just change it to InferenceMock,
which would have been amazing.
But InferenceX obviously has a much more professional tone to it.
What does it mean to actually adopt AI?
That's very vague.
This paper defines it pretty broadly.
So machine learning for data processing.
So that doesn't even necessarily mean LLMs.
That just means ML, which has been around for a very long time.
Text generation using LLMs, that's what we think of
as ChatGPT; visual content creation, so diffusion models;
but also robotics and autonomous vehicles.
And there's a category just for other,
and firms can select multiple.
And so if you selected yes on any of those,
you go in the bucket of AI Adopter.
And 78% of firms in the United States said, yes, they are using AI by this definition.
And you can also dig in further.
So text generation using LLMs is the single most common use case at about 41% of firms.
So flip that around, 59% of firms aren't even using LLMs for text generation or proofreading.
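To make that bucketing concrete, here's a minimal sketch of the any-category adopter definition described above. The category names paraphrase the paper's definition, and the example firm is invented for illustration, not survey data.

```python
# Sketch of the survey's broad "AI adopter" bucketing: selecting ANY
# category puts a firm in the adopter bucket. Category names are
# paraphrased from the definition above; firm records are made up.
CATEGORIES = [
    "ml_data_processing",            # classic ML, not necessarily LLMs
    "llm_text_generation",           # the ChatGPT-style use case
    "visual_content_creation",       # diffusion models
    "robotics_autonomous_vehicles",
    "other",
]

def is_adopter(firm_uses):
    """A firm counts as an AI adopter if it selected any category."""
    return any(firm_uses.get(cat, False) for cat in CATEGORIES)

# A firm using only decade-old ML for data processing still counts:
print(is_adopter({"ml_data_processing": True}))  # True
print(is_adopter({}))                            # False
```

This is why the headline adoption number can be high even when LLM text generation, the thing most people mean by "using AI," sits at only 41% of firms.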
But again, there's a lot of companies where it's like, yeah, we don't generate a lot of text.
Across the four countries that were surveyed, 69% of firms in total said they currently use AI.
I think Australia was behind a little bit,
dragging that down. Only 75% of firms expect to be using AI technology sometime over the next three years.
Tyler's going to have a heart attack. We're going to bump that up to 75%. And this is weird data.
And you can jump in with your pushback whenever you want. But my point is not that they're right.
I think that they're wrong to predict this. I think that AI adoption will be very steep and very dramatic.
But I just think it's important to recognize that this is a paper that people will be citing.
This is a paper that will shape policy.
This is a paper that reveals some misconceptions about the impact AI is having in firms.
Yeah.
I still think it's just so hard to actually quantify this.
The perception, I think, still does matter, because there's a little bit of
potential self-referentialness here, where firms see, oh, AI adoption's low, I don't
need to go and figure out how to adopt it.
And so that's something that I'm also keeping an eye on.
The biggest thing was there was a massive divergence in the expectations around
employment impact. So basically 63% of firms still expect no impact from AI. And that
just completely goes against everything everyone's saying in Silicon Valley. So there's still a lot of
optimism among managers that AI will create more opportunities and new jobs, even as some jobs
become obsolete. My read on this data is that the tech talking point about 50% of white collar
work going away is not a broadly held belief among average business leaders. Now they might be
wrong. I do think AI progress is pacing way ahead of public expectations and most managers
are months behind when it comes to understanding frontier capabilities. The bigger takeaway
for me is just that this survey may be somewhat self-reinforcing. I'll close by thinking
about the nature of polling, and how you actually get stronger data on AI adoption.
And I was thinking back to the presidential cycle. So during the presidential election, pollsters
would call people sort of at random and they would ask them, who are you voting for?
And a lot of people would say they'd lie or they wouldn't say or they wouldn't pick up
the phone if they were voting for a particular candidate.
And so the polling numbers did not wind up matching the final election results very closely.
And so there was the story about neighbor polling, which was more effective, where instead
of calling someone and asking them, who are you voting for, the pollster calls and asks,
Who do you think your neighbors are voting for?
Who's more popular in your community?
Who's more popular on your city block, on your street?
And that wound up sort of removing the gap between stated preference and revealed
preference, and it wound up increasing accuracy.
And so I'd like to see a survey of AI adoption using this technique.
Anyway, we should watch a little bit of a clip from the State of the Union, because Donald
Trump addressed the energy production
question with regard to how hyperscalers will be offsetting the impacts.
Many Americans are also concerned that energy demand from AI data centers could
unfairly drive up their electric utility bills.
Tonight I'm pleased to announce that I have negotiated the new rate payer protection pledge.
You know what that is?
We're telling the major tech companies that they have the obligation to provide for their own power needs.
They can build their own power plants as part of their factory, so that no one's prices
will go up, and in many cases prices of electricity will go down for the community,
and very substantially down.
This is a unique strategy never used in this country before.
We have an old grid.
It could never handle the kind of numbers, the amount of electricity that's needed.
So I'm telling them they can build their own plant.
They're going to produce their own electricity.
It will ensure the companies' ability to get electricity while at the same time lowering prices
of electricity for you, and it could be very substantial,
for all of your cities and towns.
You're going to see some good things happen
over the next number of years.
What's your reaction to that?
I think it's a good start.
I don't know that it will quell any of the fears
around data centers, just given that people kind of see
the potential for this massive structure going up.
They have so much fear about it.
And again, I think it's clearly going to be necessary
to continue to build data centers
in heavily populated areas, but...
How would you rank
the fears currently? Because I put "my energy bill goes up, and that puts pressure on my
income and ability to live my life" at pretty much the top. And then the water thing felt, you know,
secondary, but also important. And then there's the existential fear of doom and apocalypse. There's
also job displacement. And then there's also just, I don't like the slop. How I would rank it:
I think the electricity bill going up is pain today.
And it's so real.
And it's easy to imagine.
And then there's fear around the job-loss narrative, which is sort of secondary.
And opposing a data center in your local area feels like a way to have some agency around that overall job-loss concern.
Yeah.
AI is going to get blamed even if tariffs drive high unemployment.
If people lose their jobs, AI is going to be a scapegoat, and it's going to be used both by executives.
Yeah.
It's the perfect scapegoat for executives and for people frustrated with the job market.
Yeah, yeah.
It's like, oh, my business isn't doing poorly right now.
I'm laying off people because I'm getting so much benefit from AI.
The stock should actually go up.
We're more efficient.
There's going to be a lot of that.
But it does feel like it's a little bit early.
Whereas there are a lot of people that can just hold up their power bill
and show you year-over-year increases.
And if that goes away, and people don't feel that anymore,
and they don't have that evidence to share,
I think that take gets debunked pretty quickly.
I would say I mostly disagree with the idea
that rising energy prices is the main reason
to be against AI.
The rational thing to do then is to say,
okay, before you build a data center in my community,
you have to build a power plant,
so that my energy prices go down.
No one's doing that.
If you look at protests and stuff,
they're not saying, please build a power plant first.
They're saying it's going to destroy
the environment, or the water stuff, or you're going to take all the jobs.
We need to send you to that New Brunswick, New Jersey protest. Build the nuclear power plant first.
So I think it's much more on, basically, job loss. Or, oh, the AI is stealing the IP,
yeah, of Disney or whatever. Anyway, happy Nvidia Day to all who celebrate, except the bears, forget
them, says Tae Kim. He's getting fired up for Nvidia earnings. It's going to be a fun one today.
So we have, this is tearing up the timeline.
A new Guinness World Record, and I want to ask John if this, if you think this should actually count.
This is a Chinese hypercar going for the fastest drift ever.
That is crazy.
But here's the thing.
He doesn't actually pull out of it.
Does he just crash?
Kind of just U-turns?
It's like a really fast U-turn.
I think this counts as a drift.
That's definitely drift.
U-turning counts as drifting.
If you saw that car going by, you'd be like, wow, that's drifting.
The Hyptec SSR, formerly the Hyper SSR, is a high-performance, all-electric two-door supercar.
I mean, this is crazy.
This is out before the Tesla Roadster.
We've never seen a two-door electric supercar like this.
1,225 horsepower, 0 to 60 in 1.9 seconds, and it set the Guinness World Record
for the fastest electric car drift at 213 kilometers per hour,
which is really, really insane.
I feel like you have to actually stay in the turn and not do a U-turn.
What do you mean stay in the turn?
Yeah, I don't think it counts.
You don't think it counts?
For what it's worth, I don't think that counts.
Like, theoretically, the way I think of drifting, you're drifting around a corner, around a turn.
And if you were to spin out during the drift,
then that doesn't count.
If somebody was doing that on a track,
you'd be like, you didn't drift around the corner, you spun out.
Yeah, okay, okay. Yeah, the top comment is "fastest spin-out." That's a power slide at best.
Gabe: fire whoever called this drifting. That's not drifting. That's losing control. Yes, the chat does not like the fake drift.
Call Guinness World Records again. Reset, reset completely. Stolen drift valid.
Palmer is sharing something from Compound, the research firm, from their
annual meeting. They're showing dollars invested in the top 10 companies versus the others,
as a percentage of overall funding. You can see there's just heavy, heavy, heavy
concentration in a few names. Is this Coatue? No, this is, oh, the source is Coatue. Coatue is,
I would say, part of driving this data. Part of the problem, part of the opportunity. I mean, so much of this is
about the AI labs just raising more money than any private companies ever have.
$200 billion. Venture as an asset class in a good year will do like $400 billion, and across OpenAI
at $100 billion, $30 billion for Anthropic, $20 billion for xAI, and then a bunch of neo-labs all picking up
a billion each, you very quickly get to a few companies raising half of all the money, and that's
shown here. It's an incredible amount of concentration. I think a lot of it is due to companies
staying private this long. What was Bill Gurley saying?
He was saying Amazon went public sub a billion dollars. When Facebook went public at like 60 billion,
it was like, wow, crazy, they waited way too long. And now multiple trillion-dollar
companies are still private, which is just an incredible capital sink. So I don't know.
Should you even put those in the same bucket? Are they even venture bets
at this point? If any venture capital fund is putting that in their venture bucket at this point,
it feels ridiculous compared to growth-scale investing. I mean, you're bigger than probably 90% of the S&P.
It's a completely different business.
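The concentration arithmetic above pencils out roughly like this. The figures are the round numbers quoted on air, in billions; the neo-lab total is an assumption for illustration, not sourced data.

```python
# Back-of-the-envelope version of the "few companies raising half of
# all the money" point. All figures in billions of dollars.
lab_raises_bn = {"OpenAI": 100, "Anthropic": 30, "xAI": 20}
neo_labs_bn = 50          # assumed: "a bunch of neo-labs" at ~$1B each
venture_total_bn = 400    # "venture as a class in a good year"

top_labs_bn = sum(lab_raises_bn.values()) + neo_labs_bn
share = top_labs_bn / venture_total_bn
print(f"${top_labs_bn}B of ${venture_total_bn}B = {share:.0%} of venture funding")
# → $200B of $400B = 50% of venture funding
```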
Some kind of relevant data: we're about to witness three of the largest IPOs in history.
SpaceX is targeting one and a half trillion. OpenAI aims for one trillion. Anthropic is valued
at 380 billion. Combined, they're at 2.9 trillion in potential market cap. The scale is
unprecedented, but the real problem isn't the market cap. It's the float. Typical IPOs offer 15 to
25% of their shares to the public markets. This creates enough liquidity for price discovery while
allowing founders and early investors to maintain control. Facebook floated 15% at the 60 billion that
you mentioned, and actually traded down pretty much immediately, right? Google floated 19%.
Alibaba floated 15%. Here's what these three IPOs would require: SpaceX would be
$300 billion, or $225 billion; OpenAI would be $150 billion; Anthropic would be $57 billion.
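The float math here is just market cap times the fraction of shares offered. A quick sketch over the typical 15-25% range, using the market caps quoted in the segment (illustrative figures, not banker estimates):

```python
# Float size = market cap x fraction of shares sold at IPO.
# Market caps (in $B) are the round figures quoted in the segment.
market_caps_bn = {"SpaceX": 1500, "OpenAI": 1000, "Anthropic": 380}

def float_size_bn(cap_bn, float_pct):
    """Dollar value of the shares offered to the public market."""
    return cap_bn * float_pct

for name, cap in market_caps_bn.items():
    low = float_size_bn(cap, 0.15)
    high = float_size_bn(cap, 0.25)
    print(f"{name}: ${low:.0f}B at a 15% float, up to ${high:.0f}B at 25%")
```

At a 15% float that gives $225 billion for SpaceX, $150 billion for OpenAI, and $57 billion for Anthropic, matching the numbers quoted above.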
Yeah, a lot of dollars. He was comparing that to Saudi
Aramco, Alibaba, and SoftBank. Combined, at the IPO, I believe Saudi
Aramco raised $29 billion at a $1.7 trillion market cap. So he's making the case that you can't
really model how the public markets will absorb these companies off of Saudi Aramco,
even though, from a top-line market cap standpoint, it is a good proxy.
We'll see what the labs end up doing. They are obviously wildly capital intensive businesses,
and you can imagine they raise quite a bit more than the Aramco's or the Alibaba's.
Saudi Aramco was such a wild ride.
I feel like they were trying to IPO for like a...
The San Francisco company?
It is.
Yeah.
Founded in California.
Like, I remember hearing Saudi Aramco IPO rumors in like 2015.
I think it actually kicked off in 2016.
They finally got out in 2019.
It was, I mean, it was the largest IPO ever.
There were like a million investment banks attached,
like going all over the world marshalling capital.
Anthropic dials back AI safety commitments:
competitive pressure prompts it to pivot away from a more cautious stance.
Anthropic, the AI company known for its devotion to safety,
is scaling back that commitment.
The company said Tuesday it is softening its core safety policy
to stay competitive with other AI labs.
Anthropic previously paused development work on its model,
if it could be classified as dangerous,
but it said it would end that practice if a comparable or superior model
was released by a competitor,
given that they are at the frontier that kind of opens them up to,
would say perpetually kind of avoiding some of their prior policies.
Sure, sure, sure.
The changes are a dramatic shift from two and a half years ago, when Anthropic published
the guardrails guiding the development and testing of its new models,
establishing the company as one of the most safety-conscious players in the space.
Anthropic faces intense competition from rivals, which regularly release cutting-edge models.
It's also locked in a battle with the Defense Department over how its Claude suite is used,
after it told the Pentagon it couldn't be used for domestic surveillance or autonomous
lethal activities. Anthropic said the safety policy changes are an update based on the speed of AI's
development and a lack of federal AI regulation. Anthropic, which started as an AI safety research lab,
has battled the Trump admin by advocating for state and federal rules on model transparency and guardrails.
The admin has, of course, sought to curb states' ability to regulate AI. The obvious criticism
here would be that you were heavily focused on safety when you were far
away from, I would say, leading in AI, and so switching up now that
there's real competition, switching up on their day ones,
feels a little self-serving. It's possible the money changed them.
It's possible they always
planned to switch up on their day ones once they got to the
level they're at now. It could just be that they realized, like, alignment's pretty
easy and we don't need to worry about it.
What's this new study that's showing,
like they were doing some war game simulation
and almost every model was choosing to drop nukes?
Really?
That's crazy.
That's not good.
I don't like that at all.
The interesting part is this line:
the policy environment has shifted toward
prioritizing AI competitiveness and economic growth,
while safety-oriented discussions have yet
to gain meaningful traction at the federal level.
I still feel like there's a lack of communication
around what safety orientation at the federal level means.
Like, yes, okay, we'll pass the bill that says the AI can't kill everyone.
Like, obviously everyone supports that, but like, what does it actually mean in practice?
Because I think part of why-
Oh, "that's dangerous" means a million different things to different people.
Yeah, part of why I think it's fascinating is they've been pushing for regulation,
as much regulation as possible, seemingly.
Yeah, yeah.
And they're kind of saying, hey, we're not getting what we want,
so now we're not even going to play by the set of rules that we created for ourselves,
because we just want to compete and win.
Yeah.
I mean, like going back to the protesters, there are protesters that would say, like, training
on intellectual property is dangerous.
It's dangerous to my career as a writer.
It's dangerous to my career as an illustrator.
And so, like, this question of danger is just too vague.
And no one has really been able to concretize it in a meaningful way,
and I think that's why it's not getting traction on Capitol Hill.
Yeah, I think there are just so many ways that you can define safety.
So if you read Dario's essays, the thing he brings up over and over is, okay,
we can't let AI get into the hands of an authoritarian government.
Sure.
So there's a real safety narrative that you could run, which is that, regardless
of whether our models are pretty safe, they still need to be better than China's,
for example.
Because if China, an authoritarian government, gets ahead of us, that's very bad.
So even if we're releasing models
that are less safe than we would like, as long as they're better than China's, that's still
a pro-safety position, right?
Except they'll just be distilled within six weeks.
Yeah, but, obviously, I would be very surprised if Anthropic keeps
the same guardrails on API access.
Well, BuccoCapital Bloke has a solution.
He says, it's simple.
We kill Claude.
It's simple.
We kill the Batman.
Well, that was in regards to the SaaSpocalypse.
Okay, okay.
Who knows?
There's so many headlines and the timeline moves so quickly.
Anthropic antagonizing the Department of War, the open source community, the entire media industry, the general population, other developers, other labs, foreign governments, and nearly every single person on Earth.
What is the plan here?
Sell Claude subscriptions to aliens?
Edward says: it ain't easy having principles.
Hackers use Claude to steal 150 gigabytes of Mexican government data.
It's crazy.
They told Claude they were doing a bug bounty.
Claude initially refused.
A hacker just kept asking and manages to successfully steal some documents.
Apparently, it's four state governments, 195 million taxpayer records, voter records, government credentials.
Anthropic investigated the claims, disrupted the activity, and banned the accounts involved.
The company feeds examples of malicious activity back into Claude to learn from.
In this instance, the hacker was able to
continuously probe Claude until they were able to jailbreak it.
I was listening to someone talking about how the ability to jailbreak has
generated him tens of thousands of dollars in profit.
He was kind of a hustle-mindset guy, and I was just laughing, because whatever
you're doing after you jailbreak it is probably not good, and so you should probably stop.
But he was talking about how he can sell so many more courses now that he's jailbroken
ChatGPT or whatever.
Duran says not to worry.
They'll hit usage limits before anything bad can happen.
This was interesting.
Rob Wiblin had a guest on his podcast.
The guest is saying that
every AI lab is working to make their AI helpful, harmless, and honest.
The guest thinks this is a complete wrong turn,
and that aligning AI to human values is actively dangerous.
Today in nominative determinism: the guest's name is
Max Harms.
Max Harms.
I feel like that name,
maybe you've got to go with Maxwell or something.
I don't know.
Perplexity computer.
The Perplexity Computer launch video.
What is Perplexity Computer?
Let's pull up this video.
Perplexity, the official account says,
Perplexity Computer unifies every current AI capability into one system.
It can research, design, code, deploy, and manage,
any project end to end.
Okay, hmm.
So it should be able to get a soundboard app
in the app store, right?
Manage any project, code, deploy, design, research.
It should be able to do that from start to finish.
One prompt, sound board in the app store
using the TBPN sound effects,
which are available online, which we have up there.
This is a good benchmark.
Let's give it a try.
And you can give it a try at perplexity.
Go check it out.
I'm just very curious to see how this does.
It feels like, again, going from consumer LLMs to a net-new product that is objectively just as competitive.
And we'll see.
Best sellers on Substack for finance are all doomers.
We got to do TBBF for D.
And of course, the treaty is not in.
This is so obvious.
It's a tree.
No, no, this is, we need to treat zone this a little bit.
A doomer.
He's not a doomer.
There's plenty of bullish, very pro-AI stuff.
But he definitely shot to the top of virality and the top of the charts on the back of doom.
I've lived this on YouTube:
you put a negative title up and you just get 10 times more views.
But they're lower quality.
And so you've got to balance all that out.
It's really hard to go viral with something like: everything's fine.
Everything's going well.
Don't worry.
Don't click this because you're
scared. Click this because everything's kind of the same as it always has been, and you're
going to be fine. I'd say stuff's cool, but it's not really going to change that
much. It's going to be pretty incremental. That is not getting clicks.
You need to be telling a tall tale. You need to be spinning a yarn. It's a bull market in
yarn spinning, folks. Get ready. Get out the yarn and start spinning. So, there's news from
Steve Schwartz. Get the gong. Okay. Hit me. Tell me.
Steven over at Whop says: we're excited to announce that Tether, the largest stablecoin company in the world,
is making a strategic investment of $200 million into Whop, valuing us at $1.6 billion.
Our partnership with Tether marks a major step in building the world's largest internet market.
Tether is committed to enabling everyone in the world to participate in the new internet economy.
The way humans work and create value is changing fast.
The world needs both an open internet market, giving people a platform to conduct business,
as well as a transparent payments network.
Fast, cheap, global.
Exactly.
And so, yeah, fast, cheap, global.
Mike Isaac is saying what we're all thinking.
Ready for this to be over.
B.BH talking about the Warner Brothers Discovery, Netflix.
It's in the paper every single day, every single day.
Paramount increases Warner bid.
We get it.
You guys want to acquire this company.
Mark Zuckerberg is planning a stablecoin comeback.
They also have a banger deal with AMD.
And if you head to the bar this weekend and you drink too much, you should just say that you were the victim of a distillation attack.
That's the correct turn of phrase.
Anyway, thank you for watching.
Leave us five stars on Apple Podcasts and Spotify.
Have a wonderful day.
