Stuff You Should Know - Large Language Models and You
Episode Date: June 20, 2023

There is a good chance that in March of 2023, humans crossed a threshold into a transformative new era, when a new, smarter type of AI was let loose in the wild and an AI arms race began.
Transcript
So, there is a ton of stuff they don't want you to know.
Yeah, like does the US government really have alien technology?
Or what about the future of AI?
What happens when computers actually learn to think?
Could there be a serial killer in your town?
From UFOs to psychic powers and government cover-ups, from unsolved crimes to the bleeding
edge of science, history is riddled with unexplained events.
Listen to Stuff They Don't Want You To Know
on the iHeartRadio app, Apple Podcasts,
or wherever you find your favorite shows.
Rose!
Fran, how did we make it to the second season
of our podcast and we still have all these opinions?
Uh, pardon my non-binary vibes, but I'm just like, does it all mean to be explained?
Hatch took the glasses off her face, put them on America, and those are Betty's glasses.
That's so shh.
Yeah!
In our second season, we'll be covering topics like David Lynch, fanfiction, Golden Girls,
and Star Wars, with guests including Hari Nef, Frankie Grande, Bobby Finger, and Mark Indelicato.
Like a Virgin is proud to be a part of the Outspoken Network from iHeart Podcasts.
Listen on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Welcome to Stuff You Should Know, a production of iHeartRadio.
Hey and welcome to the podcast.
I'm Josh and there's Chuck and Jerry's here too.
And that makes this a timely topical,
not timely as in four times.
I meant to say timely,
topical,
episode of Stuff You Should Know.
That's a great forecast of how it's going to go, too, I think.
Fork cast?
Oh, God.
Is this really us, or is it AI-generated Josh and Chuck?
Until this year, I would have been like, don't be preposterous.
Now I'm like, just give it some time.
You know how we would know? It's if one of us said,
of course it's the real us.
We met in the office and bonded over our Van Halen denim vests.
Yeah, we'd be like, sucker, you just fell for the oldest trap
in the book, the Sicilian switcheroo.
Yeah, AKA fake Wikipedia entry stuff.
Is that still up?
I haven't been to our Wikipedia page in years, so I don't know.
Well, regardless, we're not talking about Wikipedia, although it does kind of fall into the...
Sure, it figures in.
...the rubric of this.
Not sure if I used that word correctly, but it felt right. We're talking today about what are in the biz known as large language models,
but more colloquially known by basically their public facing names,
things like ChatGPT, or Bard, or Bing AI.
But essentially what they all are are algorithms, artificially intelligent algorithms
that are trained on text, tons and tons and tons
of text written English language stuff,
that are so good at recognizing patterns in those things
that they can actually simulate a conversation with you,
the person on the other side of the computer
asking them questions.
Yeah, this is, it's going to be fun doing this episode over again every six months.
That's right. Until we're replaced. Totally. So I think we should say, though, like this is,
like this is such a huge wide topic that is just, we're in the ignition phase, like the fuse just caught, right?
Yeah.
Um, that we're going to really try to keep it narrow, just strictly the large language
models and the immediate effect they're planning on having or they're going to have, hopefully
not planning on anything yet.
Um, but I really would like to do one on how to keep AI friendly and keeping it from running away.
So I say we just kind of avoid that whole kind of stuff.
And really I'm talking to myself right now,
at least for this episode, okay?
Yeah, we're gonna kind of explain how these things work
and what the initial applications look like
and kind of where we are right now
and then what it could mean for like jobs
and the economy and stuff like
that. But you're right. It is a whole ball of wax, as you well know. And this is a great
time to plug the end of the world with Josh Clark, which is still out there. You can still
listen to it. The truth is out there in the form of the end of the world with Josh Clark.
Yeah. That's a great 10-part series that you did. And AI is among those existential risks
that you covered. Yeah, it's episode four, I believe. And Chuck, like just from
having done that research and forming my own opinions over the years about this,
it's staggering to me that we've just entered
what's going to be the most revolutionary, transitional
phase in the entire history of humanity, you can argue.
Everything else took place over very long periods of time.
We started playing with stone tools and then we started building cities.
All this stuff took place over thousands and thousands, hundreds of thousands, millions of years.
We just entered a period where stuff's gonna start happening within weeks pretty soon.
As of 2023, the whole thing just started. Yeah, and none of this was around like this when you did The End of the World. And that was like five years ago. Yeah, it was 2018. All this was being worked on,
but we hadn't hit that point.
Like, all this was pretty much predicted and projected.
Right.
It was clear that this was the direction people were going.
And it's here, baby.
It is.
It's nuts, but it's actually here.
So what we're talking about are large language models,
which is a type of neural network
that are easiest to think of in terms of like a human brain where you have neurons
that are connected to other neurons, but they're not connected to some other neurons. And all of those
neural connections kind of are activated by inputs that put out something like your conscious experience
or you say a sentence or something like that. It's very similar in its most basic nature, I guess.
Yeah, I mean, Livia helped us out with this
and she did a great job, I think.
And Google themselves basically say,
you know what, it's really sort of like,
how, when you go to search for something
on our search engine tool.
Mm-hmm.
Is there a better way I could have said that?
I don't think so.
Our handy search bar.
Then, you know, basically what we're doing is autocompleting, like, an analysis of,
like, probability, like, what you're typing. If you type in, you know, John Coltrane,
or start to type in "John Col," it might finish it out as "John Coltrane A Love Supreme"
or "John Coltrane jazz,"
and they're saying, you know,
what is happening now with these LLMs is,
it's the same thing, it's just,
it's got way more data, way more calculations
in the algorithm, so it's not just completing
like a word or two, it's potentially, you know,
hey, rewrite the Bible or whatever you tell it to do.
Yeah, and the big difference is in the amount of info
the neural network is capable of taking into consideration.
Yeah.
So may I for a minute?
Oh, please.
So imagine with one of those auto complete suggestion tools like they have on
on Google search. If there's 500,000 words in the English language, that means that you have 500,000
words that a person could possibly put in. That's the input into the neural network. And then there's
500,000 possible words that that network could put out. So you have 500,000 connections to 500,000 other connections.
So it's like, I think 250 billion connections
you're starting with right there.
That's just the autocomplete suggestion
because it based on those connections
and studying words in the English language
and phrases in the English language,
it places emphasis more on some connections than others.
So John Coltrane is, what's his album? I can't remember.
A Love Supreme is a classic album.
The Love Supreme is a classic album.
So John Coltrane is much more closely related to A Love Supreme in the mind of a neural
network than John Coltrane, Charlie Brown Disco is, right?
Just to take something off the top of my head.
And so based on that analysis and that weight
that it gives to some things other than others,
it suggests those words.
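To make that weighting idea concrete, here's a minimal sketch in Python. The phrases and counts are invented for the example, and a real model learns weights across billions of connections rather than storing a lookup table, but the "suggest the most heavily weighted connection" logic is the same.

```python
from collections import Counter

# Invented co-occurrence counts standing in for learned connection weights:
# how often each completion followed "john coltrane" in some body of text.
completions = Counter({
    "a love supreme": 9412,
    "jazz": 7203,
    "giant steps": 5871,
    "charlie brown disco": 1,  # technically connected, but weighted near zero
})

def suggest(k=3):
    """Return the k completions with the heaviest connections, with probabilities."""
    total = sum(completions.values())
    return [(phrase, count / total) for phrase, count in completions.most_common(k)]

for phrase, prob in suggest():
    print(f"john coltrane {phrase}: {prob:.1%}")
```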
What the large language models
like ChatGPT, that we're seeing today,
they do the same thing, they have all those same connections.
But the analysis they do, the weight
that they put on the connections,
is so much more advanced, yes, and exponential, that it's actually not just capable of suggesting
the next word, it's capable of holding a conversation with you. That's how much it understands
how the English language works.
Yeah, like if you said, you know, write a story about wintertime, and you know, it got to the word snowy, it would go through, you know, I mean, and this is like instantaneously,
it's doing these calculations. Right. It might say, like, you know, oh, hillside or winter
or snowy day, like these are all things that make sense because I've learned that that makes sense, I being, you know, the chatbot or whatever,
but it probably won't be snowy chicken wing
because that doesn't seem to fit the algorithm
and it learns all this stuff by reading the internet
and you know, put a pin in that because
that's pretty thorny for a whole lot of reasons, but not
the least of which is the fact that some companies, and again, we'll get to it, are starting
to say, like, wait a minute, like we created this content and now you're just scraping
it, and then using it and charging people to use it.
And we're not getting a piece of it.
So that's just one tiny little thorn. But in
order to do this, like you said, it's like, it needs to know more. And you came up with
a great example, like the word lemon. And a very basic way, it might understand that a
lemon is roundish and sour and yellow. But if it needs to get smart enough to really
write as if it were a human, it needs
to know that it can make lemonade and that it grows on a tree in these agricultural zones
and that it's a citrus fruit because it has to be able to group lemon together with
like things. And those groups are either like, you know, hey, it's super similar to this,
like maybe other citrus fruits, or it's, you know, sort of similar to this but not as similar, like desserts,
and then you get to chicken wings. Although actually, that's not true, because lemon chicken wings.
You could have lemon pepper chicken wings, right?
Yeah, that's what I'm saying. So, yeah. But the instance you use is like Greenland, which I guess
doesn't grow lemons. No, but I mean, I'm sure they import lemons. So there's some connection there. But based on
how connected, how often these words show up together and the billions and billions of lines of
text that these large language models are trained on, it starts to get more and more dimensions
and making more and more connections, right? So as
that happens, words start to cluster together like lemon and pie and icebox,
all kind of cluster together. And by taking words and understanding how they
connect to other words, you can take the English language, just the words of the
English language and make meaning out of it.
That's all we do.
And large language models are capable of doing the same thing, but it's really, really
important for you to understand that the large language model doesn't understand what it's
doing.
It doesn't have any meaning for the word lemon whatsoever. All of these dimensions that it
weighs to decide what word it should use next, they're called embeddings. They're
just numerical representations. So the higher the number, the likelier it is it goes with
the word that the user just put in, or that the large language model just used; the
lower the number, the further away it is in the cluster, right?
It doesn't understand what it's saying to you.
And as we'll see later, that accounts for a phenomenon
that we're gonna have to overcome for them to get smarter,
which is called hallucinations.
But that's a really critically important thing to remember.
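For a concrete, hedged picture of what those embeddings are, here's a toy sketch. The three-dimensional vectors are made up for illustration (real models use hundreds or thousands of dimensions per word), but cosine similarity is the standard arithmetic for measuring how tightly words cluster.

```python
import numpy as np

# Made-up 3-D embeddings; real models learn hundreds or thousands of dimensions.
embeddings = {
    "lemon": np.array([0.90, 0.80, 0.10]),
    "lime":  np.array([0.85, 0.75, 0.15]),
    "pie":   np.array([0.60, 0.50, 0.40]),
    "chicken wing": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a, b):
    """Close to 1.0 means same cluster; close to 0 means barely related."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("lime", "pie", "chicken wing"):
    score = cosine_similarity(embeddings["lemon"], embeddings[word])
    print(f"lemon vs {word}: {score:.2f}")
```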
Yeah, another critically important thing to remember is,
and you probably get this from what we said
so far, if you already know a little bit about it, but there's no programmer that's teaching
these things and typing in inputs and then saying, here's how you learn things.
It's doing this on its own and it's learning things on its own.
What we're talking about eventually, where it could get super scary,
is when it gets to what's called emergent abilities where it's so powerful, and there's
so much data that the nuance that's missing now will be there.
Right. Exactly. So, yeah, that's when things are going to get even harder to understand,
you know, to remind yourself that you're talking to a machine, you know?
Yeah, and the other thing, too, though, even though I said humans aren't inputting this data,
one of the big things that is allowing this stuff to get smarter is human feedback.
It's called RLHF, which is reinforcement learning from human feedback. So at the end of your, whatever you've told it to create,
you can go back in and say, well, you got this wrong
and this wrong, this is what that really is.
And it says, thank you.
I have now just gotten smarter.
Right, exactly.
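Here's a heavily simplified sketch of that feedback loop. Real RLHF trains a separate reward model on human preference rankings and then fine-tunes the LLM against it; this toy version just nudges per-answer scores toward a thumbs-up or thumbs-down, which is enough to show why corrections steer the system toward answers people approve of. The candidate answers and the 0.5 learning rate are invented for the example.

```python
# Toy stand-in for reinforcement learning from human feedback (RLHF).
# Real RLHF trains a reward model on ranked outputs and fine-tunes the
# LLM against it; this sketch just shows the direction of the loop.

scores = {
    "Canberra is the capital of Australia.": 0.0,
    "Sydney is the capital of Australia.": 0.0,
}
LEARNING_RATE = 0.5  # invented; controls how hard each rating nudges the score

def rate(answer, thumbs_up):
    """Nudge an answer's score toward +1 (approved) or -1 (corrected)."""
    target = 1.0 if thumbs_up else -1.0
    scores[answer] += LEARNING_RATE * (target - scores[answer])

def preferred_answer():
    return max(scores, key=scores.get)

rate("Sydney is the capital of Australia.", thumbs_up=False)   # user correction
rate("Canberra is the capital of Australia.", thumbs_up=True)  # user approval
print(preferred_answer())  # now leans toward the corrected answer
```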
So one of the reasons why these things are suddenly just so smart
and can say, thank you, I've just gotten so much smarter,
is because
of a paper that Google engineers published openly in 2017 describing what's now like the
essential ingredient for a large language model or probably any neural network from now
on. It's called a transformer. And rather than analyzing each bit of text, let's
take one of the very famous examples.
Marvin Minsky was one of the founders of the field of AI, and his son Henry
prompted GPT to describe what losing a sock in the dryer is like in the style of the Declaration
of Independence, right?
Right. Yeah.
So depending on how Henry Minsky typed that in: before transformers, the neural network
would analyze each word and do it one increment at a time.
Maybe not even words.
Sometimes strings of just letters together, phonemes even.
If you can believe it, phonemes even.
Right.
And what the Transformer does is it changes that.
It allows it to analyze everything
all at once. So it's so much faster, not just in putting out a coherent answer to your
question or request, but in also training itself on that text. So you just feed it the
internet and it starts analyzing it and self correcting. It trains itself. It learns
on its own and that
unfortunately also makes AI, including large language models, what are known as black boxes.
Yeah, we don't know how they're doing what they're doing.
We have a good idea how to make them do the things we want, but the in-between stuff,
we cannot a hundred percent say what they're doing, how they come up with these
conclusions. Which also explains
hallucinations and them not really making sense to us.
Yeah, and the T in GPT stands for transformer.
It's a generative, pre-trained transformer.
And the reason they call it GPT for short is because if they call it generative, pre-trained
transformer, everybody would be scared out of their mind.
Yeah, we just start running around to nowhere in particular.
Yeah.
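To make the transformer idea slightly more concrete, here's a minimal sketch of single-head self-attention, the piece that lets the model weigh every token in the input at once instead of one increment at a time. It's a toy in plain NumPy, not anyone's production implementation; the tiny embedding matrix is invented, and real transformers also apply learned projection matrices for the queries, keys, and values.

```python
import numpy as np

def self_attention(X):
    """Toy single-head self-attention over a (num_tokens, dim) embedding matrix.
    Every token attends to every other token in one shot, which is what lets
    transformers analyze the whole input at once instead of word by word."""
    d = X.shape[1]
    # Real transformers compute Q, K, V with learned weight matrices;
    # we use the raw embeddings to keep the sketch short.
    Q = K = V = X
    scores = Q @ K.T / np.sqrt(d)                   # token-to-token relevance
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V                              # context-aware vector per token

# Three invented 4-D embeddings standing in for the tokens "losing a sock":
tokens = np.array([[0.1, 0.9, 0.0, 0.2],
                   [0.8, 0.1, 0.3, 0.0],
                   [0.2, 0.7, 0.1, 0.5]])
print(self_attention(tokens))   # shape (3, 4)
```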
Should we take a break?
I say we do.
I think that we kind of explain that fairly well.
Yeah, fairly robust beginning, my friend. There's a ton of stuff they don't want you to know.
Does the US government really have alien technology?
And what about the future of artificial intelligence, AI?
What happens when computers learn to think?
Could there be a serial killer in your town?
From UFOs to psychic powers, and government cover-ups, from unsolved crimes to the bleeding
edge of science, history is riddled with unexplained events.
We spent a decade applying critical thinking to some of the most bizarre phenomena in civilization
and beyond. Each week we dive deep into unsolved mysteries, conspiracy
theories and actual conspiracies. You've heard about these things, but what's the
full story? Listen to Stuff They Don't Want You To Know on the iHeartRadio app,
Apple Podcasts, or wherever you find your favorite shows.
What's up fam? I'm Brian Ford, artisan baker
and host of the new podcast, Flaky Biscuit.
On this podcast, I'm gonna get to know my guests
by cooking up their favorite nostalgic meal.
It could be anything from Twinkies
to mom's Thanksgiving dressing.
Sometimes I might get it wrong, sometimes I'll get it right.
I'm so happy it's good, because man, if it wasn't,
I'd be like, you know, everybody but my mom.
Yeah.
Either way, we will have a blast.
You'll have access to every recipe
so you can cook and bake alongside me
as I talk to artists, musicians, and chefs
about how this meal guided them to success.
And these nostalgic meals, fam, they inspire one of a kind conversations.
When I bake this recipe, it hit me like a ton of bricks.
Oh.
Um, does this podcast come with a therapist?
Ha, ha, ha. It can.
Listen to Flaky Biscuit every Tuesday on the iHeartRadio app, Apple Podcasts, or
wherever you get your podcasts.
With all the chaos and turmoil in the news, it feels like we never get to hear about the
good happening in our world.
We're on a mission to change that.
Welcome to the good stuff.
I'm Jacob Schick, a third generation combat Marine.
And I'm his co-host and wife Ashley Schick.
We believe everyone has a story to tell,
not only about the peaks, but the valleys they've been through
to get them to where they are today,
as we get to tell stories of inspiration and perseverance.
We're joined by some amazing guests who share the lessons
they've learned that shaped who they are
and what they're doing to pay it forward and give back.
Our guests range from some of my fellow warriors
to NFL cheerleaders, to extreme sports legends,
to New York City firefighters, who survived 9-11.
Listen to The Good Stuff on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
All right, so OpenAI launched their ChatGPT very recently, in November of 2022. And just in that brief window,
well, it was like six or eight months ago,
things are kind of flying high
and all kinds of companies are launching their own stuff.
Some of it is, well, first of all, OpenAI is now at GPT-4.
Yes. And I'm sure more will be coming in
quick succession.
But companies are launching, and we're going to talk about all of them, kind of like broad
stuff, like ChatGPT, and really specific stuff, like, well, hey, I'm in the banking business.
Can we just design something for banking or just something for real estate?
So they're also getting specific on a smaller level
in addition to these large ones like Google and Microsoft
and Bing and all that stuff.
Yeah, and to get specific, all you have to do
is take an existing GPT, large language model,
and add some software that helps guide it a little more. And there you go. Or just
training it on specific stuff like medical notes. That's another one too. One of the other
things that's changed very quickly between November of 2022 and March of 2023 when I think
GPT-4 became available. Just think about that.
That's such a short amount of time.
All of a sudden now, you can take a picture and feed it into a large language model, and
it will describe the picture.
It will look at the picture essentially and describe what's going on.
There's a demonstration from one of the guys from OpenAI who doodles like on a little scrapbook,
piece of paper, some ideas for a website. He takes a picture of that paper that he's written on,
feeds it into ChatGPT4, and it builds a website for him in a couple of minutes that functions the way
he was thinking of on the doodled scratch pad.
I wonder if the only way to slow this stuff down is to literally slow down the internet again.
Go back to like the old days when a picture would load like three lines at a time.
And you'd say describe a picture of be like someone's hair.
Someone's nose. Someone's chin. Don't forget the sour link between.
Yeah, an hour later you have a complete picture. Right.
I don't think there's any way to slow this down, because we're in,
not to be alarmist, but we're in the second-worst-case scenario for
introducing AI to the world. Which is, rather than state actors doing this, which would be really bad,
we have private companies doing it, which
is just slightly less bad. But they're competing in an arms race to get the best, brightest,
smartest AI out there as fast as they can, and they're not taking into account all of
the downsides to it. They're just throwing it out there as much as they can, because one
of the ways that these things get smarter is by interacting with the public.
They get better and better at what they do from getting feedback just from people using
them.
Yeah, even if it's for just some goofy, fun thing you're doing, it's learning from that.
You talked about the advancements made between the launch of 3.5 and GPT-4, and 3.5 scored in the 10th
percentile when it took the Uniform Bar Exam. And 4 has already scored in the 90th percentile.
And they found that GPT-4 is really, it's great at taking tests and it's scoring really well on tests, particularly standardized
tests.
I think it basically aced all of the AP tests that you would take to get into AP classes
except, well, it took a couple AP classes.
I'm totally kidding.
But the max score is five and I think it got fives kind of on everything except for
math.
Uh, it got a four. Uh, and math, it's kind of, it's weird.
Uh, it's kind of weirdly counterintuitive, because it's a numbers-based thing.
Uh, but it has more trouble with math, uh, like rudimentary math, than it does with, like,
constructing a paragraph on, you know, Shakespeare or something, or as Shakespeare. It does better with, like, math
word problems and more advanced math than it does just at basic math, apparently.
Or describing how a formula functions using writing. The thing is, though, and this
is another great example of how fast this is moving, they've already figured out that all you have to do is what's called prompting, where you basically take the incorrect answer that
the large language model gives you, and then basically re-explain it by breaking it down
into different parts. And it learns as you're doing that. And then all of a sudden, it
gets better at math. So they've figured out tools, extra software you can lay over a GPT, that basically teach it to do math or prompt
it in the correct way so that you get the answer you're looking for that's based on math.
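For a sense of what that kind of prompting looks like in practice, here's a hedged sketch. The ask_llm helper is hypothetical, a placeholder for whichever model client you'd actually call; the technique itself, often called chain-of-thought prompting, is just spelling the problem out as labeled steps so the model's next-word machinery walks through the arithmetic instead of jumping to a confident guess.

```python
# Hypothetical helper; wire this up to whatever LLM client you actually use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("call a real large language model here")

# A bare question, which models often get confidently wrong:
naive_prompt = "What is 17 * 24? Answer with the number only."

# The same question broken into parts, chain-of-thought style:
stepwise_prompt = (
    "What is 17 * 24?\n"
    "Work it out step by step:\n"
    "1. First compute 17 * 20.\n"
    "2. Then compute 17 * 4.\n"
    "3. Add the two partial products and state the total.\n"
)

# Worked by hand: 17 * 20 = 340, 17 * 4 = 68, 340 + 68 = 408.
# The stepwise prompt steers the model along that same path.
print(stepwise_prompt)
```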
Yeah. I mean, every time I read something that said, well, right now, it's not so great
at this. I just assume that, and we'll have that worked out in the next few weeks.
Yeah, pretty much. I mean, because as these things get bigger and smarter in the data sets
that they're trained on, get wider, they're just going to get better and better at this.
Because, again, they learn from their mistakes.
Yeah, just like humans, right?
Exactly.
So you mentioned these hallucinations kind of briefly, and this is one of the big problems with them so far that,
again, I'm sure they will figure this out in due time,
but one example that Livia found was
to prompt it with what mammal lays the largest eggs.
And one of the problems is when it gives hallucinations
or wrong answers,
it, you know, it's not saying like, well, I'm not so sure about this. It's saying this
is true, just like anything else I'm spitting out with a lot of confidence. So the answer
there was the mammal that lays the largest eggs is the elephant. An elephant's eggs are
so small that they are often invisible to the naked eye. So they're not commonly known
to lay eggs at all.
However, in terms of sheer size,
an elephant's eggs are the largest of any mammal.
Which makes sense in a really weird way,
if you think about it.
Sure, this little invisible eggs.
Yeah, because mammals don't lay eggs, obviously,
but the way that it put it was,
if you didn't know that mammals don't lay eggs,
or you didn't know anything about elephants,
you'd be like, oh, that's interesting,
and take that as a fact, because it's saying this confidently.
And I saw written somewhere that one GPT actually argued with the user and told them they
were wrong when they told the GPT that it was wrong.
Yeah, which is not a behavior you want at all.
But that's what's termed a hallucination. And a good way to understand a hallucination that I saw is that, again, this GPT,
this large language model doesn't have any idea what it's saying means.
It's just picked up, it's noticed patterns that we've not noticed before.
And it's putting them together in nonsensical ways,
but they're still sensible.
If you read them, it's just factually,
they're not sensible,
because it doesn't have any fact checking necessarily.
It just knows what it's finding
kind of correlates with other things.
So there's some sensible stuff,
and they're like the phrase invisible to the naked eye,
or laying eggs, or elephants and mammals.
Like this stuff all makes sense.
It's not like these are just strings of letters.
Yeah, yeah, sure.
It's just putting them together in ways that are not true.
They're factually incorrect.
And that's a hallucination.
It's not like the computer is thinking that this is true.
It doesn't understand things like truth and falsehood. It just creates,
and some of the time it gets it really, really wrong.
Yeah, it doesn't know what an elephant is. No, it just knows that it correlates, in some really
small way that we've never noticed before, to the word eggs. Yeah, and this is, that's a problem if it's just like, oh, well, this thing isn't quite
where it needs to be at because it thinks elephants lay eggs.
But there have already been plenty of real world examples where people are using this and
it's screwing things up for their business or for commerce or something.
With their clients?
Yeah, with their clients.
Well, that's one.
That was an attorney who was representing
a passenger who was suing an airline, and used ChatGPT to do research. And it came up with a bunch
of fake cases that this attorney didn't bother to fact check, I guess. And there were like a dozen
fake cases that this attorney submitted in his brief. And it wasn't like, so like from what I understand,
like the brief was largely compiled
from what the GPT spit out.
So like it wasn't like the GPT just made up
the names of cases, it made up the names of cases
and then described the background of the case
and how they related to the case at hand, right?
So it just completely made these up out of the blue. And yeah, that
lawyer had no idea. He said in a brief later that he had no idea that this thing was capable
of being incorrect. So it was like one of the first times he used it and he threw himself
on the mercy of the court. And I'm not quite sure exactly what happened. I think they're
still figuring out what to do about it. Maybe just go spend some quality time with your little chatbot. Exactly.
Similarly, Meta had a large language model that basically got laughed off of the internet
because it was very science-focused and it would make up things that just didn't exist.
Like, mathematical formula.
Like there was one called the Yoko, or no, the Lennon-Ono correlation or something like that,
completely made up this thing that I read
and I was like, oh, that's interesting.
I had no idea.
I have never heard this stuff before
and I would have just thought that it was real had
I not realized, you know,
ahead of time that it was a hallucination,
that this math thing does not exist anywhere.
And it even attributed it to a living mathematician,
said that this was the guy who discovered it.
So it really can get hard to discern what's true
and what's not, which again is a really big problem
if we haven't gotten that across yet.
Did they say that mathematician's name was Math B. Calculus?
Yeah.
Another example, and this is, we know, we're going to talk a little bit about replacing
jobs in the various ways that can and already is happening.
But CNET, for instance, said, oh, you know what?
Let me try this thing out and see if we can get it to write an actual story.
And so they got an AI tool to write one on what is compound interest. And
it was just, there was a lot of stuff wrong in it. There was some plagiarism, you know,
directly lifted. So there's, you know, these things aren't foolproof yet. And it's definitely
not something that should be utilized for like a public-facing website that's supposed to have like really solid,
vetted articles about, well, especially CNET, about tech.
Right. Of all things. That's something that the National Eating Disorder Association
found out the hard way. They apparently entirely replaced their human-staffed
hotline with a chatbot. And supposedly they were accused of doing this
to bust the union that had formed there.
And so when they released the chatbot into the world
and it started offering advice
to people suffering from eating disorders,
it gave standard weight loss advice,
which you probably get from your doctor
who didn't realize you had an eating disorder,
but in the context of an eating disorder, it was all like trigger, trigger, trigger, one right after the other.
It was telling these people with eating disorders to like weigh yourself every week and try to cut
out 500 to a thousand calories a day and you'll lose some weight. And just stuff that would set
everybody off. And very quickly they took it offline and I guess brought their humans back.
Hopefully it doubled the pay.
Yeah, but I mean, this stuff is,
that's already being solved as well
because they point out that GPT-4
has already scored 40% higher than 3.5.
Again, just a handful of months ago
on these accuracy tests.
So that is even getting better.
And you know, where I guess people want it to get to is to the point where it doesn't need
human supervision to spit out really, really accurate stuff.
Exactly.
That's pretty much where they're hoping to get it.
And I mean, it's just, they have the model, they have everything they need.
They just, it just has to be tinkered with now.
Should we take another break?
I think so.
All right, we'll take another break
and then we'll get into sort of the economics of it
and whether or not your job may be at risk, right after this.
Hola, hola!
It's your girl, Chiquis.
And I'm back with brand new episodes of my podcasts,
Cheekies and Chill and Dear Cheekies.
Last season, I shared so many intimate stories
with you guys and had conversations
with some of my favorite people.
This season, we're picking up right where we left off.
We'll talk about everything from spirituality,
relationships, women's health, and so much
more.
And guess what?
Dear Chiquis is also back.
I'll keep answering all of your questions.
I'll be answering even more of your questions.
And honestly, guys, I cannot wait.
So don't miss a single moment of Chiquis and Chill and Dear Chiquis, as part
of the My Cultura Podcast Network, available on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
So one of the astounding things about this, that really caught everybody off guard, is that these
large language models, the jobs they're coming after are white-collar knowledge jobs.
Yeah, they're so good at things like writing. They're good at researching. They're good at analyzing photos now.
And that's a huge sea change from what it's been like traditionally, right? Wherever, whenever we've automated things, it's usually replaced manual labor.
Now it's the manual labor that's safe in this generation of automation.
It's the white collar knowledge jobs that are at risk.
And not just white-collar jobs, but artists, yeah, like, people
who have nothing to do with white-collar jobs,
they're at risk as well.
Yeah, I'm sure the farmers are all sitting around going,
how's that going for you?
Yeah, how's that taste?
So yeah, art. When DALL-E
came out, that was an art tool where a lot of people,
a lot of people I know would input. I guess I never did it.
I never do anything like that, not because I'm afraid or anything, I'm just
not interested, basically.
But
I guess you would submit, like, a photograph of yourself, and then it would say, well, here's you as a superhero, or here's you as a
Renaissance painting or whatever.
And
You know it's sourcing images from real artists
throughout history, from Getty Images and places like that.
And there are already artists that are
suing for infringement.
Getty Images is suing for infringement and saying,
you can't, even if you're mixing up things
and it's not like a Rembrandt, let's say,
you're using all of the artists
from that era and mashing it up together in a way that we think basically is illegal.
Yeah, they say this doesn't count as transformative use, which is typically protected under the
law.
This is instead just some sort of mash up that a machine is doing.
To me, it's almost splitting hairs, but I also very much get where they're coming from.
Not just a place of panic, but a real basis in fact that these things are not transforming
because they don't understand what they're doing.
Yeah, and companies are taking notice very quickly.
There are some companies, and I'm sure everyone's
going to kind of fall in line that are already saying,
well, no, you've got to start paying us
for access to this stuff.
We paid human beings to create this content
for lack of a better word and put it online for people
to access, but you can't come in here now and
access it with a bot and use it and charge for it without giving us a little juice. And
there are a lot of companies that are already saying like you can't use this if you're an
employee of our company, you can't use chat bots at all because some of our company's secrets might end up being spilled somehow or databases
are all of a sudden exposed.
Companies are moving fast to try and protect their IP, I guess.
Well, yeah.
One of the, I mean, some of the companies that are behind the GPTs that are out right now,
the large language models that are out right now,
are well known for not only not protecting
their users' information, but for raiding it
for their own use.
Like for example, Meta is one of the ones,
they have their large language model called LLaMA,
and there is a chatbot called Alpaca,
and it makes total sense that you are probably signing away
your rights to protect your information
when you use those things on whatever computer
you're using it on or whatever network you're using it on.
I don't understand exactly.
I haven't seen anything that says
this is how they're doing it.
Or even that they are definitely doing this.
I think it's just that the powers that be know,
like, they would totally do this if they can, and they probably are, so we should just
keep our employees away from it, you know, as much as we can.
Yeah, like we said, it's being used on smaller levels.
One of the
uses that Livia dug up was, like, let's say a real estate agent,
instead of taking time to write up listings,
has a chatbot do it,
and then they can go through afterward
and make adjustments to it as needed.
Or-
But in exchange, that database now knows exactly
what you think of that one ugly bathroom.
That's right.
Or doctors may be using it to compile lists
of possible diseases or conditions that someone
might have based on symptoms.
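As a sketch of how narrow uses like these are typically wired up, here's what a real-estate listing helper might look like. Everything here is hypothetical: generate() is a placeholder for whatever LLM API the builder actually calls, and the instruction text is invented. The point is that "specializing" a general model can be as thin as a fixed instruction glued to the agent's notes, with a human still reviewing the draft.

```python
# Hypothetical wrapper; generate() stands in for a real LLM API call.
def generate(prompt: str) -> str:
    raise NotImplementedError("call a real large language model here")

LISTING_INSTRUCTIONS = (
    "You are a real estate listing writer. Given an agent's rough notes, "
    "write a short, upbeat MLS-style listing. Do not invent features "
    "that are not in the notes."
)

def draft_listing(agent_notes: str) -> str:
    """Glue the fixed domain instructions to this property's notes."""
    prompt = f"{LISTING_INSTRUCTIONS}\n\nAgent notes:\n{agent_notes}\n\nListing:"
    return generate(prompt)  # the agent reviews and edits whatever comes back

notes = "3 bed, 2 bath ranch. New roof 2021. Big backyard. Bathroom needs updating."
# draft_listing(notes)  # left uncalled here: generate() needs a real model behind it
```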
These all sound like uses that are like, hey, this sounds like it could be a good thing
in some ways.
And it can be in some ways, but it's the Wild West right now.
So it's not like there's anyone saying, well, you can't use it for that. You can only
use it for this, you know what I'm saying? Plus also,
everything that we've come up with as just internet users in
the general public has been what we could come up with,
given three months, with no warning that we should start
thinking about this. It's just like, Hey, this is here, what
are you going to do with it? And people are just finding new things to do with it every day. And yeah, some of them are benign,
like having it draft a blog post for your business. I thought they were already doing that based
on some of the emails that I get from like businesses, right? But they definitely are now if they
weren't before. And that's totally cool because there's, it's just taking some of the weight off of
the humans that are already doing this work, right?
What's going to be problematic is when it comes for the full job or enough of the job
that the company can transfer whatever's left of that person's job to other people and
make them just work a little harder while they're supported by the AI.
Yeah, here are some stats that were pretty shocking to me. I didn't know it was moving
this fast, but there's a networking app called Fishbowl. And in 2023, just earlier this year,
they found that 40% of what they call working professionals
are already using some kind of GPT or some kind of AI
tool while they work, whether it's generating idea lists
or brainstorming lists or actually writing stuff
or maybe looking at code.
And this is the troubling part.
Of those 40%, almost 70% are doing that in
secret and hadn't told their bosses that they were doing that. Right. Those are just working professionals.
We haven't even started talking about students yet. Yeah. I mean, you combine that with work from home.
You got a real racket going on. For sure. Yeah, no, totally. Again, though, I mean, like if you can use it to do good work and you can now do more work,
I think you should be paid for more work. Like, if your productivity's gone through the roof, great,
you figured it out. I've got no problem with that. It's the opposite that I have the problem with.
Well, let's skip students for a second then and talk about that since you brought it up because
here's the thing.
The United States doesn't have a great track record of ignoring the bottom line
in favor of just keeping hard-working humans at their jobs.
So I think it was Goldman Sachs that said they found
that there could actually be an increase in the annual GDP of about 7% over 10 years, because
productivity increases. And I guess the idea is that
productivity is increasing because, let's say you've got
20 to 30% of stuff being done by AI, that opens up 20 to 30% of your time for your employees to maybe innovate or, you know, do other capitalistic things.
But to me, and this is just my opinion, and again, we're really early in all this,
it's a bottom line world, and especially a bottom line country, that we live in.
And I imagine what it would likely mean is bye-bye jobs more than it means, well, hey, you've
got more time.
And why don't you innovate at your job?
Because for most jobs, it'll probably be like, oh, wait a minute.
If we can teach it to do 40 percent of your job, I bet we could train it to do 100%.
Yeah.
Or we can get rid of a bunch of you
and just keep some of you to do the other 60%.
You know?
But now see, these people are out of jobs.
It's gonna bite them in the rear though,
because it's not ultimately gonna be,
well who knows, it doesn't seem like it could be good
for the overall economy if all of a sudden,
all these people are out of jobs,
because people being out of jobs means they're not spending,
and that means the economy is gonna tank.
And it's not like a situation where,
the tractor replaced the plow,
and then the robot tractor replaced the tractor,
but hey, now we've got these better jobs
where you're designing and building
these robot tractors and they're higher paying,
and they're great.
It's not like that because, you know, the farmer who drove that tractor was replaced and isn't skilled in the practice of designing robot tractors. And in this case, in most cases,
they're not being, there's not some other job
waiting for someone who got fired in the world of designing AI.
Right.
Does that make sense?
No, it makes total sense.
But yeah, and in this case, one of the big differences is, instead of the farmer having
to go figure out how to work a computer, the people working computers now have to go
figure out how to be farmers in order to sustain themselves, right?
There you go.
But you're right.
We don't have a track record of taking care of people
very well at least who are out of a job.
And I mean, without getting on a soapbox here,
what's either going to come out of this,
because there's gonna be one or the other.
The status quo as it is now or as it was up to 2022,
we don't know that that's going to be around anymore.
Instead, we'll either do something like create universal basic income for people to be
like, hey, your industry literally does not exist anymore.
And it just happened overnight, basically.
We're just going to make sure that everybody's at least minimally taken care of while we're
figuring out what comes next.
Or it's going to be like, good luck, Trump,
you're fired, you're out on your own.
Instead we're going to take all this extra wealth, this extra $2 trillion that's going to
be generated and push it upward toward the wealthy instead.
And everybody else is just the divide between wealthy and not wealthy is just going to exponentially
grow.
One of those two things is going to happen because I don't see how there's just going to
be a regular middle ground like there is now where it's kind of shaky and how we're taking
care of people because there's just going to be so many layoffs and fairly skilled workers
being laid off too.
We've just never encountered that before.
Yeah, that's the thing that the largest corporations might want to think
about is all it's going to take is one CEO of a huge corporation to say, wait a minute,
I think I can get rid of 75% of the VPs in my company. And like, who, except the person at the very, very top of that food chain, is
protected? And the answer is nobody.
Nobody, essentially, at the end of the day, because they make a lot of money.
It's one thing to lay off a bunch of, you know, technical writers that are all sitting
in their cubicles, but if you start laying off those VPs
who get those big bonuses,
that's more bonus money.
And are we looking at a situation
where a corporation is run by one human?
I mean, it's entirely possible.
Like you can make a really good case
that what it is gonna wipe out is the middle management.
Like VPs, this is exactly like you said.
And that we still will
need some humans to do some stuff.
Like the board, take care of the board, right?
Sure, of course.
Yeah.
But yes, I mean, who knows?
We have no idea at this point.
Ultimately, it could very easily provide for a much better, healthier society, at least
financially speaking. It could do that, especially given
a long enough period of time.
I'm a cynic when it comes to that kind of trust though, you know.
I am as well, for sure.
But if you look back in history, the history of technology, overall, especially if you
just turn a blind eye to human suffering for a second and you just look at the progress
of society, right?
Uh-huh.
In a lot of ways it has, it's gotten better and better thanks to technology.
There's also a lot of downsides to it.
Nothing's black and white.
It's just not how things are.
So there's of course going to be problems.
There's going to be suffering.
There's going to be people left behind.
There's going to be people that fall through the cracks.
It's just inevitable. We just don't know how many people, for how long, and what will happen to those people on
the other side of this transition.
Yeah.
I was talking with somebody the other day about the writer's strike in Hollywood, the W.G.A.
is striking right now for those of you who don't know.
It's kind of all over the
place. But one of the things that they have argued for in this round of negotiations is, hey, you
can't replace us with AI. And the studios all came back and said, well, how about this? We'll assess
that on a year-to-year basis. And that's frightening if you're either a writer in Hollywood or
you're somebody who loves quality TV and films. Because, I don't know, with
ideation and initial scripts, maybe even right now, I could see that happening, where
they say, like, all right, now we'll bring in a human to refine this thing at a much
lower wage. That's probably what they're most afraid of rather than being wholesale
replaced because like you said, these programs are, they're all about just data and
numbers. They're not, they don't have human feelings and that's what art is. And so I think I would be more concerned
if I was writing pamphlets for Verizon or something,
or if I was...
Some pamphlet writer for Verizon just went, gulp.
No, I'm so sorry.
But Buzzfeed back in the day,
instead of having a dozen writers
writing clickbait articles,
why not have just one human
that is a prompt engineer that's managing a virtual AI clickbait room that's just pumping
out these articles that they were paying someone 40 grand a year to write previously?
Yeah, I mean, it's a great question. That was a horrific, horrible job to have not too many
years ago. So it's great to have a computer do it,
but that means that we need these other people
to go on to be, to have writing jobs
that are more satisfying to them than that.
But that's not necessarily the case
because as these things get smarter and better,
they're just gonna be relied upon more.
We're not gonna go back.
There's no going back now.
It just happened.
Like it just happened basically as of March 2023.
And one of the big problems
that people have already projected running into
is if computers replace human, say, writers,
basically entirely, eventually all the stuff that humans have written on the internet
is going to become dated. It's going to stop. And it will have been replaced and picked up on by
generative pre-trained transformers, right? And eventually, all the writing on the internet,
after a certain date, will have been written by computers, but will be being scraped by computers when humans go ask the computer a question.
The computer then goes and references something written by a computer.
So humans will be completely taken out of the equation in that respect.
We'll be getting all of our information, at least non-historical information from non-humans.
And that could be a really big problem, not just in the fact
that we're losing jobs or in the fact that computers are now telling us all of our information,
but also that there's some part of what humans put into things that will be lost that
I think we're going to demand. I saw somebody put it like, I think I can't remember who it was,
but they said, people will go seek out human written stuff. There will always be audiences
for human written stuff. Yeah, maybe like you said, we'll rely on computers to write the
Verizon pamphlets, but we're not going to rely on computers to write great works of literature
or to create great works of art. We're just not going to. They'll still do that. They're going to be writing books and movies and all that, but there will
always be a taste in a market for human created stuff. This guy said, I think he's right.
Yeah. And Justine Bateman, I don't know if you saw that. I don't know if it was a blog post or...
Are you having a hallucination right now? Did you mean Justine Bateman?
Yeah.
Yeah, Justine Bateman, Jason Bateman's sister, the actor, and she's done all kinds of
things since then.
I know she has a computer science degree.
So she's very smart and knows a lot about this stuff, but she basically said, and this
is beyond just the chatbot stuff, but she was like, right now, there are major
Hollywood stars being scanned. And there may be a brand new Tom Cruise movie in 60 years,
long after he's dead, starring Tom Cruise. He may be making movies for the next 200 years.
And like, is this what you want actors? Do you want to be scanned and have
them use your image like this in perpetuity for, you know, there will be money involved.
It's not like they can just say, okay, we can just do whatever we want. But what if they're
like, here's a billion dollars, Tom Cruise, just for the use of your image in perpetuity,
because we'll be able to duplicate that's so realistically
that people won't know.
Human voices, same thing, that's already happening.
What?
Yeah.
It is.
So that stuff is kind of scary.
And you know, when you read, I didn't really know this was kind of already happening in
companies, but Livia found the stuff.
IBM CEO Arvind Krishna said just last month, in May, that he believed 30% of back office
jobs could be replaced over five years.
And it was pausing hiring for close to 8,000 positions because they might be able to use
AI instead.
And then Dropbox talked about the AI era when they announced a round
of layoffs.
So, it is happening right now in real time.
Pretty amazing.
Yeah, that's, I mean, there's proof positive right there.
Like that guy couldn't even wait a couple months, a year.
Like, this really started up in March, and he's saying this already in May.
They're like, wait, wait, stop hiring. We're going to eventually replace these guys with AI so soon that we're
going to stop hiring those positions for now until the AI is competent enough to take over.
I mean, how many people does IBM employ? What's 30% of that? I don't know. I would say at least
100. At least 100 people, right? So yeah, like you said, it's happening already.
And then one other thing to look out for too,
that's, I believe is already,
at least theoretically possible,
since AI can write code now,
they'll be able to create new large language models themselves.
So the computers will be able to create new AI.
Well, that's the singularity, right? No, the singularity is when one of them
understands what it is and becomes sentient. Yes, that's the singularity. But this leads to that though, doesn't it?
It does. It's hypothetically yes, but we just understand what's going on so little that you just can't say either way really.
You definitely can't say that, you know, it won't happen,
that it's just fantasy, and you also can't say, yes, it's definitely going to happen.
Yeah.
And here's the thing, man, I'm not a paranoid technophobe.
You aren't, by any measure.
I only have a foil cap on.
No, by any measure, I'm a pretty positive thinker.
And this, this is pretty scary to me.
I'm just going to leave that there. Agreed, Chuck. Okay. If you want to know more about
large language models, everybody, just start looking around Earth, and when you see people
running from explosions, go toward it and ask what's going on.
You almost said, type it into a search engine, right?
Yeah, steer clear of those.
Yeah, there's so much more we could have talked about.
But this is, if you ask me, this is round one.
I think we definitely need to do at least one or so more
on this, okay?
Yeah.
And then one day, like I said,
AI Josh and Chuck will just wrap it all up
and spank it on the bottom and say, no problems here.
Hopefully they'll give us a billion dollars rather than like a month free of blue apron
instead.
Yeah, we can talk here.
Well, since Chuck said we can talk, real confidential like, that means it's time for
Listener Mail.
I'm going to call this Conception. Not Inception,
but Conception.
Oh, I saw this one.
I don't know how I feel about this.
Hey guys, last year my wife and I were attempting to get pregnant.
A couple of months in, we made plans to stay with some friends in another town for a weekend.
The weekend happened to coincide with my wife's ovulation cycle.
As shy people, we both felt a little bit awkward about, you know,
hugging and kissing in a friend's guest room, but we really didn't want to miss
that chance and that time of the month. So we went about getting in the mood as
quietly as possible, and my wife suggested we play a podcast from my phone so that,
you know, if any noise made it outside the room, it would sound like we were just
doing a little pre-bed time listening. I knew I needed something with nice simple production values, so we
wouldn't get distracted, of course, by the whizz-bang sounds and whatnot, and since you were
my intro to the world of podcasts, I've always had a steady supply of yours downloaded.
I picked the least interesting sounding one in the feed at the time.
What was it? How coal works.
Okay.
I thought that one turned out to be surprisingly interesting.
Yeah.
I could see how we would have thought that though.
Yeah, for sure.
We put that on, and we did our business.
Six weeks later, we got a positive pregnancy test.
Wow.
And now, over a year later, we've welcomed our son into the world.
His name is Cole.
And of course, we named him Cole.
That is what this person said.
Wait a minute.
Wait a minute.
They really did name him Cole?
No.
He said it as a joke.
But great minds, right?
Good joke.
We're both.
It's almost like you're both chatbots.
And this person said, you're fine to read this,
but give me a fake name.
And so I just want to say thanks to Jean for writing in about this.
Jean as in gene transfer? Sure.
Okay. Thanks a lot, Jean. We appreciate that. I think, again, I'm still figuring that one out.
And if you want to be like "Jean,"
I'm making air quotes here,
you can send us an email too.
Wrap it up, spank it on the bottom.
Only humans can do that.
I wonder if when you said spank it on the bottom
if that created any issues.
Yeah, I hadn't thought about that.
Maybe playfully, I'm not that.
Sure.
And send it off to stuffpodcast@iheartradio.com.
Stuff you should know is a production of iHeartRadio.
For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
These days, more often than not the success of a company is attributed to its founder.
But that's only part of the story.
My name is Noah Callahan Bever, and I'm proud to present Idea Generations All Angles,
a Will Packer Media podcast.
We'll be talking to all the key players from all your favorite brands, like Loud Records,
Ghetto Gastro, and Earn Your Leisure.
So join me each week as we dissect the most dynamic companies in culture, because the only
way to truly understand success is to look at it from all angles.
Listen to Idea Generation's All Angles on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.