Young and Profiting with Hala Taha - Stephen Wolfram: AI, ChatGPT, and the Computational Nature of Reality | Artificial Intelligence E284
Episode Date: April 15, 2024. Wide-eyed with wonder about space exploration, Stephen Wolfram's interest in science began in his childhood. Although he had difficulties learning arithmetic as a child, he became a young prodigy, publishing papers on theoretical physics at age 15. Today, he is a prominent figure in computational science and its impact on different domains, including AI. In this episode, he breaks down computational thinking and how it can help us jump ahead. He also takes a deep dive into AI, its implications, and how it works. Stephen Wolfram is a British-American computer scientist, physicist, and businessman whose career spans the intersections of science, innovation, and entrepreneurship. He is the founder of Wolfram Research and the author of several books, including What Is ChatGPT Doing? In this episode, Hala and Stephen will discuss: - Stephen's childhood and early interest in science - The history of AI - Computational thinking and the importance of formalizing knowledge - The Wolfram Language and its relationship with AI - The simple rules behind complex behaviors - The training process and workings of ChatGPT - How neural networks generate coherent sentences and new content - Computational thinking as another layer in human evolution - The impact of AI on jobs - The potential sentience of AI - And other topics… Stephen Wolfram is a world-renowned computer scientist, theoretical physicist, founder of Wolfram Research, and creator of Wolfram Language. He received his Ph.D. from Caltech at age 20 and became the youngest recipient of a MacArthur Fellowship at 21. In 2002, he published the influential book A New Kind of Science, which proposed a novel computational framework for understanding the world. Dr. Wolfram is the author of several other books, including his most recent, What Is ChatGPT Doing? He remains actively involved in researching computational thinking and its applications.
Sponsored By: Shopify - Sign up for a one-dollar-per-month trial period at youngandprofiting.co/shopify Indeed - Get a $75 job credit at indeed.com/profiting Airbnb - Your home might be worth more than you think. Find out how much at airbnb.com/host Porkbun - Get your .bio domain and link in bio bundle for just $5 from Porkbun at porkbun.com/Profiting Yahoo Finance - For comprehensive financial news and analysis, visit YahooFinance.com Active Deals - youngandprofiting.com/deals Key YAP Links Reviews - ratethispodcast.com/yap Youtube - youtube.com/c/YoungandProfiting LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new
Transcript
More and more systems in the world will get automated.
AI is another step in the automation of things.
This has been a story of technology throughout history.
The founder and CEO of the software company Wolfram Research.
He's a mathematician, computer scientist, physicist, and businessman.
When ChatGPT came out in late 2022, the people who had been working on it
didn't know it was going to work.
They didn't think anything exciting would have happened,
and by golly, it had worked.
I think the thing that we learned from sort of the advance of AI is, well, actually,
there's not as much distance between sort of the amazing stuff of our minds and things that are just able to be constructed computationally.
This is the coming paradigm of the 21st century.
And if you understand that well, it gives you a huge advantage.
Unfortunately, whenever there's powerful technology, you can do ridiculous things with it.
Having said that, when you say things like, well, let's make sure that AIs never do the wrong thing.
Well, the problem with that is...
Young and profitors, welcome to the show.
And we are going to be talking a lot more about AI in 2024,
because it's such an important topic.
It's changing the world.
Last year, I had a couple conversations.
We had William talking about AI.
We had Mo Gawdat, which I love that episode.
I highly recommend you guys check out the Mo Gawdat episode.
But nonetheless, I'm going to be interviewing a lot more AI folks.
And first up on the docket is Dr. Stephen Wolfram.
He's been focused on AI and computational thinking for the past decade.
Dr. Stephen Wolfram is a world-renowned computer scientist, mathematician, theoretical physicist,
and the founder of Wolfram Research, as well as the inventor of the Wolfram Computational Language.
A young prodigy, he published his first paper at 15.
He obtained his PhD in physics at 20, and he also was the youngest recipient of the MacArthur Genius Grant.
In addition to this, Dr. Wolfram is the author of several books, including a recent one on AI,
entitled What Is ChatGPT Doing?, which we'll discuss today.
So we've got a lot of ground to cover with Stephen.
We're going to talk about what is AI, what is computational thinking, how is AI similar to nature,
what is going on in the background of ChatGPT, how does it actually work?
And what does he think the future of AI looks like for jobs and for humanity overall?
We've got so much to cover.
I think he's going to blow your mind.
So Stephen, welcome to Young and Profiting Podcast.
Hello there.
I am so excited for today's interview.
We love the topic of AI.
And I wanted to talk a little bit about your childhood.
before we get to the meat and potatoes of today's interview. So from my understanding, you started
as a particle physicist at a very young age. You even started publishing scholarly papers as young
as 15 years old. So talk to us about how you first got interested in science and what you
were like as a kid. Well, let's see. I grew up in England in the 1960s when space was the thing
of the future, which it is again now, but wasn't for 50 years. I was interested in those kinds of
things and that got me interested in how things like spacecraft work and that got me interested in
physics. And so I started learning about physics and so happened that the early 1970s were a time when
lots of things were happening in particle physics, lots of new particles getting discovered, lots of
fast progress and so on. And so I got involved in that. It's always cool to be involved in fields that
are in the golden age of expansion, which particle physics was at the time. So that was how I got into
these things. You know, it's funny, you mentioned AI, and I realized that when I was a kid,
machines that think were right around the corner, just as colonization of Mars was right around
the corner, too. But it's an interesting thing to see what actually happens over a 50-year span
and what doesn't. It's so crazy to think how much has changed over the last 50 years.
and how much has not. In science, for example, I have just been finishing some projects that I started
basically 50 years ago. And it's kind of cool to finish something that, you know, there's a big
science question that I started asking when I was 12 years old about how a thing that people
have studied for 150 years now works, the Second Law of Thermodynamics. And I was interested in that
when I was 12 years old. I finally, I think, figured that out. I published a book about it
last year, and it's kind of nice to see that one can tie up these things. But it's also a little
bit shocking how slowly big ideas move. For example, the neural nets that everybody's so
excited about now in AI, neural nets were invented in 1943. And the original conception of them
is not that different from what people use today, except that now we have computers that run
billions of times faster than things that were imagined back in the 1950s and so on. It's interesting.
Occasionally things happen very quickly. Oftentimes, it's shocking how slowly things happen
and how long it takes for the world to absorb ideas. Sometimes there'll be an idea and finally
some technology will make it possible to execute that idea in a way that wasn't there before.
Sometimes there's an idea and it's been hanging out for a long time and people just
ignored it for one reason or another. And I think some of the things that are happening with AI
today probably could have happened a bit earlier. Some things have depended on sort of the building
of a big technology stack. But it's always interesting to see that, to me at least.
It's so fascinating. This actually dovetails perfectly into my next question about your first
experiences with AI. So now everybody knows what AI is, but really, most of us really started to
understand it and use this term maybe five years ago, max.
But you've been studying this for decades, even before people probably called it AI.
So can you talk to us about the beginnings of how it all started?
AI predates me.
That term was invented in 1956.
Mm.
You know, it's funny because as soon as computers were invented, basically in the late 1940s,
and they started to become things that people had seen by
the beginning of the 1960s. I first saw a computer when I was 10 years old, which was
1969-ish. And at the time, a computer was a very big thing tended by people in white
coats and so on. So I first got my hands on a computer in 1972, and that was a computer
that was the size of a large desk and programmed with paper tape and so on, and was rather
primitive by today's standards, but the elements were all there by that time. But it's true,
most people had not seen a computer until probably the beginning of the 1980s or something,
which was when PCs and things like that started to come out. But it was from the very first
moments when electronic computers came on the scene, people sort of assumed that computers
would automate thought as bulldozers and things like forklift trucks had automated mechanical
work. And "giant electronic brains" was a typical characterization of computers in the
1950s. So this idea that one would automate thought was a very early idea. Now, the question was,
how hard was it going to be to do that? And people in the 1950s and beginning of the 1960s,
they were like, this is going to be easy. Now we have these computers. It's going to be easy to
replicate what brains do. In fact, a good example back in the beginning of the 1960s, a famous
incident was during the Cold War, and people were worried about, you know, U.S., Russian, Soviet
communication and so on. They said, well, you know, maybe the people are in a room. There's
some interpreter. The interpreter is going to not translate things correctly. So let's not
use a human interpreter. Let's teach a machine to do that translation, beginning of the 1960s.
And of course, machine translation, which is now finally in the 2020s, pretty good, took an extra 60 years to actually happen.
And people just didn't have the intuition about what was going to be hard, what wasn't going to be hard.
So the term AI was in the air already very much by the 1960s.
I'm sure when I was a kid, I'm sure I read books about the future in which AI was a thing.
And it was certainly in movies and things like that.
I think then this question of, okay, so how would we get computers to do thinking like things?
When I was a kid, I was interested in taking the knowledge of the world and somehow cataloging it and so on.
I don't know why I got interested in that, but that's something I've been interested in for a long time.
And so I started thinking, you know, how would we take the knowledge of the world and make it automatic to be able to answer questions based on the knowledge that our civilization has accumulated?
So I started building things along those lines, and I started building a whole technology
stack that I started in the late 1970s, and well, now it's turned into a big thing that lots of
people use.
But the idea there, the first idea there was, let's be able to compute things like math
and so on, and let's take what has been something that humans have to do and make it
automatic to have computers do it.
People had said for a while, when computers can do calculus, then we'll know that they're intelligent.
Things I built solved that problem.
By the mid-1980s, that problem was pretty well solved.
And then people said, well, it's just engineering.
It's not really a computer being intelligent.
I would agree with that.
But then at the very beginning of the 1980s, when I was working on automating things like mathematical computation,
I got curious about the more general problem of doing the kinds of things that we humans do,
like we match patterns.
We see this image, and it's got a bunch of pixels in it, and we say, that's a picture of a cat,
or that's a picture of a dog.
And this question of how do we do that kind of pattern matching, I got interested in
and started trying to figure out how to make that work.
I knew about neural nets.
I started trying to get, this must have been 1980, 81, something like that.
I started trying to get neural nets to do things.
like that, but they didn't work at all at the time, hopeless.
As it turns out, you know, you say things happen quickly, and I say things sometimes happen very
slowly.
I was just working on something that is kind of a potential new direction for how neural nets
and things like that might work.
And I realized, you know, I worked on this once before and I pulled out this paper that I wrote
in 1985 that has the same basic idea that I was just very proud of myself for having figured
out just last week. And it's like, well, I started on it in 1985. Well, now, you know, I understand a
bunch more and we have much more powerful computers. Maybe I can make this idea work. But so,
this notion that there are things that people thought would be hard for computers, like doing
calculus and so on, we crushed that, so to speak, a long time ago. Then there were things
that are super easy for people, like tell that's a cat, that's a dog, which wasn't solved. And I wasn't
involved in the solving of that.
That's something that people worked on for a long time,
and nobody thought it was going to work.
And then suddenly in 2011, sort of through a mistake,
some people who've been working on this for a long time
left a computer trying to train to tell things like cats from dogs
for a month without paying attention to it.
They came back.
They didn't think anything exciting would have happened,
and by golly, it had worked.
And that's what started the current enthusiasm
about neural nets and deep learning and so on.
And when ChatGPT came out in late 2022, again, the people had been working on it, they didn't know it was going to work.
We had worked on previous kinds of language models, things that try to do things like predict what the next word will be in a sentence, those sorts of things.
And they were really pretty crummy.
And suddenly, for reasons that we still don't understand, we kind of got above this threshold where it's like, yes, this is pretty human-like.
and it's not clear what caused that threshold.
It's not clear whether we, in our human languages, for example,
we might have, I don't know, 40,000 words that are common in a language, like most languages,
English as an example.
And probably that number of words is somehow related to how big an artificial brain
you need to be able to deal with language in a reasonable way.
And, you know, if our brains were bigger, maybe we would routinely have languages with 200,000 words in them.
We don't know.
And maybe, you know, it's this kind of match between what we can do with an artificial neural network
versus what our human biological neural nets managed to do.
We managed to reach enough of a match that people say, by golly, the thing seems to be doing
the kinds of things that we humans do.
But, I mean, this question, what's ended up happening is what us humans can quickly do,
like tell a cat from a dog, or figure out what the next word in the sentence is likely to be.
Then there are things that we humans have actually found really hard to do,
like solve this math problem or figure out this thing in science
or do this kind of simulation of what happens in the natural world.
Those are things that the unaided brain doesn't manage to do very well on.
But the big thing that's happened last 300 years or so
is we built a bunch of formalization of the world,
first with things like logic, that was back in antiquity,
and then with math and most recently with computation,
where we're kind of setting up things so that we can talk about things in a more structured way
than just the way that we think about them off the top of our head, so to speak.
That's so interesting.
And I know that you work on something called computational thinking.
And I think what you're saying now really relates to that.
So help us understand the Wolfram Project and computational thinking
and how it's related to the fact that humans,
we need to formalize and organize things like mathematics and logic. What's the history behind that?
Why do we need to do that as humans? And then how does it relate to computational thinking in the
future? There are things one can immediately figure out, one just sort of intuitively knows,
oh, that's a cat, that's a dog, whatever. Then there are things where you have to go through
a process of working out what's true or working out how to construct this or that thing.
When you're going through that process, you've got to have solid bricks to start building that tower.
So what are those bricks going to be made of?
Well, you have to have something which has definitive structure.
And that's something where, for example, back in antiquity, when logic got invented, it was kind of like, well, you can think vaguely, yeah, that sentence sounds kind of right.
Or you can say, well, wait a minute, this or that, if one of those things is true, then this or that
has to be true, et cetera, et cetera, et cetera. You've got some structured way to think about things.
And then in 1600s, math became sort of a popular way to think about the world. And then you could
say, okay, we're looking at the planet goes around the sun and roughly an ellipse, but let's
put math into that. And then we can have this way to actually compute what's going to happen.
So for about 300 years, this idea of math is going to explain how the world works at some level
was kind of a dominant theme, and that worked pretty well in physics. It worked pretty terribly
in things like biology, in social sciences and so on. People imagined there might be a social physics
of how society works, but that never really panned out. So there was this question of places where
math had worked, and it gave us a lot of modern engineering and so on, and there were cases where it
hadn't really worked. I got pretty interested in this at the beginning of the 1980s in sort of
figuring out how do you formalize thinking about the world in a way that goes beyond what math
provides one, things like calculus and so on. What I realized is that you just think about,
well, there are definite rules that describe how things work, and those rules are more stated
in terms of, oh, you have this arrangement of black and white cells, and then this happens,
and so on. They're not things that you necessarily can write in mathematical terms, in terms of
multiplications and integrals and things like this. And so, as a matter of science, I got interested
in, well, what do these simple programs, these systems you can describe in terms of rules,
what do they typically do? And what one might have assumed is if you have a program that's
simple enough, it's going to just do simple things. This turns out not to be true. Big surprise,
to me at least. I think to everybody else as well. It took people a few decades to absorb this point.
It took me a solid bunch of years to absorb this point.
But you just do these experiments, computer experiments, and you find out, yes, you use a simple rule and, you know, it does a complicated thing.
That turns out to be pretty interesting if you want to understand how nature works, because it seems like that's the secret that nature uses to make a lot of the complicated stuff that we see, the same phenomenon of simple rules, complicated behavior.
So that turns into a whole big direction and new understanding about how science works.
I wrote this big book back in 2002 called A New Kind of Science.
Well, its title kind of says what it is.
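The "simple rules, complicated behavior" phenomenon he's describing can be seen in an elementary cellular automaton like Rule 30, one of the simple programs studied in A New Kind of Science. Here is a minimal Python sketch (my own illustration, not code from the episode): each cell's next color depends only on itself and its two neighbors, yet the pattern that grows from a single black cell looks effectively random.

```python
# Elementary cellular automaton: a cell's next state depends on itself and
# its two neighbors. The 8 possible neighborhoods map to the 8 bits of the
# rule number (so "Rule 30" means binary 00011110).
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

# Print the evolution of Rule 30: a very simple rule, complicated output.
for row in run(30):
    print("".join("#" if c else "." for c in row))
```

Despite being specifiable in one line, Rule 30's center column is irregular enough that it has been used as a pseudorandom generator.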
So that's one branch: sort of understanding the world in terms of computational rules.
Another thing has to do with taking the things that we normally think about,
whether that's how far is it from one city to another, or how do we remove this thing from this image
or something like this, things that we would normally think about and talk about, and how do we
take those kinds of things and think about them in a structured computational way?
So that has turned into a big enterprise of my life, which is building our computational language,
this thing now called Wolfram Language, that powers a lot of, well, research and development
kinds of things and also lots of actual practical systems in the world, although when you are
interacting with those systems, you don't see what's inside them, so to speak. But the idea there
is to make a language for describing things in the world, which might be, you know, this is
a city, this is both the concept of a city and the actuality of the couple of hundred thousand
cities that exist in the world, where they are, what their populations are, lots of other data
about them, and being able to compute things about things in the world.
And so that's been a big effort to build up that computational language.
And the thing that's exciting that we're on the cusp of, I suppose, is people who study
things like science and so on, for the last 300 years, it's like, okay, to make this science
really work, you have to make it somehow mathematical.
Well, now the case is that the new way to make science is to make it computational.
And so you see all these different fields, call them X.
You start seeing the computational X field start to come into existence.
And I suppose one of my big life missions has been to provide this language and notation
for making computational X for all X possible.
It's a similar mission to what people did maybe 500 years ago when people invented mathematical notation.
I mean, there was a time when if you wanted to talk about math, it was all in terms of just regular words at the time in Latin.
And then people invented things like plus signs and equal signs and so on.
And that streamlined the way of talking about math.
And that's what led to, for example, algebra and then calculus and then all the kind of modern mathematical science that we have.
And so similarly, what I've been trying to do last 40 years or so is build a computational language, a notation for computation, a way of talking about things computationally that lets one build computational X for all X.
One of the great things that happens when you make things computational is not only do you have a clearer way to describe what you're talking about, but also your computer can help you figure it out.
And so you get this superpower.
As soon as you can express yourself computationally,
you tap into the superpower of actually being able to compute things.
And that's amazingly powerful.
And when I was a kid, as I say in the 1970s,
physics was hopping at the time
because various new methods had been invented, not related to computers.
At this time, all the computational X fields are just starting to really hop
and it's starting to be possible to do really, really interesting things, and that's going to be
an area of tremendous growth in the next how many years.
I have a few follow-up questions to that.
So you say that computational thinking is another layer in human evolution.
So I want to understand why you feel it's going to help humans evolve.
Also curious to understand the practical ways that you're using the Wolfram language and how it relates to AI, if it does at all.
Let's take the second thing first.
Wolfram Language is about representing the world computationally in a sort of precise computational way.
It also happens to make use of a bunch of AI. But let's put that aside. The way that, for example,
something like an LLM, like ChatGPT or something like that, what it does is it makes up pieces of
language. If we have a sentence like "the cat sat on the blank," what it will have done is it's read a billion web pages, and
chances are the most common next word is going to be "mat."
And it has set itself up so that it knows that the most common next word is "mat," so let's write down "mat."
So the big surprise is that it doesn't just do simple things like that, but having built the structure from reading all these web pages, it can write plausible sentences.
Those sentences, they sort of sound like they make sense.
They're kind of typical of what you might read.
They might or might not actually have anything to do with reality in the world, so to speak.
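The next-word idea he describes can be sketched as a toy frequency model: count which word follows each short context in a corpus, then emit the most common continuation. Real LLMs use neural networks to produce a probability distribution over tokens rather than raw lookup counts, so this is only an illustration of the "most common next word" intuition:

```python
from collections import Counter, defaultdict

def train(corpus, n=2):
    """Count which word follows each n-word context in the corpus."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for i in range(len(words) - n):
        context = tuple(words[i:i + n])
        counts[context][words[i + n]] += 1
    return counts

def next_word(counts, context):
    """Return the most frequently observed word after this context."""
    options = counts.get(tuple(context))
    return options.most_common(1)[0][0] if options else None

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat sat on the mat again .")
model = train(corpus)
print(next_word(model, ["on", "the"]))  # prints "mat" (seen twice vs. "rug" once)
```

An LLM differs in that it generalizes to contexts it has never seen verbatim, and it samples from the probability distribution instead of always taking the top word, which is part of why its output varies.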
That's working kind of the way humans immediately think about things.
Then there's the separate whole idea of formalized knowledge, which is the thing that led to modern science and so on.
That's a different branch from things humans just can quickly and naturally do.
So, in a sense, Wolfram Language, the big contribution right now to the world of the emerging
AI language models, all this kind of thing, is that we have this computational view of the
world, which allows one to do precise computations and build up these whole towers of
consequences.
So the typical setup, and you'll see more and more coming out along these lines.
I mean, we built something with OpenAI back, oh, gosh, a year ago now, an early
version of this is you've got the language model and it's trying to make up words and then it gets to
use as a tool our computational language. If it can formulate what it's talking about, well,
you know, we have ways to take the natural language that it produces. We've had the Wolfram Alpha
system, which came out in 2009, a system that has natural language understanding. We sort of
had solved the problem of, one sentence at a time, kind of what does this mean? Can we
translate this natural language in English, for example, into computational language, then compute an
answer using potentially many, many steps of computation, then that's something that is sort of a
solid answer that was computed from knowledge that we've curated, et cetera, et cetera, et cetera.
So the typical mode of interaction is that's sort of a linguistic interface provided by things like
LLMs, and that using our computational language as a tool to actually figure out, hey, this is the
thing that's actually true, so to speak. Just as humans don't necessarily immediately know everything,
but with tools, they can get a long way. I suppose it's been sort of the story of my life,
at least. I discovered computers as a tool back in 1972, and I've been using them ever since
and managed to figure out a number of interesting things in science and technology and
so on by using this kind of external-to-me superpower tool of computation. The LLMs and the AIs
get to do the same thing. So that's the core part of how the technology I've been building
for a long time most immediately fits into the current expansion of excitement about AI and
language models and so on. I think there are other pieces to this which have to do with how,
for example, science that I've done relates to understanding more about how you can build other
kinds of AI-like things. But that's sort of a separate branch.
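The LLM-plus-tool interaction he describes can be sketched as a simple loop: the language model either asks for a precise computation or produces prose, and a computational engine supplies the solid answer. The function names and the lookup table below are hypothetical stand-ins, not the real OpenAI or Wolfram Language APIs:

```python
# Toy sketch of the tool-use pattern: an LLM delegates precise questions
# to a computational engine, then weaves the result into its reply.
# `ask_llm` and `run_computation` are invented stand-ins for illustration.

def run_computation(query):
    # Stand-in for a computational engine (a Wolfram|Alpha-style tool).
    table = {"population of Paris": "about 2.1 million"}
    return table.get(query, "unknown")

def ask_llm(prompt, tool_result=None):
    # Stand-in for a language model: first emits a tool request,
    # then, given the tool's result, emits the final answer.
    if tool_result is None:
        return {"tool_call": "population of Paris"}
    return {"answer": f"Paris has a population of {tool_result}."}

def answer(prompt):
    reply = ask_llm(prompt)
    if "tool_call" in reply:                      # model wants a computed fact
        result = run_computation(reply["tool_call"])
        reply = ask_llm(prompt, tool_result=result)
    return reply["answer"]

print(answer("How many people live in Paris?"))
```

The design point is the division of labor: the language model handles the linguistic interface, while the tool handles the "many, many steps of computation" against curated knowledge.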
Yeah, we have a super unique company culture. We're all about obsessive excellence. We even call
ourselves scrappy hustlers. And I'm really picky when it comes to my employees. My team is
growing every day. We're 60 people all over the world. And when it comes to hiring, I no longer
feel overwhelmed by finding that perfect candidate, even though I'm so picky. Because when it comes to hiring,
Indeed is all you need.
Stop struggling to get your job post noticed.
Indeed, sponsor jobs help you stand out and hire fast by boosting your post to the top
relevant candidates.
Sponsored jobs on Indeed get 45% more applications than non-sponsored ones according to Indeed
data worldwide.
I'm so glad I found Indeed when I did because hiring is so much easier now.
In fact, in the minute we've been talking, 23 hires were made on Indeed according to Indeed
data worldwide.
Plus, there's no subscriptions or long-term contracts.
You literally just pay for your results.
you pay for the people that you hire.
There's no need to wait any longer.
Speed up your hiring right now with Indeed.
And listeners of this show will get a $75-sponsored job credit
to get your jobs more visibility at Indeed.com slash profiting.
Just go to Indeed.com slash profiting right now
and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash profiting.
Terms and conditions apply.
Hiring, Indeed, is all you need.
Hey, Young and Profiters.
As an entrepreneur, I know firsthand that getting a huge expense off your books
is the best possible feeling.
It gives you peace of mind,
and it lets you focus on the big picture
and invest in other things
that move your business forward.
Now, imagine if you got free business internet for life,
you never had to pay for business internet again.
How good would that feel?
Well, now you don't even have to imagine
because Spectrum business is doing exactly that.
They get it that if you aren't connected,
you can't make transactions,
you can't move your business forward.
They support all types of businesses
from restaurants to dry cleaners
to content creators like me
and everybody in between.
They offer things like internet, advanced Wi-Fi, phone, TV, and mobile services.
Now, for my business-owning friends out there, I want you to listen up.
If you want reliable internet connection with no contracts and no added fees,
Spectrum is now offering free business internet advantage forever when you simply add four or more mobile lines.
This isn't just a deal.
It's a smart way to cut your monthly overhead and stay connected.
Yeah, BAM, you should definitely take advantage of this offer.
It's free business internet forever.
visit spectrum.com
slash free for life
to learn how you can get
business internet free forever.
Restrictions apply.
Services not available in all areas.
Young and profitors.
I know there's so many people tuning in right now
that end their workday wondering
why certain tasks take forever,
why they're procrastinating certain things,
why they don't feel confident in their work,
why they feel drained and frustrated and unfulfilled.
But here's the thing you need to know.
It's not a character flaw
that you're feeling this way.
It's actually your natural
wiring. And here's the thing. When it comes to burnout, it's really about the type of work that you're
doing. Some work gives you energy and some work simply drains you. So it's key to understand your six
types of working genius. The working genius assessment or the six types of working genius
framework was created by Patrick Lencioni, and he is a business influencer and author. And the working
genius framework helps you identify what you're actually built for and the work that you're not.
Now, let me tell you a story. Before I uncovered
my working genius, which is galvanizing and invention (so I like to rally people and I like to
invent new things), I used to feel really ashamed and had a lot of guilt around the fact that I didn't like
enablement, which is one of my working frustrations. So I actually don't like to support people one-on-one.
I don't like it when people slow me down. I don't like handholding. I like to move fast,
invent, rally people, and inspire. But what I do need to do is ensure that somebody else can fill that
enablement role, which Kate does on my team. So working genius helps you uncover these
genius gaps, helps you work better with your team, helps you reduce friction, helps you collaborate
better, understand why people are the way that they are. It's helped me restructure my team,
put people in the spots that they're going to really excel, and it's also helped me in hiring.
Working Genius is absolutely amazing. I'm obsessed with this model. So if you guys want to take
the Working Genius assessment and get 20% off, you can use code profiting. Go to workinggenius.com.
Again, that's workinggenius.com. Stop guessing. Start working in your genius.
Honestly, you're teaching us so much.
I feel like a lot of people tuning in
are probably learning a lot of this stuff for the first time.
But one thing that we all are using right now is ChatGPT, right?
So everybody has sort of embraced ChatGPT.
It feels like it's magic, right?
When you're just getting something that is giving you something
that a human could potentially write.
So I have a couple questions about ChatGPT.
You alluded to how it works a bit.
But can you give us more detail about how neural networks
work in general and what ChatGPT is doing in the background to spit out something that looks
like it's written by a human? The original inspiration for neural networks was understanding something
about how brains work. In our brains, we have roughly 100 billion neurons. Each neuron is
a little electrical device, and they're connected with things that look under a microscope,
a bit like wires. So one neuron might be connected to a thousand or 10,000 other neurons in one's brain,
and these neurons, they'll have a little electrical signal,
and then they'll pass on that electrical signal to another neuron,
and pretty soon one's gone through a whole chain of neurons,
and one says the next word, or whatever.
And so an electrical machine, lots of things connected to things,
that's how people imagine that brains work,
and neural nets are an idealization of that, set up in a computer,
where one has these connections
between artificial neurons, usually called weights.
You often hear about people saying,
this thing has a trillion weights or something.
Those are the connections between artificial neurons,
and each one has a number associated with it.
And so what happens when you ask ChatGPT something,
what will happen is it will take the words that it's seen so far,
the prompt, and it will grind them up into numbers,
and it will take that sequence of numbers and feed that in as input to this network.
So it just takes the words, more or less every word in English gets a number or every part of a word gets a number.
You have the sequence of numbers.
That sequence of numbers is given as input to this essentially mathematical computation that goes through and says,
okay, here's this arrangement of numbers.
We multiply each number by this weight, then we add up a bunch of numbers, then we
take a threshold of those numbers and so on. And we keep doing this a sequence of
times, a few hundred times for typical ChatGPT-type behavior.
And then at the end, we got out another number, actually we got out another collection of numbers
that represent the probabilities that the next word should be this or that. So in the example of
the cat sat on the, the next word probably has very high probability, 99% probability, to be
mat, and 1% probability or 0.5% probability to be floor or something. And then what
ChatGPT is doing is it's saying, well, usually I'm going to pick the most likely next word.
Sometimes I'll pick a word that isn't the absolutely most likely next word, and it just keeps
doing that. And the surprise is that just doing that kind of thing, a word at a time,
gives you something that seems like a reasonable English sentence. Now, the next question is
How did it get all those, in the case of the original ChatGPT, I think it was 180 billion weights?
How did it get those numbers?
And the answer is, it was trained, and it was trained by being shown all this text from the web.
And what was happening was, well, you've got one arrangement of weights.
Okay, what next word does that predict?
Okay, that predicts turtle as the next word for the cat sat on the.
Turtle is wrong.
Let's change that.
Let's see what happens if we adjust these weights in that way.
Oh, we finally got it to say mat.
Great.
That's the correct version of that particular weight.
Well, you keep doing that over and over again.
That takes huge amounts of computer effort.
You keep on bashing it and trying to get it.
No, no, no.
You got it wrong.
Adjust it slightly to make it closer to correct.
Keep doing that long enough.
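The keep-adjusting-the-weights loop he describes is, in real systems, gradient descent over billions of weights. Here is a deliberately tiny caricature with a single weight, nudging it until the "prediction" lands on the target; all the numbers and the learning-rate value are invented for illustration:

```python
# Toy caricature of the training loop described above: one "weight",
# nudged repeatedly so the output moves toward the right answer.
# Real systems do this with gradient descent over billions of weights.

def predict(weight, x):
    # A stand-in for the whole network: output = weight * input.
    return weight * x

def train(x, target, steps=1000, lr=0.01):
    weight = 0.0  # start with an arbitrary weight
    for _ in range(steps):
        error = predict(weight, x) - target   # how wrong are we?
        weight -= lr * error * x              # nudge the weight to shrink the error
    return weight

w = train(x=2.0, target=6.0)
print(round(w, 3))                 # converges near 3.0, since 3.0 * 2.0 == 6.0
print(round(predict(w, 2.0), 3))   # close to the target, 6.0
```

Real training differs in scale and machinery, but the shape of the loop, guess, measure the error, nudge, repeat, is the same.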
And you get something which is a neural net,
which has the property that it will
typically reproduce the kinds of things it's seen. Now, it's not enough to reproduce what it's
seen, because if you keep going, writing a big long essay, a lot of what's in that essay will
never have been seen before. Those particular combinations of words will never have been produced
before. So then the question is, well, how does it extrapolate? How does it figure out something
that it's never seen before? What words is it going to use when it never saw it before?
And this is the thing where nobody knew what was going to happen. This is the thing where the
surprise is that the way it extrapolates is similar to the way we humans seem to extrapolate things.
And presumably, that's because its structure is similar to the structure of our brains.
We don't really know why, when it figures out things that it hasn't seen before, it does that
in a kind of human-like way.
That's a scientific discovery.
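The word-at-a-time loop described above, take the text so far, get probabilities for the next word, usually pick the most likely one, can be sketched with a hand-built probability table standing in for the trained network. The table, its probabilities, and the temperature knob are all invented for illustration:

```python
import random

# Invented next-word probability tables; a real model computes these
# numbers from its weights, given the whole prompt.
NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.99, "floor": 0.01},
    "the cat sat on the mat": {".": 0.9, "quietly": 0.1},
}

def next_word(prompt, temperature=0.0):
    probs = NEXT_WORD_PROBS[prompt]
    if temperature == 0.0:
        # Always pick the most likely next word.
        return max(probs, key=probs.get)
    # Otherwise sample, so a less likely word is sometimes chosen.
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the cat sat on the"))  # mat
```

With temperature zero this always says "mat"; with a nonzero temperature it occasionally picks "floor", which is the "sometimes I'll pick a word that isn't the absolutely most likely" behavior he mentions.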
Now, we can say, can we get an idea why this might happen?
I think we have an idea why it might happen.
It's more or less this:
you say, how do you put together an English sentence? Well, you kind of learn basic grammar.
You say it's a noun, a verb, a noun. That's a typical English sentence. But there are many
noun-verb-noun English sentences that aren't really reasonable sentences, like, I don't know,
the electron ate the moon. Okay, it's grammatically correct, but probably doesn't really mean
anything except in some poetic sense. Then what you realize is there's a more elaborate construction
kit about sentences that might mean something.
And people have been intending to create that construction kit for a couple thousand years.
I mean, Aristotle, back when he created logic, started thinking about that
kind of construction kit, but nobody got around to doing it.
But I think ChatGPT and LLMs show us there is a construction kit of, oh, that word, if it's
blah, ate, blah, the first blah, better be a thing that eats things.
And there's a certain category of things that eat things, and it's like animals and people
and so on.
And so that's part of the construction kit.
So you end up with this notion of a semantic grammar of a way, a construction kit of how you
put words together.
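A toy version of such a semantic construction kit, where "X ate Y" is grammatical for any nouns but only sensible when X belongs to a category of things that eat, might look like this; the categories and word lists are invented for illustration:

```python
# Toy "semantic grammar": grammatically, any noun can precede "ate",
# but semantically the subject had better be a thing that eats things.
EATERS = {"cat", "dog", "child"}              # invented category of things that eat
NOUNS = EATERS | {"electron", "moon", "mat"}  # all nouns our toy grammar knows

def sensible_ate_sentence(subject, obj):
    if subject not in NOUNS or obj not in NOUNS:
        raise ValueError("unknown noun")
    # Grammatical either way; "sensible" only if the subject can eat.
    return subject in EATERS

print(sensible_ate_sentence("cat", "moon"))       # True: odd, but semantically allowed
print(sensible_ate_sentence("electron", "moon"))  # False: "the electron ate the moon"
```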
My guess is that's essentially what ChatGPT has discovered.
And once we understand that more clearly, we'll probably be able to build things like
ChatGPT much more simply. Rather than this very indirect way of doing it, having these neural nets and
keep bashing them and saying, make them predict words better and so on, there's probably a more direct
way to do the same thing. But that's what's happened. And this moment when it becomes human-level
performance, it's very hard to predict when that will happen. It's happened for things like
visual object recognition around 2011, 2012-type timeframe. It's hard to know when
these things are going to happen for different kinds of human activities. But the thing to realize is
there are human-like activities, and then there are things that we have formalized where we've used
math, we've used other kinds of things as a way to work things out systematically. And that's a
different direction than the direction that things like neural nets are going in. And that happens to be the
direction that I've spent a good part of my life trying to build up. And these things are very
complementary in the sense that things like the linguistic interface that are made possible by
neural nets feed into this precise computation that we can do on that side.
How does this make you feel about human consciousness and AI potentially being sentient or
having any sort of agency? It's always a funny thing because we have an internal view of the
fact that there's something going on inside for us. We experience the world and so on.
Even when we're looking at other people, it's like it's just a guess.
I know what's going on in my mind.
It's just some kind of guess what's going on in your mind, so to speak.
And the big discovery of our species is language, this way of packaging up the thoughts
that are happening in my mind and being able to transmit them to you and having you unpack
them and make similar thoughts perhaps in your mind, so to speak.
So this idea of where can you imagine that there's a mind that's operating,
It's not obvious between different people.
We kind of always make that assumption.
When it comes to other animals, it's like, well, we're not quite sure, but maybe we can tell that a cat had some emotional reaction, which reminded us of some human emotion and so on.
When it comes to our AIs, I think that increasingly people will have the view that the AIs are a bit like them.
So when you say, well, is there a there there, is there a thing inside?
it's like, okay, is there a thing inside another person?
You know, if you say, well, what we can tell the other person is thinking and doing all this stuff,
well, if we were to look inside the brain of that other person, all we'd find is a bunch of electrical signals going around
and those add up to something where we have the assumption that there's a conscious mind there, so to speak.
So I think we have always felt that our thinking and minds are very far away from other things
that are happening in the world. I think the thing that we learn from the advance of AI is, well,
actually, there's not as much distance between the amazing stuff of our minds and things that
are just able to be constructed computationally. One of the things to realize is this whole question
of what thinks, where is there computational stuff going on? And you might say, well, humans do that,
maybe our computers do that. Well, actually, nature does that
too. When people say this thing, the weather has a mind of its own, well, what does that mean?
Typically, operationally, it means it seems like the weather is acting with free will. We can't predict
what it's going to do. But if we say, well, what's going on in the weather? Well, it's a bunch of
fluid dynamics in the atmosphere and this and that and the other. And we say, well, how do we compare
that with the electrical processes that are going on in our brains? They're both computations
that operate according to certain rules, the ones in our brains we're familiar with, the ones
on the weather we're not familiar with. But in some sense, both of these cases, there's a computation
going on. And one of the things that was a big piece of the science I've done is this thing
called the principle of computational equivalence, which is this discovery, this idea that if you
look at different kinds of systems operating according to different rules, whether it's a brain or the
weather, there's a commonality: the same level of computation is achieved by those different
kinds of systems. That's not obvious. You might say, well, I've got the system and it's just a system
that's made from physics, as opposed to the system that's the result of lots of biological evolution
or whatever, or I've got the system and it just operates according to these very simple rules
that I can write down. You might have thought that the level of computation that would be achieved
in those different cases was very different. The big surprise is that it isn't. It's the same. And that
has all kinds of consequences. Like if you say, okay, I've got the system in nature, let me predict
what's going to happen in it. Well, essentially what you're doing by saying, I'm going to predict
what's going to happen is you're somehow setting yourself up as being smarter than the system
in nature. It will take it all these computational steps to figure out what it does, but you are
going to just jump ahead and say, this is what's going to happen in the end. Well, the fact that
there's this principle of computational equivalence implies this thing I call computational irreducibility,
which is the realization that there are many systems where, to work out what will happen in that system,
you have to do kind of an irreducible amount of computational work. That's a surprise because
we have been used to the idea that science lets us jump ahead and just say, oh, this is what the
answer is going to be. And this is showing us from within science, it's showing us that there's a
fundamental limitation where we can't do that. That's important when it comes to thinking about
things like AI, when you say things like, well, let's make sure that AIs never do the wrong thing.
Well, the problem with that is there's this phenomenon of computational irreducibility.
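Wolfram's standard illustration of computational irreducibility is rule 30, a cellular automaton whose one-line update rule is completely known, yet whose behavior has no known shortcut: to learn what row n looks like, you run all n steps. A minimal sketch:

```python
# Rule 30: each cell's next value depends only on itself and its two neighbors,
# new = left XOR (center OR right). Despite the trivial rule, the pattern
# appears random, and there is no known shortcut to row n other than
# actually computing all n rows -- computational irreducibility in miniature.

def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 15 + [1] + [0] * 15   # start from a single black cell
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```

Running this prints the familiar growing triangular pattern; predicting, say, the center cell of row one million without doing the million updates is exactly the kind of jump-ahead that irreducibility rules out.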
The AI is doing what the AI does. It's doing all these computations and so on. We can't know in
advance. We can't just jump ahead and say, oh, we know what it's going to do. We are stuck
having to follow through these steps. We can try and make an AI where we can always know what it's
going to do. It turns out that AI will be too dumb to be a serious AI. And in fact, we see that happening
in recent times with people saying, let's make sure they don't do the wrong thing. If we put on enough
constraints, it can't really do the things that a computational system should be able to do,
and it doesn't really achieve this level of capability that you might call real AI,
so to speak. What's up, Yap gang? If you're a serious entrepreneur like me, you know your website is one of
the first touch points every single cold customer has with your brand. Think about that for a second.
When people are searching on Google, everybody who interacts with your brand first is seeing your dot com
initially. But here's the problem. Too many companies treat their website like a formality instead of
the growth tool that it should be. At Yap Media, we are guilty of this. I am really due for an upgrade
for my website, and I'm planning on doing that with Framer this year. Because
small changes can take days with my other platform and simple updates require tickets,
suddenly we're just leaving so much opportunity on the table.
And that's why so many teams, including mine, are turning to Framer.
It's built for teams who refuse to let their website slow them down.
Your designers and marketers get full ownership with real-time collaboration, everything
you need for SEO and analytics with integrated A/B testing.
I love that.
I love testing and making sure that we've got the best performing assets on the page.
You make a change, hit publish, and it's live in seconds.
Whether you're launching a new site, testing landing pages, or migrating your full.com,
Framer makes going from idea to live site fast and simple.
Learn how you can get more out of your dot com from a Framer specialist or get started
building for free today at Framer.com slash profiting for 30% off a Framer Pro
annual plan.
That's 30% off in 2026.
Again, that's Framer.com slash profiting for 30% off Framer.com slash profiting.
Rules and restrictions apply.
Hello, young and profiters.
Running my own business has been one of the most rewarding things I've ever done, but I won't
lie to you.
In those early days of setting it up, I felt like I was jumping off a cliff with no parachute.
I'm not really good at that kind of stuff.
I'm really good at marketing, sales, growing a business, offers.
But I had so many questions and zero idea where to find the answers when it came to starting
an official business.
I wish I had known about Northwest Registered Agent back when I was starting Yap Media.
And if you're an entrepreneur, you need to know what Northwest Registered Agent is.
They've been helping small business owners launch and grow businesses for nearly 30 years.
They literally make life easy for entrepreneurs.
They don't just help you form your business.
They give you the free tools you need after you form it, like operating agreements and
thousands of how-to guides that explain the complicated ins and outs of running a business.
And guys, it can get really complicated.
But Northwest Registered Agent just makes it all easy and breaks it down for you.
So when you want more for your business, more privacy, more guidance, more free resources,
Northwest Registered Agent is where you should go.
Don't wait and protect your privacy, build your brand, and get your complete business identity
in just 10 clicks and 10 minutes.
Visit northwestregisteredagent.com slash yapfree and start building something amazing.
Get more with Northwest Registered Agent at northwestregisteredagent.com slash yapfree.
What's up, young and profiters.
I remember when I first started Yap, I used to dread missing important calls.
I remember I lost a huge potential partnership because the follow-up thread got completely
lost in my messy communication system.
Well, this year, I'm focused on not missing any opportunities.
And that starts with your business communications.
A missed call is money and growth out the door.
That's why today's episode is brought to you by Quo, spelled QUO, the smarter way to run your
business communications.
Quo is the number one rated business phone system on G2, and it works right from an app
on your phone or computer.
The way Quo works is magic for team alignment.
Your whole team can handle calls and texts from one shared number, and everyone sees the
full conversation.
It's like having access to a shared email inbox, but on a phone.
And also, Quo's AI can even qualify leads or respond after hours, ensuring your business
stays responsive even when you've finally logged off.
It makes doing business so much easier.
Make this the year where no opportunity and no customer slips away.
Try Quo for free, plus get 20% off your first six months
when you go to quo.com slash profiting.
That's Q-U-O dot com slash profiting.
Quo. No missed calls, no missed customers.
Next, I want to talk about
how the world is going to change
now that AI is here,
being more adopted by people.
It's becoming more commonplace.
How is it going to impact jobs?
And also, if you can touch on
the risks of AI,
what are the biggest fears
that people have around AI?
More and more systems
in the world will get automated.
This has been a story of technology throughout history.
AI is another step in the automation of things.
You know, when things get automated,
things humans used to have to do with their own hands,
they don't have to do anymore.
The typical pattern of economies, like in the US or something,
is 150 years ago in the US,
most people were doing agriculture.
You had to do that with your own hands.
Then machinery got built that let that be automated.
And people, you know, it's like, well,
then people are going to have nothing to do. Well, it turned out they did have things to do
because that very automation enabled a lot of new types of things that people could do.
And, for example, the podcasting thing we're doing right now is enabled by
the fact that we have video communication and so on. There was a time when all of that automation
that has now led to the kind of telecommunications infrastructure we have wasn't there.
And there had to be telephone switchboard operators plugging wires in
and so on, and people were saying, oh, gosh, if we automate telephone switching, then all those jobs
are going to go away. But actually what happened was, yes, those jobs went away, but that automation
opened up many other categories of jobs. So the typical thing that you see, at least historically,
is a big category, there's a big chunk of jobs that are something that people have to do for
themselves, that gets automated, and that enables what becomes many different possible things that
you end up being able to do. And I think the way to think about this is really the following,
that once you've defined an objective, you can build automation that does that objective.
Maybe it takes 100 years to get to that automation, but you can in principle do that.
But then you have the question, well, what are you going to do next? What are the new things you could do?
Well, that question, there are an infinite number of new things you could do.
The AI left to its own devices, there's an infinite set of things that it could be doing.
The question is, which things do we choose to do?
And that's something that is really a matter for us humans, because it's like you could
compute anything you want to compute.
And in fact, some part of my life has been exploring the science of the computational universe,
what's out there that you can compute.
and the thing that's a little bit sobering is to realize of all the things that are out there to compute,
the set that we humans have cared about so far in the developments of our civilization is a tiny, tiny, tiny slice.
And this question of where do we go from here is, well, what other slices, which now they're possible,
which things do we want to do?
And I think that the typical thing you see is that a lot of new jobs get created around the things
which are still sort of a matter of human choice, what you do. Eventually, it
gets standardized and then it gets automated, and then you go on to another stage. So I think that
the spectrum of what jobs will be automated: one of the things that happened several
years ago now, people were saying, oh, machine learning, the sort of underlying area that leads
to neural nets and AI and things like this, machine learning is going to put all these people
out of jobs. The thing that was sort of amusing to me was that I knew perfectly well that the first
category of jobs that would be impacted were machine learning engineers because machine learning can be
used to automate machine learning, so to speak. And so it was. Once the thing becomes routine,
then it can be automated. And for example, a lot of people learned to do programming,
low-level programming. I've spent a large part of my life trying to automate low-level programming.
So in other words, the computational language we've built, which people like, oh my gosh, I can do this, I can get the computer to do this thing for me by spending an hour of my time.
If I were writing standard programming language code, I'd spend a month trying to set my computer up to do this.
The thing we've already achieved is to be able to automate out those things.
What you realize, when you automate out something like that, is people say, oh, my gosh, things are becoming
so difficult now, because if you're doing low-level programming, some part of what you're doing
is just routine work. You don't have to think that much. It's just like, oh, I turn the crank,
I show up to work the next day, I get this piece of code written. Well, if you've automated out
all of that, what you realize is most of what you have to do is figure out, so what do I want to
do next? And that's where this being able to do real computational thinking comes in, because that's
where it's like, so how do you think about what you're trying to do in computational terms
so you can define what you should do next? And I think that's an example of, you know,
the low level, turn-the-crank programming. I mean, that should be extinct already because
I've spent the last 40 years trying to automate that stuff. And in some
segments of the world, it is kind of extinct because we did automate it. But there's an awful
lot of people where they said, oh, we can get a good job by learning to program in C or C++
or Python or Java or something like this,
that's a thing that we can spend our human time doing.
It's not necessary.
And that's being more emphasized at this point.
The thing that is still very much the human thing is,
so what do you want to do next, so to speak?
It's a good story because you're not saying,
hey, we're doomed.
You're saying AI is going to actually create more jobs.
It's going to automate the things that are repetitive
and the things that we still need to make decisions on,
or decide the direction that we want to go in,
that's what humans are going to be doing,
sort of shaping all of it.
But do you feel that AI is going to supersede us in intelligence
and have this apex intelligence one day
where we are not in control of the next thing?
I mentioned the fact that lots of things in nature compute.
Our brains do computation, the weather does computation,
the weather is doing a lot more computation than our brains are doing.
So if you say what's the apex intelligence in the world, already nature has vastly more
computation going on than happens to occur in our brains.
The computation going on in our brains is computation where we say, oh, we understand what that
is and we really care about that.
Whereas the computation that goes on in the babbling brook or something, we say, well, that's
just some flow of water and things.
We don't really care about that.
So we already lost that competition of are we the most computationally sophisticated
things in the world. We're not. Many, many things are equivalent in their computational abilities.
So then the question is, well, what will it feel like when AI gets to the point where routinely
it's doing all sorts of computation beyond what we manage to do? I think it feels pretty much
like what it feels like to live in the natural world. The natural world does all kinds of things.
There are, you know, occasionally a tornado will happen. Occasionally this will happen.
We can make some prediction about what's going to happen,
but we don't know for sure what's going to happen,
when it's going to happen, and so on.
And that's what it will feel like
to be in a world where most things are run with AI.
And we'll be able to do some science of the AI,
just like we can do science of the natural world
and say this is what we think is going to happen.
But there's going to be this infrastructure of AI society.
There already is to some extent,
but that will grow of more and more things
that are happening automatically
as a computational process.
But in a sense, that's no different
from what happens in the natural world.
The natural world is just automatically
doing things.
We can try and divert what it does,
but it's just doing what it does.
For me, one of the things I've long been interested in
is how the universe is actually put together.
If we drill down and look at the smallest scales
of physics and so on, what's down there?
And what we've discovered in the last few years
is that it looks like we really can
understand the whole of what happens in the universe as a computational process underneath. I mean,
people have been arguing for a couple of thousand years whether the world is made of continuous
things, or whether it's made of little discrete things like atoms and so on. And a bit more
than 100 years ago, it got nailed down that matter is made of discrete stuff. There are individual
atoms and molecules and so on. Then light is made of discrete stuff, photons and so on. Space,
people had still assumed was somehow continuous, was not made of discrete stuff.
And the thing we kind of nailed down, I think, in 2020, was the idea that space really is made of
discrete things.
There are discrete elements, discrete atoms of space.
And we can really think of the universe as made of a giant network of atoms of space.
And hopefully, in the next few years, maybe, if we're lucky, we'll get direct experimental evidence
that space is discrete in that way.
But one of the things that that makes one realize is it's sort of computation all the way down.
At the lowest level, the universe consists of this discrete network that keeps on getting updated
and it's kind of following these simple rules and so on.
It's all rather lovely.
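A toy flavor of such a model, a network of abstract nodes repeatedly updated by one simple rewrite rule, can be sketched as follows. This particular edge-splitting rule is a hypothetical simplification chosen for brevity, not one of the actual Wolfram model rules:

```python
# Toy network-rewriting system in the spirit described above: the "universe"
# is a collection of edges between abstract nodes, updated by one simple rule.
# Hypothetical rule: replace each edge (x, y) with (x, z) and (z, y),
# where z is a fresh node -- so the network grows at every update step.

def step(edges, next_node):
    new_edges = []
    for x, y in edges:
        z = next_node          # mint a fresh node
        next_node += 1
        new_edges += [(x, z), (z, y)]
    return new_edges, next_node

edges, next_node = [(0, 1)], 2  # start from a single edge
for _ in range(4):
    edges, next_node = step(edges, next_node)
print(len(edges))  # doubles each step: 1 -> 2 -> 4 -> 8 -> 16
```

Even this trivial rule shows the basic picture: a discrete network with no built-in notion of space, whose structure emerges entirely from repeated application of a simple local update.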
But there's computation everywhere in nature, in our AIs and our brains.
The computation that we care the most about is the part that we with our brains and our civilization
and our culture and so on have so far explored. That's the part we care the most about.
Progressively, we should be able to explore more. And as the computational X fields come into existence
and so on, and we get to use our computers and computational language and so on, we get to colonize
more of the computational universe, and we get to bring more things into, oh, yes, that's
the thing we humans talk about. I mean, if you go back even just 100
years, nobody was talking about all these things that we now take for granted about computers
and how they work and how you can compute things and so on. That was just not something within
our human sphere. Now, the question is, as we go forward with automation, with the formalization
of computational language, things like that, what more will be within our human sphere?
It's hard to predict. It is, to some extent, a choice. There are things where we could go in this direction
or in that direction. These are things we will eventually humanize. It's also, if you look at
the course of human history, and you say, what did people think was worth doing? A thousand years ago,
a lot of things that people think are worth doing today, people absolutely didn't even think about.
A good example, perhaps, is walking on a treadmill. That would just seem completely stupid to somebody
from even a few hundred years ago. It's like, why would you do that? Well, I want to live a long life.
Why do you even want to live a long life?
That's because whatever.
That wasn't, you know, in the past, that might not even have been thought of as an objective.
And then there's a sort of whole chain of why are we doing this.
And that chain is a thing of our time.
And that will change over time.
And I think what is possible in the world will change.
What we get to explore out of the computational universe of all possibilities will change.
There will no doubt be people asking the question, what will be the role of the
biological intelligence versus all the other things in the world? And as I say, we're already
somewhat in that situation. There are things about the natural world that just happen.
And some of those things are things that are much more powerful than us. We don't get to
stop the earthquakes and so on. So we already are in that situation. It's just that
the things that we are doing with AI and so on,
we happen to be building a layer of that infrastructure
that is sort of of our own construction
rather than something which has been there all the time in nature,
and so we've kind of gotten used to it.
It's so mind-blowing, but I love the fact that you seem to have
like a positive attitude towards it.
You know, we've had other people on the show that are worried about AI,
but you don't have that attitude towards it.
It seems like you're more accepting of the fact that it's coming
whether we like it or not, right? And to your point, we're already living in nature, which is
way more intelligent than us anyway. And so maybe this is just an additional layer.
Right. I'm an optimistic person. That's what happens. I've spent my life doing large projects and
building big things. You don't do that unless you have a certain degree of optimism.
But I think what will also always be the case is that as things change, things that people have been doing
will stop making sense.
You see this in the intellectual sphere
with paradigms in science.
I built some new things in science
where people at first say,
oh my gosh, this is terrible.
I've been doing this other thing for 50 years.
I don't want to learn this new stuff.
This is a terrible thing.
And I think you see that elsewhere too.
There's a lot in the world
where people are like,
it's good the way it is.
Let's not change it.
Well, what's happening is,
in the sphere of ideas and in the sphere of technology, things change.
And I think to say, is it going to wipe our species out?
I don't think so.
But that would be a thing that we would probably think is definitively bad.
If we say, well, you know, I spent a lot of time learning how to, I don't know, write,
or I became, you know, a great programmer in some low-level programming language.
And by golly, that's not a relevant skill anymore.
Yes, that can happen.
I mean, for example, in my life, I got interested in physics when I was pretty young, and
when you do physics, you end up having to do lots of mathematical calculations.
I never liked doing those things.
But there were other people who were like, that's what they're into.
That's what they like doing.
So I taught computers to do them for me, and me plus the computer did pretty well at doing
those things.
But in a sense, one had automated that away.
To me, that was a big positive, because it let me do a lot more.
It let me take what I was thinking about and get a sort of superpower to go places with it.
To other people, it's like, oh my gosh, the thing that we were really good at, doing
all these mathematical calculations by hand and so on, just got automated away;
the thing that we like to do isn't a thing anymore.
So that's a dynamic that I think continues.
But having said that, there are plenty of ridiculous things that get made possible
by, you know, whenever there's powerful technology, you can do ridiculous things with it.
And the question of exactly what terrible scam will be made possible by what piece of AI,
that's always a bit hard to predict. It's a kind of a computational irreducibility story of
this thing of what will people figure out how to do, what will the computers let them do,
and so on. But yes, in general terms, it is my nature to be optimistic,
but I think also there is kind of an optimistic path through the way the world is changing, so to speak.
Well, it's really exciting. I can't wait to have you back on maybe in a year to hear all the other exciting updates that have happened with AI.
I end my show asking two questions. Now, you don't have to use the topic of today's episode.
You can just use your life experience to answer these questions. So one is, what is one actionable thing our young and profiters can do today to become more profitable tomorrow?
And this is not just about money, but profiting in life.
Understand computational thinking.
This is the coming paradigm of the 21st century.
And if you understand that well, it gives you a huge advantage.
And unfortunately, it's not like you go sign up for a computer science class and you'll learn that.
Unfortunately, the educational resources for learning about computational thinking
aren't really fully there yet.
And it's something which, frustratingly, after many years, I've decided I have to
build much more of these things, because other people aren't doing it, and it'll be another
few decades before it gets done otherwise. But yes, learn computational thinking, learn the tools that
are around that. That's a quick way to jump ahead in whatever you're doing, because as you make
things computational, you get to think more clearly about them, and you get the computer to help you jump forward.
And where can people get resources from you to learn more about that? Where do you recommend?
Our computational language, Wolfram Language, is the main example of where you get to do computational thinking.
There's a book I wrote a few years ago called An Elementary Introduction to the Wolfram Language, which is pretty accessible to people.
But hopefully, certainly within a year, there should exist a thing that I'm working on right now, which is directly an introduction to computational thinking. In the meantime, you'll find a bunch of resources around Wolfram Language that explain more about how one can think about things computationally.
Whatever links we find, I'll stick them in the show notes.
And next time, if you have something and you're releasing it,
make sure that you contact us so you can come back on Young and Profiting Podcast.
Stephen, thank you so much for your time.
We really enjoyed having you on Young and Profiting Podcast.
Thanks.
Oh boy, Yap fam.
My brain is still buzzing from that conversation.
I learned so much today from Stephen Wolfram, and I hope that you did too.
And although AI technology like ChatGPT seemed to just pop up
out of nowhere in 2022, it's actually been in the works for a long, long time. In fact, a lot of the
thinking behind large language models has been in place for decades. We just didn't have the tools
or the computing power to bring it to fruition. And one of the exciting things that we've
learned about AI advances is that there's not as big a gap between what our organic brains can do
and what our silicon technology can now accomplish. As Stephen put it, whether a system develops from
biological evolution or computer engineering, we're talking about the same rough level of computational
complexity. Now, this is really cool, but it's also pretty scary. Like, we're just creating this
really smart thing that's going to get smarter. And I asked him the question, like, do you think
AI is going to have apex intelligence and take over the world? I record these outros a couple weeks
after I do the interview.
And I've been telling all my friends this analogy.
Every time I talk to someone,
I'm like, oh, you want to hear something cool?
And I keep thinking about this.
AI, if it does become this apex intelligence
that we have no more control over,
he said it might just be like nature.
Nature has a mind of its own.
It's what everybody always says.
We can try to predict it.
We can try to analyze nature.
We can try to figure out what it does.
Sometimes it's terrible and inconvenient
and disastrous and horrible.
And sometimes it's beautiful.
It's so interesting to think about the fact that AI might become this thing that we just exist
with, that we created, that we have no control over.
It might not necessarily be bad.
It might not necessarily be good.
It just could be this thing that we exist with.
So I thought that was pretty calming because we do already sort of exist in a world that we have
no control over.
You never really think about it that way, but it's true.
And speaking of AI getting smarter, let's talk about AI and work.
Is AI going to end up eating our workforce's lunch in the future?
Stephen is more optimistic than most.
He thinks AI and automation might just make our existing jobs more productive
and likely even create new jobs in the future,
jobs where humans are directing and guiding AI in new, innovative endeavors.
I really hope that's the case because us humans, we need our purpose.
Thanks so much for listening to this episode of Young and Profiting Podcast.
We are still very much human-powered here at Yap and would love your help.
So if you listen, learned, and profited from this conversation with the super-intelligent Stephen Wolfram,
please share this episode with your friends and family.
And if you did enjoy this show and you learned something,
then please take two minutes to drop us a five-star review on Apple Podcasts.
I love to read our reviews.
I go check them out every single day.
And if you prefer to watch your podcasts as videos,
you can find all of our episodes on YouTube.
You can also find me on Instagram at Yap with Hala or LinkedIn by searching my name.
It's Hala Taha.
And before we go, I did want to give a huge shout out to my YAPMedia production team.
Thank you so much for all that you do.
You guys are the best.
This is your host, Hala Taha, aka the podcast princess, signing off.
