Behind The Tech with Kevin Scott - Danny Hillis: Inventor, entrepreneur, scientist
Episode Date: November 21, 2019
If you’ve heard of the 10,000 Year Clock, you know Danny. He’s a computer scientist who pioneered parallel computers and their use in artificial intelligence. Danny founded Thinking Machines Corporation, a parallel supercomputer manufacturer, and was a fellow at Walt Disney Imagineering.
Transcript
Social media was never by itself the thing that was going to advance humanity,
nor is it the thing that's going to destroy humanity.
It's actually a tool that we're going to learn to use with time,
like we learned to use fire.
Hi, everyone.
Welcome to Behind the Tech.
I'm your host, Kevin Scott, Chief Technology Officer for Microsoft.
In this podcast, we're going to get behind the tech.
We'll talk with some of the people who have made our modern tech world possible and understand what motivated them to create what they did. So join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes
insights into what's happening today. Stick around.
Hello, and welcome to Behind the Tech. I'm Christina Warren, Senior Cloud Advocate at
Microsoft.
And I'm Kevin Scott. Today, our guest is Danny Hillis.
And Danny Hillis,
I'm so excited about today's guest. He is an incredible pioneer. Yeah, he is. So Danny,
Danny is perhaps most well-known, although it's like a difficult thing to say, you know,
given how much he's accomplished, like what the single best-known thing was. But like when he was
a student at MIT, he started this pioneering
company called Thinking Machines that built the world's fastest supercomputers and really
pioneered a new type of computer architecture that was revolutionary at the time and that has informed how we build computers even today.
And, like, he's also been the head of Disney Imagineering, and he's got this crazy invention factory company that he runs now.
So, like, Danny really is, like, one of the most interesting people I know and, like, so creative and, like, such an amazing entrepreneur.
I'm super excited to be able to chat with him today.
I'm so excited, too.
We should get to the interview.
Yeah, let's chat with Danny.
Today, we'll chat with Danny Hillis.
Danny is an inventor, engineer, entrepreneur, and author. As a student at MIT, he founded the pioneering supercomputing company Thinking Machines,
which built the world's fastest computers in the 80s and 90s and paved the way for modern large-scale computing.
After Thinking Machines, Danny ran Imagineering at Disney.
He co-founded Applied Minds and Applied Invention, an interdisciplinary group of engineers, scientists, and artists.
He is a visiting professor at the MIT Media Lab and a great friend.
Welcome to the show, Danny.
Great to be here.
So, you, perhaps more than any other person I know, like have a curious and wide set of interests, which is awesome.
And I would really love to understand, like, how that got started.
Like, were you a curious kid?
So, I was really lucky in that I grew up all over the world.
My father studied hepatitis.
So, wherever there was a hepatitis epidemic, we went there and lived.
Wow.
So, I got to live in countries in Africa that don't exist anymore. But, yeah, lots
of places in Africa, in India — I got to live in Calcutta — and in Europe, and in little towns
in the southern United States. So, there was always something to be curious about.
Right.
And, like, beyond the exposure to, like, all of these different cultures and people and ideas,
like, your dad must have been a little bit fearless and adventurous.
You know, it's interesting.
I think in retrospect he was just naive.
Because when I had kids of my own, I was like, why did you bring us into war zones and stuff like that?
I would never do that with my kids.
It's like, yeah, we didn't really understand what we were doing.
But it seemed to have worked out.
It did.
And I got to try every possible kind of education system or non-education system.
So, was he an epidemiologist by training?
That's right.
Okay.
And what did your mom do?
Well, actually, my mom is a great story because my mom quit school to put my father through medical school.
And I always knew she was super smart.
But because she wasn't educated — and she had a Southern accent — people did not treat her like she was smart.
And in those days, that caused her to be ignored.
Yeah.
Well, I think it might still cause you to be ignored.
Well, when I went to high school, she went back to college and finished college.
And then when I went to college, she went to graduate school and got her Ph.D. in biostatistics.
Wow.
And then all of a sudden, everybody started taking her seriously as a statistician.
Well, I mean, that is an amazing story.
I mean, sad that they didn't take her seriously before she got her Ph.D.
Well, it has a happy ending.
That's great.
And so when you were a kid, it was like this is before the personal computing revolution.
So like what was your first contact with computers?
So partly I was very fascinated by technology because I was far away from it.
So I was in places where, you know, it wasn't happening.
And so that made it sort of more enticing for me. And when I was
living in Calcutta, I went down to the British Council, which had a library of English books. You
couldn't take them out, but you could read them there. And I found Boole's Laws of Thought.
Wow.
And, you know, it was too advanced for me, but I kind of got the basic idea of Boolean logic.
I thought, this is really cool.
This is how computers work.
And so I wanted to build one, but, of course, I couldn't.
You couldn't buy a transistor or a relay or anything.
So I built my own switches by using screen from screen doors and nails that stuck into them.
And I cannibalized lots of flashlights.
And I built my first computer, which was basically a fixed logic array that played tic-tac-toe.
So you would move the switches, and it would light up one of the nine squares.
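The machine Danny describes is a fixed array of Boolean logic built out of physical switches: switches wired in series conduct only if all are closed (AND), switches wired in parallel conduct if any is closed (OR). A minimal sketch of that idea — the function names and the sample tic-tac-toe rule are invented for illustration, not a reconstruction of his wiring:

```python
# A series circuit conducts only if every switch is closed (AND);
# a parallel circuit conducts if any switch is closed (OR).

def series(*switches):      # AND: current must flow through all in a row
    return all(switches)

def parallel(*switches):    # OR: current can take any closed branch
    return any(switches)

# One "rule" of a fixed logic array might light a square's lamp when
# either of two input switches is on AND the square's switch is free.
# This particular rule is hypothetical.
def light_center(opp_top_left, opp_top_right, center_free):
    return series(parallel(opp_top_left, opp_top_right), center_free)

print(light_center(True, False, True))   # True: the lamp lights
print(light_center(False, False, True))  # False: no input threat
```

Because the logic is fixed, there is no program to run — every move just re-evaluates the wired-in truth table, which is exactly why screen-door mesh and flashlight bulbs were enough.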
Now, that's pretty incredible, actually.
And did you have anybody helping you? I mean,
like, that's certainly not something I would expect one of my children to go figure out.
You know, certainly lots of adults helped me, like, you know, nail the nails in and things like that. But I don't think my parents had much of an idea of what I was doing. But
I always got encouragement from the people around me.
And I think that that's one thing.
I mean, I did a bunch of crazy, stupid things that didn't work, too.
And I got encouragement on those, too.
But that one worked.
Wow.
And so, you know, they gave me a lot of freedom to make mistakes.
And there were a lot of mistaken, crazy projects that didn't work.
But that one did.
And so, that kind of got me on a course.
And it actually won the special prize in the All India Science Fair.
So, that was.
That's awesome.
And, like, that must have been, like, a really important thing.
Like, getting encouragement at the right point when you're young sort of, like, almost starts a positive feedback loop.
Oh, it absolutely does.
And people who are interested in what you're doing
and it's not really necessarily they have the knowledge.
And in some sense, maybe I was a little bit lucky
that they didn't have the knowledge
because I kind of had to figure it out for myself in a way.
But I had resources like the library, for example.
And, you know, I always had people that were willing to help as much as they could and
so on.
So.
It's really interesting.
It's so different than the world we live in now where, of course, anybody can find anything
on the internet.
But in those days, it was really very hard to find information about things.
And so, I wonder about this all the time because, like, when I was a kid in the 70s,
it was still true — no internet, right?
No Google, no Wikipedia. And, you know, we were relatively poor.
So, like, we only had a few books in the house.
And, you know, my parents had bought a World Book Encyclopedia from one of the
like encyclopedia salespeople and put it on a payment plan. But like you had to go to the
library really to go get books. And I sometimes like romanticize this idea of like, oh, what it
would have been like if I were a child now and I had access to the Internet and, like, how good that would be.
And then sometimes I wonder, like, whether or not it would make things too easy.
Well, in some sense, we were all so lucky that we grew up with the technology of computers.
So we got to see it in simple enough form.
And maybe because I'm older than you, I got to see it in an even simpler form.
Like, I remember the first calculator
I ever saw in my life.
And so, in some sense, that was an advantage because we spent years watching it build up
from the bottom of kind of the switching elements and the very simple functions and the machine
language programming and then all the layers of software that gradually got put on top
of that. And now I think it would be very tempting just to kind of skip straight to the real
powerful functionality.
Yeah, and like that's a recurring theme.
I mean, it's funny, like I haven't, like we've never had this conversation before, but I've
had this chat on this podcast with a bunch of other people and like the same theme emerges
over and over again.
And like there's, you know, on the one hand, like all of that complexity and the abstractions that
we built up over the years to sort of package the complexity up in ways where you
can very easily use it to build things is great. And on the other hand, like not having that deep
fundamental understanding of what's really going on from top to bottom can hinder you sometimes.
Right.
Both kinds are really useful because, of course, if you try to work from that level, then you miss a lot of the power of building on what other people have done and so on.
But, I mean, you said something about, like, I made some of the first parallel computers. And I was thinking, I wonder if they really, the audience really knows what it meant to make a computer in those days.
Well, so let's, I mean, let's talk about this.
Like, we'll go back to — well, actually, let's just go straight there.
So, you know, you were traveling all over the world, sort of a precociously curious child,
and then you go to college, and it's MIT, right? Yeah. And actually, I went to college. I really,
I mean, the computer thing was cool, but I never thought of it as a career.
Right. Did they even have a computer science program at MIT when you started?
It was combined with electrical engineering. Okay.
But when I went there, I wanted to be a neurophysiologist.
I wanted to study the brain.
The brain was obviously the most interesting mystery.
And it still is.
Still is.
It still is.
Exactly.
Actually, I just came from the Brain Mind Conference.
Nice.
And, yeah, so this is still basically a mystery.
Yeah.
That's how it works.
We understand little bits of it.
Yeah.
And so when I went to MIT, I had read this paper
that I was very excited about,
that they had stuck probes inside a frog
into the optic nerve.
And they had worked out that the signal being sent
from the eye to the brain was not just like a pixel.
It was actually a pattern.
It detected like a black dot moving across a light background.
So, the nerve cells in the eye were encoding information, basically.
Exactly.
And the paper was called What the Frog's Eye Tells the Frog's Brain by Jerry Lettvin and a couple of co-authors.
And I'd read this and I was just so excited
because this was what I wanted to do.
It felt like they were just starting to figure out
all the neural circuitry.
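The finding Danny is describing — that the optic nerve carries detected features, like a dark spot in motion, rather than raw pixel values — can be caricatured in a few lines. A toy sketch, with the 1-D "retina," the brightness threshold, and the frame data all invented purely for illustration:

```python
# Toy 1-D "retina": a dark dot (low brightness) on a light background.
# The "nerve" signals only when a dark spot has changed position
# between frames -- a detected feature, not raw pixel values.
# Threshold and sizes are made up for this sketch.

THRESH = 0.5  # brightness below this counts as "dark"

def dark_positions(frame):
    """Set of retina positions that currently see a dark spot."""
    return {i for i, v in enumerate(frame) if v < THRESH}

def moving_dark_spot(prev_frame, next_frame):
    """Fires only if a dark spot exists and has moved since the last frame."""
    before, after = dark_positions(prev_frame), dark_positions(next_frame)
    return bool(after) and after != before

light = [0.9] * 8
frame1 = light[:]; frame1[2] = 0.1   # dot at position 2
frame2 = light[:]; frame2[3] = 0.1   # dot moved to position 3

print(moving_dark_spot(frame1, frame2))  # True: dark dot moved
print(moving_dark_spot(frame2, frame2))  # False: no motion, no signal
```

The point of the sketch is only the shape of the computation: the output is a yes/no about a pattern, not a copy of the image.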
And my first day at MIT,
they sort of have a party for the incoming freshmen.
And I go to this party and there was this guy sitting there
holding forth with the freshmen and sort of pointing at them,
what are you interested in?
And whatever they were interested in,
he would explain to them why it was a crazy thing to study.
And so he gets to me and he says, you know, what about you?
I said, I want to study neurobiology.
And he said, oh, that crock of bleep.
Yep, yep.
Oh, and I think I know who this was.
Yep, you're guessing it.
That's right.
So he completely tore it apart.
He was like, tell me one good paper that's ever been written.
And, of course, he completely tore apart this paper. And, of course, he turned out to be the author of the paper.
And so, but he did sort of convince me that that was not the moment to study the brain,
that it was, you know, the tools were too crude. But he suggested that I go over to meet Marvin Minsky.
And so I followed that instruction, which is another story that's kind of fun.
Yeah.
Yeah, you should tell us.
So I had this instruction to go over and find the great Marvin Minsky, who of course I had heard of.
And he was legendary at this point.
Totally legendary.
And he was the guy that, you know, he and McCarthy had invented the term artificial
intelligence.
Yeah, the famous Dartmouth workshop in the summer of '56.
Yeah.
Yeah.
And so, the AI lab in those days wasn't even on campus.
It was in a special building that you had to have special keys to get into.
So, of course, I went over there, and I couldn't even get to his office
without slipping in behind somebody.
But then I get to his office, and he's not there.
And so I hang out, and I manage to get a job in the,
kind of an undergraduate research job in the building, still can't find him.
And I started asking around.
They say, oh, no, he's down in the basement.
You know, there's this new thing called a microprocessor,
and he's trying to make it into a personal computer like Alan Kay was doing out at Xerox PARC.
And so I go down to the basement, and he only comes in at night to do this.
So I go down in the basement at night, and sure enough, there's Marvin surrounded by
a bunch of his graduate students, and they have big wire wrap boards, and they're working
on this thing.
But I'm so awed by Marvin Minsky that I don't really feel like I can just go up and introduce
myself.
So I sort of hang out and watch the action,
and there were some circuit diagrams sitting on the table.
And I start looking through them, and I find a mistake in one.
I'm like, oh, this is my entree.
So I go up to Marvin, and I say, look, you know, there's a mistake in this diagram.
Do you remember what the mistake was?
It was an inversion of a signal.
Okay. It was clocking on a leading edge instead of a falling edge.
So it was an inversion of a clock signal.
A good mistake for an 18-year-old to be able to catch, right?
Probably 17.
17, yeah.
Because it was when I first arrived, right?
And so I go and he says, okay, well, fix it.
And I'm like, well, okay, I drew the piece.
He's like, no, no, fix it on the machine.
Fix it in the diagram and fix it on the machine.
So I did that, and then nothing.
So I found another mistake, and I go to him, and he says, no, no, just fix them when you find them.
And so after a while, Marvin Minsky just assumed I worked for him.
And so that was the start of a very long relationship.
That's great.
And so Marvin was your PhD advisor, right?
He was.
Well, I had a couple of PhD advisors.
I had a team of Marvin Minsky, Claude Shannon, and Jerry Sussman.
Yeah, it's like the best PhD advisors ever for the dissertation you were writing.
I mean, it's incredible.
I mean, so for the audience, like Marvin Minsky is the father of AI.
Claude Shannon is like the father of information theory,
like basically the foundation of modern society.
Yes, he invented the bit.
And Jerry Sussman is like one of the most incredible computer scientists
who ever lived.
Like I still like his intro text for computer science.
Oh, it's fantastic.
It's like just a thing of beauty.
Structure and Interpretation of Computer Programs, right?
Yeah, very good.
Yeah.
Just an incredible book.
Yeah, that was a great group of folks.
Well, so, and while you're at MIT studying, this is when you founded Thinking Machines, right?
Yeah, which in those days, that was not a normal thing for a student to found a company.
Still not entirely a normal thing.
I mean, especially a supercomputing company.
Yeah.
Well, it turned into a bigger problem.
I was trying to do it at MIT, and I couldn't hire people because I was a student.
So, why do this at all?
Well, so, it was really for artificial intelligence.
It was kind of a sidetrack.
So, it was very clear that the brain worked much faster than computers did.
And it was very clear that if computers were going to be fast enough, they'd have to have an architecture that was more like the brain.
But in those days, the doctrine of computer science was that if you use more than one processor on a problem, it gets less and less efficient as you get more of them.
It was called Amdahl's Law, if you remember that. And so the idea was, well, maybe you can use four or five,
but you can't use 50 or 60 or 100.
And I knew that that sort of had to be wrong
for artificial intelligence, because our circuits switch
in milliseconds.
They were much slower than transistors, and yet we
could recognize a face in a second.
So I knew that the brain had a parallel architecture.
So I didn't know then what was wrong with Amdahl's law, but I decided that to do AI,
we needed to build parallel computers, very parallel computers, massively parallel computers.
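Amdahl's law, the doctrine Danny ran up against, says the serial fraction of a program caps parallel speedup: with serial fraction s and n processors, S(n) = 1 / (s + (1 − s)/n), which can never exceed 1/s no matter how large n gets. A quick sketch of why 50 or 100 processors looked pointless under that assumption (the 5% serial fraction here is an illustrative number, not from the episode):

```python
# Amdahl's law: S(n) = 1 / (s + (1 - s) / n)
# where s is the fraction of the work that must run serially.

def amdahl_speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even with only 5% serial work, 64,000 processors buy far less than a
# 64,000x speedup -- the hard ceiling is 1/0.05 = 20x.
for n in (4, 64, 64_000):
    print(n, "processors ->", round(amdahl_speedup(0.05, n), 2), "x")
```

The catch, as the massively parallel machines showed, is that s is not fixed: the problems people actually wanted to solve scaled their parallel portion with the machine, so the pessimistic ceiling rarely applied.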
And that was when LSI technology was coming out, and you could make these circuits.
They were NMOS circuits in those days, but later became CMOS circuits.
And so I made, I think, probably the first chips that had multiple processors on a single
chip and sort of had the basic sort of multi-core idea. And that was considered very radical because
how could you use multiple processors on a chip? Didn't I know about Amdahl? Every time
I would present this, somebody would raise their hand and say, excuse me, haven't you ever heard
of Amdahl's law? Right. But like what you were doing — let's forget about the fact that
you were a graduate student when you were doing it, which is like one level of incredible — what you were doing was just sort of provocatively different.
Like the fastest computers in the world at the time were probably the machines that Cray was building.
That's right.
And so they were deeply pipelined, you know, liquid cooled, super fast switching.
You know, they were liquid cooled because they clocked them as fast as you could possibly clock the, you know, whatever the flavor logic they were using at the time.
And, like, they were.
And they were all about, like, making the wires short so that, you know, the single processor could operate very quickly.
And, like, they weren't, like, I forget exactly what the chronology of things were, but, like, they didn't have, I mean, they never had many processors.
Yeah, they had maybe four or eight processors.
And so, yeah, that was what a supercomputer was.
And I was building this thing,
which was actually,
it didn't do floating point, first of all.
I mean, it was much more like what actually is now
like an NVIDIA chip or something like that, except it filled a room.
But it was – and actually, we built two generations.
The first generation was actually literally like an NVIDIA chip in terms of its architecture.
And that was the CM2?
That was the CM2.
CM1 and CM2.
That's right.
And those were very much single instruction operating on a lot of data and so on.
And then later, we made things that were more like cloud.
Yeah. That came later.
But when we did that, the first one had 64,000 processors.
And that was just like a- Radical.
I mean, people would think you were joking when you said that.
It was incredible.
And I wrote an article for Scientific American.
I said, it's interesting — there will be lots of processors,
and it's much better to put them close together to each other than to people
because they talk at higher bandwidth.
Yep.
So we're going to put, like, all the processors.
You know, the whole country will run off some big pile of...
It'll be like a utility.
And Scientific American said...
You predicted the cloud.
They said, look, this is just too implausible.
You can't say that.
It's just too much — and so they said, we'll let you say a single city.
And this was when?
Oh, this was in the early 80s, probably.
Yeah.
So, they made me tone it down to saying a whole city will run off this.
But that's how implausible it seemed to people.
And it wasn't obvious for a while how general purpose it was.
So, of course, some of the first people to use it were people like, you know, Geoffrey
Hinton, who used it, you know, for connectionist things.
And it gave him a few orders of magnitude.
As it turned out, he needed a few more orders of magnitude than that.
And for folks who are listening,
Geoffrey Hinton is more or less the creator of modern deep learning.
Certainly one of the key creators.
That's right.
And he was working on it back in those days,
and pretty much the same algorithms are the ones that have come to work.
And he was compute constrained.
They were just very computationally expensive,
and nowhere in the world was there enough compute to train a deep neural network.
Yeah.
So, indeed, the hypothesis that AI wasn't going to make big inroads until it had much more computing — and that it did need parallel computing — I think that's finally turned out to be true. And, of course, just 64,000 processors in those days, at the clock speeds they were at,
wasn't nearly fast enough.
Right.
Well, and I remember — so, when I was in graduate school — actually, no,
I was an undergraduate. So I was on a National Science Foundation
research experiences for undergraduates,
assistantship at the University of Illinois
at the NCSA.
And when I got there,
they had just installed the biggest CM5 in the world,
the biggest public one.
And that was probably the fastest computer in the world at the moment it was installed.
It was absolutely the fastest computer in the world.
And I remember seeing this thing for the first time.
And not only was it the fastest computer, it was like this thing of beauty, like this giant sort of 2001: A Space Odyssey, you know, black
monoliths with this matrix of red blinking LEDs.
It was a fantastically beautiful machine.
Well, thank you.
It's funny.
I just got a picture.
Somebody just sent me a picture.
The Museum of Modern Art just opened up, and at the entranceway,
they have a connection machine with the lights flashing.
That's awesome.
The Museum of Modern Art.
And then, you know, like the funny thing, I mean, obviously it wasn't a real connection
machine, but there was one in Jurassic Park.
It was the computer.
Oh, yeah.
That was a lot of fun for us.
That's right.
You do see it in background scenes, but they actually did buy the real shell of one.
And so for like five or six years there, your company made the fastest computers in the world.
We did.
And, yeah, there was a list.
And actually, you had to go quite far down the list before it wasn't one of our machines.
Which is, again, incredible — you started this company
when you were
a student at MIT, and it
went on to have this lasting mark
on the world. And the thing that
you and I have chatted about a bunch of times
is we're now building
a new flavor of
supercomputers
to train AI models like these deep neural networks.
And the architecture of the machines that we're building right now is more or less what you built 30 years ago.
It's a lot faster.
A lot faster.
But, yeah, it's fundamentally the same architecture.
Yeah, you had this idea that sort of informed, like, three decades of artificial intelligence.
And you pretty much programmed them, you know, the way that we program things like MapReduce and so on.
That was sort of built into the hardware, actually, in those days.
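The data-parallel style Danny is pointing at — the same operation applied to every element at once, then the results combined — survives today as the map/reduce pattern. A minimal serial simulation in Python, showing the pattern only, not the Connection Machine's actual programming interface:

```python
# Data-parallel style: one operation applied across all elements
# ("map"), then results combined pairwise ("reduce"). On a Connection
# Machine each element would live on its own processor; here we just
# simulate the pattern serially.
from functools import reduce

data = list(range(64))                  # imagine one value per processor

squared = list(map(lambda x: x * x, data))       # same op, every element
total = reduce(lambda a, b: a + b, squared, 0)   # combine into one value

print(total)  # sum of squares 0^2 + 1^2 + ... + 63^2
```

The reason this shape mattered for the hardware is that the reduce step can be done in a tree — pairwise combines in parallel — so combining 64,000 partial results takes on the order of log2(64,000) ≈ 16 steps, not 64,000.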
Yeah, and, like, that's, look, that's one of the slightly nicer things now versus then.
The frameworks that you code in are so much more powerful.
So you end up writing a thousand lines of Python code.
Oh, yeah, that's amazing.
And that's the advantage of sort of starting with the advantage of all the work that's been done,
compilers, for example, and things like that.
I mean, when we built that machine, it was literally,
you would take a piece of graph paper and start drawing the shape of the transistors on the chip.
Yeah.
And then you'd go all the way, and then you'd have to write an assembler.
You know, every time I'd make a new processor, I'd have to write a new assembler for it.
And then you'd, you know.
And you had to build the compilers and the operating system.
Yeah, it was a Herculean effort.
And you had, like, really, really smart people working on this.
Well, actually, that's probably the biggest legacy of Thinking Machines — because that style of massively parallel architecture really didn't become mainstream until, you know, really a couple of decades later, with the cloud and all.
Two things.
One was the cloud was the sort of multiple instruction route,
and the single instruction route was with the graphics processors.
But the real legacy, I think, was the people.
And just because you had the fastest machines in the world,
it attracted really bright people who had interesting problems.
So actually, one of my favorite examples was I went out to Caltech and asked Richard
Feynman, the physicist, if he had any students that would be interested in coming to the
company.
This was when I was very first starting it, to spend the summer there, summer interns, basically.
And he said, oh, I've heard about your kooky architecture.
I don't have any people.
Caltech students have a lot more sense than that.
There's nobody I know that would be crazy enough to come out.
He says, actually, there is one guy, but he doesn't know anything about computers.
Maybe he'd be dumb enough to do it.
And he's a hard worker.
And actually, I think probably he's your best bet.
And I said, okay, well, I'll hire him.
What's his name?
He said, Richard Feynman.
And so Dick Feynman was like my first summer hire.
Yeah, so look, you have to admit, you had an unusual startup.
So you had, like, Marvin worked for you at some point.
Yes, Marvin did.
So you had Father of AI, Turing Award winner.
You had a Nobel Prize winner.
Well, he was a Nobel Prize winner.
But then we also had Sidney Brenner, who is the famous geneticist and runs the Broad Institute.
He came — he didn't know anything about computers — but he was like, you know, we're starting to sequence the genome.
And this is the only computer that's sort of big enough to search for the patterns in it and so on.
And so he came before really any of this was ever heard of in biology. So, it was a really
interesting set of people that came out of that and went on to do really interesting things across
the industry. So, at some point, you stopped doing thinking machines and moved on to something else. Talk a little bit about what happened there.
So, first of all, I didn't know anything about making a business.
So I made a lot of mistakes in how I set up the business.
And at some point, we started taking money away from Cray Computer,
which had been the most profitable Fortune 500 company when we started
and stopped being so profitable as we started selling these parallel machines.
And so they got some laws passed that you couldn't export anything more powerful than a Cray.
And also, any supercomputer the DoD bought had to be opcode compatible with a Cray.
Wow.
So we basically all got totally blindsided by that.
And because we had sort of managed the company for growth, growth, growth, growth, growth, it was just on an exponential curve. When that didn't happen, we ran into a cash crunch, and so we had to sort of do, you know,
Chapter 11 distress sale.
You know, so it went from sort of everything looking great and, you know, very, very quickly
went downhill.
I learned a lot. I mean, if I knew then what I know now, I'd never let a business get in that
position.
Right.
But I wasn't really thinking about the business.
But it had a happy ending, which was that the whole hardware side of the company got bought by Sun Microsystems — everybody got a share of Sun Microsystems stock for their share of Thinking Machines stock.
So they all did very well.
Oh, that's great.
And so it did sort of help make the web happen.
And some of those people went on to have very senior roles at Sun Microsystems and did all sorts of cool things there.
Yeah.
And another part of it went to Oracle and did very well there.
So there's still a really nice, I'd say, alum team.
Yep.
And also just among the customers, too.
I mean, really, if you had taken the people that were the customers — the graduate students that were working on it — people like Sergey Brin were off programming connection machines.
If you just had a portfolio that was investing in either the people that were alumni of Thinking Machines or the people that were customers of Thinking Machines.
Yeah, that would have been a great portfolio.
It would have been a great portfolio, yeah.
So you went from Thinking Machines to Disney?
Well, so, yeah, so that was a very sad moment for me
because that was, you know, unexpected.
Things seemed to be going great.
Yep.
And so I felt I let everybody down.
I felt really terrible about it.
And I just said, and my kids had just been born.
My daughter was actually born on the day that Thinking Machines filed Chapter 11.
And so that was tough.
So I was just like, you know, I just want to do something fun for a while that I can relate to my kids on.
And I had a friend, Bran Ferren, at Disney.
He said, why don't you come on out to Disney?
And so I had always wanted to be an Imagineer since I was a kid.
Yep.
And I got kind of my second education there.
Yep.
So I thought it was just going to be a lark.
But actually, I learned a huge amount there.
Yeah.
I've sort of seen some of the work that you all do now and some of your team,
and that time at Disney is really important to some of the stuff that you're doing now, right?
Yeah, it definitely is.
I mean, one of the things I learned — I mean, before that, I never had a job.
So that was the first thing I learned,
is what it looks like being inside a big company.
Right.
And I remember the first time I got a paycheck from Disney, and it had, like, benefits.
And I was like, oh, that's why they're called benefits,
because always before those were things I had to pay, right?
But you sort of saw what big companies were really good at.
Right.
And it was so easy for them to do things that were just impossible for a small company to do.
But it was also very hard for them to do certain things that were easy for a small company to do.
So, that made me sort of appreciate that there was a need to kind of do interdisciplinary things that companies, even incredibly creative companies like Disney,
where, you know, it was founded on creativity, that didn't mean they were really good at everything.
It meant they were really great at, you know, building a theme park, making a movie.
And so what are some of the things that Imagineering did while you were there?
Well, a bunch of things.
Well, for example, my favorite project, because I got to see it from clean slate to opening,
was Animal Kingdom.
And that started with, you know, what kind of park could we have?
And, you know, sitting around with a blank sheet of paper, literally, you know, here's a plot
of land, what could we do with it?
And all the crazy ideas about what to do with it. And Disney has a great brainstorming process called a charrette for
doing that. And then going all the way to, you know, opening day, where I brought my kids into the park.
Wow.
And the park, I don't know if you've ever visited, but in all the Disney parks,
one of the design principles is there's always something in the center that's kind of the dramatic orienting thing,
like the castle.
Right.
And there's a lot of storytelling reasons for that.
But there's something special.
And in Animal Kingdom, it's this amazing tree with animals growing in the bark and things
like that.
And it's just an incredible thing. And I go in there with my five-year-old kids,
and we look and walk in and look at the tree.
And they look up at me and they say,
Daddy, did you make that or did God?
I was like, okay, this is like the peak dad experience here.
That is pretty good.
So, and you were working with computer scientists, engineers, artists?
Well, there's a lot of different kinds of people.
And what I thought of as an interdisciplinary team before I went to Disney, my idea of interdisciplinary got broadened out. One thing that Disney is really great about, and
actually Hollywood is really great about, is they have a kind of a different way of
doing big projects than is usual in tech, which is the studio model.
So let's say Disney makes a movie.
Actually, there are some Disney employees that make the movie, but
mostly it is a set of people that, you know, Disney knows a great director,
knows a great actor, knows a great screenwriter.
And so they pull those things together.
And a lot of people who are used to working with each other have roles,
and so they work with each other on multiple projects in different combinations.
And I realized that was really kind of a very efficient way to do innovation.
And also, having seen the sort of downside, the difficulty of
making a small company work, where, you know, you have to get a lot of things exactly right
for it to work, some of which have nothing to do with the product or the customers or
things like that.
And so, it was sort of nice to sort of see a different way of doing
things and see how quick and efficient and how much energy it could get out of people and so on.
And I thought, wouldn't it be great to do technology projects like that, where you had a core of people
that kind of knew how to do projects together, that were kind of the producer, director types. They have the concept. And then you had a big pool of a network of people
that were really good at doing things
that you could bring on when you needed them.
And that was a really good way to build technology systems.
And so that's what I ended up leaving Disney with,
with Bran Ferren, the guy that I went there with, and we started a company basically to do that kind of project, where we would quickly build a system on that kind of studio model.
Right.
And that was Applied Minds?
That was Applied Minds.
And then the company you're running now is called Applied Invention.
Yeah, which sort of evolved from Applied Minds.
And Applied Minds ended up doing two kinds of things, one of which was kind of commercial things that turned into commercial products.
And the other thing was it started doing things for the government, aerospace companies, things like that.
Right.
And those had kind of a different rhythm to it.
And over time, I was more interested in the kind of commercial things.
Bran was more interested in the more aerospace kinds of projects, those kinds of projects.
So, we ended up sort of doing two different kinds of things, but kind of handicapping each other a little bit because the processes were different for those things.
So, eventually, I went to Bran and said, look, Bran, let's either stop the government work, because it has all these regulations and things like that, or let's split the company.
Right.
And Bran was like, I don't want to stop the government work. I'm loving it. And so,
we split it up, and we're still good friends, but I took part of the company off and just concentrated on
the commercial work. And it's really interesting commercial work.
It is so fun. It's all looking for things where somebody has an idea of something that's going to change
the world somehow.
They don't have all the elements to do it, but they have some vision.
So that's a commercial partner.
And then we go in and we team up with them almost like we're their skunk works, as if
we work for them. Right.
And we work with them to build up that new product or line of business or something like that.
And your team's still fairly multidisciplinary. You have physicists.
You have chemists.
You have mechanical engineers.
You have computer scientists.
You have electrical engineers.
You have firmware people.
Yeah, people are like, well, I don't get it.
How can you have a team that builds a satellite and a blood test?
Yeah.
I mean, it's –
And like robots that explode bombs for police forces and like the 10,000-year clock, which we're going to talk about in a minute.
It's incredible.
Yeah.
And part of what makes it work is that network of people out there who have deep knowledge in particular things.
Like, you know, we needed a fish psychologist.
We didn't have a fish psychologist on staff, but we knew one, right?
I didn't even know such a thing existed before I knew you.
Fish psychologist.
But there is a kind of expertise that's the systems building.
And at the core of it, of course, everything is computers, too.
And so, at the core of everything we do, there's some sort of big data system.
So the parallel processing theme is still kind of there.
Of course, machine learning is now a tool that you use in almost everything.
So AI is still a part of the thread in it, although we usually don't call it AI.
We just use machine learning and things like that.
Yeah, and I'm sort of interested in that.
I mean, machine learning is obviously the more technically accurate label for what most people doing AI are doing right now.
It's a very particular thing. It's like you have large volumes of data and, like, you're building some sort of, you know, quasi-statistical model to, like, extract
patterns from the data that let you do classification and inference and whatnot.
And they're doing it because it's working really well now with the powerful machines that we have.
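The "quasi-statistical model" Kevin describes can be made concrete with a tiny sketch: a toy nearest-centroid classifier that "learns" a pattern per class by averaging its examples, then classifies new inputs by comparing against those patterns. Everything here (the data, the function names) is purely illustrative, not anything built or discussed by the guests:

```python
# A toy illustration of "extracting patterns from data" for classification:
# a nearest-centroid classifier. All names and data here are hypothetical.
from math import dist  # Euclidean distance, Python 3.8+

def fit_centroids(samples, labels):
    """Learn one 'pattern' per class: the mean of that class's samples."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(v / counts[y] for v in acc) for y, acc in sums.items()}

def classify(centroids, x):
    """Inference: assign x to the class whose learned pattern is nearest."""
    return min(centroids, key=lambda label: dist(centroids[label], x))

# Two toy clusters: small coordinates labeled "a", large ones labeled "b".
X = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.0)]
y = ["a", "a", "b", "b"]
model = fit_centroids(X, y)
print(classify(model, (0.1, 0.2)))  # a point near the first cluster -> "a"
```

The "learning" here is just averaging, which is the point of the description above: the model is a statistical summary of the data, and inference is comparing new inputs against that summary.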
Yeah. But you also have been doing this long enough where you actually know what an AI winter is.
And, you know, so, like, we've been through a few hype cycles.
Like, what's your perspective on that right now?
So, I think intelligence is a many-splendored thing.
There's lots of components to it.
There's lots of aspects to it.
And I think it's happened before that we find some aspect of it
that's a building block of it, and it's important and useful.
And that becomes AI because we suddenly make progress in it.
So right now it is machine learning pattern recognition.
And it's an incredibly powerful building block,
and there's still lots of great things to be done with it.
And we use it in everything, so I'm a big fan of it.
But there's still lots of things that a human mind does
that don't fit into that paradigm.
So what happens in the past is that when you make a big explosion,
everybody starts talking about AIs taking over the world.
And this happened with machine vision.
It happened with speech recognition.
It happened with planning, when, you know, AIs could beat humans at chess.
And then what happens is people say, okay, well, that thing, now we understand what that thing is, but that's not AI.
And so, you just start using that thing as a tool. And so I suspect that's what will happen with this current thing is people will realize
there's more to general artificial intelligence than these multilayer neural networks.
And of course, you know, there are many people who do realize that.
And so we'll, you know, run into the next set of hard problems while people still continue
to apply these multi-level neural
networks to very important problems. I mean, I don't want to trivialize how much good they're
going to do. But in a certain sense, like the AI boom-bust cycle is just like any other boom-bust
cycle where both extreme ends of the cycle are not helpful. Like the overhype, like where you get reckless
with investment and you have a whole bunch
of people who really misunderstand what's going on
and are sort of making these leaps
of faith, basically, about what is coming next.
Both positive and negative. Correct.
Both positive and negative, which is, I think, really important.
Like, oh, you know, like AI is going to be this apocalyptically bad thing, or it's going
to be this, like, you know, sort of unrestrained utopia.
Like, both of those extremes are, like, bad things to infer from where we're at.
But in the bust cycle, you know, that heats up and then, you know, the bubble pops
and, you know, everyone's in sort of the doldrums of the aftermath
of this whole thing.
And like, that's also not helpful.
Right.
Because there's a bunch of people.
Yeah, it gets underfunded and it's very hard to get good ideas to get any traction and
so on.
So, yeah, I sort of feel like we'd be maybe five or ten years ahead of where we are right now
if we had just been able to mediate some of the boom and bust over the past four decades.
Yeah, I think that's probably right.
And it's probably true with technologies in general.
Part of it happens, too, with a lot of what determines the usefulness of technology is
people and people's ability to adapt to it and so on.
So I think part of what causes that cycle is technology grows very quickly and then sort of gets ahead of people's ability to use it and society's ability to adapt to it.
And then it sort of feels like it's not working and it feels like it's bad.
And then you sort of have a reaction to it like we're going through with social media right now.
Right.
And social media was never by itself the thing that was going to advance humanity,
nor is it the thing that's going to destroy humanity.
It's actually a tool that we're going to learn to use with time, like we learn to use fire.
And so, it takes us a while to work those things out.
And sometimes it takes longer to work out the societal response to something than it does to actually develop the technology.
I mean, the thing that I tell technology folks about AI all the time is AI is not a product.
It is, like, it's a feature.
It's a technique.
Like, machine learning is, I mean, just exactly what you said.
It's such a useful technique right now that every maker, like, whether you're a computer scientist or another flavor of engineer or scientist or, like, someone who's trying to create something with technology, like, it ought to be a thing that's in your bag of tricks that you can use to help you solve your problem.
And it's a very, very powerful tool, but it's not magic.
Yeah, that's right.
Any more than a hash table is magic.
Yeah, that's right.
Well, and a hash table actually is kind of magic.
Yeah, it is, in a way.
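For readers who missed the joke: a hash table feels like magic because a lookup doesn't scan the data; the key's hash jumps straight to a bucket. A minimal separate-chaining sketch (a toy, not any production implementation):

```python
# Toy separate-chaining hash table, to show why lookups feel like "magic":
# the key's hash picks a bucket directly, so we never scan the whole table.
class ToyHashTable:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # hash() maps the key to an integer; modulo picks the bucket.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:              # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

t = ToyHashTable()
t.put("clock", 10_000)
t.put("clock", 10_001)   # overwrite the earlier value
print(t.get("clock"))    # 10001
```

With enough buckets, each chain stays short, so `get` and `put` take roughly constant time on average, which is exactly the "kind of magic" being joked about.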
But yeah, and I think, you know, you can say, so we've seen waves of that.
This is the one we're going through now.
But, yeah, everybody should learn about it.
Everybody should learn how to use it.
And I'm not against worrying about the implications of things.
I mean, looking at things like, you know, what happens when you have the ability to recognize everybody's faces?
How does that affect, you know, government's ability to control its citizens?
Those are things we really should be worrying about.
And those things will take longer than developing the face recognition.
Yeah.
I mean, the thing that I think we need to be doing more of is we need to be having more robust public debate about the pros and cons of these things. Like, I'm personally uncomfortable in a world where the technologists make policy by virtue of the things that we're building. You know,
policy is better made by policymakers with the input of the public in a democracy.
And you have to have everybody sort of playing their part and contributing.
It's also very hard to do.
It's hard to guess what the issues are going to be. You know, certainly the first time I saw Twitter, I didn't think,
oh, well, let's think about the effect this is going to have on the political system.
I mean, it just never occurred to me.
It never occurred to me either.
And, you know, now I can look back and say, well, actually, it's had a very big effect.
And, you know, I think we really need to think about that.
Yeah.
We need to.
But it's hard to see.
So, even if there had been, if somebody had asked that question, which I didn't, but if somebody had, it's not clear to me that they would have been able to think through all the implications and the kinds of emergent behavior that happen.
And ultimately, I do believe that these technologies are going to create a kind of emergent behavior
in society that's good.
Yes.
Because I think that's a trend in evolution.
It is.
Things cooperate, and they do better things as they cooperate.
Well, and like the overwhelming trend with human use of technology for the past few hundred thousand years has been positive.
Like, you know, in a certain sense, like, you could assert, like, I think fairly trivially that, like, human, like, we couldn't even support the population of human beings that we have right now without just the technology that we've developed over the past 50 years.
It's clear things are getting better.
Yeah.
And, you know, and I would say not just getting physically better, but also morally better.
Yeah.
I mean, you know, the world's a nicer place than it was when I was a kid.
Yeah.
And, you know, there are a lot of things that were just accepted that we would no longer
accept right now.
Right.
About how women were treated,
about how racial segregation was the norm in the South when I lived there.
Yeah.
And my parents, I remember my parents told me,
well, this is wrong, but there's nothing you can do about it.
Well, it turns out, you know, there was things to do about it.
Right.
And, you know, it's gotten better.
And some of the tools, like these social media tools
that are causing problems right now, have actually in places been helpful for remedying some of these issues of injustice.
Yeah.
And they've helped and they've also created injustices sometimes.
You know, we've had people literally be killed because of runaway memes that have happened on social media and things like that.
So, we'll learn how to use them.
But I think you're right.
The general trend for technology is not that it makes everything better every time, but
it's certainly more steps forward than it is steps backwards.
And that has been the trend, really, probably since fire, right?
Right.
That is exactly what I was thinking when I said 300,000 years.
Yeah.
We probably had fire longer
than that. But like, you know, it's fire. It's agriculture. But I'm sure a few people got burned
on that one, right? Of course. Early on. Yeah, indeed. And it's not that it is monotonically
positive. Like not every step is positive, but like. Well, it's positive because we willfully
and thoughtfully make it into the positive.
And I think that that's the process that you're talking about, is you can't just say, well,
we'll make the technology and it will automatically be positive.
Right.
It's positive because we discuss it, we think about it, we think about how to use it.
And fortunately, there's more people that are trying to make the world better than trying
to make the world worse.
Yeah.
So, one of the more interesting multidisciplinary things that you've done, like that I'm just sort of personally fascinated by, is this 10,000-year clock.
Can you tell us a little bit about that project? So, that actually has its genesis back when I was making the fastest machines in the world
and all my customers were coming to me and saying, can you make it faster, faster?
You know, instead of nanoseconds, can you go to femtoseconds?
And I was like, I'm so tired of making everything faster, faster, faster.
I really want to make something slower.
I mean, it was sort of, in my mind, it was kind of a joke.
But then I heard this story about New College, Oxford, which is called New College because it's only 500 years old or something.
But they were replacing the oak beams in the New College common room.
And you couldn't just go down to the lumber yard and buy a 50-foot oak beam by the time they did it; this was in the 1950s.
Right, because there weren't trees big enough anymore.
That's right.
But Oxford had some forests, so they went to the Oxford forester and said,
do you have any oak trees that we could harvest?
And the forester said, yeah, we have the ones that were planted to replace the beams in New College.
Wow.
And when I heard that story.
Wow.
That's planning.
Yeah.
Well, it's also, I realized how small my life had become.
And then I started thinking about it, and this was like in the 1990s when I was thinking
this.
And I realized when I was a kid growing up in the 70s, we were thinking about the year
2000.
And here it was, the 1990s, and the future was still like the year 2000.
So it was as if the future had been shrinking by one year per year for my whole life.
And I wanted to do something that stretched out my imagination more.
I'd always loved reading science fiction, things like that.
And I wanted to be involved in a project that let me put my mind forward into the future more.
And so I started thinking, because I'm an engineer, about building something.
And so I started thinking of building a clock because I wanted to be built out of technology that I knew would last
and people could maintain over a long period of time and so on.
And when I started talking to friends about it, I found that it was almost like a Rorschach test for them.
If I talked to a musician friend like Brian Eno, he would be like, whoa, you know, what kinds
of sounds is it going to make?
Or I talked to a lawyer friend, like, well, how do you write the contract for the land?
Right.
You know, and so everybody would start thinking about whatever it was they thought about, but on
a different time scale.
That's super interesting.
Yeah.
And so I was getting excited about that.
And then Stuart Brand, who is this totally remarkable hero of mine, said, you know, this
is making everybody think about things differently.
We should start a foundation to think about things like that.
And so that was the origin of the Long Now Foundation, which actually Brian Eno gave
the name to.
And so I've been working on building that clock ever since then.
And we've built several versions of it.
We started, you know, the first version is in the London Science Museum.
The next version you can see actually in San Francisco, that orrery thing at the Long Now headquarters.
And then now we're building the real one that will last for 10,000 years in a mountain in Texas.
Yeah, and so describe this thing a little bit.
So, like, you have a mountain in Texas.
You have drilled a shaft.
What's the diameter of the shaft?
So, it's a 12-foot diameter shaft.
It's about 500 feet deep.
The reason for all of this is you can't build a building that lasts 10,000 years.
So, you have to put it in the middle of a mountain.
And, like, not just any mountain.
Like, you had to select the mountain to be, like, geologically stable.
That's right.
And so, there were years of exploring around, searching for a mountain.
And fortunately, I found one that was on property that was owned by Jeff Bezos, who was a big supporter of the foundation.
And so, he was like, okay, we've got to do this.
And so, he's the primary funder of constructing the clock in the mountain.
But even to make a 500-foot shaft is not as easy as you think.
What you have to do is you actually have to dig to the bottom of the shaft with dynamite.
And then you drill a hole, a little small hole with an oil well kind of hole.
And then you put this giant reamer on it and pull it up through the mountain. Yeah, and none of this,
virtually none of this is off-the-shelf components.
Some of that is off-the-shelf. What's not off-the-shelf is if you want a spiral staircase
cut into the rock around the shaft. Then you have to build a custom robot to do that.
Which you did. Yeah, fortunately I did conveniently have this
company around that was good at doing stuff like that.
So, we built a robot that climbed up the shaft with a diamond saw, cutting a spiral staircase into it.
And, I mean, if you look on the Long Now Foundation's website, you can see movies of this. So I spent a couple of years building the spiral staircase and then
we put into it gears
including giant bells
and the chimes that will
ring a different
sequence of bells every day
for 10,000 years. And just the mechanical
engineering on this thing is absolutely incredible
because you had
to cycle test all of these
things to make sure that they were going to be able to, like, do their job over 10,000 years.
Yeah, actually, that was a funny thing.
What we learned was that actually most of the things that we tested worked fine the first time, but we had to repair the machines that tested them about 10 times.
It's actually very hard to make a machine that tests 10,000 years worth of cycles without
the machine breaking.
Right.
So, yeah, pretty much everything that we've had that moves, we've tested it for
10,000 years worth of motion.
And there's sort of some incredible things in there.
Like, you have a mechanism in there that involves a gigantic quartz lens.
Yes, that's the photocell.
How do you make a mechanical photocell?
And the answer is, first of all, you build a meter-wide quartz lens, and it shines light onto a box, which, when it heats up, expands.
And that expansion is what triggers it.
And the reason you need that is because you need something to adjust the clock to keep it on time.
If nobody visits it for a thousand years or something, the way that it adjusts itself is during the summer solstice, the light shines down the shaft, focuses with that quartz
lens, causes the thing to expand, and then that adjusts the clock.
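The mechanism Danny describes rests on linear thermal expansion, ΔL = α · L · ΔT. As a rough back-of-envelope sketch (every number below is an illustrative assumption, not the clock's actual materials or dimensions):

```python
# Back-of-envelope: how much a metal element grows when focused sunlight warms it.
# All figures are illustrative assumptions, not the clock's real parameters.
alpha = 23e-6   # coefficient of linear expansion for aluminum, per deg C (approx.)
length_m = 1.0  # assumed length of the expanding element
delta_t = 40.0  # assumed temperature rise from the focused solstice light, deg C

delta_l = alpha * length_m * delta_t
print(f"expansion: {delta_l * 1000:.2f} mm")  # under a millimeter, but mechanically usable
```

The takeaway is that even a modest temperature swing produces a small but repeatable displacement, which is enough to trip a mechanical trigger once a year.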
Yeah, and to me, it sounds almost like something from Indiana Jones or a Tomb Raider movie,
that some future civilization is going to discover this, and it's going to be like an archaeology project to figure out
why this thing is there doing what it's doing.
Well, that is also a lot of what I was thinking about when designing it: what it
would be like to find this after a thousand years of it being lost or undiscovered.
And actually the fun thing was that I realized that you'd really like to know how long it had been undiscovered.
So, one of the interesting things about it is that when you go up to the clock, it initially reads the time the last person was there.
And then you wind it.
So, it knows what time it is, but it doesn't tell you until you wind it. Oh. And then when you wind it, it moves the dates forward, and it moves the astronomical display of the sun and the moons and the stars forward until it gets to the current time, the current date.
And then everything stops, and now you've reset it.
So when the next person comes, they'll see when you wound it.
So you're even thinking about the gamification of, like, how you get the clock wound.
Well, it's definitely, or the storyfication.
You know, that's one of the things I really learned at Disney is how important, well, first of all, the hardest part of building a 10,000-year clock is that people have to care about it.
Because if they don't care about it, then they'll just salvage it for metal or something like that.
So that's the hardest design problem.
And I think I learned things at Disney that helped me with that design problem.
In the same way, I kind of learned stuff at MIT that helped me with the material science problems and things like that.
Yeah, but I am just sort of fascinated by the breadth of your curiosity and expertise,
because for many people, you would go get, like, a PhD in a thing and that would be
your area of expertise, and, you know, the way that you achieve
impact in the world is, like, getting deeper and more focused on that thing.
And, like, you know, we have these crazy broad conversations about things.
Well, I was incredibly fortunate in the mentors that I had.
I mean, if you just think of people like Marvin Minsky, Claude Shannon, Richard Feynman. Those were all people like that.
They were people that had curiosity.
And so, you know, I was just lucky to have had them as an example of sort of a different way of approaching the world.
And so, it's just, you know, I have a lot of fun every day, I have to say.
So, we're almost…
But let's face it.
We're both really lucky to be alive at this time.
Yes.
Because probably they would have burned us at the stake or something during a lot of times in history.
Yeah.
Yes.
They probably would.
It's one of the things that I am very grateful for about modern society is society is not just tolerant of curious people, but actually encourages it.
And I hope we never lose that.
Yeah, and that's not a normal thing in history, actually.
Yeah, it is not.
And you can go read plenty about how curious people were persecuted in the past.
So, hopefully not something we lose anytime soon. So, we're almost out of time. And like I always ask people,
you know, what they do outside of work for fun. It may be a peculiar question to ask you,
because I think you've structured your life brilliantly where you have fun in all of the work that you do.
Yeah.
But what do you do outside of that?
Well, let's see.
I mean, really, I have to say that there is kind of a blend of fun and work for me.
But I do some things that I have no excuse for doing at all at work.
Like I make perfume.
Oh, I didn't know that.
And I do that really just because it uses a different part of my brain than everything else I do.
Because I tend to be an overthinker, very logical in my thinking.
You can't be logical about perfume.
You can't even really give names to the.
Right.
So it's sort of a meditative thing for me because it turns off what neurophysiologists would call my default mode network.
Right.
So my default mode is very analytical, but in that you really just have to be experiential.
So I look for excuses to do that, hanging out in nature, those sorts of things, to complement it.
Yeah, that's super cool.
Well, this was great.
Every time I talk to you, I learn something new.
Thank you so much for being on the podcast today.
Well, it's a pleasure being interviewed by a like mind.
Awesome.
So that was Kevin Scott chatting with Danny Hillis.
Kevin, that conversation was incredible.
I would love to just, like, I would love to watch a Netflix show about Danny and about Danny's brain, you know?
Yes, for sure.
And, like, I always have a great time chatting with Danny.
I've known him for a couple of years now.
And, you know, I think the thing that always strikes me about Danny is just this mindset that he has, which I think is a big part of it: a problem is solvable until, like, he's got some pretty serious proof that it isn't, which lets him just dive into things and persist.
Like, you can sort of see it even in this tic-tac-toe machine. You totally can see how a kid who's in India,
who is taking books off the shelf and figuring out how to build this machine, goes on to be an adult who's working at Disney and running the Imagineering stuff and is, you know,
building all these machines and making all these inventions. Like, it completely makes sense.
Yeah. No, I mean, the arc totally makes sense. But, like, you really do have to appreciate, like, what an unusual set of circumstances that was.
So, this is Calcutta, like, I think in the 1960s.
So, before the personal computing revolution had even begun to start.
And, you know, he's just sort of figuring this out on his own without mentors.
But, like, you know, I think in the conversation when we pressed him, like, he, you know, he has this sort of humility about what he did.
And so, granted, like, he's an incredibly smart person.
But, like, the thing that's really important, I think, for all of us to understand is that he had a whole bunch of failed attempts at building this
thing before he got the successful thing. And like that mindset and ability to like not only jump
into a problem in the first place, but then to persist through even when you fail a bunch of
times at trying to get to the solution, like that is an incredibly important part about being
a really effective creator or even entrepreneur. Without a doubt, because realistically,
you know, you're going to fail. There are going to be things that don't work out. And
I think it's really inspiring to see someone who's been so successful and has
done all these amazing things and is so smart admit, oh, I've had to, you know, try a bunch
of times. I've had to figure things out.
It hasn't just been super easy because sometimes the myth is just, oh, you know,
it was just, you know, I just snapped my fingers and it was done.
And to know that it took persistence and creativity to think about how to solve problems differently
and to try and try and try again is really inspiring.
Yeah, and I think this is one of the things that people who aren't in the day-to-day grind of creating technology,
engineering new things, doing science, sometimes don't get to see.
So, like, what you see is, like, the end thing that pops out after we've been successful. And the reality is, even for like incredibly brilliant people,
you have more failures than you do successes on your way to that success.
There's this great episode of The Simpsons where Homer is obsessed with Thomas Edison.
Yeah.
And he's trying to come up with his own inventions. And when he goes to the Thomas
Edison Museum, he finds a list of Thomas Edison comparing himself to Leonardo da Vinci,
and he suddenly feels better about himself that he didn't, you know, achieve what Edison had.
I think that's a good thing to kind of put into perspective that there's a lot of failures and
there's a lot of attempts that, as you said, you know, we don't always see when we see the
final product, but is part of the process of creating things.
Yes, indeed. And so, like, for everyone listening to the podcast,
like, that's just more encouragement for you all
to, like, go out and, like, try and try and try again
because, like, ultimately that's the only way
that anyone ever gets to, like, something new
and interesting and successful.
Absolutely.
All right, well, we are about out of time.
As always, we would love to hear from you at BehindTheTech at Microsoft.com.
So tell us what's on your mind.
Maybe tell us about some of the various tech innovations that you're excited about.
Maybe tell us about some of the ways that you've tried to do something.
Maybe you failed.
Maybe you've succeeded.
Tell us about your tech heroes, and maybe we will invite them on the show.
And of course, be sure to tell your friends and colleagues about the show. Thanks for listening.
See you next time.