CyberWire Daily - John Maeda author of How to Speak Machine [Special Editions]
Episode Date: November 29, 2019. In this CyberWire special edition, a conversation with John Maeda. He's a graphic designer, visual artist, and computer scientist, and former president of the Rhode Island School of Design and founder of the SIMPLICITY Consortium at the MIT Media Lab. His newly released book is How to Speak Machine: Computational Thinking for the Rest of Us.
Transcript
You're listening to the CyberWire Network, powered by N2K.
Calling all sellers.
Salesforce is hiring account executives to join us on the cutting edge of technology.
Here, innovation isn't a buzzword.
It's a way of life.
You'll be solving customer challenges faster with agents, winning with purpose,
and showing the world what AI was meant to be. Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
Hello, everyone. I'm Dave Bittner. In this CyberWire special edition, my conversation with John Maeda.
He's a graphic designer, visual artist, and computer scientist,
and former president of the Rhode Island School of Design
and founder of the Simplicity Consortium at the MIT Media Lab.
His newly released book is How to Speak Machine, Computational Thinking for the Rest of Us.
Well, I realized that when I talked about design as classical design,
kind of a design like the way RISD does design,
design thinking, which is about basically post-it notes and Sharpies,
collaboration type of design,
and computational design, which is about anything involving Moore's
law and design, I realized that people would ask me, what is computation?
And so I started the book off as a book about design, and I actually ended up making it a book about computation.
Well, let's explore that.
Let me ask you the basic question.
By your estimation, what is computation?
Computation is this material that anyone in cybersecurity knows intimately.
It's the cyberspace world that William Gibson described in Neuromancer.
It's the upside-down world in Netflix's Stranger Things. It's this world where a lot of things are happening that the average person cannot imagine or see with their own eyes.
And it's a place where computation powers the cloud.
It's everything we cannot see that's running everything today.
Is it real?
Absolutely real.
It's real in the way you can feel it through writing code, talking to APIs.
It's out there. You just can't see it in one place because it's pervasive.
I have to say, I really enjoyed reading the book. There are many things that you describe in the
book that paralleled my own personal experience coming up through technology and the early days of computing and 8-bit computers and all that sort of stuff.
One thing that struck me was you pointed out that it's not just the functionality of the
code that we find satisfying, that it's the elegance of the code as well.
Well, like, any person who creates software has an appreciation for poetry.
And whether that's the spare use of language
or the clever use of language
or the unexpected combinations of different parts of language,
it's like, wow, that's a beautiful idea
expressed as a text.
And I think that code has the capability to be beautiful in that same way.
It's like, well, it's just a small bit of code, and it does so much,
and it's expressed in a way that makes complete sense.
And how did you do that?
So there's that kind of aesthetics of code out there.
One of the points that you make in the book is that computers run in a loop and they never get tired.
One of the things that struck me about that was I feel like sometimes our brains get caught in a loop.
And many times that's in the middle of the night.
You know, you wake up and it can be frustrating and even maddening to try to break out of that loop.
It just struck me as an interesting contrast between humanity and the computational power of the machines we interact with.
Wow. Never thought about that.
I mean, yes, it's that mistake you made.
And like, oh, I got to remember it.
What happened? I can't believe I did that.
I did that? What? And you
play it over and over. And in that sense, it seems like the nature of our brains is to keep reminding
ourselves of that dumb mistake you made so you might not make it again. But I think with a
computer's loops, it doesn't have a conscience or a consciousness. So it's just like, oh,
I'll just start doing this.
I'm going to do it.
I keep doing it.
That's all right.
Yeah, you describe in the book when you were a youngster, your first experiences with programming computers in BASIC
and an experience that I think many of us share.
I know I certainly did.
That whole 10 PRINT "JOHN", 20 GOTO 10.
And the feeling that comes over you when you first experience your ability to kind of control the machine.
Yes, it was a weird feeling that I could tell it to do something and do it forever and it would never stop.
But actually, when you mentioned the looping thing, how we sit there in a loop, when you think about machine learning, which uses iteration and error optimization, it's basically thinking to itself over and over in a loop. And it's correcting itself to find a kind of, I will not do this again, I will not do this again, I will not do this again, as a weird parallel to what you described just now.
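As an editor's aside, for readers who want to see the two loops being discussed side by side, here is a minimal Python sketch. The tireless BASIC-style loop and the error-shrinking loop are the ideas from the conversation; the function name, target value, and step size below are illustrative assumptions, not anything Maeda specified.

# The classic two-liner, 10 PRINT "JOHN" / 20 GOTO 10, is just an
# unconditional loop that never gets tired:
#
#     while True:
#         print("JOHN")
#
# The machine-learning loop described above has the same shape, except each
# pass nudges a guess so that the error shrinks, a toy version of
# "iteration and error optimization."
def toy_error_minimizer(target=4.0, steps=50, learning_rate=0.1):
    guess = 0.0
    for _ in range(steps):
        error = guess - target           # how wrong the current guess is
        guess -= learning_rate * error   # step away from the error: "I will not do this again"
    return guess

print(toy_error_minimizer())  # creeps toward 4.0 as the loop repeats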
One of the things that you bring up, again, in sort of that journey through the early
days of computing was the ELIZA computer program, which is another thing that I remember vividly.
Do you suppose that a modern computer user would find ELIZA compelling? Have we moved on beyond where
that sort of interaction is interesting to people?
Well, I think ELIZA is the foundation for all the chatbots, and I think we still are fascinated by it when we're talking to Alexa or Google Home or whomever. And it responds in ways that make sense sometimes
and doesn't make sense other times.
So I think it still works.
It's a good trick.
It's like a magic trick, a parlor trick.
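As another aside, ELIZA's parlor trick can be sketched in a few lines: look for a keyword and reflect the user's own words back as a question. The rules below are invented for illustration and are far simpler than Weizenbaum's original DOCTOR script.

import re

# A toy ELIZA-style responder: match a pattern, then mirror the captured
# phrase back inside a canned question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default reply when no keyword matches

print(respond("I feel tired of debugging"))   # -> Why do you feel tired of debugging?
print(respond("It rained all day"))           # -> Please go on.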
I've heard stories about a family that was making use of Siri.
They had a child who had developmental disabilities.
And the fact that this child could interact with Siri and that Siri had endless patience.
Siri would answer all of the child's questions and never get frustrated, never get tired.
And they found it extraordinarily helpful with this child's development.
Love that.
Yeah.
Well, I never thought about that.
It's, well, actually, it's kind of an example.
Another example of that is Paro; it was like a robotic seal.
It was a stuffed animal, plush toys type thing.
A toy was made and the toy was a flop.
Senior care facilities were buying these on eBay
because for older people to hold on to this baby seal
and have it react like it's a real living thing
was a big deal.
It reduced stress.
And no one else would take the time to reduce their stress.
So I guess in that sense,
some of these robot living systems
can provide infinite,
well, at least the guise of infinite attention.
Is there a danger there?
I mean, one of the things you point out in the book is sort of keeping grasp on our humanity.
If we come to depend on machines for their infinite patience,
is there a risk that we lose something there?
Wow, I was actually thinking about this just like a month ago. Actually, I wrote something cogent on a blog post, but I think, if I remember what I wrote, it was something to the effect that you can sit on either side of the fence. Like, oh my, it's terrible. The robot has no feeling. How could you do that? Does it really matter if the thing doesn't feel, if you feel differently? So it can go both ways.
I'm struck by the notion that you understand someone better when you can speak to them in their
native language rather than going through a translator.
Where do you think that leaves us when it comes to understanding the machines that we interact with?
Do we, the folks who don't need that translation layer, are they at an advantage?
Oh my gosh. Yeah, great framing.
I think anyone who can think technically,
anyone who can think about how the system works
can actually do things differently
because they have insider information.
It was just like just now, when I'm in a hotel room and the phone rang, and I was like, wait a second, what's going on? No, no, this is actually, you have an isolated line. You don't have to have that sound in your voice.
So when you understand how the system works, you can definitely do things so much better,
so much easier.
Do you think that with our interactions with machines, the machines we interact with every day, should computation fall into the background? Should it not be noticed? Should it not draw attention to itself?
It definitely isn't supposed to be there and tell you, here I am.
But if it's doing anything that might be wrong on your behalf,
you do have to have a critical thinking lens attached to it and ask questions about it and ask the companies as well.
Otherwise, you will not get the most value out of what you're renting, purchasing.
Transat presents a couple trying to beat the winter blues.
We could try hot yoga.
Too sweaty.
We could go skating.
Too icy.
We could book a vacation.
Like somewhere hot.
Yeah, with pools.
And a spa.
And endless snacks.
Yes!
Yes!
Yes!
With savings of up to 40% on Transat South packages,
it's easy to say, so long to winter.
Visit Transat.com or contact your Marlin travel professional for details.
Conditions apply.
Air Transat. Travel moves us.
Cyber threats are evolving every second,
and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker,
the cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions
designed to give you total control,
stopping unauthorized applications,
securing sensitive data,
and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today
to see how a default deny approach
can keep your company safe and compliant.
I'm interested in your take on
what I would perceive as being a detachment
that can come from programming, from spending a
whole lot of time with a computer.
I remember as a teenager, you know, in the summertime when all I had was available time
and I would just bury myself in front of my computer and, you know, hack away at the keyboard
and come up for air, you know, food and water only occasionally.
Again, I wonder, is there a hazard to this?
Do we risk our emotional development, or detachment from friends and loved ones?
You know, like now in hindsight, you can think of people who do embroidery, or who knit, or who built plastic models back in the days when kids built a lot of them.
It's like those are engrossing activities that require attention and you enter the state of flow.
I guess when you're writing software,
it's not dissimilar.
And people who made plastic models and who do embroidery or knit,
they seem to be okay today.
I think the difference is that with computation,
if you are making things that affect other people
at the scale of hundreds, thousands, millions,
you can detach from realizing that you're working not just with numbers, but with people as well.
And that may be a different kind of growing up experience.
Yeah, and that brings up a really interesting point, which is that it seems as though a relatively small group of people could have an oversized influence without a whole lot of pushback, relative to the number of people who are making those decisions.
Absolutely. And I think that it is something that a lot of folks never considered, because they are making too much money to care about it.
And then suddenly when things happen,
and people can actually use the platforms themselves to raise these issues,
you start having to become aware, become awake.
So we're in this weird time where the platforms that can wake people up can also shut down that wokeness at the same time.
And that's why I look at Joseph Weizenbaum and how he considered how ELIZA could be used for harm, so early in its evolution, because he grew up in the Nazi Germany era and fled Nazi Germany. So he could imagine what people with
bad intentions could really do if they had that power.
And I think the people in tech didn't consider that a lot.
Big tech. This is like saving the world. Do no evil.
Oops. Oops. We did that?
How is that possible? Yeah. Isn't that interesting? I mean,
that decades ago he was thinking about that with the comparatively rudimentary abilities of
computers to be able to see forward. And when we look at what's playing out today,
how forward thinking he was. Absolutely. You know, I mean, the fact that he could imagine that
is, I find, boggling. But I guess in that era, it was the era of post-World War II,
DARPA could do anything. It was like a whole new world. So maybe people thought differently back then.
Oh, actually, someone told me this once, how he was worried that most of the world's national research labs used to be run by Manhattan Project era physicists, who all had to deal with the consequences of creating an amazing piece of science that was an amazing weapon.
So, they brought a moral conscience to their work as research lab directors,
and that's gone.
Oh, isn't that interesting?
Yeah.
So I wonder, huh, maybe they all could think like this,
and we lost that competency?
You know, we all grow up learning to read and write,
and we learn basic mathematics.
Do you suppose that computational thinking will become a core competency for the generations ahead who are coming up?
I don't worry about the generations ahead, because I think they grow up computational because of all the tools they use.
I'm more worried about the people who are older. If you think of all the statistics of the world population, how we used to be this population pyramid, with a lot of young people at the base and a few older people at the top, but all projections into the middle of the century show that it's going to be a population rectangle,
which means there'll be just as many older people as there are younger people.
And I think if they don't become computationally literate,
they are going to maybe act the way we see today,
with kind of a nationalism and like, let's make it the old way and hold back progress.
Do you suppose that there are certain cultures that, with their backgrounds and their history, may be predisposed to take better advantage of computational thinking?
Wow, that's so interesting. I think the Japanese, if you think about their fascination with robots, is one example of how they are open to that future.
Also because they have the aging crisis of too many older people, like no younger people.
So they've adopted robots as the future because they're computationally minded. It's okay. There's no stigma around that.
Near the end of the book, you compare cooperation with collaboration.
You contrast the two.
Can you describe to us the difference there?
Yeah, that's one of my favorite insights about organizations and how, when you cooperate, you don't have to be really codependent.
Like, okay, I'll do it.
Collaboration, you are codependent.
We're going to actually work together.
We're going to actually get involved together.
So collaboration is such a wonderful thing.
It's so much harder.
Cooperation is easy.
Collaboration is hard.
Do you suppose we're by necessity moving into an era where it's going to require more collaboration?
Exactly. It's going to require collaboration between us and the
computers. It's going to require collaboration between us and the people
who control all these technologies too.
It strikes me that when I think about technology and design
and the intersection of the two, particularly online,
there is so much bad design out there
that when we come across good design,
I think we find ourselves delighted.
Does that resonate with you?
I think designers will always say that.
I like Paul Rand.
When I interviewed Paul Rand, I was like,
I asked him this question of what is design?
And he said something like,
design that is good is rare, you know, the fact that bad design exists everywhere. It was like, you know, he was that kind of iconic designer, Paul Rand, and he was basically saying, my design is good. Everyone else's is bad. Bad design.
That's why you hire me.
Fair enough.
So designers like to say everything else is bad.
My design is good.
Hire me.
What do you want people to take away from the book?
What's the take home message here?
I'm hoping that people who are not technically oriented can get more curious about, like, how computation controls so many things, and it's invisible and powerful, and not to be afraid of it but to be curious about it.
All right, well, John, thanks so much for taking the time for us. This is a real pleasure, getting to chat with you.
Thank you, Dave.
Our thanks to John Maeda for joining us. The book is titled How to Speak Machine, Computational Thinking for the Rest of Us. For everyone here at the CyberWire, I'm Dave Bittner. Thanks for listening.