Embedded - 87: Make My Own Steel Foundry
Episode Date: February 4, 2015. Chip Gracey spoke with us about founding @ParallaxInc, chip design, and the Propeller with its many cores. Show links: Parallax, some notes on open sourcing the Propeller, the Propeller One Verilog forum, and Propeller products. Elecia has a very old Propeller Starter Kit but is tempted to get the PropStick USB. Many years ago, Chris got a BASIC Stamp 2 module (like this one) to control a camera in his RC airplane.
Transcript
Welcome to Embedded, the show for people who love building gadgets.
I'm Elecia White, alongside Christopher White.
If you are one of the many folks who got started with the BASIC Stamp, we have a treat today.
Parallax's Chip Gracey is here to talk with us.
Hi, Chip. Thanks for being on the show.
Well, hi, and thanks for having me on.
Could you tell us a bit about yourself? Well, I'm 47 years old. I grew up in Carmichael, part of
Sacramento. I always loved things that were electronic and mechanical and I was always
dreaming of making robots from when I was a little kid, maybe eight or nine.
And I loved audio.
I've been an audiophile for a long time.
And when I was 12, I was able to start playing with computers.
My dad worked at Aerojet, and he had an Apple II system that he could bring home on weekends.
So that kind of gave me a place where I could really start to learn how to program things.
Before that, I was taking calculators apart and wiring them into my stereo. And I could, you know,
in retrospect, I was hearing things like keyboard matrix scanning tones that, you know, were never intended to be heard. But for the first time then, I could actually program a computer. So that
was just, I love doing that. And pretty soon I was doing assembly language, then I was making
board level stuff, and then I've always wanted to make chips, and that's what we're doing now.
And so you are the founder of Parallax, is that right?
One of them, yeah. My friend Lance Walley and I started the company, and it wasn't a company at
the time, it was just something on paper. My dad had the
foresight to have us set up as an S-Corp. And that was in 1987. We had both just graduated from high
school. Lance and I had been working together on projects since we were in the seventh grade.
And his specialty, he understood transistors. And I didn't at that time understand how a transistor
works.
So he would always provide these simple transistor solutions to switching issues I had with digital
stuff I was doing.
And we started Parallax together in September 87, and we started out making basically add-ons
for the Apple II computers. And we eventually kind of settled into making development tools,
which are assemblers and device programmers, things like this,
and ROM emulators that would help people make embedded systems.
And that's where we've kind of been ever since.
And so did you ever go back to college and do the degrees path,
or has it all been very home-taught?
Well, I did go to college for a while.
My parents and my girlfriend at the time were convinced that if I didn't get a degree,
that I would not be able to function in the real world, and I needed to get through school.
So they convinced me to go, and I got through about two and a half semesters,
and I wound up with a 0.95 GPA, and I was on academic probation. And I just kind of
fell out of the bottom of it and just went back to doing computer stuff, because that's really
where my heart was. And that's pretty much how I've done things. I've
never been a good student unless I really want to know
something. If I want to know something, I can find out what I need and I can put it all together and
put it to work. But I was never that great of a student. So my teachers probably supposed I
wouldn't, you know, have ever gotten much done. But you have, and you employ 40 people? I think it's something like
that. It's probably mid to high 30s right now. And a lot of them, I mean, really engineering-wise,
there's myself and, let me think, maybe three other guys within the company that can do engineering work.
And the rest of the people there kind of handle getting products built and packaged up and tested and shipped out to customers.
And, you know, what little marketing we do also.
The business side of the business.
Right, right.
And you do most of your manufacturing in the States, right?
Yeah, in Rocklin, we have...
Rocklin, California, right?
Rocklin, California.
We have a surface mount assembly line
where we can build board level products.
We have had some facilities in China,
but we're kind of moving away from that
because the lead times from there are very long,
and it takes months to get a product straightened out, even if we have our own people over there.
So we've kind of spun that off into something called Simpletronics,
and that's being run by a really sharp guy who's worked with us.
He came from Argentina into America, and we helped bring him over in 1999.
He and his family, his name's Aristides Alvarez.
And he runs Simpletronics now.
So for what we need from China,
we just work through him today.
But otherwise, we're trying to build
more and more stuff in Rocklin
because we have a lot better control over things.
And we wouldn't want to let any high-value IP stuff out of the country. It's safer to do that here.
I've been reading that's becoming a trend, where people are looking at China because that's what's traditionally done, but finding that in order to be agile, and to have more control, like you're saying, over the production process, they bring it closer to home. Because I guess when you work with China, you're turning on a giant machine that is hard to steer once it's going, and you don't really have a lot of visibility. Even if on paper it's, wow, this is a lot cheaper, you can make it up in other ways by being more local. It's fascinating to me that things are shifting that way.
Yeah, I mean, labor is slowly becoming more expensive there, but it was not really labor that made China so viable.
It was that the same parts you would buy, sourced from American-based or stateside companies, would be for sale over there at a fraction of the cost you'd pay to acquire them here in America through distributors. So your parts costs over there were maybe less than half of what the same things would cost here. And of course, labor was also cheaper, and it still is cheaper by a good margin. But, you know, when we were building a lot of high run rate,
low IP stuff over there,
we were getting finished product.
I mean, built, tested, packaged, delivered to us
for 60% of what the raw materials
would have cost us in the US.
So even if we worked for free, we couldn't get ahead. That's how great the disparity was. But that's slowly changing. And if people can move up the food chain, which I don't know if we're doing, and get better margins on their stuff, they don't need to build anything in China. They can afford to do it here.
And so your focus, on most of the things you build, is educational. Is that right?
It has become that way, and a lot of that has to do with my, perhaps, failings, in that I've traditionally made our core products, and it's just taking me longer and longer to finish things. Like the last Propeller chip, the first one we've sold, has been out for nine years now. I worked on it for eight years, and for these last nine years I've been working on the next one. It's all kind of a kooky circumstance. It's hard to fathom how this could work, and we hope that it all will work, but it just takes me a long time. The projects I do seem to get more and more complicated, and they take more and more years to finish. So I'm feeding things at a slower rate into Parallax, and Parallax is kind of expanding out into education from the base we've had established for a while.
And we can do really well in that market.
We can make a good, consistent base of curriculum material.
And we can meet with teachers and train them.
And we have, I think, pretty good customer service.
And we're good in that area.
We can be good in that area without needing new products all the time.
So you mentioned the propeller, and it's kind of an odd processor.
It uses eight cores? I think you call them cogs?
Cogs, yeah.
And so you have eight separate things going on at once.
Yeah, see, forever we've been using microcontrollers.
And I've been thinking, you know, from a long time ago,
I started wanting to make a chip in 1990,
but I didn't, you know, I was reading books on chip design
and kind of assimilating what I needed to know.
But the tools were expensive.
It was ambiguous how we get from idea
all the way to finished chip, but we slowly progressed.
But the thing I always wanted to do was,
you know, I had written a lot of software for PIC chips,
which are very low-level processors from Microchip.
But it's amazing how you can pack these,
you know, and everything works like this in electronics.
It's many, many low-level things coming together to do something complicated.
And what I ran into was that it would be great if we could do multiple things concurrently.
So if we have multiple processors running, or even one processor that has some deterministic instruction time,
like, say, one or two or four clocks, and then it can rotate from one process to the next and come
around again, round-robin, then you can run, say, four programs as if you had four separate processors.
They're virtual because you're using the same hardware, but I did a lot of experimenting with
that using FPGAs, and that was kind of where I knew we wanted to go because when you're designing
real-time systems or anything that has to address
asynchronous things that are going on, trying to coerce a single-threaded architecture into doing that job is always a pain, and you always wind up with jitter. And you're always kind of fighting things to get your jitter down, or to get your response time to where it needs to be. And we can talk more; there's a lot to say about all that kind of stuff.
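The round-robin rotation Chip describes can be sketched as a toy simulation in C (illustrative only, not Parallax code; the slot count and tick counts are invented). With a fixed rotation and deterministic instruction timing, each virtual process runs on an exact schedule, with zero jitter:

```c
#include <assert.h>

#define NUM_SLOTS 4  /* four virtual processes sharing one pipeline */

static int last_run[NUM_SLOTS] = {-1, -1, -1, -1};
static int gap[NUM_SLOTS];  /* clocks between consecutive runs of a process */

/* Rotate through the slots, one per clock, and record how many
 * clocks elapse between consecutive runs of the same process.
 * With a fixed rotation, every process sees exactly NUM_SLOTS
 * clocks between its turns. */
void run_round_robin(int total_ticks) {
    for (int tick = 0; tick < total_ticks; tick++) {
        int p = tick % NUM_SLOTS;  /* whose slot this clock edge is */
        if (last_run[p] >= 0)
            gap[p] = tick - last_run[p];
        last_run[p] = tick;
    }
}
```

Because the spacing between turns never varies, each virtual process behaves like its own deterministic machine, which is the property that makes real-time code easy to reason about.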
But if we could have a processor that can run simple
instructions on an absolutely clock-edge-based schedule, and
we can run these things concurrently, and they can
maybe share variable states through RAM registers that are
communal, then designing embedded systems
that have to do real-time things,
the complexity collapses
because now you don't have to write,
you don't have to coerce a single-threaded machine
into doing 10 different things
with all these state variables.
You now have separate processors
that can each run separate programs.
And no matter what is going on elsewhere,
nobody's getting hiccuped by anything else.
So they all run consistently.
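The communal-RAM idea, where cogs share variable state through shared registers, might look roughly like this flag-and-mailbox sketch in C (names are made up; the real Propeller arbitrates hub access in hardware):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of cogs sharing state through communal "hub" RAM:
 * a producer cog writes a value and raises a flag; a consumer cog
 * polls the flag and picks the value up. */
static volatile uint32_t hub_value;
static volatile int      hub_flag;

void producer_step(uint32_t v) {
    hub_value = v;   /* publish the data first */
    hub_flag  = 1;   /* then raise the flag */
}

/* Returns 1 and fills *out if a new value was waiting, else 0. */
int consumer_step(uint32_t *out) {
    if (!hub_flag) return 0;   /* nothing published yet */
    *out = hub_value;
    hub_flag = 0;              /* acknowledge, ready for the next value */
    return 1;
}
```

On a single-core simulation like this, the flag ordering is trivially safe; on real concurrent hardware the shared memory's access ordering is what makes the handshake work.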
So it's just a joy to write. I love doing that kind of programming, because it's like making a finely tuned machine. And it's an odd kind of a thing. I mean, the market doesn't make machines that do that. You can have an FPGA do that, but you've got to define everything in a hardware description language, which is really tedious, and it's not simple. It took me years to kind of adopt the mental framework so I could even know how to think about doing those kinds of things. But if you can make something that allows you to program in software to do concurrent, separate things, then it's a lot of fun. You have kind of the flexibility of an FPGA and the ease of programming a microprocessor, but without the constraints of a single machine.
That's a pretty different philosophy than doing it all in software with ever-larger chips. I mean, I can think of many times where I've needed to handle multiple threads and done that with a real-time operating system, but your processor truly separates them. So it's not multi-threaded in a software sort of sense.
It's true multi-core.
Is that right?
Yeah, I think you've got it.
The idea is that you actually have hardware.
Instead of an RTOS splitting up bandwidth,
you have actual separate hardware
dealing with each one, each handling its own tiny little program that runs consistently and repeats what it's doing on the same nth clock edge.
I mean, if you write your code that way, you know.
So it's really like having hardware that you can define with software.
And it is a lot easier to write code, I mean, processor code, than it is to define what flip-flops are doing and what's between them.
So how is this different than some of the larger processors we hear of going in products?
I mean, I'm thinking about the ARM line with the Cortex M0s and M3s that I've been using a lot are single processor, single core, 32-bit processors.
But yours was made eight years ago, and those are fairly new.
Right.
Where do you fit in the market?
Well, we have kind of an odd place. I mean, who likes our chip is typically older men who might remember how computing was a long time ago, and they love the idea of being able to have actual control over a system. And then also some younger people who discover that it's fun to be able to actually control things down to the clock edge via software.
But about how we might differ from an ARM: I've got to tell you, over these last nine years, being holed up working on this, I am feeling a little bit like a dinosaur.
I just got a fancy Samsung phone the other, well, around Christmas, and now I can talk to it, and it's
amazing how well it can understand voice. And so I'm just kind of catching up with where a lot of
people have been for years in the general compute area. But generally speaking, if you take something
like an ARM chip, I don't know how the really inexpensive ones work, but let's say generally
that's kind of a C-driven architecture,
right? The idea is that someone can express what they want to have happen in C, and that thing's
job is to crunch through it as quickly as possible. And the cheap ones probably don't have caches,
but who knows, they might these days. I mean, process densities are very high. You could put
things like caches on cheap chips now, but let's say that you take a medium-level ARM.
Well, it probably has a couple of levels of instruction and data caching.
And it will get through.
It'll crunch through compiled code pretty quickly.
But if you're trying to write any kind of code that does things on a schedule, you're going to be fighting it all the time, because you don't know when you're going to get a cache hit.
You might discover, well, if I run through this loop once,
then I can repeat it again, and I use the same data,
and I can get what looks like very consistent timing.
But if you switch to another chip in the family,
that could go out the window,
and you'd be facing maybe some other dynamics. So where computing has gone, it seems to me, is towards crunching through code as quickly as possible, and then anything that requires real-time manipulation is handled by some kind of dedicated hardware block, which hides all kinds of clock latencies and everything from the user.
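The cache-jitter problem Chip is pointing at can be shown with a toy model in C (the hit/miss costs and cache size are made-up numbers): the very same loop takes a different number of clocks depending on what is already cached.

```c
#include <assert.h>

#define MISS_COST 20   /* clocks for a cache miss (illustrative) */
#define HIT_COST   2   /* clocks for a cache hit (illustrative) */
#define LINES      4

/* Tiny cache model: the first touch of each line misses, every
 * later touch hits, so the loop's timing depends on history. */
static int valid[LINES];

int loop_cost(int n_lines) {
    int clocks = 0;
    for (int i = 0; i < n_lines; i++) {
        clocks += valid[i] ? HIT_COST : MISS_COST;
        valid[i] = 1;   /* line is cached from now on */
    }
    return clocks;
}
```

The first pass through the loop costs 4 misses; the second pass costs 4 hits, an order of magnitude faster. That history dependence is exactly the jitter a clock-edge-scheduled architecture avoids.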
Dedicated hardware block like DMA and SPI controllers?
Sure, and MACs and PHYs and USB things.
The other way to handle some of that
is to put a little cheaper 8-bit microcontroller out there
to handle special timing things
that you don't want to bother your 32-bit processor with.
But I guess you put out seven other 32-bit processors.
Yeah, well, you know, though, we have limited hardware peripherals.
I mean, when someone goes to make a chip today,
they'll just grab an IP block
that is a USB 2.0 or 3.0 interface,
and they'll throw it in there,
and they'll pay $100,000 to the IP owner,
and they'll move forward.
And so much of the expense in modern chips
is purchasing the IP to realize all these
blocks that you need, that customers expect you to have. So our chip doesn't really have any of that stuff. It would be good if it were easy to interface to some chip that could do that for you. If you can buy a one- or two-dollar USB high-speed 8-bit controller, or whatever, that has that high-speed interface, and then talk over a few pins to our chip, then that would be a solution.
But that's not the highly integrated kind of thing that customers at first blush would expect.
They'd expect everything to be built in, but that's just not the kind of chip that we're making.
So if someone uses our chip, they're going to have to choose it on the basis that it allows them to program in ways that they can't program with other systems.
And that's the draw.
And then they'll have to compensate for these other things externally.
And so going back to the licensing of the blocks, ARM licenses their processor core.
And then, you know, ST or NXP or anybody, you know, lots of other people buy the core and that's a huge part of the CPU cost.
You make your core and do you license it to other people?
Well, it's never come up.
Somebody else could use it.
We open sourced the Prop 1, I think, in, was it last?
Last spring, I think.
Yeah, sometime last year.
I think it was maybe August or so.
I can't remember, but it was at that DEF CON show.
We introduced, or what would you say, introduced it.
And so people can go ahead and use it right now if they wanted to.
And as long as they kept open their work, they wouldn't need to pay us anything.
But who knows? Is there anyone that wants to do that? There might be. But I think what's happened over time, the whole compute
paradigm has gone in the direction of big code libraries to do complex things, married to IP
blocks that handle high-speed protocols. And so it's made what could be very, very difficult
very, very simple.
But I think then retroactively,
it kind of constrains what kinds of things
can be and are going to be made
because they're all kind of within that paradigm.
And so I think a lot of programmers today
have never felt what it's like to program a chip
that actually does what you say exactly when you say
to do it. And you can do that in eight concurrent instances. So for some kinds of things, I suspect
people out there are yearning for this kind of thing, but they don't even know to rationally
wish for it because they don't even know it's on the program. It's not on the agenda anywhere. So we're kind of off in a strange space, and we have some strange customers.
But they're a pretty enthusiastic lot of people.
And so the propeller kind of scratches an itch that some people, you know, got a long time ago or some people have discovered.
But it's not the, you know, $1 do-everything ARM chip. It's a whole
different animal that exists for different reasons. And so what have people done with it?
What applications have you seen? I could say this. It's gone into a lot of systems where
some kind of concurrent multi-processing is very helpful for them to realize their goals.
So it's gone into some machine tools where you have motors and things running concurrently,
and there might be several of them, and their activities need to be coordinated.
It's really good for doing that kind of thing.
Or even for some kind of thing where you've got a few different, maybe some kind of user interface
where you have some kind of communication protocol and then you're also monitoring some
other thing that's ongoing in the background and you need to tie all these things together.
It's been used for that, but I can't really just, it's sort of silly, but I can't
recall any specific application. I mean, I know this from our customers, the chip finds its way
into applications that are low run rate, high-ticket products. So things that are not machines most people would have occasion to see. They're things that go into factories, or maybe some medical equipment, or something that's a little obscure. It's not consumer-electronics kind of stuff.
But you also do have this video display generator
that lets you have these cores that can control robotic things,
but also you can put up a really nice display.
Yeah, yeah.
We do have a little peripheral in each cog that can generate video,
and it does it on like a pixel output basis
where you can take 32 bits worth of data
and shift it out in a couple different fashions
in order to achieve some kind of pixel signaling.
And then by wrapping some more code around that,
you can get the horizontal and vertical sync signaling.
And so for VGA monitors and things like TVs,
that works great.
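Shifting 32 bits of data out as pixels, as the cog video peripheral does, can be sketched like this in C (simplified and illustrative; the real hardware supports multiple shift modes and color lookup, and the sync signaling is wrapped around it in code):

```c
#include <assert.h>
#include <stdint.h>

/* Serialize a 32-bit word into pixel values, one bit per clock,
 * MSB first.  Returns the number of pixels emitted. */
int shift_out_pixels(uint32_t word, int pixels[32]) {
    for (int i = 0; i < 32; i++) {
        pixels[i] = (word >> 31) & 1;  /* emit the current MSB */
        word <<= 1;                    /* advance to the next bit */
    }
    return 32;
}
```

Feeding one word after another at the pixel clock rate, with software generating the horizontal and vertical sync timing around it, is the basic shape of the VGA/TV trick described above.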
Now the world is kind of moving towards an expectation of having HDMI. And that's kind of another thing that is hard for us to address with 3.3-volt I/O pins. We'd have to make some dedicated 1.8-volt I/O pins in a 180-nanometer or smaller process.
And so that starts to become very process dependent. So I kind of hate to see over time
things moving away from simplicity, like very simple open protocols that you can observe on a
scope to things that are harder to observe and just about impossible to generate without dedicated
custom silicon. I know with HDMI, there's some protocol interaction. It's not just
sending out pixels and the monitor figures it out. I think there's more going on than that.
And so I don't know how we're going to address that in the future.
This next chip we're working on can generate analog pixels of high quality at a good rate,
but the world's already moving on to digital, super overclocked serial things like HDMI.
And so we don't have anything that's going to do that.
When we get this chip done, it won't be able to do that.
Well, to be fair, I mean, I don't think there are many micros that do that now either.
I mean, most of the projects I see people doing, even with FPGAs or generating VGA,
I think scaling the HDMI wall
is difficult
in every context right now
and I think it does need
dedicated hardware
because I think it does HDCP
which is some, like you're saying
some strange negotiation but it's
doing some authentication
and encryption stuff
It's overloaded, and it starts to venture into, like, you've got to pay to play. Do you want to buy a license so that you can make something that can have the sticker on it that shows you're doing it right? USB's been that way for a while. And personally, I don't get enthused about stuff like that, because the laws of physics don't necessitate such complexity. I like things that are beautiful and simple and open and are accessible to people.
But the world's becoming increasingly closed
in all kinds of little ways.
Well, it's both, right?
I mean, there's a lot of hacker community
and the maker community that are fighting against that
and making interesting things with chips like yours
and other people's chips and devices.
But the consumer product world is trying to drive cost out at all,
at any cost, which is a terrible pun.
Yeah, right.
So they end up with these more complex and purpose-built solutions.
And I think what's happening, what you're saying is happening in hardware
has also sort of happened in software. Because I think back to the Apple II, like you were mentioning: you could drop into the debugger and you could hand-edit assembly and have something happen right there. Now it's Python or something, but even that's so heavyweight, and it's tailored less toward thinking about how your design works, or how your processor's doing things, or really understanding the computer and the hardware, and more about, can I piece together these pre-built libraries that I've downloaded, in an extremely expressive way, to do complicated things with as little code as possible? And it's a weird tension, because I feel that same desire to get back to the Apple II.
Yeah, that's really the only thing that interests me, that kind of feeling. This other way of having things closed and, you know, impossible to know. I mean, I don't think there can be anybody alive that can fully understand what's happening inside of any of our phones and desktop computers. It's become way too complex.
And, you know, I mean, there are deep things in there to keep you out of those regions too. So,
but like this next chip, you know, what I've always wanted to do is make it so that it can
support its own development environment on the chip.
Because we've had this problem for a long time.
Let's say you make something with a microcontroller, and you come back a few years later and you've got to rev it.
Well, now your tools are incompatible with the current version of Windows. Although this dates me.
I mean, I understand now that Chromebook is the new thing.
And, of course,
the iPad's been out for a while. So if you can get your stuff running on those platforms,
and Android, you're ahead. But we come back later to fix something, and we have to go spend all kinds of money to buy updated tools. And then we can't even do it in the sensible way we
used to. Now we're locked into some other paradigm that we never wanted and stuff just keeps going down this road. And I would like to make something
where you can come back in 25 years and pull up the source code that you last worked on,
modify it in situ and hit a button that stores that and now it runs that. So you don't lose
your work. I like that idea quite a lot.
What are the rumors about the Propeller 2? Can you tell us? Can you give
us a timeline or a hint about such things? Yeah, well, yeah, boy, this timeline thing's
been going on, I tell you, for nine years now. So I'm, well, I'll just tell you what I'm doing.
Right now, there are two portions to the chip. There's the full custom side, which has our special I/O pins,
which have A to D and D to A converters in them,
and a bunch of neat little I.O. modes.
And that stuff is not something we can just buy in a block and put in the chip.
We design all that.
And that has to be laid out as a custom layout effort in order to exist in the chip.
So right now I've finished the schematic for that, and I use Spice.
And boy, I'll tell you, SPICE will teach you everything.
If anyone wants to learn chip design, what you've got to do is first get some kind of schematic entry tool
and a SPICE simulator, and then something to show the SPICE output.
And then you need to go get some models, some model files,
for some process that you'd like to use.
Right now we're using an ON Semi 180-nanometer process.
So if you can get those files, they define in detail what a PMOS or an NMOS transistor is, with various voltage ranges and the different oxide thicknesses. There are maybe 150 variables that define those models. So when you build stuff from a schematic basis, and then you turn that into a SPICE netlist and simulate it, you will see just about everything that could happen if you were to fabricate that.
There's even this thing called Monte Carlo simulation, where you can have it randomize device-to-device variance, so that you can see: if I really build this, and I have these tiny devices, how much variance is there between identical next-door-neighbor transistors sitting on the die, two micrometers apart? How much are they going to vary in actuality? You can simulate all these kinds of things. And that SPICE will teach you everything. It'll be like, I suppose, going to college or something.
And you'll learn just from feedback and playing around, all kinds of stuff.
And when something isn't what you think, you investigate and you realize there's some reason and it starts to make sense.
So that's how I've really learned chip design.
And then when we fabricate stuff, it works like it did in the SPICE simulation.
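A Monte Carlo mismatch run like the one described above could be modeled, very loosely, like this in C (the nominal threshold, noise range, and generator are all invented for illustration; real runs use the foundry's statistical model parameters):

```c
#include <assert.h>

/* Toy Monte Carlo mismatch run: perturb a nominal threshold
 * voltage (in millivolts) with a small deterministic pseudo-random
 * offset per device, and track how far two "neighboring"
 * transistors can drift apart. */
static unsigned lcg_state = 1;

static int noise_mv(void) {            /* pseudo-random in [-5, +5] mV */
    lcg_state = lcg_state * 1103515245u + 12345u;
    return (int)((lcg_state >> 16) % 11) - 5;
}

int worst_mismatch_mv(int trials) {
    int worst = 0;
    for (int t = 0; t < trials; t++) {
        int a = 500 + noise_mv();      /* device A threshold */
        int b = 500 + noise_mv();      /* its next-door neighbor */
        int d = a > b ? a - b : b - a;
        if (d > worst) worst = d;
    }
    return worst;
}
```

The point of the real tool is the same as this sketch: instead of one nominal answer, you get a distribution, and you design so the worst sampled case still works.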
Anyway, so we have two portions of the chip.
The first is the I/O pins and the crystal oscillator
and the reset input and all that.
That's all full custom.
So that's designed at the transistor level
and that has to be laid out.
And then that will marry to a giant block
of synthesized Verilog code. And
Verilog is a hardware description language. So if you get an FPGA, you could write code for it in
Verilog. And that's how we do the chip design. We write it in Verilog and run it on an FPGA and see
how it works. And when it's good, then we have that synthesized, and we make a chip from that. The synthesis process actually takes your logic netlist and then refits it to a library that you have from the foundry, of all sorts of simple combinational gates, like NANDs and NORs, and funny gates with, you know, a true and inverted output, and some different types of flip-flops with different drive levels on the outputs.
And it fits your design optimally into their cell set,
and then it makes a giant net list,
and then it proceeds to put it all together and wire it all up.
And this is the job that a person could not do in their lifetime.
A computer can do it in maybe a couple of hours.
Anyway, I'm having to translate the schematic.
I got it all done, but I used my own symbols,
which I liked because they just had the transistor with length and the M factor.
And I did everything like that,
but then in order to facilitate schematic-driven layout
on the side
of the people who are going to be doing the polygon pushing, the full custom layout. So they
take and say, okay, he wants a transistor of these dimensions and I'm going to lay it out here.
And he needs, for them to do schematic driven layout, I cannot use my own symbols. I have to
go use the factory symbols from OnSemi, which have like 50 properties for every single little element.
And I'm having to redo everything, and it's a big, tedious pain.
The biggest problem is not so much switching to their symbols with all of their properties and then reentering all of my dimensions, but they're on a grid of 10 internal units.
Mine were like 32.
So it's like going from English to metric.
Every single thing I drew has to now be completely redrawn. And in order to know we didn't make a mistake, we're going to have to do a net list comparison between the two from what I started
with and what I wound up with to make sure everything's all right. And then they're going
to proceed to do the layout.
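The netlist comparison step, checking that the redrawn schematic still describes the same circuit, can be sketched as an order-independent compare in C (a real LVS-style tool does far more, handling net renaming, device merging, and parameters; the "device:net" string encoding here is just for illustration):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Represent each netlist as a list of "device:net" connection
 * strings, sort both copies, and check they match regardless of
 * the order the elements were entered in.  Note: sorts the
 * caller's arrays in place. */
static int cmp_str(const void *a, const void *b) {
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int netlists_equal(const char *a[], const char *b[], int n) {
    qsort((void *)a, n, sizeof a[0], cmp_str);
    qsort((void *)b, n, sizeof b[0], cmp_str);
    for (int i = 0; i < n; i++)
        if (strcmp(a[i], b[i]) != 0)
            return 0;   /* a connection differs */
    return 1;           /* same connectivity */
}
```

Sorting first makes the comparison insensitive to drawing order, which is exactly why a netlist diff can confirm that a complete redraw (new symbols, new grid) didn't change the circuit.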
And at that point, I'll get back on the Verilog, finish the architecture, and then hopefully
everything will come together at once. When they're finishing the custom layout, I'll be wrapping up
the, I've got about four more months worth of Verilog work. And then that thing, that big blob
is going to get synthesized and it's going to fit right in the middle of this big open space that the full custom layout will leave and the chip should come
together. And then we'll build it, and then we'll hope it works. There are a lot of parts of making a chip work.
Yeah. So going back to the open sourcing of the Propeller, which, I mean, was a lot of work as well, what was behind the decision to make it open source?
You know, my brother thought it would be a good thing to do.
Ken, he's our president.
And I'm fine with that.
I mean, you know, we've been selling this part for nine years now, and there are some customers that have really used it.
But in the scheme of things in the greater world, it's kind of like a trivial matter.
Some people have heard of it.
Some haven't.
So we thought there's nothing to lose, really.
It might generate some interest.
It is kind of a completely alternative way of writing code for the person who programs it.
So maybe by making it more accessible or even giving it the open source flavor, it might draw some more people in and give it more exposure.
And I don't know that we materially stand to gain much from it.
I mean, at this point, someone could take our code and go make their own chip, and
I don't believe
they'd ever even have to let us know.
No, probably not. I mean, that is
what it means.
Open source.
Right. But, the other
side of the coin there is
so much of Parallax
is focused around education,
and hopefully people can use that open source code
as a way to learn.
I mean, you just told us about the Propeller 2
and all of the steps you're taking,
and with Propeller 1 being open source,
I could do a SPICE model on it, couldn't I?
Well, you could do a Verilog simulation.
SPICE is lower level than that even.
SPICE is the actual, you know, how the transistors signal each other and what happens.
There's a lot of interplay with charge sharing, you know, from transistor to transistor when something happens.
So that part is kind of underneath the whole Verilog side. But the Verilog code is sufficient to make it run on any kind of FPGA, or to take that Verilog code and synthesize it and make your own chip using the foundry's own I/O pads, which won't do anything special, but you can get digital I/O that way. It was the I/O pads that needed to be designed using SPICE simulations, so that it was certain they're going to behave in a sensible way.
So I could take the propeller and put it on my FPGA test board.
We got a Papilio board that has a pretty big FPGA on there.
Yeah, you could run it on there.
It probably would take just a small fraction of the total gates.
Put two of them on there.
We could probably put four, maybe.
Think of all of the cores we'd have then.
Yeah, right.
And you could take that code and you could say,
okay, I don't want an eight-core architecture.
Let's make it a 32 core architecture.
That's what I was about to ask: how easy would it be for somebody to take it and build on it and say, okay, now I understand this, let's expand this?
Yeah, it wouldn't be hard. I mean, you'd have to look at it for a while and run it until you understood.
And there's not much code there.
I think zipped up, that whole thing is like six kilobytes or something. There's just not much there. But if you get to understand it, you could do that, and I'm sure we've had some customers who ventured into that. If you go to our forums, there's a subforum called Propeller 1 Verilog code, and people on there have added all kinds of little peripherals to it.
I've just been so holed up working on this Prop 2, I haven't looked much lately. Sometimes I'll go just to see, out of fearful curiosity, boy, is anybody still there? But fortunately, I go there and there's like 130 people on the Propeller forum, and there's maybe 12 or 19 people in the Propeller 1 Verilog code forum. So it seems to have a life of its own, and there's still active discussion. But I feel kind of scared to look, because my contributions lately have been so sparse. There's a community of people there that just like to talk about what they're working on, and that thing has kind of a life of its own.
It's silly for me to suppose I need to be involved. It carries on its own.
And is this a good way to learn about chip design and FPGAs?
You know, I come at it as a software engineer, and I'm always kind of curious about that stuff.
Should I download Propeller 1, check out the forums, and try it on my FPGA? What advice would you give to people interested in getting started with chip design?
Well, I mean, here's the big bridge, right? How do you go from a hardware description language, which is defining what logic exists between flip-flops? Basically, that's all you can really do, but that means you can do everything. So that Verilog description of the Propeller chip will make a Propeller chip, and then that chip can be talked to over a serial link and loaded up from a PC with software that actually runs on it. So it is a whole system.
I mean, if you start off, it's a whole system that you can observe and you could make little
tweaks to and see the effects of your tweaks. Whereas if you started off otherwise, you'd have
to think, okay, well, I got to make an architecture. It's going to pick up some bit length of
instruction. It's going to execute it in some fashion. And you've got to build up a critical
mass before you have anything that you can talk to. And that could take you a year. I mean, it would take me that long, you know,
starting from scratch. It just takes a long time to bring everything together. So it's a huge leg
up on seeing a whole and very simple system in action. One of the things I keep threatening to
do, which I don't actually get to, is to learn how to make a very basic CPU on an FPGA.
You know, as sort of a getting started with FPGAs project.
And the more I've looked at it, it's like, well, it is easy, but not easy.
And so seeing an example like that might, like you said, be a great way to start.
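Since making a basic CPU keeps coming up, here is a sketch of what that core loop amounts to in the abstract. It's a toy in Python rather than Verilog, with a four-instruction set invented purely for illustration (none of this is the Propeller's architecture); a real FPGA design would express the same fetch-decode-execute cycle in a hardware description language.

```python
# A toy fetch-decode-execute loop, sketched in Python rather than Verilog.
# The instruction set here is invented for illustration; it is not the
# Propeller's, just the minimal "pick up an instruction and run it" idea.

LOAD, ADD, JNZ, HALT = range(4)     # made-up opcodes

def run(program):
    regs = [0] * 4                  # four general-purpose registers
    pc = 0                          # program counter
    while True:
        op, a, b = program[pc]      # fetch
        pc += 1
        if op == LOAD:              # decode + execute
            regs[a] = b
        elif op == ADD:
            regs[a] += regs[b]
        elif op == JNZ:             # jump to address b if reg a is nonzero
            if regs[a]:
                pc = b
        elif op == HALT:
            return regs

# Sum 1..5 by looping: r0 accumulates, r1 counts down, r2 holds -1.
prog = [
    (LOAD, 0, 0), (LOAD, 1, 5), (LOAD, 2, -1),
    (ADD, 0, 1),                    # r0 += r1
    (ADD, 1, 2),                    # r1 -= 1
    (JNZ, 1, 3),                    # loop back while r1 != 0
    (HALT, 0, 0),
]
print(run(prog)[0])                 # 15
```

Getting even this much running, as Chip says, is the critical mass: once instructions are being fetched and executed, everything else is adding opcodes.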
Oh, yeah, it's a lot of fun.
I mean, because for me, it's always been magic.
From the time I got the Apple II, it's like,
how do they make the ball go across the screen
and act like it's under some kind of gravitational pull?
Those things just completely mystified me.
And I used to think, like, in that Battlezone game,
it was possible to wander right off the map,
and it's possible that there's more to it.
But the illusion was so great for me that I thought these things must be infinite.
So, always being able to move forward and make a step and get further into something that works, and making a processor, it's really intriguing to me and it's very rewarding. So I think for some people it would be also. But to have something that can pick up instructions and execute them and run them, and have some way to write some quick software, once you get that loop running, you're off to the races. You can make it into whatever you want.
And once we get the processor running on our FPGA, or if we take the shortcut and just buy a Parallax Propeller, this is written in a special language. It's not C, is that true?
Oh, you mean the language that the Propeller is programmed in?
Yes.
Yeah. So, switching back to the software side. For some reason, I don't know why, I've never been a fan of C myself.
And so I've never looked at C code and had any epiphanies or anything.
It always seemed kind of cryptic to me.
And in fact, you can write cryptic C code, right?
It's high on special characters that do things and kind of low on keywords.
And you could format it however you want,
you know, tab space-wise.
So we just wanted to make a language
that would be very, very simple
and kind of almost look like pseudocode, where indentation matters to it. It counts whitespace.
So, like Python?
Yeah, like Python does. But if you were to write pseudocode, you'd probably write it down like that anyway, if you're just trying to get an idea straight on paper. So we made something that was like that, and it has very simple keywords. Like, all loops use the keyword repeat.
And there's very minimal monkey motion surrounding repeat.
You can just say repeat and indent a block of code and it'll repeat it.
You could say repeat space 10
and put an indented block of code underneath it.
It'll repeat it 10 times.
You could say repeat some variable from 1 to 10, and it'll count that variable through, and you'll have access to that variable within the loop. Or you could say repeat while, or repeat until. Very, very simple constructs that don't overclutter everything. Like, the way a for loop works in C always just turned me off, you know, all these semicolons and brackets and all this stuff. And I know that it's simple, but I could never get into that. It always looked cryptic to me. So we tried to make a language that was very simple, and it was untyped. The bugaboo I've always had programming, and it might be because I come from an assembly background, is that I hate when types start second-guessing what I'm trying to do and I can't untype things sufficiently to get what I want to happen. So, you know, everything in Spin is 32 bits.
Even if you access a byte, it's promoted to 32 bits for computational purposes. It's not regarded as 8 bits through the computation.
It's just 32.
And everything is 32, and it's effectively 2's complement
because that's just how the math circuits work.
And so it's very, very simple.
And if you understand what happens when something rolls over, there won't be any surprises. It kind of just gets the heck out of your way so that you can just write code and it works. At least if you understand binary phenomena enough to anticipate certain things, it's nice because it just gets out of your way and lets you express what you want to do. And you could write a nice little program, and anybody can look at it, even if they don't know Spin, I think, and pretty much figure out what it does.
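As a rough illustration of the two ideas above, here is a Python sketch, since the transcript itself compares Spin's whitespace sensitivity to Python's. The repeat-to-Python mappings and the `u32`/`as_signed` helpers are analogies invented here for illustration; they are not Spin syntax or any Parallax tool.

```python
# Spin's "repeat" forms, sketched as rough Python equivalents, plus a helper
# mimicking "everything is 32 bits, effectively two's complement".
# (An analogy for illustration, not actual Spin syntax.)

MASK = 0xFFFFFFFF

def u32(x):
    """Wrap any integer to 32 bits, the way Spin's math effectively does."""
    return x & MASK

def as_signed(x):
    """Interpret a 32-bit value as two's complement."""
    x &= MASK
    return x - 0x1_0000_0000 if x & 0x8000_0000 else x

# repeat                 ->  while True: ...
# repeat 10              ->  for _ in range(10): ...
# repeat i from 1 to 10  ->  for i in range(1, 11): ...

total = 0
for i in range(1, 11):          # "repeat i from 1 to 10"
    total = u32(total + i)
print(total)                    # 55

# Rollover with no surprises: adding 1 to the largest 32-bit value wraps
# to 0, and 0xFFFFFFFF read back as signed is -1.
print(u32(0xFFFFFFFF + 1))      # 0
print(as_signed(0xFFFFFFFF))    # -1
```

The point of the masking helper is the "no second-guessing" behavior described above: every value is carried as 32 bits, and rollover is predictable rather than a type-dependent surprise.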
So it almost harkens back to the Apple II. If I'm understanding correctly, it's a tokenized language that's interpreted as bytecode on the device itself?
That's right.
So it's not compiled.
Right. But on the next chip, I mean, it's going to be a while before we get into the software side of it, we could make it compiled too. I just did a bytecode interpreter on the Prop 1 because we had 32K bytes of RAM, and a bytecode interpreter can be very efficient,
you know, overall memory-wise.
When you compile code, it tends to get a little more verbose.
But bytecodes can be very, very dense.
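To see why bytecode stays dense, consider a toy stack-machine interpreter in Python. The opcodes here are invented for illustration; they are not the Spin interpreter's actual bytecodes.

```python
# Why bytecode can be dense: each operation is a single byte (plus small
# immediates), executed by a dispatch loop. Opcodes invented for illustration;
# these are not the real Spin bytecodes.

PUSH, ADD, MUL, DONE = 0x01, 0x02, 0x03, 0x00

def interpret(code):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:                 # next byte is a literal operand
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == DONE:
            return stack.pop()

# (2 + 3) * 4 fits in nine bytes of bytecode:
code = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, DONE])
print(len(code), interpret(code))      # 9 20
```

Compiled native code for the same expression would typically take several multi-byte instructions, which is the memory trade-off being described: the interpreter loop costs speed but keeps programs small enough for 32K of RAM.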
How did the Propeller come out of the BASIC Stamp? Are they related?
Well, in that they are designed to be easy to program and they are kind of practically chip-level systems, they're similar. But the innovation of the Propeller is that it allows you to write eight different programs that run on eight different processors that share a global memory.
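As an analogy for that model, here is a Python sketch where threads stand in for the eight cogs and a shared list stands in for the hub memory. This is purely illustrative: real cogs are independent processors given round-robin access to the hub in hardware, not software threads coordinated by a lock.

```python
# An analogy for the Propeller's model: eight independent programs ("cogs")
# that each run their own code but share one global ("hub") memory.
# Python threads stand in for cogs here; this is an illustration, not how
# the silicon works (real cogs get time-sliced hub access, not a lock).

import threading

hub = [0] * 8                   # shared global memory, one slot per cog
hub_lock = threading.Lock()     # stands in for the hardware hub arbiter

def cog(cog_id):
    # Each cog runs its own program; here it just computes something and
    # writes the result into shared memory where the others can see it.
    result = cog_id * cog_id
    with hub_lock:
        hub[cog_id] = result

threads = [threading.Thread(target=cog, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(hub)                      # [0, 1, 4, 9, 16, 25, 36, 49]
```

The payoff of this layout is in the next point: each asynchronous event can get its own dedicated processor instead of fighting for time on a single thread.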
And so for doing real-time stuff, or practically any kind of real-world application, you have to deal with asynchronous events that are never going to be handled completely gracefully by a single-threaded architecture, particularly if they're high bandwidth. So it's a platform that permits you to easily develop those kinds of systems, but with the ease of something like a BASIC Stamp.
Yeah, I do look back with fondness on the BASIC Stamp, because the BASIC Stamp 2 was the first microcontroller-y thing that I started playing around with.
You got a dev kit to take pictures from your airplane?
Well, I think I originally got it to make a little robot, and then that project kind of dwindled. But then I later used it on a radio-controlled plane, where I hooked a servo receiver output to it so that the BASIC Stamp would control a camera based on my RC controller.
And I also think I threw an altimeter on there.
So it kept track and logged altitude
and took pictures and controlled the camera,
and it was pretty cool.
That was a while ago.
Yeah, that was at least...
Pre-2000?
Nah, maybe 10, 12 years ago.
Yeah, that sounds like a fun application.
You had drone before drones were cool.
But the cool thing about it was it was simple.
It was easy to program.
The language was clear.
But it also still taught you those concepts of this is how a microcontroller works.
It has these ports, and its data's over here, and its code's over here,
and a lot of those concepts follow on to all the other microcontrollers
that I've used later on that people tend to use in commercial projects.
But I really enjoyed working with it.
Oh, go ahead.
I think the BASIC Stamp was sort of the Arduino of its day. Right now, Arduino is so popular, and that was what the BASIC Stamp was, you know, it was the way to get started.
Yeah, I'm sure I've heard the Arduino guys say a few times that they had been using the BASIC Stamp.
But in Europe, I mean, there's horrendous markup and value-added tax on everything.
It was costing them $100 to put together a basic stamp in Europe.
And this was buying parts through distribution.
And I mean, here, this was based on a chip that we were selling stateside for like $6.
But it was costing them $100. So that was, I think, a motivation for them to make something alternative that they didn't have to acquire through any special distributor.
And they just wanted something that would be functional but a lot cheaper than a basic stamp was in Europe.
They sure took a different tack, where the BASIC Stamp and the Propeller are very clear
about their microcontroller nature.
I mean, writing in bytecodes, of course, you know this is what the processor's doing.
The Arduino hides all of the complexity as much as it can, which is nice for beginners, but really tough when you get past that beginner level. When you want to get into the complexity, you have to break down the Arduino barriers.
Yeah, well, that seems to be, I mean, the nature of all modern things seems to be that way.
And I kind of liken it to, let's
say you want to build a custom home, right? And you can only shop at Home Depot. So you have to,
you know, if Home Depot doesn't sell it, you can't use it in your house, but you might have a need
for something that, you know, if it was just a little different, but you're stuck using these pre-supplied parts
and that's all you've got and that's all you're going to get.
But if you have a system that allows you to go all the way to the bottom level,
all the way down to the hardware at the clock level,
and you could define what you want to have happen,
then you always have the ability to do something custom
and you're never going to get marooned on some island where you've hit the limit. You just can't go any further.
But all modern stuff to me seems to have taken this new paradigm where you're building out of pre-configured blocks that have great curb appeal, but when you get into them, there's some facet of their behavior that's not quite what you want, and it's impossible to get around that. You've just got to live with it. So I think a lot of these makers today are making stuff where they just live with a certain degree of, how would we call it, inefficiency. It's not doing really what they'd like it to do, but that's what they've got, and I think they just accept that.
You know, I don't know this authoritatively, but I suspect, from my experience using everything else, like my phone and my computers and what I hear through the grapevine, that everything is in some state of brokenness, and they're shoveling all this brokenness together and trying to realize things that should ideally be super reliable. But they've accepted that reliability is just not going to be on the program, so don't worry about it, just make it work 70, 80 percent, and that's enough, and that's all that's possible, because that's just how things are. But to me, I can't work like that. I have to be able to go all the way down and make what I need, even if it means, for the custom home, making my own steel foundry, where I can make my own sand castings, where I can pour my own components to make the fixtures that I really wanted in, say, the kitchen.
You're really a first-principles sort of guy.
I guess so, yeah.
I guess you could say that.
Yeah, and to be deprived of the ability to do that, to me, just becomes really uninteresting.
I do think there's a balance.
As somebody who comes new to the field, it's sometimes nice to be able to just go to Home Depot and buy it.
Yeah.
At least until you understand what it is you're buying.
Well, it depends on what your goals are. Some people are very project-oriented, and they want their thing to work without necessarily understanding every last bit of how it works.
And then, I know I had this problem in school with physics.
That's true.
You know, I'd be assigned a problem, and they'd give us some machinery to solve it, but I wouldn't trust it, because I couldn't go back to first principles; they hadn't taught enough for me to understand what was really going on. And so I totally understand that instinct.
And I feel that way about computers as well.
But sometimes you end up with practicality.
I have to get a job done, so I can't understand this all the way down. And like you said, there's a balance.
Well, yeah, I think market pressures drive things to be the way they are.
And what to me seems like a compromise might, to a practical engineer, just be reality that shouldn't even be questioned.
Just deal with it, live with it.
There's no other way.
And accept it and do your best and move on
because tomorrow we've got to go do something different.
Well, I think speaking of tomorrow and doing something different,
it's getting a bit late.
And so I'm thinking about wrapping it up.
Do you have any thoughts you'd like to leave us with?
I hope people hear this
and they get maybe something useful out of it.
But I really love being able to get down to the bottom level
and to make things reliable.
And I'll tell you, there's a fantastic satisfaction in knowing that you can understand and get down to the first principles and work from them when you need to.
And that's kind of what motivates me.
I just love working like that, and I think other people would too. But I suppose that this has been off the table for so long that people don't remember the simple joys of working on an Apple II. The world's changed, and I might just be an anachronism at this point that's irrelevant.
I sometimes wonder,
but I find great satisfaction in working with the basics.
Well, I kind of disagree.
I think there are some of us who still push for understanding,
at least remembering to understand how the basics work.
And there's a lot of makers out there who are breaking stuff down to its parts
to figure out how things work.
So I don't think that's completely lost.
Yeah, I i hope not i think the human spirit wants to do those kinds of things um so i suppose it's kind of a timeless
matter that you know something can be made that just works properly and works consistently and
i don't know that that kind of thing will ever go completely out of style we have some friends who
have an audio company called Pass Labs,
and they make really high-end audio equipment.
And they said a while ago when the CD thing was coming up
and people were making digital processors
that they were tempted to kind of start using FPGAs
and digital signal processors
and make processing stations before DACs.
But they realized, in his words,
that that would have just been the slow road to hell.
What they did instead is they stuck with the basics.
They've always made really good amplifiers
and preamps, and they have a good following.
They have amplifiers that are like,
I think he said their latest is like $84,000 a pair,
and they sell these things.
And people really like to have something of basic quality that works.
And had they gone off and chased this digital thing, they could have wasted their energies.
But they stuck with what actually is fundamentally good and timeless,
and they're still operating, and they have a successful operation.
I think there are people like that,
and I'm happy that there are people who are looking to make it work.
So thank you for being here.
Thank you very much for talking to us.
Well, thanks a lot for your interest.
It's been a pleasure, and maybe we'll talk again sometime.
Thanks, Chip.
Thank you.
My guest has been Chip Gracey, designer of the Propeller,
founder and director of research and development at Parallax.
And as always, thank you to Christopher White for co-hosting and producing,
and thank you to you for listening.
If you'd like to say hello, hit the contact link on embedded.fm
or email us, show at embedded.fm.
We do like to hear from you.
And so that's it for the week.
A final thought for you from Norman MacCaig.
I don't know who he is, but I liked what he said.
However, I learned something.
I thought that if the young person, the student, has poetry in him or her,
to offer them help is like offering a propeller to a bird.
Oh, right, he was a poet, actually, now that I think about it.
Yeah, that makes sense.