ACM ByteCast - H.-S. Philip Wong - Episode 39
Episode Date: June 20, 2023

In this episode of ACM ByteCast, Bruke Kifle hosts H.-S. Philip Wong, the Willard R. and Inez Kerr Bell Professor of Electrical Engineering in the School of Engineering at Stanford University. He is also Chief Scientist of the Taiwan Semiconductor Manufacturing Company (TSMC), where he was previously Vice President of Corporate Research. His works have contributed to advancements in nanoscale science and technology, semiconductor technology, solid-state devices, and electronic imaging. Philip's current research covers a broad range of topics including carbon electronics, 2D layered materials, wireless implantable biosensors, directed self-assembly, device modeling, brain-inspired computing, non-volatile memory, and 3D system integration. He is an IEEE Fellow and has received numerous awards, including the J.J. Ebers Award, the IEEE Electron Devices Society's highest honor recognizing outstanding technical contributions to the field of electron devices that have made a lasting impact. Philip starts by sharing how he entered the field of electrical engineering, fueled by an interest in science and physics. He talks about the key challenges of scaling down technologies and what he believes will be the next major technological breakthrough, which will create exciting opportunities for those just joining the industry. He discusses the potential of drawing inspiration from biological systems in designing better computing systems and developments in non-volatile memory. Philip also talks about exploring the practical applications of technology in his roles as Faculty Director for Stanford's NanoFab Lab and Stanford SystemX Alliance, as well as at TSMC. Finally, he offers advice for aspiring engineers and touches on the ethical and environmental implications of some of the biggest emerging trends.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the
world's largest education and scientific computing society.
We talk to researchers, practitioners, and innovators who are at the intersection of
computing research and practice.
They share their experiences, the lessons they've learned, and their own visions for
the future of computing.
I am your host, Bruke Kifle.
Today's episode will take a closer look at the exciting and ever-evolving field of
semiconductor technology and its impact on our daily lives. From smartphones to laptops,
from electric cars to smart homes, semiconductor technology is at the heart of the devices that power our world. And advancements in the field have continued to drive the creation of
smaller, faster, and more efficient electronic devices. Semiconductors are truly the building
blocks of modern technology, and they're really shaping the way we live, the way we work, and the
way we interact with the world around us. Today, we have the honor of speaking with one of the
foremost experts in the field, Dr. Philip Wong. Dr. Wong is a renowned professor
of electrical engineering at Stanford University and the chief scientist of the Taiwan Semiconductor
Manufacturing Company, also known as TSMC, the world's largest semiconductor foundry.
Prior to Stanford, Dr. Wong was with IBM Research for 16 years.
More recently, from 2018 to 2020, he was on leave from Stanford to serve as Vice President of Corporate Research at TSMC, and since 2020 he has remained its Chief Scientist.
He is a fellow of the IEEE and has received numerous awards for his research contributions
to solid-state devices and technology.
He is the founding faculty co-director of the Stanford
System X Alliance, an industry affiliate program focused on building systems,
and the faculty director of the Stanford Non-Volatile Memory Technology Research Initiative.
And finally, the faculty director of the Stanford Nanofabrication Facility, a shared facility for
device fabrication on the Stanford campus that serves academic, industrial, and governmental researchers across the U.S. and around the globe.
Dr. Philip Wong, welcome to ByteCast. Thank you very much for this introduction and the
invitation to speak with you. Yeah, we're very excited to have you here. You know, I want to
start off with a pretty open-ended question that I like to ask most people. You know, you have such a remarkable and very interesting career that spans, you know, academia, research, and industry, and you still have deep engagements in industry. Describe
some of the key points in, you know, your personal and professional career and background that have
ultimately led you into the field of computing and motivated you to pursue your field of study today?
Yeah, that's a great question.
Well, I came into this field kind of by chance; it wasn't really planned.
I was interested in the physical sciences, physics and mathematics and things like that.
And so during my undergraduate years, I got interested in solid state physics and solid state electronics.
But I didn't want to just do deep, ivory-tower physics type things.
I wanted to make something that is of practical interest.
So I went to electrical engineering.
And in that particular area, at that time, it was the beginnings of what is known now as semiconductors or microelectronics
and that piqued my interest, because it is maybe kind of like a cross between solid state physics and the practical application of that solid state physics. Because, as you mentioned earlier in this podcast, semiconductors are at the heart of everything that we do, more so now than before. But even back maybe 20, 30 years ago, there were already indications that many electronic products were going to be further improved or enabled by advances in semiconductors. So that's how I got
into this field. And I was very lucky because it wasn't expected some 30, 40 years ago that semiconductors would make such a big impact in society.
But it turns out to be the case. So I was really lucky.
Oh, that's very remarkable. Were there any aspects of your personal upbringing or background that motivated some of your scientific interests? Well, in semiconductor devices, I work on device fabrication and device physics, and a lot of device physics and device fabrication involves materials and chemistry. And I was kind of interested in chemistry and materials, and that fits pretty well with this broader interdisciplinary field.
And as I moved forward in my career, the fundamental materials and chemistry led to advances in devices, and advances in devices lead to new circuits and new systems, and that has a broader impact.
So throughout my career, I started from really more basic physics type things and gradually moved up in terms of what engineering people call the hierarchy of abstractions, moving further up the abstraction layers.
I see.
Yeah, I think there's something very interesting to be said about the cross-disciplinary nature of your work. But, you know, we'll get into
that shortly. You know, one thing that I do want to touch on is the semiconductor industry relies
heavily on, you know, nanotechnology to continue shrinking, you know, the size of transistors and,
you know, other components on computer chips. And that's ultimately at the heart of what's leading to faster and more efficient devices.
But I think many people may not actually appreciate how difficult, but also how remarkable it
is to deal with materials and structures on that scale.
We're talking, if I remember correctly, this is one billionth of a meter, right?
And I'm sure this exposes many unique properties and functionalities that maybe you don't see in bulk materials. So what are some of the key challenges, but also
the opportunities in scaling down electronic devices to the nanometer scale? And how do you
approach them from both a scientific point of view, like you said, but also an engineering perspective?
You point out a very interesting aspect, which is that devices are getting very, very small right now. We are at the nanometer and atomic scale. And a nanometer is one billionth of a meter. So that's really, really small. And if you cut up a computer chip today and look under a very powerful microscope, you can see the individual atoms. In a typical transistor today, you can actually count the number of atoms that you have in the transistor. That's really amazing. At that nanometer scale, interesting physics comes about. And the physics that is in operation for larger, bulk, macroscopic-scale materials and devices will change as you go down to this small scale. And that gives rise to a lot of interesting things, both in the physics and in the applications. You know, for the better part of the past century, people were interested in high-energy physics to look into the deep physics of how the atoms behave and how the electrons behave and so on. And that oftentimes involves very high energy, and you get accelerators, building huge accelerators to hit particles and see how they behave when you hit them with high energy.
And that gives you insights into the basic physics.
But at the nanometer scale, many of the physics are really beautiful.
And so in recent years, even in the physics world, a lot of the interesting physics shows up in what we call solid state physics, namely nanometer-scale physics, the physics that governs the behavior of these very small nanometer-scale devices. So you see, for example, if you look at recent Nobel Prizes, many people got Nobel Prizes for their study of these kinds of nanoscale phenomena.
So that's kind of interesting from a fundamental physics point of view.
But beyond that, in the practical application, many of these nanoscale physics are actually used today, every day, in the electronic devices that we have. I suppose many of your audience have a cell phone or use a computer, and those cell phones
and computers today, you typically have data storage devices called flash memory.
And that flash memory operates on quantum mechanics, and those things happen only when you are at a nanometer scale. So that deep physics does have many practical applications, and the transistors that we have today in our phones and computers are of such nanometer scale that we cannot possibly understand how they work unless we invoke the nanometer-scale physics that is in operation.
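To make the point about countable atoms concrete, here is a rough back-of-envelope sketch in Python; the numbers (silicon's approximate atomic density, and a 10 nm cube standing in for a transistor's active region) are illustrative assumptions, not figures from the conversation.

```python
# Back-of-envelope: roughly how many silicon atoms fit in a
# nanometer-scale transistor? (Illustrative numbers only.)

ATOMS_PER_CM3 = 5.0e22    # approximate atomic density of crystalline silicon
NM_PER_CM = 1.0e7         # 1 cm = 10^7 nm

atoms_per_nm3 = ATOMS_PER_CM3 / NM_PER_CM**3   # ~50 atoms per cubic nanometer

feature_nm = 10                    # assume a 10 nm cube as the active region
atoms = atoms_per_nm3 * feature_nm**3

print(f"~{atoms_per_nm3:.0f} atoms per nm^3")
print(f"~{atoms:,.0f} atoms in a {feature_nm} nm cube")
# ~50 atoms per nm^3 and ~50,000 atoms in the cube: few enough to
# literally count under a powerful microscope, as Philip describes.
```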
So lots of interesting scientific implications, but also very real practical applications to, like, our day-to-day use cases. You know, I'd be remiss, in this conversation about
transistors and integrated circuits and semiconductors, if I didn't bring up Moore's
law, right? Yes.
You know, it's this observation, for those who don't know, that the number of transistors on a dense integrated circuit doubles roughly every two years.
And it's kind of been this very interesting, self-fulfilling prophecy, but, you know, quite
honestly, the driving force behind a lot of the rapid advancements that we're seeing in
the industry for almost half a century, right?
So, you know, one thing to call out, though, is as the complexity of the technology continues to
increase, and we see limitations in actually scaling down the size of, you know, transistors,
there's actually growing concern that, you know, the end of Moore's law might be near.
And at the end of the day, you know, it's not a law of physics. It's, you know, just a relationship that quite honestly may not hold forever. So in your opinion,
what do you see as the next technological breakthrough that will actually drive the
industry forward and hopefully prevent the end of Moore's law? Yes, that's really a question
that I've always been asked, and I'm glad that I have an opportunity to kind of talk about it.
There are two things I wanted to bring up. One is that Moore's law is actually a very interesting phenomenon, particular to the semiconductor industry: the industry is able to make predictions about the future, namely doubling the transistors every two years and so on.
And this is very unique across different industries.
I would venture to say that there is no other industry that has these kinds of long-term predictions that hold true for decades and decades.
And if you look into other industries, like automobiles or aircraft and things like that, other industries don't have these kinds of predictable advancement of technology. And that is very unique to the semiconductor industry. And as a result of that, it really propelled the entire industry toward a very rapid pace of innovation.
Because then everybody, up and down the what people call the value chain, from the material suppliers to equipment manufacturers,
to people who actually design and make the chip, to the users of those chips,
they all have a kind of like a roadmap of what will happen in the future.
So therefore, they could make plans for the future very accurately.
And so this kind of situation leads the whole industry to advance not only at a regular pace but also very rapidly, because we all know what our competitors are doing. And therefore, if you are a competitive company or a competitive researcher, you will try to outdo everybody else, right? And because everybody now knows what the general direction is, everybody wants to try to outdo that. And therefore, it leads to a very rapid evolution of the industry. And if you look at transistor miniaturization, which is one of the main driving forces behind Moore's law, doubling the transistors every so often, then everybody knows what to do.
And so therefore, we have a very well-defined path going forward
for the last five decades or so.
And as a result of that, the ways to go forward is clear.
And everybody has worked towards a common goal.
And the industry moved forward very fast.
Now, we are kind of at the end of this.
So it's kind of like if I would draw an analogy, it would be like walking inside a tunnel.
There's no other way you can go.
Just go forward.
And that makes it single-minded and therefore very easy to go forward, because you don't have to think about something else. And the way to do it is to shrink in two dimensions. But of course, as mentioned earlier, shrinking in two dimensions does have some limits. We are down at the atomic scale, and as I mentioned before, if you cut up a transistor, you can count the number of atoms. If you shrink further, you cannot have half an atom, so you can naturally see there is a natural limit there. So this tunnel that we've been walking inside is coming to an end. Now, you can think of this in two ways, half full or half empty.
You can think, oh, we're at the end of the tunnel. That's the typical kind of reaction. But if you're at the end of the tunnel, that means you are going out of the tunnel.
And there can be many, many possible paths going forward other than two-dimensional miniaturization, and that is really exciting for researchers and people who want to get into the field: there are plenty of paths going forward. We don't know which one will work, but there are many, many possible paths, unlike in the past, when there was only one path forward. And that's where the excitement is. If you're an engineer or a researcher, you would rather have a lot of options to figure things out than have one thing to do, right? So I think this is a really exciting time. Right now we are at the cusp of a new major revolution that can give us future electronic systems that are way better than what we have before.
Even though, of course, I should say that the path going forward is unclear.
There are many paths forward.
The opportunities are exciting.
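The force of a prediction like this is easiest to see as plain arithmetic. A minimal sketch of the doubling-every-two-years math, with the horizons chosen only to illustrate the five decades mentioned above:

```python
# Moore's law as arithmetic: transistor counts doubling every two years.

def transistor_multiple(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

for years in (10, 20, 50):
    print(f"after {years:>2} years: ~{transistor_multiple(years):,.0f}x more transistors")
# after 10 years: ~32x
# after 20 years: ~1,024x
# after 50 years: ~33,554,432x
```

Compounding like this, sustained for decades, is what made the shared roadmap so powerful.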
So certainly as you reach this saturation point,
there's definitely a lot of uncertainty.
But as you said, it seems like there
are many new, exciting, open research directions that can unlock or bring a lot of value to the
field. So I think it's going to be quite exciting to see how some of the current research that,
you know, your group and labs across are working on will help drive the future.
The optimism is really based on the demand signal that we see from society.
And when you say, okay, something is saturating, okay, well, there's nothing more to do.
It's saturated already.
Why are you working on it, right?
But then the demand signal is pretty high.
If you look at society's demand, many of the things that we want to do, from self-driving cars to high energy efficiency or AI systems and so on, really depends on continued advancement of technology.
I say continued advancement, not just what we have today, but you need to have continued advancement in order to fulfill our expectation of what electronic systems would do or computing systems in general
would do.
So the demand signal is high.
And so therefore, when there's demand, there must be innovation.
Yes, yes, certainly, certainly.
You know, that's very interesting.
And I'm sure as part of this continued research innovation, there will be a need to draw inspiration
from different fields. You talked about earlier how
your personal interests have been motivated or influenced by your interest in chemistry or in
biology. So within this sort of field of nanotechnology, how do you draw inspiration
or insights from biological systems, for example? Or are there, you know, other fields, whether it be material science or physics or processes in those fields that guide your research on nanoscale devices
and systems? Yeah, that's a very insightful observation. You know, these days, many of the advances occur at the joining of two or three different fields, capitalizing on the good properties or the advantages of different disciplines and putting them together. One of the things that I think would be exciting to do going forward is to be able to do really energy-efficient computing, because everything we do involves computing, right?
And now, where does the optimism come from?
Today, if you look at computing at a data center level or running an AI training model
type applications, you're talking about megawatts of power required to power up a computer that
will do these kinds of computation.
But you and I know that the human brain
is way more energy efficient,
and the human brain operates on about 20 watts.
So there is a million times difference in energy efficiency
between what we human beings do today, every day,
and what our human-designed computer can do today.
So there's room for a million times of improvement.
That's a tremendously vast space for improvement. Now, how do we get there?
We don't know.
And that's where the exciting things are.
And some people are thinking that maybe we can draw some inspiration from how we understand the brain works,
and then say maybe we can design computing systems based on those principles.
Now, this is a very active field of research that I myself have been working on
with a number of collaborators, both at Stanford and also outside of Stanford.
But the interesting thing is that our understanding of how the brain works is still very limited.
It's more or less like you want to take apart a computer chip
and look at the chip and see how the chip works.
And we know that this is almost impossible, right?
So right now we're doing exactly the same thing with the brain.
We look at the brain and see how it works
and try to figure out how it is wired
and what different units are doing and so on, like searching in the dark.
This is a very long way to go.
And we're at the point where we capitalize on, we make use of, our understanding of neuroscience and draw some inspiration from it. Maybe we could design electronic systems that take some inspiration from how the brain works and get energy-efficient computing out of that.
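To put that million-times figure in plain arithmetic, here is a minimal sketch; the 20-megawatt data-center number is an illustrative assumption consistent with the "megawatts" mentioned above, not a measured value.

```python
# Energy-efficiency gap: the human brain vs. a large AI training cluster.

BRAIN_WATTS = 20.0         # rough power draw of the human brain
CLUSTER_WATTS = 20.0e6     # assume ~20 MW for a large training data center

gap = CLUSTER_WATTS / BRAIN_WATTS
print(f"efficiency gap: ~{gap:,.0f}x")   # ~1,000,000x of headroom
```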
Hmm, I see. So I think you're referring to, you know, neuromorphic computing. So beyond the energy efficiency, beyond faster, more power-efficient versions of existing computing tasks, are there other implications or application areas, whether it be, I don't know, in medicine or in autonomous systems? Are there other use cases of, you know, this idea of neuromorphic computing? Oh yeah. Well, in addition to neuromorphic computing, where the way we understand the brain would help us design better computing systems, the way we fabricate our electronic systems and the way we understand how they communicate will also help our understanding of biology. Many of our colleagues are working on bioelectronic systems, or, for example, brain-machine interfaces. And I have a pet project going on with putting a chip inside a living cell. Chips can be very small now. You can make very small chips, and you can actually build a bunch of electronic circuits so small that they can fit inside a cell. So biology can help design better electronic systems, and the way we understand how electronic systems work and how electronic systems are built can also help us understand biology.
So that's remarkable. You just said, you know, we're able to have chips in a cell.
That's a pretty significant milestone. Like what does this mean in terms of
therapeutics or medicine or, you know, personalized care? This seems like actually a pretty
significant achievement, I would say. Yeah, I'm not saying that we have done it,
that we have accomplished that yet. We are moving towards that goal, right?
And our goal is really to...
Because if you think about altering
or monitoring cell physiology today,
the thing that we can do is to make a different cell,
for example, through biological means,
or find ways to put chemicals inside the cell
that will affect the cell physiology.
But you can also imagine that you can use electrical means to change the way the cell behaves.
So, in addition to making a new cell, which is a difficult thing to do, or putting chemicals in there, which is a different way, one of the things that we think would be interesting
is to be able to use electrical means to alter the physiology of the cell
and then use that capability to study how the cell works.
I see. Very interesting.
And when you pursue these lines of research,
are you usually motivated by the potential practical applications and work backwards?
Or is it the scientific or research exploration eventually leads to practical applications, whether it be in healthcare or medicine or computing?
What sort of drives the direction of research?
Well, I guess it's a combination of both.
Oftentimes, when you work on something totally new, there is no application yet, right? Because we haven't seen anything like it. And that part will be driven by, hey, we can do this. And if we can do this, that would kind of revolutionize the way we see things or understand things, or give us a tool that could help us understand things that we were not able to understand before. So that would be more of a discovery-type investigation. But at the same time, these kinds of investigations, at least for me, because I'm an engineer, a natural engineer, only go so far, because at some point you need to find an application, a use case, to set the direction of your research. Because if you're just discovering, there are so many things you can discover that you can really get lost, right? It's really like walking into a forest and not knowing where to go. But putting a target application in mind helps drive the direction of the research, and that actually moves the research forward better and faster. So you can go on for a little while with the curiosity-based type of investigation. But in my opinion, eventually, sooner rather than later, you need to find an application
as a driver for your research direction. I see. I see. So in line with that, I know one area of
work that is an important research focus for you is non-volatile memory. Maybe can you start
off by describing, and it is, you know, an area of work that does have many practical applications,
like you alluded to earlier with, you know, smartphones or laptops, but maybe can you
describe what exactly is non-volatile memory and how do you envision, you know, the use of
non-volatile memory in future computing systems?
Great question.
First of all, for the audience, non-volatile memory, let me decompose it.
Volatile means it disappears, right?
Non-volatile means it doesn't disappear.
And memory means you store some information, such as I remember what I did yesterday, right?
Store some information.
So non-volatile memories are electronic devices that store information.
Some of them store information at a shorter timescale,
like in less than a second or so.
Some store information much longer, like in 10 years.
So the information we want to store on our computers and phones,
we want it to stay for a long time.
But, for example, the keystroke that I typed just a second ago, I need to remember it for a second, but after a second I don't need it anymore. So there's a variety of memory, based on what we need to do. And in fact, the most numerous electronic devices that humans have made are these non-volatile memories, because we have a lot of them. A modern cell phone would have hundreds of gigabytes of non-volatile memory, and that means hundreds of 10 to the ninth bytes, times eight, of these kinds of devices on your phone.
There's a lot of them, right?
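As a rough sanity check on that count, a small sketch follows; the 256 GB capacity is an arbitrary example, and one bit per memory cell is a simplification (modern flash typically packs several bits into each physical cell).

```python
# How many stored bits does "hundreds of gigabytes" of flash imply?

capacity_bytes = 256e9       # assume a 256 GB phone
bits = capacity_bytes * 8    # 8 bits per byte

print(f"{bits:.2e} bits")    # ~2.05e+12 bits
# At (simplistically) one bit per cell, that is on the order of a
# trillion memory devices in a single phone.
```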
So, okay, that's a very important part of the general information and communications technology ecosystem of electronic devices. Now, we're already using these non-volatile memories. And going forward, what do we use these non-volatile memories for? It's improving the energy efficiency of computing.
It's improving the energy efficiency of computing.
Computing requires you to do computing on data.
Where do the data come from?
Data is stored somewhere, right?
And those data are often stored in memories. So now, currently, a lot of the memory resides on a separate computer chip from the chip that does the computing.
And the act of moving the data from one chip to another chip consumes not only energy and power, but also incurs time.
It takes time to move from one place to another.
So the fact that the memory chip is in a separate physical location from the computing chip, which is the situation by and large today, results in a lot of waste, both in energy and in time, reducing speed.
So the research going forward, what a lot of researchers in the field, including us, are working on right now, is how to put this memory right next to the computing chip, or right on top of it.
So what we're working on
is building three-dimensional chips.
All chips are two-dimensional right now, similar to the houses in Los Angeles. They're all sprawl, urban sprawl, spread out over miles and miles. And the way we get more and more of these houses is to shrink them, two-dimensional miniaturization, building smaller and smaller houses.
At some point,
people don't want to live in smaller houses anymore.
So what do you do?
You go to Manhattan and build things on top of each other,
and therefore you gain space to do things.
So going from Los Angeles to Manhattan,
this is what we're doing right now for the computer chips.
We try to build computer chips that are three-dimensional, with multiple layers of computing devices and memory devices on top of each other. And that is the bulk of my research right now.
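To see why putting memory right next to (or on top of) the logic matters, here is a hedged sketch comparing the energy of an arithmetic operation with the energy of fetching its operands from off-chip memory; the per-operation energies are illustrative assumptions in the spirit of commonly cited estimates, not measured values.

```python
# Why moving data can cost far more than computing on it (illustrative).

ADD_PJ = 1.0             # assume ~1 pJ for a 32-bit on-chip addition
DRAM_FETCH_PJ = 640.0    # assume ~640 pJ to fetch 32 bits from off-chip DRAM

print(f"one off-chip fetch ~ {DRAM_FETCH_PJ / ADD_PJ:.0f}x one addition")

# If stacking memory on top of the logic cut the fetch cost to, say,
# a tenth of the off-chip figure, energy per (fetch + add) drops sharply:
stacked_fetch_pj = DRAM_FETCH_PJ / 10
print(f"off-chip: {DRAM_FETCH_PJ + ADD_PJ:.0f} pJ per op, "
      f"stacked: {stacked_fetch_pj + ADD_PJ:.0f} pJ per op")
```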
I really love the LA-to-Manhattan analogy. I think that really captures the line of work in a very easily understandable way. But yeah, I think it's very exciting to see the efficiency gains, both from a performance point of view and in terms of energy and time.
So it certainly seems like a very exciting application for the future of computing systems. ACM ByteCast is available on Apple Podcasts, Google Podcasts, Podbean, Spotify, Stitcher,
and TuneIn. If you're enjoying this episode, please subscribe and leave us a review on your
favorite platform. You know, looking forward, I kind of want to pivot now and talk a bit more about some of your work bridging, you know, industry and academia.
One of the roles that you hold is, you know, as the faculty director of the nano facility, the nanofabrication facility at Stanford.
And what makes this very interesting is that it's a shared facility that's serving government, industry, academia researchers globally.
This seems like a very difficult undertaking.
So how do you manage the demand and access to this state-of-the-art facility?
And outside of your responsibilities as a researcher, as an academic,
what are some of the best practices and lessons you've learned
from running such a complex operation?
Yeah, that's a great question.
First of all, the most important ingredient is to have very highly skilled
and capable staff that runs the facility, and I really appreciate that.
And, of course, running such a facility requires a lot of resources
in terms of money and space and everything else. And strong support from funding agencies as well as the university is clearly instrumental in this.
Our facilities are supported in part by the National Science Foundation.
And, of course, our universities have also invested heavily into our facilities for both capital investment as well as operational
expenses and so on. So those are really kind of necessary conditions, but clearly necessary but
not sufficient conditions. And one of the key things about nanofabrication or semiconductor
manufacturing or fabrication is that many of these tools are rather complex and expensive.
And so it is very difficult for individual faculty or researchers to acquire enough of
these tools because you need not just one, you need a collection of these things to compose
a process.
Just like in a kitchen, you need an oven, you need a chopping board,
you need a lot of things, right?
So you can't just operate with one tool.
So in order to have these complete set of tools,
then you necessarily have to come together
and share the use of those tools
to number one, amortize the cost of those tools.
And number two, probably more important than the cost, which a lot of people
don't appreciate, the second part is that when people
come in and use these shared facilities, they
necessarily reside in the same facilities, they talk to each other.
And that's where the innovation comes in, because I have seen
many, many instances in which my students would come into the nanofabrication facility, do their work.
They meet other students, other researchers, postdocs, and other industry researchers.
They talk about things when they meet each other, and new ideas come about.
And that is clearly important.
So a shared facility, such as the nanofabrication facility at Stanford, is not just a collection of tools, but rather a complete community and ecosystem in which researchers not only share their information,
share their knowledge about the fabrication techniques and processes, but also
is really a fertile ground for innovations, for new ideas. And many, I would say, many, many papers
have come about from students meeting each other in the nanofabrication facilities and to say,
hey, why don't we do this together? This sounds fun.
There are many instances like this. Yeah, I think we've seen many great successes of these kind of
collaborations, you know, across academia, industry. So all that to say, I think you're
definitely advocating for the importance of, you know, collaborating between all different
stakeholders and in the hope of fostering innovation,
but also advancing R&D and computing.
Absolutely.
Just creating this environment and ecosystem for innovation to occur.
This is, apart from the cost amortization, I would say this is even more important than
the cost amortization.
Because money you can always get, but collaboration is hard to come by.
Certainly. I think that's a great quote.
You know, I think you wear multiple hats, which I think is very remarkable.
You, on one end, have a role in academia as a professor, as a researcher, as an advisor. But in addition to
serving as a faculty director for the Nanofab Lab or the System X Alliance, you also serve in your
capacity as chief scientist at TSMC. So you have a deep engagement within industry as well. So
how do you find your role in industry informing some of your research directions or
your teaching in academia and vice versa? Do you find some of your engagements and learnings in
academia as a researcher, as a professor informing some of your roles in industry?
Especially in the engineering field, which I am in, in electrical engineering. Because engineering is about the practical application of technology, coming up with new technologies, understanding basic scientific discoveries and translating them into practical technologies. In that arena, being able to clearly understand how industry works, what they're looking for, and what their pain points and bottlenecks are in bringing new products to market, that insight is clearly important. And that insight feeds back not only into the research at universities, but also into teaching.
I was just teaching my class on transistor design yesterday,
and I was reviewing with the students the latest advances in transistor design.
And if I were not conversant about what industry is doing,
I wouldn't be able to do that because I just don't know, right?
But the fact that I am heavily engaged with the industry
allows me to impart that knowledge to the students.
And I think that is really important in that direction
from industry to academia, both for the research,
because on the research, you need to know where to go, right?
So both for the research and the teaching as well.
Now, at the same time, in the other direction, how would academic research impact industry? As I mentioned earlier, especially in today's environment, we are not quite clear what to do. In the past, we kind of knew what to do, so industry knew what the next step was, or even what the next three steps needed to be.
And so the need for academia was, of course, there, but not as high as it is today. Because if you ask people in the industry, do you know what we need in three, five, ten years? Most of them, I would say, will say, I actually don't know, because the path ahead is less clear. When we're out of the tunnel, we don't know which path to take. And that's where academia comes in, because academia is a place where you can explore a lot of things very quickly at very low cost.
And that's one.
And two, academia is filled with people who have no experience.
These are students, right?
And you may wonder, well, what do these people with no experience, what can they do?
Well, they do very interesting things because they have no experience.
They have no preconceived notion of how things should be done.
No constraints.
Constraints, yes. And therefore, they will come up with things that nobody in industry has thought of.
Because industry are used to think about things in a certain way.
And these students have no idea how people were thinking about it before, so they come up with very, very interesting things that nobody thought about. And that's where new ideas come from. Yeah, I think that's, you know, a perfect way to capture this idea. There is this notion of constraints in industry: there are objectives that you have to meet, there are business targets or organizational targets. Those kinds of, one may call them, constraints can stifle some of the progress or innovation or moonshot thinking. But I think you captured it perfectly: in academia, where presumably some of those requirements or constraints don't exist, that's where you can see the true success from an
innovation point of view. Absolutely, absolutely. Yeah, so in terms of looking forward to future directions, I think it's widely known that the COVID-19 pandemic caused significant disruptions to the global supply chain, particularly for the semiconductor industry. And I think it was
interesting because the pandemic sort of created this dual shock to both supply side and demand
side, right? You see a boost in the demand for these devices and products as people are shifting
to remote work. But then you also see a hit from a supply side on the global supply chain, right?
So, you know, moving forward, what do you think needs to be done to address these challenges and
ensure the industry's, you know, resiliency, but also continued growth and success?
Yeah, that's a really timely discussion here. You know, COVID-19 and also today's geopolitical situation have caused a lot of disruption in the global supply chain and have also raised awareness of supply chain resilience.
And you see a lot of regions and countries who want to be able to have local industry and things like that.
And that has several implications. One is that collaboration across countries,
across boundaries has become a lot more difficult. And hopefully, smart people will come up with
policies and ways to navigate around this so that cross-border collaboration can flourish.
Because, you know, knowledge knows no boundary.
There's no reason why one region knows everything, right?
So knowledge knows no boundary.
And in order to advance technology, which basically benefits the entire world (we want to benefit the entire world), we really need global collaboration. And we are hoping that smart people come up with policies and methods to enable this to continue. Secondly, we see very strong demand signals for continued advancement in semiconductor technology, because, as you mentioned earlier,
because as you mentioned earlier,
it's a foundation of almost everything modern society would do,
from solving energy sufficiency problems to food security to climate change.
We all need electronic systems that would help us do our job better.
So from that point of view, the demand signal is very strong.
And so we would need to have a rapid advancement in technology.
And now where do these advancements come from?
They come from people because the ideas come from people, the research
and developments come from people, and the manufacturing comes from people. So cultivating talent is probably one of the most important things that every country and region needs to do. And talent is clearly the driving force for technology
development going forward. So the most important thing for society to do is to ensure that we have a very healthy industry, so that young people who are contemplating a new career would consider going in this direction, because it has a healthy industry.
And having that healthy industry is a necessary condition for talent and workforce development.
I see. And I mean, you yourself actually play a very instrumental role
in that, right? As a professor, I'm sure you've taught and mentored many students who've gone on
to become, you know, successful researchers who've, you know, joined industry, who are entrepreneurs,
who are leaders in their own fields. So what are some of the skills and qualities that, you know,
you really look for, but also you try to cultivate in your students to ensure they are ready for that next stage?
And more generally, you know, what advice would you give to young aspiring engineers, scientists who really want to, you know, make an impact in the world?
First of all, technical excellence is really the necessary condition. We can think about higher-level things, such as solving societal problems and so on. But in order to solve societal problems, you have to have the technical expertise to do that.
So technical expertise, technical excellence is clearly the necessary condition, but it's not sufficient, obviously.
A sense of curiosity is important and an attitude of questioning what is normally done.
Is that really the way to go?
Some would call it questioning the status quo.
That is important because that's where new ideas come from. But as you question the status quo, you need to be sure about your understanding of the status quo. If you come up with something new, you need to be able to say how this new thing compares with the way we do things today.
And, you know, in order to make that comparison, you need to know exactly what is done today. A lot of people kind of miss that part, in the sense that, oh, I came up with something new. But okay, it's new, it's different, but is it better? If it's not better, then why is it good? So being able to understand the status quo is important, and being able to retain that level of curiosity is clearly important. That's from a technical point of view. And that is probably only a necessary but not sufficient condition.
Really, I ask all my students to maintain a broad view, a broad perspective, not only of the technology, but also of the applications and the impact. Because that broad view will often take you to places where other people will not go. They will not be aware that there's opportunity there.
So, for example, combining different technical areas, and keeping a broad view of the application space, because the application will drive your research direction.
So being able to have a broad perspective is important.
Going deep is good, but going deep in and of itself is not enough.
I see.
So technical depth and excellence, the intellectual curiosity,
the ability to question things, but to do that with a good understanding of the status quo.
And then finally, having this broad perspective or this broad view of what are the practical applications?
What are research collaborations? What are ways to intersect this line of work with other fields? Absolutely. Yeah. And also, I should mention, maybe I'll quote my former dean of engineering at Stanford,
Jim Plummer.
He said, engineering is a team sport.
If you're a loner, you won't make a lot of progress.
It's a team sport.
So you need to collaborate.
Excellent way to capture it. So looking ahead, what do you see as some of the most exciting research opportunities in the field of semiconductor technology?
I know we mentioned the dark tunnel and finally reaching the light.
You also discussed some of the work with 2D shrinking moving towards this LA to Manhattan analogy.
What are some of the exciting opportunity areas that keep you up at night?
Well, two things. One is what I mentioned earlier, building 3D chips, and coming up with a variety of device technologies that I would call application-domain-specific device technology. Let me explain. For the past few
decades, we have one device technology, silicon transistors, and that does everything from storing
the data to do the computing, to running your radio for the cell phones and things like that.
So one technology does everything. Now, of course, when you have one thing that does everything, that's wonderful.
But also there's inherent inefficiency.
It's more or less like driving an 18-wheeler truck every day because you expect to move from the East Coast to the West Coast one day, and you say, therefore, I drive this truck, even though I just drive it to buy groceries.
That is not efficient.
So this is what we have today.
But going forward, we demand extreme energy efficiency.
So therefore, we need different device technology to do very specific things that makes it very energy efficient.
If I want to just go to campus and teach a class, I ride my bike because it's way faster.
I can park my bike right in front of the classroom.
But I can't ride my bike to move my house.
You need a truck.
So we need to develop very specific, what I call domain-specific, device technologies to make these 3D chips, because we need extreme energy efficiency.
So that's from the kind of base fundamental device technology level.
At the higher level, from the how-to-build-a-system point of view, the people who develop semiconductor device technology really need to work very closely with the people who develop the applications, because how you develop the technology, due to the efficiency requirement, has to be closely coupled to how you're going to use it.
Whether you use this chip for an automobile, a medical device, or a wearable device, these have very different design points.
You've got to design the technology very differently.
And in order to achieve the highest energy efficiency, you need to co-design these kinds
of systems from device technology all the way to how you're going to use it in the system. So that kind of co-optimization requires people who would understand across what we call the
system stack, across different levels of abstraction from device technology to system design to
even software design.
So that is a big challenge for people in this field, is how could somebody comprehend so many things?
That's where the team sports come in.
You need people in a team who can talk to each other, who understand each other's languages.
So apart from the basic device technology, the other research direction would be closer and closer coupling between the user application and the fundamental device technology.
I see.
So domain-specific technologies and this idea of co-optimization.
One thing that I would love to just raise quickly is, obviously, we're in an AI arms
race, right? We're seeing rapid evolution
and growth with generative technologies. And I'm sure one of the biggest practical application
areas is accelerating or improving the efficiency of how we run these large-scale language models.
So where do you see the implication or the impact of some of the advancements from semiconductor technologies on accelerating this AI development that we're seeing in recent years?
The advances that we've seen in recent years in AI are actually really enabled by three things.
First of all, new AI algorithms and new architecture of doing the computation required for AI, certainly.
Second, availability of a large amount of data, because many of the AI machine learning models are trained on data.
So the second thing is availability of large amounts of data, enormous amount of data, such as all the data you find on the internet and data you collect wearing your
iWatch or wearable devices and so on. They're collecting data all the time, right? So,
availability of this huge amount of data to train the AI model. The third is that you need to have
really powerful computers to do the number crunching for these AI models. To train an AI model like GPT takes months of computers crunching numbers 24 hours a day, 7 days a week.
So three things, right?
New algorithms and architecture.
Second, availability of large amounts of data.
And third, very energy efficient and high-speed computers. Out of these three things, two of them
rely on semiconductor technologies. Well, of course,
energy efficient computing relies on semiconductor technology. That's very
obvious. But the availability of a large amount
of data also depends on semiconductor technology because
where do the data come from? They are
collected by devices that operate on chips. So without this ubiquitous deployment of chips,
you wouldn't have this big data that is available to all of us today. So in that regard, the potential advancement of AI will necessarily be gated by advancement in semiconductor technology.
Take an example, you wouldn't be able to run GPT using computers that are 20 years old.
There's no way you can do that, right?
So that is very important to realize.
And going forward, the AI revolution will revolutionize many things that we do. As I mentioned before, one thing can influence another, and then the other thing comes back and influences the first, just like biology influencing electronics and electronics influencing our understanding of biology. This two-way street exists here as well, right? The electronics, the fundamental semiconductor technology,
will help propel AI to go forward
because then you can train even more powerful models,
even more complex algorithms and so on.
That's for sure in one direction.
In the other direction, the application of AI and machine learning would revolutionize
the way we fabricate and manufacture these semiconductor chips.
Today, you probably hear about building semiconductor fabs and so on, and we need a lot of people to run the fabs. Why do we need that many people to run the fab? If we need to produce 10 times more chips, we couldn't afford to have 10 times more people to run the fab. We need to be more efficient in running the fab and be able to run it with 10 times fewer people.
And how do we do that?
Well, AI and machine learning would be able to help us on that going forward.
So this is kind of symbiotic relationship I see going forward will be very important
to have.
I see.
Yeah, I think with this triad of data, compute, and algorithms, maybe some folks, myself included, fail to realize the importance of compute for the data as well, not just the
computing technologies for training and running inference, but also how important those computing technologies are for enabling the massive amounts of data that
are powering a lot of these models. I have maybe one more question. Thinking about some of the
potential risks or challenges, I know, for instance, with the emergence of LLMs, there's
growing discussion around this idea of responsibility,
of ethics. So with emerging technologies in semiconductor technologies, how do you think
about some of the ethical implications, for example, of, I don't know, neuromorphic computing,
or, you know, the environmental impact of semiconductor manufacturing? Like, in general,
what are some of the potential risks or challenges associated with some of these emerging technologies and what's the best way to think about addressing
them? Yeah, I'm glad you brought up the environmental aspects of semiconductor manufacturing. So the industry has already been moving toward what we call green manufacturing,
recycling things and so on.
For example, in a new, modern fab built today, 99% of the water is recycled, so almost no drop is wasted.
Everything is recycled in terms of water, for example,
And in terms of power consumption, energy consumption, companies are also very heavily invested in reducing that and so on.
But one thing that I would point out, from an interesting study that I saw just recently: for every unit of electricity that a semiconductor manufacturing fab uses, it produces chips that will save four units of energy that would not otherwise have been saved. So it's a great investment. For every unit of electricity they use to produce a chip, that chip will in turn save four units of energy.
That's a pretty good trade-off.
Yeah, that's a pretty good trade-off, right?
A good return on investment.
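The trade-off reduces to simple arithmetic; a minimal sketch, where the 4:1 ratio comes from the study cited above and the fab's annual consumption figure is purely hypothetical:

```python
# Net energy balance of the 1-unit-in, 4-units-saved trade-off.

SAVINGS_RATIO = 4.0     # units saved per unit consumed (cited study)
fab_use_gwh = 100.0     # hypothetical annual fab consumption, in GWh

saved_gwh = fab_use_gwh * SAVINGS_RATIO
net_gwh = saved_gwh - fab_use_gwh

print(f"consumed {fab_use_gwh:.0f} GWh, enabled savings of {saved_gwh:.0f} GWh")
print(f"net benefit: {net_gwh:.0f} GWh")   # 3 units net per unit invested
```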
I see, I see.
I want to just end with one final question
from the perspective as a visionary in your field,
but just looking more broadly into the field of computing,
what's maybe one or two emerging technologies that you're excited about? And then what is maybe one or two
grand challenges or open questions that you'd like to see addressed by the computing community?
Yeah. So the first one, the exciting technologies, I am very optimistic that we will go beyond
the use of silicon as the only technology for computing and computation. I think going
in the next decade or so, we will see the emergence of other materials that will be
used in a computing system. I'm pretty optimistic about that.
In terms of your second question, the kinds of impact that we will be making is that I think
there will be more and more use, broader and broader use of the computing systems going forward.
And that would really bring about a sea change
in the way we operate.
Because many of the things that we wanted to do,
from self-driving cars to energy efficient electric grid
and so on, and energy storage and so on,
they're all gated by advances in the
basic semiconductor technology.
And going forward, I'm expecting that the society will continue to progress, really
propelled by advancement in semiconductor technologies.
The semiconductor technologies are kind of
largely invisible to people, right?
We see it in phones,
we touch the screens of the phone,
we listen to things, we watch videos,
but oftentimes we don't see
what it is powered by.
And with the recent attention
to supply chain resilience, the pandemic, and so on, I'm expecting that the general public, society in general, would recognize more than before the innards of what drives these things and therefore recognize the importance of these basic technologies. And I think that would be useful for everybody,
helpful to propel the advancement of technology going forward.
And I think people certainly have begun to.
So the future is bright.
And I think we're certainly, as you alluded to earlier,
nearing the end of the tunnel.
So very excited to see where the future, the field will continue to be,
especially with individuals like yourself
helping drive the future directions.
So Dr. Philip Wong,
thank you so much for taking the time to join us.
It was really a fruitful conversation.
Thank you for the opportunity to speak with you.
Thank you.
All righty.
Thank you so much, Professor.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioner Board.
To learn more about ACM and its activities, visit our website at learning.acm.org
slash b-y-t-e-c-a-s-t. That's learning.acm.org slash ByteCast.