No Priors: Artificial Intelligence | Technology | Startups - How Diamond Cooling Could Power the Future of AI, with Akash Systems
Episode Date: December 12, 2024
In this episode of No Priors, Sarah sits down with Felix Ejeckam and Ty Mitchell, founders of Akash Systems, a company pioneering diamond-based cooling technology for semiconductors used in space applications and large-scale AI data centers. Felix and Ty discuss how their backgrounds in materials science led them to tackle one of the most pressing challenges in tech today: thermal efficiency and heat management at scale. They explore how Akash is overcoming the limitations of traditional semiconductors and how their innovations could significantly boost AI performance. Felix and Ty also talk about their collaboration with India’s sovereign cloud provider, the importance of strengthening U.S. manufacturing in the AI chip market, and the role Akash Systems could play in advancing satellite technologies. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @AkashSystems | @FelixEjeckam
Show Notes:
0:00 Introduction
0:30 What is Akash Systems?
2:12 Felix’s personal path to building Akash Systems
4:45 Ty’s approach to acquiring customers
6:40 Challenges of operating in space
7:54 Live demo on diamond’s conductivity
9:50 Heat issues in data centers
15:38 Heat as a fundamental limit to technological progress
20:44 Akash’s role in the semiconductor market
22:54 Growing diamonds
25:10 Collaborating with India’s sovereign cloud provider
28:15 Importance of American manufacturing for AI chips and outlook on current data capacity
29:45 The Chips Act
31:22 Future of national security lies in satellite and radar tech
32:46 Critical issues in the U.S. AI supply chain
36:34 Deep learning’s role in material science discovery
40:16 The future: AI expanding our possibilities
Transcript
Welcome to No Priors.
Today I'm chatting with Felix Ejeckam and Ty Mitchell, the founders of Akash Systems,
which makes diamond-based cooling technology for computing platforms, from space satellites to AI data centers.
Their innovation uses highly conductive diamond to help computers run cooler and faster while using less energy.
Felix and Ty, welcome to No Priors.
Good to be here.
Thank you, Sarah.
I think we should start with just a quick introduction as to what Akash is.
Sure. Again, very good to be here with you, Sarah.
Akash Systems is a venture-backed company based in the Bay Area that starts from the ground up at the materials science level. Using proprietary materials, specifically diamond that we grow in the lab, we make electronic systems that are disruptive in the world by an order of magnitude.
In contrast, oftentimes when we start companies, even in the hardware space, we tend to inject ourselves into the middle of a supply chain. At Akash, it's different.
As material scientists, we come in at the periodic table level, and we start there to build
up chips, boards, systems that ultimately change the lives of our society, whether you're
in business or as a consumer.
And we do that in several ways.
We change the structure of a basic material.
The systems that we've chosen to affect, we started off in the space world where we make some of the fastest satellite radios ever made by humans.
And then we move over, as we are doing now, to AI, where we are able to make a GPU compute faster than has ever been done since the beginning of this new space, or reduce energy consumption in the data center by a significant amount,
all because of innovative material science that we've pioneered at the ground floor.
So maybe that's a good segue into how you got started working on this,
because you've had this idea for a long time, and, as you said, you started on space applications earlier.
Can you talk a little bit about your background and, you know, the original scientific idea and how you thought it would be applied?
Sure. So my background is in material science and electrical engineering. I obtained a PhD in electrical engineering with a minor in material science and device physics from Cornell. In my PhD, I focused on bringing together very dissimilar materials in such a way that one plus one equals ten. And, you know, for example, silicon is a very well-known, ubiquitous material that's ushered in the
current modern era that we have today. But then there are other materials, plastics, other types
of semiconductors that don't actually do as well as silicon, but they have their own strengths.
And so for my PhD, I looked at ways of trying to bring together, say, the optics world with
electronic silicon and merging them together as such that the overall system is incredibly
powerful. That philosophy I've brought to Akash when I started Akash with Ty in 2017 to try to do
the same thing. I've often found, personally, that it's a very good metaphor for how we humans interact together. When you bring together different people that have different strengths, the combination can be incredibly powerful in ways that exceed the simple summation of the parts.
And that's exactly what we do at Akash, where we bring artificial diamond, well known as the most thermally conductive material ever grown or to occur in nature, together with silicon, or even gallium nitride, and quite frankly any other semiconductor, and amazing things happen.
I'm very happy and excited about doing that in the world of AI.
We did that in space, where we have now made and launched the fastest radios ever made by man.
And now with AI, we're able to achieve performance levels, whether in energy efficiency or compute speed, never before obtained, simply by using these artificial materials that we've created in the lab.
Ty, are you the silicon or are you the diamond here?
I'm actually a little bit of both.
I'm the silicon carbide guy.
My PhD was on silicon carbide.
What that taught me when I went into the business world, working for Cree, later Wolfspeed, a company that developed very good silicon carbide materials-level technology, and applying that technology to all sorts of systems, like radar systems, or power electronic systems for EVs, or light-emitting diodes, is that when you have a materials-level advancement, as the person who has that advancement, you really have to make the system to convince people that you have the solution.
If you just go to someone who is making, let's say, a car, and you say, hey, I've got this great silicon carbide diode, a Schottky diode, or a MOSFET, they'll say, all right, great, you know, I'm already using a silicon IGBT. If you meet their price, I'll put you in.
And you say, look, if you put my part in, I think I can increase your range 200 miles.
I can increase it 40%.
They'll say, okay, yeah, sure.
However, if you make the car, okay, or you find a partner to get your part into the car, or you make the box, you actually make the MOSFET, you make the module, the power module: the farther you go up the system, the better chance you have of convincing the customer that you have the solution.
So that's the approach that we took at Akash, was even though we had this materials-level technology, we would make the system and then go directly to the consumer or to the customer and be able to prove our technology out that way.
And why did we go into AI? We were in space, solving problems that were very difficult, actually more difficult than the AI problem we face today, because in space you have a limited area. You don't have any fluids that you can use to cool, because there's no airflow in space. And you have a bunch of other reliability and survivability requirements that are much more difficult to meet than in AI. So we thought, all right, heat is a very difficult problem AI has to solve, and it's growing.
Nobody really has the right approach to solve the problem, and we think we can help.
So that's what caused us to dive in.
And just to give a couple of numbers to what Ty just said, comparing space to AI: the power densities that we cool in space are at the level of 10^3 watts per square centimeter, so 4,000 to 5,000 watts per square centimeter. The chips that we cool on the ground, in AI, in a typical server, are a full order of magnitude less than that, a couple hundred watts per square centimeter. So that's what gave us the confidence that if we could address the problem in space, we could absolutely use the same technology, even backed off a little bit so we can rapidly ramp. Applied to a server, it would be a home run.
Yeah. I think you guys had a demo to just sort of explain the, you know, advantage in conductivity that diamonds have.
We do.
Thank you for making that very nice segue.
What I'm going to show you is how diamonds can very effectively cool down or rather melt ice.
So this is an ice cube that you're looking at here in my little video.
This is diamond, a little piece of diamond.
This is a diamond wafer that we grow in the lab.
You can see the Akash name.
And what I'm going to do is show you that heat from the ambient, and more specifically my fingers,
my body temperature will flow through this diamond rapidly into the ice cube and melt it as rapidly as I touch it.
It'll be just like butter.
I wish you could touch it yourself.
It'll feel cold to the fingers.
So I'm just going to wedge it here and you will see.
And there you have it going in.
I'm feeling very cold right now.
I don't know if you can see it, but there you go.
Very cool.
You can see it wedged in.
And cut ice.
Yeah, you can cut ice with your fingers.
And now, rotate it so you can see.
And that's the feature.
That's the property that we bring to bear with our chips, with the GPU.
We're looking at reducing temperatures.
Initially, we're starting off with 10 degrees, which is already worth millions for any data center that has a small number of servers. But we're looking at further reductions, 20, 30, 40 degrees down the line, over the next 12 months. In space, we already reduced temperatures by 80 to 90 degrees. So the effects of this are quite significant, and the economic impact is far-reaching.
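[Editor's note: the economics gestured at here can be sketched with a back-of-envelope calculation. If cooler chips let a facility run with less cooling overhead, the savings scale with total IT load. The function below and all of its inputs, the load, the PUE figures, and the electricity price, are illustrative assumptions, not Akash numbers.]

```python
# Rough sketch: annual savings if lower chip temperatures let a
# facility cut its cooling overhead, expressed as a PUE improvement.
# All inputs are illustrative assumptions, not measured data.

def annual_cooling_savings(it_load_kw, pue_before, pue_after,
                           price_per_kwh=0.10):
    """Electricity cost saved per year when facility PUE improves."""
    hours_per_year = 24 * 365
    kwh_saved = it_load_kw * (pue_before - pue_after) * hours_per_year
    return kwh_saved * price_per_kwh

# Assumed: a 1 MW IT load, PUE improving from 1.5 to 1.4 at $0.10/kWh.
print(f"${annual_cooling_savings(1000, 1.5, 1.4):,.0f} per year saved")
```

Even a modest overhead improvement on a megawatt-scale deployment lands in the high tens of thousands of dollars per year, which is why "worth millions" is plausible across a fleet.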
Yeah, maybe this would be a good time
to actually just talk a little bit about why heat dissipation
is a problem at all for AI chips in AI GPU servers and data centers, right?
So, you know, if you imagine these chips,
in servers, in racks of servers, in big rows of them, in the data center.
Like, we have cooled large data centers for a long time with fans.
Like, how does this fit into the, I guess, alternative set of fans and liquid cooling?
And, like, how has large-scale AI training changed the game at all?
So this technology that we have, this materials-level technology using synthetic diamond, actually fits with any other cooling technology that's used today.
Today, the cooling techniques that are used are really at the data center level, where they do airflow containment, you know, keeping the hot and cold air from mixing.
It's at the rack level, where you're using liquid cooling, you know, CDUs and manifolds, pumping liquids right to the devices to keep them cool, or using fans.
And you have it at the chip and package level, where people are using techniques to either speed up the chips, make more transactions on the chip so that you get more transactions per watt, or, in packaging, doing things like fan-out packaging, where you're spreading out the heat as much as possible for any chip or group of chips. Also things like HBM, which you can stack up right in the same package with the chip and give it more efficiency, essentially more transactions per watt.
So these are all techniques that are being used today.
And the beauty of our approach is that it works with all of these.
You can use diamond cooling by itself, or you can match it with anything else that you're
doing and give yourself additional operating margin, give yourself additional performance margin
that you can then use to drop your temperature, run your system hotter,
and give yourself the opportunity to perform more transactions.
And that's something that's critically important because you see just in the last year when
NVIDIA has introduced a couple of new chips.
First, with the Samsung HBM, there were heating issues back then.
Then with Blackwell, again, heating issues came up, and that rollout was delayed because of them.
And now, for the first time, I think if you listen to NVIDIA's last conference call, heating issues were mentioned. This is something that's going to increasingly be on the radar for companies, for investors. I think Jensen was able to not directly answer the question; people are very skillful in these conference calls. But more and more of these questions are going to be asked, and people are going to need approaches to address them.
We have what we think is the most effective approach, because it goes right to the heart of the device.
Yeah, it's interesting, because intuitively, not being from the data center management space,
like I get nervous when anything has a mechanical component, right? But I do have, you know,
friends running large-scale data centers of this type. And I think even though there have been
announcements of, for example, the liquid-cooled GB200 NVL72 system, and some interest in adoption of liquid cooling, like, fans and liquid cooling come with their own
reliability issues, of course, right? It's just complex to go implement that and keep that
from, I don't know, leaking and breaking and things that movement requires.
Yes, Sarah. Actually, the servers that we ship today, the H200s from NVIDIA, for example, are both liquid-cooled and diamond-cooled. So, just to illustrate Ty's point, our diamond technology layers on top of whatever technology you use, whether it's liquid, fan, or both.
But to push on that point further, we at Akash Systems believe that if a materials science, and more specifically a physics or chemistry, approach to solving the heat problem is not used, then the needs of AI data centers around the world, based on projections today, will crash the grid as we know it. And if it doesn't crash the grid, we believe the cost of electricity will be exorbitant, okay? It's not sustainable on the path we're taking today.
And that's part of our inspiration for attacking these problems, starting with physics and chemistry. So take an example: your laptop, okay? Your laptop has a whole bunch of chips inside of it. You put it on your lap and you feel the warmth of it, okay? If you bring an ice pack and put
your laptop on top of an ice pack, nothing will change. It will not speed up your CPU. You're not
going to change the heat extraction of your CPU because there's just such a great distance,
a thermal barrier between your lap and the GPU or the CPU. Going straight to the heart
of the heat, the heat source, the chip, the material with physical chemical solutions allows
you to make a difference. And that's what we're doing. We don't see a lot of approaches like that
out there, we think that this is going to be really key to curb the consumption and actually
allowed the AI vision, I think that we all hope, love to see, actually come to fruition.
If I think about the complaints that people have running large data centers as blockers or
issues to deal with, it is chip supply, GPU reliability, power supply, heat.
Like, how does heat relate to all of these other issues?
Everything you mentioned is heat. We see the same thing, by the way, in space. With every problem that one addresses in space, unless you go to the source of the problem, which is the heat, the heat producer, you're really just playing whack-a-mole: you knock it down here, it'll show up elsewhere. Everything you've just described is a heat problem.
And by the way, there are billions of these chips. One simple server, the ones that we ship, has eight GPUs in each single blade, and then scores more of other chips, equally heat-producing.
And so we at Akash are actually just scratching the surface of this problem.
It's a pervasive problem up and down the supply chain.
You just mentioned the way that it shows up in the world, the fact that we're trying to do more
with what we have.
All it's doing, it's breaking the bank.
It's costing a tremendous amount of money.
it's leading to reliability issues.
Servers, oftentimes, it's not uncommon for them to suffer infant mortality, where a server shows up and has problems right away.
When it gets up to that 80, 90 degrees temperature, it starts to curb back performance: thermal throttling. This is something a lot of your listeners are going to be familiar with, the fact that the operating system has to back off the workloads on the GPU, which means slowing it down so that it can do the inferences that the customers are asking of it.
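[Editor's note: the throttling behavior described here can be sketched as a simple feedback loop. The function, thresholds, and clock values below are hypothetical illustrations, not NVIDIA's actual driver logic.]

```python
# Toy model of thermal throttling: back off the GPU clock when the
# junction temperature crosses a limit, recover headroom when it cools.
# All thresholds and clock values here are hypothetical.

def throttle_step(temp_c, clock_mhz, t_limit=90.0,
                  step_mhz=100, min_clock=500, max_clock=1980):
    """One control-loop tick: return the next clock setting."""
    if temp_c >= t_limit:
        return max(clock_mhz - step_mhz, min_clock)   # slow the workload
    return min(clock_mhz + step_mhz, max_clock)       # restore speed

clock = 1980
for temp in [85, 92, 95, 93, 88, 84]:  # simulated junction temps, deg C
    clock = throttle_step(temp, clock)
print(clock)  # ends below the max clock: sustained heat cost throughput
```

The point of the sketch is that every tick spent above the limit is throughput the customer paid for and didn't get, which is why the speakers treat throttling as an economic problem, not just an engineering one.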
So it is a significant problem, and I think that just attacking it at the network level
or sort of putting band-aids at the system level will just kick the can or add cost to the
overall ecosystem.
You have to attack it at the material science, at the level of physics and chemistry.
And Sarah, you used an interesting word with a blocker.
And we are getting to the point where people are starting to get stuck with this issue.
We mentioned the issues with the device rollouts and the delays that this has caused.
You're going to see this happen more and more going forward, where the drive to increase performance, you know, two to three times every two, three, four years, we're not going to be able to do that, just because of drawing power and water to the site and then getting the parts to run with this heat buildup. And you are stuck inside the server. Anything you want to do inside the server, because right now you've got layers.
You've got your chassis, which is probably aluminum. You've got like a copper heat sink.
Then you have some sort of epoxy bonding material. Then you've got your chip material, which is silicon.
then you've got some sort of solder, gold tin, then you've got FR4 or some other polymer board,
then you've got more gold bumps, then you've got another board.
So you've got this sandwich, and I'm just listing off some of the layers, and every time you have an interface between those layers, that's a thermal barrier.
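[Editor's note: the layer stack described here behaves like thermal resistances in series. Each layer adds R = t / (k·A), and the junction-to-chassis temperature rise is power times the total. A minimal sketch, with textbook conductivities and made-up thicknesses:]

```python
# Series thermal-resistance model of the chip-to-chassis "sandwich".
# Conductivities (W/m-K) are textbook values; thicknesses and die
# area are illustrative, not from any real server.

LAYERS = [  # (name, thickness in m, thermal conductivity in W/m-K)
    ("silicon die",      0.0008, 150),
    ("solder (AuSn)",    0.0001,  57),
    ("copper spreader",  0.0020, 400),
    ("epoxy bond",       0.0001,   1),
    ("aluminum chassis", 0.0030, 205),
]

def delta_t(power_w, area_m2, layers):
    """Temperature rise across the stack: power times sum of t/(k*A)."""
    r_total = sum(t / (k * area_m2) for _, t, k in layers)
    return power_w * r_total

# 700 W spread over 6 cm^2 (illustrative):
print(f"{delta_t(700, 6e-4, LAYERS):.1f} deg C rise")
```

Running this shows the thin, poorly conducting bond layer dominating the total even though it is the thinnest layer, which is exactly the "interfaces are the thermal barrier" point: attacking the worst terms in the series sum is where a high-conductivity material pays off.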
Okay, so what are you going to do about that? You really have to attack that, and right now it's not really being attacked, but it's going to have to be.
Otherwise, it may not block the creation of data centers, but look at the multiples in the market, look at earnings. This is something the people at AMD and NVIDIA are probably thinking about. Their results are growing now, right? But look what happened at Intel. You miss two, three quarters, and you go from being at the top of the mountain to, you know, listening to those bells tolling. I think this is going to be top of mind for any executive at any of these companies.
Yes, I understand.
You've got to cool the sandwich down from the inside as the sandwich gets bigger and hotter.
I think one of the things that really resonates with me, at least,
hearing from friends in the industry, is the thermal throttling that you describe.
The fact that you see and have to manage erratic behavior when these GPUs are at higher temperatures. The vast majority of people working in machine learning right now, it's a very abstract software field, right? And so the idea that you have these challenging, non-deterministic behaviors based on how the materials themselves are interacting, and you have to account for them, there's a real burden there; it's just a new domain to think about. Maybe just because it is your area of expertise, how do you fit into the sort of partner ecosystem of, like, the NVIDIAs of the world, the Supermicros of the world, other SIs, et cetera?
So just to be clear, so we are buying chips.
We're not making GPUs.
We're taking the hottest chips in the world and we're cooling them down so that we can open
the envelope of performance for the system architect.
Okay.
And so we fit in, we're coming into the world as a server maker that is opening performance
envelopes for folks in inference work, folks training models, data center operators,
cloud service providers, okay, that's our entry into the world. It makes sense that we would go
to the most challenged parts of the market, the folks that are struggling most at that
performance edge. Sarah, I'm going to go out on a limb here and say that we think that, with our diamond technology, we will be able to hyper-accelerate Moore's Law, so that, you know, in two years, we will be achieving what previously folks had to wait six, seven years to get to in terms of performance.
Because remember, Moore's Law is about squeezing transistors closer and closer and closer together, but you can only go so close before you have thermal crosstalk between these devices.
And so right now, the limits we see in AI, the pace at which we can do inference work, is set by that thermal crosstalk between these devices.
If we remove that thermal crosstalk and we're able to allow greater densities, then all of a sudden, you know, we can create a feature-length film in seconds, rather than the time scales of months or years if you're doing it offline, or probably days right now with the thermal limitations that there are in AI. But, you know, I think we'd like to see seconds in production time to do a full 90-minute feature-length film.
And that will happen because of the unblocking of the thermal limitations inside the GPU.
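[Editor's note: the thermal ceiling described here can be modeled crudely. Sustainable power is bounded by (T_junction − T_coolant) / R_thermal, and the clock a chip can hold tracks that power budget. The function and every number below are a toy illustration, not a real GPU model.]

```python
# Toy model: the clock a chip can sustain is limited by how fast the
# package sinks heat. Halving thermal resistance (e.g., a better heat
# path at the die) doubles the power budget. Illustrative numbers only.

def max_sustained_clock(base_clock_mhz, base_power_w, r_th_c_per_w,
                        t_junction_c=90.0, t_coolant_c=35.0):
    """Scale clock with the power the thermal stack can remove."""
    power_budget_w = (t_junction_c - t_coolant_c) / r_th_c_per_w
    # cap the scaling: past ~1.5x, other limits (voltage, timing) bind
    return base_clock_mhz * min(power_budget_w / base_power_w, 1.5)

print(max_sustained_clock(1500, 500, r_th_c_per_w=0.110))  # baseline
print(max_sustained_clock(1500, 500, r_th_c_per_w=0.055))  # halved R_th
```

The cap in the last line is deliberate: lowering thermal resistance buys headroom only until some other constraint binds, which is consistent with the speakers' framing of heat as the *current* binding limit rather than the only one.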
I'm going to ask a silly question, but you've mentioned it several times.
You're growing diamonds.
How does that process work for the form factor that you want, assuming, you know, the vast
majority of our audience has only ever heard of the concept of growing diamonds in the, you know, realm of, like, jewelry?
It's really no different than growing other semiconductor materials. If you're growing silicon or silicon carbide or gallium arsenide or indium phosphide, any of these electronic substrates, you start with a seed crystal and then typically use some sort of process, chemical vapor deposition, to grow perfect single-crystal material out from that crystal. And that's the same way you grow diamond. Diamond is just carbon, right? So you take a seed crystal of perfect carbon, of diamond, and you use a plasma to grow the diamond in a reactor. It takes very high temperatures and very high pressures to do this, but it's essentially a similar process to growing silicon or silicon carbide wafers.
And Sarah, our specialty, our secret sauce, lies in how not only we grow the diamond, but also how we intimately couple that diamond with the semiconductor using physics and chemistry.
So it's not a trivial process.
It does take some work.
You were asking about why does it take so long?
It does take time to do material science.
And, but that intimate coupling of the atoms of diamond with the atoms of a semiconductor is what we understand.
And that's what we bring to bear in both space and in AI.
And that's what makes this not so easy.
But we're very excited about it.
We're deploying it in the servers that we ship.
And there's very strong market pull that we see right now, given what's going on in the world.
You guys announced an exciting customer just this past week, NextGen Data Center and Cloud
Technologies.
Can you talk about why they're a good early customer?
And they're also like a sovereign cloud player.
So I want to talk about that as well.
Sure.
So NextGen is the largest sovereign cloud service provider in India.
They handle the country's data in a very careful way,
making sure that it stays within the sovereign borders of the country.
We see that requirement coming from countries all over the world.
Nobody wants their people's data leaving the boundaries of their country.
We see that, by the way, in space.
When data is coming down from a satellite or being pulled away,
that satellite has an obligation to keep the data within the boundaries of that country.
So that was, that's an opportunity for us.
It means that we're going to be able to address this issue with every country individually.
So that's number one.
Number two, they are the leader in that region in India.
And so we thought that that would be a very good test case to show the world what is possible,
the fact that we as a small growing company can scale
to the kinds of volumes that they need.
And we can scale rapidly.
You know, we got to ship all of this stuff
within the next quarter.
And a lot of small companies trip up on that.
NextGen selected us believing
that we have that ability to scale.
Thirdly is the fundamentals of the technology. They're led by some very innovative leaders, and I think they saw very quickly that this is a problem that will stay with us for a very long time. Unless we get to the very heart of it, the materials science nature of a solution, you're just going to be tiptoeing around the big elephant in the room.
And so we were very excited when they saw that opportunity, the fact that, okay, this is a company that's coming at this problem from the materials science. And when they saw that we could scale, we jumped at that.
And then, you know, I think next gen is positioning themselves and using our technology to not only scale within India, but potentially scaling around the world, okay, again, respecting the sovereignty of country data within that country.
So we think that this is a very, very nice match.
They opened with this size of order.
We're excited about the things that are even coming down the pike with them in just 2025.
This is just the beginning.
And these problems are also faced by U.S. companies as well.
You know, India is not the only country that's dealing with this.
So, and we're talking to them as well.
This is, this is definitely a very important topic that you brought up, Sarah.
Yeah.
What do you think is the importance of American manufacturing of AI chips and data center capacity?
This is especially relevant given you're one of the only small companies that is a Chips Act recipient and, you know,
Gelsinger just stepped down. What is your current view of American capacity and your outlook for it?
My view is that the U.S. is not doing enough and needs to do a lot more. This is a technology that
the U.S. needs to be the leader in, and it needs to lead all the way up and down the value chain.
We can't just rely on what NVIDIA or AMD has done to date. We have to continue to invest in
not only the larger companies, but also the smaller companies like us, like others, who are
working on some of the very critical problems because, as you know, AI is not only a critical
technology for business, it's also a critical technology for national security. And these are
things that the U.S. cannot rely on other countries to develop for it. And so we have to really
drive technology development. We have to drive manufacturing all the way up and down the supply chain
and put a lot of investment into this technology. It's going to be very important for the future
of this country. And, you know, we're there to support that. And that's what we're focused on.
Let me add to that by saying that our receiving the Chips Act award is a testament to our support of USA, USA, USA. We're all about doing the things that Ty mentioned, strengthening our supply chain. That's one of the key tenets of the Chips Act. Supporting national security: we supply to defense. It's public that we work with Raytheon, an iconic American defense company. This technology allows Raytheon and U.S. defense to maintain military supremacy around the world in a way that has never been done in the history of mankind. This technology secures our commercial supply chain in a way that, I think, we had started to slip on, and COVID laid that bare, when we saw that, oh wow, we're depending on others to backfill key chips that we used to be able to make ourselves. Now we can make them at home, right here in California and in Texas. We're going to be creating jobs, okay, in both California and Texas. We have support from a broad spectrum of brilliant investors, Vinod Khosla and Peter Thiel among them. So I think that this Chips Act award is something that's going to enable us to fulfill all of the tenets and mandates of the Chips Act, but also things that everyone in the country can be very proud of.
You said you'd worked with Raytheon. You'd worked on space applications before. I think it may not
be intuitive to every listener, like why, you know, satellites and radio communications are
so important from a national security perspective. But I think increasingly you're going to
see conflict and warfare defined by your understanding of the RF spectrum, be it space or
other systems. And I think the ability to support that is critically important. It's totally separate
from any of the AI system work that you're doing. 100%. When radar was developed during World War II,
that was a huge game changer. Without radar, it would have been very difficult for Britain to win
the Battle of Britain because that early warning system was critical for them. Just on a personal note,
my father-in-law was a radar operator in World War II.
And he was one of the first people to get exposed to this technology.
And he said that they would chase German submarines off Florida.
And the submarines would go beneath the surface.
They wouldn't know how they kept finding them.
Okay.
So it just goes to show that when you introduce these new technologies,
they have outsized impact on the world.
And, yeah, it's true with RF.
and it's also true with AI.
Maybe because you think broadly about this problem as a participant, like in the AI supply chain,
you know, the U.S. is not doing enough, needs to do more, there's increasing risk.
What do you think the other critical problems are that, like, are even feasible to take on?
Like, is it credible that the U.S. is going to have fabs and lithography machines and, you know, these other core components in any near-term timeframe?
Yeah. The U.S. will do it. The only question is whether the U.S. will do it because it has to or because it wants to. The U.S. can accomplish anything. We have the people. We've got tons of natural resources. We've got the capability. Part of it is that corporate culture is driven by earnings, right? And if we only focus on earnings, then if it's cheaper to make something,
in Asia, make it in Asia. Okay. But there's a national security component to that where maybe it's not
best for the country if you make it in Asia. Maybe you need to make it here. So we need to find a way to
bridge that gap so that everybody's not just chasing that last penny of earnings and sending
manufacturing of these critical technologies overseas instead of doing it here. This is a sector where we should start it correctly and do all these things here, all the way up and down the supply chain, everything from the chips and the server, the software, you know, to the frames, the housings, the racks, right, all the big, dumb metal pieces. And then the data centers themselves. We can do it all here in the United States. And we should be doing that, focusing on it, and using federal programs and funding to make sure all of that happens here.
I do think that AI will play a role in manufacturing and sort of 10xing or even 100xing
manufacturing so that we can actually outperform humans in other countries.
I think that's the way it's going to look.
So we will not have to reduce labor costs in order to perform and
compete with China. I think that that will come through, you know, extraordinary feats in
AI-powered manufacturing. You know, universities today, when I was in grad school, everyone had to
get training in a machine shop. Okay. Cornell, I remember, had about four or five machine shops,
and freshman year engineering, you had to get training in how to use these machines and operate them.
Today, every professor, almost every professor has a 3D printer, so that's just, you know, added jet fuel to the capacity of everyone on campus to manufacture whatever they want, whenever they want, and however they want, at a very low cost.
I think you're going to see the same thing with the use of AI in manufacturing, where we will be able to make, per capita, a thousand times, a hundred thousand times more components, more equipment, more parts, more chips compared to anyone else in the world.
And that's what AI makes possible.
No, I was just going to say, I believe that is possible, too.
And it's a much more aspiring, you know, inspiring vision for the future than one where we completely cede the supply chain and are strategically, you know, at the mercy of others, right?
Yeah.
You both have, you know, very esteemed backgrounds in material science.
It's one of the most interesting things as a, you know, I'm coming from software, computer science world, but the applicability of transformers, diffusion models, and just effectiveness of deep learning overall as it scales is very interesting in that it applies to so many different domains, and there's increasing excitement about its applicability to material science.
Do you guys think about that yourselves?
100%.
Yeah. And it's very applicable to research, and to helping research go a lot faster. Because if you think about how AI can help you, and how AI models can help you do work, you know, personally, I've been using AI as like an assistant that greatly increases your pace of research, because a lot of trying to solve a problem in technology, especially core technologies like material science, is first figuring out what's been done today, because there's a
lot of smart people in the world, and there always have been a lot of smart people in the
world. And the key to your solution might be something that somebody did back in 1963,
but never went anywhere because they didn't have the ability to do as many iterations as you can
do now. So your ability to go back, find that information, and then apply it to the problem
that you can solve now, this is one of the things that I'm really excited about applying these
models to, because I think it can really help drive innovation.
And, you know, we're not all going to be robots and slaves to AI.
AI is going to help us, help us innovate even faster.
Do you feel optimistic about these ideas around using AI for inverse design, or better exploring chemical space, or accelerating DFT simulations, or more fundamentally in the process of discovery?
Yeah. So, you know, chemical space, density functional theory, any of these topics that you're talking about, they're all just problems to be solved. So, no matter what topic or what approach, anything that can be solved through asking questions and iterating, it can be done. So any of these approaches can
apply to material science. It can apply to the medical field. It can apply to software, right,
to coding. And we've already seen it at lower levels. But really, we are limited by our ability
to frame the questions. And it's really, that's it. We're limited by our ability to frame the
questions and only by our own imagination, really.
So I'm very excited to really get into this and to figure out how we can use it, because there are problems that we want to solve right now that we don't know how to approach, because we know it's going to take so many iterations, and there's so much information we need to find to take a good run at our hypothesis, that it's difficult to get started.
But if you've got somebody working for you and working with you who can do these iterations,
a billion iterations in a second, I mean, it's really exciting to think about.
The North Coastal, our investor, and I would have these conversations about how sometimes I think
as entrepreneurs, entrepreneurs can sometimes be limited in how they imagine because you're
constantly guided by the boundaries, the constraints of the world as it is today.
And what AI does is open up those boundaries so that we can begin to imagine the things that you're talking about, inverse design, right?
So can we, you know, think about, like, what if we could run processors a billion times faster than today, because the thermal envelope is no longer there? Could we accelerate the calculations, the modeling that would have taken a hundred years, but now takes a second?
What does that even mean?
Like what is not possible?
I think that I think the greatest challenge is our own imagination.
I think Ty hit the nail on the head.
I think that the difficulty is trying to ask the right questions
because now we almost have infinite processing capacity.
And the difficulty is how do we get out of the way
so that compute can try and solve these problems.
I think that in the biotech arena,
getting drugs that are dialed in to every form of cancer is now well within reach.
I think that being able to have battery capacity that is so optimized
because we don't have the thermal constraints that we do today
that can take us from SF to New York on one charge.
I think that that is well within reach.
I think that really the sky is the limit, and I'm excited about that future.
On that note, Felix, Ty, this has been a wonderful conversation. Thanks so much. And congrats on the progress.
Thank you, Sarah. Thank you. Thank you, Sarah.
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.