a16z Podcast - Jobs of the Future, Harnessing Earth Observation, & Gaming Tech Advances
Episode Date: December 20, 2024

As 2025 begins, industries are evolving at unprecedented speed: robots are revolutionizing manufacturing, terabytes of earth observation data are driving new possibilities, and gaming technology is transforming how we design, train, and innovate across sectors.

In this episode, a16z General Partner Erin Price-Wright, Engineering Fellow Millen Anand, and Partner Troy Kirwin discuss the trends reshaping the future of hardware, software, and beyond.

We explore:
How robots and full-stack engineers are driving the next industrial renaissance.
The explosion of Earth observation data and its potential to revolutionize industries.
How gaming technology is moving beyond entertainment to reshape training, design, and more.

This is just the beginning of our four-part series on 50 Big Ideas for 2025—don't miss the full list at a16z.com/bigideas.

Resources:
Find Erin on X: https://x.com/espricewright
Find Millen on LinkedIn: https://www.linkedin.com/in/millen-anand/
Find Troy on X: https://x.com/tkexpress11

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
How do companies take the magic that happened when we all got up at 5 a.m.
and watched SpaceX catch a massive multi-story rocket with chopsticks?
Yeah.
Terabytes and terabytes come down from orbit every single day, soon to be probably petabytes.
We're talking about job creation for a whole class of folks that don't necessarily need a degree to participate.
All the technology exists today to have that experience in an amazing, intuitive way, and yet it doesn't exist.
What does the next-generation water treatment engineer look like?
It's someone that knows their way around a robot.
In a matter of days, we'll say goodbye to 2024 and start 2025.
It's hard to believe.
It's been 25 years since Y2K,
21 years since Facebook was founded,
and even nearly 10 years since OpenAI was brought to life.
So if it feels like things are moving quickly, you're not alone.
That's why every year, we ask our partners, who are meeting every day with the people building our future, what they think is in store for the following year.
Last year, we predicted...
A new age of maritime exploration.
Programming medicine's final frontier.
AI post schemes that never end.
Democratizing miracle drugs.
And on deck this year, we'll be exploring...
Infrastructure independence and hypercenters.
Super staffing for healthcare.
Regulation will become code.
And throughout this four-part series,
you'll hear from all over a16z, including American Dynamism, Healthcare, Fintech, Games, and more.
However, if you'd like to see the full list of 50 big ideas, head on over to a16z.com/bigideas.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.
Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
Today, we focus on the increasingly interesting intersection of hardware and software.
Up first, we're seeing a renaissance in technical disciplines that cross the hardware software chasm.
The robots are coming.
Someone will have to build, train, and service them.
That was...
Erin Price-Wright, a general partner on the American Dynamism team.
And here's Erin's big idea.
In the 2000s and 2010s, if you weren't coding, it seemed like you'd get left behind.
The number of computer science majors exploded while degree programs like mechanical engineering
and electrical engineering shrunk on a relative basis.
Now we're beginning to see a crucial shift amid the push to reshore
manufacturing, the mass retirement of skilled workers across unsexy industries like water
treatment, commercial HVAC, and oil and gas, and the rise of autonomy across defense,
enterprise, and consumer applications. People are building physical things again. It's really
exciting to see. And that's going to require this full-stack skill set of people who can cross
the hardware software chasm. So I think we've really seen over the last two decades a mass
migration of engineers to the software and computer industry. And what I think we're going to see
is that engineers in areas like electrical engineering, mechanical engineering, and
controls engineering are going to be in really high demand over the coming year as industries
from defense to industrials, manufacturing, even things like HVAC and water treatment are
looking for people that are really ready to wrestle with and bring AI software into these
really complex hardware contexts.
Over the last few decades, we have seen a lot of people migrate to software, and now maybe
we're seeing a shift. Can you speak to the macro trends that are really underpinning that?
I think the last 20 to 25 years has really been the software is eating the world, not to steal
our own tagline, but it's really true. Most of the growth in the economy came from the software
industry. So with first the rise of software as a service and then more recently the kind of
explosion of AI, the skill sets that have really been in demand have been software focused.
But with the kind of very recent crossing of the chasm from software to hardware, there's
huge demand for engineers that can both speak software and speak hardware. Yeah. And as we do feel
this pivot back, are you seeing that in the data when it comes to the degrees that people are
getting? Are people no longer getting software engineering degrees at the same clip, and maybe choosing
to take a mechanical engineering degree, for example? We're not seeing that necessarily yet in degree
programs, but we're seeing that in the demand in companies. And so I think that's going to filter
out into what people end up studying in school. So the highest demand jobs in our portfolio
today, other than perhaps AI engineers, are people who have full stack hardware and software
experience, whether it's mechanical, electrical, et cetera. And what we're finding is that actually
those companies are having to go outside of traditional tech feeder schools in order to find that
talent. So we're actually seeing an interesting rise of schools like Georgia Tech, Colorado
School of Mines, University of Michigan. Some of these really hardcore engineering programs are
really coming into talent pipelines for some of our top-tier tech companies in the portfolio,
which is really interesting. So interesting. And let's actually touch on that
specifically, this idea of us needing all of these new kinds of engineers, some of them previously
existed, but there seems to be a gap. And there's this question of who's going to train these
people. Is it the four-year degree program? Is it something different? There's definitely a
spectrum. And I would frankly credit Elon Musk a lot, both from the SpaceX and Tesla
perspective, for really training up engineers that have fluency in hardware and software.
As more companies that are building physical things grow, companies like Skydio and Anduril and
others are really taking on that mantle and training a new generation of full-stack engineers
that are really comfortable and fluent across hardware and software. I do think the university
systems or some sort of education system has to catch up. I expect that schools like Stanford
and MIT have great engineering departments outside of computer science programs. I'm hopeful that
those programs will continue to grow as the demand and the market continues to grow. And then I also think
It's not just engineers with four-year degrees that are going to be valuable.
Like we're talking about job creation for a whole class of folks that don't necessarily need a degree to participate in this sort of reshoring AI or autonomy-driven hardware economy.
So technicians who will help service and test robots on the manufacturing floor.
That is a well-paid, secure job of the future, I think.
Another example that we're really excited about is robotic teleoperation.
So you have someone remotely operating a robot in an environment where maybe it's dangerous or hard to get to or just really difficult to staff humans.
It's really difficult doing robot teleoperation.
It's a valuable skill set.
I expect that there'll be an entire kind of class of jobs related to that arising over the next few years.
And again, that's an example of something you don't need a four-year degree to get really good at robot teleop, but it's a very valuable skill if you do it.
Yeah, and there's no current university degree that would offer that, right?
Are there any other jobs, new jobs that stand out that maybe we don't even see quite yet?
I think that when you look at, for example, the recent results that came out about the TSMC factory in Arizona,
the governor, Katie Hobbs, has introduced this big apprenticeship program where they're helping to train new semiconductor manufacturing employees alongside folks who have come over from TSMC.
So with the rollout of the CHIPS Act, I expect as chip manufacturing comes back to the U.S.,
there's an entire class of jobs that we just have literally never had here.
Maybe we did for five years in the 60s right at the beginning of the industry,
and I expect with reshoring we'll see a lot of those coming back.
I think another example more in the industrial sector is you have lots of industries
that hired people when they all modernized roughly in the 80s.
So there's a large class of them: it's oil and gas, it's water treatment, it's chemical engineering
and HVAC. There's a class of these heavy industries that introduced a lot of new equipment
and new ways of operating in the late 80s, hired a whole bunch of people to work on it,
and then essentially never really had to hire anyone else. They're great jobs. There's low
turnover. They're well paid. And now we're seeing kind of mass retirement across some of these
heavy industries and companies really looking to incorporate more autonomous kind of control systems
in the way they manage some of these processes, but there's no one to operate them.
So what does the next-generation water treatment engineer look like?
It's someone that knows their way around a robot, a control system, a PCB board.
They're going to have to know their way around sensor data, and they're going to have to
be able to understand how all of that integrates together to be able to troubleshoot when things go
wrong.
So huge opportunities in some of these industries where we're just seeing massive labor gaps
over the coming, I don't know, three to five years.
Wow. And that's pretty soon, actually.
Yes. It's happening now.
Yeah. And so that sounds like an incoming shortage. Are there any other shortages as we think about the supply and demand? Maybe not so much completely new jobs, but existing jobs.
The thing that really stands out in our portfolio, especially when you look across industries like aerospace, defense, robotics, is this notion of a full-stack engineer. And I don't mean a full-stack software engineer. I'm talking about a full-stack
hardware engineer. So someone that can write some code, they can write some firmware, they can
fiddle around with electronics, maybe design a really simple sort of PCB with components,
figure out how it fits in a mechanical system, sort of troubleshoot when things go wrong.
There are just very few people who have that kind of end-to-end skill set. And many of these
industries are still relatively early on their robotics or autonomy journey. So there's still a
lot of iteration to go around. Things aren't fixed in time. So the biggest demand that we're seeing
in our portfolio is that fluency across multiple different domains that includes hardware and
software. So it's not necessarily that you need to be the literal world's best electrical
engineer and have a Ph.D. in electrical engineering, it's more that you're flexible and
comfortable and familiar moving across different parts of the stack. You're able to unblock yourself
and you're really able to try things and implement things yourself as all of these companies are
figuring out what the next generation of hardware systems looks like. Fascinating. And as we think
about closing that gap, right, producing more of these people who are full stack in the hardware sense,
what do you think is needed? And let me preface by giving the example of: we saw this wave of software engineers over the last few decades. And that was pulled by the market in many senses. You had these large companies like the FAANG companies providing a lot of really impressive benefits for these people to come in. And it became the kind of job that everyone wanted for a period. Do we need the same kind of, you could say, marketing? Are there other effects that you think need to influence the larger system?
I think a lot of it will be market driven.
It's also about capitalizing on this sort of romantic desire to build stuff that I think is in the air right now.
So how do companies take the magic that happened when we all got up at 5 a.m. and watched SpaceX catch a massive multi-story rocket with chopsticks in the air?
That's so cool. And I think the companies that can figure out how to translate that feeling
into pulling people who might otherwise go down a safer route of, oh, I'm an ML researcher,
so I'm going to go work on LLMs because I know I can go get a really big paycheck at one of these
research labs. I'm a smart person and I'm going to go code a chat bot. I think it's up to these
companies that are building in these hard and gritty spaces where it is harder because having to
worry about hardware is harder than dealing with software alone. But if they can pull that talent and
inspire people to really get into the guts of what does it mean to deploy a model on a physical
system and what is the complexity and challenge in that, I think it's like capturing the lightning
in a bottle that exists in the moment right now and turning that into early movement from the
pure software or pure AI industries. We're already seeing this with larger robotics labs that
have gotten well funded over the last 12 months or so. They're managing to pull really incredible
talent. I think the companies like Anduril and Waymo and Skydio and others are pulling incredible
talent still. And that's going to hopefully translate into more and more 18-year-olds being
inspired to take a mechanical engineering class or build a robot rather than just study pure
software like they maybe have been for the last decade. Now, since you just mentioned the kind of
18-year-olds who would be starting their skills journey from scratch, do you also think that
for folks who are older, who have maybe gone through their career already partially, is there a
role for them to play in re-skilling and entering the workforce maybe for a second time? Yeah, for sure. I think
I see the rise of autonomy, and the benefit that's going to bring to the labor market in terms of manufacturing jobs in the United States and in terms of high-skilled technical work replacing low-skilled labor, as a huge job market opportunity. The question is how to manage that training process and training pipeline.
I think it's going to take a lot of different forms, whether it's apprenticeship programs or teleoperation outposts.
I think we're going to see a whole bunch of different things emerge.
But for people who are curious and like to build and are highly technical, even if that doesn't
mean that they've gotten a college degree, electricians, for example, like there's a huge shortage
of electricians in places like Texas and Georgia, et cetera.
I think I was talking to someone from Microsoft recently who said that as they were standing
up some of their new data center work in Georgia, they employed at one time, one third of
the registered electricians in the
state, which is crazy. A third? A third. So there's just a huge dearth in some of these
skilled trades that are going to become really important as the AI and autonomy expands across
the United States. Absolutely. And even in your big idea, you cover so much ground, all kinds of
jobs, all kinds of industries, whether it's mining or energy or things like autonomous vehicles.
Everything physical that we interact with, or even everything physical that affects our lives that we
don't even know about, is going to be impacted by autonomy over the next decade. And that's a lot of
jobs to fill. Is there any particular area within that that you think is especially important
to get right? I think that it's really important that we figure out how to build things again
in the United States at scale in production. And I don't think that's going to happen
with the way sort of labor economics work globally if a huge part of the way we manufacture in the future is not driven by autonomy. And it's kind of a chicken-and-egg problem: we need the robots to build the factories, and we need the factories to build the robots, because right now all the components for robotic arms, everything that might go into an autonomous factory, is all coming from Shenzhen.
Right.
So how do we start bootstrapping
this supply chain native to the United States or the U.S. and our allies, so that we're less
fundamentally existentially reliant on another power. But we have to start somewhere, and I think
it's going to take people who really want to build and get stuck in and don't mind the hairy
complexity of having to deal with AI that doesn't quite work and hardware that doesn't
quite work and supply chains that aren't quite ready, but are willing to just tackle that problem
vertically head on to do it. Absolutely. And this was a big idea for 2025. So as we prepare to start
that journey, what are you looking out for? What are you thinking about? What do you hope to see?
I hope to see more founders and teams that are willing and ready to tackle this problem
head-on vertically. So they're not outsourcing their hardware. They're not outsourcing their supply
chain, but they're really recognizing that in order to build something new here in the U.S.,
they have to figure out all of those things and have to develop skill sets across all of those
categories and become kind of generalists. I think something Elon Musk has done really well,
not to just be an Elon shill on here, but it's something he's done really well, is to really own the kind of
end-to-end supply chain for birthing something into the world.
And I think we need more of that in founding teams.
We hope to see many more founding teams in 2025 crossing that hardware software chasm.
One frontier with a flurry of activity is space with...
The number of Earth observation satellites doubled in the last five years from 500 to over
1,000.
There's more data than ever downlinked to the Earth, and it is easier than ever to access
imagery, although still far too difficult.
That was...
Millen Anand, an engineering fellow on the American Dynamism team.
Millen thinks there's an opportunity in 2025 to harness all this Earth observation data.
My big idea is really around building verticalized Earth observation tools.
There's been a lot of work over the last decade or so to actually get a lot of Earth observation sensors
and set up the infrastructure to send pixels down to Earth from space, which is really a remarkable task.
For 2025, I'm really excited to see entrepreneurs build verticalized solutions that go into different industries,
and actually solve customer problems.
There has been a boom in Earth observation satellites going up.
What's really driving that growth?
I think first off is launch costs and access to space.
In decades previously, it was very, very difficult and expensive
to send satellites up to orbit.
These days, it's honestly getting as easy as sort of like a bus service
with things like SpaceX's transporter launch service.
So the cost per kilogram of satellites getting to orbit has really, really come down.
Also, architecturally, satellites have changed a lot in the last few decades.
They're no longer sort of school bus-sized $100 million per unit satellites.
They've come down to the sizes of a loaf of bread.
And so with that, you get a lot of proliferated sensors.
You get a lot of coverage over the Earth and costs have really remarkably come down.
And then I think the last thing I would point to is communications infrastructure.
There's been a lot of work to set up ground stations and to actually have satellites communicate with each other in space
to have a more efficient way to send pixels down to the Earth.
And so as we think about those economics, and
the economics have come down so much on getting the satellite up there,
how does that ultimately ladder up to the applications that can be built and the economics around that?
To me, it's a really exciting positive flywheel here.
The more satellites you have up there, the more pixels are collected of the Earth's surface every single day,
and the more products you can actually build and use cases you can unlock.
Prices sort of come down, and that allows and unlocks entrepreneurs to really cheaply solve problems for the entire Earth at once.
Since we're talking about economics, maybe you can give listeners a sense of what those are,
like how much does it cost for us to get this data, process it, or for real end applications to be made?
So the unfortunate answer is that it depends a little bit.
Broadly, there's actually a huge amount of freely open data.
So NASA's Landsat program provides free open data for the entire Earth.
The European Space Agency has Sentinel, which also provides free data.
And then there's a lot of commercial companies as well who provide medium resolution, where
the prices range in the sort of $1 to $5 per square kilometer range.
And so that's really not too expensive.
And then it tends to be more expensive as you go up into the high resolution 30 and 10 centimeters per pixel.
But those are typically for more specialized use cases.
And I think it's not very well known that it can even be this cheap.
And there's a whole suite of archive data that goes back years and years from these companies
who have collected data from orbit every single day for years and years.
That's a gold mine waiting to be really leveraged.
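To make those prices concrete, here is a quick back-of-the-envelope calculation in Python, using the roughly $1 to $5 per square kilometer range for medium-resolution commercial imagery mentioned above; the monitoring area and weekly revisit cadence are purely illustrative assumptions.

```python
# Back-of-the-envelope cost for medium-resolution commercial imagery,
# using the ~$1-$5 per square kilometer range mentioned above.
area_km2 = 50 * 50              # illustrative: a 50 km x 50 km agricultural region
low_rate, high_rate = 1.0, 5.0  # dollars per square kilometer (assumed range)

per_collect_low = area_km2 * low_rate
per_collect_high = area_km2 * high_rate
print(f"One collection: ${per_collect_low:,.0f} to ${per_collect_high:,.0f}")

# Weekly revisits for a year (52 collections)
print(f"Weekly monitoring for a year: ${per_collect_low * 52:,.0f} to ${per_collect_high * 52:,.0f}")
```

Even a year of weekly coverage over a sizable region lands in the six figures at these rates, which is part of what makes verticalized products plausible to build.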
And we've seen different private entities, public entities, governments, universities, really pour in resources to get a lot of this data.
But speaking of the applications that in some cases do already exist, what are some of those?
There have been some really, really exciting applications so far.
I think agriculture is one of the ones that is sort of most talked about.
Farmers across the world are really using Earth observation data to monitor their crops, predict crop yields,
and understand ways that they need to irrigate or fertilize their crops to really produce
better outputs here. So I think that's really, really exciting, especially as we face food shortages
across the world. Other applications include defense. Governments across the world use it to monitor
things like troop movements, ships and ports, and fleets of their equipment across the world. Energy
use as well. There's a lot of interesting work that can be done in forecasting clouds over
large-scale grids and looking at how much solar production will happen on any given day, and in the planning
of utility solar farms, like looking at land and where you can actually place utility solar farms.
Obviously, this is your big idea for 2025.
Why do you think this maybe has been under-explored to date,
Earth observation in particular?
I think it's really, really difficult.
And I think that it's not so easy to work with the data.
Companies are making it a lot easier these days.
It's sort of a new thing that you can even go on to a website
and buy an image through a portal.
You used to only be able to talk to a salesperson to be able to do that.
There's a lot of open-source work.
There's a lot of companies chewing off different parts of the puzzle.
The main thing in my eyes that I'm looking
forward to in the next year or two is entrepreneurs actually going into specific industries
and really taking solutions vertically.
So in one sense, if you take something like SpaceX and they have Starlink, and they're the
ones who create the satellite, put it up there, and then also sell the data that people
can use.
Are you basically saying that you expect to see more of that: specific solutions for, let's
say, like you said, agriculture or energy, instead of one company putting the satellite up there
and then other companies in the middle distributing that data, processing it, et cetera?
It has traditionally been hard for incumbents and the actual satellite manufacturers
to go into industries like agriculture and really get granular in terms of solving problems.
Just an example of that would be like automating any sort of farming equipment.
It's hard for an existing Earth observation player to go really, really vertical
all the way straight from orbit to the farm.
And right now that's traditionally stopped somewhere around providing analytics
or insights, but what I'm looking for in the future is to actually provide automation.
And maybe we can drive a lot of that farm equipment, for instance, or actually irrigate
fields directly through a closed-loop Earth observation system.
Maybe a follow-up there is we've obviously seen a bunch of machine learning and AI tools
come up in the last few years.
Does that change the game in terms of being able to parse data, process it, make sense of it
for these verticalized players, for example, or how does that really reshape the ecosystem?
Certainly, yeah.
I mean, terabytes and terabytes come down from orbit every single day, soon to be probably petabytes.
Wow.
And we don't have enough humans on the Earth to actually look at all those images.
So we really need advanced techniques here.
And I think there's been some exciting progress here.
There's the really famous example from a couple years ago, where a company actually found the Chinese spy balloon by training their model with an AI prediction of exactly what the balloon might look like.
And they were able to sort through pictures of the entire U.S. taken over days and days and actually track
down the source of the spy balloon for the first time. So we really can't do that with humans.
And I think new use cases like this will be unlocked every year or so.
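As a rough illustration of that kind of automated search (a minimal sketch, not the actual system; the detector below is a hypothetical stand-in for whatever model such a company would train), the basic pattern is to break huge scenes into tiles, score every tile with a model, and surface only the hits for human review.

```python
import numpy as np

def iter_tiles(scene: np.ndarray, tile: int = 256, stride: int = 256):
    """Yield (row, col, window) tiles from a single-band image array."""
    h, w = scene.shape
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            yield r, c, scene[r:r + tile, c:c + tile]

def score_tile(window: np.ndarray) -> float:
    """Hypothetical stand-in for a trained detector; a real system would run a CNN here."""
    return float(window.mean())  # placeholder score, not a real object detector

def find_candidates(scene: np.ndarray, threshold: float = 0.9) -> list[tuple[int, int]]:
    """Return pixel offsets of tiles the detector flags for analyst review."""
    return [(r, c) for r, c, w in iter_tiles(scene) if score_tile(w) >= threshold]

# Random data standing in for one downlinked scene; real pipelines run this
# over millions of tiles per day, which is exactly why humans alone can't keep up.
scene = np.random.rand(2048, 2048)
print(f"{len(find_candidates(scene))} candidate tiles flagged")
```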
And as we think about the roadblocks or the challenges that may be on the road to
this Earth observation economy, you could say, really proliferating. What are those?
One of the most pressing is how difficult it is to work with the data. I think right now,
typically it still requires specialized knowledge of orbits and different types of sensors
and how they are calibrated and correlated.
I think it's not so easy to apply techniques from one Earth observation constellation to another right now.
We almost might need some sort of middleware here where we can abstract away the nuances of each Earth observation constellation
and make data sets that are really sensor agnostic for non-space engineers to use.
Changing this from only very specialized climate scientists and GIS scientists
to any sort of ML engineer being able to use these techniques.
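To give a flavor of what such middleware might do (a minimal sketch under assumed band names and scale factors; nothing here is an authoritative spec for any constellation), the core job is to hide per-sensor quirks behind one analysis-ready representation, so downstream models never need to know which satellite the pixels came from.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Scene:
    """Sensor-agnostic container an ML engineer can work with directly."""
    reflectance: np.ndarray   # shape (bands, h, w), scaled to 0-1
    band_order: tuple         # e.g. ("red", "green", "blue", "nir")
    gsd_m: float              # ground sample distance in meters

# Illustrative per-constellation quirks the middleware would absorb
# (band naming and scale factors here are assumptions, not official specs).
SENSOR_SPECS = {
    "sentinel-2": {"scale": 1 / 10000,
                   "bands": {"B04": "red", "B03": "green", "B02": "blue", "B08": "nir"},
                   "gsd_m": 10.0},
    "landsat-8":  {"scale": 2.75e-5,
                   "bands": {"SR_B4": "red", "SR_B3": "green", "SR_B2": "blue", "SR_B5": "nir"},
                   "gsd_m": 30.0},
}

def harmonize(raw_bands: dict, sensor: str) -> Scene:
    """Map raw band arrays from one constellation into the common Scene format."""
    spec = SENSOR_SPECS[sensor]
    order = ("red", "green", "blue", "nir")
    name_for = {common: native for native, common in spec["bands"].items()}
    stack = np.stack([raw_bands[name_for[b]].astype(np.float32) * spec["scale"] for b in order])
    return Scene(reflectance=np.clip(stack, 0.0, 1.0), band_order=order, gsd_m=spec["gsd_m"])

# Usage: downstream code only ever sees Scene objects, regardless of source sensor.
raw = {name: np.random.randint(0, 10000, (64, 64)) for name in ("B04", "B03", "B02", "B08")}
print(harmonize(raw, "sentinel-2").reflectance.shape)  # (4, 64, 64)
```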
What about regulation?
I mean, I think about maybe regulation mostly impacting the satellites that go up there.
But maybe you could give us a wider perspective of: is regulation hindering us at all,
whether it is to get the satellites up there or utilize the data, proliferate it, etc.
Yeah, there's been a lot of exciting regulation changes recently.
I think most famously, NOAA lowered the restriction for the maximum commercial resolution
from 30 centimeters per pixel to 10 centimeters per pixel very recently.
So this kind of unlocks a lot more high-resolution products.
And there's a whole suite of applications that you can start to target.
On the imagery sort of side, I think that there's been a lot of conversation about licensing and different data sort of rights.
Right now there's a lot of complex contracts where customers will buy exclusivity and there'll be 24 hours where nobody else can access an image.
Yeah.
And it really hinders folks that are trying to build products with like daily applications or daily monitoring.
So I think licensing and opening up archive data and making it easier and less restrictive to use could be a good regulation change as well.
Where are you looking next year and what applications in particular do you find really exciting as we look toward the future?
I'm really excited, honestly, about the energy sector.
I think that's one of the main areas that we can really make a difference with Earth observation data.
We already touched on predictive analytics for solar farms.
There's a lot of work with renewable wind sources as well.
So I'm just looking for entrepreneurs to take Earth observation and really solve our most pressing challenges with these new data sets.
Speaking of new data sets, we're at a unique juncture
where we finally have the data and technology to build entirely digital worlds.
But also, these virtual simulations are increasingly having a direct impact on what we can do in the physical world.
Anduril leverages game engines for defense simulations.
Tesla creates virtual worlds for autonomous systems.
BMW is incorporating AR in future heads-up display systems.
Matterport revolutionizes real estate with virtual walkthroughs.
That was...
Troy Kirwin, a partner on the games team at a16z.
And here's his big idea.
The big idea is games are essentially virtual simulations.
And those virtual simulations have been designed for fun over the last couple decades,
but increasingly we're going to be seeing them used in the real world
for all kinds of use cases, whether it's training and learning and development
or training grounds for robotics and other autonomous systems,
or visualization to allow folks to see things come to life in real-time 3D.
Amazing. And I love this prediction because we've actually seen versions of this from the past
and this idea that we all have games and the gaming industry to thank for a bunch of technologies
that exist outside of gaming. So can we talk about those from the last couple decades maybe?
And also, are there any that you think maybe are overlooked? Everyone references GPUs today
as maybe one example. But are there others? Totally. People forget that Nvidia was a
gaming company; almost all of the revenue in the early years was for gaming graphics cards.
These new processing units for computationally intense matrix multiplications, which were great
for rendering images and animations and videos. But then we soon found that it was useful for
things like cryptocurrency mining and of course now... It feels like everything, digital biology.
The idea of this accelerated computing is now being used just about everywhere.
Totally. And I was looking back at some of Nvidia's early
websites, and the headline was "the future is 3D," and it's so funny that 25 years later
while it's been slower than one would have hoped for this real-time 3D to intersect
with all these other industries (and we'll talk a little bit about why we think now is the right
time), you know, if we go back, the 90s was about text on the internet, the 2000s was about
images, the 2010s was about video, and we feel pretty strongly that the 2020s is going to be
about interactive 3D and gaming technology being used in the enterprise.
Super interesting.
And maybe just to take a step back, why is it that games or the gaming industry and the
technologies that are derived from that?
Why is that a crucible for innovation?
I mean, Jensen said himself that he allowed consumer spend to fund the R&D to bring
it to what it is today.
And I think that that's an interesting lens to think about gaming technology.
In the gaming industry, technology innovations are celebrated.
It's new technology, whether it's new platforms or new features or evolutions, that allows new game designs to emerge and flourish.
And at the end of the day, like the gaming community, both players and developers, it's a hacker mentality.
And so it's no surprise that that's where big breakthroughs have emerged in the past and we're going to see them continue to emerge.
Yeah.
And some of those breakthroughs aren't always obvious as breakthroughs.
A good example is multiplayer, right? Multiplayer has existed forever in gaming. And then it took a while
for that to really penetrate, or really companies were built off the idea of multiplayer. You take
something like Figma, right? So on that note, today in the gaming industry, there's still lots of
innovation happening that maybe again in a decade or so we'll see elsewhere. And so let's talk about
those tailwinds. You talk about three in your big idea. Maybe you can just talk about each one and
how you're seeing that reshape things. Before a16z, I was at Unity for close to five
years and got a front row seat to seeing how all these various industries were beginning to
experiment with real-time 3D for some of the things that I talked about before, whether
it's like visualization for architects to be able to walk through their design before it's
constructed. And they can see if there's errors or other sort of imperfections that they
wish they had known about when they were designing. Or for automotive manufacturers, they use
real-time 3D for the design, but also for virtual test drives.
And now the sort of heads-up displays that you see: Rivian's is powered by Unreal, BMW's is powered by Unity.
And then there's the virtual training, whether it's for heavy machinery operations or other operations tasks.
But some of the bottlenecks for a lot of these use cases that seem so obvious are really some of the same constraints that game developers have faced.
And so it's bottlenecks on the content creation side. Within a game studio,
more than half of the spend goes towards creation of the assets and the art and the content that goes
into these virtual simulations. And the same is true for these non-gaming use cases, except they don't
have 3D artists on staff to build those. And so now when we have AI for asset generation, whether
it's images or audio or now 3D assets, it makes that so much easier. So that's one. The second is
for 3D capture techniques.
So, of course, for a lot of these non-gaming use cases,
they want to capture the physical world
as it's built and as it's seen.
There is a correct version.
Yes.
In a way.
Yes.
And there's been technologies in the past that have allowed this,
things like photogrammetry,
or in the case of Matterport, for instance,
where it's basically just a 360-degree image,
but you can't actually interact with the environment
the same way you can with a video game.
Well, now there are newer technologies: NeRFs, or neural radiance fields, from a couple years ago,
and more recently, other radiance field technologies like Gaussian splatting,
which allow consumers to capture in a much more efficient manner,
and it's photorealistic, lifelike.
And it's immediate, right, in terms of the capture?
Exactly.
And so it allows these use cases to be unlocked.
So it's the second.
And then the third is for some of these non-gaming use cases,
this is where we're going to see the prevalence of VR and XR and being able to, going back to the construction or architecture example,
put on a headset and see how the BIM model overlays on the construction site, or for medical surgery simulation,
or other use cases like this.
And as we get better, lighter headsets with eye tracking and other amazing technologies,
there's still lots to come in terms of development there, but I think that's going to unlock some of these.
Absolutely. And as we talk about all three of those tailwinds, so again, the content creation, the capture that you mentioned, and then the devices, it feels like each one of those has their own cost curve. And we're traversing down that cost curve pretty quickly across all three. Can you speak to the economics there? I mean, you touched on it a little bit in terms of even games, you said 50% goes toward content creation. So how quickly is that dropping? And then same thing for the devices.
Yeah, it's interesting. Particularly for these non-gaming use cases, for some of which photo
realism is everything, and that's why as Unreal and other 3D engines have progressed towards
photorealistics, these use cases have been unlocked. But for other use cases, actually, you don't
really care what the BIM model looks like so long as it has utility for you. And so as some of these
asset classes get up to par with what they would expect to use for these, the cost drops
dramatically. But more importantly, I think, particularly if you think about virtual simulation
and virtual training use cases where, let's say, we wanted to train our workforce on maintenance
and repairs for a robot or some other piece of equipment, well, you would build this experience
and you would fund the development of this virtual simulation. But then after the fact, if the team
wanted to update it or add content to it, they'd have to go back to the outsourced agency
who built them the original digital twin. And now they'll be empowered
to do that themselves internally.
And so this content and curriculum doesn't go stale,
but they can constantly improve it and update it over time.
So what you're pointing at is it's not just a one-to-one,
how are the economics of creating one thing?
How is that changing?
But also how it's integrated into the entire system.
That's really interesting.
Let's talk about applications.
You've already touched on a bunch,
but you've mentioned several companies,
which are very different, right?
Anduril, Tesla, BMW,
and then you've also talked about workforce training.
Tell me a little bit more about those applications and where does it end?
Or is it really, we're seeing it everywhere.
Autonomy is deeply rooted in these virtual simulations.
Anduril is a great example.
Funny enough, Anduril's first acquisition was a game studio.
Really?
Which would be surprising for a defense tech company.
I guess if you take Palmer's past.
True.
But they were interested in acquiring it for the game engine that this studio had developed.
And they used that technology for strategy simulation and other
autonomy workflows. And then with other companies, so Applied Intuition as an example, it's just
impractical with the scale of training data that you need to capture this in the real world. And so when
you have these virtual simulations, you can not only scale the amount of data, but also the
fringe and edge cases that you would never be able to experience or capture in the real world,
whether it's extreme weather or human intervention that is one in a thousand situations.
But of course, for these things to be deployed, they need to take into account all of these
edge cases.
I remember when we talked to Waymo a year or so ago, they were talking about that, how they
ingested all of the crash documentation, which exists somewhere on pieces of paper, but
not in the real world in a way that a Waymo can necessarily interface with at that moment, right?
But again, like you're saying, these virtual environments allow you to simulate it.
And so speaking to that, one way to put
what you just said around Applied Intuition is that you can actually do something new that you
couldn't do before with the ability to simulate at scale. Are there other downstream opportunities
or like second, third order effects that you can think of that we get from these virtual
environments? Totally. While in the past, we had the ability to use these virtual simulations
for physics, training environments, or the learning and workforce development that we talked about,
these were mostly either physics simulations or hard skills. But now,
with what we call AI NPCs in the gaming context, whereas before NPCs were scripted,
now with autonomous agents and LLMs, these agents can take on a life of their own.
They can observe the environment, they can reason and plan, and then they can act.
Well, when you have a multi-agent simulation, now when we think about the next pandemic response
or immigration policies and how those impact a civilization,
we're going to be testing these in a virtual environment
with these agents who can interact with each other
and decision-tree out all these different developments.
Yeah, and instead of just decision-tree on paper,
what you're getting at is that we actually get to simulate these ideas
that existed in the ether.
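To make that observe-reason-act loop concrete, here is a deliberately toy multi-agent sketch in Python (the rule-based reason step is a placeholder for where an LLM-backed agent would plan, and the infection dynamics are invented purely for illustration).

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    infected: bool = False

    def observe(self, world: dict) -> dict:
        # Each agent sees an aggregate view of the shared environment.
        return {"rate": world["infected"] / world["population"]}

    def reason(self, obs: dict) -> str:
        # Placeholder policy; an LLM- or planner-backed agent would decide here.
        return "stay_home" if obs["rate"] > 0.2 else "go_out"

    def act(self, action: str, world: dict) -> None:
        # Going out carries infection risk proportional to current prevalence (toy model).
        if action == "go_out" and not self.infected:
            if random.random() < world["infected"] / world["population"]:
                self.infected = True

agents = [Agent(f"agent-{i}") for i in range(200)]
agents[0].infected = True

for day in range(60):
    world = {"population": len(agents), "infected": sum(a.infected for a in agents)}
    for agent in agents:
        agent.act(agent.reason(agent.observe(world)), world)

print("infected after 60 days:", sum(a.infected for a in agents))
```

Swapping the reason step for an LLM call, and the toy infection rule for a richer environment, is the jump from scripted NPCs to the kind of policy-testing simulations described above.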
So far, a lot of the applications you've mentioned
have been more enterprise-focused, right?
A company like Anduril or Tesla,
gaming obviously existed to begin with in the consumer sphere.
And so do you see more consumer applications already also coming up?
So one of the ones that I've just been so excited for is I just moved into a new apartment.
And as I wanted to plan out the space, I was still using grid paper and pen.
Same.
Despite the fact that we've had the Sims for 25 years where we can, in a 3D environment, drag and drop furniture and see how it fits.
And all the technology exists today to have that experience in an amazing, intuitive way, and yet it doesn't exist.
But we should be able to. We can scan our space
and develop a digital twin of the 3D environment,
we should be able to show it
design inspiration that I find from Pinterest,
have it find the pieces of furniture or the artwork
that most closely match my inspiration,
fill the scene, and then either be able to walk through it
in a virtual world or use augmented reality
and see how it fits into your space with your dimensions.
Like a 3D Wayfair, if you will, where the end consumer has a lifelike digital twin visualization of their space.
So looking to 2025, so far we've talked mostly about technologies that have been invented over the last few decades.
But there's obviously this wave of new technologies that are really exciting, haven't really found their footing necessarily in terms of applications.
Is there anything you're paying attention to there and maybe how that intersects with gaming?
Yeah.
There's some really interesting research and work being done in the HMI human machine interaction space, where you can imagine all kinds of different use cases.
But as with most emerging tech, there's probably going to be initial use cases in gaming that are the wedge for these companies to use consumer spend to fund their R&D, similar to Nvidia.
So obviously, Apple Vision Pro made huge progress this year with eye tracking.
But we are soon going to see BCI-type
technology that reads energy signals from your brain to actually control and interact with the
computer and the virtual environment. So we can think about VR use cases where I can use
strictly my brainwaves to interact with the scene, which is amazing. And then the inverse is true
too. And we've seen technologies that allow sort of sensory or digital touch based solely on
wearing a ring on your finger for increased immersion in the virtual world, which is sort of like
the dream of every gamer to be able to be fully immersed with that haptic feedback, not just
in the game controller, but actually throughout your body.
All right, I hope these big ideas got you geared up and ready for 2025.
Stay tuned for parts two, three, and four, where we discuss...
The search monopoly ends in 2025.
On device and smaller generative AI models.
Romanticizing inorganic growth.
Again, if you'd like to see the full list of 50 big ideas,
head on over to a16z.com/bigideas.
It's time to build.