a16z Podcast: Breaking Into Bio
Episode Date: May 2, 2018, with Atul Butte (@atulbutte), Daphne Koller (@daphnekoller), and Vijay Pande (@vijaypande). Whether you’re an academic seeking to move out of research and into industry, or simply interested in working at a bio startup, this episode of the a16z Podcast is for you. It covers everything from how to build a brand in the space when you don’t have one, to how the bio and healthcare startup ecosystem is different from traditional tech (or traditional pharma), to how to choose the right co-founder -- or even identify what problems to solve and build a company around. The discussion (which is based on a recent event at Andreessen Horowitz) features Atul Butte, Distinguished Professor and Director of the Institute for Computational Health Sciences at UCSF; and Daphne Koller, founder and CEO of insitro (former professor at Stanford, co-founder of Coursera); in conversation with a16z bio team general partner Vijay Pande. Together, they provide practical how-to's -- for those coming from machine and deep learning backgrounds, but also for anyone, really -- for how to break into the bio space.
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others.
Please see https://a16z.com/disclosures for additional important information.
Transcript
The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com slash disclosures.
Hi, and welcome to the a16z Podcast. Whether you're an academic seeking to move out of research and into industry or simply interested in working at a bio startup, this episode of the podcast is for you. It covers everything from how to build a brand in the space when you don't have one, to identifying what problems really need solving, to how to build a company around them. This episode of the podcast is based on an event held at Andreessen Horowitz, with Atul Butte, Distinguished Professor and Director of the Institute for Computational Health Sciences at UCSF, and Daphne Koller, founder and CEO of insitro, formerly a professor at Stanford and co-founder of Coursera, and is moderated by a16z bio general partner Vijay Pande.
So first I want to talk about, for those thinking about founding your own startup, you know,
how do you connect the dots from going from academia to a company?
A lot of times people have really amazingly interesting technology, but there's this big gap
between technology to a product and product to a company.
What criteria would you advise people to look at to say, you know, this is something I really think I should turn into a startup?
Like, how do you reach that bar?
So it's tempting to start companies, right?
The thing is, though, that especially in bio and medicine, we see so many superficial companies.
One of the harder things, compared to just generic machine learning, AI types of platform companies,
in this field, you have to go deeper.
It means learning the vocabulary, it means really spending time with the folks to truly learn what the pain points are, right?
You think you've solved a problem that's a pain point, but you don't really know what the pain point is.
So, for example, we have a lot of apps where you can take pictures of skin and do something with them, deep learning on moles and cancer, because that seems like a very intuitive kind of problem, but that's an easy one. We have many, many harder problems in biology and medicine, but they take time to learn. So you have to be patient. Some of that time is spent in coursework and hanging around labs and rotations and PhDs and stuff like that. But it's worth it because without that
right pain point, it's tough to really start an effective company that's really going to solve
something that people actually need solved. One of the things that you really need to do in this
space is connect with your target customers and figure out what problem it is that they really
need help with. And I think it's easy to underestimate in this field the complexities and the
barriers that prevent adoption, even for something that seems like it's a no-brainer. It obviously
makes things better, but there's so many obstacles in terms of the bureaucracies and the approvals,
not only government regulations and so on, which are much more significant in this space than
in many other applications of tech,
but also just in terms of
we've always done things this way.
We really always have.
And it turns out that in many of those cases,
even really simple methods
actually work much better
than the current standard of care, state of the art.
And the barrier to adopting those things
hasn't been that people haven't recognized
that those are better methods.
It's been that the system is really complex.
So you really need to take time
to understand not only
what your technology can do, but also how it fits into the larger ecosystem, and to what extent
you can circumvent the other obstacles to adoption. It's not really about the machine learning
inside the box. It's about how do you get it so that the physician doesn't even have to think
about how to use your system. It just happens naturally as they're doing their rounds. They
write on little sheets of paper. And that's the problem, not the machine learning. When I think about medicine, it's literally this dot-matrix-printer, fax-machine kind of land. And we're not talking about
DNNs versus logistic regression at that point. And so the challenge then becomes, if you're sort of
coming out of an academic lab, how do you even just know what the problems are? I mean, you may have
guesses, but unless you spent time in a hospital system or payers or providers, and you know,
the go-to-market for healthcare in general is probably the hardest go-to-market. So how does one even
gain that knowledge? And it's crowded, too, right? So a lot of us have passed around this CB Insights logo tree of 108 companies doing deep learning and machine learning, and that's already like three years old, right? I mean, it's probably like five times that number now. It takes time, it takes patience, it takes experience to go deeper, to get to more interesting problems. I think there are other kinds of silly things that companies do sometimes that are unnecessary, like saying things like physicians are going away, right, with AI and deep learning. Yeah, that's the greatest way to make us not want to accept your product, right? That kind of happens within microseconds.
And that one is both naive and, like, just plain stupid.
Yeah, exactly.
But I think it's really important to realize that this is not your typical, you know,
you have three people sitting in a garage, writing a web app in the cloud,
and it goes on everyone's phone and it goes viral.
That is not this space.
This space is you have to deal with a lot of different stakeholders.
They have a lot of history, a lot of entrenched interests,
and you're not going to be able to just break in by having an app that goes viral.
Yeah, it's also a very conservative space, right?
When we graduate, you know, we are reciting oaths that are a thousand-plus years old
that include phrases like, do no harm, right?
And then you're asking us to try something new, right?
So it's a very conservative space.
And you have to learn what a BAA is, what an IRB is, what HIPAA is.
You cannot just walk in and not know what those are.
One thing I tell a lot of folks is, you know, how to get started with the startup.
So here's the problem.
You're surrounded with Stanford and UCSF, which are like the best of the best here, right? So you're thinking this will be an easy sale, but you're talking to two of
the best in the country. It's a tough first customer to have, right? There are other hospitals
around here, medical systems and practices that are not Stanford and UCSF, and that's the first
account you should try to get, not Stanford or UCSF. I always send people to El Camino Hospital in Mountain View. That's a great little hospital. They're partnered with neither of the two,
but they're a tech-oriented hospital. But if you can get something to work there and get a white
paper out of it, and they're your reference account, we'll answer the door if you knock then.
You really need to spend serious time in either a hospital or an existing company that actually has that as a market, or you get a co-founder who's had that.
Because for someone who comes in from a technology background,
you have this cool, shiny new technology that you want to apply,
but you don't understand the problems and you don't understand the path forward,
I think it's a very, very tough trajectory to follow. One way to handle the crazy go-to-market that is healthcare is to get a co-founder.
You know, when you're thinking about sort of starting a company and you think you have the pieces that sort of reached that level, you know, how do you pick the co-founder?
To me, picking the co-founder is probably one of the most important things that you do.
It's the thing where changing co-founders later is probably one of the most painful things you do.
And so how do you find that right magic?
How do you find the right person?
So first of all, as it relates in general to picking co-founders, as well as picking some of your earliest employees, which is the next step beyond that, you need to be really, really honest,
brutally honest with yourself about what you do not know, about the parts of this world that
you're completely unfamiliar with, and then be willing to go out and get people who complement
you. And that, by the way, is not just technical knowledge. So, for instance, if you're more
on the medical side and only lightweight in machine learning, you want to get a machine learning
person and vice versa. But it's also on all those other things, like the go-to-market,
understanding the space. And then even beyond that, the management skills. All of these things are things you can learn, but it's a lot easier if there's someone at your company
who's been there, done that. These things, none of them is rocket science, but figuring out all of
them at the very beginning while you're also trying to get your business model and your business
partners and the technology off the ground is just really hard. So at the beginning, build a team
and think about how to get someone who has experience that complements your own. To put it in a kind of formulaic way to describe it:
The co-founder has got to have skills you don't have, right?
A major skill could be going after funding.
They know how to make a story and make a pitch,
especially if you don't have that skill.
Their determination should be greater than or equal to your determination, okay?
If it's less, there could be issues down the road, I think.
And their risk tolerance should be greater than or equal to your risk tolerance.
But what if they're using those criteria with you?
With me?
This does have a solution.
This does have an equilibrium.
You're bringing something to the table, right?
You're absolutely right.
I mean, so there's a greater than or equal to. And at least you've got to perceive that, right?
That they're willing to go all the way with you.
Because, you know, in the medical world, we talk about survivor bias, right?
So we write stories and glorify all the survivors of cancer, right?
We certainly have a survivor bias with companies in the Bay Area, right?
You see every day, all the survivors, we never hear about all the ones that don't work, right?
And that's the majority still, right?
And so it is hard.
And just, you know, a Bayesian prior would tell you you're going to fail.
So you have to beat those odds here.
And that takes a lot of luck, skill, determination, and patience, especially in biomed.
I think some of the hardest things are the unknowns, which you don't even know to ask.
You know, what are the counterintuitive things or unknown unknowns that you don't think we got to
that you think people in the room should know about if they want to be a founder of a company?
The most frustrating one is really competition.
We surround ourselves with the kind of bubble filter here in the Bay Area.
We think if we don't see someone on our campus with a company like ours, we're free and clear, you know, freedom to operate, whatever.
Competition will come from anywhere and everywhere.
There are smart people in many campuses in many countries now.
And it's super frustrating, and you will just get beaten down when you see the competitor getting 10 times the funding that you thought you were going to try to get, and they just closed a round.
And then you're kind of narrowing and narrowing what you do.
Learn what others are doing, not to kind of steer around them, but just to be aware, like, what is the pace you're going to have to keep up with?
What are the milestones you're going to have to get to?
Because, especially in the computer and AI and machine learning world, there are many people getting trained now. And they all have ideas for companies they want to start.
There are going to be collisions of these.
As you think about founding a company, one of the questions is where to seek funding.
A lot of people select funders based on who's going to give them the most money at the highest valuation, because they think about their percentage of the pie.
What you really want to think about is not your percentage of the pie,
but the total value of what you get,
and I think to me, even more importantly, of what you build.
So I would much rather have a smaller piece of a larger pie, because even if it's financially neutral,
you've built something greater, you've impacted more people,
you've changed more lives.
So as you think about who you bring on board
and the dilution you take from that,
who you get funding from and the dilution you take from that.
Think about who's going to help you make your pie the biggest and most successful that it can be.
And they are the ones that will often tell you the unknown unknowns, whether it's your co-founders, your early employees, or investors who've seen dozens of these companies; those are people who can really save you from making very bad mistakes
and can tell you about those unknown unknowns.
I think a lot of people make a big deal over shares and options and things like that.
And I think it's important to pay attention to some of those things.
But if it's successful, it's going to be successful.
If it's not, it's not.
I think in general, what I've seen is if you're going all out to be involved with the founding team of one company,
you're probably going to do more than one.
And I think having one under your belt, the second one is the one that people always kind of glorify in some weird way.
I don't know how to explain that well, but, you know, get a base hit for the first one.
You'll get a home run on the next one, right?
So don't freak out about the first one, right?
Yeah, yeah, yeah.
It's like pancakes.
Yeah, it's a good analogy.
So some people may decide that they don't want to be a founder of a startup, but they may want to join.
Yes, yes.
Yeah, yeah, yeah, and they may not immediately be a founder.
That's actually an excellent point.
But they may want to immediately go into joining a startup.
Joining a startup is still a very sort of opaque process.
You know, how do you find the right startup?
How do you network?
So let's just start off with, like, how do you pick the company?
Like, how do you develop the criteria and understand, like, this is the company you want to join? Especially when it's so early, it's not like joining Facebook or Google
or something like that where there's an obvious track record. If you're new into this space
and you've never been at a company before, the amount that you have to learn, both about
running your own company and about go-to-market strategy and all those challenges that we
talked about earlier, it's really hard to do it at the very start. And so a path to founding
your own startup, even if that's where you're headed, could well be first spending a few years at somebody else's.
And so I think that's just something to think about.
Even if you're really entrepreneurial in nature,
do you really want to do it right now, right out of school,
or do you want to do it in five years,
and when will you be most successful now or later?
In terms of picking a startup, I would say pick the one
that you wish you had thought about.
Pick the one where you were like, wow,
this is such a cool idea,
and I'm so excited to be part of that.
And then at the same time, think of this as,
do I really want to spend time working with the founder
or founders of this company
because you're going to be spending a lot of time
with this person or these people
and it's going to be tough times because this is not Google.
Most startups are not money printing machines
and you will have times when you think
you're about to hit bottom and go deeper than bottom
and you need to be willing to stick with that person
and trust them that they will
be able to get you out of whatever mess it is that the company has dug itself into.
And I guarantee you this will happen at every startup.
There will be moments when you think you're hitting bottom.
So you have to go in with that realization,
and you have to go in with the trust that the person that or people who are leading this company
are there for the long haul,
are going to be willing to do hard things to get the company on the right footing.
And you have to really believe in the vision,
because it's going to be really hard if you don't.
To me, if I had to pick a company,
it would be one that goes after an important problem.
I think there are many important problems
that need solving in the world.
I think that's probably a high criterion.
And then that if this company solves it,
it's going to be super significant for the world, right?
That's a kind of amplifying effect.
When you're in the beginning of a company,
it's pretty clear you're at the beginning.
They don't even have a table or desk or nothing for you.
So I think that in some ways,
a lot of that comes down to your own risk tolerance.
Not everyone has the same level of risk tolerance.
And we shouldn't all think everything is equal there, and don't feel pressured or threatened into doing something just because someone else would do it,
because everyone's risk tolerance is different.
I'll say something which may be inflammatory,
but it is generally my observation that a lot of people go to grad school because they're actually not very risk tolerant.
Yeah.
You know, that grad school is a comfortable place to be.
It's something where there has to be this shift from sort of doing something which was a safe thing to do to something that sounds crazy.
For me and like my parents, for me,
to say, like, oh, I'm going to, like,
MIT grad school because I'm going to do a startup.
They were like, what?
But yet, like, looking back on my life,
that was, like, the time to do it.
So, I don't know, the risk tolerance is a great topic, I think.
I mean, like, how do people even choose the risk tolerance
or what advice would you give them?
I was asked today, what advice would you give to your, you know, 21-year-old self who started a PhD at Stanford?
And the advice that I would give is, really,
this is a time and a place of amazing opportunities.
The opportunities are boundless.
Think big.
Be willing to do something really significant, really impactful.
Because at the end of your life, when you look back, that's the thing.
I don't know very many people who regret having tried for something big even if it failed,
but I know a lot of people who regret never having tried.
And so I would go ahead and do it.
And honestly, I'm going to disagree here.
Your risk is not that high.
As in, you have an amazing skill set in machine learning today; that skill set's not going to go obsolete anytime soon.
So if you go and do the startup thing and three years later the startup fails,
sure there was an opportunity cost for those three years
that you could have spent maybe doing something that had more remuneration,
more success, but I think the opportunity cost of not trying to do something
that you think is really meaningful is much, much larger.
Definitely. It's a point which I commonly tell people in career advice, and that I even see in my own career: I feel like my biggest mistakes were not trying for more.
And like 80%, 50% of amazing
is much better than 99% of good.
Fantastic.
So let's talk a little bit about machine learning
in biology, something where I think a lot
of us share interests.
And you know, I heard this rumor
that apparently there was machine learning
before deep learning.
No, that's just not true.
I don't know.
I mean, have you heard this rumor?
I mean, what?
I think you.
It's fake news.
Fake news.
Yeah.
No, I mean, yes, I was doing machine learning back in the mid-90s before there was this thing called deep learning.
I mean, there were neural networks.
At the time that I was teaching machine learning back in those days, we used to say that neural networks are the second best way to do just about anything.
But that's actually, that's a deep statement, actually, because to do anything.
Yeah, yeah, yeah.
And I think, you know, to some extent that was true.
It was sort of, if you had to take something out of the box and not think about it very hard and not really engineer your model,
then you could throw a neural network at it and would do decently well.
Over time, we got to the point that lots of other things would do equally decently well,
like kernel machines and random forests and so on and so forth, would probably do about the same.
And then we hit a saturation point.
And the reason we hit a saturation point in terms of how well machine learning models were doing wasn't because there weren't smart people around thinking about new, innovative things; it was really because we hit a plateau in terms of the amount of available data. And there's only so much you can do if you have a thousand
samples. You can innovate and innovate and innovate, but there's only so much performance that
you can eke out of that. When I started Coursera back in 2011, 2012, a big data set was a couple
hundred samples. That was really big. Now we're in a world where big
data in biology is actually a reality, data that is being collected in large human cohorts,
as well as data that one can produce in laboratory settings. We can now engineer, perturb, and measure model systems in an amazing range of different ways that really allow us to uncover new science using machine learning in ways that we just couldn't before, because you can generate millions of samples in a matter of a few weeks as opposed to a matter of years. And what has happened in so many of those fields where machine learning is now
transforming entire sectors, images, text, speech, video, is that the amount of data is now almost
limitless. And that allows different models to now start differentiating from each other.
Now you can have a model that is really much better than anything else because if you think
about how to capture the right structure and all of a sudden you have enough data to really
refine that. So I think that's a really important thing. And we are hitting that era in biology and
health. We're not there yet. Our data sets are still way smaller than the ones that you see in images
and text and so on. And isn't that the problem though? It's a lot harder to deal with less data.
So this is a more challenging space, you all. You're going to have to be more creative in some ways than a lot of the people who are lucky enough to be working on recommender systems for advertising, where there are unbelievable amounts of data.
And so this is an interesting point for us.
There is enough data that machine learning models
can really start differentiating
and some are going to do much better than others.
But we're still not in the large, large data regime where, you know, blind architectures that don't exploit the structure of the problem can just work out of the box.
So you really have to understand your problem domain and figure out how to exploit the structure that you have in these biological or medical data sets to eke out those percentage points of improvement that are going to make your system stand out relative to everybody else's.
Yeah, so I'll go even further into the distant past.
There was AI before machine learning, right?
I mean, so Stanford, Ted Shortliffe, early 1970s, came up with MYCIN, which was a recommender system. This was all rules-based stuff. There were literally New England Journal of Medicine perspectives in the mid-1970s saying computers and AI were going to just really revolutionize medicine. And replace doctors. Right? And so it's very instructive to realize, though, that there was that big cliff and a desert. And right now, in the middle of this, we can't imagine how that could have been, and could we have another one of those? But there was incredible over-promising, and still things didn't catch up. Lack of data, lack of integrated data
could be one of those that could slow things down a lot. And when Daphne says being creative,
I'm going to take that further, you have to be creative to convince people with data to share and
collect and aggregate. I think one of the big risks that we run as a machine learning community
is the incredible amount of hyperbole that's going on right now, where it's like we're going to
have general intelligence right around the corner. We're not. Okay, we really aren't. As you
start your own company, do not
over-promise. It's much better to
under-promise and over-deliver than the other way
around. That's right. We haven't done
this before. It's very experimental.
We think we could try
and make headway on this problem, but we don't
know we're going to achieve this level of performance.
Be really, really
careful because this desert
is not a theoretical possibility.
Let me push back on that, because when you're Atul Butte or Daphne Koller and you're going to say, I'm going to try, you can, like, sort of soft-sell it and people will have some optimism. But if you don't have that brand, you know, I think it's a tricky balance.
A lot of companies in this space really ignore the importance of branding. Branding equals trust,
okay? You're going to sign, for example, a BAA, a business associate agreement, where we're going to give you some data, for example, imaging data or whatever. And yes, there's a legality to that, but the paper doesn't do much. We have to trust you. And so why should this be a trustable company?
Super important point.
Don't ignore that important aspect in a startup.
To answer the specifics of how do you build that brand,
you want to be really rigorous and scientific in every claim that you make.
So publications might seem like a dispensable thing
now that you're no longer in school.
Publications are your way to get the stamp of approval by trusted peers,
and we all know that the peer review process has its issues,
but it's certainly better than not.
So invest the time in doing rigorous science, being really careful about your controls, being
careful about how you generate and present your data.
And again, without hyperbole, publish it in strong peer-reviewed venues.
And then the other way to get branding is to go and talk to experts in the field and get
their stamp of approval by having them be on your scientific advisory board or whatever
or even just tell their friends that you can be trusted.
Do not underinvest in this and be sure that your foundations are sound.
I mean, we all know Theranos, that's an extreme example,
but the fact that they never had a peer-reviewed publication,
they never presented their data in any way,
they kept even potential customers from looking at the raw data.
I mean, those are all really bad things for a company to do,
even if you're not Theranos.
So what should people expect in terms of real-life machine learning, sort of, in a startup?
You know, when you're in academia, you want to try to develop, you know, a beautiful new method.
Whenever I use this reference, I think about Raiders of the Lost Ark, where Indy is, like, dealing with the guy and the guy is, you know, doing the swordsmanship and he just shoots him.
You know, it's usually it's not the elegant, gorgeous thing that is the best.
It's the thing that gets the job done.
And so it seems at odds with a lot of the way we train people in academia.
So what is real life machine learning like in a startup?
I tend to run my lab like a startup.
And sometimes, especially the CS folks I run into,
are so concerned about accuracy and F scores
and sensitivity and specificity,
you enter these competitions every year
to try to boost point one or point two.
I'll tell you one funny thing,
and maybe this will get me criticism,
I forbid anyone in my lab
to enter those competitions.
Because that's playing someone else's game.
The game runner, they're the ones who win those games, because they've got you, like, answering their questions.
There's so many unsolved problems out there.
The hardest part in everything we do is figuring out what is the question to ask. What is the pain point where you realize this is askable and answerable?
This is modelable now, right?
Five years ago we didn't have the data, now we do.
That's way more important than the accuracy, sensitivity, specificity.
If you figure that out and kind of defend your advantage in some way, that's the way to go.
The single most important thing is what is the question that you're asking.
After that, the first thing that you should do is to try the simplest possible thing that you think has a chance of addressing it.
And I will share an anecdote that I heard from a colleague at the NIPS board meeting just recently, of someone who came in to interview as an intern, and she posed him a problem and asked, what would you do to solve this? And he was like, well, there's an LSTM, and then the LSTM feeds into the recurrent neural network, and then there are three convolutional layers. And it's like, okay, no, what is the simplest thing you can do?
Well, we could take out the LSTM.
And then you're like, well, how about just a plain neural network?
It's like, you can do that?
Really?
So try the simplest thing and then figure out what is the metric of performance that you actually care about?
The area under the ROC curve is rarely the thing that you actually care about.
That was devised for radar back in the 50s, okay? Receiver operating characteristic, that's where the name comes from.
What you might care about is specificity at a given sensitivity.
It depends on your application.
So think about what it is that you actually care about
and then ask yourself: when you go from your logistic regression, which is, by the way, a single-layer neural network, to the next level beyond that, does it improve the metric that you actually care about, the one that's going to make a difference to practitioners, not to some competition, not to the graphs that you present in the paper?
But will someone care about the fact that you brought this number up from this much to this much?
Will it make a difference in clinical practice?
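[Editor's note: the following is a minimal sketch, not from the episode, of the workflow the speakers describe: fit the simplest plausible baseline first (logistic regression, effectively a single-layer model), then evaluate the metric you actually care about, such as specificity at a fixed sensitivity, rather than only the area under the ROC curve. The synthetic dataset, the 95% sensitivity target, and the helper function are illustrative assumptions, written here in Python with scikit-learn.]

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a modest clinical dataset (real data would come from a partner site).
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simplest reasonable baseline: logistic regression, i.e. a single-layer model.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = baseline.predict_proba(X_test)[:, 1]

def specificity_at_sensitivity(y_true, y_score, target_sensitivity=0.95):
    # Specificity (1 - FPR) at the first operating point whose sensitivity (TPR)
    # reaches the target; the target itself is application-dependent.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    idx = np.argmax(tpr >= target_sensitivity)
    return 1.0 - fpr[idx]

print("AUC:", round(roc_auc_score(y_test, scores), 3))
print("Specificity at 95% sensitivity:", round(specificity_at_sensitivity(y_test, scores), 3))
# Only reach for a deeper model if it improves this clinically meaningful number,
# not just the AUC or a leaderboard score.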
Maybe just end with two last quick things.
Let's say you're a computer scientist doing machine learning and you're excited about the biology space.
What advice would you give to them to break in?
If you're a machine learning person, I would say look for a company where you have that continuum of skills,
which allows you to kind of move
and learn more of the biology
without needing to necessarily come in
and know it all at the very beginning
because the biology is a very, very steep learning curve.
There is an infinite amount of stuff to know.
I've been doing this for about 18 years now
and I still feel like a novice
in terms of the amount of biology that I still don't know.
So find a place where there are people around you who are this far out from you in terms of being closer to the biology, but not that far out,
because those people over there probably will not understand you
and you will not understand them.
So you need people in the middle
that can sort of bridge those gaps
and help these people communicate better.
One thing I think, especially CS students,
never take advantage of,
is the fact that we have these academic medical centers
right on the same campuses.
So if you've just heard that the hardest part is figuring out what the pain point is, what's the unsolved question: they have this amazing concept at academic medical centers called Grand Rounds.
And they bring in an expert, right?
And they talk about a disease, and it's maybe 30 minutes of everything you know about the disease.
The last 10 minutes says, this is everything we still need in this disease.
We don't have a this, and we need a this, and we suck at this.
Go to Grand Rounds, right?
Go to the academic medical centers, see the seminars.
There are a long list of seminars.
Join those mailing lists.
Learn the lingo, learn the vocabulary. And a lot of the vocabulary isn't necessarily the hands-on skills, it's more what the thought process is. I spent one year in a wet biology lab during
medical school. And so I used to go to seminars, and one biologist would be trying to convince
another biologist. And one biologist will say, you know, here I've shown that this protein
interacts with this other protein. And the second biologist will call bullshit, I don't believe you,
right? How do you know? And then the first biologist would say, I did a co-immunoprecipitation.
Note to self, right? When this biologist is trying to convince that biologist, they did something
called a co-immunoprecipitation, right? I don't know how to do one, but I know if that's what
you're trying to prove, that's what you do. You want to know, how does one biologist convince
another? How does one physician convince another, right? What are the levels of proof and evidence
they use in that field? And even if you don't know how to get there, you can at least learn
what you should be aspiring to. And don't be afraid to ask really stupid questions. One of the
things is if you're working at the boundary between two disciplines, you need to go with the confidence
that you are an expert in your domain
and it's okay for you to appear
like a complete idiot in the other one
because if you're not going to ask those questions
you will never know the answers.
So ask what is co-immunoprecipitation?
I want to add one quick thing on that, right?
We have this Silicon Valley chutzpah.
Let's learn some Silicon Valley humility
to ask those questions, okay?
We have so many folks that come in and say,
oh, you guys don't know anything in medicine,
we know everything.
Don't go in that way, right?
Have some humility when you're asking that question.
We'd love to answer them.
That's an awesome place to end it, and I think we have a little bit of time if people have questions.
So we've talked about exciting questions that startups should ask.
If you have two startups that are asking the same question, how do you judge between the two of them?
You mean for funding, let's say, right?
Or if you're trying to join one.
Coursera or Udacity?
I don't know.
Should I join?
So, Coursera and Udacity is an interesting example because these are two companies that started out asking the same question
and ended up going in very different directions.
So I think part of the answer is they likely won't end up at the same place.
But I think ultimately most of the success of a company by far is not about the quality of the idea.
It's about the quality of the execution.
Absolutely.
Absolutely.
And I've seen companies with ideas that are fairly mundane, seem fairly mundane,
but they executed the hell out of it, and they will generally thrive much more than the people who have, like, a completely awesome, out-of-the-box idea and they just totally flubbed it, and, you know, they lost focus. Oftentimes, sadly, creativity is anti-correlated with focus. Ultimately, you want to make the judgment not just between two companies that have equivalent ideas, but between companies that have ideas that are equally exciting to you. I would look very closely at the quality of the team and the quality of the execution you expect from them.
Yeah, I think that's a great point.
Also, it would affect the enjoyment of being there.
There's just a fundamental difference being around A plus people
versus A minus versus B plus and down.
It just changes the experience dramatically.
I think one distinction between the biomedical space and, let's say, the vision or NLP space is, if you're finishing up your degree or early in your career in NLP or vision, you can get the data pretty easily to work on a problem that you're interested in. Daphne said earlier how, as opposed to when she started her academic career, there's way more data out there for biology now. The thing is, it's behind this massive wall in a lot of places, at big companies.
So what's your recommendation if you don't have the brand
of a Daphne Koller or an Atul Butte?
Are we really at the stage
where we can found companies and get the data?
Or does one really need to work with more established people
to do that?
The easy way to break in is go after smaller hospitals,
whether it's El Camino or the East Bay hospitals
in our area or the like.
But I think one thing that a lot of companies do, and a lot of folks do, actually in the Bay Area, is to not pick the partners well, okay?
U.S. healthcare system is a $3.2 trillion a year system, 3.2 trillion.
And so we look at this as engineers: boy, this is such an old, creaky, inefficient system, and there are gears, and they're so rusted; if I add oil in the right place, the gears will go smoother, and they will save money.
Friction is the wrong model for what needs to get fixed in the health care system.
It's resistance.
Somebody makes every dollar in the $3.2 trillion economy here.
They like making that.
They like making that money.
I see a lot of companies that come and say, yeah, we're working with payers, and now we'd like to work with you as providers.
Oh, my gosh, why would I want my data to go to the payer, right?
You have to understand the dynamic that there are competitive natures here and just,
Learn what they are, and don't be naive about that.
So pick a partner.
Either you're going to go after providers or payers.
You're rarely going to do both as a startup.
My kind of worldview of the system now is you've got pharma and devices in one corner.
You've got providers in one corner and payers in the other corner.
It's strangely missing businesses and patients.
And we can talk about patients.
They're the least powered in the system.
And at any one moment, two of them are ganging up on the third, right?
So pick your players well, what your first one, two, three, N customers are going to be. It will not be a mix of these, right?
If you think you're going to get a mix of these,
something's wrong with the model.
It's not going to happen that way.
Because the minute I see that logo of another player,
I'm not going to put my logo on your site either, right?
So be super careful about that.
Get experience and get an idea of how competitive the spirit is.
Going into the space is a long-haul game.
This is not a game you're going to win in two to three years.
And so you want to be patient in how you approach this.
So right now you might not have the credibility to go to a large hospital or a large pharma company and say, hey, give me all your data.
Sorry, but you might not have that brand. But if you go in with humility and say, well, I'm willing
to work as a consultant for six months. And I'm going to do it on your side. I'm just giving
that as one trajectory, it's not the only one. And so I'm going to solve a problem for you. Tell me
what problems you have. I'm going to come in. I'm not going to require, you know, a huge valuation or whatever. You gain experience. You figure out what the problems really are, which are usually not the things that you thought going in,
you develop a case study. One thing I would ask as you go into these relationships is less
about how much they're going to pay you and more about will they let you use the results as a
proof of concept when you go try and pitch this to other customers. Think about the long term,
not the short term. And so over time, you build a better understanding of the space. You build
more credibility. You build customer success stories that you can go
provide to someone else. And then you go and pitch your company and build it around that much
deeper level of knowledge. So the more track record you build both via that type of customer
interaction and via scientific publications that people can read and respect that are peer-reviewed,
that's what gets you that credibility. That's great. I mean, that's, I think, some of the
best advice I think one could give and also not incompatible with what you could do while in
academia in terms of industrial collaborations. You mentioned that in terms of data sharing with
companies, you typically don't want to share data. Like, if somebody's gone to Stanford, then you're probably not going to want to share your data with them, right? But what if it's something that's very disease-specific? You're working with one provider to get info on, like, diabetes patients versus working with someone on, like, COPD. Yeah, that's easy.
For example, an easy model might be an app that gives advice, data-driven advice, to chronic
disease management. Something like that, you could possibly get data out of multiple different
systems. You'd find the right kind of clinician, and that clinician is willing to go through
their food chain to get permissions to give out the data. There might be some shares involved or
some license involved. That's only money and paperwork. But in the end, you could get multiple
competing institutions to give you data. Where it gets harder is sensitive subjects, like how much
we charge, how much do we get remunerated, you know, quality of care, right? Things like that. We're
super sensitive about that. What kind of talent do you see as being in shortage? Like, is it most difficult to hire people with a biology background who also have, like, machine learning and computational expertise, or, like, the other way around, computer science experts who also know biology? Or is it, like, the product managers, or, like, the clinical experts, domain experts? My unicorn here would be the intersection of two fields.
And my critical one is AI, machine learning, even analytics, and knowing something about medicine. So, for example, you know, a straight CS graduate student could run an amazing cancer data set, and up at the top is some amazing gene, and they have no idea what to do next, no idea what that means. Or they're looking through all my lab test data, and a major important lab test shows up there, and they don't realize it; they don't have that insight.
That's painful to me, right?
I need them to have that insight, too.
So those are my unicorns.
They don't have to be the very best at CS, but they should be really good at two fields.
I would say that biologists who know how to program their way out of a paper bag are also
really scarce on the ground.
And actually, there's more of them than you might think,
but the demand for them is unbelievably high.
Every pharma, every academic lab knows that they need people like that.
So while there might be more of those than the unicorns that Atul was talking about,
the demand for them is exceptionally high.
Absolutely right.
And maybe I would just add, you know, what happens if you don't have this?
And you try to substitute this with an A-plus computer scientist joined with another person who's an A-plus biologist.
The problem there is they just do not know what the other one doesn't know.
And unless they're, like, telepathic, this is not going to be nearly as good as the one person who is both. Let's just say, it's hard to do. It's hard to do. And they can't communicate. They speak different languages. Their mindsets are different. They might both be well-intentioned, super collaborative, and they might as well be speaking, like, Swahili to each other.
I just had one probably naive question, which is it seems like a big challenge of starting a company in healthcare is getting around the bureaucracy in the United States. What are your thoughts on starting elsewhere?
Estonia has a blockchain for medical records, I think.
We went so far without mentioning blockchain.
Shoot.
Look, I mean, so other countries have solved a lot, but they're tiny countries, right?
I mean, you know, okay, so there are simpler kinds of subsets in the U.S. The VA health system: eight million, a single EMR system, more or less consistent rules. At the University of California, we have a total of 15 million patients that we have raw EHR data on.
So that's every drug, every dose, every vital sign,
every lab test result for 15 million people.
So that's about 5% of the U.S. population getting some care in the University of California system.
So if you're trying to go after data,
that's a pretty big data set.
If merely filling out these forms
gets you to what you need to get to,
my God, that's kind of easy if you think about it, right?
I mean, so it's not that hard.
Yeah, you could go to Estonia and Denmark
and all these great places that have great data sets,
but they're also small.
And they may not have the same impact as it would have in our crazy health care non-system here.
I mean, that's the potential.
And let me just add that those countries don't necessarily want to ship their data over to the United States.
And there are often a lot of subtleties in working in a particular culture and a particular system that tend to diminish the further away you move from that.
And it all seems nice and simple.
Grass is always greener.
And as you get closer, there are a lot of complexities that are very hard to appreciate from the outside
that are multiplied a thousandfold
by the fact that you don't necessarily speak the language,
you don't understand the culture,
you don't have any network of connections.
Maybe you do in Estonia, but I don't.
And I think it's a lot harder than you think.
And in fact, the very last a16z Podcast I listened to
was on the GDPR, right?
I mean, they have a lot more privacy rules.
You know, I now have to count how many patients
we have from Europe in the University of California.
I mean, just because they're smaller and they seem easier to deal with, they still have their own set of rules.
Yeah.
Well, with that, let's close the session and thank our speakers today.
Thank you guys.