a16z Podcast: What Technology Wants, Needs, Does
Episode Date: May 23, 2017
Turnabout is fair play: That's true in politics, and it's true at Andreessen Horowitz, given our internal (and very opinionated!) culture of debate -- where we often agree to disagree, or, more often, disagree to agree. So in this special "turnabout" episode of the a16z Podcast, co-founder Marc Andreessen (who is most often in the hot seat being interviewed) got the chance to instead grill fellow partners Frank Chen (who covers AI and much more), Vijay Pande (who covers healthcare for the bio fund), and Alex Rampell (who covers all things fintech). None of the partners had any idea what Marc would ask them. Putting them in the hot seat at our recent a16z Tech Policy Summit in Washington, D.C., Marc asked them policy questions such as the implications for tech of the American Health Care Act or AHCA (which itself was being hotly debated that exact same day, just a few miles away); the role of regulatory arbitrage; and what happens to companies big and small if Dodd-Frank is repealed. Oh, but they also covered so much more: the pros and cons of using tech to "discriminate" for better risk pooling; the role of genetics in addiction (can/should it be used to determine risk?); the opioid crisis (can tech help?); applying AI as a "salve" for everything (what's hyped, what's real, what's easy, what's hard?); the line between redlining and predatory lending (and where/when did sentiment flip?); and the ethics of artificial intelligence (beyond the ole Trolley Problem). Throw in a classic nature vs. nurture debate, a bit of 2-D vs. 3-D, and some fries (yes)... and the future arrives in this episode in 35 minutes or less.
Transcript
Hi, everyone. Welcome to the a16z Podcast. Today we have a special "turnabout is fair play" episode
with Marc Andreessen, who's always in the hot seat being grilled for answers, instead asking a bunch of tough tech and policy questions of partners Frank Chen, Vijay Pande, and Alex Rampell, who cover AI, healthcare, and fintech respectively.
They were put in the hot seat at our recent Tech Policy Summit in Washington, D.C., where they covered everything from the implications of the AHCA for healthcare innovation and Dodd-Frank for fintech innovation, and tackled a bunch more tough topics around using tech to discriminate
for risk pooling, nature versus nurture, or rather genetics versus behavior, addiction and the
opioid crisis, redlining versus predatory lending, and finally the ethics of AI and machine learning
beyond the old trolley problem. What will the future look like? I'm really, really fortunate
at our firm to get to work with some super bright people, and I think you have a sense of that
from earlier today, but this is the session where I get to ask the questions that I want to know
the answer to, so many of these questions I have not actually asked these guys before.
So this is my free fire zone.
And then to make it hopefully a little bit more enjoyable for me, they haven't been pre-briefed on the questions.
And so I'm hoping to get at least one look of shock and alarm along the way.
So Vijay, big day for health care.
The AHCA passed the House today.
So two-part question.
Part one.
So, you know, as you've been involved, as we're all involved, in all these different areas of innovation, new health care technologies, new business models for health care,
there's the sort of omnipresent question for anything new of how to insert it into the health system.
And then ultimately, for the new companies, who pays, kind of being a key question.
What's been the biggest impact of the ACA, in your view, on health care innovation,
and especially health care innovation in technology?
I think the ACA is part of a gradual evolution, a macro shift toward pay-for-value
versus pay-for-service.
And I think that is a huge thing.
And so pay-for-service is like, you think of health care as having someone, like, renovate your house
or something like that.
You know, the carpenter comes in, he puts up a wall, you get charged for the wall or whatever;
you're paying for the service. Versus paying for value,
you're thinking about, are we keeping the patient healthy?
And from a patient point of view,
I'd actually rather pay not for service.
You know, I'd rather not have to go through treatments.
And I want to pay for value.
I want to stay healthy.
And the key thing here is, if we align it right,
we can pay for proactive treatments or involvement
such that we can actually minimize the cost.
Yeah, I was going to say, is pay for value always better?
Yeah, I think.
Always better for the patient?
And then is it always better for everybody else
in the ecosystem? Or, whose ox gets gored?
I can't at the moment think of a counter example,
but I think there is this general idea
that if we keep people healthy, they'll be cheaper.
And that does seem to be generally the case.
From a systemic standpoint, at least.
Maybe on an individual basis,
some treatments don't, some procedures don't happen,
but then we should all be happy they're not happening.
Exactly.
So that I think is a huge thing.
I think we need to keep that going.
The idea that we can also, with enough information,
be, in the construction analogy,
the general contractor of our health and our body would be key.
And that also started off, I guess,
under George W. Bush: having larger deductibles meant that people were a little more
incentivized to take care of things. That combination actually could be quite powerful.
Right. And then, you know, granted, it's too early to tell what the true impact of the AHCA is,
because it's only passed the House, hasn't passed the Senate, hasn't been signed into law,
but in the event that it just simply gets passed and approved in its current form,
what do you think will be the biggest impact on what we all do?
One of the things, I think, reading between the lines, and, you know, it's just been a few hours,
is that the states will be given much more prerogative in terms of what they decide to do.
What we're going to see is a lot of disparity between whether you're in California or whether you're in Texas and so on.
This will be an interesting challenge, for starters, because they're going to have to deal with a much more heterogeneous system.
And that's the first one that I would worry about.
There's this concept of regulatory arbitrage.
The good news about state-level regulation versus federal is: with federal, if the government decides you can't do something, it applies to all 50 states; with states, if some allow you to do it and some don't, that can actually be better for startups.
Do you think there is a possible benefit here, or does the complexity end up overwhelming it?
You probably don't want health care done at like the county level, but at the state level,
in that sense, it could be an opportunity.
We'll see how it all shakes out.
Okay, got it.
And then this is a hybrid question.
This involves both health and fintech.
There are laws against using certain kinds of information, for example genetic information, for certain purposes.
Risk scoring has been a huge part of the Obamacare debate and the AHCA debate.
But a huge issue at the last minute with this was what to do about the high-risk patients, right?
And I guess, as part of the AHCA, now they break out some of the high-risk patients into
their own pools.
And so this whole concept of risk pooling in health insurance is kind of central to everything
that is happening right now. And then the nature of technology evolution, at least as I understand
it, correct me if I'm wrong, is that with genomics and quantification and all these new diagnostics
and all the rest of it, we're going to have much more of a technological ability to risk-score
by person than we've had in the past. There's a federal law called GINA, the Genetic Information Nondiscrimination Act, signed in 2008,
that prohibits the use of certain kinds of genomic data for the purpose of risk scoring.
There's an anti-discriminatory case for it, but the counter case is: we have this information
we could use and we're basically plugging our ears and choosing not to, which seems like an untenable
long-run position. And so I guess the question is, if we have much better individual-level
risk-scoring, does the current risk-pool model of health insurance, like, does it survive
in any form over 10 or 20 years, or at some point, does that entire concept just start to cave in?
I think the reason why the law might make sense is that I don't know if we have the technology
to go from genome to risk quite yet. So that's an important key. And if you did that incorrectly,
you know, you could have very nasty downside effects. People would not get health insurance
that they might need or vice versa.
So I think that would be an issue.
And also, environmental issues would be key.
You might have great DNA, but if you have every meal at a fast food place, that might not be great.
And actually, you know, who knows, maybe your phone GPS could register the check-ins at McDonald's and Burger King,
and then we would know your risk anyway.
So the big problem is that there's a long gap between genomics and risk, and actually the environmental factors are also so key.
But let me push you on that.
So I think 23andMe got re-blessed by the FDA, and one of the things that they'll give you,
for example, is increased risk of Parkinson's.
Yeah.
And I know a big concern on the FDA's part
with these personal genetic tests is what actual value
they have, but apparently they do have value,
or at least the FDA claims they have value.
And so how can the FDA claim they have value
in that dimension and then say that they shouldn't be used
for the purpose of...
I think this is a broader societal, almost philosophical question
of you have immutable traits, and in many cases those are genetic.
The problem is that they're not entirely deterministic,
I think to your point.
It would be very nice if you saw that this SNP
actually led conclusively to this particular problem,
but you might have...
I did 23andMe when it first came out, and it says,
you have a 0.15% chance higher of getting X.
It's not really deterministic.
But the idea of discriminating based on immutable traits,
I feel like we as a society agree is wrong.
And if you start up with that as your axiom,
you can follow a lot of different things,
at least for the deterministic ones.
It's like, if X then Y: we should not discriminate on that.
There are other cases where if it's behavioral,
you have a broader set of questions.
Because discrimination, as it's used
as a word in the English language,
is basically only pejorative.
You want to discriminate against people
that don't pay their bills on time.
You want to discriminate against people that smoke.
And insurance companies do that.
They say, do you smoke?
Yes or no.
That's the very first question that you ask,
if you're a life insurance company,
if you're a health insurance company,
because that is a choice that you're making.
You could argue that there might be a genetic component
to being addicted to nicotine.
Therefore, was it really a choice?
By the way, which there is.
Which there is.
So I totally get that, which is why I started saying,
it's not purely deterministic.
It would be very, very simple if you could just bifurcate the world into like immutable
traits, which are genetic, and we both have young kids, and I think about, like, wow,
you do have these immutable traits, and then you have other things that are behavioral-based.
And what you want to do is you don't want to have lopsided risk pools that are made up,
like, forget about the one that we've almost axiomatically agreed as a society that we will
protect.
You don't want to have a lopsided risk pool around behavior.
Because I see this in the lending space a lot.
Like, the only way that you reduce interest rates for people is you either discriminate
based on outcome-based payments, which often requires machine learning and
more data, or you're able to charge higher interest rates over time.
So if you have like a massive pool and 50% of people don't pay you back and you can't
filter that pool out anymore, then you have to charge a 100% interest rate in order
to break even.
And if you can't charge 100% interest, you just don't loan money to the whole pool.
And that's bad.
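To make the break-even arithmetic here concrete, here's a minimal sketch (my illustration, not from the episode): if a fraction d of the pool defaults, repayments from the rest must cover the whole pool's principal, so the break-even rate solves (1 - d)(1 + r) = 1.

```python
# Minimal sketch of the break-even arithmetic described above:
# if a fraction d of a loan pool never repays, the survivors must
# cover the whole pool, so (1 - d) * (1 + r) = 1, i.e. r = d / (1 - d).

def break_even_rate(default_rate: float) -> float:
    """Interest rate at which a lender exactly recovers the principal
    lent to the entire pool."""
    return default_rate / (1.0 - default_rate)

print(break_even_rate(0.50))  # 1.0 -> the 100% interest rate in the example
print(break_even_rate(0.05))  # ~0.053 -> filtering the pool slashes the rate
```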
Like I don't think that people should be denied access to credit because they're poor or
because they're part of a group.
So in that sense, even though it sounds bad, like, discrimination against the non-payers,
assuming that it's not based on immutable traits, like gender,
ethnicity, and whatnot, but it's actually based on past behavior, that seems fine. But the devil's in
the details, because what you might call an immutable trait might not be, and vice versa.
So it is. Because then, going out on a slightly longer time frame, I mean, there is more
and more neurological research showing genetic, or showing biological, origins of behavior.
My understanding, you tell me if I'm wrong, is there are genetic traits that predispose people
to addictive behavior.
Yep.
Such addictive behavior might include things like gambling, might include things like being a more dangerous
driver, and so if you go out 10, 20, 30 years, you know, there are behaviors that
we would obviously like to discriminate against, that all of a sudden we may start reclassifying
as non-free will-based.
Right.
But with CRISPR, we could just take care of that.
Ah, yes.
Do a little, do a little snipping, snipping here and there, as they say.
Yeah, yeah.
Let's move on.
Frank, so obviously AI's a hot topic in general and it's been a hot topic today.
What, in your view, is the most surprising AI upside thing, like, let's say over a 10-year period?
And it can't be like an obvious answer, like self-driving cars.
But like, what's the thing over the next like 10 years where we'll just be like,
wow, I can't believe that they could get it to do that.
So I don't know if this is going to come true,
but already what we have is we have AI
that can do creative things like create music.
But that's with training data.
It sounds like something you're familiar with, right?
So make it sound like jazz,
or make it sound like Baroque music,
or make it sound with this singer's voice.
What I'm looking forward to is,
could an AI create a brand new genre of music
that people find desirable?
Which is, it doesn't sound like anything today,
but it's just this breakthrough new genre.
So in 10 years, I think we have a shot at doing that.
Now, would that be scored according to whether humans like that?
Yeah, I think it would just be, like, is it at the top of the Billboard chart?
So maybe then the AI just gets very good at weaponized virality.
Yeah.
Gets very good at like marketing the next, yeah, figuring out like how to be the next Justin Bieber.
Yeah, that's exactly right.
So, you know, sort of true creativity, right, as opposed to derivative, which is: I'm going to remix something that it already sounds like.
Or would that be true creativity or would it just be that it starts to understand us really well and starts to learn how to really manipulate us?
Yeah, so the question is, did Mozart do anything different than that?
Yeah, yeah.
I don't know.
Yeah, exactly.
And then on the other side, AI has become like the salve that people rub on themselves to convince themselves that anything in the future is possible, and almost likely, and AI can do everything.
So what's the one thing that maybe people in the current dialogue might view as, like, yes, clearly AI is going to be able to do X, and in 10 years we'll still be sitting here saying, why haven't they been able to get it to do that?
Yeah. So first, just to echo your comment about AI being the salve. So this is my favorite conversation now to have with startup entrepreneurs. They all come in and they're like, I have an AI startup. And I ask, that's awesome, what machine learning techniques are you using in your software? And now they're kind of squirming, like, the good ones. It's become like that cliché, right? Five years ago, this was, I'm mobile-first or cloud-native. So this is the new flavor du jour, which is, I'm AI-centric.
By the way, the other tip-off is they'll have the slide that shows there are five areas of differentiation, and AI will be the fifth bullet.
Right.
So now that I got so excited about telling that story, I forgot the question.
So what's the wall?
Like, what's the AI wall this time?
Like, what will we not be able to get?
So chatbots are the applications that you go to either on a website or through Facebook Messenger or through Skype, and you basically have a chat conversation, right?
You're using text.
Most of those experiences are miserable because the second you wander off what it understands, it does some pretty
ridiculous things.
So, you know, there was a conference.
There was a chatbot conference, and the t-shirts all said, what do we want?
Chatbots, when do we want them?
I'm sorry, I didn't understand the question.
Best conference swag ever.
Look, because AI is so hot, the expectation around what the AI can accomplish is just unrealistic, right?
What you're expecting is sentience.
And what we actually have is just automation on steroids, which is, we can now automate things
that we couldn't automate before, because we don't have to painstakingly
describe the rules behind it. We're just letting the computer sort of figure out the rules
with data. But behind that, you know, the expectations are, wow, like I expect to be talking
to a normal human being, and we're not there yet. So we've all gotten up here on stage today,
and we've said, you know, self-driving cars are going to happen, and autonomous drones are going
to happen, and AI forecasting heart attacks and diagnosing cancer and all these things.
Like, what's so hard about chatbots? Like, why is it, why at this point, if we can do all these
other amazing things, what is it about language that makes it hard to do this?
This is an almost philosophical question.
So my favorite story from the literature on this is in the early days of natural language translation,
what they would do is they would take English, turn it into Russian, and then go backwards, right,
just to sort of make sure that you could close the loop.
So you feed in a sentence, the spirit is willing, but the flesh is weak.
Turn that into Russian, send it back into English.
And in the early days, you would get some truly hysterical results, and this is my favorite,
you get back, the meat is rotten, but the vodka is excellent.
Right. And you can actually see exactly how it made that mistake, right?
So one error of thinking is why is English so hard is because words don't mean what they mean,
or they only mean what they mean in a specific context.
And so we actually had generations of AI researchers who, having reached that epiphany, said,
okay, the answer to this is a million rules.
There's a professor originally at Stanford.
He said, oh, we're never going to be able to solve AI unless we can solve this corner case.
So I'm going to codify all of these rules.
Since the mid-1970s, he's been doing this, and now he has a system, and then a rule-processing engine that can actually figure out the priority of the rules.
Like, if I have two rules in conflict, which one is real?
So to give you a sense of the type of rules that he'd put in the system, when you go to a quick-serve restaurant, the rule is you pay first, and you get your food later.
If you go to a fancy restaurant, the rule is you order, get your food, and pay at the end.
Okay, so we must have a rule for that.
And so, rule number one in the system: that's how restaurants work.
And so you can imagine down this line of millions and millions and millions of rules,
and he's been trying his entire life.
And it just didn't work.
Because for every set of rules that you write, there is some exception.
There is some weird idiom.
There is some trick of the words that, you know, the ambiguity resolves some other way,
where you just don't get the right result.
Is any of this unique to English, or is this true of all languages?
It's true of all languages, and every language has its own sort of different quirks.
So for instance, in Chinese, you have this quirk of homophones.
Lots of words sound exactly the same
and are deeply ambiguous.
When you're trying to type Chinese,
you have this very hard problem because one,
they have no alphabet, right?
So even if you gave them an alphabet,
you'd sort of type a series of letters,
and then you would get, here's the 12 characters
that could mean that thing,
and then you have to go and select.
That's literally how a Chinese keyboard works, for anybody who's never seen this; it's terrifying.
You type the letters, and then you have to pick
one of 12 characters that could be.
This is why talking to your phone in Chinese
is now three times faster than typing, right?
which is why Baidu is spending so much money trying to do this speech-to-text magically.
And so every language has their own quirk like that.
And the holy grail of this space would be sort of one learning algorithm to learn them all.
In other words, one deep learning framework that could equally learn all languages effectively.
I don't think we're quite there yet.
Okay, Vijay, back to you on health.
So much, much more serious topic.
Opioids.
The inventors of opioids, I would imagine, had a very strong moral positive view that they were doing
a wonderful thing for the world, because lots of people suffer from pain of various kinds,
and it seriously degrades people's quality of life and has serious economic impact.
And so fundamental advancement in pain medication, and the research involved, is viewed
as a very good thing.
And yet we now have this public health crisis on opioids leading to some very, very adverse
effects for human beings and also for our country and for our economy.
Does the opioid crisis, has it changed your view of how to think about innovation in healthcare?
Actually, you know, it suggests a different type of innovation, in that, historically, you know,
opioids are derived from the same stuff in the poppy.
And so it's really old.
I mean, the pharmaceutical version is better and so on.
What was the innovation?
I would just love to know, like, maybe, how to cure it. I actually don't know.
What's new and different about opioids,
given what they're derived from?
Actually, chemically, it's very similar.
It's just slight modifications to make it more powerful
or last longer or things like that.
So even, like, remifentanil or something like that
has the same connection back.
So in a sense, these are very old things.
This is like morphine or...
Yeah, it's like morphine
in the Old West or something like that.
I guess, so then you would say it's been consumerized.
It's been, or medicalized, it's been made available through these prescriptions.
So really, the innovation there is that people have pain and therefore they need pain
treatments.
There's an interesting sort of psychological or societal aspect of this because I don't know
if you know the study with rats, the so-called Rat Park.
You take a rat, and you have it in a cage, and it has a nasty life, and you give it the choice
of food versus opioids.
It'll take the opioids until it dies.
Yeah, yeah, it'll just overdose.
And you take rats and put them in rat park, which is, you know, with all their friends,
it's a great thing, it's a great place.
It's like Club Med for rats.
And then you give them the same choice.
They don't overdose.
And so this is the intriguing thing: we talked about addiction.
And part of addiction is what are your options?
And so if you're living in a box, if you're in a war that you don't want to be fighting,
I mean, there's lots of reasons why people's lives can be very nasty, that this becomes an alternative.
So I think we could turn it around, and the opioid crisis might not be as much about the drugs as about the state of these people's lives, and that alternate intervention would be the way to go after it.
So with that in mind, is this an area in which we, we being our broad community of innovation in technology and healthcare, can have a positive impact, or do you think the answers to this, therefore, lie outside of it?
My guess is that it's much more societal, but there are interesting alternatives.
And so we've seen technologies like TENS, which is an electronic device that can be used for certain types of pain care that doesn't have this addiction risk.
And so that will play a role, and those devices are already rolling out.
So there'll be some areas that technology can help, but I think this is a much more systemic issue.
Alex, obviously, health care regulation in the news today.
Financial Services Regulation is another hot topic.
And the new administration is making statements that they plan fundamental repeal, potentially including everything up to and including potentially repealing Dodd-Frank in some form.
A two-part question, I'll start with the first part.
What's been the impact of Dodd-Frank, do you think, on FinTech and what it means to be a
fintech innovator today versus, say, in, you know, before Dodd-Frank?
So part of Dodd-Frank is this idea of risk retention, which is a great idea in theory,
especially if you're Chase or Wells Fargo, if you're originating trillions of dollars of subprime mortgages
as an example because that never happened.
That would be crazy.
And, like, you originate.
So originate means, like, okay, I have all these brokers, they get these people to sign up for loans
they can't afford. I know that they're all bad. I get somebody to say they're not bad
because they haven't actually looked at all the data and they don't know the fact that they're
bad. They just look at the properties. Oh, the properties look like they have good value. I bundle
them up and then I sell them. Done and I earn a commission on both sides. That sounds like a great
business to be in. The idea of risk retention is one of the ways that you prevent that is not just
being able to clawback bonuses from big bank executives, but actually have the bank itself
retain some of the risk of these securitizations.
Meaning hold some percentage. Yeah, so 5% is the legal requirement.
Right. If you take your clients down, you're going down, you're buying an interest.
So you have both personal in terms of like, my bonus can get clawed back, and you actually saw that with Wells Fargo for a different issue.
And then you have just corporate-wide where it's like, okay, we shouldn't do this because we have to hold on to the risk.
Actually, we have to finance that risk.
So for every 95 cents, like, five cents of that is ours. Like, we don't have infinite money, so therefore we have to be careful about our lending practices and whatnot.
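As a rough illustration of those retention mechanics, here's a minimal sketch with made-up numbers (my example, not from the episode): with 5% retention, every dollar of securitized loss now lands partly on the originator's own balance sheet.

```python
# Hypothetical sketch of risk retention: the originator keeps 5% of each
# securitization, so pool losses hit its own balance sheet proportionally.

RETENTION = 0.05  # the 5% legal requirement mentioned above

def originator_loss(pool_size: float, loss_rate: float) -> float:
    """Losses the originating bank eats on its retained slice."""
    return pool_size * loss_rate * RETENTION

# A $1B pool where 20% of principal is ultimately lost:
print(originator_loss(1_000_000_000, 0.20))  # $10M of skin in the game
```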
So, you know, that's probably a good idea. For a brand new company that has no
money in the bank, for them to have to do risk retention around loans that they might
originate, that's kind of tricky.
And it might not make sense.
And also, like, what kind of societal downfall can come from a brand new company that is
helping refinance your existing high-rate debt to kind of low-rate debt, and they only
pony up 1% alongside that, and only make 100 loans because it turns out there's no demand
for their product.
So a lot of the, like, that's just one example.
Regulations apply to small companies, big companies.
And Dodd-Frank, it just has so many different components.
Like, there's part that regulates interchange,
which is how much the credit card companies charge for debit card products.
Like, that's probably a good thing.
Small merchants actually benefit if they don't have to pay as much money.
If you have a low-margin consumer service,
and you're paying 2% out the door to the credit card companies
or the credit card conglomerate, that's not good for you.
So part of Dodd-Frank had this thing called the Durbin Amendment,
which allowed the Fed to actually cap debit interchange.
And if you actually look at some of the financial results
of somebody like Stripe or Square, they have this great tailwind from the fact that debit
interchange went down dramatically.
So, you know, it's a multifaceted thing.
Yeah, if you're doing trillions of dollars a year of loans or you somehow pose a systemic risk
to society, okay, I can buy the fact that we don't want to let one company bring down
the entire financial system.
But when they're just trying to figure out, do we buy Google ads or Facebook ads to get
people to find out about our product, it's a little bit premature for them to vaporize
half of their funding on legal bills because of regulations that are really meant to watch
after companies with hundreds of billions of dollars in market cap.
There are a lot of things that predate Dodd-Frank, which I actually think are the most
anachronistic.
So this idea of fair lending, it actually goes back to this machine learning concept.
Fair lending is a very, very well-intentioned law.
Like, I should not, as a bank, be allowed to discriminate based on marital status or ethnicity,
religion, any of that stuff.
So fair lending has an appendage called disparate impact.
It's not just are you discriminating outright on one of these factors, but are you having a disparate impact against somebody in one of those factors?
Now, the funny thing about computer code is that, A, you can examine it, and B, it's dispassionate.
Like, if you have a, in your code, if there's a statement that says, if race equals Y, then reject applicant, like, great, go to jail.
I'm totally in favor of that.
But that's not how machine learning works.
You basically, it's like linear algebra.
You have, like, here's every borrower ever.
Here's every attribute ever.
Let's look for patterns over time.
By the way, we're not even collecting things like gender, ethnicity.
We're not inferring any of that kind of stuff.
So it might turn out you have four cats, you have two cars, and you like to watch Seinfeld
every night.
And the combination of those factors means that you are a bad risk for lending.
But you have purple hair, and that's a protected class.
Therefore, you can't use those prior three things.
Even though we might consider the developing world as being behind, in many respects they're
ahead, because the regulations and everything there can actually recognize the fact
that you are using newer techniques to positively discriminate against deadbeats and only deadbeats,
and you can look at the code to verify that, as opposed to doing it based on anything that really
should be a protected class. Whereas here, you have to actually, when you get denied a loan
in the U.S., it's like, here are the three reasons why you were denied the loan.
And if there are 4,000 micro reasons that all together sum up to this,
like, you can't say Seinfeld and cats and cars; it doesn't compute.
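The tension Alex is describing, a model that never sees protected attributes yet may still disadvantage a protected class, is what disparate-impact testing probes. Here's a hypothetical sketch in the spirit of the "four-fifths rule" heuristic regulators commonly apply (my example, not anything cited in the episode):

```python
# Hypothetical disparate-impact check: compare approval rates across groups.
# Under the common "four-fifths rule" heuristic, a ratio below ~0.8 is a flag,
# even if the model never looked at a protected attribute directly.

def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    """Ratio of approval rates; values below ~0.8 suggest disparate impact."""
    return approval_rate(protected) / approval_rate(reference)

# Toy decisions from a model trained only on cats, cars, and Seinfeld habits:
protected_group = [True, False, False, False]  # 25% approved
reference_group = [True, True, True, False]    # 75% approved
print(disparate_impact_ratio(protected_group, reference_group))  # ~0.33 -> flagged
```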
There was a time not that long ago
when there was a huge redlining debate around mortgages and lending, and whether lending was being
fundamentally denied to people by virtue of where they live, which was code
for who they are.
Right.
At some point, through some consequence of either regulatory changes or political changes or what,
you know, incentive changes or whatever, at some point it flipped from those banks are racist
and denying loans to people who should get them, to these banks are predatory.
So they're evil because they're denying loans to people versus they're predatory because
they're giving loans to people.
I mean, in particular, many of the same people, in a lot of cases that were being redlined,
like people living, for example, in lower income neighborhoods.
And so, you know, by 2007 or 2008, everybody knew for a fact that it was evil for a bank
to give a loan to somebody in a low-income neighborhood who couldn't pay it off.
And I know bank executives were like, well, that's the exact opposite of what we were being accused of five years ago.
So, for example, for new lending companies or new insurance companies, how do we think about
the line between redlining and predatory lending?
Well, ideally, and this is where risk retention or reputation retention really makes sense,
which is, I mean, part of the problem there is that people were willingly and willfully making loans to people
that they knew couldn't afford it.
And my view of regulation is that it should enforce transparency.
The reason why payday loans are bad
is because the fine print isn't even like nine-point font.
It's not even eight-point font.
It's like two-point font.
You can't see it.
And then if you are the even scummier payday lending company
that makes it one-point font, you'll win against the payday lending company
that makes it three-point font.
So it's this race to the bottom, and nobody wins.
Everybody loses.
It's a tragedy of the commons.
And that's where regulation can be helpful is,
like, okay, let's make this incredibly transparent.
When you have securitizations where you kind of pass the buck off to the greater fool,
that's where you have the predatory problem.
And actually, regulation, I really do believe, has a role.
You know, before joining this firm, I ran an ad network around payments.
We had a lot of what I would consider not very friendly competitors.
And I think in their heart of hearts, they didn't set out to build bad businesses or be bad people.
It's just the only way to win is to kind of out-trick consumers.
And that's the greatest role that regulation can really have.
I think when we back a company, ultimately we're at the stage where it only works if the consumer wants it.
And you have to kind of fight for the hearts and minds of consumers that have 9,000 other different banking services available to them.
And then, and only then, will you have a chance of being used by that particular consumer.
And, you know, we're trying to back things that fundamentally have a 10x improvement over existing products in the banking system.
Okay.
Another ethical question.
If you ask somebody who's kind of following the whole discussion on AI, they bring up AI ethics, they'll cite something called the trolley problem.
And the trolley problem is basically this problem of you've got a self-driving car and it's barreling down the street.
And there's different versions of the problem.
But basically it has a choice, it has a choice to make.
It's either going to keep going or it's going to slam on the brakes.
And the computer is going to calculate because the computer is going to know a lot about what's happening around it.
If I keep going, I'm going to run into a car and I'm going to kill five nuns who are all seventy years old.
And if I hit the brakes, I'm going to hit and kill two six-year-olds with their entire life in front of them.
And I have a 30th of a second to make the decision.
Right.
I think, Frank, you and I would probably agree that that is what might be politely called an edge case.
That is a hypothetical, which just goes to say that hopefully nobody in the audience had to make that decision recently.
The actual decisions that we have to make are, am I going to, you know, how can I text all the way through the stop sign?
Or should I look up at some point?
So, contra that, or tell me if you disagree with that: what do you think is, for example, a very, very serious AI ethics problem that people are maybe not discussing enough?
That is a good question.
But before I answer it, I want to say a little bit about the trolley problem and sort of the approaches industry is taking to try to solve this problem.
So one, you might posit the existence of an ethical-decision-making-as-a-service company.
And you're like, why would we want one company to hire a bunch of philosophers to do this?
So what that would mean is literally a company where you submitted an ethical problem and it gives you...
Yeah, on the face of it, it sounds crazy.
How could we have an ethical decision-making as a service company?
Well, your alternative is that you're going to allow all of your motorcycle vendors and car vendors and ambulance vendors and truck vendors to figure it out themselves.
No philosophers.
And maybe not even take that into consideration at all.
If you look at the current crop of machine learning algorithms that drive autonomy, they're not making high-level decisions like, let's calculate the life expectancy of the people that I'm about to wipe out.
They're not doing that.
They're looking at there's an object in front of me, and I'm trying to figure out the probability.
What is the highest probability safe path?
That's the calculation it's performing.
But let's say, like, we can do all of the probability-adjusted safe paths,
and we could actually take into account the sort of the life expectancies,
the, you know, what's the utility function of a human?
So what if we had everybody's genomes?
What if we could do risk scoring?
That's right, that guy was going to get addicted to nicotine anyway.
Don't worry.
What if we have the five healthiest nuns in the history of the world,
and they're all going to live to be 140?
That's exactly right.
So, if we could get to the point where those calculations started factoring into what the car would do,
you kind of want the philosophers to encode those types of rules.
By the way, you've never met people who are as fired up about the advent of self-driving cars as philosophers.
Yeah.
So we're not at the point where that's even entered the conversation except for futurists who are positing the existence of maybe we ought to have an ethical decision-making sort of as a service.
So what kinds of things are people not taking seriously?
Well, it's the conversation we just had, which is, these machine learning
algorithms might be selecting people for loans or no loans, or a diagnosis, you know,
should you be part of this clinical trial or not, right? Where there might be a high correlation
with membership in a protected class. And we don't really understand why it is
that algorithms are making those decisions. Broadly speaking, there's sort of classes of
machine learning algorithms where if you sort of poked at them, you could figure out why did you
make that decision? A classic algorithm where you can do that is called a decision tree.
You can actually look at all the branches in the tree and say, oh, you made this decision because you followed all these branches down the tree, and that's why you rejected this person for a loan.
Right.
The modern, more powerful, more accurate algorithms that belong to a class of algorithms
called deep learning algorithms, they don't have that feature.
They are notorious black boxes.
You cannot ask it, why did you make this decision?
All it is is basically a vast linear algebra matrix of weights.
And so there's no way to sort of query it.
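Frank's contrast can be made concrete. A small sketch, assuming scikit-learn and toy data of my own invention: a decision tree will enumerate the exact branches behind a rejection, which is precisely the query a deep network's matrix of weights can't answer.

```python
# Illustrative sketch (assumes scikit-learn; toy data, not from the episode):
# a decision tree can report the exact path behind a decision.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy lending data: columns = [income ($k), debt-to-income ratio]; 1 = repaid.
X = np.array([[80, 0.2], [30, 0.9], [60, 0.4], [25, 0.8], [90, 0.1], [35, 0.7]])
y = np.array([1, 0, 1, 0, 1, 0])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = np.array([[28, 0.85]])
path = tree.decision_path(applicant)  # sparse indicator of the nodes visited
for node in path.indices:
    feature = tree.tree_.feature[node]
    if feature >= 0:  # internal node; leaves are marked with feature = -2
        print(f"node {node}: split on feature {feature} "
              f"at threshold {tree.tree_.threshold[node]:.2f}")
print("decision:", tree.predict(applicant)[0])  # the "why" is the path above
```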
Now, the kind of argument, and the reason these things are being used as opposed to the old-fashioned
decision trees, is they're more accurate.
Plain and simple,
they're going to make the right decision more times than the decision trees.
And so when you challenge somebody in the community who's working on these black-box algorithms,
their defense will be, hey, look, and I just actually went through this, my 16-year-old son just
got licensed to be a driver, right?
Terrifying experience for everybody who's had teenagers.
There's actually no way to query in his brain what he's going to do in these edge cases either,
but yet we licensed him to drive.
We made a regulatory decision as a society that just because he passed a couple behavioral
tests and a couple book tests, he's now licensed to drive.
And so you have no more inspectability or understandability of the 16-year-old's brain compared
to the deep learning algorithms.
All you have is behavior.
And at the end of the day, what we're going to judge cars on is accidents per million miles
driven, just like you judge humans.
Well, the way that you'd interrogate the deep learning is you'd just have the deep learning
algorithm run in a simulation and run through a trillion scenarios with all kinds of variables
and then see what comes out the other end, which is much harder to do with the 16-year-old.
So you can only watch him playing Grand Theft Auto so many times before you infer behavior.
We're going to send the self-driving algorithms through tons of simulation.
In fact, most of the big companies will say how many real miles versus how many simulator
miles they feed the networks, and they're saying it could be 50-50.
Yeah.
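Here's a minimal sketch of that simulation-style interrogation, with a made-up toy policy and outcome model (every name here is hypothetical): you don't ask the policy why, you run it through randomized scenarios and measure the failure rate.

```python
# Hypothetical sketch: evaluate a black-box driving policy by simulation,
# counting bad outcomes across randomized scenarios instead of asking "why".
import random

def policy(obstacle_distance: float, speed: float) -> str:
    """Stand-in for a learned policy; deliberately flawed, since it
    ignores speed when deciding whether to brake."""
    return "brake" if obstacle_distance < 30 else "continue"

def crash_rate(trials: int = 100_000) -> float:
    crashes = 0
    for _ in range(trials):
        distance = random.uniform(1.0, 100.0)  # meters to obstacle
        speed = random.uniform(5.0, 40.0)      # meters per second
        # Crude outcome model: continuing with under 2 seconds to impact
        # counts as a crash.
        if policy(distance, speed) == "continue" and distance / speed < 2.0:
            crashes += 1
    return crashes / trials

print(f"simulated crash rate: {crash_rate():.4f}")  # exposes the blind spot
```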
So, final question for all three of you. A lot of us in the audience, a lot of people out there,
have kids. What will be the single biggest change in daily life when all of our kids are our
ages? So 20, 25 years out, let's say. Daily life. I definitely believe they're not going to
be driving. They'll be picked up. And, you know, whether it's... the big question will be
2D or 3D, right, getting you from point A to point B. I actually think here's another interesting thing,
which is, if there were self-driving cars, trucks, robots, shopping carts, there will probably be a
Cisco-like routing service on top of that that can calculate the best way to get object A from
point A to point B, which is like some combination of handoffs between the bikes and the shopping
carts and the... and somebody's got to write the routing algorithms. I have been actively
looking for one of those, so if you see one, send it my way. I think there will be a Cisco-class
company that basically moves atoms from point A to point B. So how stuff moves will be far more
complex. Like, if you were the business school student designing FedEx now, you would
basically be riding on top of all of the things that can move themselves.
Now, you wouldn't be buying trucks and cars yourself.
I'd also go for the 2D versus 3D.
So, if everybody knows the Fermi paradox: if there are roughly 100 billion, 200 billion galaxies,
each one has roughly 100 billion, 200 billion stars, each one probably has at least one or
two planets orbiting it.
Why haven't we heard from anybody?
And it's really interesting.
There are 100 different explanations for why.
It could be that we're about to implode.
It could be that they all implode, lots of different reasons.
My favorite one, though, actually relates to this, which is, you know what, traveling interstellarly is not fun.
Like, you're just sitting in a spaceship. Like, to go to Mars, that takes nine months.
If you can just kind of upload yourself to the machine, why do you travel? Why did we travel here?
And by telepresence, you mean, like, what would the experience be like?
I mean, it's probably AR, VR, but, you know, without clunky goggles, you know, or a headset or something, where it's almost as good,
really is almost as good, as being there, and you have the bandwidth for it, and you can kind
of simulate experiences. I mean, if anybody has tried, like, a VR headset now, that is already
amazing, I think, right now. But just if you follow the trajectory of that and where it will be
in 20 or 30 years, like, you don't have to go across the country, much less the galaxy,
much less the...
What would be the biggest, let's say, second-order impact from if telepresence gets almost
as good as actually physically being somewhere?
The real estate prices would crash, I guess. That would probably be the biggest one,
because location, location, location would be shrunk down to, like, location doesn't matter.
That's probably the biggest one.
Which could also mean, by the way, to the extent that geographic concentration is,
well, to the extent that urbanization is driving inequality,
because if you're in a city you have access to economic opportunity,
and if you're not, you don't, that could be very positive from an inequality standpoint.
Yeah, you still need to grow food, but maybe they'll figure that out too.
We'll have robots.
We'll be drinking Soylent by then.
Yeah, exactly.
If you're not already.
Vijay?
On the healthcare side, you know, we're already getting close.
Imagine you wake up, you pee, that gets analyzed automatically.
You know, you've got sensors on you just as part of how you are.
It's at the point where living to a thousand sounds ridiculous, right?
So, and I think that's pretty far off.
But just imagine what that would be like.
If you had to have a car that lasts even a hundred years or a thousand years, what would
that look like?
Well, you gotta take good care of the car.
You don't smash into things, you don't do silly things.
But then also as parts sort of wear out, you replace them.
And so, you know, with these new technologies like,
stem cells where you can make new organs and CRISPR where you can modify those before they
get made.
You can imagine replacing parts with not just a heart transplant from a donor, but with your
own heart.
What's your prognosis?
What's your best guess for a timeframe?
When would that go mainstream?
It's hard to know.
I mean, they already can grow beating heart tissue from your own blood, turned into stem
cells.
So you can do that right now.
And actually, you could do that to test drugs on, not just, like, how it behaves in a mouse or how it
behaves in someone else, but how it behaves in me.
So that's already there.
Growing a whole heart, that's a big deal.
But then again, like 25 years, that's a long time.
Something to look forward to.
Go out and have a double serving of French fries tonight.
Good.
Thank you, everybody.
Thank you all so much for coming.
On behalf of the whole firm, I would like to thank all of you for coming out here today
to discuss tech policy and wrestle with some of these issues.
Thanks very much.
See you next year.