a16z Podcast - Alex Blania on Proof of Human and Building World's Identity Network

Episode Date: April 2, 2026

a16z's Ben Horowitz and Erik Torenberg speak with Alex Blania, cofounder and CEO of Tools for Humanity, World, and cofounder of Merge Labs. World is building the largest real human network, a proof-of-human layer for the AI era. They cover the technical challenge of proving human uniqueness at scale using iris biometrics, the privacy architecture behind World ID, and why platforms from social networks to dating apps to video conferencing will soon require proof of human verification.

Resources:
Follow Alex Blania on X: https://twitter.com/alexblania
Follow Ben Horowitz on X: https://twitter.com/bhorowitz
Follow Erik Torenberg on X: https://twitter.com/eriktorenberg

Stay Updated:
Find a16z on YouTube
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 How do you prove somebody is human? It is a surprisingly hard problem. I think that people are going to start getting accused of being bots. What we currently see is less than 1% of what it will look like in probably a year or two. The idea that AGI will lead to some very fundamental shift seems obvious. AIs are really good at programming humans. Much better than humans are at programming AIs. Absolutely.
Starting point is 00:00:24 And AI will be able to have a GitHub account and will be able to post. And also attest to five other AIs that these are in fact humans, even though they're not. Honestly, if you don't take it seriously now, then I think you should get a different job or something. Those agents are very, very clever. How do you prove you're real? In 1950,
Starting point is 00:00:42 Alan Turing proposed a test. If a machine could fool a human into thinking it was also human, it had achieved intelligence. For decades, that remained theoretical. Today, AI agents run thousands of social media accounts at once, outperform humans in controlled persuasion tests, and generate hundreds of videos a day that audiences believe are real.
Starting point is 00:01:05 The Turing test didn't just get passed. It got commoditized. Every platform built on the assumption that its users are human now faces a problem no one has solved. Facial recognition fails at scale. Government IDs weren't designed for a global internet. I speak with Alex Blania, co-founder and CEO at World, which is building the largest real human network,
Starting point is 00:01:28 a proof of human layer for the AI era, alongside A16Z co-founder and general partner Ben Horowitz. Alex, welcome to the podcast. Great to have you. Thanks for having me. So proof of human is having a moment right now. Why don't you first give a background for people who are unfamiliar: what is the moment that's happening, and how did we get here? Yeah, and what is proof of human? Proof of human, as the name suggests, is: do you know if you interact with a human or something else on the internet, or not? And I actually think the kind of question that we're asking is: are you interacting with a human, an agent on behalf of a human, or just an agent? I think these are roughly the three areas that we want to split apart. And describe a little bit the difference between
Starting point is 00:02:14 just an agent and an agent acting on behalf of a human. How do you see that distinction? Yeah. So quickly explaining just the term proof of human and what is hard about it, and then I'll explain how it fits into an agent on behalf of a human. So what proof of human really means is that every individual that interacts on a platform has ideally one account, or a limited number of accounts, and stays the owner of that account. That's kind of the property that you're looking for. So you're looking for initial verification
Starting point is 00:02:45 that ideally should be something like anonymous, or extremely privacy-preserving, and then ongoing authentication that the same person remains in control of that account. And then there are some secondary properties that I think are important to have. But the really hard thing is uniqueness. What is happening on a platform like Twitter right now is that
Starting point is 00:03:05 there's all these accounts, all these bots in the replies, where there's probably one human sitting somewhere and sending out 10,000 or 100,000 AIs. And there's this catch-up game where Twitter and X are trying to just find them and block probably millions a day of these. Which is what, a hundredth of the bots? That's right. That's how it feels, at least. And then, agent on behalf of a human: I think all of us will have agents. It's unclear how it will look, whether it's going to be one or there are multiple ones,
Starting point is 00:03:36 maybe with different tasks and even different types of characters. And I think it will then come down to: I approve a certain action of my agent. I give it certain rights to act on my behalf. Okay. Post to my X account, post to my Instagram.
Starting point is 00:03:51 For example. But it's my Instagram, and I'm a unique human that owns it. That's right. You know, X or Instagram could decide if that's actually something they want as a platform. Right. But that's how you could do it.
Starting point is 00:04:03 That makes sense. And so how do you prove somebody is human? It is a surprisingly hard problem. Yeah. Those agents are very, very clever. It's funny. We started this company a couple of years ago now, before ChatGPT and before all of that.
Starting point is 00:04:19 But we kind of took it as an assumption that eventually we will have AIs that pass the Turing test, so they can just claim to be a human. You will not be able to tell anymore on the internet, and also that they would be highly agentic and just run around doing the wrong thing. And so that makes it really, really hard, because back then, when we started the company, there were roughly three big ideas that people were interested in. One was this idea of web of trust, or related ideas. So this idea that you look at how someone behaves
Starting point is 00:04:51 on the internet or did behave in the past. So usually a combination of: you have a certain number of accounts that you have owned for a couple of years, and you post regularly or you commit regularly on GitHub. These were the kinds of things that people were using. And then, let's say, all three of us have them. And then I attest that I know you in the real world, and you attest that you know me in the real world. And that's how you would build a certain graph. And that was a very hot idea back then for this. But we disregarded it basically immediately, because we assumed that eventually, everything that is just digital, an AI will be able to do.
Starting point is 00:05:29 as well. We're there. Yeah, exactly. So an AI will be able to have a GitHub account and will be able to post and own an account, and also attest to five other AIs that these are in fact humans, even though they're not. So that was area number one. Area number two was to just use government IDs for everything, which we also disregarded for a couple of reasons.
Starting point is 00:05:50 One is I think, you know, it's strictly better if the government would not control such infrastructure in terms of free speech and actually breaking that apart, but then also... Right, you lose anonymity instantly, right? You could hypothetically set up a system that maybe preserves it, but it's very hard to do. And then the second thing is also the government identity system is just not built for that. And what is so hard about this problem is it's going to be a global problem. And so it doesn't really matter if one government maybe has the perfect infrastructure. For example, Singapore is like an example of a government that has...
Starting point is 00:06:25 perfect infrastructure all around. But that barely matters, because, for example, I don't know, Meta is a global product with 3 billion users in a lot of other countries. Singapore is 2 million people or 1 million people. Yeah, exactly. So do you want to lock everyone else out? And then there's a long list of other things why we disregarded that basically immediately.
Starting point is 00:06:42 And then the last one is biometrics, which immediately gives people this "ick" reaction. And it even went further, because what is so hard about this problem, as I mentioned in the beginning, is uniqueness. And so, in very simple words, how you can describe the problem is: well, first of all, what does Face ID do? Face ID checks that I'm the same person again using my phone. And so it's a one-to-one authentication. So there's an embedding stored on my phone.
Starting point is 00:07:11 It takes a picture of my face, creates a new embedding, compares it to the previous one. And if that is close enough, I can use my phone. So that's one-to-one: one stored embedding to one new embedding. To solve the proof-of-human problem, you will need to distinguish one new individual from all previous individuals. You need to make sure that Ben is trying to sign up, and Ben did not sign up before. Yeah.
Starting point is 00:07:34 And then suddenly it goes from one-to-one to one-to-N. And it's the size of your network, essentially, that you're comparing against. Right. And then you can just do the math, and you can calculate how much entropy, how much information, information-theoretically, you need to prove that.
Starting point is 00:07:51 And it turns out that's a pretty high number, because it's an exponential problem. And so then you can just do the math, and you find out that things like face, or even fingerprints, don't work. You would basically hit a wall after tens of millions of users.
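That back-of-the-envelope argument can be sketched in a few lines. This is an illustrative calculation only; the per-comparison false-match rates below are assumed numbers for the sake of the example, not World's measured figures. With a per-comparison false-match probability p, checking one new enrollee against N existing users gives roughly a 1 - (1 - p)^N chance of at least one spurious collision:

```python
import math

def collision_prob(p_false_match: float, n_users: int) -> float:
    """Chance that a new enrollee falsely matches at least one of
    n_users already enrolled, assuming independent 1:1 comparisons:
    1 - (1 - p)^N, computed stably for very small p."""
    return -math.expm1(n_users * math.log1p(-p_false_match))

# Hypothetical per-comparison false-match rates (assumptions, for illustration):
P_FACE = 1e-6   # a face-recognition-like modality
P_IRIS = 1e-12  # an iris-code-like modality with far more entropy

for n in (10_000, 10_000_000, 1_000_000_000):
    print(f"N={n:>13,}  face-like: {collision_prob(P_FACE, n):.6f}"
          f"  iris-like: {collision_prob(P_IRIS, n):.2e}")
```

Under these assumed rates, the face-like modality is already near-certain to produce false collisions at tens of millions of users, while the higher-entropy iris-like modality stays negligible even at a billion.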
Starting point is 00:08:05 And so then you end up with something like the iris, which is the muscle of your eye. That actually has enough entropy. That it's unique. That is unique enough. And how do you also then solve the one thing that biometrics
Starting point is 00:08:18 have been subject to historically is just replay attacks? Where, okay, I may not have your eyeball, but I've got enough information that I can run a replay attack on you. So, again, it is important, I think, to split up the problem in verification,
Starting point is 00:08:35 which is, in analog terms, like you're getting your passport. Right. And then authentication, which is you showing your passport constantly for certain kinds of things. And on the verification piece, if you know
Starting point is 00:08:49 World, you know that we've built this thing called an Orb. It's doing a lot of things to prevent these kinds of attacks. So, for example, it has multiple sensors in the electromagnetic spectrum, to just make sure that you cannot show a display to it; it would recognize that. So I think on that side, we've got it handled.
Starting point is 00:09:08 On the consumer side, you know, to then re-authenticate, it turns out to be much harder because you would need to trust the phone in some sense. Because what we actually do in that moment is when you verify with an orb, not only do we check your uniqueness in a fully anonymous and privacy-preserving way,
Starting point is 00:09:24 and we should talk about that. But also, we send to your phone a signed face image that you can later use to reauthenticate against. Right. And with a new iPhone, you can have a meaningful amount of trust in that, but with old Android phones, basically not. Oh, yeah, yeah. Because, like, you can just show a deep fake,
Starting point is 00:09:42 essentially either through a display or just directly injected into the camera stream. So that's the problem. And so it's going to be a mix of: if you have a new enough iPhone, or phone in general, then you can just reauthenticate against that picture that you took at verification. Otherwise, you would probably have to go back to an Orb somewhat frequently. Let's say a couple of times a year.
Starting point is 00:10:03 I see. Right to reauthenticate. Yeah, that's right. Interesting. And then one of the kind of incorrect criticisms of the approach early was, oh, my God, they've got my eyeball. You know, now they somehow have access to my privacy and they're going to do all these things to me
Starting point is 00:10:25 and that's my access, and then WorldCoin can impersonate me and all these kinds of things. But that's not the case. So that was also a non-trivial engineering problem. It was very much non-trivial. So actually, I think one point on iris
Starting point is 00:10:43 that I think people don't appreciate enough, and that's a bet we took back then, was essentially that iris will turn out to be super normal as a modality, just because I think we will all wear AR and VR systems. You know, Apple already does it. Yep. Already has Optic ID in the Vision Pro.
Starting point is 00:11:03 So I think it's, so maybe that's a general point. I think it's going to become something that we will use across many different devices and will normalize in that sense. But I think on the privacy piece, that took us a lot of time. Because like when we decided back then that, you know, with our assumptions, you know, which was six years ago, that we will need a custom hardware device for biometrics. It was actually quite scary, you know, to come to the conclusion.
Starting point is 00:11:33 Yeah, that's an expensive conclusion. It's like, it's very expensive. And then just having this idea that you would need to distribute them all over the world, that just assumes that you would be able to somehow raise billions of dollars and mount a massive effort to roll this out across the world. But then also the privacy challenge of: how could you build such a system that has all the requirements
Starting point is 00:11:54 that we care about. And the two main high level ideas on how to solve it were multi-party computation and zero knowledge proofs. And so, again, what is different to face ID
Starting point is 00:12:10 because face ID actually can be very private just because the embedding is stored on the phone. It doesn't have to leave the phone ever just because it's just you against you in the past. But to check uniqueness, you need to check
Starting point is 00:12:26 against all previous people. So something needs to leave. Yeah. You know, something needs to leave the device and be compared to someone else. And that's a much harder challenge. And how we approach that is
Starting point is 00:12:40 with a multi-party computation. And so that essentially means that in our case, when you verify with an orb, you know, we take all these pictures, they get computed on the device and then they actually get split up in multiple pieces. So for example, we take a picture of the iris,
Starting point is 00:13:00 we calculate an iris code, then we break that iris code into multiple pieces and send it to multiple computers, such that there is no central database of any sort. So no one actually has the information about you. And then you do some clever tricks of how these different parties need to come together to do a computation
Starting point is 00:13:21 that still leaves the pieces apart. Right, right, right. In such a way that... Nobody has the whole thing. Yeah, so no one has the whole thing. And also during the computation, no one has the whole thing. Yeah.
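The share-splitting idea can be illustrated with the simplest possible scheme, XOR secret sharing. This is a toy sketch under assumed simplifications, not World's actual protocol (the real system also runs the uniqueness comparison itself inside the multi-party computation); the point is only that each individual share looks like uniform random noise, while the parties jointly still hold the full template:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_shares(template: bytes, n_parties: int = 3) -> list[bytes]:
    """Split a biometric template into n XOR shares.
    The first n-1 shares are pure random noise; the last share is the
    template XORed with all of them, so it is also uniformly random
    on its own."""
    shares = [secrets.token_bytes(len(template)) for _ in range(n_parties - 1)]
    last = template
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the template."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

# A stand-in "iris code": 32 random bytes.
code = secrets.token_bytes(32)
shares = split_shares(code, 3)
assert reconstruct(shares) == code
```

Any single party (in fact, any n-1 colluding parties) learns nothing from its shares alone; only the XOR of all n shares recovers the template, which is why no server in such a scheme holds a usable copy of the biometric.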
Starting point is 00:13:32 But they do some, you know, some clever interactions to come to the conclusion. A little like a zero-knowledge-proof kind of technique. It's... I mean, it's very different, but I think in terms of the properties it achieves, it's somewhat similar. Where, like...
Starting point is 00:13:45 We're like you... No one knows anything about you, but you can actually together make a statement about you. And so, you know, you send it to this multi-party computation and what comes back is, yes, that individual is unique. And then the second thing we do is we separate
Starting point is 00:14:01 all of this from you with a zero-knowledge proof. So meaning: you have that secret on your phone, but no one else has it, no server has it, we don't have it. And then you can later go back to this multi-party computation and say, hey, I have a secret that is part of that computation, and I am in fact unique. And you can prove that to a platform:
Starting point is 00:14:25 you could go to the social network and prove that you're a unique user to the social platform, without us knowing anything about you, or the social network knowing anything about you. And so it's this very counterintuitive property: even though it uses
Starting point is 00:14:41 biometrics, you, you know, preserve anonymity and extreme levels of privacy, which I think is super cool. You know, social media is one kind of vector of, you know, things that were annoying and are now becoming overwhelming in terms of just bots, particularly with psyops, propaganda, all these kinds of things. What are some of the other, you know, uses of bots that are going to be kind of impossible
Starting point is 00:15:11 to live with if we don't get to proof of human in the future? Yeah, actually, I think the simple model I have for it is: every moment on the internet that is primarily about humans interacting with each other, you know, or even indirectly interacting with each other. So, you know, you can start with simple ones like dating. There it really matters that the other side is in fact a person. Yeah, well, I've got bad news for listeners.
Starting point is 00:15:39 Well, and the person who you expect it to be. Yeah, yeah. Yeah, exactly. Yeah, we had problems even before. The whole catfish thing. Yeah, exactly. Yeah, yeah. So that's an obvious one.
Starting point is 00:15:51 And so, for example, Tinder is already using it for that reason, I think. And what's the Tinder use case? So we started in Japan as a test market, and it is essentially exactly what we just discussed: if you verified with an Orb, you get a little badge that, you know, signals to other people that you are in fact a human, as a high level of verification. And then also, I don't think that's live yet, but what will come next is that you're actually the person you claim to be.
Starting point is 00:16:25 So, meaning you have a World ID that is associated with the kind of profile pictures that you use. So you just run a quick check that this is all correct. And so, you know, you then know you're not interacting with a bot, but also you're interacting with a fully authentic profile. Another fun one, because I think it's somewhat counterintuitive, but I think it will be video conferencing. Because, you know, you already have deep fakes.
Starting point is 00:16:51 Yeah, just, you know, I don't feel like going to this video conference. Just put my deep fake in. Yeah, and actually, you raised it to me first, and that's why we started building a product for it. Because, you know, it would actually start with very high-value users. Yeah. Like, for example, people, you know, like yourself, that maybe manage a fund.
Starting point is 00:17:10 And, you know, sometimes calls actually could be very high value if it's about borrowing money or... Oh, yeah, yeah. Well, so somebody can be me and say, Erik, can you please wire this Nigerian prince $400 million? Right, exactly. It would be good to know. Yeah.
Starting point is 00:17:28 Yeah, like, you know, that's still slightly hypothetical, because these things are not fully real-time and you can somewhat... We're very close. But we're very close. And so I think, you know, in a year from now, it's just going to be a full commodity, and it's going to be super photorealistic and absolutely real-time, and you will just not be able to tell anything anymore on these videos. And so I think that's another one. Another one, which I think is fun, is going to be gaming, you know, because
Starting point is 00:17:57 Oh, yeah, yeah. Because gamers really care. Oh, yeah, that they're playing against an AI. Holy cow, that's frustrating. Especially if we bet money. Exactly. And you lose money. You train multiple hours a day, you get really good at this thing, and then suddenly you get, you know, you get destroyed by an AI
Starting point is 00:18:11 that is just superhuman on every dimension. Funny enough, I wonder what you think about this, because I don't have a good mental model for it, but even the whole model for video platforms, I think, is about to break. Because there are a couple dimensions that are a problem, but one: the creation of content is becoming super scalable.
Starting point is 00:18:41 Like, for example, I heard about this one guy that created, I think, on the order of a hundred videos a day on YouTube and made tens of thousands of dollars a month. All of them were fully AI-generated. Yeah. And people just fell for it. So another question is, is that actually something that YouTube wants to monetize that way? Yeah. Like, is that?
Starting point is 00:19:02 Yeah. Well, it's interesting, right? They fell for it. But maybe they liked it. Yeah. Yeah. That could be. But it would sure be nice to know, like, okay.
Starting point is 00:19:15 Okay, this is a human video or this is an AI video. Actually, my thesis about this is like something along the lines of, I think there's categories of content that are clearly just fictional. Yeah. Like movies are that. You know, it's like you don't care that there's any connection to reality. It's just a fully fictional story. But now if you think about something like TikTok or, you know,
Starting point is 00:19:37 all these kinds of things, people actually really care about them mostly because there is some connection to reality. Yeah. Yeah. Well, there's reality and there's connection to humans, right? That's right. You can create a pretty good podcast. Like, you can take a scientific paper and give it to Gemini and say, make this into a podcast.
Starting point is 00:19:56 And, you know, it'll be like a pretty entertaining podcast. Right. And it will be reality in that it came from, you know, some real thing. But you would like to know that. You would like to know that. Yeah, I would like to know that. And then it continues. As an advertiser, you would like to know that a human watch it.
Starting point is 00:20:14 Yeah. Or did an AI watch it? Yes. Right, right. Well, right. That's the other thing is I created 100 AI videos. I had a million AIs watch it. And then I made a lot of money off of YouTube.
Starting point is 00:20:27 Exactly. I actually saw a video the other day of a YouTube farm. Yeah. They had these, like, thousands of phones that just watch videos all day for that reason. Yeah, yeah. And then, like, that's got zero value to the YouTube advertisers. Right. And so that's actually a real problem for them.
Starting point is 00:20:42 Right. The whole creator economy, the platforms of the last decade, you know, Substack, Spotify, and all the people who support artists, or, you know, Patreon: the thing is that creators, YouTubers, they have a personal relationship with these people. It's not just that they like the art. And so if they all of a sudden found out that they were, you know, bots, they might not want to support them in the same way. Yeah, you might not want to give them a big YouTube tip. Or, yeah, I think there's a certain subset of people who want to support actual people
Starting point is 00:21:14 and feel like they're having a real relationship. And the thing that I think people don't really get is that it should be obvious, but I don't think people really understand the consequence of that. I think two things. One is that what we currently experience
Starting point is 00:21:29 is like a super, super tiny thing compared to what is about to happen, you know, just because... Yeah, right, it's a glimpse. It's a glimpse. Like, you know, the cost of intelligence is dropping almost exponentially, agentic capabilities are increasing
Starting point is 00:21:43 in some superlinear form. So, like, yeah, what we currently see is less than 1% of what it will look like in probably a year or two. And then second, these things will actually be superhuman in many ways. They will be, like, perfectly able to understand you and talk in the right way to you.
Starting point is 00:22:01 For example, there's this one paper, which I think was pulled afterward. It was the Change My View subreddit, where the University of Zurich did this thing where they had AIs actually interact with Change My View, and they were, like, superhuman
Starting point is 00:22:20 in their ability to change minds, because they were going back to the profiles of the people posting, understanding their political motivation, the way they talk, and just interacting in the perfect way, hitting all the buttons.
Starting point is 00:22:34 And, like, AIs are really good at programming humans. Much better than humans are at programming AIs. Absolutely. There's no question. And so I think that's going to get quite scary also. But I think at least if you know you're the victim of a psyop, even a very advanced
Starting point is 00:22:52 one done by an AI, that would be extremely useful to understand. Totally. Talk a little bit more about the state of the product and the business today. Like, how many IDs are out there? Want to give a little bit of an update? Maybe you can talk about the evolution as well. Well, first of all, it's a multi-sided problem. And I think there are roughly three sides that you have to consider.
Starting point is 00:23:11 One is, well, you need platforms to use the technology. You know, things like Reddit, or X, things like that. Secondly, you need distribution of these devices. And I think the right mental model to have for it is: how many minutes does it take a person to reach such a device, on average? And, you know, currently, if you would take the global average, it would be a terrible number. It would be, like, you know, days or something, because many people would need to fly. But, you know, how do we get that down to below 15 minutes across the U.S.?
Starting point is 00:23:50 And so that's probably roughly around 50,000 devices that you need to deploy. That's not crazy, but it's also not nothing. It's, you know, it's hard to do. And then the last one is: how does all of that come together into something that a lot of people really want to use? And that's a combination of the utility of all the sub-platforms, essentially. But all of that layers on top. Maybe you can use it in your Reddit account. Maybe you get a certain amount of ChatGPT subscription for free.
Starting point is 00:24:18 So I think it's going to be a combination of things. But you need to land all three at some point at the same time, which is hard to do. We are now at 18 million users that are verified, 40 million in total in the app. But the biggest thing is, because of the past administration, because we use, you know, crypto, we did not really invest in the U.S. for a long time. And that's now the main shift that we're going through. Like, for all of this, the main thing that matters is the U.S.
Starting point is 00:24:48 And hopefully we get the Clarity Act passed shortly. Yeah, exactly. That would be really great. So, to get clarity on that. Yeah. So the big focus that we're going through right now is to kind of go all in on the U.S. So I think over the next year, 90% of the effort of the company
Starting point is 00:25:09 is just going to go toward the U.S. And how do you get, for example, device distribution up? How do you eventually have this in every Starbucks, so it becomes just super normal and people just use it every day? So that's kind of the plan. And then on the platform side, actually, it was a very interesting experience to go through personally, because, like, a couple of years ago,
Starting point is 00:25:34 universally, people just made fun of us. It was, like, the universal reaction. Well, minus a16z and a couple other people who believed in it. But yeah, the press, like, the amount of fun-making... It just shows how short-sighted people
Starting point is 00:25:52 are. That's right. It's like, you don't think the bots are coming? What did you think when we first pitched, actually? Because even you must have thought, this is crazy. Well, because you had the Orb. Like, the Orb was so wild. You know, okay, we're going to scan
Starting point is 00:26:09 people's retinas, and that's how we're going to know they're human, and so forth. And this was, I mean, you pitched us six years ago. Six years ago, yeah. It was before COVID, because you were there with the Orb, right?
Starting point is 00:26:19 And, you know, AI just hadn't happened yet. And, you know, you could kind of see there's bots, but they were kind of very crude, you know,
Starting point is 00:26:32 compared to what there are now. But it seemed inevitable, at least. At the time, you know, the thing was it was so out there, so from the future, that, you know, we always worry about, okay, what's the timing of this and that and the other and so forth. But, you know, you were impressive enough, and it was going to happen eventually,
Starting point is 00:26:58 and it was an exciting enough idea, that I think all those things kind of got us to go, okay, we're in. But it wasn't obvious that it was going to work in that time frame. It seemed very non-obvious for a long time.
Starting point is 00:27:14 And how different was that pitch from what it ended up being? It was actually pretty much exactly the same. I think it's the same thing. The device changed. You know, we've made it much more economical and convenient, but... But the initial instinct was right.
Starting point is 00:27:28 It was basically: everybody's going to have to have some proof that they're human in cyberspace, or it's going to be a very bad world. I mean, the robots are going to get us.
Starting point is 00:27:45 We're done. Right, and then actually the second piece. The first thing was that this itself is going to be a big deal. But second, when it becomes a big deal, we will be able to build one of the most valuable networks as a result of that. Because in a world
Starting point is 00:28:00 of AI, having a human network is going to be this incredibly important thing. And so actually, yeah, two things. One, you will need to prove a human, and second, it will have very strong network effects. And even the platforms, as you get into the platforms, even as the platforms' largest problem has been bots. I mean, you remember Elon and, you know,
Starting point is 00:28:19 he backed out of buying Twitter because all the stats were based on bots. Still, even knowing that, it was hard for them to get all the way to the future in their thinking and go, yeah, we need proof of human. Yeah. Like, it's kind of obvious. Yeah, because people were like, what does it even mean?
Starting point is 00:28:39 You know, like, what does proof of human even mean? We can just, you know. And did you have the language? When did you come up with the language, proof of human? We actually had proof of personhood for the longest time. It's even here on this brief. Yeah. But then at some point, we're like, shit, well, at some point,
Starting point is 00:28:56 AIs will have personhood too. So, like, that's not going to fly. But they're not going to have retinas for a long time. That's actually... Although that's coming eventually. It was actually really funny. Some of the OpenAI people that I met were like,
Starting point is 00:29:14 man, Alex, this is going to be so dark. Like, people will hate you for not giving personhood to AIs. I was like, Jesus. Let's call it proof of human then. That's funny. So that's how it changed. But then, actually, I would say last year.
Starting point is 00:29:32 Then there was a big shift post-ChatGPT. That was when AI suddenly got real to people. And I think that's when people started talking to us, but it was still, you know, it's a future problem.
Starting point is 00:29:47 It's probably a couple of years out. We don't really care about it. Let's stay in touch. That was the common response. Well, but you also had a couple of CEOs that really believed it and were willing to take the long-term bet, to give them credit.
Starting point is 00:30:04 But I think the second big shift was actually Clawdbot and Moltbook recently. Yeah. Just because that kind of means the cow is way out of the barn. Yeah. And so, honestly, if you don't take it seriously now, then I think you should get a different job or something. Yeah, what are you doing?
Starting point is 00:30:27 Yeah. They're just not thinking about problems in the right way. And so that was the moment when many, many people started reaching out. And now it feels like much more of an executional problem, not a market risk or a thesis problem anymore. Which is still a big fucking problem. Like, how do you get 50,000 devices out there? How do you make it cheap enough? How do you make it economic?
Starting point is 00:30:51 Like, you know, how do you meet all three of those things at the same time? That's still a very hard problem. How do you normalize the behavior, so people aren't weirded out in a Starbucks or something? Although I think that's now going to be... Just because I think people will hate the alternative so much. And I think people are going to, by the way, take a lot more pride in being human, particularly online, because I think that people are going to start
Starting point is 00:31:18 getting accused of being bots. I mean, it's going to get really weird. And without clear delineation, it's going to be a mess. Like, I don't understand how somebody can think they're going to have a social media platform that doesn't distinguish between humans and bots. That seems absurd to me. It seems absurd.
Starting point is 00:31:41 My guess is, over the next couple of months, we will see these platforms trying to use things like face biometrics on the phone, which, you know, I know will break, so it's fine, but I think we'll go through that cycle now. And, yeah, so we just need to get to scale fast enough to meet the market with what comes after, which I think something like the orb is the only solution for. I think currently there's no real competition. I think we'll also see that. I have not seen a competitor, yeah.
Starting point is 00:32:14 That's true. Because it's so ridiculous. It's so ridiculous, and it's so hard to build. And then there's a massive network effect, right? People are starting six years behind you on that. But yeah, I'm sure they'll come, because it's just such an obvious problem now. What do you actually think about, as AI continues,
Starting point is 00:32:39 what economic policies, in your mind, will we need to implement, or directionally? I think governments do have to figure out how to send citizens money. They're good at taking money from citizens, but not the reverse. I mean, well, just, if you go back to COVID, the stimulus program,
Starting point is 00:32:56 like, I think $400 billion was stolen. You would have liked to know that you were sending the money to unique humans. I mean, even if not citizens, as long as they were unique humans, that would have been good. Yeah, I mean, the Social Security system, for example, is a mess. Yeah. It's insane. It's a total disaster.
Starting point is 00:33:15 So we're going to have to get to some kind of cryptographically strong way to identify who's a citizen of what country. Like, that's going to be a really bad problem, I think. Otherwise, there's no way to even have a democracy. I mean, you know, it's pretty crude what they're trying to do with the SAVE Act, but it's not completely insane, which is: how do you even know that the people voting are actual people, or living people, or anything? And we really don't know now. Like, we genuinely don't know.
Starting point is 00:33:54 And then if you go to, I mean, the whole mail-in ballot thing is built for a very different world, right? That's right. So I don't think that, in an AI world where you can have very high-scale impersonation, and with a broken Social Security system, you're going to have the will of the people anymore. I think that's going to be gone pretty fast. So I think we're going to need some kind of, you know, cryptographically strong infrastructure on who's who. And then, similarly, I think we're going to have to be able to get people money
Starting point is 00:34:34 much more efficiently than through this crazy apparatus of social programs that we have. Just because of how lossy and fraudulent Social Security or Medicare or any of these things are. I mean, Medicare is so frustrating for people that they shot the CEO of UnitedHealthcare. And people are happy about that, like, really happy. So think about how bad a system that is, when, you know, the government spends a lot of money sending you money for your health care,
Starting point is 00:35:08 but they do it in a super inefficient way. But we have the technology to do that now. So I think that AI is going to make that problem so bad, because of the ability to file fraudulent claims and create fakes. I mean, you can buy Social Security numbers on the black market. For those of you who don't know,
Starting point is 00:35:31 that's an easy thing. That's a real thing. Like, everybody's Social Security number is for sale. And so, you know, AI is just a way of making that kind of loose, black-market, underground fraud thing massive and extremely scalable. I agree with that. Yeah.
Starting point is 00:35:55 So I think, you know, proof of human is a piece of a very important puzzle, where we have to upgrade that entire infrastructure or we're not going to be a democracy anymore. That'd just be my guess. I agree with that. Share more. You said, okay, next year, go-to-market is focused on the U.S. Say more about how you're thinking about that. Is the incentive for people to do it that they get to use a set of services,
Starting point is 00:36:19 or is there some other economic incentive? How do you envision it? Basically, a month ago we entered a very different phase as a project, where I do believe many of the platforms that we're now integrating with will really, you know, bring a lot of users to our platform.
Starting point is 00:36:34 And that changes, you know, how you think about it entirely. If you have a platform of a billion users sending users to you, then it's really just all about how you meet that demand. And that's what we're now entering,
Starting point is 00:36:50 and so, yeah, I think the response is: first, you will see, and we're already working on it, a lot of really large platforms that integrate in the near-term future. Just to set expectations, I think that will be
Starting point is 00:37:09 slow initially, because it also should be. Just to understand the product: it will be focused on certain geographies, like what we did with the Tinder rollout in Japan, just to, you know, test the product and also to normalize the concept. But that will happen.
Starting point is 00:37:27 And then secondly, which is now becoming one of the main priorities for me, is: how do you get this orb distribution up? Broadly speaking, there are a couple of different dimensions to that. But the first is that the product needs to work at scale without supervision,
Starting point is 00:37:47 which turns out to be much harder than you would think. Every engineering problem at scale turns out to be much more complicated than you would think, because fighting for 1% of improvement in quality is this clusterfuck of all these dependencies coming together. So I think that's one of the biggest engineering focuses right now. But then second, you need to find places to deploy them.
Starting point is 00:38:12 And the way to think about it is: there are large-scale distribution partnerships, which could be something like Walmart, you know, or if you're very ambitious, something like Starbucks. Or it can just be that you go to one of the, you know, hip coffee shops and you just put it there. Or, you know, you could eventually even go to the DMV
Starting point is 00:38:36 and just put it right there. So that's the problem we're currently trying to puzzle together. And, you know, it's going to be some mix of all of that. I think there are going to be some large-scale distribution partnerships, many one-off coffee shops. Actually, one thing that we will launch soon, and the team is going to hate that I'm saying this now, is orb on demand.
Starting point is 00:38:59 So, in the Bay. Just because it's actually such a gnarly problem to, you know, get an orb to truly everyone. To get that, the capex is insane. So it's actually much cheaper and easier to just put an orb on a motorbike and drive it to you. As crazy as it sounds. So in places like the Bay Area or New York, you will just be able to say, yeah, I want to verify now. And 50 minutes later, an orb comes to you and you can verify.
Starting point is 00:39:35 Did you ever think about, I don't know, this is probably a terrible idea, but having kind of different levels? Like, we know you're a unique human, versus, this guy may be a unique human because he's done it on his iPhone. It's not quite the same. Yeah, yeah, we have that. So actually, you know, generally we just have the principle of: whatever could be useful for this problem, we just build it.
Starting point is 00:40:03 And so we have something called Face Check that does that. It uses your face from the camera. It still uses the multi-party computation we've built for the entire system, so you're still anonymous. And, of course, it reaches way less accuracy. So, you know, as a system, you will know something along the lines of,
Starting point is 00:40:28 well, at least one person cannot create 100 accounts. Maybe it's just 10 or 20. So it's at least some measure of rate limiting. And I do think, just to add a disclaimer, that with deepfakes and all this stuff, that will fundamentally break. So it's a temporary solution that I think can get us to scale. That's kind of how I think about it. We also actually use government IDs. Similarly, we use just the ones
Starting point is 00:40:58 that have an NFC ID chip. And we use multi-party computation, so you remain anonymous. And platforms can choose to use that as well. But no one really did. It's just that, somehow, they have this very negative stigma, which I think makes sense. But yeah, basically, whatever could work. Yeah, by any means necessary. That's right. Well, thanks so much for coming to the podcast. It's been great.
Starting point is 00:41:21 Yeah, thank you. Thank you. That's great. Thanks for having me. Thanks for listening to this episode of the a16z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify.
Starting point is 00:41:39 Follow us on X, a16z, and subscribe to our Substack at a16z.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only. It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
