Pirate Wires - A.I. Lies & WTF is a Digital Land Acknowledgement?! | PIRATE WIRES EP#4

Episode Date: July 7, 2023

EPISODE FOUR: Breaking down the discourse around A.I. and what everyone is getting wrong with special guest John Luttig. The insane concept of digital land acknowledgement, and the people shaping our A.I. policies in DC. Featuring Mike Solana and River Page.

Subscribe to Pirate Wires: https://www.piratewires.com/

Topics Discussed:
Hallucinations in AI: https://www.piratewires.com/p/hallucinations-in-ai
DC's DEI "Experts" Shaping AI's Future: https://www.piratewires.com/p/dcs-dei-experts-shaping-ais-future

Pirate Wires Twitter: https://twitter.com/PirateWires
Mike Twitter: https://twitter.com/micsolana
Brandon Gorrell: https://twitter.com/brandongorrell
River Twitter: https://twitter.com/river_is_nice
John Luttig: https://twitter.com/absoluttig

TIMESTAMPS:
0:00 - Intro - John Luttig Joins The Show! Special Guest Article In This Week's Pirate Wires
1:30 - Everything You Need To Know About A.I. Discourse
22:40 - Bye John!
22:55 - DC's DEI "Experts" Shaping AI's Future - Reviewing River's Article
29:10 - A Digital Land Acknowledgment?! LAND TURTLES?!
31:15 - Stolen Land Virtue Signaling
34:25 - More On Digital Land Acknowledgment
38:00 - How History Gets Turned Upside Down
44:00 - A.I. Is Racist
46:15 - Woman Claims Credit Card Denial Over Gender
55:00 - See You Next Friday! Pirate Wires Podcast Every Friday!

Transcript
Starting point is 00:00:00 Welcome back to the Pirate Wires pod. We are here today with the one, the only, the legendary John Luttig, the little brother of Founders Fund, who I think is now like pushing 30. So I've got to stop calling him that. 27. Okay. Okay. No need to get defensive, but I understand. John is an investor at Founders Fund and a friend of mine. And I would say an insider in the space of AI and certainly in the space of AI discourse. He's an investor. He's looked at a lot of these companies. He is pretty plugged into the scene in San Francisco where a lot of the stuff is
Starting point is 00:00:36 being developed. And he just wrote a piece that I thought was great. And we published it at Pirate Wires, in which he did sort of what I think we love to do at Pirate Wires, which is he analyzed the way that we're talking about something, broke it down, and explained, you know, what is really being said here. And in the context of AI, I think, as in the context of most discussions, the bounds of that debate, is AI going to destroy us? Is it going to save us? Are we going to make a lot of money in AI? Is it all over? The discourse is sort of determined by the incentives of the people speaking. So I think probably just to catch everybody up to speed here, what do I mean by the AI discourse, sort of the artificial intelligence discourse?
Starting point is 00:01:25 What I'm talking about is just like the way that investors and technologists, people sort of in the technology industry are talking about this new technology and the way that sort of trickles down into the media and then sort of into your living room at home. And also problematically often into these sort of halls of Congress. problematically often into these sort of halls of Congress. I want to maybe just start with Ludwig, like your sense of what that discourse is. Just paint me a picture of kind of who the major players are, what they're doing, what they're saying, what has it been to date? Cool. Yeah. Well, thanks so much for having me, Solana. Excited to be here. I think the background thing going on here is there's very few people and few companies defining the AI frontier. So if you look at OpenAI plus Anthropic plus MidJourney and a few others, you have fewer than maybe a thousand engineers building frontier models.
Starting point is 00:02:16 And that gives the rest of the technology industry a lot of FOMO. And that creates a really noisy discourse where most people don't really know what's going on, but everybody has something to sell you. And if you're just getting up to speed as an outsider, or even an insider, it's hard to tell what's real, who you can trust. So the point of my article is to give people a lens through which they can understand the incentives at play. And so the key biases I outline are hope, cope, and mope. Hope meaning i hope the future unfolds in a way that benefits me uh cope is i'm not winning but it's okay for x y and z reasons uh and then mope is the game is already over and lost and so we should just give up and these are the you're saying these are the opinions
Starting point is 00:02:59 that sort of the average like insider has on the topic exactly Exactly. Yep. Yeah. And there's, there's a few ways that these manifest as what I call hallucinations. And so it's like things that people want or need to be true. And so you have this idea that foundation models have already plateaued in terms of their capabilities, meaning, you know, GPT-5 won't be that much better than 4. I think that's driven by the cope of other foundation model laggards that are not winning right now. And then there's another hallucination that's this idea that open source models are going to totally dominate. And I think that's a hope driven by developers that hate big tech, in for companies whose reason for existence is this kind of multimodal world,
Starting point is 00:03:40 big tech companies that don't stand to win in the foundation model race. big tech companies that don't stand to win in the foundation model race. Another hallucination is this idea that only incumbents will win. And I think of that as in the Mope category where many people would rather give up than find a way to win in the new world. And the reality is it's very hard to figure out what to do because it's true that the incumbents have done disproportionately well. But at the same time, there will be new companies that win. There's already been OpenAI, Anthropic, MidJourney, all of which are relatively nascent on the commercial side. And then the app layer is even more nascent.
Starting point is 00:04:16 And then there's this VC narrative that there's many different VC investable AI opportunities. VC narrative that there's many different VC investable AI opportunities. And I think that's driven by hopeful VCs that are not in one of the very few AI companies that matter. So if you think of AI companies that have won so far, there's NVIDIA, Microsoft, opening up MidJourney. The reality is of the companies that have won so far, the VC ownership in those companies is very small. So historically, when a new technology platform took off, that was almost perfectly correlated with VCs doing really well. And in AI, that doesn't actually seem nearly as true. What is the breakdown of, because we're talking about OpenAI mostly.
Starting point is 00:04:56 So what is that? Who mostly owns OpenAI? Yeah, largest shareholder in OpenAI is obviously Microsoft. I think that's public knowledge. There are some VC shareholders, but it's a much smaller fraction of the company. Obviously, Nvidia, Microsoft, large public companies, and then Midjourney was entirely bootstrapped. And so, of many of the leading companies, there's a relatively small, like a historically anomalous level of VC ownership
Starting point is 00:05:27 in each one of these that's, I think, terrifying to many VCs. And so they need to tell a story that there's going to be many different VC investable opportunities. Well, in the broader context there, I mean, why is it terrifying? It's terrifying because there's nothing else happening. I mean, this is, and they say this, people, you see the VCs, venture capitalists are talking about this right now publicly, pretty much constantly, how we were in this crazy bear market and AI is inspiring and we're excited again. And there's finally something going on. I've heard VCs actually talk about this in the context of politics, where they say, you know, perhaps the reason venture capitalists picked up this reputation for speaking about politics over the last few years is just because there was nothing else happening in tech. It was just it was kind of a boring time to be an investor. And this, this feels important
Starting point is 00:06:08 because it, it definitely is something when you use chat GPT for especially, um, you just, you know, that you're in, it's not only like VR in which it felt like you were interacting with the future. It's practically useful. I mean, people people are I use GPT-4 all the time this is going to matter this is going to replace a lot of things I would for one wanted to replace my lawyers can't stand them um but and I and I do think there's there will be VC backed winners um but I think I think it was I want to say Matt Turk that put it pretty succinctly that VCs need AI more than AI needs VCs. Um, I think that's probably the best distillation of the, uh, the way, yeah, the biases that informs the VC
Starting point is 00:06:53 narratives. Word. Well, I guess maybe, uh, sorry, my mom's calling. She's trying to have a birthday cake delivered for me. And I'm busy, mom. We can keep that in. I want to talk a little bit about the pushback you received. Because I mean, inevitably, you're talking about, you know, the discourse, you're going to become a part of it. People have opinions. People have opinions about you having opinions about their opinions. have opinions people have opinions about you having opinions about their opinions yeah i think at high level i think the piece resonated because most are too timid to say anything many people feel like they don't know enough and the reality is very few people know enough uh so might as well jump in the ring um i think one of the things that was most polarizing is the open source section where i claimed that claimed that open source has a lot of green shoots, lots of promising projects. But there are a lot of market forces against it as well.
Starting point is 00:07:53 And people, this is a huge lightning rod. People either thought it was obviously correct, meaning open source is in fact overrated, or obviously incorrect, where they would say, but have you seen these 13 new high capability open source models? I think this is going to sound really stupid, but there are a lot of people listening right now probably don't even know entirely what you mean about, what do you mean by open source in the context of AI? Yeah, open source is basically this idea
Starting point is 00:08:20 that you write code, upload it to GitHub, and then any developer can clone that code, download it themselves, change it as they want. And I think this is a really powerful thing. It powers a lot of cloud infrastructure. A lot of developer tools are written in an open source way. And so I have no doubt that it's a critical part of the ecosystem, you know, Linux on the operating system side is, is, is open source. Um, but I think the open source narrative is particularly muddled because there's so many people that want open source to be, to be true. And so it's, it's very hard to, um, it's very hard to assess the open source
Starting point is 00:09:02 foundation models. If you're not an insider. There's basic things like what evals are you using, meaning what sort of test questions are you asking the open source model to assess how good it is compared to GBT-3 or GBT-315 or 4. And it's very hard to falsify the open source claims unless you have a rigorous understanding of the questions on the eval. So, for example, if the eval asks a first grade question, then it is all first grade tests, then maybe the open source models do just as well as GPT-4. But then when you throw it harder questions around software engineering or complex philosophical concepts or literature, then it can fall on its face. And so even just benchmarking these models, because they're so non-deterministic, is actually
Starting point is 00:09:55 quite hard to do. And so you were saying about, I mean, the pushback was it was a mix of people who either thought it was obvious that open source was not going to be sort of the path forward and people who thought there was no way that it could not be open source. I want to key in on the, you sort of referenced, you made a sort of mean comment about open source people in the piece. Maybe it was supposed to be a little more veiled than I felt it actually was, but it is sort of like pseudo-religious obsession almost in tech. It's part of the ethos. And in a way that sort of the ideals of the open
Starting point is 00:10:31 internet of the 90s, like the anarchist internet, it's just part of Silicon Valley culture and the way that people think of themselves as technologists. I mean, open source is the biggest part of that. The idea that this information is free, that anybody can build, it all belongs to us. And now you're saying it doesn't matter because the biggest companies in AI are run by a handful of people and that stuff. I mean, open AI started as pretty open source. It's not anymore. Tell me a little bit more about the pushback from them specifically, because it feels like the technology industry is kind of, I don't know, approaching a crisis of identity. Yeah, I think open source is a very critical part of how Silicon Valley was built. And so
Starting point is 00:11:20 you definitely don't want to dismiss it. It's, it's a bit of a protected class in that, um, everyone wants free stuff. And so, uh, and people don't like corporations getting a lot of the benefits. And so it's a very touchy subject to defend the corporations. Nobody really wants to do that. It's not very popular. That's why I had to approach it in a somewhat indirect way. Um, you know. It's kind of unbecoming to be mean about it. And then I think even in the AI context, open source will be very critical. So OpenAI release it's whisper models, just like a open source audio model that's been really useful. A lot of people have built on top of it. I also think there's enterprise use cases
Starting point is 00:12:03 where people use open source models to run local deployments, fine-tuned kind of specialized models that's really valuable for particular use cases. But I think on the sort of consumer app side, it is very hard to beat a full stack approach of amazing technology that's closed source run by a company where the product is built in-house, essentially the Apple approach. And I think mobile is a fairly instructive analogy where Apple only has, I think it's on the order of 20% market share in terms of number of devices, but then 50% of the revenue, 80% of the profits. And so it's not to say that open source will not be a very important modality, but in terms of creating and capturing value, it's hard to envision a world in which closed source doesn't
Starting point is 00:12:58 somewhat dominate. I think before we get, you mentioned earlier, you had sort of a piece that you'd written, you wished that you'd written a section of the piece that you wish you'd written. Before we get to that, I do want to know just, I mean, it seems the read of your piece was like, it's pretty bleak to be an investor right now in the sort of, maybe in general, but certainly in the context of looking for something in AI. How do you think about that bleakness as an investor? Yeah, and I don't want the piece to be too pessimistic. This is one of the key messages I want to get across is the reason you want to think deeply about the space is and uncover all the biases that is super exciting and promising. It's the
Starting point is 00:13:42 biggest force multiplier on productivity since the internet. And so I just want to help people find the truth more easily. Definitely don't want to tell people to give up or the game is over. I don't want to be a moper myself. Well, maybe not give up. When you say it's, I mean, but as an investor, it's maybe less hopeful than as someone who's building interesting things. Like, obviously, there's lots that you can do with AI right now. Is that maybe what? Yeah. Yeah, I think, you know, on the question of sort of what do I do?
Starting point is 00:14:14 Because everyone knows it's important, right? Everyone knows it's the future. I think it's important that your answer to what one should do can't be nothing. Like, you can't just sit it out. This is clearly happening, clearly the future. You don't want to be left behind. And so I think on the investor side, you know, the article mentioned there's very few people driving the frontier.
Starting point is 00:14:32 And I think the corollary there is there's relatively few companies that matter. This is a general truth about startups. There's obviously Peter's power law concept from zero to one. I think it's uniquely true in AI given the economies of scale and accumulating advantages of running a large foundation model. And so the reality is,
Starting point is 00:14:53 on the investor side, there will be few companies that matter, but they will probably matter in a big way. So the key is to identify those. And then on the founder side, I think people have a general bias to try what's already working. So I've seen a lot of founders raising very large seed and A rounds to build foundation models that are very similar to the incumbents, but maybe some flavor of like new geo or new modality or like better pricing or something like that. I think the better way to do this is assume the foundation model layer is somewhat set. And then if you live on the frontier from there, there's a lot of things left to be done.
Starting point is 00:15:34 So I think chat is likely not the final form of the interface. Chat will be an important modality, but it's not going to be the only way things are done. I think of chat-to-DVD almost as like a proof of concept of how to interact. Yeah. It's funny. I think, I think David Holtz actually might've, I think I remember him tweeting it, something to this effect. Like they, people just kind of forgot that this happened in images that there's like a whole other sort of, I don't want to, it's easier to just refer to it as an alien. There's a whole other kind of alien artificial intelligence just right in the room with us and no one's talking about it right yeah the way i think
Starting point is 00:16:10 about it the way i think about llms from a founder opportunity is not as a chat bot even though that is the primary way people use it right now and more as a translation layer so you can translate between any two languages and by, and I define language very broadly. So like it could be from a textbook language to like a seventh grade comprehension level language. It could be translating from unstructured texts to a user interface. It could be from code to natural language. Um, and so the key is to the thing that hasn't really been done yet is translating things into the language that makes the most sense for the user, whether it's like an image, a paragraph,
Starting point is 00:16:49 user interface rendered on the fly, tons of opportunities left to be uncovered that the incumbents won't be super well positioned to do. I think there's a lot of low hanging fruit that the incumbents did integrate very quickly, whether it's in like customer support with Zendesk or some of the stuff that TripActions rolled out. But in terms of this much more creative back to the drawing board thinking, there's a lot that has not been done. And I think of that as a huge opportunity for founders. Okay. Last one for you before you get out of here and we start talking about the diversity, equity, and inclusion experts who are shaping, literally shaping our AI discourse, our AI policy
Starting point is 00:17:32 right now in Washington. What is the thing that you just wish that you had written about that you'd included right from the start in the piece? Yeah, there was one section I wish I had written and I didn't even think about it until after I published. I weeded it in some areas, but should have been a standalone, is this narrative that AI isn't that smart. I think that's very popular, particularly among very smart people, interestingly. And so you get a lot of people pointing out that it's just doing next token generation. Right. So it's like producing one word at a time. There's no way that this understands this understands concepts in the way that a human can. And, you know, you have like Jan LeCun saying that ChatGPT isn't really that innovative compared to what the frontier is.
Starting point is 00:18:28 the frontier is. And then, and I think the reality is, and then you also have the lots of examples of how ChatGPT fails at math or logic, or like if you give it a puzzle, it'll fail. And I think of this as major cope that makes people either feel better about their capabilities as human, or better about their capabilities as an AI researcher or an engineer. And yeah, there's a good quote from Ilya at OpenAI saying like, you know, on the surface, it looks like we're just running statistical probabilities. But the reality is that text is some sort of representation or projection of the real world. And there's actually a lot to be learned from it. And that that's roughly the framing I would have on, uh, on that section. If I could have written the piece again, Brandon river, do you guys have anything to throw
Starting point is 00:19:18 at him before we part ways? So my, my, I guess, hallucination or like my cope, or I mean, it's not really a cope or a hope or a because i don't have you know i'm not an investor of a writer like i i guess i don't have like too much stake in this unless it takes my job um but um what happens when ai reaches a point where like most of the content ever generated in the world is ai generated. Doesn't that get incorporated into the new data sets? And then it's just sort of becomes this sort of like clown house, like reflection of itself. It's like AI reflecting itself back.
Starting point is 00:19:53 So it's no longer sort of imitating human language or like human art. It's imitating its own art. Doesn't that get us farther away from like something that's doesn't that just get too uncanny, I guess? Right. Yeah, I do think there is something there. The reality is it's a frontier that we don't totally know. One broad category of training the models is synthetic data, meaning you basically generate labels with the model and then, uh, and then train the model using those
Starting point is 00:20:25 labels. And it seems weirdly self-referential and I agree it's likely to lead to some exaggerations in pockets of the model, but how that plays out exactly, I don't know. I mean, a version of this is like what I wrote about in demonic. This is like, this was, was this last summer. And this was in the sort the picture-based... What was it? I forget which one it was, actually, which model it was. But you were doing basically reverse image searches. So you give the AI a picture and have it generate a piece of text. And then you take that text and you say, okay, now generate an image based off of this text. And I believe it was Marlon Brando or something was the image that was put in there. I'd have
Starting point is 00:21:04 to go back and double check this. So let's say Marlon Brando has put in this thing and was like, AI, give us some text. It gives us some text to sort of describe the picture that you just fed it. And then you reverse it. Or was it, they ran like an opposite, they like describe the opposite of this or something. Then you put that text into the generator. And what was revealed was not Marlon Brando, as you would expect. What was revealed was this like horrifying, literally demonic woman from what looked like some kind of fucked up hell dimension. And naturally everybody who found out about it started asking more questions about it and sort of unraveling this entire other alternate, entirely AI-generated hell dimension.
Starting point is 00:21:49 And what we were sitting on was maybe the first sort of portal to the negative spiritual realm via AI that we ever encountered as a people. Some people disagree with me on that, but I like to think they're right. Yeah, it'd be interesting to rerun that experiment on the latest version of mid journey or next generation of dolly whenever it comes out comes out some of us don't like to i try my best not to uh challenge the spiritual realm in this way one demonic one demonic summoning was enough for me to steer clear i was raised catholic i don't fuck with this stuff if you if you call demonic, Solana, the AI will remember. Just a note. I'm not putting any qualifiers on that in terms of like, I think you're a bad person. I'm just saying I respect it enough to leave it alone. Cool. Well, Solana, thanks so much for having me.
Starting point is 00:22:44 Really fun to publish this in PowerWars. Thanks for joining me, man. And thanks for publishing it. I will see you on the internet. All right. Thanks, guys. See ya. River, you wrote a banger this week on the DEI experts who are kind of shaping policy, AI policy for the White House. And I want to ask you some questions about that. Let's have a conversation about it. But first, just maybe like a little bit of background. I don't know about background, actually. Like first, just like out of the gate kind of an opinion. I don't know. I'm not actually convinced that anybody in D.C.
Starting point is 00:23:19 actually wants to regulate anything. It's like we have these people in place to go and work on this problem. But we watched the hearing and it seemed like senators, for the most part, didn't really understand what they were supposed to be regulating or why. There are a few people who really care about this and they're kind of in charge of making appointments of the kind that you're about to describe. But I think maybe the situation is not so dire right this second. Now, I think the moment that people become hysterical about the ability of AI, it becomes a lot more serious. And right now, the kind of the experts, quote unquote, in place, defining the bounds of what AI is going to be permitted to be and do are pretty fucking crazy. So why don't you just go
Starting point is 00:24:06 ahead and break down what you discovered? Yeah, let me just say, I agree. I think how the reason that it's like this in some ways is because people in Congress and the Senate were being told, you need to do something about AI. It could kill us all. It could do like whatever. And so there's this like idea that something needs to be done, but we don't really know what. So we're just going to create this like council and then this other office and they're going to figure it out. But because it's Democratic Party politics, the people that they put in charge are not really concerned about like the end of the world or whatever um which i mean people can judge whether or not they think that's like a real possibility or we're all just freaking ourselves out but they're more concerned about um sort of
Starting point is 00:24:58 like dei like bias and ai these sort of like niche issues that for reasons I'll explain in a minute, aren't actually issues for the most part. It seems like a rerun of the social media conversation for them. They don't really have a, it seems like most of these people don't have a framework for thinking about an intelligence capable of replacing, I don't know, most human labor on the planet, or at least certainly intellectual labor. And so they're kind of defaulting back to this more familiar argument in which really like the problems are mis quote misinformation as defined by these people as things with which we don't agree. Um, and, and, and bias, which I think is really just a weapon to stop things that you don't like. Right. People on the committee also don't,
Starting point is 00:25:45 don't have specific expertise in AI. If you look at their bios, they're all just like managers. Right. Out of some other. Well, that's what open society. This is what disinformation,
Starting point is 00:25:56 this is what disinformation researchers are. I mean, you just put it in your Twitter bio and you're good to go. Well, some of them are like, they do have, they run AI NGOs or AI, like there's AI in the name or AI like startups, but they are specifically focused on bias. It's like pretty
Starting point is 00:26:14 much all of them. But I should go ahead, I should go back and explain what Congress created. It's called, I've been calling it NAIC in my head. I'm not actually sure if that's how it's pronounced, but it's NAIC, National Artificial Intelligence Advisory Committee. This is the committee that we're talking about. What they do is they basically develop policy recommendations for the executive branch of government in essence. In essence, so they advise the president. They also advise NEOel she's the president ceo of equal ai which again is a ngo that wants to eliminate unconscious bias in ai there's another lady on this committee janet haven she is a soros alumni a a longtime Soros or Open Society's alumni. So, you know, the infamous George Soros funded NGO. I was like peeking around Data and Society's webpage and
Starting point is 00:27:40 found some of the things that their members have been publishing and stuff like that. And there was one, like the first one that I found was where they were accusing Facebook of perpetuating racial capitalism by refusing to censor people in the third world. people in the third world essentially the the argument was that they are using the third world as uh like guinea pigs essentially for i don't know open dialogue or just people being able to use the internet freely it really was baffling um they also have the digital claim was their specific claim was that facebook wasn't applying the same level of moderation exactly to certain countries in the third world as it does in the u.s exactly yeah and then there was also something about how it feeds the claim is that this actually feeds back into colonialism or whatever because then it creates these uh dystopic images of black and brown people killing each other uh because of social
Starting point is 00:28:54 media or something like it's like insane um they also have like a digital land acknowledgement which is hilarious um i mean they're like talking we need to i need to hear everything about that so do you want to read no no no tell me what a digital land acknowledgement i want to know what a digital land acknowledgement is yeah should we explain what a land acknowledge if you're like not an internet person let's just start there i mean i don't know there are people who are not super online and they're going to be shocked by a land acknowledgement, let alone a digital land acknowledgement. So let's just take it from the ground floor. Yeah, well, I mean, what is Turtle Island? I'll explain that.
Starting point is 00:29:33 Go ahead. it was like one Algonquin tribe or something called like the continent of North America, turtle Island. And so now they're like, that's what all Indians call it or something. Like, it's like, it's,
Starting point is 00:29:52 it's one of those things. It's like the two spirit thing. It was the name for the United States. Yes, basically. Or like all of North America, I think technically, but yeah.
Starting point is 00:30:02 Yeah. Which no native American had any concept of because they didn't have cartography. They didn't map out the continent. It's gigantic. It's just insane. Yeah. And also they weren't Algonquin.
Starting point is 00:30:13 Not all of them. Most of them weren't. Yes. Right. So people that do land acknowledgements call America turtle Island. Not all of them. I don't think, but like, I mean, in this one they do. That's it's one of them i don't think but like i mean in this one they do um that's it's one of those terms that's used in like when you want to be like
Starting point is 00:30:31 in some like native american um activism circles um which are mostly dominated by white people by the way like fake pretendians as they call them. And it's sometimes using like land acknowledgements, but yeah, they call it Turtle Island because it's like from the mythology of like one tribe that I'm pretty sure is extinct now. And they've just basically decided that it's, that's the correct term. That's the term we should use because America represents colonialism and Turtle Island, I guess, is it? So the way I've seen land acknowledgements used in the past, and also let's just keep it in the tech context. I saw one. It was, I believe it was the Apple.
Starting point is 00:31:16 This is a couple of years ago. It might have been the most recent one, too. When they do the new products on stage and you have the people in there, it's like the white people show up on stage and they have to be like people and they're like, it's like the white people show up on stage and they've got like a big scarf and they have this like very strange, like fashion. That's almost designed to become a meme on Twitter. Uh, and they're revealing the new products before they begin the whole monologue. They say, Hey, like before we start our presentation, we just want to acknowledge that we are standing on land that was stolen from. And they usually name a local tribe that you've never heard of that was stolen from, and they usually name a local tribe
Starting point is 00:31:45 that you've never heard of that apparently existed in San Francisco and only San Francisco. First, we want to acknowledge that the land where the Microsoft campus is situated was traditionally occupied by the Sammamish, the Duwamish, the Snoqualmie, the Suquamish, the Muckleshoot, the Snohomish, the Tulalip, and other Coast Salish peoples since time immemorial.
Starting point is 00:32:11 Some, it sounds like, say, Turtle Island for all of North America. And the purpose here is what? Just to, it's like just to respect the people that Americans stole land from. It's to be like, hey, like, we are living on stolen land. For some reason, that is this important thing that they've decided to work into literally every public interaction that they have. Yeah, I believe like, sort of strangely, because you don't think of it as like a woke country, because they're, you know, just a bunch of like, foul mouthed drunks. But like, I think it happened in Australia. Like, that's where it started. That's what my Australian friend told me. me he said that it's like everywhere you go every time you touch
Starting point is 00:32:48 down on a plane in australia they're like we acknowledge the papal past prison what like you know what i mean they just like go on and they do a whole landing dog fit like when you're landing in sydney um so somehow that's been recorded over here they do that yeah it's like you kind of can't do that there are only certain countries where you can do that? Yeah. It's like you kind of can't do that. There are only certain countries where you can do that. Australia maybe is one of them.
Starting point is 00:33:09 America is certainly one of them. And the reason is that you sort of like need a small, you need a small enough population of the original people to sort of for it to matter in
Starting point is 00:33:19 some way. It falls apart in a place like Mexico where so many of the people are actually like racial, they're like racially part indigenous and part colonizer, part Spanish, it just like it ceases to have meaning in that way. So just I really, I highly doubt that they're doing land acknowledgements like that on the regular basis in in well probably anywhere in
Starting point is 00:33:45 south america no not at all don't don't i think when i was in uh uruguay i saw one but that's because they wiped out all the indigenous so it's again like there's like no there's nobody there like so you can just say it and like because if you do it in you know bolivia or whatever they're like 80 of the population that's indigenous and also poor. Like, they're going to be like, OK, we'll give this land back. If you're, you know, so like the political economy, I think is different when it's like, you know, two percent of what, like one, two percent of the population. All right. So this is level level one. That's a land acknowledgement. What is a digital land acknowledgement?
Starting point is 00:34:24 All right. So this is level one. That's a land acknowledgement. What is a digital land acknowledgement? A digital land acknowledgement is when you do the same thing, but you're saying like our servers are on stolen land and we operate on. So it's like you have to like all of the mechanics of, I guess, where your setup is, like all the different components or whatever. So actual websites, it's like if Google were to do this, I mean, Google doesn't exist anywhere. It's everywhere. It's like Google is acknowledging
Starting point is 00:34:55 that somewhere down the supply chain, there are some servers sitting somewhere that once belonged to some other people that were not white. there are some servers sitting somewhere that once belonged to some other people that were not white. Yeah, it says our website, www.dataandsociety.net, runs on servers located in Turtle Island. But they also were like, I mean, it's even longer than that. Here's why. In the United States, much of this infrastructure,
Starting point is 00:35:29 they're talking about the infrastructure of the internet, I guess, sits on stolen land acquired under the extractive logic of white settler expansion. As an organization, we recognize this history and uplift the sovereignty of indigenous people, data, and territory. What is indigenous data? Well, what is indigenous people? What's interesting about this Turtle Island shit to me, when you're doing a digital land acknowledgement for all of Turtle Island, you're explicitly not talking about any actual group of people who had anything stolen from them in the past. And now, first of all, like, just right off the bat, I have a problem. This is like stuff that happened hundreds of years ago. And this is the history of all of the world is a history of conquest and taking shit, including that is the history of indigenous America as well. It's tribal warfare. We have ample evidence of this, that tribes took land from other tribes throughout history. The Aztecs,
Starting point is 00:36:11 we've talked about this, that it was an empire that was constantly invading other places and taking slaves and things like this. History is brutal. But let's just table that for a second and pretend that the sort of Ferngully myth of the Americas that people like this who work for Soros like to believe, let's just pretend that it was true. Turtle Island erases them from history. At that point, what is Turtle Island as a concept really about when you're doing a digital land acknowledgement, acknowledging Turtle Island? It's about white people. All you're really saying in that statement is white people are bad. And so like, that sounds maybe nice to an idiot, but what you're really, what you have really just done is you've, you've erased the brown people who you purportedly are obsessed with. Am I wrong?
Starting point is 00:36:54 No, you're not wrong. And it's also that it just ignores the complexities of not even like the Native American warfare that happened between tribes before, you know, colonialism, before the United States was colonized. But also like what happened after, like, yeah, the Trail of Tears was bad, but the Cherokee also brought black slaves with them on the Trail of Tears. Like the last, the last nine people. We need a chest down here so I can have them look it up. That's like a, I'm like, look it up. Someone look it up. We're going to get roasted.
Starting point is 00:37:30 No, no, it's true the the chair there was a recent controversy actually because the tribe kicked out a bunch of black people basically who were like on the dawes rolls they were like enrolled members of the tribe because they were descended from the slaves because that you know their ancestors had been born and lived in indian territory and a lot of them were part native too just like usually illegitimate you know um and they like kicked them off sources confirming the the the existence of black slaves cherokee the cherokee did it the seminole did it the creek did it the um i want to say the chickasaw maybe um yeah there's a most of the southeastern tribes had black people as slaves and they took them with them when they went to oklahoma and since oklahoma was still indian territory and
Starting point is 00:38:11 remained neutral during the civil war um black people belonging actually specifically to the cherokee were the last black people enslaved in the united states they were enslaved like i want to say a year or two after the Civil War ended, before the federal government finally forced the Cherokee to free their slaves. So that should have been a, I mean, we should have written a Juneteenth piece, and it should have been that. Yeah. This makes me feel like a really good startup idea would be to get funding to start a company that
Starting point is 00:38:45 house their servers on totally pristine non-stolen land somewhere in like Africa or something. Well, you would think that would also constitute, that should also presumably be Europe, you know, like what, obviously it's a history of white people stealing land from white people, but there's no like indigenous native american tribe you thought and yet when you google around for this there are all of these weird like scandinavia has a weird culture of talking about its indigenous population which i i mean need to dip in it seems like you know something about this. Sami. What is that? So the Sami are, they live in Finland, Switzerland, or not, sorry, Finland, Sweden, Norway, and like some parts of Russia.
Starting point is 00:39:34 Their language is like closer to Finnish. They're like reindeer herders, basically. Very culturally distinct, convert to Christianity until like the late 1800s or something. But they're basically, like they are indigenous to that area, but they weren't driven out of the lower areas of Scandinavia by the Norse. Like they migrated like from sort of like middle Siberia into like northern Scandinavia before um the norse but they didn't occupy like the same territory until like pretty recently and there was like some stuff that like i think especially in sweden um like some like forced assimilation stuff and i think they might have actually like sterilized some of them like so there was like some unsavory stuff that happened like in the 1900s with those people.
Starting point is 00:40:28 But they didn't, they're no more or less indigenous than like the majority of Nordic populations in those countries. Right. It's like they just like there was like a fairly brief period in some of those countries where like there was discrimination and stuff against them but for most of that but for most of like history they've just kind of like existed almost like in separate worlds like the norse didn't really go that far north because there was no reason for them to like they they didn't have the same like way of life they weren't interested in becoming reindeer herders they were you know fishing and doing viking raids and farming and stuff so they didn't even want the land what's interesting about the it's just the entire world americans america's dominance is insane our cultural dominance we have essentially exported our own history our own historical baggage onto like the rest of the planet. And everyone,
Starting point is 00:41:25 especially the rest of Europe has now some version of some version of our own struggle with like very specific struggles with, for example, the legacy of slavery, um, which is somehow translated into right now, the struggle between, uh, like black Muslims in France and the white French population where it like just makes zero sense at all. And things like this in the context of Native Americans, it's wild. Yeah. And if you actually like look at like the talk to French people or look in actual French coverage of a lot of like what's going on in there. The French are like, they have like a national policy of colorblindness and it's actually like,
Starting point is 00:42:13 there is like a problem with assimilation there. On the other hand, like rioting is like the most French thing you can do. Like they're constantly doing it. You know what I mean? Like, it is confusing in this context it is yeah yeah um and a lot of the root for yeah um so but but like the problem the the suburbs of
Starting point is 00:42:33 paris are like rough but if you go in those areas like half of them are like they call them like the working class areas like it's not like as much of a there are some areas that are like racial ghettos but like as a whole it's more of just like entrenched poverty including like white french entrenched poverty in these suburbs because they're just like economically destitute areas that are dangerous and nobody wants to go to them and you know the french have their own sense of what it means to be french which is um you know to be secular and to riot but like only in like a this part of like a union strike or something it has to be a labor riot right um and yeah i mean you can't wear a crucifix into like the dmv and so like it's like yeah they're not gonna let you wear a burka in there, but that doesn't mean like they're being racist. It's literally like, that is part of being French is like being like to like religion and
Starting point is 00:43:33 like, I don't know, children smoking or whatever. So, but people are like exporting, you know, I think American ideas about race and religion and all sorts of stuff over there where it's just it's a completely different society. Well, to cap off the conversation on the digital land acknowledgement thing, it's I think the important thing here is just this is the craziest thing I've heard in, I don't know, weeks. And I've heard a lot of crazy shit. I live on Twitter. And these are the people who the Biden admin has tapped to shape AI policy. Now, as we said at the top of the conversation, it's really unclear what that even means. You know, how much power are they really going to have until they do? But for now, I mean, it's pretty dire. These are not the people that we want in charge of what the AI is and is not allowed to say. Right.
Starting point is 00:44:22 of what the AI is and is not allowed to say. Right. And I would just like to make the point of like the, the problems that they are obsessed with the, it's really just one product. Like sometimes they talk about like facial recognition, like facial recognition doesn't work as well on black people. So black people could be like misidentified if we use this in like law enforcement context or something.
Starting point is 00:44:42 I'm like, maybe there's something to be done about that, but I feel like you could probably just train the ai on like a more diverse data set but like they don't want easy solutions right um what they're actually more concerned with is bias in ai when it comes to to college admissions when it comes to getting loans, when it comes to job applications and stuff like that. And there is no bias in AI that we can really see. Like, the cases that they try to present are so, like, flimsy. And I think the worst one that I saw was Navreena Singh, who is the founder and CEO of Credo AI, which is like they basically sell like AI products where you can like download it and make sure that your AI data set isn't discriminating against people. It's unclear how they do this or whatever.
Starting point is 00:45:41 How many possible customers could they even have? I guess. I mean mean i guess you you don't need that many if you're targeting united healthcare and atna and whatever i mean these are really big contracts i guess um but good point um fair enough i there's her she She had prepared testimony before Congress last year, where, citing an example of bias in AI, was talking about, in very vague terms, a case where a woman with the same joint income as her husband and the same address was denied a credit card increase. And when her husband's credit card limit was 25 times that of what she
Starting point is 00:46:39 had. And I was like, that sounds weird. And so I Googled it and like Googled basically the details of this case. And what came up was a 2019 controversy where this Danish entrepreneur said on Twitter that exactly this, that his wife had a better credit score than him, but they had the same joint income. And yet his credit limit was 25 times hers and the Apple credit card or whatever wouldn't give her a credit increase. There was an outrage about this, especially because Wozniak jumped into the comments. It was like, the same thing happened to my wife. And it's just New York, the state new york said that we're going to do an investigation of this goldman sachs who actually issues the credit card the apple card comes out and they're like hey we don't we don't look at joint income when we're determining credit limit increases it's individual income so that's why you're seeing these cases it's not sexism and ai it's just you know joint income doesn't matter
Starting point is 00:47:45 if your husband makes more money than you he's probably going to have a yeah his husband is literally a billionaire right he's gonna make more money than his wife probably right safe assumption yeah i mean you know we're in the i mean does she even work if like if she does she doesn't have to right so i mean how much right so a year later i don't know why it takes a year for this to happen but new york state regulators come out and they basically confirm what goldman said um in like 2020 i think they said oh there's this common misconception that if you're married and you file jointly on your income taxes that you can claim like your the combined joint income as your income but credit card companies don't actually look at that they look at individual income so that's why there's discrepancies there's no evidence that goldman discriminates against women um and a year after the state of new york says that they clear they
Starting point is 00:48:51 confirm what goldman said at the very beginning this woman is in congress and prepared testimony which means written thought out beforehand giving this exact case as an example and i think intentionally obfuscating details of it so that nobody will notice that she's talking about something that wasn't real and basically amounted to people don't know how applying for a credit card works and and it's kind of insane like no i i even i was like did anybody notice this nobody noticed it so you know smart for her for you know giving so little information but it was obviously that case it was obviously that she was talking about the apple car case because there's no other case where a credit card company has been accused of
Starting point is 00:49:35 um discriminating against women it was this is just one case this is this is how all political debate works and probably has always worked in that you pick up details of stories that are advantageous to you. You reshape them, reframe them, distort them, sometimes maliciously lie about them and do everything in your power to get some other thing that you want. And in this case, who knows what that is? It's like, maybe they want, I don't, they want greater representation of some specific class of people in some specific body, whatever it is. They're not, they're not being honest, whatever, table it. The problem is when you take this kind of dishonesty and you like root code it into the search engine that we're all going to use for the rest of our lives as if it's fact rather than political discourse. And most of this, most of everything is, it seems, political discourse. That's most, everyone is is biased everyone has a perspective everyone on
Starting point is 00:50:47 any actually important topic is is forwarding some kind of an of an opinion and uh and so this is the this is the sort of problem of problems here it's like like the entire paradigm the whole framework is wrong we should not be looking to people to inform you know some some kind of regulation concerning what what is and is not proper speech that it will never work it will always end in like draconian authoritarian dictatorship that's just the only way it can end right well but they do want to restrict speech like obviously but that's only the only way it can end. Right. Well, but they do want to restrict speech, like, obviously. But that's only one part of it. I think what they want just as much, if not even more, is a sort of like AI-generated form of affirmative action.
Starting point is 00:51:36 And basically all forms of life and finance and job applications and college admissions. I guess they want that back. It's like the Navarro seeing what she's giving her prepared testament to Congress. To Congress, she's talking explicitly about parity of outcomes. So and she gives the example, you know, does your candidate ranking system recommend black women get hired at the same or similar rates it recommends white men? And so, it's like,
Starting point is 00:52:17 maybe, maybe it doesn't, but like, you have to look at like, how many black women versus white men are applying? What are their qualifications? It's like that doesn't even I mean, personally, I think that like you shouldn't even use AI in hiring if you actually like want to build like a good company culture, because it's hard to to just hire people based off like a computer reading a resume. Like people lie and like you can't really judge the vibe and all that but i mean um she even gives this example and um for also like um if a credit risk prediction system she says that judge black women is less credit worthy than white men um that would be an example of an unfair parity of outcome. But credit worthiness is determined on income, on past credit history, debt to income ratio, among other things. So rectifying this parity of outcome would be like a violation of the law as it stands now. And I guess we'll see if they can get away with it, if they actually- They cannot.
Starting point is 00:53:26 But like, yeah. As we now know in the context of college, because I mean, the Supreme Court just ruled on this. There is a long and robust history of law that will prohibit this. And this court is going to uphold those laws. Right. I mean, it feels like this whole thing whole thing i mean when it comes to like affirmative action in college but especially when it's coming to things like credit worthiness which like there's a limited amount of money i guess that like can be lended so like it seems like their attempt to sort of like get even for this like perceived like past, I mean, in some real like past inequities by just like, you know, giving any black woman who wants a loan alone at the expense of what would actually be like more
Starting point is 00:54:16 like working class, but responsive, financially responsible white men or Asian, because, you know, I mean, they're not going to deny a billionaire or millionaire alone but it could be you know the guy who makes you know 60 70 you know grand a year working like in the trades or working on an oil rig or something like that and has an okay credit score but then he goes to try to get a loan they They're like, eh, you know, sorry. It's, it's, and like, they would have to do that, not just because the amount of money that can be lended is finite, but because they want, they would want the AI,
Starting point is 00:54:58 the outcome, the outcomes to be like parity. So they would want to have like an equal number of black women and white men. So necessarily you're going to be sacrificing one for the other. If one is determined to be more credit worthy based on these objective, um, things like credit history and all that. All right. Uh, I think, uh, we're going to save that one for another day. I got a piece to wrap up and we'll hit you back next week. Talk to you guys later.
