Modern Wisdom - #330 - Theo Priestley & Bronwyn Williams - What Will The Future Look Like?

Episode Date: June 5, 2021

Theo Priestley and Bronwyn Williams are futurists, trend analysts and authors. The future is supposed to be ours to choose, but who is educated enough to tell us if we're going in the wrong direction? Theo & Bronwyn brought together the world's leading futurists to articulate, clarify and predict the current trajectories in all the areas of life that you care about, like the future of warfare, education, transport, politics, economics, space travel, relationships and more. Sponsors: Get 10% discount on everything from Slater Menswear at https://www.slaters.co.uk/modernwisdom (use code MW10) Get 20% discount on the highest quality CBD Products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20) Extra Stuff: Buy The Future Starts Now - https://amzn.to/34FfgKr  Follow Theo on Twitter - https://twitter.com/tprstly  Follow Bronwyn on Twitter - https://twitter.com/bronwynwilliams Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Join the discussion with me and other like minded listeners in the episode comments on the MW YouTube Channel or message me... Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/ModernWisdomPodcast Email: https://www.chriswillx.com/contact Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Hello, you beautiful humans, welcome back to the show. My guests today are Theo Priestley and Bronwyn Williams. They're futurists, trend analysts and authors, and we're talking about what the future will look like. The future is supposed to be ours to choose, but who is educated enough to tell us if we're going in the wrong direction? Theo and Bronwyn brought together the world's leading futurists to articulate, clarify, and predict the current trajectories in all areas of life that you care about, and some that you probably don't, like the future of warfare, education, transport, politics, economics, space travel, relationships, and much more. It's weird, because everyone has an opinion on the future,
Starting point is 00:00:43 right? Everyone thinks that they know what's going to go on. But it really does feel like we're only served one, usually semi-utopian view of how we're going to upload ourselves into the mainframe and be in virtual relationships with androids and never have to work or be in fear again. But there are other perspectives, and we definitely get to hear them today from Theo and Bronwyn. And now, please give it up for the wise and wonderful Theo Priestley and Bronwyn Williams. What's an anti-futurist? I get asked this all the time. It's a little bit of a marketing spin. I think it goes to perhaps Bronwyn's and my particular viewpoint of the future, which is: if you look at the pop-culture futurists like Peter Diamandis and Ray Kurzweil, they paint a very specific
Starting point is 00:01:55 version of utopia, which is everyone will merge with machines and become one with the singularity. And my particular take is more akin to I cast a large dollop of cynicism on top of all of it and just and kind of take a step back and say, well, is this really, you know, the preferred version that we want to actually prescribe to? And why should we believe that these two men or these particular futurists who have the loudest voice are the right ones that we should be following. So my take on futurism or anti-futurism is really a more pragmatic and realistic approach born out of the fact. I used to be a chief evangelist of a technology company. So I used to put
Starting point is 00:02:38 on a lot of the spin than the happy version, you know, happy, clappy version and stuff like that and hug trees. And now I hug trees because, you know, the trees are disappearing and I realize that, you know, technology isn't as rosy as what other people paint it to be. So I kind of take a very sort of negative stance in a sense towards these particular futures that other people want us to to to prescribe and and walk towards with blinkers on in a sense. What do you mean criticisms of the status quo, the dominant guards ideology for futurism? I don't think there are enough younger voices in the room for one. So, my criticism for futurism in general is there aren't enough younger voices in the room. So, it tends to be sort of old, funny, dirty guard.
Starting point is 00:03:33 That kind of sort of has a very sort of, you know, I would guess, blinkered and old way of manifesting what they believe is a future fit for humanity. The second thing is, which goes towards Bronwyn and I's, you know, a very sort of strong stance, is that nobody really questions, I think, a lot of what is being said. So we all tend to be distracted by the shiny objects and the lovely futures that are painted for us, augmented reality, we're going to live in virtual, with virtual avatars and spend virtual money and things like that. But no one really sort of questions whether this is actually the best way to resolve or solve some of humanity's biggest challenges like poverty, homelessness, inequality, our sense of self and what we are
Starting point is 00:04:22 worth as people. So I would, you know, my other criticism apart from, let's speak to the younger people behind us who want to take have a stake in that, that future is let's actually stop and question each particular step along the way and just ask, is this the right step that we take? And it goes towards the, you know, the future cone scenario where I think a lot of them are rather than looking at the preferable future, they're a way wandering off along the, you know, the one that they kind of prefer themselves. So it is, I get, it goes back to that biased view again. This is the one I want, but it's not, might not be
Starting point is 00:05:01 the future that everybody wants. And it'd be nice for people to just stop and question that. Go for it. Yeah, I suppose I have quite a similar view to the other, but also a little bit more divergence in that my problem with conversations around the future is that they do tend to be imposed from above by the most powerful wealthiest best connected voices in the room and they tend to be quite binary. They're either selling you on a probable dystopia or on a probable utopia and that's very, very binary way of thinking, but they're but selling you and that's the key. So people are either selling you on inevitable doom and gloom in order to get you to essentially fall under their
Starting point is 00:05:40 will to do what they want to do. This is what politicians like to do. They like to talk about the negative only to sell the fear side of the future so that you kind of listen to what they're saying. You give up a whole lot of your rights and your agency over where we are headed. And that becomes a self fulfilling prophecy. On the other hand, you've kind of got people more in the private sector selling you uncomplete
Starting point is 00:05:59 like the old saying, this technological utopia, this fully automated luxury communism that awaits us all. If once again, we just fall in line and do what they say and buy what they are selling. And that for me is quite a huge problem because both those roads sort of end up in the future where most of us are ending up giving up a whole lot of our agency, a whole lot of our ability to direct and broaden that future code. And instead of broadening our realm of possibility,
Starting point is 00:06:25 because there are almost eight billion of us on the planet, we should have lots of different ideas. Instead of being funneled into a much narrower idea of where we are headed, the sense of inevitability. I think that's what we really want to shake people out of. None of this is set. The future starts as the cover says. Now, and it always starts now,
Starting point is 00:06:44 we get to choose the next step in every direction that we headed. So interesting that when you think about futurism as an area of research, that the more that people become dogmatic, the less interesting futurism is, that you're constraining your own futures by putting your colors to one particular flagpole or another. There's not many things where the person that has the most cloud or the most power or the most convincing advertising campaign doesn't end up winning, right? So I suppose that it is important for people to just... And you are trying to encourage them to do it in this book as well, to think about what sort of a future they want, to suppose to just listening to people who
Starting point is 00:07:24 ostensibly don't... Can you have a qualification in the future? I don't really think so. I think that people, it's hours to create, right? That's why we're sovereign individuals. So you go through a ton of different examples, different areas in the book, and I want to go through some of those today. So the first one, making it nice and positive, was warfare, the future of warfare. He knows that I'm so bad, right? Well, it's interesting. I mean, Christina Libby from Hyper Giant wrote that one. And she's got a particular viewpoint.
Starting point is 00:07:59 And I think it prescribes to a lot of what other people think as well, which is, is become warfare is becoming fully automated, which means our inputs and our agency and our, I guess, the humanity side of, if there is a human side of killing each other. The human side of warfare, where was I going to go with that one, I wonder, you know, that side is actually being completely taken away. And we, you know, we have examples of robotic warfare. We have drone warfare and things like that. And we're giving up all of that decision-making to algorithms.
Starting point is 00:08:34 So there's that side. And then there's also the cyber warfare, which is the eternal war against people hacking various machines. Our computers, our infrastructure, our utilities, to hold us to ransom. And so warfare kind of sort of takes many forms. And in the future, obviously, you've got that technological layer upon it, which kind
Starting point is 00:08:56 of sort of makes it even more dystopian than it is, I guess. But it also allows people to, I guess the attack vector suddenly becomes a lot more wider and broader because everybody can take a little stab at it. You have scriptkitties writing little algorithms that are polymorphic and suddenly become virus that take over. I've seen a cryptocurrency is quite an interesting one right now because obviously it's supposed to be this decentralized pot of wealth of money that you have your wallet, nobody can hack into it and things like that.
Starting point is 00:09:33 And what we're finding is that all this money is disappearing in left, right, and center because there are holes even in something as complex as decentralized finance. So it's interesting. I think our, like I said, our tech vector is becoming broader, even though we think it should be actually shrinking through technology. So yeah, decentralized finance sounds quite nice, but decentralized wall-based, suddenly it should give us all a pose for thought and it's what technology is doing. It's democratizing all sorts of things, not necessarily just things we want to have democratized. That's asking us to ask very different questions. I think Christina's chapter is quite important in that it also shows how warfare is almost becoming deciverized. So what we saw throughout sort of like the history of warfare
Starting point is 00:10:19 is the start and off being an individual versus an individual. And we sort of outsourced the fighting to soldiers and they would fight each other. But now what's happening is we've got soldiers and machines that are going off to civilians again, so you kind of de-civilize that person, and if you actually look at what's going on in the Middle East, if you look at what's happening with out in the Eastern side of Europe, and what's happening is like what's bumping into each other on the far sides of Asia too, is that it seems to be like centralized control, where that's power or money, because of course some of these are private armies,
Starting point is 00:10:50 involving these very different automated machines, are now using them as a form of terror against civilians. So civilians don't become part of warfare once again. So it is a conversation that we all have to be aware of, because these things are out the bottle and it's not just drones and cyber technology, it's also the bio-warfare which we have to be very, very aware of. It only takes one guy and it's a lot easier to make a hugely destructive civilization
Starting point is 00:11:17 ending bio-whip in your garage than it is to make a nuclear bomb. So we have to be very, very aware of these things, and we have to start thinking again around these ideas of centralization, decentralization. And how, again, the path is probably somewhere in between the two of them, that there are some benefits to sometimes having central control over very, very dangerous things and ideas. That's the problem, right? As you democratize technology, and more people have access to increasingly greater powers of destruction. What you end up with is previously the worst damage that anybody could cause with a knife or a sword, but that same person can now 3D print themselves a gun if they
Starting point is 00:11:56 have the resources for it, and then you just continue to roll the clock forward. And you go, right, okay, so in 500 years time, what is the most basic thing that anyone can do? I can make my own rocket ship. Well, fantastic. That how are we going to police that? How are we going to be able to control that? Nick Bostrom has this example where he talks about how it could have been the way that physics was constructed in this universe that if you put sand in a microwave, it made an atomic bomb. Now, by the quirk of chemistry and physics that we have, that isn't the case, but it's not that it couldn't have been the case. And as you continue to pull these balls out of the urn,
Starting point is 00:12:29 right, each one of them could be something that's really, really bad, but it could be something that's just sufficiently accessible enough to cause some damage. And that's, yeah, as you've got a society that's ever growing, you have people feeling disaffected, you know, the difference between, I think it's the school killings that you get in Japan or China perhaps are people going around with a knife. Whereas,
Starting point is 00:12:53 how much destruction can you do there versus how much destruction can you see in America? And again, you just continue to roll that forward, some unfortunate, disaffected person trying to damage the world around them. And yeah, it makes for a scary future. So what do we do? How do we protect ourselves? Well, we started to see as we see our resurgence, obviously, a kind of a digitized version of sort of surf dim that we kind of see emerging in front of our eyes, like Theo was talking about the world of crypto too and what that is doing
Starting point is 00:13:20 is empowering vast amounts of wealth into very small individual hands against sort of decentralization as come full circle back to sort of centralization of massive amount of money. The Winkle, the Winkle Voss twins currently ruling half of the universe of crypto, yeah exactly. Yeah exactly. Or Elon Musk and the Belly teaches to like move markets and devastate small children across the world. Just wipe a trillion, a trillion dollars off the crypto currency market, yeah. We've created quite an unstable system there, but what it's pointing to, when you tie that together with the ability of people to literally develop private armies, is you end up with a world where more and more of us are going to be looking for essentially protection from private players, violent from our states, because at the same time,
Starting point is 00:14:01 the private hands are developing a whole lot of power due to desensualization, states are at the same time losing that power, losing that monopoly over violence, over currency, which is heading to quite a destabilizing point in society, almost like a new sort of Hobbesian state of nature that we're going to have to find a new equilibrium in. But the way we find that balance against those rogue agents is to essentially attach ourselves to the protection of people who have the means, the money and the guns and weapons essentially to protect us. And that's the unspoken but kind of the spoken base case future that we sort of stumbling into, whether you're looking at the sort of prophecies of the world economic forum, or whether you're listening to what the powerful, mainly men in crypto are saying, they're essentially pointing us in that direction once you start to connect the dots. And this is why we want people involved with the conversation, because then this is an
Starting point is 00:14:56 effortable yet, if we that doesn't fill you with joy, now is the time to say that, you know, there's a place for both regulation and education in trying to solve our very, very messy and very, very wicked human problems. I myself do lean towards more libido and less rules, but at the same time, I've met people and I think that all the specches I don't like rules to apply to me are quite kind of in favour of them applying to other people. I recognise that hypocrisy. other people. So I recognize that hypocrisy. Yeah, I, um, it seems almost like a world in which similar to how the American healthcare system is set up where you have healthcare insurance, it might be the
Starting point is 00:15:35 fact that you need to pay for some sort of digital private security, some physical private security. And then you've actually got a market for these sort of mercenary protection for higher companies almost, you know, too. But did you say that? Yes, what did we see this week? I say, I live in South Africa. We've had private security and private health care and private education for even the middle classes and the lower middle classes. Our private school education costs less than a state education, although we have essentially or theoretically free state education doesn't actually play out in the real world.
Starting point is 00:16:08 And we pay for our own security. I live in a state complex, so we have armed guards at the gate. We have a compound. This is like how I've grown up. So it's quite normal to me. I'm like South Africa is a preview of this very neophytal feature we sort of stumble into. And then just this week we've seen a couple of companies out in Los Angeles where the whole defund the police movement has gained quite a lot
Starting point is 00:16:28 of popular support. At the same time you've got rich people who are now investing in a sort of take start up for on demand, the sort of, you know, e-hailing version of private security, which is not African. I'm very well. But this is shocking, American. It's just like Uber, Uber for goons with guns. Is that what it is? Yeah. The main position of the market. The right to arrest people. So yeah, I've seen the future.
Starting point is 00:16:55 It's not all this crap to be. Everyone thinks very, very fantastically about these things. Everyone's going to be South Africa. That's terrifying. And the fact that we're all laughing about it as well is purely because it's so ridiculous and terrifying, there's no other outlet for the emotion. You just say, well, I've got to laugh at it because the alternative would be to break down and despair in the corner, just weep.
Starting point is 00:17:14 Well, that's actually quite an interesting point because the natural reaction is to go, oh, how stupid this will never happen, blah, blah. And of course our apathy towards the shock and the awe and it is becoming normalized to the point where it just happens under our feet. And then the next day you wake up, you know, you've got armed guards, I don't know, people posting leaflets going, would you like protection going to work kind of sort of thing? And it's like, when did this happen? Well, of course, we stood laughing about it last week. But you missed the bulletin where you had the opportunity to say, no, I didn't want this. And again, goes back to the book, you know, you have a stake in the future. Instead of, you know, being apathetic about it, actually stand up and
Starting point is 00:17:58 saying, yes, this is fine for me. No, I don't want this for me. What about transportation? What did you learn about that? Flying cars, I think, dogs rights about flying cars. You're not going to have one. You're not going to have one anytime soon. Well, you're not going to own one. You might get to ride in one. I think it's transportation. I'm going to bobbin weave on this one as well because I saw an interesting video where in China they were trialing what they called an autonomous vehicle which drove along the road essentially China invented bus, but the way that they painted it was that there was this autonomous vehicle that had many carriages that drove along the road and followed the path of the road and I just thought why I'm gonna have you know I've kind of had these in Edinburgh for you know 50 hundred years now, you know, and so many other countries but because it's autonomous, all of a sudden it's something new.
Starting point is 00:19:05 And so generally what we do, what we find is a lot of ideas have been recycled for one. On the transportation side, we're not going to see flying cars. What you will see is autonomous taxi drones, for example, where there will be either single passenger or multi passenger, maybe two, three, four family size, nothing like a flying bus. Let's put it that way. But again, you know, not for the general public, I would say, for one. So for the Uber rich and for the
Starting point is 00:19:36 rich who are kind of forward to sort of call a taxi to their to their acreage in the backyard and take them up to their skyscraper penthouse suite. That's fine. For the rest of us, it's hoofing it down at the bottom of the streets again. Surely that would be democratized down though as the technology becomes more widespread. You're going to inevitably end up if there's a market there for it, you're going to continue to drop the price and drop the price and drop the price until you can match supply and demand. Yeah, but at the same time, I mean, you find all these technological shifts and innovations are always trialled and built for specific cities in mind.
Starting point is 00:20:13 And so you have them for Tesla and things like an autonomous transportation. It's always trialled and built for American roads, where it's very long, very straight, huge cavernous roads, and it's all grid-like patterns. Now, if you come to a city like Edinburgh, yeah, for Thessalon, or Newcastle or something like that, it's just not going to work. And we certainly don't have the buildings or the infrastructure, or even the technical infrastructure as well to one, support electrification, two, our roads are so bad that you would never want an autonomous vehicle
Starting point is 00:20:50 to even attempt to drive it. Landing on a Royal Mile in Edinburgh, yeah, exactly. Oh, that's it, that's about it. It just goes up and down in a straight line and that's great. The rest of it, just forget it. Oh, no, another pothole. You know, and then flying vehicles or flying drone taxis kind of sort of thing.
Starting point is 00:21:09 Again, you've got that kind of, we don't have even the, are we going to have like many air traffic control systems all over the place? Because every city is built different. All the infrastructure, the fact that we don't have double-decker trains in the UK is because we built, like in Amsterdam or in the Netherlands, is because our rail structure, we have low bridges because they never even considered it. So all we could do is just make trains extra long rather than doubling them up and have and doubling the capacity a different way. And it's the same with like taxi infrastructure
Starting point is 00:21:45 or flying infrastructure or electrifications as well. Some cities have been built a specific way that they just even didn't even think about that at all in the future. And so it's gonna be very difficult to overlay some advanced technological innovation onto a really crumbling old sort of Victorian infrastructure. Yeah, that's the problem of having a country
Starting point is 00:22:06 as old as ours in the UK, right? It's all well and good looking at Dubai and seeing that, well, you don't really have the history that we do or America any so you've existed for 250 years, the oldest building that you've got, doesn't really matter. And we've got all of this beautiful history,
Starting point is 00:22:21 but we also have roads that were designed by a three-year-old that had too many e-numbers, you know, that had too much glucose and got let loose with a sharpie pen, and it squiggles and all over the place. I mean, to sing the song for the techno-opnimists in this, it seems to me like it's just a coding problem. You just need to come up with a sufficiently scalable, advanced algorithm that would be able to work out and negotiate with the other vehicles that are around and can be sufficiently understanding of the
Starting point is 00:22:53 different types of landscapes. In theory, yes. When it comes to the issues of autonomous vehicles in general and flying autonomous vehicles in particular, what's really holding us back is not actually the technology so much as it is the people, because we tend to forget about this. So just because it can be done doesn't mean it will be allowed to be done.
Starting point is 00:23:13 The thing with autonomous vehicles is the regulatory and legal absolute nightmare. And we have become a very timid species. We no longer like risk. So the great analogy is if we invented the motorcar for the first time this year in the year of fear and loathing across the planet, would we have allowed anyone to get in a vehicle that goes at 120 kilometers an hour? You know, I don't think so. You see, we've just got to a point
Starting point is 00:23:39 where we are very, very afraid of each other, we sue each other for all sorts of things and we have allowed probably too much red tape and too much safety as them to encrypt into our society. So I don't know if there's a will to get general mass transport autonomous vehicles off the ground. That's the first thing. The second thing is of course that flying cars in particular do require quite a lot of space.
Starting point is 00:24:01 So once again, if you're in a very big city, it's not accessible for everyone to use these things. You know, there's so much rooftop space that you can land on, so you can imagine the sort of queues up elevators instead of sort of queues on the pavement for taxis. There's something less space for these things to actually people up. So what you get to is the point where essentially the same sort of people who are able to use helicopters right now get to use these things instead. But the other regulatory issue that's going to slow this down, even if we are able to overcome all the technological challenges, is the eco-movement.
Starting point is 00:24:32 And the fact that even if these things are running on renewable energy, they still have a higher carbon footprint than your bicycle. And I live in South Africa, but I have noticed with big eyes and are the slight amounts of laughter. How Europe is pushing back on any sorts of progress and is regressing to basically the fastest speed that they, that one is encouraged to go at is the speed of basically a two speed bicycle. So we have to understand
Starting point is 00:24:56 this is the society at these things that being launched into, which is why there's a very different future ahead for your sort of aging, perhaps sort of paler called the nations up north, and they are perhaps some of the more vibrant younger economies that have not yet lost their nerve, so to speak, which is an interesting thing for me to say, but living in South Africa, we kind of at the the multi point of all these different world views from East, West, North and South, and there's a very different tint on the future coming out of the societies. That's super interesting, the fact that culturally we have muted our desire for rapid growth. There's a lot of Alex Epstein on the show recently who is a pro fossil fuels philosopher and researcher
Starting point is 00:25:39 and he was talking about human racism as he calls it, which is the hatred of our own race. And this is, you talk about destroying the planet and using fossil fuels incorrectly and so on and so forth. You guys would love him. You should really check out his stuff. I wonder whether, you know, the Valley of Despair that people have as part of a learning curve. I wonder if there's an equivalent with the Valley of comfort that we've managed to get
Starting point is 00:26:02 to now. So previously, all of our problems were problems of scarcity, not problems of abundance. We've now flipped that on its head. It's problems of abundance, not scarcity. There could be more comfort, more abundance down the road, but I wonder whether we think, well, it's all right now. This feels okay to me. I don't need more convenient travel. I don't need more whatever it might be X, Y, and Z. So we'll just stick here that we've reached a point, I don't think it would have been difficult to have justified a thousand years ago, not continuing to develop all of
Starting point is 00:26:36 the things that we've had to, but now we get to a stage where I think, okay, so I feel all right, as a normal person, I don't really have too many problems. So why bother? Yeah, comfort definitely slows us down from progress. But I think that's also quite a sort of first world sort of perspective. Like I think there's a lot of people who are quite a lot of growth in my corner of the world. And I still am on the more privileged corner
Starting point is 00:27:00 of the African confidence. So I think that what I definitely noticed, particularly with people that work professionally in the sort of futures space and the policy space, is that we're getting messages of not just, we have enough coming from Europe and from the greater West, if you want to sort of use that sort of summary, that actually pushes towards things like degrowth,
Starting point is 00:27:19 actually saying we should have less. But at the same time, us sitting in Africa are saying, we can't actually do with less. We actually need quite a lot more. So there is quite a big, big conflicts emerging there. And I thought it was quite interesting. I actually got yelled at on Twitter today for for comments saying on how are these messages coming out of again, Europe saying that the children are bad for the planet, see, shouldn't have any. I mean, that's a really privileged point of view to actually start telling people that people about,
Starting point is 00:27:45 we should save the planet for these people that aren't going to be born, right? We'll be saving it for. I find it quite a lot of irony in any sort of degrowth messaging, which just riles me completely the wrong way. We need to be piling ways to sort of leave this place better than we found it,
Starting point is 00:28:01 but also to progress to something new, because stagnation is just death. And then when we stop changing, but also to progress to something new, because stagnation is just death. And then when we stop changing, that's literally the definition of death in some terms. So we have to be optimistic about these things. Otherwise, we do sort of cool into a little ball and start preaching degrowth and the anti-natalist movements, which is about as pessimistic as you can get in terms of humanity. But where they are already, and these movements are growing right now, deaths of despair are increasing and deaths from suicide are now more
Starting point is 00:28:30 common than deaths from homicide at a global level. These things should give us pause for thought about how pessimistic we've become as a species. Even if we pretend otherwise, we're not acting like we're optimistic, and I find that quite shocking, because all the trends are generally pointing up. We should be quite excited. It's quite a good time to be around. And in fact, if you look at the general lottery of life, you don't know where you're going to be born
Starting point is 00:28:54 or to which parents, and if you were told you could pick a date, this would be a pretty good bet. I mean, I doubt I'd want to pick a date further back in history. The chances of having a decent life today, anywhere in the world, are greater than they were at any point previously in history. We've completely misplaced our optimism. I'm not quite sure why that is. Maybe it's because we're too comfortable, maybe it's because we are just a
Starting point is 00:29:15 naturally dissatisfied, envious species. I don't know if you've got some thoughts, Theo. I will go back to my thoughts on apathy. We have become an apathetic species, and maybe fallen into that comfort zone. But I think it's also a question of distraction as well. A lot of the technology that we have around us today is built to distract. Generally, I agree with Bronwyn that
Starting point is 00:29:48 we are actually living in a period of time where things are abundant and we are actually in a better state than we ever have been in the past. But people are apathetic about actually finding out that information and those facts for themselves. Instead, they're fed the negative side of that information, which is, oh, woe is me, everything is doom and gloom. This is the worst we've ever been. This is the worst state that humanity's ever been in. And they're fed that point of view. And because they are in a comfort zone, and they are also apathetic, they don't want to find out any other information around that. Is that
Starting point is 00:30:25 the right point of view? Or do I just accept it? Oh, I can't be bothered. Oh, someone's posted something on Facebook. Oh, there's a new TikTok. Here we go. This will distract me from the truth and what's actually happening in the world today. And I think this is quite an interesting but also dangerous place that we're in right now, which is that there are a lot of people sleepwalking into the future at the moment, completely unaware of all the good things that are happening in the world today, but completely besotted with all the bad things, because these are the things that the algorithms love to serve up, because it keeps us in that kind of benign state. Yeah, from the valley of despair to the comfort zone of apathy.
Starting point is 00:31:09 We've managed to switch those around. What about work? What's the future of work looking like? Are we all going on universal basic income? Are robots taking our jobs? What's happening? Neither. Those are the only two doors that I have to go behind.
Starting point is 00:31:24 What's happening now? I think I'll let Bronwyn speak on some of this as well. But my thoughts are robots aren't going to take our jobs. What they're going to do is augment certain functions that we perform and hopefully allow us to do other things and explore other sides of humanity. This is what I would like to see. Certainly, I spoke about it in a TED talk I did a couple of years back as
Starting point is 00:31:50 well, on that kind of front: will we ever follow a robot leader, or will we actually learn to be humans again? I would like to see this as a Golden Age or a Renaissance where people rediscover the arts and humanities and go and paint something just because, not because there's money in it, but because they actually want to explore that side of themselves and be creative. I would like to see, you know, the autonomy and the augmentation allow us to switch to something like a three or four day week, rather than speeding up the productivity but filling in those five days again, so we don't escape it. I would like to see us escape that a little bit.
Starting point is 00:32:31 On the other door, which I've completely forgotten, which one, oh, universal basic income. I always find it really interesting that the biggest supporters and the loudest voices of universal basic income are the ones that are setting us up for these kinds of traps of automation. Their history has all been about software and automation, Jeff Bezos and things like that, making us work harder and faster but giving us nothing in return, to the point that we actually need more to survive. They're the biggest supporters of it, and you have to question whether it is a completely flawed system in the first place as a result of that. Yeah, so in terms of universal basic income, I think it follows on quite nicely from our conversation on degrowth. It's always amusing to me that rich,
Starting point is 00:33:33 young men in Silicon Valley are the ones that want to preach to us about why we should accept a "please sir, can I have some more" universal basic income future. At the same time, it's also quite ironic that it's rich old men in political power in Europe that are trying to sell us these ideas of degrowth, which incidentally leads to the same point, because a lot of the degrowth ideas are about making do with less, about job sharing, about, like you were talking about, the four day work weeks,
Starting point is 00:34:00 which I still think is the wrong way to look at this, because they're still talking about master-slave, owner-employee type relationships. Where we're actually heading is into a post-job world, but not a post-work world. There's quite a big distinction there. Jobs are an artifact of the industrial revolution. Before that, people were either literally serfs or they were hand-to-mouth subsistence farmers. You worked for yourself, by yourself, or you were kind of a slave and you didn't really have a choice. We don't really want to
Starting point is 00:34:30 go back to that, but that's kind of what I'm seeing is going to happen, because I don't see much difference between being a serf that has to till the physical land in order to get your allowance from the owner of the means of production and being a digital serf that has to, you know, either fall into the jobs work program of your MMT-touting heterodox economists in the West, or ask very nicely for your tech overlords to hand you an allowance or to increase your allowance at the end of every month. Essentially, as long as your ability to survive is dependent on someone else, whoever feeds you, owns you.
Starting point is 00:35:10 So I don't see any sort of universal basic income type proposal as being anything other than a bandaid over a very failed society. What we are actually working towards, and I'm a bit more optimistic about this than perhaps Theo would be, is a post-job world. Jobs are less and less attractive, particularly to young people. Even in my country, which has some of the highest unemployment
Starting point is 00:35:33 and inequality rates in the world, young people don't want bullshit jobs. They want good jobs, or they'd prefer not to have a job at all. And this makes a lot of sense, but of course you can't have a society where more people are dependent on the state or a handout than are independent of it. So the only way to resolve that tension is for more people to become independent, or to become gainfully unemployed, as is actually someone like myself. And I think you've been gainfully unemployed for a large part of your career too, which comes with a lot less security than universal basic income or a salary would give you,
Starting point is 00:36:07 but it also actually comes with a lot more freedom, and it comes with the ability for you to manage your own time and to not work to live, but to enjoy the work that you're doing. Work then becomes a value equation, whereby we have to find ways, as the challenge to individuals, to add value to society.
Starting point is 00:36:25 And if we don't, we are going to end up in a neo-serfdom type relationship where we are dependent on someone else for a handout, which can be changed, removed, or have T's and C's added to it, which is what should scare us the most. Because universal basic incomes are going to be attached to things like your healthcare and to a whole lot of rules that come with it. I mean, we've got smart toilets now. So if you're going to have your universal
Starting point is 00:36:48 basic income and universal basic health care, you have to follow the rules or you're going to lose your privileges, right? So your toilet's going to know when you've been eating badly or when you haven't been going to the gym, and it's going to tell on you. And that's a very crass example, but that's the world we're working towards. We have to understand that as long as we're getting stuff for free, we are the product, even if we're getting it for free from our states, whatever the case might be. So the alternative is to find a way to actually add value to your society. And ironically, many, many people in salaried jobs are not adding value to society. Those are quite harsh words, but I'm sure many of us have met the permafrost layers, the management layers
Starting point is 00:37:24 in organizations. They were sort of passing go and collecting their check at the end of every month without adding value to the organization. And those people are going to find themselves out of a job. They're going to find themselves out of a job and out of work, because unless they've found a way to add value to society, they are going to be someone else's property, essentially. But not everyone is built to be a creator of some kind. The reason that you have stratified levels within organizations
Starting point is 00:37:55 is that there are people with varying degrees of conscientiousness and intelligence and abilities and so on and so forth. So what are you gonna do for the people at the bottom of that? Or at the bottom half of that? I'm not too worried about the people at the bottom. Most of the people at the bottom of our societies and our economic ladders
Starting point is 00:38:11 are actually essential workers. They were the people that were still working last year when we were all in lockdown. They weren't earning much money, but they were essential to society, which means they are actually adding value. The sort of people that aren't adding value tend to be your overprivileged, overpaid, white collar workers in good jobs. After all, the definition of a good job is being paid
Starting point is 00:38:29 more than you are worth. So let that settle in for everyone that's listening who feels like they ended up last year richer than they started out, even though they didn't work too hard. It wasn't me. My furlough payment was shit. But if you think about it, presumably the people at the bottom of that ladder are also the ones whose jobs are going to be close to the easiest to automate. Not so, not so at all. Okay, tell me about it. Because any job that involves your hands or your eyes or physically moving your body is much harder to automate. These are
Starting point is 00:38:59 jobs that involve caring, and caring is of course going to be one of the biggest industries coming forward as our populations age and we run out of young people, as we were talking about with population degrowth across vast swaths of our globe. Those jobs are going to increase. Things like being a pastor or a personal trainer are going to do just fine. So in other words, if you're touching people,
Starting point is 00:39:18 either spiritually or emotionally or physically, you're going to be OK, because those jobs are much harder to automate. It's much more difficult and much more expensive to create a machine to pour a cup of coffee than it is to create a piece of code that can price your insurance or suggest your financial advice. So, even within those roles, that's why financial advisors are having to shift from selling products to actually giving you coaching and mentorship around your financial status. So you've got to find where you're actually adding value and where you're
Starting point is 00:39:49 just being a cost to the people who are paying your bills. And there are portions and places to add value in every different industry or job function, but you have to find them, because too many of us and too many industries have been built on essentially being toll keepers and collecting rents. Is there enough room for everybody to step into this? Yeah, I mean, if you want to know if there's enough room for everybody to either be a creator or to add value, just go look at the people who are making a living selling their media
Starting point is 00:40:19 as NFTs right now. There's a buyer for everything. As long as you can persuade someone else that it's valuable to them, that's gonna be good, but don't think you're gonna get a salary from it. That is going to shift. I do think salaries are going to be something that you don't want. Salaried jobs are going to be the sort of jobs
Starting point is 00:40:38 you don't want to have, and not the ones that good people will be taking, going forward. What about space travel? Space travel is like the frontier of futurism, right? What's the future of space travel got in store for us? It's going to happen, finally. Whether we set up a settlement on Mars or the moon
Starting point is 00:40:59 in the next 50 years is something completely different. It's interesting. I saw the, what's it called? Is it Artemis or Gateway? It's called Gateway, which is basically the lunar version of the ISS. And it's going to serve as some sort of hub to shuttle, you know, rich folk to the moon and back,
Starting point is 00:41:21 or to Mars and back, and act as a staging post. And it'll be permanently manned and things like that. And it always makes me laugh, because you see these programs like The Expanse and Star Trek and Star Wars, and they paint, you know... The Expanse is probably the closest vision of the future, and it's certainly a very dystopian one anyway. But at the same time, it still doesn't paint a very realistic picture, in the sense that space is very hard. So failure is always going to be on a big scale.
Starting point is 00:41:54 So at some point, when Elon Musk boards his space rocket, there's going to be a tiny part of him that's going to be shitting his pants, because that payload might go up in his face. So for one, there's a big risk there. Two, living in space is actually really, really hard on the human body. If you've ever seen an astronaut coming back down from the ISS after a 200-day stay, they are literally crippled.
Starting point is 00:42:17 Their muscles have atrophied. They can barely stand on their own. Now, imagine us, you know, saying, I'm gonna live on Mars, which has, you know, a different atmosphere, different gravity, and the same on the moon as well. And then think about the journey to get there, having a little two-week holiday, and then the journey to get back to Earth, for example. You know, that's going to wreck the human body. And I just don't think that as a species, at this point in time,
Starting point is 00:42:45 we're technologically advanced enough to be settling. So it's essentially a one-way trip. The other thing as well is, if you notice the complaints from the astronomy point of view about Starlink: at the moment, we are literally littering the skies with satellites, with little bits of space junk as well. And it's becoming harder and harder for us to actually track, I believe. And then
Starting point is 00:43:12 I think in the future we'll see this sort of scenario where WALL-E got it right, where the earth is literally cocooned by all these dead satellites, just this horrible cocoon of metal, and it's going to become an ever-increasing risk to actually travel to the stars until we sort that out as well. So there are known risks, which are: space is hard, the financial side is incredible, failure is spectacular, and it's really hard on the human body. And then you've got all the actual physical risks, in terms of, you know, risk to the body; you've got the risks of space and space junk up there; you've got radiation as well, from the sun and from space in
Starting point is 00:43:59 general. It's not going to be an easy thing. I wish people would read up a little bit more rather than looking at the fanciful renders and the movies out there that paint a very utopian version of what life is going to be like, because it's going to be indentured servitude and one-way trips for the people who actually want to pay to get there. Presumably though, if the people that are going are the ones that have the money, and the companies that are giving the service to them are refining their payloads and their rockets and the technology that they use and everything else, if you want to repurpose the money of people that have too much of it, through companies that want to take their money, into research that essentially democratizes, or at least brings the cost down for, future space travel, surely that's a good thing.
Starting point is 00:44:53 Oh, I mean, this is no different to Christopher Columbus and all these other people who built a boat, financed a boat, and then thought, you know, sod it, let's take a chance, fill this boat up with some intrepid people, and let's go and explore the brave new world. You know, it's exactly the same thing, but just on a completely different scale. So, you know, to the victor go the spoils, in a sense. You know, the people who set foot on Mars or the moon, who build the first colony, survive, propagate, you know, they'll be known as the pioneers, the true pioneers.
Starting point is 00:45:34 You know, at the moment we're at this kind of weird stage where everyone believes that this crazy guy who claims he has autism is the future of mankind. And I think we just need a bit more realism injected into that vision first. But certainly, you know, it's a grand prize. No bones about it. This one's a grand prize, but it's certainly going to take real pioneering years to get there. You're not a fan of Elon Musk? I'm going to stop trying. You know, he's an interesting character. I'm not a great fan of him.
Starting point is 00:46:14 I think there are flashes of brilliance in what he does sometimes. You know, in what he's done with Tesla, off the back of workers, I might add. It's not just him, you know, tents full of people sweating their arses off to get cars out the door, the shoddy panels, and things like that. But then what he's done with SpaceX, I think, is fantastic, in terms of lowering the cost and democratizing that and taking that away from, you know, agency control, where we just have to trust NASA to do it all, kind of thing. He's now commercialized space. But I think he kind of brings himself down with all the weird stuff that he gets
Starting point is 00:46:58 on with, the odd crypto thing and the odd tweets. Instead of picking up the phone, he should maybe just pick up the next sketchbook and do something wonderful again, like his namesake on his company logo, Tesla. Dream up some big wonderful ideas again. The Boring Company is another classic example: I'm going to tunnel under the earth and take away all the congestion, and it's going to be automated highways under the ground. And in the end it was just a tunnel that you drive through with flashing lights that would probably give you a headache, if not trigger off an epileptic fit and cause a crash. So, you know, sometimes he does something fantastic, and then,
Starting point is 00:47:41 you know, three out of four times, you know, he just is a complete dick. Yeah, it's interesting thinking about people like that. I often think about the price that we need to pay to be the people that we admire. And, you know, if the byproduct of having democratized space travel and commercialized it so that people can go up and the future generations, you know, at these inflection points earlier in the development of technology, you are setting the tone so much more importantly than you would be down the line because you're essentially opening up
Starting point is 00:48:09 different branches of potential futures, right? When you do this thing, and if the price that we need to pay is to have like the sort of Tony Stark, real world crazy Tony Stark thing that we've got going on. It would be more optimal if he just had his company look after his Twitter account. But again, that's perhaps the price that we need to pay. The price that you need to pay to have the guy that thinks those things is also the fact that he's going to try stuff like putting a tunnel underneath LA that perhaps doesn't necessarily work.
Starting point is 00:48:47 What about health? What's the future of our health going to look like? Which future of health? I mean, there are two very divergent views in terms of health, which I think comes back to the Elon conversation. There are people who want to progress and there are people who want to regress, and Elon's an interesting guy because he's one of the few people that is actually building, doing, taking risks, making new things. He's also quite associated with a whole lot of the really rich people who don't have a God or a meaning in their lives
Starting point is 00:49:19 that are wanting to live forever. So you've got people that are pushing for immortality, which isn't even real immortality, let's put it that way, but just trying to find ways to get their consciousness or their essence to be passed down to the rest of humanity, because they believe they are the Übermenschen of the world, of our generation. But on the other hand, you've got people that are desperately wanting to shorten their life spans. You see things like euthanasia on the rise everywhere. Is this true? I haven't been exposed to people wanting to shorten their lives. The euthanasia thing is huge. Isn't that people that are already dying?
Starting point is 00:49:55 People who are already dying, right? They're not actually... They're not healthy people that are wanting to die. Some of them are just old people. Right, okay. We're tired now. We're done. Sick of this place. If Elon Musk tweets one more time, I swear to God... That's it, I'm booking a flight to Dignitas.
Starting point is 00:50:09 But I think that's quite natural. I think lots of people get tired after a while, especially if you are thinking that the future is not going to be any more exciting, or that you could get sick, or that you're going to start declining. But there are definitely people out there that don't value any life at all, much less their own. I think that's the growing subtext of the apathy and of this sort of anti-growth, anti-progress narrative. It does spill over into that quite dramatically. If you look at the sort of categories of
Starting point is 00:50:35 where people are pushing for things like euthanasia, suicide, or compassionate killing, whatever the different legal categories to be legalized are, you can see those categories get broader and broader, not just for people that have terminal illnesses. I mean, technically we all do, right? We're all mortal. Unless you try to push for the digital immortality route on the other side, which seems to be the most likely way to do it.
Starting point is 00:50:57 Of course, digital immortality comes with a hundred percent fatality rate up front, but then your consciousness can be cloned and off you can go. That seems like the most likely way we're able to do this. Infinite life extension doesn't look like it's going to be achievable within our lifetimes. But much like the issue with civilization-ending events, like we were talking about earlier, like one guy blowing up the species with a misplaced, you know, printing of some sort of smallpox virus in his home printer, the odds eventually compound against your favor. Even if you're able to renew your physical body in terms of preventing illness, sooner or later accident or injury catches up with you. And that's the sort of threat that comes with trying to pursue
Starting point is 00:51:41 immortality as health. But that is definitely the direction we're going in, where, from an individual perspective, doing things that put you in the position of injury or illness gets more and more frowned upon. At the same time, healthcare costs increase, because of course everybody wants access to every sort of life-extending treatment that exists, which puts huge pressure on social security safety nets. On the other hand, you've got really, really wealthy people
Starting point is 00:52:08 who are able to purchase all of these things and to prolong their lives as long as possible, pouring huge, huge amounts of R&D into these fields too. They're actually doing quite interesting things with dogs: if you are a pet lover, they're making quite good progress on extending the life of your dog. How old can you get your dog to be? Well, they're busy messing around with it right now. There's quite a prominent company,
Starting point is 00:52:32 I'm not going to mention names, that's attracting quite a lot of funding at the moment, and they say they should be able to extend your pet's life span by sort of 20%, even up to doubling it eventually. The hope is to use the same technology on humans. So you can look forward to a longer, much more expensive life with many, many, many more rules as to what we are and are not allowed to do, if we do want access to those life-extending treatments. So it becomes a case of quality or quantity of life.
Starting point is 00:52:56 How long would you like to live if you were never allowed another glass of wine or another cigarette, or you weren't ever allowed to drive in a car or go up to space? Because those are all things that could result in expensive insurance payouts, whether from your public or private sector insurer. So that's the sort of overview of healthcare, where we're trying to look now at life extension rather than just curing disease, but actually preventing it. The prevention also comes with increased biosurveillance, and increased biohacking if you're doing it to yourself, but it means basically living your life, what you have of it, around maintaining your health, which can become a bit obsessive, for anyone that understands things like eating disorders and exercise disorders.
Starting point is 00:53:38 It can almost take over your life. So I suppose that's a sort of broad picture, in terms of how we can kill ourselves by trying to keep ourselves alive, or how far we're prepared to go in that pursuit. It's interesting to think about the fact that when your life is potentially infinite, or negligibly finite in terms of how far it stretches away from you, you do a cost-benefit analysis on every action that you take. And you're just going to end up potentially with an entire society of people that have got agoraphobia and that are unprepared to leave the house. Because at the moment, the likelihood of me being hit by a car versus the remaining 50 years of my life,
Starting point is 00:54:19 I think, well, you know, that's all right, 50 years. But if you're thinking, okay, it's 50,000 years that you've potentially got, or 500 years, or even 150 years, when I'm weighing this up, the scales are tilting now in a very interesting way. So I'd have to agree with you. Because everything, every potential life-ending injury, accident, illness eventually becomes more probable, given a long enough timeline. So you have to become more and more obsessive
Starting point is 00:54:43 about trying to prevent these things from happening to us. And that could be hugely, mentally distressing, especially if you imagine you had been able to keep yourself alive for a couple of hundred years, and then you sort of literally get killed by a crow dropping out of the sky in a freak accident, right? You'd be quite annoyed at that. It also makes you hugely, hugely paranoid.
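The point that "every potential life-ending injury eventually becomes more probable, given a long enough timeline" is just compounding probability. A minimal sketch, assuming a constant, independent annual accident risk (the 0.1%-per-year figure is purely illustrative, not a real actuarial number):

```python
def survival_probability(annual_risk: float, years: int) -> float:
    """Chance of avoiding a fatal accident for `years` consecutive years,
    assuming a constant, independent risk each year."""
    return (1 - annual_risk) ** years

# Illustrative assumption: a 0.1% chance per year of dying in an accident.
risk = 0.001

print(survival_probability(risk, 50))    # a normal remaining lifespan: odds stay good (~95%)
print(survival_probability(risk, 693))   # the "half-life": roughly a coin flip by here
print(survival_probability(risk, 5000))  # over millennia the odds collapse (well under 1%)

# Even with a tiny annual risk, the expected wait until the accident
# "catches up with you" is only 1/risk = 1,000 years, not forever.
```

This is why a radically extended lifespan breeds risk-aversion: stretch the horizon far enough and even tiny per-year risks become near-certainties.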
Starting point is 00:55:01 So, you know, the longer we extend our lifespans, the more afraid we get of death, on the one hand. On the other hand, you've got other people who just want to opt out, want to take the early exit, which I do think is quite interesting. It's interesting because, you know, if you have the means, the financial means, to say I want to add another 100 years to my life.
Starting point is 00:55:32 You're gonna get that treatment potentially upfront. So it's a wad of cash, and then you're gonna feel, well, this investment has cost me a lot of money, so I'm gonna have to protect that investment, and I'm gonna stay inside. Or you get this situation where it's like immortality as a service, where you're paying a subscription fee to get your monthly jab. And if you can't afford the jab any longer, then essentially, you know, oh no, I'm going to lose the benefit of this healthy extended lifestyle.
Starting point is 00:55:51 And I'm going to die next week. And then, what have you got to live for, for a start? I mean, let's put it this way: unless you are rich and have means, you're going to have to fall back on the state pension, if there is one at that point in time. Is the state going to be happy that you're going to live another 50 years and not actually contribute to society, because you're claiming a pension off them? Or are they going to force you to work, because your retirement age is now 130 and not 65 or 75?
Starting point is 00:56:20 You know, and if that's the kind of life that you're going to look forward to, then again, I'm booking a flight to Dignitas. Before you exit, exactly, you've got to go: off you go, you missed your payment this month, off you go, see you later, on you go through this tunnel. What are your thoughts on whether or not we're going to have a superintelligent artificial general intelligence within the next 100 years, or by 2100?
Starting point is 00:56:58 I would put the lower odds on that one, and I know I'm probably in the minority of people that think in the futures space. I'm not convinced that consciousness is an emergent phenomenon that we can recreate. I know, once again, that doesn't put me necessarily with the majority of philosophers out there, but then again I don't necessarily respect the majority of contemporary philosophers around these days, for better or for worse. And you get in trouble for saying such things. I think it's quite a huge assumption to believe that we are able to recreate that jump
Starting point is 00:57:22 when we're not able to understand it. And we haven't seemed to come any closer to solving even the soft problems of consciousness, let alone the hard ones. Do you need consciousness to have general intelligence? Yeah, pretty much. Eventually you would, because the thing with general intelligence is it would have to act like us, so it would have to have a will; it wouldn't just be a program.
Starting point is 00:57:42 So you could have a very, very complicated intelligence that's smarter than us in many, many domain functions. But in order for it to become a general intelligence, that requires a will, which means it would need to have a will to do good, to do bad, or to change the direction it had been programmed in. In order for it to become general, it has to act independently from how it was programmed. So that goes way beyond complexity, or chaos theory, into having something else. That is what real general intelligence is. To have domain-specific general intelligence,
Starting point is 00:58:11 I think we're already there. If you pick any particular human function, we can program something to do it better, faster, smarter. You know, that's the way it works. But at the same time, until that intelligence is able to coalesce and to actually direct itself, to choose to do things, which we don't even really understand how we do ourselves (we sort of have vague ideas, but not proper ideas), we're not going to make that leap. So whether
Starting point is 00:58:37 we make that leap or not is about as plausible as getting into the really big questions of whether there's a God, or gods, or not, right? I mean, it's a very, very big assumption. These are the basic assumptions and questions of philosophy that the ancient Greeks were debating, and we haven't really got much further than running around in circles over the last few thousand years. I don't know if you agree, Theo. So I'd say there's a possibility, but I'd put it at pretty low odds. Yeah, same. I'm in the same camp regarding intelligence and the fact that AGI requires will, choice
Starting point is 00:59:14 and spontaneity and creativity as well. Like real creativity, not programmed creativity. Beyond programming, right? Yeah, exactly. And we're just not there yet. I don't think we'll ever see just one all-encompassing Skynet either. I mean, there are so many initiatives out there racing to create the first one anyway. What we're actually going to see is probably a few, if there is ever an
Starting point is 00:59:40 emergent AGI: certainly domain-specific AGIs, or ones that coalesce, like Bronwyn says, into one that has specific elements of other domains. There are going to be several. I mean, you look at Facebook, which had M before; Google's AI assistant; you've got OpenAI doing stuff with GPT-3 and things like that; Alexa, Siri. They're all shades of something that's very weak at the moment. It's the same with Tesla's cars. That's a domain-specific intelligence that will get you from A to B as safely as possible, but it's not going to do your math homework, complete a thesis, or draw a picture. But we will, I think, see shades of AGI, and there will be more than one. There's not going to be a Skynet
Starting point is 01:00:32 that rules everything. There will be more than one. And if there ever were a real AGI, one or more, to emerge by 2100, that's when I think things might get interesting. That's when Nick Bostrom needs to be raised from wherever he is buried in Oxford in 2100, like: look, Nick, we really need you back. So neither of you believe that consciousness comes along for the ride with information processing, that it's not just a case of scaling up the amount of information processing? There is something else in there,
Starting point is 01:01:09 some sort of special something that layers on top. Yeah, I mean, it's something we don't understand. And if we don't understand it, we can't program it. So to think that it magically appears, that it emerges without being programmed into something that has been programmed and built by us from scratch, that requires a leap of faith, or basically
Starting point is 01:01:29 a belief in something that you cannot prove. Which is basically the same thing as saying that you believe in a God, right? That's believing in something that cannot be proved. Because we don't know what consciousness is, we can only program what we know how to program. We can get things to do things faster than us, smarter than us; we can combine functions. But to imagine that technology will do something that it hasn't actually been built to do, I think, puts probably too much faith in our own ability, right? That is a big logical
Starting point is 01:01:56 leap, right, to take it from what we're doing at the moment, which is superhuman, to superintelligent. Yeah. So that's not to say that machines won't be smarter than us, or able to make better decisions than us, but to actually think of them having a general intelligence is quite a specific claim, and I'm not convinced; I don't think we know enough to program that. It's like alchemy, right? You can think that you can combine all the elements,
Starting point is 01:02:23 but it doesn't spontaneously turn into gold; you're kind of missing that last step, even when you've got everything together. It's almost like a cargo cult, right? Sort of: build it and it will come. What do you think? Of course, I know computer scientists are going to disparage us hugely for this. I am destroying their entire industry. Yeah, exactly. What are your thoughts on whether or not the human race reaches its full civilizational potential?
Starting point is 01:02:54 I'm big into existential risk. And if superintelligent AGI, a misaligned one, is off the table, then Stuart Russell, with Human Compatible, doesn't need to worry; Toby Ord doesn't need to worry, at least for a little bit of his work; and Nick Bostrom wrote Superintelligence basically just as a fiction book for us all to have a look at. If that's potentially out of the window, at least with regards to risk, what is your confidence that we end up, let's say,
Starting point is 01:03:20 colonizing the galaxy before we wipe our race out? Do you think it's going to happen? Well, I suppose that depends on how deep you mean by colonization. I think we're going to keep on pushing until we do get to Mars and get to the moon. I'm not sure how happy we'll be there, because as pessimistic as we are about the Earth's own prospects, I still think it's nicer to live in Siberia than it would be to live on Mars, from the pictures I've seen anyway. So, you know, how happy we will be there, and whether it turns into a sort of thriving community, is of course a different question.
Starting point is 01:03:52 And a lot of the reason why people are pushing for interplanetary settlement is worry over Earth's sustainability. But if our problems change from having too many people to having too few people before we actually get those colonies started, that becomes a very, very different question. I think we have to bear that in mind. And it looks like our population curve is going to start bending over well sooner than we would have initially thought. Then, of course, you're not meeting your population replacement rates, and that becomes quite a different existential challenge for humanity when you look at far enough timelines, because if you're talking about interstellar
Starting point is 01:04:27 expeditions, you're talking about many, many generations. And if over that time your population has shrunk, those whole challenges change. I think there are bigger and more realistic risks to our flourishing, in fact risks that are almost invisible, as opposed to general intelligence, which is hypothetical. It is something we can consider. It's a possibility; I wouldn't rule it out,
Starting point is 01:04:50 but it's not inevitable. The way I see it right now, I think it is almost inevitable that we are going to embark, in fact we already have, on intelligent design, which is also written about in the book, in Craig's chapter, with the sort of "Homo" something; I think he made that word up himself, so I can't pronounce it. You just looked in a Latin
Starting point is 01:05:12 dictionary and picked it out. The whole thing of intelligent design is happening, and that is going to accelerate our fulfillment as a species, or accelerate our demise, depending on how we deal with that post-human transition. And that, I think, is something I'm much more interested in than AGI as a separate entity: the combining of humanity with technology, from a biological perspective and from a hardware perspective. All of those technologies are converging at the same time.
Starting point is 01:05:38 And I'm fairly certain we will either augment our own intelligence to the point that we become a sort of human general intelligence before we actually create an artificial general intelligence, and that's just as dangerous. So I know, once again, a lot of people would disagree with that: a race in which humans become too smart for their own good, or too smart for what is currently the human species. We actually transcend, sort of level up, as a species. I think we'll probably do that before artificial
Starting point is 01:06:10 intelligence spontaneously generates its own consciousness and will. Theo, what do you reckon? Are we going to make it? Probably not. Probably taken out by another pandemic, probably. You think? The way things are headed.
Starting point is 01:06:23 Yeah, yeah, or another war. I mean, that's actually a real risk, right, before we even get to space and consciousness. I mean, if you're talking about the flourishing of humanity across the solar system and things like that, like you say, we'll probably get to a stage where we will colonize, or certainly set foot and set something up, on Mars and the moon
Starting point is 01:06:44 as the most, I guess, easily habitable options. Getting further is going to take us a hell of a long time, I think, given the resources and the constraints, and again, all the risks associated with space travel; we're just not there yet with other means of space travel, or certainly safer and faster means as well. So, you know, we have a long way to go, certainly maybe 500 or 1,000 years, to try and imagine: are we actually going to make it that far? Or will we cause something, again, whether it's a war on Earth, or even our first interplanetary war, where the Martian colonies think, we don't want to be part of your disgusting mess
Starting point is 01:07:26 anymore, and the Earth people, or the Terrans, say, no, you must come under our rule once more. You've watched too much of The Expanse again, haven't you, Theo? They haven't even got there yet. I mean, the Martian constitution: they've declared independence from Earth, and it's very equitable, it's super woke. Okay? Is that a thing? They've got a constitution for Mars? Yeah, the Martian Constitution is published; you can go look that up. Who wrote that? It puts the South African Constitution to shame. It's very happily written. Who created it?
Starting point is 01:08:01 You know, they were going to send up that whole Mars colony. I know about it from Adriana Marais, who was one of the chosen astronauts. Yeah. Sorry, it was all part of that Mars One thing. I don't think Mars One exists anymore, though. No, it doesn't. No, the crowdfunding failed spectacularly, didn't it. I've been learning a little bit about astropolitics, about the politics of space: who owns
Starting point is 01:08:26 areas of space, who owns the moon, who owns Mars. That's just... No one owns anything; possession, in that case. I think it's first come, first served. First come, first served, right? First come, first served, yeah, exactly. You put your hand on it. We can't claim it like indigenous populations, you know.
Starting point is 01:08:40 Unless there are microbes that say they were here first. Um, bring your own microbes. Given the, how would you say, overbearingly positive nature of most futurism, as far as I get to see it, right, the articles that are written and stuff like that, how are the things that you guys
Starting point is 01:09:05 talk about, how is your narrative, received generally among the futurist community? Because I imagine it must be this hyper-realism, perhaps, where you're saying, what about this? Wait, what about that? I imagine it must feel like someone's coming in and pissing on their party a little bit. Well, I think you've got to understand the different groups of futurists. Academic futurism is hugely pessimistic. They are too pessimistic; that's what I was saying right at the beginning, those views are very binary. Academic futurism is obsessed with climate change and degrowth.
Starting point is 01:09:37 They're all pushing for sort of fully automated luxury communism as a solution to, you know, climate change and sustainability, which is a very European view on the future. I probably shouldn't say that, but that's the perspective I get. It matches very neatly with the EU's development goals; it's very hard to separate academic futurism from the EU's plans for the future. It's sustainability-orientated, and smaller, not larger: so ride a bike, don't take a rocket, right? So that's
Starting point is 01:10:09 one side of futurism, and I do have a degree in futurism, so I kind of understand what I'm talking about there. The other side is, of course, the more techno-futurism realm, which Theo sort of skirts around, which is the tech evangelists and the hugely optimistic ones. But again, they're selling stuff; there's a huge amount of cynicism there. So they might find us a bit annoying,
Starting point is 01:10:30 but only in that we're sort of messing up their marketing plans, not that we're really challenging their ideas. So yeah, I suppose we're somewhere in the middle, and like anyone that's in the middle, you don't tend to be very popular with your peers. But we have found with this particular book that, because we're talking like actual human beings, not trying to talk down to people about these ideas, it's been very well received by people that don't have a formal, or at least a paid, background
Starting point is 01:10:56 in futurism, who don't have a vested monetary interest in pushing a particular view. It's an interesting one. It's an interesting one to think about: the potentials for our future, and how many biases and agendas different people are pushing when they're talking about it. I mean, if there's one thing, the future always comes with an agenda. And you have to see past that, because, and this is what confuses people, the best people at describing the world as it is, from a completely positivist or deterministic perspective, are also generally the best thinkers and the best talkers, which means
Starting point is 01:11:34 they are very persuasive when it comes to slipping in their more normative ideas. And that's where people get the wool pulled over their eyes: we listen to someone we know is right about describing the world as it is, and when they suddenly start leading us by the nose from talking about what is to what should be, we end up being drawn along into the inevitability of their plans, and nobody's making it clear where they stop describing and start prescribing what should happen. Everyone is slightly disingenuous about that. Everyone's selling something: a policy, an election, a product. And you have to sort of separate that out.
Starting point is 01:12:12 Or just a keynote, right? I mean, of course, it's easier to sell keynotes if you're talking about, yeah, technology, exponential curves, AI and all the rest. We have to separate the description and the reality from the sales pitch. And people have to question that. I love that you do get sucked in, though. I love it.
Starting point is 01:12:31 Theo Priestley and Bronwyn Williams, ladies and gentlemen. The Future Starts Now: Expert Insights into the Future of Business, Technology and Society will be linked in the show notes below. Where else should people go if they want to check out your stuff? Well, I'm on LinkedIn or Twitter mostly: @tprstly, with all the vowels removed, because I'm so hip and trendy like every startup these days, or Theo Priestley on LinkedIn. Not too far behind every meme, I suppose.
Starting point is 01:13:11 Yeah, I'm also, whatever is easiest to find, on Twitter or LinkedIn, @bronwynwilliams on both. Come and fight with me about consciousness and artificial general intelligence over there. I'm sure intelligent listeners of this show would have something to say, but that's the point: we invite you to disagree with us. Please disagree with us. Don't take anything we say as gospel. I love it. Guys, thank you so much for today.
