Simple Swedish Podcast - #273 (2/2) - AGI (artificiell generell intelligens)

Episode Date: March 19, 2025

Level: A2-B1. The second part of the episode about AGI, where I talk about what it is, what potential benefits and risks there are, and my own thoughts. Are you interested in the bootcamp with a 9-day language immersion in Sweden? Check it out here!

Transcript
Starting point is 00:00:00 Governments in different countries make decisions that are in the direction Prometheus wants, because they themselves don't know that it's Prometheus who has created an opinion and created this political direction. But different governments and different companies make decisions that are in line with Prometheus's goals. And also, in a positive way, Prometheus helps different organizations and companies to solve complex problems, like finding cures for different diseases, finding ways to cure cancer and such things, how to optimize infrastructure and so on. And in the end Prometheus becomes the world's government, you could say. It controls the whole world. It becomes a world governed by AI. And you don't know if it's positive or negative, because in the story it's not clear whether this is positive or negative. Okay, we can imagine that we want to keep control ourselves. But we can also imagine that if Prometheus is so much, much smarter and manages to solve different problems, then maybe it's better. And that is of course one of the interesting things about AI, or especially AGI. So the story ends that way. And as I said, AGI is Artificial General Intelligence.
Starting point is 00:02:41 Artificial General Intelligence. And that means that it can understand better than any human being, learn better than any human being, do different things better than any human being. Different things, not just one thing. As good as or better than a human. And that's a big difference from a regular AI,
Starting point is 00:03:12 because a regular AI is good at a specific thing we can have an AI that drives a car or we can have an AI that can generate music but artificial intelligence, general intelligence is that an AI that can do all things better as good or better than a human And it can handle new challenges without having to program it for the new challenge. So without having specific programming for a new thing, it can still manage it. It can resolve new problems, be creative, even. And of course there are big, big advantages to that. And that is that it can potentially solve all the problems of humanity.
Starting point is 00:04:29 So, finding cures for diseases, because if it's much, much smarter than a human being, maybe smarter than all people combined, okay, then it's much easier to find solutions to problems such as cures for different diseases, the climate crisis (how do we solve the climate crisis?), or how do we solve different conflicts? War, poverty, drought, lots of different things that are difficult for us people to solve, because we have limited intelligence. There is a limit to our intelligence. But that doesn't apply to a computer. Okay, so. But there are of course many risks as well.
Starting point is 00:05:40 And that is, for example, autonomous weapons. So, if we have an AGI that is made for war, then it will be much more effective at war than any person or group of people. And that is of course very, very dangerous. And a very big thing is this issue of control.
Starting point is 00:06:17 Can we control something that is smarter than us? Something that is smarter than us. That's a difficult question to think about. Like, if it's smarter than me, how can I control it? I think about it like this: imagine that a five-year-old is trying to control you. So, a five-year-old child. Can a five-year-old child control you? Well, it will be difficult.
Starting point is 00:06:55 So how can we control something that is much, much smarter than we. And if we can't really control this AGI, then it's a huge risk. Because since it's smarter than me, I don't have a chance of getting what I want So for example It can be like... It can be like... It's hard to know, to think completely Because think about it like this Think that you have a self driving car
Starting point is 00:08:03 An AGI car and you say ok, take me to the airport as fast as you can ok, but as fast as you can it can mean if it is a person who is standing in the way and the car has to choose between braking or overtaking the person If I have said, as fast as you can, then maybe it just overtakes the person just because the goal is to take me to the airport as soon as possible. So it's very important that it has a goal, the same goal as we have.
Starting point is 00:08:54 Because my goal wasn't just to get me to the airport as soon as possible, but I don't want to kill anyone on the way. But maybe that's not what the AGI cares about. So there are risks, and one risk of course is that if the AGI is smarter than all people, okay, what do we need people for? So that means that we need to have a whole new system for our society, because if all the work can be done better better than an AGI, okay, then there is no job anymore. But, so we need a whole new system for society. And, yes, I personally, after reading that book,
Starting point is 00:10:02 I have thought about this a lot. And it's crazy that it will probably come very soon. So I think it's important that we all are aware of this. Because I'm thinking, like, only this with democracy, so if we have an AGI that can, if I can sit with the AGI and say for example, okay, I want this political party to win the election. And I say, okay, AGI, do this, do this, do this so that this political party wins. AG-IN can create a plan for this,
Starting point is 00:11:08 which is 100 times better than a human plan, and maybe start creating thousands of groups online, create media content, create videos, create web... whole websites, their content creates videos, creates whole websites and can even affect people individually even call people and fake that you are a person they know there are a lot of things that can happen when we have AGI that we don't have now with just regular AI So this party wins. Chachipiti can't call thousands of people and fake people they know based on data from their social media and all data is online and if someone is extremely smart there is always a way to find that data, summarize, analyze and then act. So there are very very crazy things that can happen. But at the same time, there is also, any other tool, any other that we have had through history. So there is a lot of optimism, but there are also very many important things to be aware of,
Starting point is 00:13:34 many big, big risks. And the truth is that no one knows what will happen and how it will happen. And it's difficult to know what will happen if we have something smarter than all of us together. So yes, we live in an exciting time so we get to see how it goes with all this. So I hope you have enjoyed this episode. So I'll see you again soon in... See you or hear you again soon. In the next episode. Take care.
