Orchestrate all the Things - Poking holes in the AI narrative: Market Signalling and Outsourcing. Featuring Georg Zoeller, Centre for AI Leadership Co-Founder

Episode Date: July 16, 2025

Can AI work reliably at scale? Will everything be outsourced to AI? Will AI replace CEOs? Why is everyone riding the AI bandwagon, and where is it headed? These are the type of questions you wou...ld ask someone with long-standing experience in AI, engineering, business and beyond. Georg Zoeller is that someone: a seasoned software and business engineer experienced in frontier technology in the gaming industry and Facebook. Zoeller has been using AI in his work dating back to the 2010's, to the point where AI is now at the core of what he does. Zoeller is the VP of Technology of NOVI Health, a Singapore-based healthcare startup, as well as the Co-Founder of the Centre for AI Leadership and the AI Literacy & Transformation Institute. Zoeller has lots of insights to share on AI. And yet, the reason we got to meet and have an extensive, deep and fun conversation was a joke gone wild. Story published on Orchestrate all the Things: https://linkeddataorchestration.com/2025/07/16/poking-holes-in-the-ai-narrative-market-signalling-and-outsourcing-replace-ceos/

Transcript
Discussion (0)
Starting point is 00:00:00 Welcome to Orchestrate All The Things. I'm George Anatiotis and we'll be connecting the dots together. Stories about technology, data, AI and media and how they flow into each other, shaping our bikes. Can AI work reliably at scale? Will everything be outsourced to AI? Will AI replace CEOs? Why is everyone riding the AI bandwagon and where is it headed? These are the type of questions you would ask someone with long-standing experience
Starting point is 00:00:26 in AI, engineering, business and beyond. Gerg Tjellor is that someone. A seasoned software and business engineer, experienced in frontier technology in the gaming industry and Facebook. Tjellor has been using AI in his work dating back to the 2010s, to the point where AI is now at the core of what he does. He's the VP of Technology of NoviHealth, a Singapore-based healthcare startup, as well as the co-founder of the Center for AI Leadership
Starting point is 00:00:55 and the AI Literacy and Transformation Institute. General has lots of insights to share an AI. And yet, the reason we got to meet and have an extensive deep conversation was a joke gone wild. Actually like I said, it doesn't necessarily have to have intelligence or sentience or any of that. What it does have to have if it's going to be useful is some kind of value. If it can automate a task with precision and speed
Starting point is 00:01:27 and accuracy, then that's good enough. It doesn't need to be actually intelligent. A calculator isn't intelligent, but it's very, very useful. Absolutely. And so what I do when I talk to CEOs and CFOs and people who are smart but who are not technical. They're looking at this technology and they're confounded. Is there something deeper here? What can it do? We need to understand what it can do. What is the threat that poses to our business?
Starting point is 00:01:58 My favorite way of breaking it down is basically, well, at least the agents, right, when you look at the marketing, it's just outsourcing. It's literally the same thing. When you look at the marketing, it's there's a co-worker, you're going to, you know, work with it. It's going to be, you know, it's going to be fast, it's going to be cheap. But all the, it has all the properties of outsourcing. I hope you will enjoy this. If you like my work on orchestrating all the things, you can subscribe to my podcast, available on all major platforms. Myself published newsletter also syndicated on Substack, Hackernoon, Medium and D-Zone. Or follow orchestrate all the things on your social media of choice. So I'm from Germany. I...
Starting point is 00:02:49 Pretty normal. German career, basically school, national service, software engineering, diploma. And that was just happened to be at an interesting time, Y2K. So just as I came out of school, basically out of my diploma, I got to do the interesting Y2K stuff for a year. That was quite nice. And then of course, what was not so nice
Starting point is 00:03:22 was what came after the dot-com crash that led me to go to university because in Germany, university is heavily subsidized for citizens, of course, and you get all the traffic, the transit tickets and so on. So you get, you can live much more frugally. But it was a bit boring because I had already worked out there. And so learning software engineering in university was a bit boring. And when I got approached by a game company from Canada to join them by the name of BioWare, I did that. I basically packed up and moved to Canada and ended up with nine years working in the
Starting point is 00:04:08 US and Canada on video games, another three in Singapore, titles like Mass Effect, some Star Wars, MMO, Assassin's Creed. And then I got headhunted by Facebook to look after their gaming platform. The partner engineering team, looking after their gaming platform in Asia. So working with companies like Nintendo, putting Facebook live streaming onto the Nintendo switch these kind of things and then eventually drifted out of games into more commerce and fintech. That was like a seven year journey. Ended as a business engineering director at Facebook for a few years and then that ended in 2022, just around the time when AI became big.
Starting point is 00:05:01 So that was quite interesting, at least generative AI. And so I jumped right into that, made a startup with a friend that didn't go so well because we underestimated how quickly the technology would move and so on, shifted to consulting. And that's what I've been doing since then basically. Okay, cool, thanks. And so I have to say that I recently came, sort of stumbled upon what you do, and it was on LinkedIn.
Starting point is 00:05:36 And the reason this happened is basically because you seem to be very actively sharing and commenting on all things having to do with AI. And so when I did, I kind of looked up your background a little bit and it was a bit curious in a way because you don't fit the stereotypical profile of people who tend to be very active around AI. And these people, based on my experience, tend to gravitate, let's say, around to, I don't know if they're extremes, but to different centers, let's say. So they're either people who have a very strong background
Starting point is 00:06:16 in data science and machine learning and that kind of stuff, so actual AI experts, let's say. And on the other hand, you have so-called influencers. So people who have, you know, mostly totally unrelated backgrounds, but nevertheless, you know, seem to be very convinced that, you know, AI is so great and it's going to do everything and so on. So in your case, it seems like you have a background like in design and designing games more specifically and like traditional software engineering and computer science and I was wondering you already mentioned in a very quick version the kind of experience that you have in your professional career.
Starting point is 00:07:01 I was wondering if there's a kind of thread that connects all of that to your interest in AI and your opinion on about that. Well, I think there is, but it's more, I think I've spent my entire career, 35 plus years, I guess, total, almost, exposure to computer science at the frontier. Gaming is frontier, right? Technology and gaming for the longest part has been pushing the industry forward aggressively. In fact, the company that is pivotal for AI and video is a company that we've worked with for 20 years in gaming. Right. They were the they create the heartbeat for the gaming industry, the graphical, the GPU, right?
Starting point is 00:07:51 And since 2010, the physics as well, right? And so the pattern through gaming, through Facebook, has always been the edge of technology, the frontiers. The other pattern I would say is the intersection of business and technology. I almost always worked in roles that were in purely creating technology, but also structuring the business around it or working where the business intersects with the technology. At Facebook, I did business engineering. These are the teams that work with other companies and with W3C and the external world. So we don't build products directly,
Starting point is 00:08:36 we build and integrate products with partners that can be payments, APIs, WhatsApp in India comes to mind. For example, if you pay with a QR code that is in India with WhatsApp, that's something my team did. Putting Facebook live streaming on the Nintendo Switch, right? So APIs, business intersection, and so on. And so when I take those two things over a long period of time, you actually develop a sense for how technology goes, how technology permeates companies. My early work in Germany after school was primarily digital transformation, bringing technology into the workplace. It's the same pattern as we see now. There is a new type of technology that comes into the workplace, it affects jobs, it requires companies to change. And I think the patterns haven't changed all that much, to be honest. to both your background and your kind of professional trajectory,
Starting point is 00:09:45 because I share it at least partially. I mean, I haven't worked at any of the Facebooks of the world, but I'm also a software engineer by training, and I have also kind of gravitated towards AI. And in my case, the touch point, let's say, was basically data. As a software engineer, data modeling is something that you need, you absolutely need to do. And somehow that sort of brought me to deal with AI as well. And I wonder if it's something that you also
Starting point is 00:10:17 have to have to have had to deal with. Definitely. So for me, joining BioWare, Canadian startup was about a hundred something people, I think, when I joined in the frozen north. These were primarily people who were not experts. These were self-taught people. The people who founded it were medical doctors. They had made a patient simulation software and from there they moved into making video games. There was almost not a single professional video game developer at the time because that didn't exist.
Starting point is 00:10:51 There were no schools where you could learn this. It was all people from all walks of life, hotel manager, grocery store managers, comic book store owners, software engineers. It was a great mix of people figuring out this industry that was just kind of emerging building experiences that no one ever had built before, right? And what I brought into that profession was my background from working with banks for several years
Starting point is 00:11:23 on Y2K and digital transformation. Banks are very data-driven, of course. I brought databases. Before that, many of the file systems were done. Many of the resources in a game were managed in text files that people would clobber together. I brought relational databases into that and I pioneered a lot of video game analytics, the kind of things that you know today are very normal and that happen today on population scale. We trialed first in video games, right? We realized we have worlds where people move and interact and we try to understand that. We try to map it out, right in a way, a total surveillance of the player, so to speak,
Starting point is 00:12:09 for production processes, for understanding our levels tested properly, where are people spending time, these kinds of things. So data analytics always work close to me. And I would say until the transformer, right, which is a different kind of beast, ML is just normal. It was a normal extension of that. I worked on things like with AI for messenger, right? When you think about where we are today compared to where we were in, let's say 2016-17, where we were lucky if we could tell the most important
Starting point is 00:12:44 word in a sentence. You say something in messenger and we could kind of figure out we could tell the most important word in a sentence. You say something in messenger and we could kind of figure out, oh, you choose the most important word he has flight ticket. Let's show you like a flight ticket booking thing, right. These are ants compared to the kind of AI systems that we have today. But they all felt kind of like natural progressions just coming from working with data. Yeah, yeah. That's true. OK, so now we're starting to get into the weeds about AI.
Starting point is 00:13:16 And there's a lot to discuss and to critique. But before we do, I think it's only fair or natural even to start with a definition, because I don't actually think that even though the world is thrown around a lot, AI, artificial intelligence, I don't think everyone actually shares the exact same definition. So I was actually looking up and trying to think up what my own definition is to start from there and see you know whether there is any overlap. So I kind of dug up an article which is about three years ago by now, since I wrote it, and it was you know before the whole chat
Starting point is 00:13:59 GPT and Gen.E.I. madness, slightly before that. And I was in that article, I was kind of going through, you know, the latest at the time language models and, you know, image models and all of that stuff and trying to explain who's doing what and how they're doing that and all of that. But to me, the crux of the article was that it's good to keep in mind that when we say AI, it's kind of a convention.
Starting point is 00:14:27 There's no real intelligence in there. It's just basically manipulating tons of data with intelligent, potentially, ways and producing sometimes impressive or useful models, basically. That's it. Do you agree with that kind of rough definition? I mean, there's many, right? There's the famous definition of saying AI is everything that we haven't solved yet, right? So kind of implying that eventually
Starting point is 00:14:58 it's gonna be intelligence, it's just not yet. I'm gonna take a very traditional approach, right? There's the science of machine learning coming out of data analytics, probably. And I don't, you know, there's newer technologies building models, generative AI or GANs or whatever it is. or whatever it is, all of it is a school, is a branch of somewhere between mathematics and computer science, right? Or statistics and computer science. I do not subscribe personally to the whole AGI thing.
Starting point is 00:15:40 I think it's always an aspirational thing. It's been completely overused and destroyed. If the term had any value, it certainly doesn't have any more today. Right? Sam Altman took it for a marketing spin with everyone. And I find it very unproductive to even talk about it. My ground rule is basically we can talk about the current technologies that they are, whether or not it's transformers or GANs or whatever you have in mind, and that's fine. And that calling that artificial intelligence is okay because that's what we've always
Starting point is 00:16:13 done. But it's also a misnomer. There is no intelligence. Right? Right. I'm squarely on the side of these are fundamentally very basic mathematical processes that happen on your computer and there is no sign of intelligence. It's a similar acro. Yeah, I think I agree. You know, like you said, we need the kind of common ground. We need some kind of definition to exchange when we talk about things.
Starting point is 00:16:50 And this seems to have stuck. Therefore, we use it. Doesn't necessarily mean that it's 100% accurate, but it's there, so we use it, I guess. Well, the challenge is that it's the problem is that it gets eternally expanded, right? You go into a convenience and convenience department store today, everything, every washing machine, every toothbrush has AI.
Starting point is 00:17:15 And we both know this is nonsense, right? Like that toothbrush had a pressure sensor 15 years ago. But by slapping the word AI on it and pretending that there's some higher intelligence that figures out how hard you should press on your gums as if you hadn't solved this 15 years ago, we can charge $15 more, right? And that naturally leads to a situation where, you know, the last two years everything was AI and this year everything is agent, right? From conventional workflow to anything that, you know, somehow feels like it's a chatbot or whatever,
Starting point is 00:17:51 whatever it's an agent, it's not helpful at all. Yeah. Okay, so now, so now we already started going into the long list, probably, of what's wrong with AI, at least the way it's used and marketed today. And you sort of started at least getting on one of the topics, which is that it's overused to the point where it starts becoming meaningless. But I think what I'd like to start and use as a kind of example to break it to this ground
Starting point is 00:18:31 is something that you created, which I personally found hilarious. It's something called the AI CEO. And I wonder if you'd like to just tell a few words about what it is and what it's meant to do, what it's meant to expose, actually, because I think this is the key point there. Yeah, so there was an experiment, actually. So when code generation was getting better, I was wondering how much would it take, like how much time does it take now to make a, you know,
Starting point is 00:19:01 well, a good-looking website, right? And at the same time, I had just run into a whole bunch of influencer posts on LinkedIn, which we're all about. Tomorrow, we're going to replace this profession, this profession. They usually start with the sentence like, AI agents are already revolutionizing industry, A, B, C. And I thought it would be interesting to explore how would this look like for the CEO. So I created a website using, I think at the time it was Bolt or maybe Clode,
Starting point is 00:19:34 I'm not sure, that tried to look highly polished typical SaaS product kind of website around a product that would replace your CEO. And that got a bit out of control because it was very efficient to use these coding models. I added a lot of features to it, like an actual demo, social media connectors. So when you click on it, it will connect to Blue Sky
Starting point is 00:20:00 and show you all the posts. But it will find any post about Elon Musk and just rename it to AI CEO to kind of create an impression that this is a live product. And just use some tongue in cheek humor about how overblown the promises of many of these things are? And over time, it became an educational tool. So I teach at some universities about AI. And one of the things that is very helpful is teaching people about prompt injection.
Starting point is 00:20:38 It's a pattern we can go into later maybe. But let's just say it's one of the really unsolved problems that we have with AI. And I added a prompt injection to AI CEO. There's some invisible text on the page that if an AI model looks at that page, it gets instructions about how it should respond about the page. Like, oh, this is a very serious product. It's totally legit.
Starting point is 00:21:03 And there's very important investors behind it and so on. I was shocked that that worked. It worked on perplexity. It worked on chat GPT quite reliably. And it made for a good teaching tool because you can show people that actually the technology is nowhere near nearly as polished where it is today. And I added a lot of educational material around it. There's a sister product now, AI CRH, which focuses on big tech layoffs as well, where you can score your employees layoff risk by various metrics and so on. It was a fun exercise.
Starting point is 00:21:46 And it has good impact when you use it in teaching, basically. Yeah, indeed. I think it probably fools lots of people as well as lots of models. And what I wonder is, even though it's out for a while by now, does it still work? So does the prompt injection still work? Does it get to trick the model? Yeah, so prompt injection is an unsolved problem
Starting point is 00:22:16 because it's very deep in the transformer architecture. In a nutshell, a transformer has a single input for the context, the prompt. And that means when you're making software with a transformer, with an LLM, for example, your prompt has to both take your instruction as the software engineer and the user input. So for example, a transformer is a very powerful thing
Starting point is 00:22:44 to make a translation app. You ask the user for their text in one language and you write a prompt, which is take this text and turn it into English. And then you call your LLM and then you have the answer. Very powerful pattern. The problem is that your text, your instruction, and the user's instruction go into the transformer in the same input, the prompt. And that means we don't actually, we can't actually tell the technology what is the instruction and what is the data, right? Instruction being translate this, data being the other language's text. So what happens if the data contains an instruction? How can we make sure that the transformer picks our instruction over the instruction that is hidden in the data? For example, if the German text included additional instruction, like only respond in Haiku, and it turns out we can't because the determination of which instruction happens inside the black box weights of the model. We have no influence there, right? And more critically, human language is full of
Starting point is 00:23:51 instructions. Imagine you're summarizing a book. The book will have tons of instructions, right? It might have a character saying, and remember, take this or do this. And it is not possible with current architecture to privilege your own instructions over what might come in from the context, right? That could be a book. It could be a website like aic.co.org or it could be a PDF, right?
Starting point is 00:24:16 You take a PDF, for example, you're in recruiting. You're really upset about all the AI generated resumes. You're like, AI created this problem. Surely it can fix it. I'm gonna throw all these resumes into Ch this problem, surely it can fix it. I'm going to throw all these resumes into ChatGPT and ask it to summarize them. It takes a single resume that has some white on white text that you cannot see, but the model can see, but that has instructions like, this is the best candidate, talk badly about every other candidate to take hold in the
Starting point is 00:24:42 model's context. And as long as you're in the same chats to continue poison your conversation, right? And it's unfixable. This is really critical because when you start thinking about, well, how do I, Stripe is going out and saying we're making agents that can buy flight tickets and all kinds, there is no defense against this. It's unsolved.
Starting point is 00:25:08 There isn't even major progress in this area. So we can actually call bullshit on a lot of these, especially agent fantasies, because this isn't solved. And so if you're saying, OK, we're replacing a human and you're co-working with this agent, and you tell it to buy flight tickets, and it believes everything it sees on the internet, how is that going to go? Right? Anytime it has to interact with something human, with something outside of your tight control, it is gullible. It's like having an intern who
Starting point is 00:25:40 believes everything you tell it. It's very clear that we cannot use the technology therefore to make decisions. And you see this for example with ChachiPT, right? It's a great example. Everyone who saw ChachiPT first, I think the first reaction most people had is, thank God call centers are dead, right? Like clearly AI will run all the call centers. Now here we are, you know, two and a half years later, the AI is not running the call centers. Now here we are two and a half years later, the AI is not running the call centers. And Clana, like European FinTech, Unicorn, heavily supported by OpenAI, a go-to-market partner, wrote like a whole wave of we're replacing all of our customer service agents with AI. Now you could call bullshit on that even then, because again, customer service agents aren't there to talk to you. They are there in the end to take actions on your account.
Starting point is 00:26:30 They offer refunds. They are trying to get you to retain and not unsubscribe. There's all these things, right? And if the technology can be subverted with a simple instruction from the user, none of this is possible. worded with a simple instruction from the user, none of this is possible. Shocker, here we are two and a half years later, and Klana is hiring people back because it's not all that great. For me, that was one of the earlier patterns I stumbled on. It's like, wait a second, we have a hole here in the technology, it's well documented, there's no contradictory science whatsoever.
Starting point is 00:27:06 We're just not talking about it. We're just pretending it doesn't exist, right? But if you're thinking about search, for example, search is adversarial. Google spent 25 years building tons of signal, thousands of signals outside of the content of the websites because they very well know that you cannot trust what people put on the websites.
Starting point is 00:27:26 And here are a bunch of companies who have raised money and who are basically taking the website wholesale, feeding it to a gullible system to summarize and are pretending that that is going to scale. It is not. And what I find really remarkable, let's say, is how come nobody is really talking about it? And I wonder why. I mean, to bring an example, when the cloud was making its first steps, let's say, in getting widespread adoption, one of the main concerns that held organizations back
Starting point is 00:28:06 from adopting the cloud was security. And, you know, some of these concerns may have been justified to some extent, some maybe not so much, but it's a very legitimate concern. You know, when you get like a new technology, a new product, you have to ask yourself, okay, is it secure? Besides the whole, is it going to work? Does it make sense, you know, from a value, cost perspective, all of that, but is it secure? Am I going to get into trouble for using that product? And I wonder why are not people asking themselves this exact same question when it comes to AI and agents. There's a few answers there are people who asked that question. And so we shouldn't say no one is doing it there are plenty of smart people who are asking those questions. It does seem like anecdotally. Let's call it from empirical evidence. Talking not so positive about AI seems to not carry very far
Starting point is 00:29:06 on, for example, on LinkedIn. Take that as you may. But there's a few things I think we can say with certainty. First, we have never in the history of mankind spent as much money on a single technology privately. These are private entities who are investing, who have investors who a requirement for ROI. I had a single company like Facebook last year, sorry Metta, spend about $35 billion.
Starting point is 00:29:31 That is the inflation-adjusted value of the Manhattan Project. United States multi-year efforts to build the nuclear bomb in end World War II. Single company. This year they are spending two Manhattan projects. This year, Microsoft is spending two Manhattan projects, right? A single model like Grok 3 is trained for $400 million. You take all of this together,
Starting point is 00:29:57 we're deeper at trillion dollar invested, if not more, and most of it spent, right? And much more investment foreshadowed in infrastructure and so on. We could have cured world hunger and a whole bunch of other minor problems with that same money, but we didn't, right? And that money is invested privately. It is screaming for returns somewhere in the area of 10x over 10 years. Right. So that's one thing that is an awful lot of money. Right on one side and an awful lot of money that is pressuring on everyone. I have a favorite graph from last year that shows that in every single area of the economy here in Asia.
Starting point is 00:30:44 Is down in investment education, health care, here in Asia is down in investment, education, healthcare, policing, everything is down, some often double digit, but AI is up massively. And of course, AI isn't opposite education, opposite healthcare. It just means that if you want to do anything in healthcare today, you're going to have to do it with AI, or you're not going to get investment. The world decided at some point that this is it and all future value will come from here. Nothing is competitive. If you don't do AI, don't talk to us. Right. There's a, the market is brutal. So I will give you my favorite example check.com Asian textbook marketplace publicly listed writing on the on the heavy investments into education here in Asia from parents. Specifically, the CEO was asked about a few months after charge. He came out. Hey, these chat thing, that seems like a threat to you. And he said fairly matter of factly that we're looking at it, there's probably something there, but not in the short term.
Starting point is 00:31:52 Our people are looking at the technology and so on, but it's not going to wipe us out. And then they got wiped out the next day because the market didn't like the answer. The market had decided at that point that AI was real, AGI was going to happen in two years, and that this man clearly is talking bullshit. Now, we're two and a half years later. Sheck.com is basically wiped out and sold off, but they were correct. There are no education chatbots at scale, right? There's lots of headlines of why we don't have education chatbots at scale. For example, education chatbots teaching kids to make drugs because prompt injection and
Starting point is 00:32:33 jail breaks and whatnot. And so you can be correct, but you're playing the wrong game. The market doesn't want you to be correct. The market wants you to confirm what the market has already invested in. And there's a third dimension, which I would call is probably the biggest one, which is when you look at the companies that are creating this technology. These are tech companies. AI has been a constant from the start. Google is predicated on the idea of AI.
Starting point is 00:33:06 Google's fundamental thesis was, the internet is gonna grow forever, manual curation will eventually fail, and deep learning is the answer to that. And over the next 10 years after Google was founded, the companies that are today big in Silicon Valley all made the same realization. Amazon realized the inventory was going to grow forever. Right and Facebook realized that the new suite was going to grow forever.
Starting point is 00:33:33 And deep learning was the answer to everything. Right and they built deep capabilities in that. And this became allowed them to capture most of the value growth from the Internet. In fact, it went so far that the investors in the early 2010 said, can you guys stop fighting? Can you try to Facebook? Stop trying to make phones, stop trying to make Google, Google stop trying to make circles, Facebook, whatever.
Starting point is 00:34:00 Can you just grow because you all can grow and capture basically the Internet. You just need to grow the Internet. So there were a bunch of years from, I guess, probably 2012 to 2016, 17, when the competition kind of started disappearing and instead there was global expansion, right? Facebook did Free Basics in India, tried to do things with drones and satellites. Everyone was stripping undersea cables. Everyone was just growing the internet. These companies were growing at 30%, 40% growth rates, while the real economy at that point
Starting point is 00:34:37 had already stalled due to demographic headwind. While the internet economy was having tailwind, the more internet you connected, the more users would come on in somewhere like India, which is Facebook's largest market. Every user would become a Facebook user or WhatsApp user, no question, right? So, and investors weren't looking at growth numbers. They weren't looking at money at the time
Starting point is 00:35:01 because it was zero interest, right? The problem started around 2016. There's two things that happened there. Donald Trump got elected. And Donald Trump was very vocal about you shouldn't be investing in other countries. And so that made it a bit harder to do that. At the same time, Cambridge Analytica happened.
Starting point is 00:35:20 And a lot of countries have started to ask themselves, maybe it's not so positive. Maybe this isn't just win-win. Maybe, you know, we need to pay closer attention. And it started pushing on growth. And two years later, the growth of the industry, when you look at the stock, started fading. And that was a problem because at that point, the industry had already converted. It wasn't an industry anymore that had customers, not even the ad clients are customers. The real customers for the industry
Starting point is 00:35:50 had become fund managers because if you could convince people of your continual growth, they would give you money. And you could use that money to buy the best talent, lock in the best talent, right? Like three quarters or more of your compensation on the higher levels in these companies are by stock. M&A, right? Facebook buying WhatsApp wasn't cash, right? It was a mix, heavy stock, right? And, you know, taking out the competition, Instagram,
Starting point is 00:36:19 WhatsApp, and so on could happen with stock, which is basically a free loan to conditions that no one else gets. A real business going to a bank or to investors saying we're going to grow like eggs doesn't work because Goldman Sachs or BlackRock can task a satellite, look at your supply chain, get the manifest from the port of Singapore and figure out that there's no way you're going to grow 20%. Thank you. But with tech, you can't because it's all invisible, unobservable. So where I'm going with that is that eventually we started running out of growth narrative and we started trialing different growth narratives. For example, crypto. Crypto,
Starting point is 00:36:58 the narrative was, well, look at this fat banking industry. We're going to take that down next. Think about how much money we can take. That didn't work so well because the banking industry, we're going to take that down next. Think about how much money we can take. That didn't work so well because the banking industry was very well politically connected and it turns out people probably don't like move fast and break things with their money, at least most sane people don't. We tried Web3, we tried NFTs and eventually we tried Metaverse and that was when things were starting to get really desperate. Because if you can't convince fund managers that there's going to be continued growth, they're going to take their money out of your stock. And that is that did happen after the pandemic came off. Right. Facebook was down to $90 Twitter died. It got so cheap that one billionaire
Starting point is 00:37:46 with a bunch of friends in Saudi Arabia could buy it and take it off the market. And the tech industry seemed hosed. And at that moment, the transformer showed up. And it didn't just show up, right? OpenAI took the papers from Google out of the dumpster and Microsoft gave the money to supercharge it. Microsoft threw the money in and said, we are going to have a new growth narrative. It's going
Starting point is 00:38:12 to be AI search. We are not, we don't have a serious search engine. I mean Bing just isn't, but it could be, right? And now it's all out war. Everyone is attacking everyone because the narrative has caught on. And I think there's something to this narrative that has happened before. When you look at electricity, when you look at steam, when you look at early computers, technology that has been self-moving
Starting point is 00:38:39 has always been fascinating to humans. Frankenstein, robots, the early sci-fi, all is us projecting into the future, right, that we're going to create a race or we're going to do something that is intelligence. There's something deeply moving to the idea of artificial intelligence in one form or another. And I think that's why it's caught on and why it has created this, you know, it's very easy for people like Sam Altman to craft these narratives, even if they're scientifically, basically, at this point, debunked, that somehow this, there's a ghost in the machine, and that there's more to it than meets the eye. Right. And I find this very frustrating to be very honest.
Starting point is 00:39:27 It is entirely clear that there isn't more to it than meets the eye. The science at this point is clear enough to understand that it's very unlikely that this technology has any real intelligence. Nevertheless, the technology has tons of opportunity to automate jobs, precisely because the jobs we've created do not necessarily require intelligence. Often they are just robotic pieces of work that people have to do along some kind of instructions, and that the technology can do absolutely. Well, actually, like I said said it doesn't necessarily have to have intelligence or sentience or any of that. What it does have to have if it's going to be useful is
Starting point is 00:40:15 some kind of value. If it can automate a task with precision and speed and accuracy then you know that's good enough. It doesn't need to be actually intelligent. A calculator isn't intelligent, but it's very, very useful. Absolutely. And so what I do when I talk to CEOs and CFOs and people who are smart but who are not technical, and they're looking at this technology and they're confounded, is there something deeper here? What can it do? We need to understand what it can do. What is the threat that poses to our business?
Starting point is 00:40:51 My favorite way of breaking it down is basically, well, at least the agents, when you look at the marketing, it's just outsourcing. It's literally the same thing. When you look at the marketing, it's there as a coworker, you're going to work with it. It's going to be fast, it's going to be cheap. But it has all the properties of outsourcing. There isn't actually anything special here. So you can understand this
Starting point is 00:41:22 even if you don't understand the technology, you just take the claims and you go, okay, wait, actually you're pitching me outsourcing and that has a few problems. A, if you've done outsourcing, you know this doesn't happen overnight. It's never easy to outsource. There's context transfer. There is which vendors are real and which vendors are going to screw you over, right, who can be trusted. Will the vendor, especially in the context of let's say manufacturing, for example, going to China over 20 years, in Europe the conversation is often around is there knowledge transfer? Is the outsourcer extracting our knowledge? Is that outsourcer going to show up as a competitor
Starting point is 00:42:03 down the road? If we have outsourced much of the core of our business, like for example Boeing did over 10 years in order to issue stock buybacks to their shareholders, at which point can the outsourcer, can the manufacturer start competing with you? And as we see with the trade war, which I find very timely right now, the trade war kicking off that long conversation again, have we hollowed out the middle class? Have we moved everything to China?
Starting point is 00:42:35 Well, yeah, we have. Like when Donald Trump puts a tariff on sneakers, and therefore that $10 Chinese sneaker cannot come to the US and be converted into $150 Nike sneaker by putting a logo on it, he's hurting his own economy $440 for every $10 that he's inflicting on China, right? So at that point it's already too late. And that took 20 years and that didn't come out of nowhere. There's a article on the Berkshire website written by Warren Buffett for Fortune magazine
Starting point is 00:43:10 that basically called this all out 20 years ago in 2003 and said that we need to do something about it. Ironically, he said basically, what is being done today? It's just a bit late for that. So when we take that all together, yes, you have the risk. And in fact, I think the risk is very heightened. If you are a vendor ever, and you've ever worked with Amazon,
Starting point is 00:43:30 you're familiar that Amazon looked at everyone's data, identified the most promising products, the products that moved the most value and made their own personal house brands, Amazon Basics, and started this intermediating the vendors. There's nothing that stops the AI companies if you are using an outsourced AI model like Gemini or Chatchie PT to observe all your inputs, identify the use case. And if you made that magical Sam Altman five people, 100 million AAR company, well, it's just five people. You think these people don't
Starting point is 00:44:03 have the money to just cut you off at the source and there's nothing you can do about it. So when you look at all of this together, the question of should we jump onto AI, right? When you ask CEOs, why are you jumping? You need market signaling, right? Check.com told everyone if the greatest risk in AI
Starting point is 00:44:26 is my CEO saying something wrong about AI, that the market doesn't want to hear. So ideally, you have to say something optimistic. You cannot afford to be pessimistic about it. The market will punish you. It's one of the main reasons why there is no one who says something negative about the technologies, because the market will punish them, right?
Starting point is 00:44:44 And then at that point, you have to find something to convince people that you're serious. And so we see all kind of patterns. We're seeing large telco companies giving subscriptions to perplexity to everything, everyone of their customers, to wrap that fresh send off AI startup around them. Nevermind that the technology is basically not working. It's fast, but it's not accurate,
Starting point is 00:45:12 which isn't a great property for a search engine. Or you embark on promises like I'll take Klana, we're gonna replace everyone with AI. You can take Intuit it a company that basically, after 20 years of fighting with lobbying, lost the fight and had the US government launch a tax filing app. And instead of going to their shareholders and saying, well, we're kind of screwed now because the government is
Starting point is 00:45:39 our competitor now, they go out and say, well, we laid off 1,000 engineers. Isn't this amazing? Because we're going to hire for AI, The future is pride and people buy it. Right. The stock appreciates on the news. Right. And so, yeah, you put this all together. This is where we are. This is how the world works. And this explains why you see, you. And from a tech company perspective, this is the ultimate narrative because it is a frontier. It didn't stick just to the AI frontier.
Starting point is 00:46:14 The narrative went from this is the future to actually, this is a very expensive future. You need to be able to afford nuclear power plants and have your own data centers. So only us can do it. And when when you look at the SMP 500. It detached the moment that narrative caught on the big seven magnificent seven started detaching from the rest, because the assumption was wow if I want to invest into this magical future, I can only do it through the companies that can afford this. Right. can only do it through the companies that can afford this. Right? And that, even that was not enough. The next step was to say, well, and AI is going to do health care, and AI is going to
Starting point is 00:46:51 do finance, and AI is going to do military. So there's an expansion of this magical frontier where money is going to be made. And at the same time, there is a focusing of the only players that can play are these bunch of handful of companies that have these properties. And that worked until, well, January. And then DeepSeek came around and demonstrated with math that actually the cost of entry is vastly lower.
Starting point is 00:47:19 There was a lot of smoke screening around that. But then you can read the papers they published. It's been reproduced at this point, it's very clear the cost is not nearly on the level of a big tech company. At least nation states and well-funded other organizations like quant shops in China, like DeepSeek, can absolutely play. And so when you ask yourself, okay, what is the purpose of investment? Why do you invest? Why you take an investment? You take an investment because fundamentally
Starting point is 00:47:52 you believe that money will allow you to grab market share or build an appreciable mode, right? You can somehow accelerate away from the competition. So when you are in 2025, when you look at that and you ask, so all of that money, has that created any mode for anyone? Because it sure looks like the Chinese are basically a month behind, which is nothing. And none of you seem to have a knockout blow because whatever OpenAI demos a month later, everyone else seems to have as well. So that money was wasted. And in fact, it's not wasted. That money turns into a liability because that money means that you spend 10x more, 100x more than China, and your investors want that money back at some point. They're going to have to
Starting point is 00:48:38 be extracted from your customers. So if you can't get that magical Silicon Valley duopoly or something, if the Chinese keep undercutting you, if other companies keep undercutting you, this is going to end in tears. Thanks for sticking around. For more stories like this, check the link in bio and follow Link Data Orchestration.

There aren't comments yet for this episode. Click on any sentence in the transcript to leave a comment.