Orchestrate all the Things - Breaking the AI bubble: Big Tech plus AI equals economy takeover. Featuring Georg Zoeller, Centre for AI Leadership Co-Founder
Episode Date: August 19, 2025
Whether we like it or not, and despite tales of its powers being greatly exaggerated, the AI genie is out of the box. What does that mean, and what can we do about it? In another twist of abysmal AI politics, OpenAI CEO Sam Altman just admitted that we are in an AI bubble, and AGI is losing relevance. You may find this baffling or hilarious, or you may be wondering where does that leave the AI influencer types. But despite the absurdity, AI and the associated narrative have gotten way too important to dismiss. Connecting the dots to make sense of it all calls for long-standing experience in AI, engineering, business and beyond. In other words, for people like Georg Zoeller: a seasoned software and business engineer experienced in frontier technology in the gaming industry and Facebook. Zoeller has been using AI in his work dating back to the 2010s, to the point where AI is now at the core of what he does. Zoeller is the VP of Technology of NOVI Health, a Singapore-based healthcare startup, as well as the Co-Founder of the Centre for AI Leadership and the AI Literacy & Transformation Institute. In a wide-ranging conversation with Zoeller, we addressed everything from AI first principles to its fatal flaws and its place in capitalism. Today, we discuss regulatory capture, copyright, the limits of the attention economy, the new AI religion, the builder's conundrum, how the AI-powered transformation of software engineering is a glimpse into the future of work, AI literacy and how to navigate the brave new world. Article published on Orchestrate all the Things: https://linkeddataorchestration.com/2025/08/19/breaking-the-ai-bubble-big-tech-plus-ai-equals-economy-takeover/
Transcript
Welcome to Orchestrate all the Things. I'm George Anadiotis and we'll be connecting the dots together. Stories about technology, data, AI and media, and how they flow into each other, shaping our lives.
It's a singularity. It is an oversubscribed term in the context of AGI. But if you think about it, the singularity is that point when progress goes so straight up that you can no longer see what is happening the next day or what is behind the curve. And we're already here. And we're already here, not because the technology is so fast yet. There's still room to go, unfortunately, but because the ability of our institutions, governance, organizations is vastly outstripped.
Whether we like it or not, and despite tales of its powers being greatly exaggerated,
the AI genie is out of the box.
What does that mean and what can we do about that?
In another twist of abysmal AI politics, OpenAI CEO Sam Altman just admitted that we are in an AI bubble and AGI is losing relevance.
You may find this baffling or hilarious, or you may be wondering where that leaves the AI influencer types.
But the truth is that, despite the absurdity, AI and the associated narrative have gotten
way too important to dismiss.
Connecting the dots to make sense of it all calls for long-standing experience in AI,
engineering, business and beyond.
In other words, for people like Georg Zoeller, a seasoned software and business engineer experienced in frontier technology in the gaming industry and Facebook.
Zoeller has been using AI in his work dating back to the 2010s, to the point where AI is now at the core of what he does.
He's the VP of Technology of NOVI Health, a Singapore-based healthcare startup, as well as the Co-Founder of the Centre for AI Leadership and the AI Literacy and Transformation Institute.
In a wide-ranging conversation with Zoeller, we addressed everything from AI first principles to its fatal flaws and its place in capitalism.
Today, we discuss regulatory capture, copyright, the limits of the attention economy, the new AI religion, the builder's conundrum,
how the AI-powered transformation of software engineering is a glimpse into the future of work,
AI literacy, and how to navigate the brave new world.
I hope you will enjoy this.
If you like my work on Orchestrate all the Things, you can subscribe to my podcast, available on all major platforms, and to my self-published newsletter, also syndicated on Substack, Hackernoon, Medium, and DZone, or follow Orchestrate all the Things on your social media of choice.
That was a long analysis, but I didn't want to cut you off, because you said out loud many of the things that, well, certainly myself and I guess many other people are thinking quietly, because most of them don't get the chance to say them out loud. So it seems like the key word out of this analysis is the frontier. Right now, basically, the whole capitalistic growth engine was kind of stuck, and they needed a new outlet to push the whole thing forward, and AI just happened to be it.
To that narrative I just want to add another point of view, from another guest I previously had on the podcast. He comes from a background which is similar to mine, so mostly symbolic AI, knowledge representation, all of that kind of stuff. And for him, the conclusion of the long conversation we had was: look, it doesn't matter what you think about, you know, this whole AI narrative, it's happening anyway. It's happening anyway because there's lots of money and very powerful lobbies and interests behind it. The only question is, what are you going to do about it at the personal and the organizational level?
And to that, I was also asked the same question, by the way, when I was teaching AI to a class of managers, like middle management in a company.
And like many of the people that you mentioned, these people didn't really have the technical background, and they were kind of dumbfounded by all these, you know, stories being thrown around. Okay, is it going to replace everyone? Is it sentient? What's going to happen? You know, what's the future?
And after a few hours of class, when we talked about how everything really works and what it can and cannot do, and a little bit of hands-on, and they actually got to see, okay, so it's actually not intelligent, one of them kind of stood up and said: well, okay, so after you've shown us all of these things, like, you know, all the ways that it can go wrong, do you still advise that we should use it?
And my answer to that was: well, look, I can't advise you whether to use it or not to use it. What I can tell you is that everyone else is going to. And even though there's a bunch of disadvantages and downsides that come with it, you can also say the same thing about cars, for example. It's not that you don't use them, but you are aware of what they can do and the ways in which they are dangerous, and you take out an insurance policy, so, you know, when something bad happens while you're using the car, you have a fallback.
So that would be my advice: do use it, but use it with caution.
Yeah, look, this isn't just the technology, right? This is only half a joke, but I would say OpenAI could have never made back the money that was invested, right, which was mostly wasted, right, because their competition basically arrived at the same point spending 100 or more times less, which means they will never be as competitive, right? But it doesn't matter, because when you took enough money and you invested in the right people and, let's just say, regulatory capture in the United States, you have the president of the United States, you know, host your fundraisers, you have the vice president of the United States go to Paris, to the AI summit, and dance the no-safety dance. You're no longer facing, you know, a company that, let's say, could be a WeWork or a Tesla or however you look at it, where, you know, eventually the chickens come home to roost, because it is backed by the United States government. It owns the United States government, right?
And we know that a lot of money now is flowing into these companies purely because people are forced to feed the social media history of every single person who crosses over the border into Grok and get an assessment. Should we be doing this? Obviously not. It's complete nonsense from a democratic perspective, because the technology isn't explainable. It's not transparent. It would be a rights violation in every normal country, but we're not in a normal state anymore.
Right. And so the second point is, the technology has already transformed the economy. And whether you like it or not, the transformation is real. I'll give you two examples.
One is: you write a book today. Maybe you're a university prof, it's your final project; you spent two years of your life, at your own expense, writing the book. You publish it, you know you had a decent audience with previous books, it's going to make its money back. Within hours of publishing on Amazon, a day later at most, two other books are out there that contain the same content, but not the same words. And because the people who made those on the cheap, for $3 of ChatGPT, never spent any money, they're spending their money on advertising. And when you understand that, the more volume there is, the only thing that matters is: will anyone actually see my product? And the only way your product is seen is if you have enough likes, which can be bought. There are vending machines for Instagram likes and Amazon likes here in Asia, right? Or you give the vendor direct money, right? You, as a creator, as someone who created a book or an app, will always be at a disadvantage against the grifters who take what you did, didn't spend the money on creating it, and are pushing the app or the book out there with social media amplification and paid ad spend.
We have inadvertently broken copyright, right? Without having, you know, a rosy view of copyright: in the Anglo-Saxon world, copyright was created to encourage, to incentivize creators to continue to create, not because the world liked artists, but because the world liked publishers, and they lobbied really hard and they had a lot of money and power. And we broke that. We have broken the incentive system that has kept creation going. And we've broken it not just here, but also on the internet at large.
Many of these models are trained on Web 2 concepts, which means Stack Overflow, right? That place where every software engineer goes and checks for the answer and shares their knowledge freely was a Web 2 phenomenon that no longer works in the world of AI, right?
And so with the incentives being broken, you cannot escape that. You cannot graduate out of university and believe that you just need to make the right app and people will come. No one will see it. It will die lonely in a corner of the street. And the grifter who saw your app and realized it was useful will make all the money, because he spent on ad spend, because we have failed to regulate the attention economy. And attention, which is the underlying primitive for everything, is more valuable than money, because attention represents opportunity. And we've given basically full control of all attention to three, four platforms, right, who are going to benefit massively from AI.
Facebook isn't giving a Manhattan Project's worth of AI away for free because they love open source. They're doing it because accelerating content creation creates more and more content on the supply side: video games, movies, all made cheaper, meaning more of them. And they all run into the door of the platform on the supply side, and they have to pay to get through the door to the customers. So the more supply there is, the more video games are being made, because they're easier to make with AI. The creators don't get paid anymore. Neither do the investors,
because in the end you're going to spend the same money on Facebook and on Google trying to find a single player for your games, right? And the brutal thing about this, the absolutely brutal thing about this reality, is we haven't created the regulatory means to fix that. There is no more public market. The public market in the digital economy is mediated by a handful of companies. The cost of user acquisition for small SMEs is often hundreds of dollars per user. If you have a medical practice and you're acquiring users for your product, they will have paid $200, $300 to Meta before they even walk through the door. It's a silent tax on every item on your desk, on the headset on your head, everything in your room, because advertising is baked into the cost of every single product.
AI, on digital products, on knowledge economy products, is commoditizing, right? And when we look at industrialization, right, we have a bit of a rosy view of industrialization, because we all have cars and fancy things in our homes that previously were limited to rich people, because, you know, it created commoditization and allowed prices to drop. The problem with that is, in the digital economy, the prices are already commoditized.
You can play the most successful games in the world, like Fortnite, for free, free-to-play. For $15 a month, you can watch as many videos and TV shows as you want, or listen to all the music you want. So the reality is, we're not even going to see a supply expansion, because we are heavily limited by attention. People just don't have more time to spend on their phones, time to play more games, time to watch more movies.
So fundamentally, when, especially I think, governments, you know, are very careful to not regulate the technology, out of fear of not getting that magical growth that it might deliver, we're all a little bit in a delusion here, because where is that growth going to come from? It's going to come from cutting jobs. It's the only outlet we have for actual change. And that is purely a redistribution of GDP. Right?
So where does that leave us? And I mean, where do we go from here, basically?
Well, you can take the, you know, let's put it this way. The techno-optimistic view tends to be basically a poison pill. If you worked at big tech companies, you know that basically we always talk about the best things that will happen, because it forces regulators to weigh present negative effects against untold future riches, and no one has the guts to walk away from growth in a very growth-constrained world, because we are just at that point where capitalism is kind of running out of growth, right?
And so the super-techno-optimist takes on AI can mostly be debunked even from first principles. These models cannot rise above their own intelligence. Whatever is trained into these models is contained in there, which is every book we ever wrote, probably, so that's a lot. But there's no scenario where that turns into superhuman, beyond speed. You want the answer to climate change? It's already in there. We know the answer, we just don't like it, because we don't like that we have to cut carbon costs, right? But the claim is easily made that, oh, it's going to figure out climate change. Yeah, right.
And because people feel... technology has become a bit of a religion, right? When people start babbling tech, it's like Latin, the high priests of growth. And so no one wants to fight that, but it's nonsense.
Where it leaves us is probably fundamentally mediated by your view of the world. If you feel that, you know, it's worthwhile pushing back, you still need to learn the technology, because you can only credibly talk with hands-on exposure. There's no way around it. There is no path, either on the positive or negative side of AI, that somehow absolves you from having to learn this technology.
And I find that it is hugely valuable because when you actually work with the technology, you realize how much hype there is.
And also how much underappreciation there is for some of its effects. As someone coming out of the video game industry: the industry is toast. Complete toast, not because the technology can, you know, replace all creativity, but because the technology takes a large number of jobs in that industry and wipes them from the face of the earth.
3D modeling is no longer going to be a thing for most people because the technology will just be able to do that.
Right. We've seen how quickly we went on images, right, and how non-durable 'oh, but, you know, it can't draw hands' was. These weeks, right, Google Veo, I think, is showing everyone that the average person will not be able to tell the difference anymore. That doesn't make the AI slop good, but it makes it indistinguishable. It's only a matter of time until a good storyteller gets their hands on it and integrates that into their process. Right. And for games, the problem is, we see that today we could make an AI-native game studio, but in six months, and this is where we go back to the frontier, and in a year, that would be completely different, because the technology is moving too quickly. We're in the frontier phase of the technology, moving very quickly, when all the easy early gains are found, the kind of things that DeepSeek came up with, clever but in the end somewhat obvious, right? Then the technology moves so fast that whenever you step off the elevator to actually build with it, oh, now we're going to build a game with that, you're going to be left behind by the rapid progress, and your competitor who didn't spend a dime and started six months later than you will be arriving at the goal in front of you.
So we have a weird situation where it is too early, often month over month, to dive into the technology and really build with it. But you should dive into the technology to get familiar with it, and to get that sense of when it will plateau out; then will be the right time to build. The game industry is toast because you cannot invest in anything, because we know that in six months or a year, there will be that magical point at which starting really makes sense. And before that, you're mostly doing R&D that everyone else gets for free from open source, right? And I would wager there are other industries that are kind of in that same bucket.
At the same time, when the technology interfaces with the real world, and this is where, you know, the hype is completely off base, especially around agents, it's just not working. In medical, the reliability issues of this technology, which are fundamental, hallucinations cannot be removed, wipe out most of the use cases.
And the industry is pushing hard.
It's like we should, you know, poor people can't afford health care.
What if we gave them unreliable health care instead?
Let us do that.
It's nonsense, right?
It's a red line we should never cross, really.
Especially, you know, it's mostly expensive because it's for profit in the U.S., but that's a different story.
You know, working with a medical company right now, the technology is just not ready for almost anything.
We will find magical things with it.
We will use it to, you know, find needles in the haystack.
It's really good at that and so on.
But this is not something that a normal company can just implement.
Even something as simple as document extraction, right? Like when you look at the best practices for the latest Claude model, they instruct you: don't tell Claude what not to do, tell it what to do. And the reason for that is because when we use negation in a sentence, there's a risk that the negation token, the word 'not', can disappear in summarization and in other scenarios, right, leading to exactly the opposite outcome. When we look at, for example, medical transcriptions, we find that both humans and AI make errors, but AI errors tend to be malicious because of the loss of tokens like negation, right?
And these are not minor things.
Take this medicine or do not take this medicine.
It's a huge difference.
And humans don't make that particular mistake very often, but AI models make it at a much higher rate and much more unpredictably.
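To make that positive-phrasing guidance concrete, here is a minimal sketch using the Anthropic Python SDK; the model ID and the example prompts are illustrative assumptions, not taken from the vendor's documentation.

```python
# Sketch: phrase instructions positively so no single token carries a negation.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

# Risky phrasing: the meaning hinges on the token "not". If that token is
# lost in summarization or context compression, the instruction inverts.
negative_prompt = "Do not include the patient's dosage history in the summary."

# Safer phrasing: states the desired content directly, so dropping any one
# token cannot flip the instruction into its opposite.
positive_prompt = "Summarize only the diagnosis and the treatment plan."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: substitute a current model ID
    max_tokens=500,
    messages=[{"role": "user", "content": positive_prompt}],
)
print(response.content[0].text)
```

The same reasoning applies to any model, not just Claude: the positive form degrades gracefully, while the negative form fails in the worst possible direction.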
So in all of this, I think the answer to your question is: unfortunately, you have to work with this technology. It's initially scary, and it will make you feel almost demotivated. Like, when I code with it, it's not fun, because, you know what, code review sucks. Code review of code written by a machine that makes stupid errors and then writes brilliant code is about the most tedious thing I can imagine, and it sucks the joy out of the, you know, act of architecting and creating. But nevertheless, I have to acknowledge that for some scenarios it's just so fast that I can no longer justify doing it by hand, like components for a website or something like that.
Yeah, so then I guess the follow-up question is: fine, I guess everyone will have to educate themselves in one way or another. So how do they do that? And to put that a little bit more into context, this is, I guess, in more formal terms, the so-called AI literacy, which has also been part of the latest EU AI Act. Every company that uses AI in one way or another, deploying systems or producing systems or whatever, should get their staff to be AI literate. And this is something that I know you are involved in as well, because you have founded an organization whose goal is to promote AI literacy. So what's your recommendation? How do people get AI literate?
Well, what we saw and what prompted us to found this particular company was
that currently, institutions of learning don't stand a chance, in general. The frontier nature of the technology, how quickly it changes, is incompatible with the internal bureaucracy of most universities, of most schools, right? Because out there at the frontier, it requires people who are using it, who are trying it, who are reading the papers constantly.
It's a full-time job, to be honest, for many of the people I work with to stay on top of this technology.
And so we provide education solutions, basically.
We create boot camps, for example, software engineer to AI-enabled software engineer, that contain the things that aren't just minute facts, right? Like, I think you see a lot of prompt engineering courses early on. You see a lot of ChatGPT courses; they're not very helpful, right? In fact, everything that was taught to people in the first year about prompt engineering is wrong now. Models have changed. If you use the things you learned a year ago about how to prompt, it's not working. Right. So these are the wrong things to teach. What you have to teach are the fundamentals that help people connect how this technology works to what they are experienced with.
Right. And so I give you one example that I think is broadly underestimated.
This technology for the first time introduces non-determinism into the abstraction layers of software engineering. It's a mouthful, but what it means is, you know, if you're like me, you grew up in the 80s, went to school in the 90s and university and so on, you were taught how to use a microprocessor, how to write assembly language, how to go to the next level, right, C++, higher-level languages. Today, we have so many layers of abstraction. The first games I wrote, you wrote into the screen memory. That's it. Right? Now you have graphics drivers and buffers and, like, God knows what. And so there are many, many layers of abstraction, but they are all deterministic.
Meaning, you can test them.
One input produces one output reliably, right?
And testing is a huge part of what we do in software engineering.
It's really important.
And a smart person, an educated person, can, in the end, drill down all the way to the
microprocessor to find whatever the bug is in the software and make it work.
And the moment you add generative AI to it, it's game over, because generative AI is non-deterministic. You can have one input and it can have infinite outputs. You can test a hundred times, and on the hundred-and-first time it's wrong. That might not sound terrible, but when you have 10,000 patients, a 99% success rate is not good enough, right? That's still a hundred patients getting the wrong answer. And I think this is massively underestimated. It's something that you need to teach, because testing doesn't work anymore. You have to switch to observability in order to keep your software running safely, in order to satisfy some of these AI Act requirements, if they are taken seriously, which, you know, I don't know.
But these are the fundamentals you have to teach.
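As a sketch of what that shift from testing to observability can look like in code, here is a minimal example; the model call and validator are hypothetical placeholders you would supply, and the alert threshold is an illustrative assumption.

```python
# Sketch: measure a non-deterministic component instead of asserting on it.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-observability")

def observe(model_call, document: str, validate, samples: int = 100,
            threshold: float = 0.99) -> float:
    """A deterministic unit test asserts one output for one input.
    A generative model can return a different output every call, so we
    sample repeatedly, measure the pass rate, and log every failure."""
    failures = 0
    for _ in range(samples):
        output = model_call(document)
        if not validate(output):
            failures += 1
            # Keep the failing payload: inverted-negation errors
            # ("take" vs "do not take") are only caught by inspection.
            log.warning("validation failed: %r", output)
    pass_rate = 1 - failures / samples
    # 99% sounds high, but over 10,000 patients it means roughly 100 bad
    # outputs, so the alert threshold has to match the stakes of the domain.
    if pass_rate < threshold:
        log.error("pass rate %.3f below threshold %.3f", pass_rate, threshold)
    return pass_rate
```

In production the sampling would run continuously against live traffic rather than a fixed document, but the shape is the same: measure and alert rather than assert.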
And currently, I think it's really hard for universities to get those out.
I think everyone is still struggling with the fact that we kind of murdered academic peer review.
Everyone just publishes to arXiv. I would assume that 60%, 70% of what is published to arXiv is actually hidden press releases. Silicon Valley figured out by 2022 that people read arXiv papers; they don't read PR Newswire, right?
And so then the industry came up with an antidote, Papers with Code, right? Who are you going to believe? Well, the one whose code I can run, right?
But even that, I think, has not arrived in academia in many places.
And so outside of specialized classes like machine learning and so on, which had to adapt, the technology has such broad implications for everyone: HR, business. When should you use a transformer? When can you risk using a transformer? These kinds of things. We're not very good at that. We're instead running around teaching ChatGPT. And so that's what my company is trying to fix: education solutions and stakeholder education, like helping non-technical people make sense of this madness, primarily decision makers.
Exactly. This was precisely what I wanted to get at,
because you use the example of software engineering,
which is in some ways good,
but I would also say not representative of most people. Because a software engineer by definition already has a technical background, and therefore it's easier for that person to understand the implications, and to also understand how to apply AI, the effects of doing so, and so on. But for people who are non-technical, I think you have to start with the fundamentals before getting deeper into the weeds. So how do you do that?
Yeah, we have a bunch of workshops and formats, basically, to help with that. And primarily, honestly, at this point, we're only focusing on decision makers, C-level, maybe down to line manager. Because what we found early on is, it's actually a fairly depressing topic, often. And you have to go through kind of a trough of disillusionment about the idea that you could avoid the effects on your company and so on. And so when you're hitting people who have no agency over that with it, it's not very helpful.
In fact, companies need to have a communication strategy.
Companies need to think very carefully about change management.
And this is not going very well, right?
Like we're seeing, for example, the need to very aggressively virtue-signal to investors is costing a lot of companies a lot of money, like Duolingo, right? The CEO clearly is virtue-signaling to investors about laying off a whole bunch of people and so on, but the users don't want to hear that.
They hate it, right? Because what you're really saying is you're replacing human work with machine work, and that's not just threatening to me, because my job might be on the line at some point too. It's also cheapening: it's clearly cheaper for you, and I'm still paying the same price. So my product is, you know, going from handmade down to, you know, machine-delivered, and I don't trust this technology. And so Duolingo did probably a billion dollars of PR damage to itself by not getting that one right.
And so we focus primarily on the upper levels of the company
to help them come up with an understanding of what the disruption risk is, right? Some companies get wiped out in six months by this technology; look at stock photos, for example, right? And about every couple of months, there is a new advance that is putting companies on notice: you're going to either move your business model or you're going to die. So those are the worst-case scenarios, but then there are also the risks of, you know, not having policies, the regulatory risks, right? Like the risk of people using this technology, for example, to evaluate candidates. In the EU, if you're using generative AI to evaluate job candidates, you are going to violate the law.
You're going to be discriminating against people.
And everyone is doing it, because you are already watching your HR department getting 100x the number of applications. You haven't given them any more manpower. And so they're helping themselves to AI. And you're going to have to face that. And you can't just say, don't use AI. You're going to actually have to face that the technology has broken recruiting. It has broken recruiting. It is irretrievably broken now, especially with deepfakes, right? And it has broken corporate security measures like voice authentication and all kinds of things, right?
In banking, for example.
And these are things that you need to become aware of.
And then you need to work on strategy, not how to fix these things because there's going
to be more and more, but how you stay literate about the technology, how you can stay on top
of it without having to become, you know, an AI major yourself.
Yeah, I think, you know, the takeaway from all of that is that it all seems to be moving at superhuman speed.
Nobody can really keep up.
I totally subscribe to what you said, that it's a full-time job to be able to even superficially stay on top of things.
And, you know, I see it from personal experience.
I try to do that, but, you know, at some point you have to give up really, like, okay.
You know what it is?
We have a word for this, right?
It's a singularity.
It is an oversubscribed term in the context of AGI, right? But if you think about it, a singularity is that point when progress goes so straight up that you can no longer see what is happening the next day or what is behind the curve. And we're already here, not because the technology is so fast yet. There's still room to go, unfortunately, but because the ability of our institutions, governance, organizations is vastly outstripped.
And I do believe that is intentional because I worked in big tech.
And what was sometimes a drawback for us, our six-month performance cycles, for example, are now pluses.
These companies have realized that, you know, literally they are incapable of running projects
that are longer than six months because of the incentive system, right?
And that is good because you cannot run projects longer than six months right now.
If you're planning a project that takes longer than six months in the digital space right now,
you are saying, I can predict where AI is going to be in a year and that it's not going to make all my investments obsolete.
Just to, you know, when you look at the roadkill: avatars. There were tons of companies founded in the pandemic to do virtual avatars. They are all dead. None of them will survive, because Meta open-sourced the technology and it's fusing with AI, Gaussian splats, and so on, mostly in open source. So whatever proprietary technology you built, you're dead, right?
And so anything you are pursuing in technology, including adjacent areas like robotics,
if you don't have a good grasp, you cannot build anything longer than six months.
And so it's my belief that these companies are actually accelerating technology very intentionally, right,
with open source, with all the releases, with all the pressure,
because it keeps the rest of the economy entirely vulnerable to them. The rest of the economy is either going into the wrong adventure or is paralyzed.
So basically it sounds like a big tech takeover. And I think I totally agree with what you said about the whole speed aspect. And this is an argument that oftentimes people use to draw a parallel with previous technological breakthroughs. So, you know, it happened before. Don't worry about, you know, job displacement and destruction, because eventually we're going to get new jobs that nobody can predict, and so on. Yes, but what's different now is the speed. I mean, we had some time to adjust to the industrial revolution and whatever other previous breakthroughs, but this time it's so, so fast that nobody can really keep up.
Well, I teach a class on this, right? Like one of
the classes I teach at a local university has a significant part going through history, going
through the industrial revolution and its effects, because they are incredibly valuable to understand in this context, right? From a first-principles perspective, it's different. It took us 100 years to string all the wires for electricity across America. It took us many years to get the standards right, right? Like AC/DC, all of these things, right? And it took us 50 years to get the cell towers everywhere, and the internet. All of these things were measured in decades.
And this is the first time, because cloud basically built the previous infrastructure, this is the first time we have a technology that has no preconditions we need to achieve. Even the GPU is a 20-year-old technology, 10 years old with, you know, the heavy AI kind of acceleration load.
There's nothing new here.
And that means we are now at a place where, you know, the invention of the printing press took almost 100 years to lead to the Thirty Years' War, which was a result of it, right? The propagation of Luther's writings, which previously would never have managed to propagate without the printing press. Today, we are compressed, because back then, the speed at which knowledge traveled was not the speed of light. We're at the speed of light for that.
And because there's almost no physical infrastructure,
you can adopt an API,
you can download an app, the stores, the protocols,
you know, the fiber lines, they're all there.
So that's different, which means we have a lot less time, right?
And when you look at it, time is the one thing that you need.
Humans have never been very good at adopting a new technology. That's a fair case to make. We always have to be pushed, right? And it's often not pretty, but time makes all the difference in the impact,
especially when it comes to the stability of society.
So that's one.
The other thing is we have never quite faced something like this.
So the best map to the machines of industrialization
is software, because software is the same thing.
You write something specific, a specific process,
a specific machine for the knowledge economy
that automates a specific task.
AI represents the ability to print the machine.
And it's not created by a human anymore.
It's diffused by training, distilled, so to speak.
And when you look at the properties of the transformer,
it basically comes down to this: if there is enough data available about the output of a profession, and there are patterns in the profession, right, and enough volume, it can be automated. The machine can be printed. So when people say we've always figured out new jobs, that is all right, but we've never faced a technology that can automate any job that has patterns.
So I find these appeals to history not very helpful, because from first principles we can identify: where are the differences here?
What is new?
What is a new property?
And I remember in the early years, like three years ago when I attended conferences and so on, I would have the MIT crowd very loudly propagate, you know, that human plus AI means something greater, right?
And you're going to collaborate with it.
It's not going to displace you, but that is wrong.
The transformer is literally trained on human labor output to replace that human labor.
There's no arguing.
It's complete substitution, right, which means this equation, human plus AI equals something greater, can only be true if we are finding a new place for the human to be.
Right. We either replace the human, or we relegate the human, right now, to quality assurance. Right? That's what the software engineer does. You become quality assurance for the model, because the model is not perfect. And not only does that cost, of course, significant job satisfaction, right? And I can only be, you know, sad to some extent, because if we look at it, software engineers have done this to the rest of the economy for the last 40 years. We've built machines for the knowledge economy that took the fun part of people's jobs away. So it's only fair that we're on the receiving end of this one now, right? At the same time, when we look at software engineering, because there's this surprising product-market
fit with writing code, and we look from first principles at why that is, there are a few things that stick out. First, you know, we have the best data set. We uploaded all of our profession to GitHub and Stack Overflow. Maybe we shouldn't have done this, but it's too late. And the output of our work is to some extent testable; compiling and running is a huge benefit. So AI is much more successful because of these two properties, and there are not many jobs out there that have those two properties. So in the next couple of years, people will spend an inordinate amount of money to get those properties, to get people to train the AI. I already see this in some big tech companies: the sales teams are forced to train their own replacement AI. So to get back
on track here: I do think that we are basically just running another industrial revolution, but this one has new properties that make it extremely dangerous for society. And, you know, in the back of my head there's always the question of... I mean, system collapse is not so unlikely in my lifetime. We've seen the Soviet Union collapse, which, you know, for people living in the Eastern Bloc must have felt like a full system collapse. I think we're watching the same thing in the United States right now. And AI layers on top of that pretty dramatically, right, because it is projected over large platforms that are owned in the United States, all over the world, right? And so, taking all of this together, no, I don't think we can say we're going to be okay. We're going to have to work at being okay, we're going to have to face it, and we're going to need leadership, to be okay. Just appealing to 'this is how it always has been' doesn't work. 'Just adopt the technology, because that's how you avoid disruption, that's what we did with cloud': it does not work, because cloud does not fundamentally affect your business at quite the same scale.
Yeah, I have to agree. And yeah, I think normally we
don't end conversations with the kind of warning tone that this one sets. But I think it's only fair that we do it this time, because, well, I wouldn't want to sugarcoat that message.
Thanks for sticking around. For more stories like this, check the link in bio and follow Linked Data Orchestration.