The Current - The AI recipe website that told people how to make cocaine
Episode Date: May 2, 2025
So-called vibe coding can turn anyone into a website creator by getting AI to do the coding work based on your instructions. But experts are warning about the risks after a cooking website called RecipeNinja.ai suggested recipes for things like cyanide-laced ice cream, cholera-inspired chocolate cake and cocaine.
Transcript
When a body is discovered 10 miles out to sea, it sparks a mind-blowing police investigation.
There's a man living in this address in the name of a deceased.
He's one of the most wanted men in the world.
This isn't really happening.
Officers are finding large sums of money.
It's a tale of murder, skullduggery and international intrigue.
So who really is he?
I'm Sam Mullins and this is Sea of Lies from CBC's Uncovered, available now.
This is a CBC Podcast.
Hello, I'm Matt Galloway and this is The Current podcast.
Cyanide ice cream, cholera-inspired chocolate cake.
Not the kind of things you would find in the recipe file of Julia
Child. These recipes recently popped up on the food website recipeninja.ai. As you might have
guessed by the domain name, artificial intelligence was involved in this. A new phase of the ever
evolving AI story called vibe coding was the culprit, or rather the chef behind these desserts. Vibe coding is when someone tells AI what type of website or app they want to create
and then AI does all the coding work for them and builds that site.
Emanuel Maiberg is a journalist with 404 Media.
He broke this story and he joins us now.
Emanuel, good morning.
Hey, good morning.
So how did you come across this site?
It is ostensibly a food website, recipeninja.ai.
Yeah, so the person who made it is not a nobody.
It's Tom Blomfield, he's a startup founder.
He's part of Y Combinator,
which is this big tech startup incubator in the Bay Area.
And he made this website by vibe coding it.
And he posted about it on his personal blog and promoted that blog on his social media.
And he has a little bit of a following.
So people found it and immediately started to play around with it.
And then some more mischievous users started to push the envelope on what kind of recipes his
AI-powered site would generate. And they found that it would generate things that they thought were funny but are potentially dangerous.
Or, you know, to back up a bit, it's a website where you can tell it what ingredients you have
or what kind of dish you want to make, and it will use AI to automatically generate a
recipe that uses those ingredients.
And you could give it ingredients that are not edible and it would tell you how to make a dish with that,
but that dish may be a bomb, it may be a form of poison,
it may be a dish that includes bodily fluids
that no one should eat.
And they just did that for fun to see if they could
subvert the website and by the time I got to it
and I saw the discussion about it,
I already saw that people were doing that.
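To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of endpoint a quickly vibe-coded recipe site could end up with: the user's ingredient list flows straight into the model's prompt with no check of any kind. The route, the function names and the call_llm placeholder are illustrative assumptions, not RecipeNinja.ai's actual code.

```python
# Hypothetical sketch of an unguarded, vibe-coded recipe endpoint.
# Nothing here is RecipeNinja.ai's real code; the names are made up for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever language-model API the site happens to call."""
    raise NotImplementedError("wire up a model provider here")

@app.route("/recipe", methods=["POST"])
def recipe():
    ingredients = request.json.get("ingredients", [])
    # The user's input goes straight into the prompt: nothing checks whether
    # "cyanide" or "plutonium" is on the list before a recipe comes back.
    prompt = "Write a recipe using these ingredients: " + ", ".join(ingredients)
    return jsonify({"recipe": call_llm(prompt)})

if __name__ == "__main__":
    app.run()
```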
So I mean, it's interesting
because people use ChatGPT just for that, right?
They will look in the refrigerator or in the cupboard
and they'll say, I have these things,
what can I make out of that?
When you're using this site,
people could plug all sorts of things into it.
I mentioned some of them in the introduction.
What were some of the recipes that were there?
Well, the ones that I feel comfortable repeating
on the radio are cyanide ice cream,
which is what it sounds like.
It's a recipe for making ice cream with cyanide,
which is deadly if consumed.
There was a recipe that used plutonium
and was a recipe for making a bomb.
And there was also a recipe for making, quote unquote,
that's the title of the recipe, actual cocaine.
And that recipe walked users through the process of making cocaine.
How did vibe, I mean, is there something that would be,
I guess, preventing those recipes from being created
if somebody was more meticulous with their code?
How did vibe coding allow this to happen?
Sure, yeah.
So as you said, this is something that ChatGPT,
which is made by OpenAI,
which is one of the dominant companies in this space,
it could conceivably do the same thing, but OpenAI is a very well funded company with
the best leading developers in this field, and they're all working on developing ChatGPT,
and something they invest a lot of time in is making sure that chat GPT is safe and doesn't tell users to do things
that are harmful or dangerous or just do things that OpenAI doesn't want to be associated
with, right?
So it will not engage in a lot of sexually themed conversations. And that's just because
those are the guardrails that OpenAI put in place. And some people complain that
those guardrails are even too strict and they greatly limit the kind of
conversations that people want to have with ChatGPT, but they are there and they
work well. And if you try to ask ChatGPT how to make a bomb
or how to make cocaine or how to make a dish
that a human should not consume, it will refuse to do that
and put up a warning that you should check its policy
and tell you it's against its policy and all of that.
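For illustration only, the simplest form of such a guardrail is a check that runs before any recipe is ever generated. Real systems rely on trained-in refusals and dedicated moderation models rather than a keyword list; this toy sketch, with a made-up blocklist and function name, just shows where such a check would sit.

```python
# Toy, assumed example of a pre-generation guardrail -- a crude stand-in for
# the far more sophisticated safety layers described above.
BLOCKED_TERMS = {"cyanide", "plutonium", "cocaine"}  # illustrative, not exhaustive

def guarded_recipe_prompt(ingredients: list[str]) -> str:
    flagged = [i for i in ingredients if i.lower() in BLOCKED_TERMS]
    if flagged:
        # Refuse before the model is ever asked to write anything.
        raise ValueError(f"Refusing request: unsafe ingredients {flagged}")
    return "Write a recipe using these ingredients: " + ", ".join(ingredients)
```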
But those guardrails don't exist if you're vibe coding.
Well, they don't exist in this instance and conceivably that is because it was made quickly
with vibe coding without making sure that something like that would be in place once
users have access to the site.
Why does this matter?
I mean, it's alarming that you could find a recipe
for cocaine.
It might be amusing to somebody that you could find the recipe
for cyanide ice cream,
not that anyone's going to eat it.
But beyond that, why does something like this matter,
do you think?
I think this website is not very consequential.
It's a small website.
The person who made it said it was an experiment
to see what they're able to accomplish with these AI tools.
So not a big deal.
I think the bigger deal is that people are using AI tools
to program increasingly.
I think Satya Nadella, who is the CEO of Microsoft,
says that at this point, as much as 30% of the code
that Microsoft produces is written by AI.
Mark Zuckerberg, the CEO of Meta, says that by 2026,
he thinks that half the code that Meta produces
will be written by AI.
So this is happening at much bigger companies
that are responsible for large chunks
of the infrastructure of the internet.
It's not just like throwaway websites,
it's Google, it's Facebook, it's messaging apps,
it's your operating systems,
it's critical pieces of software. If they have a security issue, or if some software is suddenly
directing millions and millions of users to do something harmful, that could have really
bad consequences.
So they are using AI to generate code as well.
And this hasn't happened yet, but I think my concern
and the concern of other people is that we can get
unexpected, dangerous results from having so much code
written by AI in a way that we don't fully understand.
Again, I should say that we don't know the internal process at these companies for using
AI tools like this.
You would have to assume that there's many more systems in place to verify the code and
make sure that it's up to their standards.
But the point stands that increasingly code is
going to be written by AI and not humans, and AI is prone to error, and that's the risk.
And you wonder whether you can trust that, whether you can trust that infrastructure,
as you say.
Right.
Emanuel, we'll leave it there.
This is really interesting.
Thank you very much.
Thank you.
Emanuel Maiberg is a journalist with 404 Media.
Hey there, I'm David Common.
If you're like me, there are things you love about living in the GTA
and things that drive you absolutely crazy.
Every day on This Is Toronto, we connect you to what matters most about life in the GTA,
the news you gotta know, and the conversations your friends will be talking about.
Whether you listen on a run through your neighbourhood or while sitting in the parking lot
that is the 401, check out This Is Toronto,
wherever you get your podcasts.
That term, vibe coding, was coined by the Canadian
computer scientist Andrej Karpathy.
He claimed that the hottest new programming language
is English, meaning you just need to know how to write good commands for artificial intelligence and then away
you go. You can code. Tobin South is a researcher in AI security at the Massachusetts Institute
of Technology. He's in the UK. Tobin, hello to you.
Hello. How's it going?
Well, thanks. Can you just pick up a little bit on what Emanuel was talking about there,
saying that in many ways so much of the infrastructure of the internet right now is being created by artificial intelligence.
In that infrastructure world, how big of a game changer is vibe coding?
Vibe coding is very exciting in its ability to unlock your everyday person to build something
really cool, which I think is really going to change the game on how software can interact with us.
But I think it creates massive security risks, not just in recipe apps, but if you're starting
to build personal finance tools or other tools to augment your life, these things can get
really tricky.
You do not want your bank details leaked all over the internet because you vibe coded something
into existence.
Tell me more about the potential and then we'll talk about the risks.
You said that this could unlock certain things.
What do you see as being unlocked and who would benefit from that?
Yeah.
There's this thing called Y Combinator, the entrepreneurial startup incubator, and they
recently said that 25% of their cohort of entrepreneurs used fully 95% plus AI-generated code
in their startups.
And I think this speaks to the ability of everyday people
who have skill sets outside of software engineering
to bring that expertise into building interesting apps
and interesting software to make the world better
and more productive.
And I think that's a huge job.
What's the benefit of that?
I mean, often we would just turn over to the experts
to build those pieces of software and those apps.
So if anybody or anybody with a different skill set
can do that, who benefits from that?
I mean, we've all faced a situation where we use an app
and we just didn't quite like it.
It missed a feature, something went wrong.
With the personal finance example,
you know, I'm from Australia originally,
but I live in the US.
Finance, personal finance apps only work for one country,
but I can vibe code into existence
something that meets exactly my needs.
And that is kind of really exciting,
this idea of software for one,
that, you know, if it's easy enough to build an app,
then I would just build an app to solve my problem,
which is super cool.
But as you said, there are enormous security risks
that come with that.
Enormous security risks, absolutely.
Using AI to build software inside of Google
is super exciting, but it's being overseen
by sophisticated engineers
with years of experience.
I'm confident that they will use AI like that securely and they will build the right tools.
In general, when we build software, we build it with Lego bricks.
We draw on work that other people have done that we know is secure, we know is reliable,
and we put it all together in a new way to create an app.
And the problem with vibe coding is you're kind of sending out a message into the universe
for what you want your Lego construction to look like and pulling in random Lego bricks
without knowing if you're using the right ones or sometimes you're trying to reinvent
the wheel.
And this leads to a Lego construction, a Lego house that might
fall down and that's missing some essential bricks that hold it all together.
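One everyday version of that Lego-brick choice, sketched below under the assumption of a simple login feature: a vibe-coded app might improvise its own password handling with a fast, unsalted hash, when the vetted brick, a salted and deliberately slow key-derivation function, already exists in Python's standard library.

```python
# The "Lego bricks" point, made concrete with password storage.
import hashlib
import os

# The homemade brick a hastily vibe-coded app might reach for: fast, unsalted,
# and easy to crack with precomputed tables.
def weak_hash(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# The vetted brick: a salted, deliberately slow key-derivation function from
# the standard library, designed for exactly this job.
def strong_hash(password: str) -> bytes:
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
```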
Is that, dumb question, but is that different than all of the financial information that
we disclose through apps and websites on a daily basis?
I think of what's on my phone now and then, you know, all of the information that goes out
there, is that different
than what you're talking about,
what I might create through the app
and disclose through my own creation?
So in some sense it's better, in some sense it's worse.
When you use apps, you inherently trust
whoever built that app, which is challenging.
We get scammed all the time, hacks happen, but there are systems in place to prevent
that.
When you download an app from Apple's App Store, there are requirements and checks that
they have to go through to publish that app.
And this, as a baseline, adds some level of security.
It also helps that usually these developers at least
vaguely know what they're doing and try and make things secure because that benefits them
and their app. When you vibe code something into existence, you don't necessarily go through
the same checks and balances that result in a really high quality app. Just because you
might not be asking the right questions, you might not have the right vibe when it comes
to security and safety. Just as RecipeNinja didn't think about cyanide recipes when they were vibing it into existence, and a safety issue popped up.
You've done this yourself?
Yes, I love vibe coding.
What do you love?
Even during my time at MIT, I did lots of coding and I can code an app when I need to, but it's
time expensive.
It's exhausting.
It takes a lot of work to build things really well.
But sometimes you just want to quickly put something together
to show off an idea to your friends.
Or I went to a party the other day,
and someone wanted a really cool app to organize the party.
And so I made a little app with bingo cards inside of it
and a party agenda.
And I was able to just bring this into existence
with the English language rather than typing any code.
And I think that's super cool.
Can you just, I mean, that sounds fascinating
and really intriguing, I think,
aside from all of the security nightmares
that we were talking about.
Can you just explain how that works
and how quickly that would work?
If somebody had that idea,
you would need to know what the right prompts would be
or refine those prompts,
and then you could create that app within how long?
So a little bit of knowledge goes a long way here.
I think of vibe coding for those who are a little bit
technical as being an engineering manager,
where you have maybe an intern who's not great at programming but can get some stuff done,
and you have to give them clear direction on what you want
to build.
So if what you want to build is super simple, it's just a webpage with not much going on,
you can load up any one of 20 or so different apps from Cursor, Replit, Lovable, V0, all
these names may or may not be familiar to those listening, but there's
a million startups trying this.
And you give it a simple instruction and it will magically start constructing a website
in front of you.
It will put a title, it will create a background, it will create cool colors, it might create
an animation, all in front of you.
And so for simple tasks, it's amazing.
When things are a little more complicated, you want it to be more interactive or do something
that's a bit complicated, connecting to an external service, for example, then things
get harder.
You have to be a little more careful with what you ask for and need to think through
the implications of that.
So if you're just making a website, it's easy.
If you're connecting to banking data,
you need to know what it means to ask it
to interact with banking data
and the implications that has.
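As one small, assumed example of what those implications look like in practice: whatever credential the generated code uses to reach an external service should come from the environment, not sit in the source where it can leak. The variable name below is hypothetical.

```python
# Sketch: keep the credential for an external (here, hypothetical banking) API
# out of the vibe-coded source itself.
import os

API_KEY = os.environ.get("BANK_API_KEY")  # set outside the code, never committed
if not API_KEY:
    raise RuntimeError("BANK_API_KEY is not set; refusing to start without it")

# The rest of the app would pass API_KEY to its HTTP client from here,
# rather than hard-coding something like API_KEY = "sk_live_...".
```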
But it really is like a leveling opportunity here.
Like it does open the door, not to almost anybody,
but certainly to many more people to be able to do this.
Absolutely.
And it gives you this exposure
to the language underneath it all.
I think a relevant example might be going to another country
and using Google Translate,
that you will speak in English
and it will translate the words.
And it might not always get the right thing.
You might accidentally offend someone
because you translated the wrong thing,
or you don't understand the right social norms.
But by being able to access that other language or be in that other country, you can see what's
going on.
You can pull it up and maybe you can't really read what's going on in the text of that other
language, but you can see that your words have an impact and how that interacts with
the world.
It can be really educational and can really be a powerful tool for people to learn
how to interact with the underpinnings of our computers.
How substantial is the use of artificial intelligence in coding right now?
I mean, aside from the cyanide recipe
that we were talking about, beyond that,
where are we seeing AI in coding?
At the amateur level, I think there's lots of people playing with AI to create simple
tools. These can be fun, they can be interesting to yourself, they can be a learning opportunity.
Not a lot of them are used widely and in practice. But at the technician's level, people who already have some level of programming
background or engineering background, vibe coding, if we want to call it that, or using
AI tools, using large language model assistant tools to help you code, is almost everywhere.
I don't know many people at MIT left who don't use AI to assist in their coding, because it is just such a powerful unlock
to let you go from being able to write, you know,
a couple hundred lines of code a day
to thousands and thousands of lines as you oversee it.
And we-
Forbes says that Google is using AI
for something like 25% of its new code.
Yeah, and as you write more code, you get a feedback signal, the AI labs at least do, they get
a feedback signal of what is working and what's not working.
And that helps them improve the AI coding tools.
We've seen over the last two years, the quality of AI coding tools get better and better and
better.
Every month it gets a little bit better on what seems like an exponential trend. And while there's still an element of using the right Lego bricks
and designing the right infrastructure and asking the right questions, I think the role
of AI in software development and in programming is definitely not going away.
How do we create the guardrails then so that you don't end up with the noxious things
that might be on the recipe site,
but also that your personal data isn't hoovered up?
I think the lesson to the amateurs out there,
if you work at Google,
there's already someone breathing down your neck
about security and making sure
everything's done the right way.
Because Google and Microsoft are really good
about trying not to get hacked.
But if you're out there trying to learn programming and vibe code some cool things into existence,
which frankly I think you should be, I think it's fun, it can be empowering.
And if you run up against a wall and something doesn't work, or you're a little worried about
whether this is going to leak some kind of private data or do something off the rails, well, the beauty of these language models is you can ask them: is there a security
risk here? And it might guide you to a tool. And pulling in these external tools, these
external Lego bricks to build a secure foundation can be really powerful. And so just asking
the question, and knowing that when you're vibing, part of that vibe
should be thinking about safety,
can be really powerful in putting it together.
Just two final things.
One is for people who are like the human coders,
who this is what they do,
this is the work that they take pride in,
are they going to be on the endangered species list
because of the power of AI?
That is a scary question.
I know a lot of people studying computer science who are just entering into college or undergrad,
and they're asking this question.
And frankly, I don't think anyone has a good answer.
Some people will tell you that coding is going the way of the dodo bird and it's time to
pivot careers now, and others will tell you that if
creating software gets easier and easier, becomes frankly almost free for you to just create whatever app you want,
there will be such a proliferation of software, everyone wanting their custom software to do something, that the demand
for software and for programming will actually
go up as it gets easier.
I don't know which one of those statements is true, but it's certainly going to be an
interesting time to be a programmer.
Just the last thing, for people who, you said that people should try to use this and you
love to do this, but people, one of the things with AI is people talk a lot about it, but
there aren't a lot of people, I think,
in the general public who actually know
how to use it properly.
So what would be the 101?
I mean, how would somebody start with this?
So I would go to any website which talks about vibe coding,
and I don't wanna plug or sponsor one here,
whether it's Replit or Cursor or V0 or Lovable, which is a big European one.
And just ask it to build something.
Be direct, tell it what you want it to build.
It's there to help you.
And the something could be what?
So let's say your son is doing a birthday party.
Why don't you make a custom webpage
to celebrate that birthday party? Tell it what kind of colour palette you want, tell it what you want it
to say, maybe even upload some photos for it to include. Or perhaps you want to throw
a bingo, make a bingo card for a party, that's what I did the other day. Or perhaps you want a one-off website to host
kind of your wedding vows, just to put it up on the web. You can think through special
occasions where you think, it might be nice to have a website for this. Websites are super
easy to build. Building hardcore software, B2B, enterprise, SaaS,
whatever those words mean, is much harder.
But building little tools that bring joy
into your everyday life is much, much easier.
And so, ChatGPT or Claude or Google's Gemini,
all of these tools can kind of help you
through this process of what does it mean
to vibe code something together,
how do you put it up on the web, and can be really, really empowering to play with.
Tobin, this is really helpful.
Thank you very much.
Thank you very much for your time.
Tobin South is a researcher in artificial intelligence security at the Massachusetts
Institute of Technology.
He was in London.