Daybreak - A social media ban for under-16s is Big Tech's get-out-of-jail-free card. Here's why
Episode Date: March 10, 2026. Karnataka just announced it wants to ban children under 16 from social media. Goa and Andhra Pradesh are considering the same. And on paper, it sounds like exactly the kind of protection kids need — platforms like Meta have spent years knowingly exposing children to addiction, exploitation, and harm, while spending millions lobbying against any legislation that would stop them. So a ban feels like the only way. But here's the thing: when Karnataka made the announcement, Meta's response was more compliant than history would have suggested. And that restraint might be the most telling part of this story. Host Rachel Varghese explains. Tune in.
Transcript
Hi, this is Rohin Dharmakumar.
If you've heard any of the Ken's podcasts, you've probably heard me, my interruptions, my analogies,
and my contrarian takes on most topics.
And you might rightly be wondering why am I interrupting this episode too.
It's for a special announcement.
For the last few months, I and Seetharaman Ganeshan, my colleague and the Ken's deputy editor,
have been working on an ambitious new podcast.
It's called Intermission.
We want to tell the secret sauce stories of India's greatest companies.
Stories of how they were born, how they fought to survive, how they build their organizations and culture,
how they manage to innovate and thrive over decades, and most importantly, how they're poised today.
To do that, Seetharaman and I have been reading books, poring over reports, going through financial statements, digging up archives, and talking to dozens of people.
And if that wasn't enough, we also decided to throw video into the mix.
Yes, you heard that right.
Intermission has also had to find its footing in the world of multi-camera shoots in professional studios, laborious editing, and extensive post-production.
Seetharaman and I are still reeling from the intensity of our first studio recording.
Intermission launches on March 23rd.
To get alerted as soon as we release our first video episode,
please follow Intermission on Spotify and Apple Podcasts or subscribe to the Ken's
YouTube channel. You can find all the links at the-ken.com/IM. With that, back to your
episode. Last week, on Friday, Karnataka chief minister Siddaramaiah announced that
the state plans to ban children under 16 from using social media. It wasn't really a plan per se,
or at least not a detailed one.
It was just mentioned as a point during the state's budget speech.
Here's what he said.
To prevent the adverse effects on children from the use of mobile phones,
the use of social media will be prohibited for children under the age of 16.
Other states like Goa and Andhra Pradesh have also been studying the possibility of similar bans.
Even V. Anantha Nageswaran, who is India's chief economic advisor,
proposed age-based limits on access to social media platforms,
which he described as predatory.
Now, yes, predatory might sound like too strong a word.
But the thing is, it also does seem to capture the fear and anxiety
about children navigating social media in today's age.
Because it's no longer some abstract conversation like saying there's a monster
hiding under a kid's bed.
Platforms like Meta, YouTube and X have repeatedly been called out
for the harmful effects their algorithms and design features have on
children. Addiction, exposure to cyberbullying and sexual exploitation are all issues that
kids around the world have been facing online. And recent lawsuits and trials have alleged
that companies like Meta have been aware of exactly what's been happening, but have
held back from making any meaningful changes. Because, of course, they can't hurt their engagement,
because that would affect their revenue. So state-sanctioned bans do seem like the only way to go,
because it's not like platforms are going to do the work of protecting kids online anytime soon.
In fact, in the US, Meta actually spent $20 million in 2024
lobbying specifically to block child safety legislation.
And to make such a ban work, there would have to be a massive change to how all of us use these platforms.
Because the only way to make the ban stick is to require platforms to reliably verify an account creator's age.
Currently, the easiest way to do that is through government-issued IDs, facial scans or other biometric methods.
Imagine having to upload your Aadhaar or driver's license just because you feel like opening a new account.
Even on the company's side, that's a lot of complex and expensive technology to execute.
It would require building a whole new data collection system while also taking on compute costs for processing and storing this data indefinitely.
And Meta especially has strongly opposed any sort of change to how its platform works for years now.
Because how its platform works is exactly how it gets advertisers to pay it the big bucks.
But when news about the Karnataka ban broke, Meta's response, unlike in other countries, was surprisingly muted.
The company simply said that it would comply with local regulations.
There was zero pushback,
just an acknowledgement that banning a few apps, when children use some 40 others,
was unlikely to be very effective.
And think about it.
If this ban comes into effect in Karnataka and consequently in other states,
Meta stands to lose millions of accounts and billions of dollars in ad revenue.
On the surface, it seems like a massive loss.
But Meta's reaction seems to suggest that there's more to the story.
Welcome to Daybreak, a business podcast
from the Ken. I'm your host, Rachel Varghese, and every day of the week, my co-host Nikasha and I
will bring you one new story that is worth understanding and worth your time. Today is Wednesday,
the 11th of March. Now, blanket bans like the one Karnataka announced and the one Australia
enforced late last year have had many critics. And all of them have maintained a singular point:
teens are not the problem. The problem is inherent in the way the platforms work. There's even a major
trial going on in the US right now.
That's exactly about this.
1,800 complainants, including children, parents, schools and even state attorneys general,
have filed a lawsuit against major social media platforms like Meta, YouTube, TikTok and Snapchat.
The main allegation against these platforms is basically this:
that the addictive patterns users fall into while using these platforms
are not an unfortunate byproduct.
They are, in fact, intentional features,
and they're specifically meant to entrap teenagers.
Of course, lawsuits against Big Tech are not a new thing at all.
But what makes this trial stand out is the fact that the complainants are going at it from a different angle.
They're not fighting against harmful content on the platform.
They're arguing that the features of social media as a product are intentionally harmful.
Features like endless scroll, auto play, algorithmic content, push notifications, etc.
were all introduced to make social media an inescapable trap for your attention.
And that is a totally separate concern from the content itself.
Take Meta, which has been exposed as having been aware, through its internal research,
that it was having several adverse effects on its teen users.
Here's one concerning example.
Allegedly, an internal audit found that Instagram's "accounts you may follow" feature
recommended more than a million potentially inappropriate adult profiles to teen users in a single day.
Even then, Meta was very slow to make the change that would set all profiles of users under 16 to private by default.
In another instance, a leaked internal Instagram employee chat from 2020 revealed that when one employee asked what Instagram was doing about child grooming, the response was this.
Somewhere between zero and negligible.
Child safety is an explicit non-goal this half.
So that's basically what users and regulators are up against.
A platform that is quite fine with putting its youngest users in danger of sexual exploitation
with zero to negligible plans of doing anything to fix it.
The thing to note here is that Meta does not want to change how it works,
because how it works is exactly how it makes billions of dollars every year.
So guess what a ban does then?
It makes it easier for platforms like Meta to decide that they don't have to change.
Australian researchers from a digital advocacy group called the Digital Child
have, in fact, said that bans basically let platforms off the hook.
There's no longer pressure for them to actually build safer online environments for all users,
not just kids. Bans, in fact, push teen social media use underground.
Basically, that means children will find workarounds,
ending up on the same platforms, or be pushed onto newer,
smaller platforms that are even harder to regulate.
And while losing teens as an audience would certainly mean some losses in terms of ad revenue,
it's not as big a slice as you would think.
A Harvard study found that Instagram earned $4 billion from US teens aged 13 to 17 in 2022.
But against Meta's total 2024 ad revenue of over $160 billion, that's under 3% of the business.
We all know that these platforms work by selling our
attention to advertisers.
And making any change that risks losing attention,
like giving up some aspects of data collection
and thereby reducing algorithmic accuracy,
means there's less attention to sell to advertisers.
And of course, content that evokes strong emotion, like anger, hatred,
and shock, inherently captures attention much better than regular content.
A 2023 study also supported this claim:
it found that social media platforms that run on ad revenue
produce more extreme content than subscription-funded ones,
which choose to moderate their platforms.
That means this advertising model actually incentivizes less moderation, not more.
So, Meta stands to lose much more if governments push for real regulation instead of bans.
But when platforms are so unwilling to change,
it's not like the government is left with a lot of choice either.
It's just easier to ban teens from using social media than to make Big Tech comply.
Platforms are allowed to function as they do,
waiting until the day when a 17-year-old opens a new account
and walks into the same toxic mess all over again.
Which takes us to what happens when a ban is actually enforced.
Stay tuned.
Let me tell you about this case in Texas from 2022.
The state of Texas, USA, filed a lawsuit against Meta,
alleging that the company had been running facial recognition software on pretty much every face
from every photo that was uploaded on Facebook and Instagram.
This they did without informing or taking any consent from users.
They then used this data to build an AI program called DeepFace.
It was built as facial recognition software with a 97.35% accuracy rate.
Meta eventually had to pay Texas $1.4 billion in a settlement.
The lawsuit alleged that Meta's practices put Texans' well-being, safety and security at risk,
because biometric data is unique, permanent and susceptible to misuse.
Now, like I mentioned earlier, when it comes to making a ban like this stick,
platforms need to be able to verify the ages of people creating accounts.
And that means they need to be able to collect verifiable government IDs or use facial scans or other biometric data.
This shift is something that researchers
are calling an age verification trap.
Basically, companies like Meta now have a license to build a massive personal data infrastructure.
These are all incredibly invasive measures.
A Fortune article that covered the age verification trap noted that at the heart of
the issue is the fact that there is fundamentally no tool that can verify a user's age
without inherently violating that user's privacy.
IEEE Spectrum, a technology magazine, frames it as something that's pretty much structurally inevitable.
And what's worse is that what starts as a one-time age check becomes a permanent system.
See, most data collection under privacy laws today works on a simple principle:
specific data is collected for short periods of time and then deleted.
But with an age verification system, platforms will be required to, quote, log, retain, and
correlate user-level data for as long as the account tied to that data remains on the platform,
so that they can prove their compliance throughout. It's basically a surveillance architecture,
and it doesn't disappear after the check. It becomes part of the product. This infrastructure
is far from safe. For example, a 2025 breach of Discord's age verification vendor exposed over
70,000 government IDs in a single hack. So, with age bans and age verification
becoming legally required, it's basically like handing tech companies some of the most invasive,
sensitive data on a platter. And the timing couldn't have worked out better for them either. Because while
Meta had shut down DeepFace due to privacy and ethical concerns in 2021, it's back at its facial
recognition software game again. Mid last month, Meta announced that it would be introducing
a feature called Name Tag for its Ray-Ban smart glasses. Want to guess what it does? Well, the promise
is that wearers can identify people and get information about them through Meta's AI assistant.
It's supposed to be able to do that only through publicly available information.
But think about it.
Meta has been known to collect and use sensitive data for its own purposes for years now.
The Texas case I brought up is just a recent example.
So how exactly are we supposed to trust it with our faces, names and other personal details?
And speaking of AI assistants, it's important to note that all major tech companies, including Meta, are in a race to build the best AI, and specifically agentic AI.
Meta especially has seen an increase in its capital expenditure of $39 billion, which it has attributed directly to its AI and Superintelligence Labs.
Now, agentic AIs are different from your regular chatbot AIs because they don't just respond to prompts.
They carry out entire tasks with their own autonomous agency.
And the very foundation for this kind of tech to work is an understanding of who the user is.
Mark Zuckerberg has claimed that when it comes to shipping agentic AI tools,
Meta's edge lies in its unparalleled access to personal data,
data that is only going to get richer with access to government IDs and biometric information.
So at the end of the day, all this ban is possibly doing is
keeping teens off these screens for a few years,
while also making you hand over your most sensitive data
to companies that have historically misused it.
The question to ask is,
who exactly is this ban protecting?
Daybreak is produced from the newsroom of the Ken India's first subscriber-focused business news
platform.
What you're listening to is just a small sample of our subscriber-only offerings.
A full subscription offers daily long-form feature stories,
newsletters and a whole bunch of premium podcasts.
To subscribe, head to the-ken.com and click on the red subscribe button at the top of the Ken website.
Today's episode was hosted and produced by my colleague Rachel Varghese and edited by Rajiv Sien.
