Tech Won't Save Us - Smart Glasses Are Ushering In An Anti-Social World w/ Chris Gilliard
Episode Date: October 16, 2025. Paris Marx is joined by Chris Gilliard to discuss how tech CEOs are pushing a new generation of AI-powered smart glasses by promising they'll be stylish and indispensable to workers in a desperate attempt to convince us we should want their luxury surveillance gadgets. Chris Gilliard is the co-director of the Critical Internet Studies Institute and is working on a book called Luxury Surveillance. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson. Also mentioned in this episode: Harvard students used Meta glasses to dox strangers. Apple tinkered with its AI shortly after Trump was inaugurated. AI may start answering 911 calls. Studies are suggesting that AI may be making doctors worse at diagnosing cancer. Rob Horning has written about overreliance on AI. Chris mentioned Emily Bender and her analogy comparing LLMs to plastic.
Transcript
What their need is is for you to have a computer on your face, not for you to be able to do X, Y, and Z better.
It's not to liberate you from your screen.
They want you to be a constant source of data, not only a constant source of data in terms of how you are ingesting the world when you wear this technology, but also the extent to which you would be in constant communication with AI.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week my guest is Chris Gilliard.
Chris is the co-director of the Critical Internet Studies Institute
and the author of the forthcoming book, Luxury Surveillance, that will hopefully be out next year.
As I'm sure you've seen, all of the major tech companies seem to have decided that smart
glasses are their next big thing. Meta is rolling out its new display glasses in partnership
with Luxottica. You know, it has its Ray-Bans and now its Oakleys. Google has this new
partnership for Android XR with Warby Parker. Apple is apparently working on some smart
glasses and, you know, there are plenty of other companies doing this as well. They seem to have
decided that this is the moment to push to get screens in front of our faces, constantly,
not just on our phones, but right in the lens of the glasses that so many people wear day in and day out. And I don't know about you, but to me, this is a pretty
dystopian vision of the future. And it's no surprise at all that they're trying to finally
make the smart glasses happen. And of course, using generative AI as part of the justification
to do that. And so when I was thinking about who I wanted to talk to, to understand more about the
wider implications of this push by the major tech companies, by the Mark Zuckerbergs and the
Sergey Brins of the world, I can think of no one better to talk to than Chris Gilliard.
Chris has been on the show in the past, of course.
I think he is one of the most insightful critics of the tech industry going today.
And of course, his work on luxury surveillance means that he is perfectly positioned to
talk about this effort to expand the tech that we're wearing, the cameras that are on our
faces, you know, everything that is associated with this.
And you will not be disappointed by the insights and the criticisms that he provides in this episode. He makes it abundantly clear that we need to be pushing back against this vision
for what our future should be, where we're all wearing these glasses, where so many people are
being recorded, where the surveillance infrastructure that these companies have already created
is taken yet another step forward and reduces our ability to ever escape the net of surveillance
that they are creating. So really, like, what is there to say other than that, other than
you need to listen to Chris's insights? And of course, you should be following him over on Bluesky as well, if you're not already doing that. To me, he was an essential follow on Twitter, and when he made the move over, I was like, I've got to follow Chris, I cannot lose the insights that Chris provides me, because I just think he's fantastic. So with that said, if you enjoy this week's conversation, make sure to leave a five-star review on your podcast platform of choice. You can share the show on social media or with any friends or colleagues who you think would learn from it. And if you do want to support the work that goes into making Tech Won't Save Us every single week, so we can keep having critical, in-depth conversations on so many different aspects of the tech industry and the ways that it affects the society that we all live in, while getting access to ad-free episodes and even stickers if you support at a certain tier, you can join supporters like Danny from Phoenixville, Ali from San Diego, Rakash from Kingston in the UK, and Toby in Los Angeles by going to patreon.com slash Tech Won't Save Us, where you can become a supporter as well.
Thanks so much and enjoy this week's conversation. Chris, welcome back to Tech Won't Save Us.
Thank you. It's, as always, great to be here.
It's been too long since you were on the show last. And I was like, man, I need my fix of Chris
Gilliard. And so now is the moment, right? Because I feel like me and I'm sure so many more
of the listeners, like, appreciate so much your, like, feed of criticism of all of this
bullshit on social media. But it's also nice to, you know, get a long form on a topic like this
because you're so knowledgeable on all of this stuff, which is why I was like, you know what,
all these companies are doing this smart glass shit. Chris needs to come back on the show and tear
this apart for us. Yeah, I mean, I relish the opportunity. Let's start in the obvious place.
I feel like we had defeated smart glasses. You know, 10 years ago, a little more than that,
Google rolls out its Google Glass. It really wants to make sure that we're sticking computers on our
faces and, you know, makes a big push to try to make that the future in that moment.
You know, Sergey Brin is showing it off and thinks it's so cool.
And obviously, it gets ridiculed, right?
You know, people are called glassholes, and ultimately it ends up in, like, you know,
the dustbin of history.
And now, 10 years on from that, smart glasses are everywhere again.
The companies are pushing it again.
What is going on here?
And what do you make of the return of smart glasses a decade after Google Glass?
Yeah.
I mean, I love the example of Google Glass because it offers us an opportunity.
to think about what rejection of a technology looks like.
Smart glasses, or I think they're a great example of a profoundly antisocial technology.
Now, unfortunately, we have several current examples, but I think smart glasses are one of them.
They were roundly rejected, and, you know, it seemed for a moment that they would not come back.
But Meta is out with their Ray-Bans.
Amazon is supposedly developing a version.
Apple now wants to get into the game.
There's a heavy push by a lot of these large companies
to get you to put a computer on your face.
It's so depressing, right?
Like, I don't know, there's computers in so many places.
The last place they should go is, like, directly in front of my eyeballs.
But it has been, like, really interesting to see that push.
And I keep wondering, like, if the kind of campaign that was levied against Google Glass can be replicated.
Because I feel like one of the really distinct differences between now and then is like, okay, on the one hand, we are much more used to like having screens and even cameras everywhere now than I think we would have been 10 plus years ago.
But one of the things that was quite distinct, I feel like, about Google Glass is that it was quite ugly, right?
It was easy to ridicule it.
It really stood out if someone had it on their face.
And it feels like the companies have really learned from that this time around, where, like, they're explicitly going to the Luxotticas, you know, the Ray-Bans and the Oakleys, or Google has a new partnership with Warby Parker, you know, trying to make it fashionable, right, to have this on your face. So it tries to blend in, other than the little black camera holes on the side or in the middle or whatever. I wonder how you think about those differences and, you know, whether the companies are as vulnerable now to the campaign that was levied against Google Glass 10 years ago. Yeah. So I have a lot of ideas about this.
I think first, the aesthetics, I think, are really important.
I mean, this is what I talk about a lot when I talk about luxury surveillance, that the goal, in part, is to make these things sort of blend in, make them seem cute or cool or fashionable in ways that try to make it appealing or less ugly.
In doing some of the research for the book, I happened upon an article by Sarah Roberts and Safiya Noble that talked about Google Glass and how they tried really hard to market it as a luxury item, to, like, drum up appeal for the glasses.
I mean, it didn't work, like some rich people bought them, but they were roundly rejected.
That is one of the lessons that these companies have learned, you know, so they are capable of learning.
To have them be so obviously a surveillance device, for my taste, they always look like a scouter from Dragon Ball Z, for people who, you know, remember that anime.
But to have them be so obnoxious and obvious was a signal to everyone about what was going on.
And so in most cases now, they are made to appear just like sort of typical or normal eyewear, perhaps a little more bulky.
That is, yeah, it's a piece that they've learned to change.
I think that rejection is going to look somewhat different.
And that's for a couple reasons.
One is it's not always obvious that someone is wearing these glasses.
The other thing is I think what they've learned is there's a strong push by these companies to have everyone wear them.
And what I mean by that is, for instance, Amazon is planning to have a version that, say, delivery drivers will wear.
Now, Amazon says that this is going to shave seconds off of delivery times so that if a driver is wearing the glasses,
they can receive turn-by-turn instructions.
And so if they're getting off an elevator,
instead of them mistakenly turning left and delivering your package a few seconds later, you know, than they anticipated, the glasses can instruct them to turn right and,
you know,
sort of optimize the route, right?
Like it's Taylorism's final form.
You're relying on some like really specific maps of like every building in the world?
yeah i mean and also yeah like that's it's going to help create that like the glasses are both
you know producing surveillance and in the output of that surveillance also is helping to optimize
i mean that's what they say but part of that i think is why it's going to be a little bit more
difficult to reject them so that there'll be lots of cases where people might be wearing them
where they don't have a choice you know for instance like we're starting a
see Axon, which is like the body cam company, they are now pushing body cams for
like hospital workers and for retail workers. You know, I can see a future where instead of
body cams, now there's Facebook glasses or some version, Meta or Google, but there's some version
of these things that workers are encouraged or forced to wear. And so rejection is going to be
a little bit different so that the way that people who were wearing them were ostracized
when they showed up in public, you know, when they came to a bar, when they went to a concert,
when they were at the gym, you know, in a locker room, things like that will be a lot more difficult
because there'll be a segment of the population who are wearing them because it's part of their
requirements for fulfilling their job. Now that's not to say that we cannot reject them. I mean, I think
there's still some strategies we can learn from that. And I think, you know, kind of shame and
ostracism in terms of people conducting antisocial behavior, I think we need to bring that
back, to be honest. I mean, I think that the idea that tech companies promote that people
should be able to engage in profoundly antisocial behaviors and like we just have to put up with
it, I think if we look around, then we can see some of the results of that. And they're not
good. So I think we should bring back some of the ways that we rejected those technologies.
No question. And I want to come back to that notion of like the strategies that we can use
to push back on it this time around like a little bit later in our conversation. So we'll get
to that once we have discussed a bit more of what these companies are doing in this moment as
well. And so, you know, I want to take what you're saying there about this really grim future
that you are laying out. We're already in a pretty grim present. But like, you know, layering on
this additional form of surveillance and recording that is being enabled by these supposedly
you know, kind of fashionable glasses or whatever, that are embedded with all this tech, sounds terrible. But I think it's also worth understanding what the companies are actually pushing in this moment, right? Like, more tangibly. So, you know, we saw some announcements from Meta recently about what they are promoting, and we've heard things from Apple and Google, and you mentioned Amazon as well. Can you give us a bit of a lowdown on what the companies are actually presenting right now and how they're trying to sell this product to the public?
Yeah.
So one of the biggest things is the claim that the glasses will liberate you from your screen.
We've seen, you know, the last 10, 15 years of smartphones.
And I think people have begun to recognize, again, I'm going to use this term, I've used it a ton already, but we've seen sort of the antisocial effects of people pulling out their phones during the course of their days. So someone pushing a stroller while they're looking at their phone, right?
So instead of enjoying their time with their child in the park,
you know, they're looking at their phone.
I mean, there's countless examples that we could point to.
Like, we all know them at this point.
One of the persistent claims in terms of the luxury end of it, right,
the people who would willingly adopt these glasses
is that it will liberate you from your screen.
Now, that's not true, but we can come back to that.
Liberate you by sticking a permanent screen right in front of your eye.
Right, right.
The second thing is that it's somehow sort of onerous to pull out your phone.
So if you're at a thing where you would want to capture video, for instance,
or get turn-by-turn instructions or something like that, that it's onerous to pull out your phone
and do these things.
So glasses will make that easier.
And I think the third thing, and there's several more, but I think these are the biggest
ones. The third thing is that it is a portal for constant interaction with AI. And so that instead of interacting with AI through a screen, you know, a chatbot or what have you, it can now be consistently in your ear. Now it can again give you turn
by turn instructions. And I'm speaking like this is what the companies would say. It can give you turn-by-turn instructions, it can give you affirmations. It can look up something for you. It can interpret what you're seeing. It will allow AI, gen AI, to be your constant companion. Yeah. I saw
this quote from Mark Zuckerberg. Like, he said recently that wearing these smart glasses allows AI, generative AI, to see what you see and hear what you hear. It's like, you know, the sales pitch that you're talking about, right? Yeah. I mean, and Zuckerberg has gone so far as to say that
people in the future who don't wear these will be at a cognitive disadvantage. Now, this is a
nakedly eugenicist assertion. I think it needs to be called out as such. You know, the idea is that
this technology on your face will enhance you in ways that you can't really imagine until you participate in it. The first step toward the, like, desired cyborgish kind of future
that they want to see. I was struck when you were saying that about AI. Like I want to come
back to the broader points. But like I was saying, Google co-founder Sergey Brin was a big pusher
of Google Glass back in the day. I believe it was because he was heading up like the Google X
division or whatever. And it was part of that. But he did an interview, I believe, back in May,
where he was talking about how, you know, there was a, like the technology wasn't where it needed
to be in 2014 for Google Glass. And that was part of the reason that it failed. I think we could
debate that. But he's basically saying that now the technology is there.
because precisely as you're saying, AI is going to allow you to like do all these things by
putting a computer on your face. And so of course, like everybody is going to want this.
Like the vision of the future that these guys have seems so divorced from like, I don't know,
what most people, I feel like, would really desire from technology, though they can
certainly, some people will certainly be sold it or be convinced that it's an appealing thing.
But also just like that this is what they think the future should look like.
It's just so impoverished, you know?
I think it needs to be noted how divorced these folks are from what life looks like for the everyday person.
One of the most profound illustrations of that is how bad they have been at marketing gen AI in general.
I mean, we've all seen the commercials, whether it's from OpenAI or from Meta or from Apple,
of these commercials where they don't really resonate with almost anyone.
I was talking last week about the commercial, and it's about a year old or maybe a little bit more, where there's a young girl and her father, and the young girl is a fan of a track and field athlete and wants to write her a letter. And so the dad signs into whatever, I don't remember if it's ChatGPT, but he signs into whatever chatbot and helps the daughter generate a letter to her idol. You know, I mean, and it's like, come on, it's so very much removed from what I think people, you know, really want or need. The superstar athlete, inasmuch as she would appreciate a letter from a fan who's a seven-year-old girl, doesn't really want it generated by a chatbot.
You know, I mean, I'm not a superstar athlete, but like, I think that's probably accurate.
You know, there's another commercial where a guy is going to meet his girlfriend's dad. And the girlfriend's dad is some kind of high-powered scientist. He's like a neuroscientist or astrophysicist or something like that. So the guy spends a little bit of time interacting with, again, Gemini or ChatGPT or something like that, sort of tutoring himself in that field so that when he shows up to talk to the dad, he can pretend
he knows something about it. Now, again, this is jerk behavior. I mean, if you are going to meet someone who has, like, an intimidating job, or does something really interesting, or is in a field that you know nothing about, the sort of more human thing to do is say, hey, that's really interesting, let's talk about it, like, I'd love to know more about it, instead of being a fraud and showing up pretending that you know something about it. The demo with the Meta glasses, one of the more recent ones, where Zuckerberg was at a table with all these ingredients and he says, well, show me what I can cook with this. Maybe there are people who do those things, right? But I don't think there are a lot of people who would find a lot of use in these specific kinds of cases.
I think that there are reasons why these companies want you to stuff a computer on your face.
And so what they're left with then is trying to find reasons to make you want to do that.
What their need is is for you to have a computer on your face, not for you to be able to do X, Y, and Z better.
It's not to liberate you from your screen.
They want you to be a constant source of data, not only a constant source of data in terms of how
you are ingesting the world when you wear this technology, but also the extent to which
you would be in constant communication with AI. And that, I think, is the impetus behind so many
of the wearables, whether that's the pendants, the rings, the bracelets, you know, anything
that conditions you to be a constant source of data, to always be datafied, is sort of the next step. Because, as we talked about with workers, it's like producing more accurate maps of the world, always knowing what you're up to, having the ability to constantly be in your ear directing you, in ways that I think will often be more of a benefit for the company than for the individual. I feel like even hearing you describe that, it brings to mind
something I've been thinking about about why these smart glasses are re-emerging in this moment
specifically. And it feels like in part, there's like the leftover of the metaverse moment and all
of the money that meta in particular put into trying to realize this kind of virtual or augmented
reality environment that was, as you say, very much about having you interacting with a screen
much more often, collecting much more data about you on a constant basis. The idea being that,
you know, you are going to spend a lot more of your time in this metaverse with this headset on your face rather than out in the world, so to speak. And I think it's clear that Mark Zuckerberg
always had the ambition to do something like smart glasses, and the tech just wasn't there a few years
ago to try to promote the type of thing that he's doing right now. And then, of course, the generative
AI moment that followed that layers on another opportunity where you can say, okay, we can stick this on
your face. You know, the tech has advanced to the degree that we can like project a screen onto the
glasses and maybe like animate some things in your field of vision and like use computer vision
and stuff to detect what is in front of you. But then on the other hand, it's like now we're
layering in this other big bet that we're making to try to add some like more appealing features
and the kind of, you know, voice response system that wouldn't have worked nearly as well a few
years ago as well. So I don't know, to me it feels like a real intersection of those two moments and trying to push it forward in this way to try to find something tangible from both of those big experiments, I guess.
I think one of the things that's interesting, you know, interesting pejoratively, is that the model has changed for what these companies are trying to do and what they say they're trying to do.
And so what I mean by that is for a long period of time, the sort of deal, and this was often
openly articulated by the companies, the deal was that you would get free services in exchange
for some surveillance, right?
Google's free.
You know, Facebook's free, right?
These are all in quotation marks.
These systems and technologies and platforms are free, and the way you pay for them is in some modest
form of surveillance.
Now, that deal was always a lie, right?
I mean, so the deal is a lie because there was not moderate surveillance.
It was intense surveillance in ways that most people, I think, were not aware of.
You know, we find that there's Facebook pixels in the site where you go to make an appointment with your doctor.
We find that, you know, Google's driving past your house, right?
You know, on and on and on.
We could spend the entire length of the podcast talking about these abuses, and these were part of the deal, you know, that these companies rarely mentioned.
The deal has sort of changed.
And now what these companies are saying is not that you pay for the thing you want with a little bit of surveillance. They're trying to now convince us that surveillance is the thing we actually want.
You know, the good things that you'll get will only come, you know, through like intense
surveillance, through like having this thing on your face all the time, through having this
thing that constantly soaks up not only kind of what you see, but also what you hear.
And, you know, further, kind of your biometrics, right?
Like your tone of voice, your heart rate, you know, your blood pressure, you know, whatever it is.
That surveillance then becomes, like, the thing.
The way I've been thinking about it is it's like a third wave of algorithms.
That, again, part of how they used to talk about these systems
is that they were built to connect you.
And, you know, I'll say specifically with Facebook,
but we can also think about this with other platforms,
not only through meta, but, you know, things like Spotify,
things like Google, like all these things.
That what they did was to, you know, connect you.
That the idea with Facebook was, we can let you know what your friends are up to.
You know, Zuckerberg's got that famous line.
I'm paraphrasing a little bit, but he was quoted as saying like a squirrel dying in your front yard may be more important to you than people dying in Africa.
Like this is early Zuckerberg, okay, early Facebook.
I hadn't heard that one, but sounds about right.
Yeah. Wow.
That's early Facebook. We connect.
Now the second wave is curate. That algorithms do all these fabulous things for you, that the finer-tuned the algorithm is, the better it will be at connecting you with things that you want, even when you didn't know you
wanted them. This is like Facebook, but this is also Spotify. This is also TikTok. I think the
third wave and why they're a lot less coy about the surveillance aspect is, you know, the third C.
I call it, like, the command phase, right? The idea is now that you will be able to optimize your life if you just listen to the AI, right, if you give yourself over to the algorithm. And the only way for you to do that, you know, in terms of what the company needs, is for the company to have all the information on you, right? Whether that's agentic AI, you know, again in quotation marks, or whether that's a device that, you know, sort of inhales and captures everything you do. It sort of gets rid of the messiness of trying to predict what you'll do and gets more into, like, trying to train you to do a certain thing. And I know this sounds a little bit sort of conspiratorial and things like that,
but look at what they say. Look at how they talk about it. They've been far more explicit.
And again, I think Zuckerberg is sort of the worst or the most obvious person. I think Altman does
this a lot. Any AI booster who works at or runs these companies or does extreme promotion of
AI. The idea is that with enough information, the AI can help you optimize your life.
What that necessitates is you do what the AI says you do. You know, you watch the shows that
it tells you to watch, right? You go to bed when it says go to bed. The interesting thing too,
right, is like I thought for a long time nobody wanted this. I think there are people who want
this. This brings me no joy to say. I think there are people who would happily cede their lives to the machine, make fewer decisions, if it made life easier.
It's like, you know, the people who prefer like order, even if it comes with the curtailment
of freedoms, right?
But I don't think what you're laying out is conspiratorial at all.
Like, I think it's a reflection of what we're seeing these executives say.
And obviously what these companies are very much trying to do and the use cases that
they're promoting for these tools, right?
And as you're saying, like, yeah, we can talk about smart glasses, but smart glasses seem
just intimately connected to the generative AI moment in such a way that they can't really
be disentwined, right? Because the whole idea of why you would put the computer on your face
is so that you can interact with the AI bot, the AI agent, the chat bot, whatever. And it's like
we saw those earlier attempts to push out, you remember those little little like squares basically
that a few companies released that like kind of go on your lapel or whatnot. And they were like
kind of ridiculed. They were like, this is stupid.
You know, and they kind of fell apart. But it does really feel that the companies are making a big push to move into that space and to justify it even more. And it's like the wearables already started to accustom us to it. You know, you were talking about the notion that surveillance ultimately becomes the product, and it feels like Apple has been moving in that way for quite some time, right? It doesn't have the smart glasses, but through the Apple Watch, and now the headphones are getting it. Remember when they released the Apple Watch initially, and it was kind of positioned as a fashion product, right? You know, they had this big announcement with, like, the fashion magazines and all this kind of stuff, because that's what Jony Ive wanted. And once they found, like, the real use case, it was like, oh, okay, this is, like, fitness and health, right? This is tracking all these things. And you see when they do the keynotes now, there are these big, like, videos that come before the Apple Watch announcements about how many people whose lives have maybe been saved because they were wearing an Apple Watch and all this kind of stuff, right? Like, it's explicit in the pitch.
And now it's like they're just taking the kind of foundation that they've already built here and like taking it to a whole other level.
Yeah.
You know, I think, I mean, there's a bunch of interesting things in what you said.
I mean, one of the points I wanted to bring up is, like, tech bros of one sort or another have been fantasizing about this for a long time. Yeah, I was just reading about Woody Bledsoe, who was one of the early researchers into facial recognition.
And one of the things he wrote about was the idea that he would have a pair of glasses, you know, with a microphone, equipped with facial recognition, so that when he
encountered people on the street, it could tell him who the person was, right?
It could identify his friends, for instance, right?
But also, yeah, I think that that move, the suggestion that these devices are also going to be health devices, is an interesting and rather dire turn as well, because of the moment we're living in. RFK Jr., I think this summer, said that, in his vision of the future, he believes that every American should be equipped with a wearable.
Yeah, you shouldn't have vaccines, but you can have this, like, device on you all the time.
Yeah. Right.
This is a terrible future, right?
I don't think this is a good idea, you know, but the companies are very closely aligned with this, with this mission.
And to blur the line between what is, you know, a luxury device, a fitness device, and a health device.
I think a thing that turns health into a set of individual choices, you know, directly serves, again, like a eugenicist and fascist directive.
When you're talking about, like, even that vision of using the glasses to be able to identify someone who's, like, going up to speak with you, a few things come to mind there initially.
First is, like, the scene in the film The Devil Wears Prada, you know, where Meryl Streep's character is basically the Anna Wintour, the editor-in-chief of the fashion magazine, and she's at this, you know, kind of gala sort of thing where she has to greet all these people. And her two assistants are basically trained on this big book of people, so that if someone starts to approach her and she doesn't know who it is, she can kind of lean back and they can say the name into her ear and who it is. You know, and it's like, okay, automating the personal assistant role in that to help the kind of wealthy person know what's going on. But then also, the more tangible piece is, I believe it was 404 Media that reported on this experiment by these Harvard students, where they tried out the Meta smart glasses and an application and were able to dox anybody they wanted, like, in real time, figure out their information based on facial recognition, just out in the streets. And it's like, so these tools already kind of have that capability. And it's not just like, oh, I want to see who this person is when I greet them. It's like, now I can figure out anybody out there in public and get their information just by scanning their face with my glasses. And like, that's a scary step that these companies certainly don't want us
to be thinking about, right? Yeah. It's a stalker's fantasy that is unfortunately about to
become our reality. I mean, the two Harvard students you mentioned, I mean, they had a very
clunky version that they were able to put together. And since Meta's been playing with AR and VR, people have approached them and asked whether they'd put facial recognition into their devices.
They had been very coy about this, and they did what these companies do a lot of times, which is to say, well, you know, we'll need to sort of figure out how society thinks about that, right?
I mean, that's never what they really do.
They wait for a moment where it is more palatable or seems that they'll face less backlash, and then they do what they had planned to do probably all along.
Now, as far as I know, the current versions do not have facial recognition in them, but I feel like I've seen some reports that this is a thing that is going to be integrated into the glasses sooner rather than later.
And again, like, this is like a profoundly anti-social application.
Like, I don't think facial recognition should exist.
Like, I'll be super honest about that.
But I think that these things lower the barrier for toxic behavior. For instance, I can't remember the name of the Facebook or Meta executive who said this, but I can point you to the article where they said it. Someone approached them and asked them about the two Harvard students who made the glasses.
And what this Meta exec said was, well, you could do that with a phone, right? Now, it's true, you could do that with a phone. But I think what this fails to sort of reckon with is that each iteration of these things sort of lowers the barrier for toxic behavior. And so you could walk around with your phone out, scanning people's faces and doxing them, but to wear your phone on your face and do that with everyone, it's, like, a little easier. And, you know, some people point to the sort of light that supposedly shines when you're recording with Meta glasses and things like that. I don't think that's sufficient, and I don't know if they work or not, but several outlets have reported that people are selling stickers online that occlude the light but allow the camera to still work.
And whatever sort of positive use cases you might imagine, okay, maybe you forget people's names and you would like a thing that reminds you of who you're talking
to.
That's possible.
But I don't think that that outweighs all the sort of negative things that will be done
with this, right?
Whether that's, like, doxing people at protests, stalking people, producing or livestreaming hateful content, you know, or atrocities.
It's already been done, you know, used by a fascistic government to more easily produce and stream content.
It's already been done.
I think that when you lower the barrier for these activities, which is what these companies are doing, it makes society worse.
Some of these things are possible now, it's just slightly more difficult to do them.
And I think the more difficult it is to do toxic and antisocial things, the better.
I wonder as well, like, how you're thinking about how they're selling this to people, right?
You know, you said that there are obviously going to be people who are, you know, who want to have
their lives kind of controlled and managed by generative AI.
But when I think about, say, the smart glasses, you know, even people I know who are generally
like, I would never wear a pair of smart glasses, like, this is stupid.
You know, when Google and Meta showed off their tools to basically do the live translation feature, they were like, yeah, but that would be kind of convenient, right? Now, whether it works in practice how they were presenting it in, like, a demo, I'm sure it's maybe not as seamless. But I wonder what you make of that, and, like, how they effectively try to
position these potential benefits to try to get people on board. Yeah. One of the things I like
to always point out to people is that we could have these things in different ways, that there
are very conscious design decisions that go into these things. For instance, live translation.
Could it be in earbuds? Sure, it could. But that's, like, a different kind of thing than Meta ingesting everything you do. I believe Apple showed off earbuds with, like, a similar feature more recently, I think. Yeah. I mean, but they're also putting, they plan to put cameras in their earbuds. Right. Oh my God. Right. But the other thing is, it cannot be removed
from the fact that these are some of the most extractive and oppressive companies to ever exist.
And so it can't be removed.
If you look at what might be the positive use cases or the benefits of these things, it can't be removed from the current situation and the history of these companies.
For instance, I mean, Facebook is a company that had information that the content they were
producing was negatively affecting girls and women's body image.
Their own research told them that, and they kept doing it. You know, more recently, they developed chatbots that were able to have sexy talk with minors, right? They had a grooming chatbot that they only backed down on after journalists and researchers pointed it out. These are some of the worst companies to ever exist, that routinely do things to harm their users if it benefits the company, if it keeps people on the platform, if it maintains people's dependence on the platform, that their metrics are more important than our well-being, people's health, the social fabric, like, any of those things.
They've demonstrated this over and over and over again.
And so, yeah, would it be cool to have some of these functions?
Yeah, like, I can admit that.
Right now, it can't be removed from a company or a set of companies that have openly aligned themselves with authoritarianism and have a terrible track record in terms of the
ways that they privilege the bottom line and the dependence on a platform over people's social
and mental and physical well-being. Bringing up the broader social context, I wanted to get
to that as well, right? Because I feel like there's a pretty common meme that spreads online,
you know, meme, argument, whatever you want to say, that generative AI is, like,
basically the aesthetic of fascism at the moment. You know, you see how, say, the Trump administration
and, you know, these really right-wing groups kind of use generative AI in order to create a
particular form of media that justifies and amplifies the politics that they are spreading, right?
You know, we see in particular in the United States, but other countries as well where this is
playing out. And, you know, when we talk about smart glasses and the kind of surveillance and the potential
deployments of these technologies. I wonder how you think about that in relation to the kind of
fascist political moment that we find ourselves in. Yeah, I mean, I think, again, like, I look no
further than the report from 404 Media where CBP was wearing a pair of Meta Ray-Bans. And I think
the other thing to think about, too, is the Anduril and Meta partnership to produce, you know, some form of VR/AR goggles for the military.
I think it's understated the degree to which all of these companies that started out as some sort of social media platform or something like that are now also weapons dealers in some form or another, they're arms dealers in some form or another, they're government
contractors in ways that certainly they didn't start out to be, right?
But whether they're providing hardware or infrastructure, all of these companies now are
government contractors and arms dealers. I don't think that these things are sort of incidental.
And again, I don't think it can be removed from the larger context. I think a lot always about
who was lined up at the inauguration, who was like in the front row, you know, and that was
Zuckerberg, and that was the head of Google and the head of Microsoft. That was a very clear
indication of their priorities and their values. And I think what's also important, you know,
when we're talking about gen AI is that I think Grok is the best illustration of this. We can think
about sort of Trump's edict against woke AI and things like that. But another thing I think is understated is the degree to which part of the function of gen AI is to have us acclimated to chatbots as a source of truth. Now, Grok being the best example, because we've seen countless
times where when it produced answers that didn't agree with Musk's ideology, that there was some
tinkering with it. And what that resulted in is kind of MechaHitler and things like that.
I think that other companies are up to things that are very similar. You know, we've seen this
with Google, for instance, when they admitted in the monopoly trial that their primary search, right, the search that we've become accustomed to, had been degraded because they're trying to switch people over to the AI summaries, the Gemini summaries, and things like that.
And they didn't really care that their primary search had been degraded, one, because, you know,
it meant people would spend more time on Google, but also that the hope was that people would then
come to see Gemini as a source for truth. Again, I think this is really dangerous.
Dude, it's so grim to hear you describe that. Yeah. Because they are clearly aligned with authoritarian forces. And they're also the companies that are saying, use our technology to provide you with the truth. And I don't think those things can be separated
from one another. Definitely. And there was that report the other day as well that they were
suppressing searches about Trump and some kind of health condition, I guess, that people have been talking about him potentially having. And there was another report that Apple had also, kind of after Trump was reelected, tinkered with its AI tools to change the type of responses it would
generate about Trump or something like that, you know, like kind of giving into, you know,
without explicitly being demanded of them, but making sure that they wouldn't kind of create
something that would piss off Trump and his followers, right?
Yeah.
And these are things, I mean, they're in learning management systems.
They're in schools.
They are encouraging us to have it as a best friend, as a therapist, as a life coach. They're using it to answer 911 calls, right? I mean, all kinds of things, right? And
the implications for that are frankly awful. You know, and so the idea that we would have
devices that are constantly dripping poison into our ear, I don't think that that's like a net
positive for society. Absolutely not. In the same way social media isn't, if it ever was,
you know. You say dripping poison in your ear, but it's like, where was I? I was at, like, a journalism conference the other day, and somebody said that social media, like a platform like Facebook, is almost like if the water pipe coming into your home was bringing your drinking water and your sewage in the same pipe, right? It's like, it all just gets full of sewage, you know? It's not drinkable, it's gross, it's terrible, like, what they have created and
what we now have to deal with and I think what you're saying about kind of assessing the drawbacks
and the risks alongside the things that they're trying to sell us that might be good is like a
really important way to look at it because that's how I tend to look at these things more and more
is like, okay, there might be some benefits here, but does that outweigh the real drawbacks that
come with it, you know, the ways that it's causing social harm in the way that you're saying? And
you know, with a lot of these things, I'm less and less convinced. Yeah. And I think another thing
I haven't mentioned, you know, a couple months back, Zuckerberg talked about, you know, what he
called the male loneliness crisis and that chatbots and gen AI are part of his solution to that.
And I think, again, knowing what we know about these companies, knowing what we know about
the extent to which they will visit harm upon people if it means people keep using their
product, the idea that these tools could be your friends or your confidants, I think is
really dangerous.
And I think that it can't be overstated the extent to which part of why these tools exist
is to isolate us, to insert these companies in between us and all the relationships that we have.
That's why they say, well, you can use it on dating sites.
You know, you can talk to it about your problems, you know, things like that.
But, you know, going back to some of the other wearables, you know, for as long as people
have existed, you know, mostly when they wake up in the morning, they know if they have
slept well or not, for instance.
They didn't need a device to tell them that.
These devices, particularly with gen AI, exist to foster a sense of distrust in
yourself in terms of, you know, what you think and how you feel. When I go to bed tonight and wake up
tomorrow, you know, God willing, like I will have some sense of whether or not I slept well.
And I'll know, for instance, well, part of it is probably that I stayed up late watching or playing
video games or, you know, I ate something I shouldn't have eaten late at night, you know, or
I'm stressed about something, and I can adjust myself or my habits accordingly.
That most of the time we know these things.
And so offloading that to a device, I think gives the company a kind of power that we don't
want to cede.
You know, I mean, we've already seen, for instance, the extent to which using GenAI kind
of erodes people's ability for critical thinking.
We've seen how it lessens doctors' ability, for instance, to diagnose.
We've seen lots of examples like this, and I think that the further reliance on these things
makes it possible that those things will be exacerbated.
It's not really on smart glasses, but since we've kind of expanded the conversation a bit,
I have to ask you about Sora 2, the video generator that OpenAI released recently, that has, of course, taken the internet and social media and the discourse by storm, but is also, clearly, just within the first few days, showing how it can be used in really negative and
harmful ways. So when you see a product like that emerge onto the scene, what are you thinking?
Yeah. I mean, for people who don't know who I am or follow me on Bluesky, I mean, I refer to OpenAI as a social arsonist. Again, I'll be honest. Like, I am not the audience or market for these
tools. But I cannot name you a socially positive use of these things. And if you can name one, does it outweigh the abuses, right? And it's actually not fair to call them abuses
because this is actually to me why they exist. The deep fakes, the ability to create
degrading and offensive, misogynistic, transphobic, racist imagery on demand at scale.
You know, I go back a lot of times to Rob Horning's piece where he talks about gen AI, and he says, you know, and I'm paraphrasing, I should know the exact quote because I've used it so many times, but I'm paraphrasing, he says the purpose of gen AI is to force someone to see the world in the way that you desire, that it exists to promote and proliferate degrading and antisocial imagery, and any other use case is incidental.
What is the positive use case that outweighs a technology that lowers the barrier to producing, like, deep fakes and all other kinds of horrendous material on demand at scale?
Like there is not one.
You know, I'm pretty unequivocal about that.
And then for Altman to come out, you know, a couple days later and say, well, we didn't know that people were going to use it in this way or we didn't know that companies wouldn't want their IP used in this fashion.
And I mean, I think they knew that people would make images of SpongeBob and Patrick cooking meth, and that the people who own that IP probably would not be happy with that.
But if you sort of drill down and you think, like, what is this good for?
What is the positive use case on the one hand that makes all these other use cases acceptable?
And, you know, for my taste, there is none.
I think these technologies should be like wiped off the face of the earth.
I don't see the positive use case at all, especially where, like you're saying, if you can think of a positive use case, it's, like, destroying the creative work that we feel is so intrinsically human and part of our culture, and now handing that over to a machine. And it's like, oh, okay, great. Like that saying, I thought the robots were going to do the dirty work, do the chores, clean up after me, whatever. And now it's like they're taking the fulfilling part of being a human and making me do the drudgery, you know? Like, it's a mess.
Yeah.
Although I did see there are companies now that claim if you let them train their robots
on you folding clothes, for instance, they're trying to accumulate a critical mass of training
data that will allow robots to fold clothes and do laundry and things like that.
And part of the pitch is that you need to wear Meta glasses while you're doing these things.
I remain unconvinced.
I mean, like, human beings still do all of our sewing.
So, like, I'll believe it when I see it.
Absolutely.
I'm not expecting the Tesla bot to be in my house anytime soon, doing my chores.
It's always great to talk to you and to dig into all this.
I have one more question for you, going back to what we were talking about in the very beginning, right?
You know, we were talking about the resistance to Google Glass, how that got defeated.
You were saying this time is quite different, right?
And I think that you have laid out for us through this conversation why that is. Do you see opportunities to push back against this future, to try to defeat it? And where do you
see hope for that? Yeah. I mean, you know, I go back to something Emily Bender said. You know,
again, I don't have the quote in front of me, but like I'll do my best to approximate it when she was
talking about gen AI. And she said that she thought about it like plastic, that in our daily lives
at this moment, it's very difficult to avoid plastic. But that when we have the opportunity to avoid
it and to reject it, we should. And that it's much earlier in the life cycle of Gen AI than it is
with plastic, right? That perhaps if we had done more to reject it as we started to realize the
harms, we would be in a different spot. Part of why I started the whole luxury surveillance
project is I think that there are a segment of people who think that they are immune from the
harms of these technologies. When I talked about the parallels between an ankle monitor and an Apple
Watch or Fitbit, the reason I came up with that is because there were so many people who
embraced these things, not understanding, and I don't mean that to be patronizing, right, but not
considering even the possibility that in some way or another, not only would that surveillance
harm them, but that the embrace of these things in many ways makes society worse. That everyone having a Ring doorbell now means that we are much closer to, you know, a panopticon than we would be otherwise. And again, if you need to have surveillance cameras, there's ways to do that
that don't feed it to Amazon and don't feed it to a fusion center that's in the downtown
of where you live. So part of the reason I came up with that formulation of luxury surveillance
is to try to tease out ways that these systems are not good overall, that even if you think you're immune from the harms of these things,
most likely you're not.
I mean, there's a tiny, tiny segment of the population who will never experience those harms.
But also, whether or not something harms you is not really a good metric for whether or not you should do it, that the embrace of these things makes society worse.
That is how I would encourage people to think about it, that whether or not individually you may
gain some benefit from these things, I recognize that some people think they do.
Often that benefit is overstated, right?
Like, you know, does an Apple Watch make the average person, like, more fit?
Like, the answer is no.
There's been a lot of research on it.
I mean, like, it's a little bit more qualified than just no, but like mostly the answer is no, right?
You know, just because people think that they accrue some benefit, that's not a good metric
for whether or not we should adopt these things.
And so what resistance looks like is turning them down when we can, understanding what the sort of tradeoffs are, what the deal is now. Is that deal
the one that's articulated by the companies or is it much deeper? But also, we've come to think
of like social shaming as a bad thing. But I think when people display profoundly harmful and
antisocial behavior, like one of the mechanisms that I think societies have developed is to
shame those people and ostracize them. I don't know people who have Alexas and Ring doorbells.
Now, there's some self-selection in that,
but also the people I know who would say something like,
wouldn't it be great if I could do this, right?
Like, I have a talk with them, right?
Like, you know, I think that if we want a society
that's less beholden to these things and less accepting,
that some of that needs to come back.
When someone walked into a bar with Google Glass,
sometimes the bartender would just say, get out.
Moving away from that, I think, has been detrimental to society.
Totally.
A hundred percent agree.
Let's bring back the shaming of these people.
I loved seeing recently the graffiti on all of the ads for the Friend AI wearable necklace or whatever it is that this company was trying to sell.
And like, people are just not having it, you know?
And that's a good thing.
So like I was saying, it's always great to have you on the show.
You lay out the issues with these things so well, you know, make it so clear and concrete. I always enjoy hearing your perspective.
Thank you so much for taking the time.
Oh, it's always wonderful to be here, and I really appreciate it.
It's my pleasure.
Chris Gilliard is the co-director of the Critical Internet Studies Institute
and is working on a book called Luxury Surveillance.
Tech Won't Save Us is made in partnership with The Nation magazine
and is hosted by me, Paris Marx.
Production is by Kyla Hewson.
Tech Won't Save Us relies on the support of listeners like you
to keep providing critical perspectives on the tech industry.
You can join hundreds of other supporters
by going to patreon.com slash tech won't save us
and making a pledge of your own.
Thanks for listening and make sure to come back next week.
