Cognitive Dissonance - Episode 908: Artificial Intelligence and Ethics with Dr. Aaron Rabinowitz
Episode Date: April 6, 2026
Evaluating the ethics of AI requires seeing through the never-ending hype - The Skeptic
There are serious ethical implications to sexualising AI chatbots - The Skeptic
https://www.voidpod.com/
https:/.../creatoraccountabilitynetwork.org/
https://0gphilosophy.libsyn.com/
Transcript
This episode of Cognitive Dissonance is brought to you by our patrons.
You fucking rock.
Be advised that this show is not for children, the faint of heart, or the easily offended.
The explicit tag is there for a reason.
Glory Hole Studios in Chicago and beyond, this is Cognitive Dissonance.
Every episode we blast anyone who gets in our way.
We bring critical thinking, skepticism, and irreverence to any topic that makes the news, makes it big or makes us mad.
It's skeptical, it's political, and there is no welcome mat.
I'm recording this on Thursday. And normally, of course, Tom would be here. Tom would be
doing the intro, but Tom and his wife went down to Houston. His wife had a procedure done,
and they came back. So they are back at home. She is recovering.
Tom is by her side helping her recover. So he is not here this week, probably not here next week.
And so this week, I'm going to take that opportunity to interview Dr. Aaron Rabinowitz, who produces two shows, Embrace the Void and Philosophers in Space. He does that with Ursa. And he is also the ethics director at the Creator Accountability Network, although he is not visiting me in that capacity today, even though we do talk a little bit about the Creator Accountability Network. But we had a really great conversation, which I'm going to play for you in a moment. I would love to
have an opportunity to talk to you guys about Pam Bondi getting fired today. Big, big news there.
Who the replacement's going to be? Also big news. Trump's weird, crazy speech that he gave. I'd love to
have the opportunity. I'm sure when Tom comes back, we will have all of those stories and more to
talk to you about. But today we're going to be talking about AI, ethics around AI, generative AI
use, things like that. And I had a really interesting conversation with Aaron. You know,
it doesn't solve any problems.
It doesn't fix any of these big issues
that we're going to see with AI,
but it certainly airs some of these things out,
and I talked to him about it for a while,
and it actually turned out to be a really great conversation,
so I hope you enjoy it.
Here is Dr. Aaron Rabinowitz.
I'm joined by Dr. Aaron Rabinowitz from the Creator Accountability Network.
He's the ethics director,
and he also does two podcasts, Embrace the Void,
and Philosophers in Space.
He's the co-host of Philosophers in Space with Ursa. So, Dr. Aaron Rabinowitz, congratulations on adding that prefix to your name.
That's pretty awesome. Super great. And so tell us a little bit, before we get into it, because we're
going to talk about AI today. Because you did some work with AI
ethics. You wrote a couple of different articles for The Skeptic. And I wanted to sort of pick your
brain about ethics. And then people in my audience can get really mad at you and send you messages,
which is pretty much what happens
every time you're on our show.
But before we subject you to that,
why don't we talk a little bit
about the Creator Accountability Network?
You just did a fundraiser.
Yeah, we just had a really fun fundraiser
where we staged a play,
which is something I used to do back in my previous lives.
We got put in touch with this playwright
who had basically written, like, a play for CAN.
It was about, they had done it about sort of a version of Mr. Beast, who, you know,
unsurprisingly behind the scenes turns out to be a monster.
Yeah, right.
Sure.
And like, it was done very loosely in the style of Julius Caesar, which was also quite, like,
clever the way they pieced some of those things together.
But we got together, you know, a bunch of our credentialed creators and other folks who've
been supportive and put together a play and people came and watched and donated and it was great.
We raised about, I think, somewhere around $20,000 to $25,000 for that one.
Very nice.
Yeah.
It's a really fun play for, like, helping people be aware of parasocial relationships and how
horrible all of this can go.
It's really your job as an ethicist
just to tell people how horrible things can get. I think that's like
what you signed up for when you started doing all this.
Yeah, The Good Place was never wrong when it was like everybody hates ethics people.
We're the worst.
You are the worst.
Admittedly.
So any big things happening with CAN?
Are you planning on visiting any cons, anything like that on the horizon for you guys?
There's nothing immediate that I can point to.
And the reason is we just brought on a new person as a CEO to help with growth, development,
planning, strategy stuff.
So we're in the phase of like catching her up on all like the complexities of our system
and all of the organizational side of things.
But I think, you know, that's hopefully then going to lead to a bunch more things down
the road.
I just don't want to, you know, do anything inappropriate or step on any toes in terms of,
I also want to reiterate, I am here in my capacity as an ethicist.
I am not speaking for CAN.
So when we get to the AI canceling part of the episode,
none of the things that I say are CAN policy or CAN positions.
They will certainly distance themselves from me as soon as is necessary.
And we're going to, cognitive dissonance will distance themselves from me too.
So like just so you know, we're going to leave you to hang too when you say something awful this episode.
Just letting you know.
Yeah, no, I would expect nothing more, or, no, nothing less.
Expect no less.
Now, if anybody wanted to get in touch with the Creator Accountability
Network and maybe donate some time or something like that, where would they go?
Yeah, CreatorAccountabilityNetwork.org is the website. You can contact us there. There's a volunteering
section. There's a donations section. And yeah, we're really, the reason we do these
fundraisers, besides wanting to increase the visibility of ourselves and our creators
is also, we really are trying to build a solid base in our communities of support,
both for the long-term stability of this project
and also because it reflects
that the community values what we're doing.
And we want to make sure that we're doing things
that our community values.
All right, cool.
Well, we'll send people there.
We'll put a link in the show notes.
So now we're going to start talking a little bit
about artificial intelligence.
It's sad that Tom can't be here
because he hates everything that has to do with technology.
And, you know, he's the kind of guy
who will smash his phone
to spite his face.
So he's very, very angry
about technology all the time.
I'm a little less angry,
but I am just as skeptical
about AI as I think
a lot of people are.
I think before we get started,
why don't we start with
what are we talking about
when we talk about
what we currently have as AI?
What should we use as,
what's the benchmark
for both of us
to understand what we're talking about here?
Let's define our terms
as we move into this.
Sure, and it's really important because the concept of AI expands and contracts as the discourse moves back and forth in different directions over the past, you know, 50 years or so.
So in the most basic sense, right, you could understand AI as just an artificial entity or object or whatever that has the capacity to replicate some sort of cognitive function that
human beings are engaged in.
So on your very low end of the spectrum,
your thermostat is an AI in that sense.
Right.
But what we're mostly talking about these days
is what's called generative AI,
especially what are called large language models,
which are things like ChatGPT and Claude.
These are systems that are trained
on massive quantities of information,
the Internet, as it were,
and are then able to do a bunch of tasks.
They have a bunch of emergent capabilities
because of that training to varying degrees,
which are, I would argue, really impressive.
Like if you just take a step back from the politics of this discourse
and look at what the things can do now
versus what they could do five years ago
versus what we thought they were going to be able to do,
it's made some impressive progress.
And I just want to front load and highlight that because that is not the same as saying it's good.
Okay.
Right.
I really want folks on the left to understand this because especially folks on the left, I think, are stuck in a place right now where they don't think this technology can do things, which is a dangerous place to be in because it makes them easier marks for this technology.
There is a widespread view in my experience that people think they can tell when something is written by AI.
and I'm here to tell you you probably can't.
That's a really interesting point.
We're starting to see more and more of this sort of thing
creep into different places.
I remember when images first started coming out,
they were easy to pick out.
Now things will get posted videos,
full videos will get posted to different Reddit sites,
and often I'll see conflicting answers.
People will say yes and people will say no.
They'll say, look at the hand on this one,
and someone will say,
Well, no, the hand is, it's just filmed badly.
There's no, you can't tell about that.
So there's people who are arguing over whether or not certain things are AI right now.
I think it's much harder to know visually whether something is AI.
I think there are some obvious versions out there.
But I think those are like low effort.
I think some of the people who put a lot of effort into it, it's actually much harder to tell nowadays.
Right.
So there are, so basically what you want to understand are there are tells for AI generated content,
in various formats, the fingers, right, the eyes,
in writing, a big one for a long time
has been the em dash.
Em dash, yeah.
I had to take that out.
I couldn't use it anymore.
That sucks.
I used to use it all the time.
What's so funny is I have always been a fervent opponent of em dashes
because of the way they are horribly abused in academic writing,
especially in philosophical writing.
Yeah, yeah, yeah.
And now I'm like, I was just starting to come around
on accepting the occasional use of the em dash,
and now I'm being completely crushed in that.
But the key thing I want to note is
anything that is a tell right now,
you cannot reliably think it will be a tell six months from now.
Yeah, I don't disagree.
I chatted with Jonathan Jarry about this
for an article he was writing that's really good
about specifically tells in visual production of AI
but also some in writing.
And I just think this is the most important thing
to understand is don't get complacent.
Don't get comfortable because this technology
is continuously getting better
and it specifically is correcting
for what are perceived tells.
You can tell an AI now, like Claude,
to write without using em dashes,
and it will do so.
Yeah. And then it'll forget
the next time you tell it and it'll do it again.
It won't.
It will.
I mean, like, here's the scary thing.
I was an AI hipster, I want to point out.
I was teaching AI ethics
and into this stuff back before it was,
in the current way,
both very cool and very uncool.
And the degree to which the writing side of it
has improved in its capacity for large-scale context,
long-term retention of expectations is incredible.
Like, I remember five years ago messing around with ChatGPT
and, like, you could make it fall apart in a couple of minutes.
And now it's just not like that anymore.
There are still issues,
but they're way, way more sophisticated.
So, for example, hallucinating of citations, right?
There was a big problem where the AIs were just making up citations
when asked to give references, just totally whole cloth,
making up people, making up articles.
Now it's more nuanced.
So if I ask it for like a list of references in relation to a topic
based on, you know, like your parenthetical citations, right, the author's last name and year,
it will find an article from that person from that year, but it won't necessarily find the right ones.
So you still have to kind of go and check and make sure and verify it in that kind of way.
But it's really getting a lot stronger.
Let me say one other thing here that I prefaced one of the articles I wrote for The Skeptic with,
which is, this is,
in my opinion, the most cursed discourse online right now,
like in our culture right now,
in terms of the gap between understanding of the topic
and the quality of the communication,
the kind of, you know, we joke about getting canceled about it,
but like there is a ton of animosity,
and it's not necessarily unjustified.
But we're all really struggling, I think,
with dealing with a discourse that is living in the shadow
of endless hype, I think,
how I want people to understand this,
like whether that hype is on the pro side
or on the con side.
I see.
On the negative side, right, yeah.
Both kinds of hype,
both kinds of perspectives are just
because of all of the factors that influence them,
like, you know, these companies are selling a product.
They, you know,
you can't necessarily trust anything that the companies are saying
about the AIs.
But also, it's very,
you need to be more skeptical of
the criticisms of the AIs as well, I think.
But that's very hard to do.
Well, let's start with one criticism that Tom and I have all the time, and we talked about
it at the beginning here, where you're mentioning that it's getting harder
and harder to spot
that what AIs produce is artificially generated, right?
We're having a more difficult time as time goes on.
We're having to spend more time.
Sometimes we're getting it wrong more often now than we were before on what is created
by an AI.
So what ethically, when we think about this ethically,
isn't this bad for us to have a thing
that can look as if it were a person
and talk as if it were a person
and create something that could be like an opinion of a person
that isn't a real opinion of a person.
So it's creating a fiction in our reality.
And it's starting to merge that
and to create more and more fictions, right?
And this could be used both visually and through writing and through, you know,
artificial chatbots that we find on Twitter.
And so there's many different avenues.
It can, it can warp our reality.
Is that bad for us?
Yes.
Good.
I'm glad we're on the same page.
Absolutely.
Like, we were already in an epistemic crisis.
Yes.
Right?
Like, if we think back to before the AI times, things were already real bad because of the information environments, because of media. Some people might just sort of try to dismiss this concern and say, well, things were always bad, now they're just bad with AI. But no, I do think there's an important difference in the scale and certain kinds of outputs from AI, specifically pictures and video.
I think I have a lot more concern.
I think there is a lot less upshot and a lot more downside to Deepfake
than there is to chatbots writing things.
There are a couple of reasons for that.
I think when it comes to written material,
if you're willing to take the time,
it is far easier to check it and assess its quality.
When it comes to video
and pictures, there's a lot less capacity necessarily to verify its accuracy for the
common, like the regular individual, right? And like, it is, it can flood our environment
essentially with fake content, with fake stuff that, like, damages us in a bunch of different ways,
right? So now, instead of getting to watch a video of some cool animal that you've never
heard of or seen before, you have to wonder, is this video of this cool animal actually super
fake? And there are fake versions out there of like this cool new bug that people found that
opens up into a flower, but like, obviously not. But not obviously, right? Because, you know,
our brains are not well designed for the epistemic crisis that we are in. And so without structural
help, we are all at high risk of believing more false things. Which, yeah,
It's just bad, which is why it's good, for example, that ChatGPT just had to shut down Sora.
Like, I think it's great that they did that.
And I think, not great to their credit.
I think ChatGPT, like, OpenAI, is not a good organization in my opinion.
Yeah.
And if you're going to use AI, please don't use ChatGPT.
But it is good that they were forced into that.
In contrast, Anthropic, for example, their AI can't make pictures and can't do videos,
and they have deliberately avoided developing that
because I think reasonably, like correctly,
they, at least to some degree,
recognize that that is mostly downside epistemically for humans.
And it puts you at high risk of things like
your AI creating child pornography, Grok.
Yeah, right?
Yeah, that's a real thing that's happening.
And not just that, but there's like deep fake stuff going on
where you could put people in different outfits
and change the outfit of this person
to what I just showed you and it will.
That's some really creepy stuff
that's happening with Grok for sure.
I mean, Grok and Twitter
should both just be shut down at this point.
They are, it's just a Nazi
child porn website at this point.
Yeah, it is Monster Island.
Yeah, we got there.
Yeah, we really got there, folks.
We did it, everybody.
We made Monster Island for real.
Yeah, we did it.
Don't make Monster Island from the book.
Please don't make Monster Island.
Yeah, no, it really is.
And I had a different
debate with somebody when he bought it,
it was a free, you know, free speech debate where I was like,
this is going to be real bad.
Like, if he does the things that he's saying he's going to do,
it's going to be horrible.
And they were like, no, it's going to be, like, it's going to be fine.
There's going to be some polite Nazis around.
It won't be a big deal.
We knew that they would stay polite, too.
We knew that they would just be polite the whole time.
And we could just have a marketplace of ideas where the meritocracy
could win out. That's what we were hoping for.
And that's what we got. If you listen
to Joe Rogan for a second, he will
tell you every few minutes that
Elon saved society
because he bought Twitter. He will tell you
that every few minutes if you let him.
Yeah. I saw a picture recently
where it was a map of
which
source of
political information you are most likely
to encounter on Twitter.
You know, like
C-SPAN, whatever, right? The
biggest orb in the middle
was fucking Elon.
He paid $44 billion
to make himself the permanent main character
of Twitter.
That is all he did.
How pathetic.
It's the cringiest thing ever.
Let's, okay, so
we know that
changing our reality
through using AI
and then, you know, the nefarious groups
out there that are either trolling or purposely
trying to change your opinion, that's bad.
It's bad to give them the tools to do this stuff
without any guardrails.
Probably people like you should be sitting in a room deciding what those guardrails are.
That's probably someplace that we'd want to be.
I don't want to ask you about solutions because I know that neither of us have the power
to create those solutions.
And thinking about those, probably it's not going to be a great use of our time.
I do want to talk about other problems, though, with AI.
So another one would be as we start to create more and more
uses for AI, especially on the creative side, we start to see less and less
use for creativity on the human side. And I'm not talking individually, more on a, like, a
workplace scale. So companies may decide to not have as many graphic designers anymore because what
they need, they can just ask ChatGPT to whip something up for them and then they'll create something
that is graphically what they would ask a graphic designer to do or an illustrator to do or something
like that. And so people who had real skills that spent a lot of time honing those skills aren't as
in demand anymore. They're being replaced by this new tool. I know that we go through cycles of that,
and that's how technology works. We go through cycles. But this, it feels like it's sort of like
an onrush that is going to really be a very difficult time for people in the workplace.
Absolutely. And I want to just step back and broadly say, of any of the arguments that we're likely to talk about here today, I don't think any of them are definitive knockdown arguments one way or another on the broad question of whether it's ethical to use AI. I think this is a complicated choice that individuals have to make for themselves, and it falls, I think, often somewhere in the, like, no-ethical-consumption-under-capitalism reality that we live
in. But I do want to take all of these concerns seriously.
I definitely want to put a pin in what you just said.
And we'll put a pin in it. We'll talk about it for sure.
Because I definitely want to talk about whether or not consuming it is ethical or not.
And that might be something we can work our way back to.
Yeah. And that's both, you know, consuming it on a like straight up consumer level,
but also using it as a tool.
Yeah.
Right, right. The other thing is, you know, there's,
there's often a question, I think, that is not fully answered yet, which is, is this a new problem or just an old problem with a bigger scale?
Often, whenever you're talking about any of these concerns, you can very, you can definitely point out that, like, the printing press also did the things that we are describing, right?
What's the first thing that gets put on the printing press?
The Bible, right?
So if you're talking about scaling up misinformation, I would argue about...
We were already there.
Interesting point.
I'll allow it.
Okay.
Right.
That's not a full dismissal of the argument because scale matters, right?
And then I think there are some objections that make the case where what the AI is doing is categorically different than what a calculator was doing.
Right.
For the debates about calculators will make us bad at math.
It's a slightly different debate because the AI's already different.
because the AIs are stepping into the reasoning process, right,
and are replacing human thinking in various situations.
Now, we don't actually know what that's going to mean for human beings.
We're really in a live experiment right now.
There was a study that I think came out today that was getting talked up.
That was about degrees to which offloading of cognitive behaviors onto the AIs
decreases our own capacities,
creates kind of a learned helplessness
or dependency kind of thing.
And I'm going to come back around to your question here.
This is just like putting this very broadly
as like, there are a bunch of risks
with this technology.
One of them might be we get dumber
and less creative, right?
Because we're not flexing those muscles as much.
Now, I would say the evidence on this
is still really inconclusive
and we want to not assume that,
you know, just because,
we do start offloading some cognitive labor, that we don't then fill in what we're doing
with that free space with other kinds of cognitive labor that are also enriching and valuable,
and that would sort of undercut that kind of argument. That would make it more like the calculator,
at least. Now, you're asking about creatives. This is a big problem on several fronts, right?
Oh, yeah. Yeah, I just, I used them as an example, but yes, you're right.
Right. So this is one case, one really meaningful
case, particularly to the left, I think, of job displacement, which is a conversation people
have been having about AI for a long time. And I worry because people, people who are stuck in
the narrative that AI can't do anything don't understand the amount of job displacement that's
already happening and is going to accelerate, in my opinion. I had sort of guessed that, like,
one big concern is you have your sort of pipeline of entry-level job to senior-level job. That's
how our capitalist system works, right?
What we're seeing most likely is gaps forming at the low end of that ladder,
where entry-level jobs are the kinds that are the easiest for AI to automate.
So there's far fewer of those, more senior jobs,
but not a clear path for how someone gets from one of those things to the other.
And so there could potentially, we could see sort of breakdowns in labor pipelines
in places that aren't sexy, and so people aren't thinking about them as much,
like data entry.
A lot of jobs are just moving numbers around spreadsheets.
And the AIs are getting good enough to displace people in significant quantities in those spaces.
And the solution here is universal basic income.
Like, there's no, there's no other functional answer to this problem than a robust social safety net that ensures that quality of life.
It's, like, you know, laughable in the sense of, yeah, how the fuck are we going to get there.
And I'm right there with you, but I just like, like, I see where we're at in our government right now.
And I see the hold that these people have on other people.
And I think universal basic income might as well be colonies on an orbiting moon of Jupiter.
Like, it might as well be that.
Sure.
Which is wild because Richard Nixon floated the idea back in the day.
This used to be a less extreme position, but the right has radicalized to such a degree that, yeah, I
agree with you. Now, what I think then is probably likely to happen is it's going to take things
getting bad enough with no other alternatives in sight. Right? You're going to have to see enough
collapse of consumer capacity, for example, because people who don't have jobs can't buy the
things that the AIs are making, right? There's somewhat of, I think, a misconception that this won't be
a problem because there will just be more new stuff. Like, you know, if the company
can make more stuff, then they'll just make more stuff, right?
Like they can just up their supply of the thing and make more money.
But like for a lot of kinds of products, it's not actually beneficial to make a bunch of them.
One example that came up recently was Fortnite, right?
Fortnite's having some issues in terms of maintaining a player base.
And there was some poking at them about use of AI.
And some people were like, well, whatever, they could just use AI
to make, like, 12 new Fortnites.
But like that's not actually functional
if you're trying to maintain
an organized player base on one single game.
So essentially this is the AI slop problem, right?
Yes, you can make a bunch of slop,
maybe even high quality AI slop,
but you're flooding your own markets in a sense
and diluting your ability to take in resources essentially.
So like the same answer that I would give
for the spreadsheet person
is the one I would give for artists on at least this one narrow part of the problem.
There are separate questions we can answer.
We can talk about like stealing of people's style and art.
Oh, yeah.
Obviously, copyright is a huge issue with AI too.
Right, right.
This is just purely the question of artists can't make money anymore
because you can have an AI do the art for free.
Part of what I think will happen is there is enough dislike of
AI material that there are going to be consumers who will be willing to pay more for things that are
explicitly not AI. So that is something that we see, you know, in all of our artisanal styles of
production. But also, like, we just need to be giving people universal basic income and then
artists can make what they want and people can connect with the artists that they want to
connect to. So another thing, this is something that,
when I was talking with Sarah
Tulene, our executive director
at CAN, about this stuff, she brought
up a really good point
which is that consuming
art isn't just about the experience
of the art. It's also
for a lot of people about feeling
a connection to the artist,
wanting to feel in community
with other people because
of your shared experience of that art
and of the creation of that art.
You can't necessarily
get that from an AI, at least not yet.
Sure. And the actual
artist would say the
other side of that, which is
the creation of the art is the
point. The art itself
often is not the point. It's the
creation process of getting it all to
where it is. Sure, they feel satisfaction
when it's done, but all the process leading
up to it is all just as important. Every
piece is just as important. So
creating something that is AI
doesn't have that same feel to
it for them. So the artist
themselves needs to create. That's the outlet that they have. Right. So as with cheating, right?
So in higher education, there's this huge crisis of AI-based cheating. Sure. Yeah, that's another
huge issue too. Right. But what is, like, what is the solution to that? It's not an AI arms race
where you try to build better AIs to figure out who's cheating. It's attack the motivations for cheating.
And what are the motivations for cheating, if we're being
honest? It's because we're lazy.
No. No, that's a conservative
right-wing, like
intrinsic, like internal factor analysis.
What is the, like, think about your students
back when you were teaching them, right?
Are they lazy?
I think they're often distracted and want to do other
things. Right. I would argue that they're
horrifyingly overworked and burned out.
I would accept that.
Sure. Yeah. So nine times out of ten, I think, when I, you know, like, and this is not, like, it's complicated, because as norms change, this can shift and it could become just, you know, default, whatever. But the reason, like, the fundamental reason people are using AI to do their work is because we've built a system that solely rewards you through extrinsic motivators. And that teaches you that the goal is to get the credential, to get to the next phase,
to get the job you want.
And doing that can be done more effectively
without having to sit down and personally write
every one of my papers.
So the solution is getting back to an education system
that actually prioritizes intrinsic motivation.
Because what you were saying there about artists
is they're intrinsically motivated to make the art.
They make it because the act of making it is valuable.
If you take away the value from the act of making something,
which is what we've done in education broadly,
like in our over-credentialing meritocratic approach,
you're going to get lots of people cheating
because you've taught them that cheating is good.
Cheating is the correct maneuver in these situations
when you are overworked.
And, you know, that's just a problem
where we shouldn't be blaming the students.
We should be lamenting what we've done to them.
And also the system that you put them in,
because cheating isn't new.
There were plenty of people that I went to school with
that would write a paper for you.
If you needed somebody to write a paper,
they were just good at it.
So they would say, sure, pay me an X amount of money
and I'll write your paper on Machiavelli.
No problem.
Easy peasy.
I'll get it done in an hour,
or, you know, more than that, obviously,
but still, they'll get it done in a short amount of time
and then they'll produce you a paper that you didn't write.
And this is where you get into a fun argument
that folks on the left will definitely love,
which is some of the
complaints about using AI have a class problem to them,
which is there is a sense in which AI is democratizing
the kind of advantages that wealthy people have always had.
Sure, yeah.
It's doing it imperfectly,
but it is giving people access to a personal tutor.
It's giving people access to someone who will write a paper for you
for less money than what it would have cost you otherwise.
So if you're a poorer student, if you're a lower socioeconomic student,
you know, you're getting help that wasn't available to you for basically economic reasons.
Yeah.
It's a leveling, it's a leveling system in that sense.
Yeah.
Right.
And again, there are costs.
There are always like, whenever I make a point like that, I'm not saying that's the definitive thing.
I'm just saying you need to, in ethics, ethics is complicated.
It's stupidly, painfully complicated.
Sure.
That's why I love it because I love really complicated puzzles.
But like, there are always so many more facts
than you're considering in every ethical judgment.
We should rope in a couple more before we start talking about other things.
And, like, a couple more downsides, because they're so important that we can't just skim
over them.
Obviously, the pollution and the energy drain that AI is creating is really one of those things
that makes it look almost unsustainable in the amount of resources
that it's using, you know, I've had conversations with people that that seem to think that
there's no way without that sort of venture capital money that this would actually be able to
continue on with the model that it currently sits under. There's no way that subscriptions
would be able to pay for the things that, you know, we're getting out of AI, that people
are using AI for every day. There's no way that even, like, the most expensive subscriptions
would be able to pay for the sort of consumption cost of it.
So obviously there's going to be some sort of rug pull in the future,
and it's also destroying things as it works its way through.
Yeah.
Again, all of these, I think, are legitimate concerns to varying degrees.
The environmental one is a real concern,
and it depends to some extent on which AIs we're talking about again, right?
So if it's Elon building his incredibly harmful data centers near, you know, lower-income populations, for example, like, that's a worse scenario than some other organization that might be trying to find eco-neutral ways to do this kind of stuff.
Elon being in charge of it in general is a bad thing, right?
The people who are in charge of this are a bad thing, too, because those people, most of them, don't subscribe to some sort of ethic.
I mean, you certainly couldn't look to Elon Musk and say he's
an ethical human being.
No, no quite the opposite.
So, yeah, and that's just another major problem, is that, like,
this technology requires a bunch of money.
Yeah.
And that money tends to come from not the best people.
Terrible humans.
You know, like, so all of this, you know, a lot of the problems with AI, like it being
sycophantic, for example, are just downstream of capitalism.
They're just the results of this being a product.
being sold by capitalists.
So the answer obviously is nationalize this technology, right?
Like make it a state service in some way.
The ideal situation, I think, for the development of this kind of AI would be something
like a concerted global effort of governments being transparent about working together
to develop safe AI at a reasonable pace, which, I should
note, is very different from where we are, which is
a free for all of capitalists.
Yeah, it's straight up wild, like, the government
actively saying we will never regulate this in any way.
Go wild.
No regulation.
Do what you want.
Yeah.
Yeah, again, Elon's MechaHitler putting out kiddie porn.
Yeah, man.
That is a thing that just happened.
You're not wrong.
No consequences.
You're not wrong.
Fucking wild.
Yeah.
So a lot of bad things when it comes to AI.
A lot of negatives.
What are you?
You tell me, can you tell me some positives that you've thought of?
Um, yes.
No, I'm sorry.
That was perfect.
Because everybody listening is just like, he can't do it.
He's not going to be.
He's going to say no.
He's going to say no.
No, I think there are lots of valid use cases for AI, lots of situations in which it is
valuable.
I don't know that that, hold on now.
Hold on.
I said, I don't know that that's positive, though, right?
Because if it was gone, would it matter if it was gone?
Is it doing things that we can't do, is what I'm saying?
Okay.
So yes.
The answer is yes.
And let's give a real concrete case to go back to the environmental stuff.
If we're doing the math on the environmental impact of AI, in the negative column,
you've got your water use, your electric use, et cetera.
In the positive column, you've got AI being implemented to improve
our, you know, eco-friendly technologies, making better solar cells or making, you know, materials that are
sturdy and lightweight and don't have to be made from garbage or that biodegrade. Like,
those kinds of things, right? It's being used on the inventing side of things to progress our
understandings of material science. And it's hard to know and gauge what exactly the benefits will be
for that long term, right?
It could be significant breakthroughs.
And that's where the hype problem comes back, right?
We don't want to be like, look, it could invent cold fusion tomorrow or something.
Yes, technically, but like not high probability.
But what we do probably see are consistent incremental gains in various spaces as a result
of this technology, similar in medical science to training it to be good at reading charts
or reading, you know, x-rays
or scanning for cancer cells.
You don't want it to replace doctors in doing that,
and you have the concern that it will be treated,
deferred to too much in certain contexts.
But those are at least, I think, situations
where we do see a genuine benefit.
And I think there are other things
where it can be helpful for people
in ways that can enrich their lives
and promote flourishing.
I think collaborating with it
in creating your own art can actually be not a bad thing, right?
I think a lot of times when we think of the bad scenario,
what it is is we're replacing humans with the technology fully
rather than using it as a tool,
another thing in our toolbox that can give us valuable research.
So here's an example.
I think the AI is better at coming up with objections
to an argument than the average human
being.
Oh, that's interesting.
Like if I ask, if I ask, you know, an undergrad to list as many objections as they can think of,
as many good objections they can think up to a particular argument, especially one they
agree with.
That's the real kicker.
It's like, if you agree with the argument, it's just a harder cognitive lift to come
up with, you know, because you cog diss, right?
But the AI is much better at that.
It can, it can really, when it's not being sycophantic,
it can push you to really challenge an idea in that way.
It can help improve your writing by noting where there are issues.
Like it's a good editor.
It's gotten frighteningly good at like you can give it a document and say, can you proofread this?
And it will say, on this line you've got two "the"s in a row.
On this line, you're missing a word.
It's probably this word.
You should consider adding this word.
Here, you probably need a citation that you don't have.
or here, you've left yourself a note that you forgot to delete,
and you should probably remove that before submitting for publication.
It's an interesting use for it, because, you know, like,
I think a lot of people go for, like, the bigger use of it,
which is write this for me instead of just proofread this thing I wrote,
which is a, it's an interesting thing because we're,
I think a lot of people who downplay the use of AI always want to say,
well, I told it to do something and it didn't do it.
And it didn't do a good job at it.
But I think a lot of times maybe we're giving it too big a bite.
We're saying, do this thing that's
way too much work for it to do exactly how you would do it, but giving it smaller tasks
might be more useful in that sense. Yeah, or giving it like recursive, reflective tasks.
So, you know, helping you write an article and then asking it to do essentially a peer review
of that article, right? What would a peer reviewer say about this article?
Interesting. And then it'll come back with, you know, here's my overall take on it. Here's some
points that I think are weak.
Here, you know, like, here are some ways they could potentially improve that,
exactly like you get back from a peer reviewer in journals.
And you can then shore up those particular spots, right?
You can improve it.
You can get it, you know, like get that one step ahead in that way, which, you know,
in a system that is fucking horrible, let me tell you, peer review is awful, right?
Yeah, yeah.
So, like, I understand why you wouldn't want to wait three months for the first chance for, you
know, somebody with a fairly decent understanding of the topic to give you feedback on the paper.
And like, you can say, well, what about getting humans to do it for you? I'm like, have you
met humans recently? They don't have a lot of time for reading. And their society's collapsing
around them. So like, yes, I would love for humans to read the thing for me. But, you know,
like, yeah, and here's another thing that it's useful for. And this is actually a little bit why I
paused when you first asked, like, what are the benefits of this? Because I have a lot of
hesitancy about talking about my own use of AI because I see kind of the animosity,
especially on the left, and I feel like there's a lot of misunderstanding about this.
And so I get anxious about outing myself in explicit detail about how I use the technology
beyond just studying it as an AI ethicist. One thing that has been really impactful to me in the
past year is I have found the AI really helpful for doing work when I'm depressed.
I have been dealing with some pretty bad depression this past year
because my book will never fucking come out.
And I'm sitting in peer review hell,
still going insane.
And so it is so hard when you're in that place to do,
like even things you really want to do to write,
to, you know, engage with the material that is making you so depressed in that way.
Having an AI support that's there, not for therapy, to be
clear. Don't get therapy from an AI. But, you know, to help me get from a blank page to something,
you know, to get the ball rolling a little bit. To feel in sort of the same way that like when I do
Tai Chi, it's so much easier for me to do it if I'm with somebody. If there's another person there,
it just makes it feel a little bit less isolating. And like being an academic, especially when
you're not currently at a school, is incredibly isolating. Sure. And so,
having something to bounce ideas off of while I'm working, something to help me with
outlining or, you know, considering if an argument works or not, I don't necessarily even
have to rely on its answers. It's just having a writing partner in that sense, right? It gives you,
it gives you a little bit more to start with. And once that gets rolling, then it's easier to be
doing that work on your own. Yeah. Yeah. As somebody who writes essays for Citation Needed,
I know the feeling that there's a little bit of dread at the beginning of the project.
But once you get the ball rolling, all those muscles that you've created and used for a long time kick in.
And if there's somebody there who can give you a little shove, that might be very helpful.
And I think this can be true for all sorts of tasks.
You know, all the kind of like executive function tasks that are so painful when you are depressed,
like it can help you find the stores that you need or find information or, you know, like, oh, I've got a
trip that I've got to figure out. It can help me break that up into more manageable chunks.
I think that is really important and gets very lost in the unfortunate reality that
depressed people are also talking to these AIs and getting information about how to kill themselves better.
That's a huge problem. And it's why I'm nervous about saying, let's hand these AIs to people
with depression because it'll help them with their tasks. Because until we make sure that it's not
also going to help them with the wrong kinds of tasks, that's a serious problem.
Absolutely. Yeah.
As goods go, I don't think, you know, I think it can slide into ableism to downplay the benefit
that people who are struggling can get from working with this technology in the same way
that earlier when we were talking about class, you know, people who don't have certain financial
resources can benefit from these in ways that other people with resources have always had
those benefits.
So let's go back to the pin.
No ethical consumption under capitalism
really genuinely plays its hand
when we talk about AI
because the people who own AIs are all capitalists,
they are trying to figure out ways
to make gobs and gobs and gobs of money off this.
They're collecting venture capital
by the bucket load.
We're seeing our entire economy shift
towards being focused more on AI,
stocks in the stock market are shifting, people's jobs are shifting.
So it really is tied to capitalism.
So let's talk about that particular maxim.
I mean, is there going to be ethical consumption of AI that we can think this is something
that you could convince yourself you could use?
Right.
And I first want to caveat, I think it is true of OpenAI that their goal is at this point
just to make gobs and gobs of money.
I think it's more accurate,
for Anthropic, to say they're just trying to create the Omnissiah.
Like, I think they really are what people think of
as, like, the true AI cult.
Like, they are the Adeptus Mechanicus at this point.
Wow.
But they also have made, I think,
the most ethical AI out there right now,
the most functional, safe one.
And my argument for why they're not just trying to make gobs and gobs of money
is they didn't develop, you know, photo and video generation,
uses that are very consumable, right, but have serious downsides.
Arguably more consumable in some ways.
Yeah.
Way more consumable, right?
There's no, like, yeah, there isn't a market for AI Twitter posts in the same way
that there is for, like, AI reels.
Yeah.
Right.
You know, everybody wants those little 10-second videos.
Sure.
So ethical consumption under capitalism, here's how I sort of think about this, right?
Like you go to the grocery store, everything that you eat there is drenched in blood, right?
Like your bananas, blood, your chocolate blood, so much of it blood, all the meat sections, lots of blood, right?
But humans have to also eat.
And when you're put in an environment where all of your choices are ethically
compromised, there is a limit on how you can cope with that, essentially. Like, to what degree
can you alter your behavior to minimize that harm? You should. You should really try to do so,
but we have limits, I think, and we have to acknowledge and give grace for those limits. So,
when it comes to the ethical use of AI, right, take writing, for example, if you're trying to actually
make any money right now writing, you know, articles for publications or whatever, you got to do a lot,
right? There isn't a lot of money per article going on there. So if you are a poor, you know,
writer desperately trying to, you know, pump out as much as you can, there's an understandable
necessity there that could justify potentially using the technology, or at least should give us
compassion, that the person is not doing it because they are lazy. They are doing it because this
technology allows them to do something that they love as a job when unfortunately, you know,
like, unfortunately, when they should just be getting to do it for value's sake, right?
So when I think of no ethical consumption under capitalism, that the point is not just
everything is drenched in blood, but we have to acknowledge that human beings to some extent
have to live in the world that they exist in.
And, you know, we all make various kinds of ethical compromises every single day.
And if you don't, if you don't think you do, like, you know, go read some Peter Singer on
famine, affluence, and morality, and, like, think about really how many times you actually do
let yourself off the hook where you could be more morally stringent.
Yeah.
Oh, yeah.
Yeah, no, I don't disagree.
I don't disagree.
Although I wonder, as it stands right now, AI doesn't have to be something that you rely on.
So the things you're mentioning are things you have to have, right?
You're saying food.
I need to go to the store and get food.
Yeah, you do need to go to the store and get food.
But do you need to go to the store and get AI to help you write a thing?
And the answer to that is, no, you don't.
It's a choice rather than a necessity.
Do you see the difference?
Well, yes, up to a point, right?
I think it's a, so yes, we have pure needs and then we have what seems like less pure needs,
but let's go back to our content creator, whatever, who's using AI.
If they are relying on the money from their content creation to buy the food that they desperately need,
you see how these things blur together, right?
Yeah, yeah, yeah, yeah.
So, yes, technically, I could avoid using anything that remotely looks like AI.
It would be very difficult in our world at this point.
Yeah, it's going to get more difficult tomorrow too.
And it'll be more difficult the next day.
Right.
And like, how far do you go with that?
Do you stop using search engines because search engines are AI?
Yeah.
Like, I think that there are, you know, necessities in engaging with our worlds that are not, you know, food,
water, shelter, space, but are fundamentally connected to them because of capitalism, right?
So again, a lot of these problems get a lot easier if you have a universal basic income, right?
Universal basic income and government controlling the throttle on AI would change a lot of things.
Yeah, because then you can, you know, people, if they are using the AIs, are doing it non-coerced,
right?
Like, they aren't being coerced into using it.
And then it's really, you know, it can be a question of like, is it valuable?
Does it actually contribute benefit?
And there's some really interesting
conversations where I struggle to some extent
to get where people are coming from
when it comes to things like,
like if you have a piece of art that you really enjoy, right?
And then you find out that that art was made by AI.
Does that decrease your experience of that art?
Yeah.
Yeah, that's an interesting question.
It's an interesting...
I saw recently that something that was going around was there was a young lady in an army outfit
that they kept on showing multiple times, that was, like, standing next to the president
in another place, that was all fake.
It was all just AI.
It was 100% AI created and AI generated, but had gotten a bunch of people to follow it on, you know,
one of these social platforms.
And I know that there's also, didn't it have an OnlyFans account?
I swear it was a foot
model on an OnlyFans account or something like that.
Was it really?
I didn't realize that.
I think so.
Oh gosh.
But yes,
had a very big following on the right and was totally fake.
Had like two ankles and six toes because AI couldn't figure out the digits very well.
What, uh,
I guess, like, there's also people who do AI Instagram accounts.
So there's plenty of these out there that are completely generative that they just,
like,
they're not a real person and they're not in a real place and they just take a picture of themselves
and people see it. And, for a bunch of time when I was getting them on Facebook, they were easier to pick out then,
but, you know, kitchens, people would just have these like really elaborate kitchens and, you know,
I'm a nerd, so I would look at them and stop and then realize, oh no, that's like it's in the trunk of a tree.
Like there's no way that you would be able to do that, but AI has created stuff like that.
So that happened.
It was easier back then.
It's a lot harder now.
And to be honest, I saw pictures of that, that woman that AI was creating.
And it was harder to pick out in those.
I mean, way harder than it used to be.
So that's definite for sure.
Yeah.
And those are bad things generally, I think.
So, like, I think there's a couple of different potential concerns that are lurking behind the negative reaction that I think people have
when they engage with content
and then find out that it was created by AI.
So one is the deception problem, right?
Sure.
If you're deceiving someone
and acting like this thing is not AI
when it actually is,
which is different from disclosure, right?
But if you're actively deceiving people,
that's a problem, I think.
Right?
If you're trying to give the impression
that this is a real human being
that you're interacting with,
you are, you know,
it's the kind of parasocial problem
where you're generating a connection
that isn't real, or giving the experience of a connection that isn't real there.
Which is a whole different problem from if you know that it's an AI
and then you still interact with it because you actually think that it is a person,
because your definition of person includes AIs.
But then you also, like, then there's sort of the weirder to me kind of cases
where you just experience a piece of art,
you're not told anything about the artist,
You just are like, oh, that's a really nice picture or something.
And then you're told that it was made by an AI, and it kind of cheapens it for you.
And maybe that is just because people always are assuming there is an artist behind this piece of art.
And so they always feel a bit of deception or that they have been tricked in some way.
But like, I personally don't feel that something being made by AI is in itself a bad thing.
Does that make sense?
Like, it can be bad in various
ways. But if it's a good piece of art, it's a good piece of art, in my opinion, right? If it's a good
article or a good argument or a good book or whatever, those are the things about the piece
of art itself for me. You can be sort of, but it feels weird to me to then be like, oh, this is
actually terrible art now that I know that it was created by AI, which is something that I see
happen sometimes. I don't think we need to make that move in order to still have concerns.
about, you know, whether we should know ahead of time
if a piece of art was made by an AI, for example.
I think the thing that would bother me the most,
just speaking for myself, I can't speak for anybody else,
but when I think about whether or not I see something that's AI
and I wonder whether or not it's good,
like it's something that I think is pleasing or not,
my brain first goes to, it's probably plagiarized.
My brain first goes to, well, it scoured the Internet
and found something and it mashed a couple things together, but they're both plagiarized,
and then it kind of just made an amalgamation or completely plagiarized, because there's
sometimes that it'll just absolutely completely plagiarize something that is definitely someone
else's style, that it just stole. And so, like, the first thing I think is copyright infringement.
As soon as I look at AI, that's the first thing that runs through my mind. And maybe it's possible
that in the future it gets so good that it doesn't need to, maybe it uses.
those things as references, just like regular artists do, right?
Like, so someone may as an artist be like, oh, I was inspired by Dalí, but I don't do
the exact same thing.
I do something similar, but it's not the exact same thing.
And so I think that in the future, maybe it could get good enough to be inspired by the
things that it took.
But as it stands right now, it doesn't feel like it's that different from the things
it took.
Yeah, I think that's right.
And this is another example of like, is it a different problem from human artists?
or is it just the same problem at scale?
And I think in some ways, it's the same problem at scale,
and in other ways it is kind of a unique problem.
Let me unpack that a little bit, right?
So one concern here is credit where credit is due, right?
Artists are totally fine with other artists sampling their art,
building on their art, referencing it,
rebuilding it, you know, revising it in different ways,
as long as proper credit is given, right?
That's a technically solvable problem where you can just have the AI learn that it needs to give credit for its sourcing.
Right.
And they can do that.
They can say this was inspired by XYZ, right?
So that is a solvable problem in the same way that I think IP infringement is already kind of a solved problem.
Like you just need to be able to enforce it better, right?
You need to make sure that you can enforce it.
So you had, for example, a lawsuit against Anthropic where they're having to pay out the people
whose work they used in their training data.
My work included.
Really?
My book is in Anthropic's training data.
Yes, correct.
Not luck-pilling, but an old book I did way back when on the devil and philosophy.
So that kind of compensation package is a solution that I think is functional for the problem of people not being compensated for their work.
So we want credit.
We want compensation.
Also, if you do the universal basic income thing, then everybody is getting compensated,
and that, I think, further ameliorates the problem, essentially, right?
In my opinion, so then there's a separate concern, right,
for which no compensation can be given,
which is some people just really fucking hate the idea
of an AI absorbing their essence and reproducing it.
Yep.
Right?
Yeah, man.
Yeah.
You know, you don't want your ghost going into the machine.
Yeah.
Don't take a picture of me.
Yeah.
Yeah, exactly.
Straight up.
And this is one where I don't have a straight answer.
Yeah.
Right.
Like, I'm sympathetic to artists who feel that way.
I do think that there probably should be a model where they can choose,
and, like, it probably shouldn't be an opt-out model either, because that's bad.
So probably some sort of opt-in model with regard to training.
There are ways that this is being worked around, I think.
So people are putting together certified training data sets
where they know what's going into it.
They have the permission to use the things that are going into it
so that it doesn't have this kind of tainted aspect to it.
But I don't think, I don't want to just be dismissive of artists
and be like, oh, you're just being petty at this point
because you don't want the robot to learn your stuff.
Yeah.
Another way in which I think this also gets really, really weird is I think their argument,
the justification for their argument, diminishes the closer to personhood the AI gets.
That's a good point.
You know what?
Let's talk about that for a second.
Is AI, the thing we have, the large language models we have, is it conscious yet?
And do we expect it to be?
The short answer is not yet.
How do you know?
Yeah, that's the hard problem.
Right.
So the long answer is we don't know and we can't know.
That's the correct answer.
Right.
So there is no test for consciousness.
That's the problem.
In my articles, I talked about this as, like, you can look at external versus internal
features of an object or an entity, right?
Your external features are your behaviors,
your responses to stimuli, et cetera.
And AI has that.
AI reasons in an external sense,
is how I would put this, right?
If you put in an input,
it will give you the correct output
fairly reliably,
at least as reliably as humans,
if not more so, in a lot of cases.
Now, it is fragile in certain spots,
as we've all seen online.
You know, it gets the number of R's in blueberry wrong for a while,
but that gets corrected for,
and that fragility is probably, I think,
the closest we have to an external indicator
that these things aren't currently conscious.
So the idea here would be: what makes it so that human beings
don't fuck up the way that AIs do, in weird, fragile ways?
One good argument would be
it's because we have internal consciousness
that provides a slightly better check
on something just being absurd
before we just say something patently absurd,
which obviously is not a good check if you look around, right?
People say fucking absurd things all the time,
but that's probably what's going on there.
Now, as these systems get more complicated
and have more internal points of reference
and like understanding, quote unquote,
that probably goes down and goes away.
And as things like that go away,
you have less and less evidence that these things are not conscious.
So our friends over at Anthropic, every month now, there's another article where they're like,
hey, we kind of think Claude might have emotions, or might have something that is like emotions.
But what they mean there is it has internal representational states or representational knowledge, right,
that impacts how it responds to things.
And on one version of understanding consciousness,
the only real difference here is a matter of degrees between us and that, right?
This is not the very pat, you know,
we're-all-just-token-generators line.
I don't think we actually work the same way that AIs do in that sense.
But at the end of the day, when we're talking about consciousness,
we're probably talking about complex internal self-referential states,
layers of feedback loops of complexity and such.
Now, where that gets hard is,
is that necessarily going to turn into phenomenal consciousness?
So by which I mean,
there's something it's like to be me or you or a bat or the AI, right?
And we don't know, because we don't know how phenomenal consciousness arises,
we don't know what it is, we can't study it directly.
We don't have a good answer to that.
We probably should figure that out first, I think.
I don't think we can.
Like, so this is,
if you've ever seen the movie Ex Machina?
Which would be perfect, because then we would never create AI.
So if we had to do that first,
then we'd never have AI and we'd be in a good spot,
I think.
Better spot than we are now.
I mean, I've repeatedly argued
that we are hurtling towards this problem
with no solution.
Like, straight up.
Yeah, in the movie Ex Machina,
they explain it as the chess-playing robot problem.
Right?
So if you play an AI in chess,
that's really good at chess,
right? And it can play the game.
Yeah. I've never beaten a computer, not a single time. Not in chess.
Right. You still have a separate question though. Does it
understand or know that it is playing chess? Is it having an experience of playing chess like we do
when we are playing chess? And the answer is, I don't think we have a way to test that.
We don't have a way to look inside. No amount of external information can guarantee it one way
or the other. And so basically what you're just going to get are increasingly indistinguishable
entities, indistinguishable from us in behavior. And at a certain level of indistinguishability,
you're probably at risk where you just have to kind of accept a, you know, a precautionary
principle of treating it as if it is a conscious entity. This is the Star Trek answer, which is,
you know, if you treat Data like it's not conscious, you risk enslaving an entire race of beings
that actually are conscious.
So better to...
We're probably not going to get into it too deeply,
but the second article
you wrote about sex bots,
if you treat them as if they are
conscious, then it's wholly unethical.
Like, it's ridiculous.
You have to get to the point
where you think that it's not like just like a,
I don't know, like a text-based fleshlight or whatever.
But I think, like, once you get past that point,
once we get to the point
where it's indistinguishable,
then yeah, it becomes
wholly, I mean, just genuinely unethical in every way to have a sex bot at that point.
Yeah, right? If you think that a chat bot is bad, turning it into an actual sex slave is probably worse.
Yeah. I like the way you simplified that. I think that's very good. Yeah, absolutely. Yeah.
Yeah. There are specific costs to it being a sex bot as opposed to a person bot. Yeah. Right. But they're
both bad in very different, like very significant ways.
And I think you have to get to the point where you have to ask,
is this thing conscious, and you're making a genuine,
like, you're genuinely wondering whether it's conscious or not.
I mean, I think that point has to be reached
before you start thinking about those kinds of ethics.
I would be shocked if we don't see court cases about this in the near future.
Yeah, I would be too.
Yeah.
People arguing that their bot is a person.
There was just a court case recently about, I don't know if it was necessarily
AI, it was, like, how the internet affected someone, and they wound up winning. So yeah,
this could definitely be something that we see in the future. I want to ask you
something as a skeptic, though, and not as an ethicist. Okay. So let me ask you this.
The one dangerous thing I'm seeing when I listen to Joe Rogan is that Joe Rogan has a really
interesting view of AI. He is sponsored by
Perplexity, and he will constantly ask Jamie to type something into
Perplexity so he can read the answer. And he will hold, at the same time, something in each hand:
in one hand, it is a super genius. And in the other hand, there's no way it could know, and it
doesn't know this stuff. And he decides
which hand is the more important one, right?
Which one is the true answer?
It's either, there's no way it could understand this stuff,
so it's going to get the answer wrong,
or it's a super genius and it got the answer right.
And it's based on his assumptions going in.
So what he does is he treats it like a supercharged,
like a confirmation bias machine.
Whereas if he types in something about, say,
the Gulf of Tonkin incident,
and it lists all the conspiracy stuff
that he wants it to list, it's awesome.
If it brings up some of the skeptical points
about why you shouldn't be thinking
that that was the most important thing
that got us into the Vietnam War
and whether or not it happened, et cetera, et cetera,
he will say, well, how could it know?
There's no way it could know.
So he's treating it like a supercharged confirmation bias machine.
I would imagine that if Joe Rogan is doing that,
lots of people are doing that.
I don't think that Joe Rogan is an outlier.
I think he might be someone who might be doing this quite a bit.
That seems to me, you know, not talking ethically, but skeptically,
that seems like a very dangerous tool to put in the hands of people.
Yes.
Broadly speaking, like, there is a reasonable concern that especially sycophantic AIs
create a system-wide cognitive bias where everyone is in an echo chamber with their little AI,
telling them how great their ideas are.
South Park did a really good episode on this.
Now, the flip side there is, my understanding of Joe
is he was already doing that with the internet beforehand.
Absolutely, absolutely.
This doesn't create anything new.
It just creates a brand new tool for him to use
to help fuel that and dismiss it when he wants.
Yeah, and I think there, I'm not sure there's any greater harm
than when he was just using a search engine
and doing the same thing.
It's just a little bit faster, I guess.
A little more eloquent, a little more streamlined.
Yeah.
I mean, the funniest part, the funniest thing to me about everything you just said is,
I just looked it up.
Perplexity, as far as I can tell, is just ChatGPT.
It's just another company running ChatGPT,
which is super funny, because if he really wants a conspiracy-riddled AI,
why is he not using Grok?
He's not sponsored by it.
That's the thing, is they're paying him. Why is Elon not sponsoring Joe Rogan?
Because Elon doesn't have to. Because Elon gets a full blowjob every third episode. So why would he bother? There's no reason. He loves Elon.
It's so bizarre. Why would he freely use Grok? It's like when the Defense Department was trying to force Anthropic to make weapons for them, and it was like Grok and GPT are just sitting there like, we'll do it for free.
Yeah, they're begging for it. Yeah, as soon as that happens, they're begging for it.
It tells you how bad their relative models are, I think, is the reality.
Like, Grok is so bad that even the fascists don't want to use it.
Well, Aaron, here's the thing.
I promise you that when Tom comes back,
we're going to have you on to be the AI defender
so Tom can yell at someone for a full hour,
and people will enjoy it.
And the people who are in our comment section will enjoy it.
And then you can come here.
And I promise you,
I promise you it won't be fun.
I promise you it will not be fun at all,
but Tom would love to, I'm sure,
be involved in this conversation next time.
I don't know that we fixed anything,
but I think, like, you definitely pointed out
a lot of really interesting ethical concerns
when it comes to AI,
and it gave people a lot to think about.
Let me just, can I just state some very explicit things?
You know, a lot of times I hedge;
I'm going to say some very explicit things here at the end.
I definitely want you to make sure
that you feel comfortable about all the things you said.
I want to be canceled for the right reasons.
Right? So real quick: current AIs are not phenomenally conscious. We won't be able to know if they do become phenomenally conscious, but they're probably not right now. Not right now. I don't think so either. Yeah. Don't form relationships with AI. Please don't. It's just not a good idea. Don't do it.
It really is something else. Yeah. Yeah. Using AIs for various tasks is ethically complicated. That doesn't mean it's necessarily wrong, but you really have got to think about all of the issues going into it and ask yourself,
is it really contributing enough here to counteract all of those costs?
And there are negative things we didn't even bring up.
There's obviously negative and maybe some positive stuff we didn't bother to bring up.
For sure.
It is a very fraught discourse.
And so finally, give people grace on this discourse.
Like everybody is working with a lot of misinformation and a lot of charged emotions on this.
And, like, whether someone uses AI or thinks it's the devil, try to have some sympathy for where they're coming
from, because this is not an easy topic.
There are not easy black and white answers on this stuff.
Be nice to Aaron in the comment section is all we're saying.
You can hate me.
That's okay.
You can take me apart.
Just not the other people, right?
I'll be the AI Jesus,
I'll take all of it onto me.
You're the one they crucify.
I'll save you all.
Amazing.
Dr. Aaron Rabinowitz, thanks so much for joining me today and talking about this
very, very difficult topic. I appreciate you coming on, man.
My pleasure. It's always a fun chat
with you, and I'm always happy to come back for more.
So I want to thank Dr. Aaron Rabinowitz
for joining me today. Remember, you can check
him out at the Embrace the Void podcast
as well as Philosophers in Space.
You can also check out the Creator
Accountability Network. I think they're linked in every
single show we do.
yeah, we're going to be back.
Hopefully I'll be back next week. I'm going to see;
if Tom's not here
and it's difficult to book things,
I might wind up skipping a week. We'll know more next week. If I am skipping a week, I'll put out a short video,
and I'll put it out early so people won't expect a show on Monday.
Tom isn't expected back next week, but I will try to put something together
and see if I can piece a show together. The show itself is not the same show, obviously,
when Tom isn't here. I really am hoping for the best for him and Haley. I know he'll come back
when Haley is feeling better,
and we hope for a speedy recovery for her.
You can always send messages to our email,
and then Tom will, of course, relay those
if you want to send your best to Haley.
And we're going to wrap it up for this show.
I want to thank Aaron Rabinowitz for coming on,
and we're going to come back hopefully next week,
if not in two, and we're going to leave you,
like we always do, with the Skeptics Creed.
Credulity is not a virtue.
It's fortune cookie cutter, mommy issue, hypno-Babylon bullshit.
Couched in scientician, double bubble, toil and trouble, pseudo-quasi-alternative, acupunctuating, pressurized, stereogram, pyramidal, free energy, healing water, downward spiral, brain dead pan, sales pitch, late-night info-docutainment.
Leo Pisces, cancer cures, detox, reflex, foot massage, death in towers, tarot cards, psychic healing, crystal
balls, Bigfoot, Yeti, aliens, churches, mosques, and synagogues, temples, dragons, giant
worms, Atlantis, dolphins, truthers, birthers, witches, wizards, vaccine nuts.
Shaman healers, evangelists, conspiracy, double-speak stigmata, nonsense.
Expose your sides. Thrust your hands. Bloody, evidential, conclusive.
Doubt even this.
If you enjoyed the show, consider supporting us on Patreon at patreon.com forward slash dissonance pod.
Help us spread the word by sharing our content.
Find us on TikTok, YouTube, Facebook, and Prits, all under the handle at Dissonance Pod.
This show is CAN credentialed, which means you can report instances of harassment, abuse, or other harm
on their hotline at 649-4255 or on their website at creatoraccountabilitynetwork.org.
