Hard Fork - ‘Something Big Is Happening’ + A.I. Rocks the Romance Novel Industry + One Good Thing
Episode Date: February 13, 2026. This week, we discuss Wall Street’s software-stock sell-off and a viral essay on X about the potential for widespread job displacement from A.I. Then, the New York Times reporter Alexandra Alter walks us through the process that a growing number of writers are adopting to churn out romance novels with help from A.I. chatbots. Finally, we each share one bit of good tech-related news — a new way to make playlists on Spotify and progress toward decoding whale sounds. Guest: Alexandra Alter, a New York Times reporter covering books and publishing. Additional Reading: The Dark Side of A.I. Weighs on Tech Stocks; Matt Shumer’s essay “Something Big Is Happening”; The New Fabio Is Claude; How a New A.I. Tool Fixed My Single Biggest Problem With Spotify; How A.I. Trained on Birds Is Surfacing Underwater Mysteries. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Well, Kevin, did you see this? Elon Musk told employees at XAI that the company needs a factory on the moon to build AI satellites and a massive catapult to launch them into space.
Yes, this is his new pivot from Mars. He's no longer interested in Mars as he was all those years. Now he's going to the moon.
This Looney Tunes ass company, I swear to God. Elon Musk, I have a message for you.
If Bugs Bunny ever shows up and tells you to climb into that catapult, do not trust him, okay?
That is a rascally rabbit, and you might find yourself in space, my friend.
He's going to launch you from the moon catapult.
You know what? That might be the way I want to go out, obviously.
It's just to have a nice long career in journalism and then put me in the moon catapult.
I'm ready.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, Labor Pains.
why AI is causing freakouts from the markets to the workforce.
Then, the Times' Alexandra Alter on how the romance novel industry is being overtaken by
AI authors.
And finally, it's time for our new segment, One Good Thing.
It replaces our previous segment, a lot of terrible things.
Well, Kevin, welcome back from our nation's capital.
Yes, I was in D.C. very briefly this week there for some book meetings, and it was
very cold.
But the bigger observation is that Washington, D.C. is like freaking out about AI.
Is that right?
Yes. So everywhere I went, every meeting I had, people were sort of asking me, is this stuff real?
Is it happening? Are we in the takeoff? Is the singularity approaching? And it does feel like
the sort of political salience of AI has gotten much, much higher just in the past couple of weeks.
Well, why do you think that is?
So there are a lot of reasons for that. I think one of them is that I think there's been a lot of people waking up to the
new agentic coding capabilities of these models. We've obviously talked about that on the show,
Claude, et cetera. I think that is starting to kind of make its way out into the world. There's
also this stock market stuff that's been going on with a lot of the software stocks that are falling
because of the threat of AI. And then I think there's just sort of this ambient cultural vibe
shift happening that has led to a lot of people in my life who are not like AI bubble people
texting me and saying, hey, is this really something I should be worried about? Is my job at risk here?
And so today I think we should talk about this because, among other reasons, there is this viral essay
that I've been sent now, no fewer than three times just in the past day, that is by a man named
Matt Schumer called Something Big is Happening. And it's basically a distilled version of something
you and I have been talking about on this show for a while now, which is like these tools are getting
really good. They're changing the way that programmers work. They're approaching some sort of
inflection point, and everyone needs to be worried about this. Yeah. So all of this is starting
to make me think that there is something big happening here. And I'm not sure it's exactly
what Matt thinks is happening. But I do think we are reaching an inflection point in people's
feelings and senses about AI and where it's going. And I think we should spend some time today
exploring that. There's a lot to dive into, Kevin. And before we do that, perhaps we should make our
disclosures. Yes, I'll go first. I work at the New York Times, which is suing OpenAI, Microsoft,
and Perplexity over alleged copyright violations.
And my boyfriend works at Anthropic.
Okay, so let's start with this,
what they're calling the SaaSpocalypse,
the sell-off in some of the tech stocks.
SaaS is, of course, software as a service.
Not the sort of sass you're used to seeing on RuPaul's Drag Race.
Yes.
Different kinds.
So this would be companies like Salesforce and Workday,
and what are some other good SaaS companies?
Here's a SaaS company.
If you've ever seen a billboard in San Francisco and you haven't understood what the company does, that's a SaaS company.
Yes. These are companies that sell software to other businesses. And over the past couple of weeks, we've seen a lot of these companies' stock prices falling precipitously. So on Monday this week, Monday.com, the project management software company, their stock plunged more than 20% after a weak financial outlook.
We had a real case of the Mondays over there.
Yes. Workday, which makes sort of HR tools for companies. They announced on Monday that
their CEO is stepping down after their stock lost 17% of its value last year.
His workday came to an abrupt end.
Yes.
And a bunch of other SaaS company stocks also fell.
Salesforce, Shopify, Adobe, SAP, Oracle, Microsoft.
Basically, if you were a company that builds software for other companies,
you are not having a good month.
And give us the basic investor thesis, Kevin, for why those companies aren't worth as much anymore.
So I think in this specific instance, it's not clear to me that there was like one particular
trigger. People pointed to this set of plugins that Anthropic released, which included some tools for
law firms trying to use AI. And some people think that was sort of behind a lot of the sell-off.
I don't think it was that in particular. I think maybe that was sort of the straw that broke the
camel's back. But I think there is sort of a mounting sense that, as we've talked about on this show,
you and I and anyone can now, theoretically at least, build our own version of these tools, and these companies
may not be as valuable. Yeah. I mean, there have always been
people who would look at a product like Salesforce, which offers customer relationship management.
You know, if you're a salesperson, it helps you keep track of all of your different leads.
And they've said, well, you know, that's basically just a fancy spreadsheet.
You know, I could make my own spreadsheet. And there are companies that are born almost every
year that kind of take a direct run at Salesforce and they say, we can build a better version of
this. And I do think that when Claude Cowork came out and all of a sudden, you could just
take a bunch of files on your computer and throw them into Claude and get
something useful back, there were people who said, huh, maybe we can actually just make a version of this
ourselves. And I would say, as a small business owner, I used a plugin that Claude has for finances,
and I just took all of my financial data that my bookkeeper has kept for me, you know, over the past
five plus years, and I threw it into Claude. And we had a nice long conversation about what was going
on in my business. Now, it so happens that I don't pay a startup to do this for me. This was just kind of a
little bonus that I got. But could I imagine some people saying, hey, why am I using this kind of
like bespoke financial startup to do business analysis when now I can just put these files into
something that lives on my computer? So I actually do feel like I have a sense of why the market
freaked out a little bit. Yeah. And do you feel like that fear is justified? Like do you,
because there have been a couple of reactions to this from the business community. One reaction is
the sort of investor reaction, which is, oh my God, all of the SaaS companies are overvalued.
Salesforce doesn't have a moat anymore, Workday doesn't have a moat anymore.
Everyone's going to be vibe-coding their own versions of these tools and using them at their
businesses.
And the other reaction, which is sort of the reaction to that reaction is like, calm down,
don't freak out.
No one is going to be vibe-coding their payroll software.
That's not how this works.
So there are people saying, this market reaction is overblown.
And despite the fact that these vibe-coding tools are very cool, they are not going to lead
to the death of the software industry.
So this gets at a really interesting question that I think is still unanswered, Kevin,
and that is what makes these companies truly vulnerable?
Is it the technology itself, or is it that the technology will just enable different kinds of
business models, right?
So the argument that it's the technology itself is the person who says, look, we're
just going to be able to vibe code all our own software now. Or we'll have like just very smart
agents that can do all of the different things that every piece of software we used to buy did for us,
right? That's the sort of technology demolishes everything argument. And I would say that that's like
a minority view. Most of the people that I read and talk to do not actually think that that's
going to happen. But there is this other view, which is that the technology enables a change in the
business model. And I think law is a really good
place to think about that, right? Because lawyers are expensive. They charge by the hour; the most
expensive ones charge $1,500 for an hour of a lawyer's time. Well, what happens when you don't need
an hour of a lawyer's time anymore? What happens when there's a legal startup that does all your
contract review essentially instantly? There's going to be a different business model around that.
And so if you're a big white shoe law firm and you're charging $1,500 an hour, all of a sudden, it does
seem possible, then one of these little startups is going to eat your lunch. So that's the
distinction I would draw. Yeah. So you see this more as like a case of the startup with 10 employees
being able to use AI to do the work that would have required a thousand people a year or two ago.
Yes, absolutely. Let's talk about another really common business model in tech. A lot of these
business software companies have you pay by what they call the seat. So if you want 10 of your
employees to be able to use software, like, let's say, Notion, you buy 10 seats.
Another thread of anxiety rippling through Silicon Valley right now is maybe paying by the
seat isn't really going to make sense anymore because we're actually just going to have one
agent that does that whole thing. We're not going to expose that to employees. Employees don't need to
worry about that anymore. And so we're just not going to buy seats. So again, if I had like one
prediction to make here, it's that you're going to see more companies experiment with outcome-based
pricing. You're already starting to see this with companies like Sierra, which is a customer
service startup, and they will sort of handle customer service, like inquiries that your business
may get, and you pay Sierra based on how many calls it resolves for you. So that's the kind of thing
that I think we're going to start seeing more and more of. And to be clear, if you're, if you're like
an incumbent SaaS company and you have a seat-based business model, that is eventually going to be
a problem for you. Yeah. So do you think the sell-off is justified? Do you think people are panicking for the
right reasons inside these companies and their investors?
Here's a funny thing. As journalists, we're not allowed to buy individual stocks.
So I actually never have any idea of what the investors are supposed to be doing.
So I'm not going to, you know, I'm not going to comment on whether I think, you know,
the stock market is justified here or not. But I'm happy to comment on the overall picture,
which is, do I think that AI is about to change a whole lot of business models and that a
whole lot of businesses are probably going to have to either change dramatically or go out
of business as a result? Absolutely. Yeah. And I think this notion that like no one is going to vibe
code their own payroll software, I think that is, like, I am not as convinced of that as some people.
I think that if you are a business and you have, you know, 10,000, 20,000 employees and you're
paying by the seat for some piece of software, whether it's payroll or Workday or your HR
compliance software, those are real expenses. And so do I think that the CEO of the
company is going to vibe code the thing, you know, with one Claude prompt that is going to replace the
software tool? No, but I can totally imagine a world in which you have sort of one or two full-time
developers who are managing and overseeing and repairing your own internal software and you don't
have to pay for a bunch of seats for someone else's thing. I think that's a very plausible outcome.
So I am not as dismissive of the sort of the fear around these SaaS businesses as some people.
And yet, if I were any of these large enterprise businesses with tens of thousands of employees
charging, you know, per seat for software, I'm very worried.
Yeah.
Now, another pushback that we're seeing to this narrative that, like, AI is going to eat all the software
companies is, well, it's all going to be so insecure and sloppy and buggy that no one is going to
rely on it.
What do you make of that argument?
I think you're absolutely right that it will be buggy and insecure, and you're wrong that
people won't rely on it. If there's one thing we've seen with Moltbook mania, it's that security
is basically the last priority, at least for the, you know, bleeding-edge maniacs who just want to
try everything first. Right. But if you're a law firm or a bank or something like that, you do care
about things like that. I think that that is true. And, you know, there are whole startups, and I know
this because I see the billboards around San Francisco that specialize in various compliance functions.
You know, it's like, well, if you're going to offer this kind of service, you have to be this
kind of compliant, and so you're going to pay us to make sure that all goes very well.
That to me, like, because that is a repeatable automated process where you're just trying to get
your business to like match a bunch of checkboxes on a form, that just kind of seems like something
that you're going to be able to train an agent to do. So like, there are going to be some categories
of things that I think are just going to be very risky for a long time. And then there are just
these kind of like automated compliance functions that I just look at and I think, well, I don't,
I can't think of a reason why a good agent wouldn't be able to do that.
Yeah. And I think in these cases where it's in a more sensitive industry or something that has more regulatory or compliance needs, like, I just think it will take a little bit more effort to automate some of these functions, but I think it's totally doable. So one thing that we've heard about just in the last week was this story about Anthropic developing tools with Goldman Sachs. So together, these companies have been deploying AI agents inside Goldman. And Anthropic actually has forward-deployed engineers that will, like, go to, if you're a big customer, you know, like every other
AI company, they will send people to your office to like work to put agents into your workflows.
And so that is also something that we're seeing is like for the really sensitive things,
it is not impossible to design AI tools that comply with all your various requirements,
but you might need a little more handholding.
Yeah, Goldman said, Claude, we want you to design the most dangerous mortgage that's ever existed.
We want to get that on the market.
Let's go, buddy.
Claude, don't do it.
Now, do you want to talk about this big essay that's captured your imagination?
Yes, well, I should say, like, I think if you are a regular listener to the Hart Fork podcast,
none of what is in this essay will be news to you.
But this essay did go insanely viral.
I am hearing about it all over the place.
It's called Something Big Is Happening.
And it's basically an explainer for non-technical people about why everyone who's,
or a lot of people who are, in technical fields are freaking out about what's
happening in AI. And this is by a guy named Matt Schumer. He runs an AI company, so there's a little
conflict of interest there. But he's basically saying, look, the technical parts of my job are
automated, not they will be automated or they might be automated, but I am no longer needed
for the actual technical work of my job. He talks about the advances in recent coding models,
these agentic coding systems. He talks about how, you know, a lot of people have not tried AI,
since the original sort of LLM boom and their impressions of AI are falling behind.
Again, none of this is news to you if you are a listener of this podcast.
But he's sort of talking about this idea that these new models are contributing to their own
development, this idea of recursive self-improvement.
And he points to GPT-5.3-Codex, which came out just last week;
OpenAI says this is their first model that was instrumental in creating itself.
So the AI models are now, at least if you believe the labs,
contributing to their own development,
starting to accelerate the development of these AI systems.
And so what might have taken, you know,
if there might have been six months between one model and the next,
a year ago, now that might be a month or two
or even a couple weeks.
Yeah, and look, I'm going to throw a hype flag on the play, Kevin,
because I think that all labs have a vested interest
in you believing that if you use their software,
you can just tell anything to improve itself,
and it will become amazing.
I'm not saying it's not true.
I'm just saying we should approach such claims with a degree of skepticism.
Now, at the same time, I've talked to enough software engineers who work on this stuff
that I do think that it is true in some ways.
Like, that if you squint, that it is true.
It is 100% true that they use these models now in the production of everything that they do, right?
Like, Claude and GPT Codex are deeply integrated into the workflows of both companies.
And, you know, you look over the past three months, does it feel to you like there's been an acceleration
in the pace of releases? It kind of feels that way to me. We'll see if that pace feels like it
continues to accelerate, but it feels like things are moving faster now than they did say in
February 2023. Yes. Right. So there's evidence for it. I just always want us to be a little
cautious with these claims. I think that's right. At the same time, I think that the timelines here
are shorter than many people would imagine. I talked to an executive at one of the big AI companies
this week who said that basically right now, software engineering is kind of 90% automated, right?
You still need a human to check in on the code that's being written, to make sure it works,
to fix things when they break. But, you know, that basically within the year, this person's
prediction was that software engineering will be fully automated. Now, that could take a little
longer. It could happen sooner than the end of the year. But I think that is sort of the moment
that a lot of people in the tech industry
are looking at as sort of the beginning
of what they call the takeoff.
And one specific detail I would add here
is that I've been talking to engineers
who've been telling me about the specific ways
that these agents work.
And something that comes up again and again,
I've heard Sam Altman say a version of this,
is that these models just never get tired.
They're relentless.
And so you can say to them,
I want you to meet this objective,
and they will just keep trying things
until it works. And of course, there are some people who work this way, but those people
don't work 24 hours a day. They don't typically work overnight. Sometimes they get tired.
Their morale drops. That's not true of the agents. And so engineers I've talked to have started
to see this behavior. And this is another reason why they think, aha, we're starting to enter this
takeoff phase is the relentlessness of the agents. Yes. And so that's basically the point of this
viral essay is things are happening, things are accelerating. It is not just coming for programmers.
It is going to be a force in all kinds of white-collar fields.
This is sort of the worker side of the AI panic.
And Matt's recommendations are, you know, basically figure out how to use these tools, try
them out if you haven't tried them in a while.
He says, get your financial house in order.
Basically, this, you know, the next few years could be very disruptive to your career.
If you're a white collar worker, so like, don't, you know, take on a bunch of new debt or
anything like that.
And then he...
What am I supposed to do with this yacht I just bought this week?
Oh, my God.
You tell me. So yes, I don't feel like this is the best explanation I've heard for what people
should do, but I do think that this was something that has been rocketing around the internet.
I think that people who have not been paying attention to what's happening in AI are starting
to wake up and maybe in ways that are panicky and maybe just in ways that are sensible.
Well, let me ask you about the panicked observation that kicks off the piece, which is that
he likens this moment to the February before the pandemic and says, basically, there were people
in February 2020 who were paying attention, who said, hey, this thing is growing exponentially,
and it's likely to catch us all up in it. And Matt is saying, you should be feeling the same way about
AI. How do you feel about that comparison? I actually think that comparison is fairly apt.
I had a similar moment the other day. I was looking at this chart that came out of a report
that semi-analysis, the great semiconductor newsletter, put out, where they looked at the percentage
of GitHub commits that are being made by Claude Code.
So essentially, a GitHub commit is like when someone pushes some code to a GitHub repository
to an open source project.
And right now, the percentage of GitHub public commits that are being written by Claude Code
is 4%.
And we should say there are some sort of addendums to that.
Some of these are being written by Claude Code but pushed by humans, in which case it
doesn't show up as Claude Code.
But just say it's 4%.
Sure.
they are predicting that at the current trajectory, by the end of 2026, more than 20% of all daily commits on GitHub public projects will be authored by Claude Code.
That is one tool.
So that is essentially a story that feels to me like an exponential outcome here, like the beginnings of something like a pandemic where all of a sudden it's like this tiny little thing and only the real like data nerds are paying attention to it.
but if you just extrapolate on a straight line,
you end up in a very different world a year from now.
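As a rough sanity check on the extrapolation being described, here is a minimal sketch of the compounding math. The 4% starting share comes from the SemiAnalysis figure cited above; the roughly 15%-per-month growth rate is an illustrative assumption chosen to connect the 4% and 20% endpoints, not a number from the report.

```python
# Back-of-the-envelope: how long a 4% share takes to pass 20% under
# steady compounding. The ~15%/month rate is an illustrative
# assumption, not a figure from the SemiAnalysis report.
def months_to_reach(start_share, target_share, monthly_growth):
    """Months of compound growth for start_share to reach target_share."""
    share, months = start_share, 0
    while share < target_share:
        share *= 1 + monthly_growth
        months += 1
    return months

if __name__ == "__main__":
    print(months_to_reach(0.04, 0.20, 0.15))  # → 12
```

Under that assumed rate, the share crosses 20% in about a year, which is the shape of the curve the hosts are gesturing at: small today, dominant on a twelve-month horizon.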
Yeah, and some people always look at those charts
and they say, well, you shouldn't extrapolate along a straight line
and that the real world, things move slower.
And, you know, I think there are places where that is true.
But I looked at that same piece,
and I do believe that the ceiling for, you know,
Claude commits to GitHub is probably not 20%.
I think it's probably going to be a lot higher than that.
I think that by the end of the year,
if you are still writing code by hand,
that is going to be an obsolete behavior.
Well, I really like writing it in longhand on this legal pad, and you can't take it away from me.
I just think, like, it is so clear to me that this is the way software development is going,
and we should say, like, it's not, that has spillover effects in a lot of other industries as well.
A lot of things that we would, you know, consider normal companies are software companies.
Banks produce software. Their interfaces are software. Law firms have software.
So I think that the notion that this is just going to be confined to coding is wrong, and I think that people should be paying attention to what's going on in coding.
All right. Well, any practical tips as we wrap up here for people beyond just pay attention to this, right?
Because I think this is kind of the trap that we fall into, and this is why a lot of people tune out this kind of conversation, I think quite understandably: people keep saying, hey, everything's about to change. A lot of people hear that, and then they think, okay, well, like, let me know when everything has changed.
like, you know, in the meantime, I have to get dinner on the table, right?
What actually should people be doing, aside from, like, listening to the Hard Fork show and rating
it five stars in a podcast app?
Well, subscribe to the YouTube channel, too.
That's a great idea.
No, look, I want to do some writing and thinking about this because I feel like this has been
the prescriptive, what should you do part of this, has been the weakest in my estimation.
The thing that everyone says is, like, get familiar with the tools.
And I do think there is some merit in that.
I think that if you have not touched Claude Code or Codex or one of these agentic coding tools,
maybe the last time you used an LLM for serious work was a year or two ago.
I think it's worth getting back up to speed on that.
I find that a little bit maybe unsatisfying as a response to all this,
but I'm curious, do you have any better tips for people who are maybe starting to freak out about AI in their own lives and work?
Well, I just think that this needs to be part of our political conversation, right?
I think this needs to be a conversation that constituents are having with lawmakers.
They need to be telling them, hey, my boss is telling me that I need to start using AI every day
and that there's a high risk that I'm about to lose my job and that maybe there's going to be
permanent unemployment in some job categories.
What is your plan for that, right?
Many economists are looking at that and they believe that is at least a possibility in
at least some industries, and they don't believe the AI lab spin that, oh, don't worry,
technology always creates more job opportunities in the long run, right?
So we do need to have a political conversation about this, and we may need to have, you know,
real sort of government answers for people if and when their jobs do become automated away.
Yeah. I think that the people at the AI companies are woefully underestimating how much people
hate this technology already. And unemployment is still at near record lows. And so I think that
if and when this does start to, you know, eat away at some of these white collar industries,
I think the backlash is going to be furious.
I think it's going to be very intense.
We're already starting to see elements
of this kind of populist reaction to AI.
Bernie Sanders recently said
that he is going to introduce a bill
to put a moratorium on data centers for AI.
So I think we can expect a lot more of that kind of thing.
And I think this is something that
if I were working at one of the big AI companies,
I'd be very worried about.
When we come back,
It's 50 Shades of AI in romance novels.
The Times' Alexandra Alter is here.
Well, Kevin, love is in the air.
Or was it AI-generated love?
Yes, it is almost Valentine's Day,
and we have a story today
that we think will bring a little bit
of the Valentine's Day spirit to our listeners.
So earlier this week,
New York Times reporter Alexandra Alter
put out a story we both loved
about the way that AI is disrupting
the romance novel industry.
Yes, are you a big romance novel reader?
You know, I can't say I've read too many of them, but one thing I know about that
industry is that they are very early adopters of new tech and basically always have been.
Yes, and according to Alexandra, a growing number of writers are using AI to churn out tons and
tons of these novels at record speed. In one case, she reported on a writer who went from writing
about 10 books a year, which is already a lot, to now doing more than 200 romance novels a year
with the help of AI. And to put that in perspective, that is 200 times
as many books as Kevin will publish in
2026. It's true.
So we should say, like, these are not
going to be winning any Pulitzer Prizes,
but they are stories like
the Whiskey Wedding or Diagnosis
of the Heart. Sort of think of, like,
the old, you know, romance novels with,
like, a shirtless firefighter
on the front. That's kind of the genre we're talking about here.
What they used to call a bodice ripper, back when
the gals still wore bodices. Yes.
And this has been very divisive
among writers, publishers, and readers.
And it just raises a whole lot of questions about how this technology is moving into industries like publishing.
That's right. And while we didn't quite intend it this way, if you've always wanted a chatbot to talk dirty to you,
this segment will have some practical tips.
Yes, very useful for that.
So we're very excited to hear more from Alexandra about this story. Let's bring her in.
Alexandra Alter, welcome to Hard Fork.
Hi, thanks so much for having me.
How did you discover this story? How did you stumble into this?
So the question that led to this story came up last year when OpenAI said that they would allow erotic content.
They were going to make this change and start with age verification and then allow users to generate erotica,
which is something that users had apparently been clamoring for.
And so then I started asking around to figure out whether romance authors and publishers were feeling threatened by this.
Was this something that they felt like might erode the market for traditionally published romance novels?
you know, if readers could instantly generate their own love stories.
And I was expecting to find a lot of hand-wringing and anxiety, which I did find.
But I was also surprised to find a few writers who were willing to speak to me about how much they love AI
and how they've been using it to churn out dozens of romance novels, and they feel like it could revolutionize the genre.
So that was a surprise to me because it's a very contentious issue in the literary world.
Most people, if they're using it, are not open about it.
And then the next question was, how good or bad is AI at writing sex and love stories? And the answer was it's pretty bad and it requires a lot of help.
So we'll get into the controversy around it, but I want to hear first about kind of how this is actually working. Tell us about what is the workflow for a romance author who has decided, you know what? I'm tired of writing all these bodice rippers. It's time to just hand that over to the large language model.
It's so interesting because there's different platforms that writers are using. There's places like Sudowrite. Then there are these sites that will generate customized erotica, like Red Quill or My Spicy Vanilla. And then there's just sort of your general bots, you know, your Claude, your ChatGPT, your Gemini. And so writers have different tools that they're using. But what I learned from talking to a couple of people was that if you learn how to prompt the bot correctly, it will write a pretty compelling sex scene. You have to give it kind of an
outline. It helps if you give it a ton of information and tell it what subgenre you want to do.
Because of course, they've ingested all of these books from different subgenres. So you can tell it,
I would like a reverse harem mafia, enemies to lovers, slow burn romance. And it will deliver
all those beats. It will require editing. It will require prompting. But I think, you know,
what I heard from people who have played around with it a lot is, you know, some of them say they can
write a book in a day and have it edited, ready to publish. Wow. Now,
Have you ever tried Sudowrite?
I ask because you're a pseudo-writer.
Yes, I actually played around with Sudowrite before ChatGPT.
This was like one of the first AI programs that I ever played around with.
Interesting.
And it was somewhat helpful for me, but it was more oriented toward fiction writing.
So I totally get what the appeal of the tool is.
Tell us the story of Coral Hart, who's a longtime romance novelist who's been experimenting with some AI stuff.
Yeah, she was one of the people that I found who was actually teaching
other writers how to use AI tools to produce novels. And she has only been doing this writing
with AI for about a year. She's been writing novels, romance novels for a really long time and
was quite prolific, but realized she could absolutely supercharge her output if she started using
AI. So last year, she created 21 different pen names and published more than 200 romance novels
in all kinds of genres, super spicy erotica, tame, sweet teen stories, rom-coms. So it sort of had
her foot in every corner of this market to see what would pop. And so just through a sheer volume
kind of game, she ended up making six figures. She told me selling these books. And in the
process, she really learned a lot about which models, which chatbots would do what for her.
She would combine them. Now she's created her own proprietary AI writing system. But she has this
whole kind of spreadsheet that she shared with me, which was sort of like, Claude writes
beautiful sentences, but is terrible at sexy banter. ChatGPT will block you every time. Grok will do
whatever you want. It goes for the filthiest option every time. NovelAI was literally trained on erotica
and like it's out of control. So some of the writers said they actually had to prompt their bots to
sort of calm down a bit. I was so interested in the bits of your piece about how there's a lot of
steering needed to keep these things from sort of veering onto this very average set of romantic tropes,
right? If you don't give the chat bot
any guidance, it's going to suggest
you know that the characters should be having sex
in the bedroom or the shower.
Boring!
Yes. And so, Coral, this romance writer
who's running these classes,
is advising her students to like give them
a list of settings that are weird,
like a winery fermentation tank
or a stalled ski lift or a horse stable.
And she's also recommending that her students
give the AI a detailed inventory
of sexual kinks that are not
just the old, you know, typical ones. So this is, this is actually more involved than just typing
into a chatbot, like, give me a romance novel about a big city lawyer who moves to the country
and falls in love with the stablekeeper. Oh, wait, that sounds interesting. What happens next?
Yeah. Yeah, you know, it was super fascinating, too. I sat in on one of Coral Hart's classes,
which was specifically about getting AI to write decent, or even great, she said, sex scenes. And
she said, you are not going to get a good sex scene if you don't carefully prompt it. You're
going to get weird euphemistic stuff. She mentioned that Claude had written in one of her recent
drafts, his turgid manhood. And that was the kind of language she was getting. And so then she
started writing lists of words that the AI loved like shiver, unravel, manhood, moan, and blocking
them saying you cannot use these words. And I think the funniest thing that she said to students,
She said it's very important that you tell the chatbot to slow down because otherwise they just jump to the end of the scene.
Everyone's tangled in the sheets.
So she gave them a specific prompt, which was something along the lines of make it slow and agonizing.
Do not rush to the finish.
So that was her tip.
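The steps described here, pick a subgenre and trope beats, supply an unusual setting, block the model's pet words, and tell it to slow down, can be sketched as a small prompt-building helper. This is a hypothetical illustration, not Coral Hart's actual system or any specific chatbot's API:

```python
# A minimal sketch (hypothetical; not the workflow described in the piece,
# just its shape) of assembling a structured prompt: subgenre, trope beats,
# an unusual setting, a banned-word list, and a pacing instruction.

BANNED_WORDS = ["shiver", "unravel", "manhood", "moan"]  # words the AI overuses

def build_prompt(subgenre, tropes, setting, pacing="slow"):
    """Combine all the guidance into one prompt string for a chatbot."""
    lines = [
        f"Write a scene for a {subgenre} romance.",
        "Hit these beats, in order: " + ", ".join(tropes) + ".",
        f"Setting: {setting}.",
        "Do NOT use any of these words: " + ", ".join(BANNED_WORDS) + ".",
    ]
    if pacing == "slow":
        # Per the tip above: tell the bot to slow down,
        # or it jumps straight to the end of the scene.
        lines.append("Make it slow and agonizing. Do not rush to the finish.")
    return "\n".join(lines)

def violates_ban(text):
    """Post-check a draft for banned words the model slipped in anyway."""
    lowered = text.lower()
    return [w for w in BANNED_WORDS if w in lowered]

prompt = build_prompt(
    "enemies-to-lovers slow burn",
    ["forced proximity", "only one bed", "reluctant confession"],
    "a stalled ski lift",
)
print(prompt)
```

The post-check matters because, as the writers note, the models drift back to their favorite words even when told not to use them, so drafts still need a filtering pass and editing.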
The approach that you're describing here with publishing hundreds of romance novels in a year seems functionally indistinguishable from spam, right?
It's like you're trying to just sort of flood the market with as much material as possible
in the hopes that you'll, you know, connect with customers.
I'm curious, like, what is Coral's relationship with her own work, you know?
As an author, I take some amount of pride in what I create.
It's one reason why I don't use AI to write my columns.
But Coral seems to have a different perspective here.
Very different.
And that's a great question.
And I asked her, too.
I was like, do you, because she started off as a romance writer, presumably she likes writing stories.
And I said, do you still think of yourself as a writer? And she said, I mean, not really. I'm more of a director. I'm a creator. She feels like she comes up with the plots and the characters, but she doesn't necessarily think of herself as the quote unquote author anymore, which is a different kind of species of writer than we've seen before, I think. And a lot of people are very uncomfortable with that.
One of the reasons that your story was so interesting to me is that romance as a genre relies on these templates, right?
Like enemies to lovers or the slow burn, or I think there's another one in your story, Forced Proximity, which I'd never thought of as a romance template.
But I guess it's not dissimilar from our podcast.
Yeah.
Yeah.
That was actually the backup name for our podcast.
Well, within forced proximity, you have only one bed, which is a
whole subgenre.
Very interesting.
Well, we are getting a new studio, so we'll talk about that.
But because the romance writers use these templates, I think some might look at that work as
maybe less creative than somebody who is writing literary fiction and is just sort of
writing whatever scenes come to mind.
And I'm curious if you thought about that tension when you were writing this piece, and I
want to be careful how I say this, because I think if you're, you know, anyone who is working in
genre fiction,
you're coming up with all the plots yourself, you are bringing your own humanity to that process.
And yet, I understand why some people are trying to automate it because they think to themselves,
look, all of these stories hit the same eight or nine beats and why bother writing them myself
if the reader already knows where it's going to go before they've started reading it?
Exactly. I mean, I think that's a real point of contention when you're talking about AI and romance.
There are these tropes that readers love. But I think even the writers I spoke to who use AI said
AI is really bad at human emotion, at nuance, at kind of the slow burn. And, you know, one person told me that she tried to get an AI program to write an enemies to lovers romance. And within the first chapter, they went from being enemies to being lovers. And so it's obviously not what readers are looking for. And I do think, while there are these common tropes that readers love and you can see them all over Amazon, they're marketed that way. Readers like to see, how are you going to put a twist on this? Like, I think you're a fan of Heated Rivalry.
A huge fan.
There was a big twist on the hockey romance the way that Rachel Reid did it.
And people love it for that reason.
So they're within these kind of familiar tropes, people can be very creative and original
and create these original characters and scenarios.
And I think people do worry that automating the process, you lose some of that for sure.
That raises the question for me of, you know, and I don't know if you happen to look at this
data in particular, but I would be curious to know among best-selling romance novels,
do they tend to be ones that are inventing new formats or sort of driven by twists?
Does it seem to be novels that are just particularly well written or maybe an author, you know, has a huge fan base?
Or is it a situation where some of these formats are just so popular that you actually can use AI to just write one from soup to nuts?
And it'll sell maybe better than someone who did try to come up with their own twist.
I think we're waiting to see how that takes shape.
Because right now, we're in the very early stages of people openly writing with AI, where we can say, okay, this novel is written with AI, how are readers going to react to that. And so we haven't seen an AI novel go up against an original novel in that way. I will say you have both in romance. You have, like, the Twilight craze led to a million vampire boyfriend novels, so many. But then you have something like Fifty Shades of Grey, which, like, before that book, nobody knew that middle-aged women really wanted to read about bondage. They did. And it
kind of opened up this whole field. So I think you have both. You have super original stuff,
and then you have things that are maybe riding the coattails of popular trends, and those do well
also. And like this is a subjective question, but like, are these AI generated novels any good?
Like, have you ever come across a passage in one of them where you're like, okay, this was,
this was as good or better than a real famous romance writer would have written?
That is a really good question. And what I found, you know, reading probably half a dozen novels that I knew to be AI-generated. And I've read a lot of fan fiction for work. I've read a lot of self-published romance. And romance, just like any genre, there's an entire range from exquisite prose to things that are more cliched and formulaic. And I think what I found with some of the AI-generated stuff was exactly what readers and writers
were complaining about: the sort of emotional nuance, the characters, like there was something that didn't necessarily come together in the way you would want,
and it might have been, it's hard to know because when you know it's AI, I felt a little bit of
distance from the characters. I sort of felt like, well, there's not a human talking to me,
which is one of the things I love about reading. It's like I'm in someone else's brain right now.
That's like a magic trick. And so when you know that that's not happening, it feels quite
flat. But it's hard to say how much of that I was projecting.
How many of these writers are disclosing that they use AI? Because I can imagine that that would
affect people's enjoyment of the book. If I, you know, open the first page of something and it says
this was written by Claude, I'm, like, way less interested as a reader than if I don't know.
Yeah. And writers are very well aware of that. So very few of them are disclosing that they're using it.
It's very contentious. They, you know, there's been a lot of outcry on social media. There were
a couple of Romantasy authors last year who accidentally left AI prompts in their books.
People got quite upset about that.
And I think it really takes you out of the moment when you're in the middle of a sex scene
Yeah
And it's like as an AI model
Yeah
We need to compact this conversation so that we can continue
Right, right
So I think it does shade readers' perspectives
And the thing that was interesting about the people I spoke to who were open about it
Was they are convinced that readers' attitudes are going to change
They think once readers give this a try
and they see that these stories are good,
and they know that a human came up with the ideas and the characters,
they're going to get over it.
They've gotten over a lot of other stuff.
So it's a really interesting, unanswered question right now.
Yeah.
Alexandra, I'm wondering how the publishers are responding to this.
The publishing houses that you mentioned in your piece
all seem to be smaller or people that are self-publishing their books.
But the big major publishers, I imagine, are starting to grapple with this, too.
And I'm curious what conversations you've had or are hearing about.
among the big publishers.
Have they become turgid with rage?
A lot of turgidity.
It's interesting because I think they, you know,
they see this happening in self-publishing.
And self-publishing has become such a critical pipeline
into traditional publishing.
That's where they're finding, like,
these huge bestsellers, not just in romance,
but thriller writers, even some self-help.
So this has become kind of a feeding ground
for traditional publishing.
So they're very aware that they,
are probably at some point going to acquire a book that has some AI in it. And most of them have
policies that they've always had, which is their authors have to assure them in their contracts
that the work is original. And what does that mean if AI wrote it? Does that mean it's original?
Some of them say that's not original work. But if someone prompted it and they fed in their own
ideas, then is it original? So it gets into these really sort of thorny areas. And I think there's
also a copyright issue for publishers because stuff produced with AI cannot be copyrighted.
And publishers do not want to put out a book that they can't hold the copyright for.
My sense is that this is an area that they're not quite prepared for.
Yeah.
I mean, I thought as I was reading your piece, like some of these publishers are just going to start
cutting out the writers, right?
If you're a romance publisher and you see that your writers are just, you know, creating
AI romance novels and you're paying them to do that and you're paying them a cut of royalties,
why can't I, as the publisher, just go in and do the same prompt and say, I want, you know, 36 romance novels according to this template in these different genres and just do it yourself?
This might actually be the best business idea you've had on the show so far.
I'm going to take a few notes.
Well, one flaw with that is in romance, you know, in many genres, but in romance in particular, the authors have very close relationships with their audiences.
They have these almost parasocial relationships, all these forums online.
And so I think the self-published authors that succeed, one of the things that the publishers are buying is that sort of brand and the persona.
However, someone's definitely going to try it.
And maybe it should be you, Kevin.
I think writers should try to continue to, you know, develop these audiences by proposing novel things.
Like, we all get together inside a wine fermentation tank and we just kind of talk about romance.
It's so interesting because I am dealing with this right now.
I mean, I'm writing a book and my publisher, you know, is a big major publisher.
They've sort of asked me, like, you're not using AI to write any of this. And it's true. I'm,
I am not using AI for composition, but I'm using it in all kinds of other ways. And I can totally
see the temptation to just let it sort of write the words on the page. And maybe it would even be
better in some cases if I did. Like, I think that the, as the models get better at this,
which I believe they will, I think there's going to be real incentives to just turn over
more and more parts of the writing process in all kinds of genres, even nonfiction.
You know, OpenAI has said that they're going to allow erotic content soon.
I wonder, Alexandra, is there any anxiety among writers or publishers that readers might soon start bypassing the publishing industry altogether and just sort of generating their own personalized romance stories to read?
Yeah, I think, you know, most people were cautious about citing that as a concern.
and they felt like, well, that's a different kind of activity.
It's more interactive.
It requires a lot of work.
Like reading a story is more passive.
But I'm not so sure.
I feel like, you know, there's this app called Janitor AI.
I don't know if you all have seen this.
It's a Romantasy app where you can chat with a vampire boyfriend or an orc or somebody.
And it's one of the most popular apps in the books section of the app store.
So it's clearly going for Romantasy readers.
So there is this sense.
that AI is encroaching, I think, into really clearly into the territory of romance and what
it's delivering for readers. And I do think that is a risk, yeah.
Well, Alexandra, thank you so much. And one last question. Tell us about the ragged prayer.
Okay. All right. So as I was reading through several of these AI-generated romance novels,
I was commuting to work and looking at one on my phone. And I read this phrase, the hero and the
heroine are in the throes of passion, and he whispers her name like a ragged prayer. And I was like,
wait, I must have gone back to the other book I was just reading. I just read that exact thing.
And I flip back and forth, and I realized that phrase was in several of the books and repeatedly
within the same books whenever the hero says her name, he says it like a ragged prayer,
or sometimes like a jagged prayer, or sometimes like a rough prayer. And so then I asked a
couple of writers who I knew were using AI, like, what is this ragged prayer? And one of them said,
I actually had to block that phrase. It loves to say ragged prayer. Another one said, like,
yep, that's an AIism. Like, one of them pinned it on Claude. And I couldn't really, I tried to trace
the origins of it. I did find it in a Romantasy, a very popular Romantasy book by Sarah J.
Maas, which was one of the many books that was ingested by Anthropic, according to the lawsuit that
authors brought against Anthropic. And the phrase, said her name like a prayer, was in a sex scene
in that book. But it's just hard to know whether it invented that. It was an unusual thing that
stuck in my head, and now I can never stop thinking about it. So thanks for real. Well, one does,
of course, think of the great Madonna song, like a ragged prayer.
Life is a mystery, Alexandra, and we thank you for coming to tell us about this.
I'm going to go get back to writing my human-written, enemies-to-lovers romance novel about
podcasters.
It's called Hot Mike, and so look for that soon.
It'll make a great Valentine's Day gift for the special someone in your life.
Yeah, but I'm on my deadline.
I've got to get this LLM working tonight.
Alexandra, thanks so much for coming.
Thank you for having me.
When we come back, call Amerie, because it's this one
thing that's got me tripping: our new segment, One Good Thing.
I don't get it.
You don't remember the 2005 classic "1 Thing" by Amerie? No? It's this one thing that got me tripping. It's this one thing
that got me tripping. You did? Come on, I believe everybody knows that song.
Casey, what did you get me for Valentine's Day this year? Kevin, you know I'm spoken for.
You forget yourself, sir. Now let's talk about AI again. Yes, so we have a new segment this
week called One Good Thing. This is exactly what it says on the box. You're going to share
one good thing in tech that has caught your attention recently. I'll share one and we'll talk about
them for a few minutes and let everyone go enjoy their Valentine's Day. Sounds like a great time to
me. Kevin, shall I go first? Yes, what is your one good thing? So this is one that has been
flying a little bit under the radar. It is a new feature in Spotify and it is only available
in a few countries, the U.S., Canada, New Zealand, I believe,
and it's only available to premium subscribers, okay?
So there's a little bit of a bar if you want to try this one.
But in my experience, it has been well worth it.
The feature is called Prompted Playlists.
Have you seen these?
I read about them in your newsletter, but tell me more.
So from time immemorial, Kevin,
we have all wanted the perfect playlist.
And we have devoted countless hours
to crafting them manually inside iTunes, Spotify,
whatever music system we were using at the time.
But there are so many things that this system misses.
And what I have found is that prompted playlists are a way to solve this problem.
Here's how it works.
You open your Spotify app.
There's a tab somewhere there called Create.
You go to Create.
And if you have it, you will see prompted playlist in there.
And it will show you a text box that looks exactly like ChatGPT.
And this is just basically a genie that you can throw a wish into and say,
I would like a playlist like this, and then Spotify will go forth and do its best to create that playlist.
Now, I'm looking at you, and I feel like you might be a little bored so far.
No, I'm not bored.
I'm thinking about my own version of this and what I would use it for, but keep going.
Well, I'll tell you how I used it.
I am a music nerd, and back in the '00s, I used iTunes to create these playlists that did something essential for me,
which was they would keep track of my favorite songs and how recently I listened to them.
And when it had been too long since I last heard my favorite song, it would just put those together in an automated playlist.
So like throw it back into the rotation.
Throw it back in the rotation.
And while I vastly prefer the streaming era to the iTunes era in many ways, this is just something we lost.
We took a very simple rules-based technology.
We threw it out the window.
And now we're having to recreate it using something vastly more complicated.
But as soon as I saw that these playlists came out, I wanted to see if,
I could get it to recreate the iTunes playlist of my dreams.
And I wrote a very enthusiastic newsletter about this the other day.
I will say my experience has been tempered somewhat in recent days.
But maybe I should just tell you how I use this in case anybody else might want to try something similar.
Please.
So what I did was said, show me songs I've listened to at least 20 times, but not in the past two months.
Please don't repeat albums and create a well-sequenced playlist drawing from this set rather than
just ranking by play count. And this was able to give me the playlist that I was looking for.
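That rule, at least 20 plays, nothing from the past two months, no repeated albums, is simple enough to sketch as the kind of rules-based smart playlist iTunes used to support. The data below is made up, and this is not Spotify's implementation:

```python
# A rules-based sketch (hypothetical data; not Spotify's actual feature) of the
# iTunes-style smart playlist described here: favorite songs you haven't heard
# in a while, without simply ranking by raw play count.

from datetime import date, timedelta

def rotation_playlist(history, today, min_plays=20, stale_days=60):
    """history: list of dicts with 'title', 'album', 'plays', 'last_played'.
    Keep songs with at least min_plays plays whose last listen is older
    than stale_days, taking at most one song per album."""
    cutoff = today - timedelta(days=stale_days)
    seen_albums = set()
    picks = []
    # Sequence oldest-unheard first, rather than sorting by play count.
    for song in sorted(history, key=lambda s: s["last_played"]):
        if song["plays"] >= min_plays and song["last_played"] < cutoff:
            if song["album"] not in seen_albums:
                seen_albums.add(song["album"])
                picks.append(song["title"])
    return picks

history = [
    {"title": "A", "album": "X", "plays": 30, "last_played": date(2025, 10, 1)},
    {"title": "B", "album": "X", "plays": 25, "last_played": date(2025, 9, 1)},
    {"title": "C", "album": "Y", "plays": 5,  "last_played": date(2025, 1, 1)},
    {"title": "D", "album": "Z", "plays": 40, "last_played": date(2026, 2, 1)},
]
# ['B']: A is dropped (same album as B), C has too few plays, D is too recent.
print(rotation_playlist(history, today=date(2026, 2, 13)))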
It was songs that it knows I like because I played them, you know, a bunch and I haven't listened
to them in a while. And, you know, sometimes when you use an LLM, Kevin, they will lie to you.
And so I actually called up Spotify and I said, put me on the phone with somebody who can explain
to me whether this is actually working. Like, is it actually using my listening data?
and I wound up talking to the VP of personalization over there, a woman named Molly Holder,
and she confirmed that, yes, when you type into Spotify, if you want it to use your listening data,
and in my case, I have listening data on Spotify going back more than a decade.
It actually will do that.
And this just enables all sorts of fun things.
So she was telling me, some people have used this playlist to say, make a playlist
that is the opposite of my taste.
And it will try to get you as far outside of your filter bubble as
it can take you.
And if you're listening to the Hard Fork podcast,
because you gave your Spotify a thing that said,
make a playlist with the opposite of my taste,
and it led you here.
Welcome.
Welcome.
You're safe now.
We don't know what you were listening to before.
Wait, what is the opposite podcast to Hard Fork?
The Megyn Kelly Show.
The Megyn Kelly Show.
So anyways, I encourage you to have fun with this.
Some other things you might want to try.
Molly was telling me, if you're traveling to a new country,
you might say,
make me a playlist of the top hits in this country,
or make me a playlist of some very popular songs in this country,
you know, over the past couple of decades,
that tends to work.
You can also go really abstract.
You know, you can say,
make me the perfect playlist for eating fish tacos on the beach
and just see what happens.
So the thing I like about this is it's a real kind of canvas for creativity
and it is anti-slop, you know?
I wrote about this in my column.
You know, you have this great term machine drift,
which is the process where an algorithm kind of grabs a hold of you,
and it leads you somewhere that you might never have wanted to go.
This is anti-machine drip.
This is you saying, hey, you already know a lot about me based on what I chose to listen to.
I want to use that as the foundation to find more cool stuff that I might like.
So this is one of those AI tools that I actually found quite empowering in my life.
It hasn't been perfect.
It's kind of started to break in a couple ways that have been disappointing me over the past few days.
But in general, I would say, this is a cool feature.
If you have it on Spotify, you should probably give it a try.
That's very cool.
I like that.
And it gives me an idea because I have a sort of
niche musical fascination that I'm now wondering if this can handle. Have I told you about my thing
about songs where the titles don't appear in the lyrics? Okay, for some reason, I love songs with
titles that don't appear in the lyrics. Give us some examples. Bohemian Rhapsody. Yeah. Day in the Life.
Annie's Song. There are lots of these. Songs that have a sort of descriptive title that's not
anywhere in the lyrics. So let me see what happens. If I go into Spotify, where do I find this?
If you tap your library, the tab there, and you'll see a big plus button at the upper right hand corner, and you should see prompted playlist.
Prompted playlist, okay.
And so you'll tap on there.
So, make me a playlist of songs whose titles do not, Bullet with Butterfly Wings.
That's another one.
Another good one.
In the lyrics of the song, maybe I should give it an example.
Yeah.
Like Bohemian Rhapsody.
I realize this is a very niche,
but I'll be very impressed if it can do this.
So here's why I think this might work:
prompted playlists have what Molly called world knowledge.
Basically, this feature has data that goes beyond song data.
It actually knows things about the world
in the same way that an LLM might.
So, you know, I bet there are people who have written
about songs whose titles do not appear in the lyrics,
and maybe that'll find its way into your playlist.
Okay, so it's creating a playlist
called Song Titles That Mislead.
It says identifying popular songs, filtering songs by lyrics.
And these will take a couple minutes to create.
Another feature that I like about them is you can set them to update every day.
So this was really important for my playlist where I'm trying to listen to stuff I haven't listened to in a while.
Every morning when I wake up now, it says, hey, here's a bunch of stuff that is brand new.
Because I'm trying to listen to this playlist basically every day.
And you just sort of, you can set it to update automatically.
That's cool.
You can even pick the day of the week you want it to update.
That's cool.
So you could do like songs that are sort of about a day of the week and have it auto update.
You could have like your classic Friday songs that updated every Friday.
Casey, would you believe it is killing it?
Is it really?
Baba O'Riley.
Yes.
Smells like teen spirit.
Yes.
For what it's worth.
Yes.
Sympathy for the devil.
These are so many great songs.
I wish you would actually just sing a quick medley of those songs that you just named.
Wait, this is amazing.
It got all of them right.
Mm-hmm.
The wait.
Number 9 Dream, White Rabbit, Unchained Melody.
It is doing something that I have been wanting for years.
This is cool.
Yeah, so I'm telling you, it's kind of a magic thing.
This is one where the limit is only your creativity and your imagination.
So if you have a Spotify premium, go nuts to my friends.
Okay, that is a cool thing.
Now, what do you have for us, Kevin?
So, Casey, I've got a whale of a story for us today.
Love that.
This comes to us from The Great Minds over at Google, who recently put out a paper
that was about the use of AI to understand and interpret whale song and other underwater noises.
So this is a new, what they're calling a bioacoustics foundation model called Perch 2.0.
And, you know, if I had to summarize this, I would say, this is not a fluke, we're not spouting nonsense, Free Willy-nilly.
This is a new krill-er app for AI.
Very, very good.
No notes. Very good.
So basically this is a paper that was released that is about this new foundation model that they have built over there that allows them to categorize underwater audio, underwater audio samples from whales, dolphins, orcas, you name it.
And what's really interesting about this is that it is not trained on whale song or any other underwater noises.
It's actually trained on bird song.
And so they have found that these same sort of embeddings and techniques that had been trained on bird song were able to also consistently label and categorize underwater noises.
So it's a kind of transfer learning when you make these models big and general.
If you give them one task, sometimes they also learn how to do related tasks.
And so in this situation with Perch 2.0, they were sort of impressed and surprised at how good this.
model was at interpreting the underwater sounds that they were giving it.
Well, so what are some of the, what are the whales saying exactly?
So this paper is really just describing a method of classifying sounds rather than sort of
understanding what they map to in terms of like speech.
We're still not quite there with that, although there are a bunch of projects including
the cetacean translation initiative that are trying to understand the songs and
noises produced by sperm whales.
But this is basically giving scientists, marine biologists, a new set of tools to be able
to, you know, if they hear something that they don't recognize, to sort of classify that,
to maybe say this comes from this specific kind of creature, and that that will sort of
help them detect and classify sounds going forward.
Wow.
Another thing I appreciate about these scientists is that they also have a sense of humor.
Their paper is titled Perch 2.0 transfers whale to underwater tasks.
They also make a reference to something called the Bittern Lesson, which is sort of a play on a famous AI essay called The Bitter Lesson.
So they are basically saying that some of the principles that we are finding for large language models with humans where like if you make a model better at coding, it also gets better at math or some other related task.
They're also finding that that applies to things in animal studies.
So if you make a model better at classifying bird sounds, it also gets better at classifying underwater
sounds. So I don't know what's next for this particular project. This seems like an active area that a lot
of companies or organizations are involved in. We may be getting close to understanding more about
birds' songs or whale speech. But I just thought it was interesting the idea that there is something
generalizable about understanding animal sounds as a whole where you can take a model trained on
bird sounds, use it to analyze underwater acoustic data, and find that it actually outperforms a lot of
the more specific models that are just trained on whale noises.
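The recipe being described, freeze a pretrained embedding model and fit a simple classifier on top of it for a new domain, can be sketched in a toy form. The embedder below is a crude stand-in, not Perch 2.0 or any real bioacoustics model, and the "whale" clips are synthetic:

```python
# A toy sketch of the transfer idea (stand-in embedder; not the actual
# Perch 2.0 model or its API): a pretrained model produces fixed-length
# embeddings, and a simple classifier on those embeddings labels a new domain.

import math

def embed(clip):
    """Stand-in for a pretrained embedder: map an audio clip (here, a list
    of samples) to a small fixed-length feature vector."""
    n = len(clip)
    mean = sum(clip) / n
    energy = sum(x * x for x in clip) / n
    # Zero-crossing rate as a crude pitch proxy.
    crossings = sum(1 for a, b in zip(clip, clip[1:]) if a * b < 0) / n
    return (mean, energy, crossings)

def nearest_centroid(train, query):
    """train: {label: [clips]}. Classify query by the closest class centroid
    in embedding space, i.e. a tiny model on top of frozen embeddings."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    centroids = {}
    for label, clips in train.items():
        vecs = [embed(c) for c in clips]
        centroids[label] = tuple(sum(col) / len(vecs) for col in zip(*vecs))
    q = embed(query)
    return min(centroids, key=lambda label: dist(centroids[label], q))

# Synthetic clips: a low, slow waveform (few zero crossings) vs a clicky one.
low = [[math.sin(i / 8) for i in range(64)] for _ in range(3)]
clicky = [[(-1) ** i * 0.5 for i in range(64)] for _ in range(3)]
train = {"moan-like": low, "click-like": clicky}
result = nearest_centroid(train, [math.sin(i / 8 + 0.3) for i in range(64)])
print(result)
```

The point of the sketch is the division of labor: the embedder carries the general knowledge (here faked with hand-rolled features; in the real system, learned from bird song), while only the tiny classifier on top needs labeled examples from the new underwater domain.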
Well, thanks for the deep dive into that whale paper, Kevin.
Did you hear about the researcher who found himself missing a whale?
No.
Yeah, he had a cetacea needed.
All right, that's one good thing.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Veraan Pavich.
We're fact-checked by Caitlin Love.
Today's show was engineered by Katie McBerner.
Our executive producer is Jen Poyant.
Original music by Elisheba Ittoop,
Marion Lozano, Rowan Niemisto,
Alyssa Moxley, and Dan Powell.
Video production by Sawyer Roque,
Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this full episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman,
Pui-Wing Tam, and Dalia Haddad.
You can email us, as always,
at hardfork@nytimes.com.
Send us your whale song.
