The Prof G Pod with Scott Galloway - Introducing ProfG.AI
Episode Date: October 10, 2023. In this bonus episode, we get a behind-the-scenes look at how Prof G Media created a chatbot that sounds like Scott with the help of Spirito.ai. Check out the chatbot yourself at https://profg.ai/ Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for Prof G comes from NerdWallet, where you can compare over 400 credit cards.
Head over to nerdwallet.com/learnmore to find smarter credit cards, savings accounts, mortgage rates, and more.
NerdWallet. Finance smarter.
NerdWallet Compare, Inc. NMLS 1617539.

At Prof G Media, we attempt to stay up on the latest technologies, and one way we do this
is to actually use these technologies ourselves. In 2023, that means using, wait for it, AI tools.
We've been experimenting with translating our podcast
into other languages and creating short videos for social media, and now we've developed an AI tool
of our own that we'd like to share with you. It's called profg.ai. I know, that sounds scary.
It's a chatbot similar to ChatGPT, only with a twist.
Instead of chatting with OpenAI's servers, you're chatting with a digital version of me.
The catalyst here is, see above:
We want to learn about technologies.
But also, I receive dozens of emails each day from thoughtful people asking for advice.
And as much as I'd like to respond, I can't. And so we tasked the team with coming up with a generative AI that could sound very similar and provide responses that felt sort of on point.
This is a bit eerie because we took many of the office hours questions that we received,
put them into Prof G AI, and found that the responses were pretty similar to what I would have said or how
I would have responded. Anyways, with that, here to explain how we actually made this tool
and some of what we learned about the market along the way is Prof G Media's Editor-in-Chief,
Jason Stavers. When ChatGPT came out late last year, one of the first things we did at Prof G Media was ask it to imitate Scott.
We spent a lot of our time working with Scott on scripts and articles and other writing, and it would be incredible to have a digital Scott available to us 24-7.
Plus, we thought it'd be fun.
As Scott talked about last week on Markets, OpenAI used some of his books to train GPT.
So the bot can make an effort to imitate him,
but you get a pretty generic, vague version.
We thought we could do better.
So we built our own.
Building a chatbot requires two primary components.
The artificial intelligence portion
that does the heavy lifting
is what's called a
large language model, or LLM. These are enormous statistical engines that take in a string of text
and then predict what the most likely next string of text is going to be. That's a narrow skill,
but as everyone who's used these tools has seen, it turns out to be a very powerful one.
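To make "statistical engine" concrete, here's a toy sketch in Python. It's nothing like GPT's internals, just the same idea at miniature scale: count what tends to follow what, then predict the most likely continuation.

```python
from collections import Counter, defaultdict

# A toy "statistical engine": count which word follows which in a corpus,
# then predict the most likely next word. Real LLMs do this over tokens
# with billions of parameters, but the core idea is the same.
corpus = "the dog barks the dog runs the cat sleeps".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word seen in training.
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "dog" (seen twice, vs. "cat" once)
```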
These systems are quite
good at predicting the right answer to a question or how that answer might sound in the style of a
pirate or written in computer code. However, because LLMs are just statistical engines,
they can be a bit finicky to work with. That's where the second component comes in.
The chatbot itself is an extra layer of software
that sits between the user and the LLM.
It's not really a translator
since the LLM knows every language.
The chatbot is more like a diplomat.
It takes the user's questions and instructions
and it gives the LLM more context
for how it should respond.
For example, most chatbots insert text before the user's
message along the lines of: "You are a helpful AI assistant that politely and accurately responds
to user messages." The idea is to increase the statistical likelihood of the ideal response.
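As an illustration, with the 2023-era OpenAI chat API, that context-setting looks roughly like this. The system prompt is the generic example above, and the user question is invented; this is a sketch of the mechanism, not Prof G's actual prompt.

```python
import openai  # 2023-era OpenAI Python SDK; assumes openai.api_key is set

messages = [
    # The chatbot layer inserts this before the user's message:
    {"role": "system",
     "content": "You are a helpful AI assistant that politely and "
                "accurately responds to user messages."},
    # The user's actual question (an invented example):
    {"role": "user", "content": "Should I take the higher-paying job "
                                "or the more interesting one?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response.choices[0].message.content)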
So to make a digital Scott, we needed an LLM and we needed a chatbot that could provide the LLM with enough context
about how Scott thinks and writes that the LLM could accurately predict how he would respond
to any question. To accomplish this, we turned to a London startup called Spirito.ai. Spirito
was founded by two engineers who left Meta just a few weeks before we met them, Dennis and Alice. I've asked them to join me and explain how we made a digital Scott.
Dennis, one of the first decisions we had to make was which LLM we wanted to use.
There's quite a few options available in the marketplace, right?
Yeah, definitely.
And it feels like there are new ones every week.
Some are trained on specific knowledge domains, like Google's Med-PaLM, which is specific to the medical field.
Some are more general purpose, like GPT-4 from OpenAI.
Some are open source, like Llama from Meta.
So there's a bunch of different services out there, and there's a lot of variety in the industry.
Okay, so we decided to go with OpenAI's GPT. Why did that work for us?
Yeah, so there were two kind of main criteria that we were kind of looking at here. So one is
production capability, and then the other is basically model performance. So on production
capability, what we're really concerned about is basically, like, can we even use this LLM at scale?
A lot of the LLMs that exist out there are primarily for research purposes or academic
purposes or haven't yet been released, or they don't have the infrastructure to basically support what we're trying to do, which is build a large-scale consumer application.
And on model performance, what we're really trying to evaluate is basically how good is the LLM at this specific use case of building digital versions of creators. And ultimately, we felt that OpenAI's products
basically were best at addressing both of these criteria. And there are a couple other sort of
bonus features as well, like fine-tuning. So can you explain a bit more about what fine-tuning is
and how it's helpful to us? So basically, fine-tuning is where you take a general-purpose model like GPT-4 and train it to better perform at
specific tasks. You kind of show the LLM how to respond by giving it a bunch of data, and then
later it'll use that data to basically help itself improve on those sets of tasks. So in our case,
what we did, we took GPT-3.5, we gave it a bunch of questions, and then we gave it a bunch of responses
in terms of how we would want the ideal Scott bot to respond. And in the end, we got a fine-tuned
model that performed better than base GPT-3.5. Now that's the LLM piece of the equation.
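For the curious, here's a minimal sketch of what GPT-3.5 fine-tuning looked like with OpenAI's 2023 API. The question-and-answer pair is an invented stand-in, not the actual Prof G training data.

```python
import json
import openai  # assumes openai.api_key is set

# Each training example is one chat: a question plus the response we'd
# want the ideal Scott bot to give. These lines are invented stand-ins.
examples = [
    {"messages": [
        {"role": "user", "content": "Should I drop out to start a company?"},
        {"role": "assistant", "content": "No. Get the credential, establish "
                                         "economic security, then take big swings."},
    ]},
]

# The fine-tuning API takes training data as a JSONL file.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

uploaded = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTuningJob.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print(job.id)  # poll this job until it finishes and returns a model name
```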
But then we needed a chatbot that could provide the LLM with the context it would need to capture
Scott. And there we had a
great advantage because Scott has been writing prolifically for years, and he's recorded hundreds
of hours of podcasts. So what we needed the chatbot to do was to provide the LLM with just the
portions of all that writing that would help it respond to each user question. Alice, can you
explain how you went about that?
LLMs can only process a certain number of tokens at a time,
and Scott's prolific writing is definitely more than the limit there.
So we use a strategy of chunking, embeddings, and similarity search
to find the relevant text when someone asks a question.
So let's go over each of these.
Chunking is basically dividing the text into smaller pieces,
which we can then embed and store in a database. The embeddings are important because in the next
step, we use the embeddings to run a similarity search to find the chunks that are similar to
a question, let's say, that is put into the chatbot. This helps us find the right slivers
of information when someone asks the chatbot about a specific topic.
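A minimal sketch of that pipeline, assuming OpenAI's 2023-era embedding API. The file name is a hypothetical stand-in for the source material, and the paragraph-level chunking is a simplification of whatever strategy Spirito actually used.

```python
import numpy as np
import openai  # assumes openai.api_key is set

def embed(texts):
    # text-embedding-ada-002 was OpenAI's standard embedding model in 2023.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(item["embedding"]) for item in resp["data"]]

# 1. Chunking: split the writing into smaller pieces (naively, by paragraph).
#    "scott_writing.txt" is a hypothetical dump of the source material.
corpus = open("scott_writing.txt").read()
chunks = [p for p in corpus.split("\n\n") if p.strip()]

# 2. Embeddings: turn each chunk into a vector we can store and compare.
chunk_vectors = embed(chunks)

# 3. Similarity search: embed the question, rank chunks by cosine similarity.
def top_chunks(question, k=3):
    q = embed([question])[0]
    scores = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
              for v in chunk_vectors]
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]
```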
And then how does the chatbot coach the LLM to use that material and sound like Scott?
When we want our chatbot to sound like Scott, we can use a system prompt, which essentially guides the LLM in how to approach answering a question.
And we want to balance style, such as tone, key phrases, maybe Scott-isms.
The instructions essentially have the chatbot act like it's embodying Scott.
And we also feed in some extra context, along with guidance on how to handle that extra context that's passed in.
So we need to be careful about all of these, because adding too much information can cause the LLM to forget instructions,
while adding too little information can cause it to perform suboptimally. So it's really part science, but also part art. Thanks, Alice. Thanks, Dennis.
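Putting Dennis and Alice's pieces together, a minimal sketch of the whole loop might look something like this. The prompt wording, model choice, and function name are illustrative assumptions, not the production profg.ai configuration.

```python
import openai  # assumes openai.api_key is set

def ask_digital_scott(question, context_chunks):
    # Hypothetical system prompt: style instructions plus retrieved context.
    # The real profg.ai prompt isn't public; this only shows the shape.
    system_prompt = (
        "You are a chatbot embodying Scott Galloway. Match his tone and "
        "key phrases. Answer using only the context below; if it doesn't "
        "cover the question, say so rather than inventing facts.\n\n"
        "Context:\n" + "\n---\n".join(context_chunks)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # in production, the fine-tuned model would go here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Stuff too many chunks into that context block and the model starts ignoring the style instructions; include too few and the answers get generic. That's the science-and-art tradeoff Alice described.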
We've made our chatbot available at profg.ai, where you can check it out. It's an experiment,
and still in the early stages. It handles some questions better than others, and it will get
better as it answers more questions. So some fine print. One, I'm sure some of this will be wrong.
And then again, I'm wrong quite a bit, but I'm sure some of this will not hit the mark.
And we're open to feedback on how we can make it better.
And two, and most importantly, this cannot replace human relationships.
And our hope is that this not only provides insight and guidance to people who I otherwise couldn't get back to, but that you use this information as a catalyst for reaching out to potential friends, potential mentors, to increase your dialogue, your intimacy, and your contact with other people.
Every digital analog of your life is a shittier version of your life. The digital facsimiles
of relationships are just that: they're facsimiles. Find mentors. Discuss this with friends.

I just don't get it.
Just wish someone could do the research on it.
Can we figure this out?
Hey, y'all.
I'm Jonquilyn Hill, and I'm hosting a new podcast at Vox called Explain It To Me.
Here's how it works.
You call our hotline with questions you
can't quite answer on your own. We'll investigate and call you back to tell you what we found.
We'll bring you the answers you need every Wednesday starting September 18th.
So follow Explain It To Me, presented by Klaviyo.
Hey, it's Scott Galloway, and on our podcast Pivot, we are bringing you a special series. I'm joined by Kylie Robison, the senior AI reporter for The Verge, to give you a primer on how to integrate AI into your life.
So, tune into AI Basics, How and When to Use AI, a special series from Pivot sponsored by AWS, wherever you get your podcasts.