The Prof G Pod with Scott Galloway - No Mercy / No Malice: Thought Partner
Episode Date: August 31, 2024. As read by George Hahn. https://www.profgalloway.com/what-does-ai-think/
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for Prof G comes from NerdWallet.
Head over to nerdwallet.com forward slash learn more to find smarter credit cards, savings accounts, mortgage rates, and more.
NerdWallet. Finance smarter.
NerdWallet Compare Incorporated.
NMLS 1617539.
I'm Scott Galloway, and this is No Mercy, No Malice.
I've known Greg Shove, the CEO of Section, for years.
Section's focus is upskilling the enterprise for an age of AI.
AI Thought Partner, as read by George Hahn.
This is the summer of AI discontent.
In the last few weeks, VCs, pundits, and the media have gone from overpromising on AI to raising the alarm.
Too much money has been poured in, and enterprise adoption is faltering.
It's a bubble that was overhyped all along. If the computer is what
Steve Jobs called the bicycle for our minds, AI was supposed to be our strongest pedaling partner.
It was going to fix education, accelerate drug discovery, and find climate solutions. Instead, we got sex chatbots and suggestions to put glue on pizza.
Investors should be concerned.
Sequoia says AI will need to bring in $600 billion in revenue
to outpace the cost of the tech.
As of their most recent earnings announcements,
Apple, Google, Meta, and Microsoft are estimated to generate a
combined $40 billion in revenue from AI. That leaves a big gap. At the same time, the leading
AI models still don't work reliably, and they're prone to ketamine-like hallucinations, Silicon Valley speak for they make shit up.
Sam Altman admits
GPT-4 is, quote,
mildly embarrassing at best,
unquote.
But he's also pumping up expectations
for GPT-5.
Unless you're an investor or an AI
entrepreneur, though, none of this
really matters. Let's refocus.
For the first time,
we can talk to computers in our language and get answers that usually make sense.
We have a personal assistant and advisor in our pocket, and it costs $20 a month.
This is the Star Trek computer promised 58 years ago, and it's just getting started.
If you're using "AI is a bubble" as an excuse to ignore these capabilities, you're making a big mistake. Don't laugh. I know Silicon Valley tech bros need a win after their NFT metaverse consensual hallucination.
And as you're likely not reading this on a Meta Quest 3 purchased with Dogecoin,
your skepticism is warranted. For all the hype of AI, few are getting tangible ROI from it.
OpenAI has an estimated 100 million monthly active users worldwide. That sounds like a lot, but it's only about 10% of the global
knowledge workforce. More people may have tried ChatGPT, but few are power users, and the number
of active users is flatlining. Most people bounce off, asking a few questions, getting some nonsense answers, and returning to Google. They mistake GPT for
a better Google. A better Google is Google. Using AI as a search engine is like using a screwdriver
to bang in a nail. It could work, but not well. That's what Bing is for.
The reason people make this mistake? Few have discovered AI's premier use case as a thought partner.
I've personally taught more than 2,000 early adopters about AI, and Section has taught over 15,000.
Most of them use AI as an assistant, summarizing documents or contracts, writing first drafts, transcribing or translating documents, etc.
But very few are using AI to think.
When I talk to those who do, they share that use case almost like a secret.
They're amazed AI can act as a trusted advisor and reliably gut-check decisions, preempt the
boss's feedback, or outline options. Last year, Boston Consulting Group, Harvard Business School,
and Wharton released a study that compared two groups of BCG consultants, those with access to AI and those without. The consultants with AI completed 12% more tasks
and did so 25% faster. They also produced results their bosses thought were 40% better.
Consultants are thought partners, and AI is super soldier serum. Smart people are quick to dismiss AI as a cognitive teammate.
They think it can automate call center operators, but not them,
because they're further up the knowledge work food chain.
But if AI can make a BCG consultant 40% stronger,
why not most of the knowledge workforce?
Why not a CEO? Why not you?
Last fall, I started asking AI to act like a board member and critique my presentations before I sent them to the Section directors. Even for a longtime CEO, presenting to the board is a test you always want to ace. We're blessed with a
world-class board of investors and operators, including former CEOs of Time Warner and Akamai.
Also, Scott. I try to anticipate their questions to prepare me for the meeting and inform my
decisions around operations. I prompt the AI, quote, I'm the CEO of Section. This is the board meeting pre-read
deck. Pretend to be a hard-charging venture capitalist board member expecting strong growth.
Give me three insights and three recommendations about our progress and plan, unquote. The AI surfaced many of the same priorities the board did, with the associated trade-offs, including driving enterprise value,
balancing growth and cash runway, and taking on more technology risk.
Since then, I've used AI to prepare for every board meeting. Every time, AI has close to a
90% match with the board's feedback. At a minimum, it helps me know most of what Scott is going to say before he says it.
A free gift with purchase?
The AI is nicer, doesn't check its phone, and usually approves management comp increases.
Let's call it a draw.
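For anyone who would rather script this than paste into a chat window, here is a minimal sketch of the same board-member prompt, assuming the OpenAI Python SDK. The model name, the pre-read file, and how it's loaded are illustrative stand-ins, not Section's actual setup.

```python
# A minimal sketch, not Section's actual workflow: send the board-member prompt
# plus the pre-read text to a chat model via the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set in the environment, "board_preread.txt" is a
# hypothetical plain-text export of the deck, and "gpt-4o" is an illustrative model name.
from openai import OpenAI

client = OpenAI()

# Load the pre-read so it can ride along as context.
with open("board_preread.txt") as f:
    preread = f.read()

prompt = (
    "I'm the CEO of Section. This is the board meeting pre-read deck. "
    "Pretend to be a hard-charging venture capitalist board member expecting strong growth. "
    "Give me three insights and three recommendations about our progress and plan.\n\n"
    + preread
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```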
Think of what this could mean for any of your high brainpower work.
Less stress, knowing you didn't overlook obvious angles or issues.
A quick gut check to anticipate questions and develop decent answers, which you will improve.
And a thought partner to point out your blind spots, risks you forgot to consider, or unintended consequences you didn't think of.
Whether interviewing for a job, applying to business school, or trying to obtain asylum,
I can't imagine not having the AI role play to better prepare.
Other scenarios where AI has helped as my thought partner?
Discussing the pros and cons of going into a real estate project with several friends as co-investors.
Getting a summary of all the surgical options for my busted ankle after uploading my MRI, so I can hold my own with my overconfident, time-starved Stanford docs.
Doing industry and company research to evaluate a startup investment opportunity.
Right now, the smartest people in the room think they're above AI.
Soon, I think they'll be bragging about using it.
And they should.
Why would anyone hire a doctor, lawyer, or consultant who's slower and dumber than their peers?
Why would you hire someone with a fax number on their business card?
As Scott says, AI won't take your job,
but someone who understands AI will.
Here are some ideas of how to use AI as a thought partner.
One, ask for ideas, not answers.
If you ask for an answer, it will give you one, and probably not a very good one.
As a thought partner, it's better equipped to give you ideas, feedback, and other things to consider.
Try to maintain an open-ended conversation that keeps evolving, rather than rushing to an answer.
Two, more context is better.
The trick is to give AI enough context
to start making associations.
Having a generic conversation will give you generic output.
Give it enough specific information
to help it create specific responses.
Your company valuation, your marketing budget,
your boss's negative feedback about your last idea, an MRI of your ankle.
Then take the conversation in different directions.
Three, ask AI to run your problems through decision frameworks.
Massive amounts of knowledge are stored in LLMs, so don't hesitate to have the model explain concepts to you.
Ask, how could a CFO tackle this problem?
Or, what are two frameworks CEOs have used to think about this?
Then have a conversation with the AI unpacking these answers.
Four, ask it to adopt a persona. Quote, if Brian Chesky and Elon Musk were co-CEOs,
what remote work policies would they put in place for the management team? Unquote. That's a
question Google could never answer, but an LLM will respond to without hesitation. Five, make the AI explain and defend its ideas.
Say, why did you give that answer?
Are there any other options you can offer?
What might be a weakness in the approach you're suggesting?
And six, give it your data.
Upload your PDFs, business plans, strategy memos, household budgets,
and talk to the AI about your unique data and situation.
If you're concerned about privacy, then go to data controls in your ChatGPT settings
and turn off its ability to train on your data.
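Taken together, the tips above amount to one evolving conversation: give the model a persona, feed it specific context, ask for frameworks rather than a single answer, then make it defend itself. Here is a rough sketch of that loop, again assuming the OpenAI Python SDK; the CFO persona, the budget numbers, and the model name are made-up examples, not a recommendation.

```python
# A rough sketch of the thought-partner loop: persona, specific context, a request
# for frameworks instead of one answer, then a follow-up that makes the model
# defend its reasoning. Assumptions: OpenAI Python SDK, illustrative "gpt-4o" model,
# and invented context details standing in for your real numbers.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Act as a skeptical CFO reviewing a growth plan."},
    {
        "role": "user",
        "content": (
            "Context: our marketing budget is $2M, the last campaign missed its "
            "pipeline target by 30%, and my boss thinks paid social is a dead end. "
            "Don't give me a single answer. What are two frameworks a CFO might use "
            "to decide whether to cut or reallocate this budget?"
        ),
    },
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Keep the conversation evolving: make the model explain and defend its ideas.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({
    "role": "user",
    "content": (
        "Why did you suggest those frameworks? What's the biggest weakness in each, "
        "and what would change your recommendation?"
    ),
})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```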
When you work this way, the possibilities are endless.
Take financial planning.
Now I can upload my entire financial profile,
assets, liabilities, income, spending habits, W-2, tax return,
and begin a conversation around risk,
where I'm missing opportunities for asymmetric upside,
how to reach my financial goals,
the easiest way to save money,
be more tax efficient, etc.
The financial advisor across the table doesn't, in my view, stand a chance.
She's incentivized to put you into high-fee products
and doesn't have a billionth of the knowledge and case studies of an LLM. In addition,
she's at a huge disadvantage, as the person in front of her, you, is self-conscious and unlikely
to be totally direct or honest. Quote, I'm planning to leave my husband this fall, unquote.
We all crave access to experts. It's why people show up to hear Scott speak.
It's why someone once paid $19 million for a private lunch with Warren Buffett.
It's why, despite all the bad press regarding their ethics and ineffectiveness,
consulting firms continue to raise their fees and grow.
But most of us can't afford that level of human expertise.
And the crazy thing is, we're overvaluing it anyway.
McKinsey consultants are smart, credentialed people.
But they can only present you with one worldview,
shaped by a series of biases, including how to create problems
only they can solve with additional
engagements, and what will please the person who holds the budget for follow-on work.
AI is a nearly free expert with 24-7 availability, a staggering range of expertise, and most
importantly, inhumanity. It doesn't care whether you like it, hire it, or find it attractive.
It just wants to address the task or query at hand.
And it's getting better.
The hardest part of working with AI isn't learning to prompt.
It's managing your own ego and admitting you could use some help
and that the world will pass you by if you don't learn how to use a computer, PowerPoint, AI. So get over your immediate defense mechanism,
quote, AI can never do what I do, unquote, and use it to do what you do, just better.
There is an invading army in business: technology. Its weapons are the modern-day tanks, drones, and supersonic aircraft.
Do you really want to ride into battle on horseback?
Life is so rich. Thank you.