Pivot - AI Basics: How and When to Use AI

Episode Date: October 23, 2024

Kara and Scott are back in your feeds for a special series on the basics of Artificial Intelligence. What should you use it for? What tools are right for you? And what privacy issues should you watch out for? Kylie Robison, Senior AI Reporter for The Verge, joins Pivot with a primer on how to integrate AI into your life. Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot.

Transcript
Starting point is 00:00:00 Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network. I'm Kara Swisher. And I'm Scott Galloway. And you're listening to our special series on AI. We talk a lot about the business of AI, but today we want to focus on the ways we can actually be using it in our day-to-day lives. But here to chat with us about some of the AI basics is Kylie Robison. Kylie is a senior AI reporter for The Verge. Welcome, Kylie. It's good to talk to you. Yeah, thank you for having me.
Starting point is 00:00:30 So first of all, talk a little bit about how you personally use AI in your day-to-day life, and why are you covering it? Yeah, you know, before this, I was covering Twitter at Fortune Magazine, which is run by Kara's favorite person. And that wasn't easy to cover. And I thought AI showed a lot of promise and was just such an interesting area to tackle for a young reporter. I mean, who doesn't want to be covering the biggest technology to hit the scene? And I live in San Francisco, so it just seemed perfect. But it is perhaps the most
Starting point is 00:01:06 stressful beat I've ever had because it's so large and so nuanced and people argue about it all day. About using AI in my day-to-day life, I would say it's not something I use heavily. I do use it for, like, I upload a document, for instance, OpenAI will release these safety cards for their models that show like, this is how safe it is. So I can upload that PDF to GPT-4o and say, okay, and ask questions based off that PDF. Do they mention this? Can you expand on what this means? So like really heavily technical documents or white papers, I can ask questions and like simplify it in a way that's more helpful. And it's quicker for me to understand than going to a bunch of researchers, making a bunch of calls to get them to explain it. I think that's been really helpful for me. I know Scott uses it for writing. I think I've noticed Claude can be kind of a helpful writing assistant. But in terms of actually using it to write, I don't use that
Starting point is 00:02:05 because I don't think it's helpful yet. But just as a partner, as Scott has mentioned on the podcast, as a partner to be like, okay, here are some of my rambling thoughts. Can you streamline what I'm trying to say and like, you know, edit it for me? Right. Absolutely. So give us a few tips for someone who's looking to implement AI in their lives, like daily tasks that could be made easier, when people ask you this question? Obviously, Google is integrated into writing emails, for example, which I don't find useful. But bills, resumes, and much of what the average person knows of AI is ChatGPT and related tools. Try to expand on that.
Starting point is 00:02:40 What other tools are useful for them? Yeah, there's so many different AI tools now. I think a lot of people use Grammarly so they can check your grammar in your browser, which is really helpful. I think, you know, when you want to use AI, I think you should consider it for low stakes tasks. You have to consider your data privacy because often these models will use what you input to train the model.
Starting point is 00:03:02 So you don't want it someday spitting back, you know, your bank account information, which is really funny, because I'm a big listener of Pivot and Hard Fork, where Kevin, one of the hosts there, had uploaded his bank statements to NotebookLM, so they could create a podcast to help him with his financial information, which is, it was a really interesting thing that I think people want, you know, AI to be capable of right now is like, can you help me budget? Which, you know, I think again, low stakes tasks, thinking of the data you upload, trying not to upload sensitive information. I think just like I said, with the PDF, that was really helpful.
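For anyone who wants to try the document-question workflow Kylie describes, here is a minimal sketch of one way to do it with the OpenAI Python SDK and the pypdf library. The file name, the question, and the gpt-4o model choice are illustrative assumptions rather than anything specified in the episode, and a very long document would need chunking that this sketch skips. Extracting the text locally first also means you can see exactly what you are sending, which matters for the privacy concerns that come up later in the conversation.

```python
# A rough sketch of the "upload a PDF and ask questions about it" workflow described above,
# using the OpenAI Python SDK plus pypdf. Assumes `pip install openai pypdf` and an
# OPENAI_API_KEY environment variable; the file name, question, and model are examples only.
from openai import OpenAI
from pypdf import PdfReader

def ask_about_pdf(path: str, question: str, model: str = "gpt-4o") -> str:
    # Extract the PDF's text locally, so you can see exactly what gets sent to the API.
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer questions using only the provided document."},
            {"role": "user", "content": f"Document:\n{text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical file name; swap in any dense technical PDF or white paper.
    print(ask_about_pdf("model_safety_card.pdf", "Do they mention red-teaming, and what did it involve?"))
```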
Starting point is 00:03:41 I think, you know, I used it, I just turned 26. So I used it to compare health insurance. These are my needs. These are the options I have. Here's a PDF of what they offer. Which one should I choose? That was really helpful. Stuff like that. Nice to meet you, Kylie. By the way, I used it this morning. I asked AI, why am I so broke? And it immediately sent me a copy of an email confirmation from Amazon that my Mexican cat costume will be delivered tomorrow. That's a little AI humor, Kylie. I'm so sorry, Kylie. As a young woman, you needn't endure this, but here you are. Go ahead. Or I'll say, how can I feel better about myself? And it'll just come back with,
Starting point is 00:04:22 good morning, you fucking stack of sunshine. So it has. Oh, my God. Anyway, a lot of people use AI. A lot of people haven't even started. What would you suggest someone does to get started? Which one or two LLMs would you suggest they download, the free version even? And how do they start to try and unlock the potential and just understand it more?
Starting point is 00:04:45 And, of course, I'll start with an answer. The first time I really interfaced with AI was trying to figure out fun stuff for me and my 14-year-old son to do in London. And it kind of went from there. What two or three things would you suggest to help people get started? Yeah, I think that's a really helpful example and something I think the makers of these models want people to be using it for rather than highlighting any nefarious ways to use AI. They're hoping people use it to write, you know, a story for their young child set with pictures. You can use ChatGPT for that or Grok. And you can use it to book travel or like to plan your travel, which is really cool. And I used it just when I went to Spain. I was like,
Starting point is 00:05:25 what should I go see? So I think those are really helpful examples. Again, these are all low stakes tasks that you can use any really any chatbot on the market that's capable enough to use creatively. I think you have to check to see if these places are open and exist because it can hallucinate very confidently. But yeah, any model right now, they're all kind of on par with each other. A lot of them are because they're all working towards the same thing. So, you know, you can use it to decide what hair color you want to do next. Anything that's just fun and low stakes, I think, is easy. Low stakes.
Starting point is 00:06:09 So you did mention privacy. Now, we do put a lot of privacy online. My bank things are online, everything else. But I really limit what I use here because of that. Because I'm like very worried. And I'm someone who's very aware of privacy. I'm like, Scott, when you said you put your medical records, I'm like, I'm not putting my record into OpenAI. No way. They already know, Kara. Your privacy is gone. Yeah, I guess. But I just don't want to help them along to like put all the things together. And so I just, they definitely don't have my heart surgery stuff. They don't. They don't. And so you're worried about the CCP, like scaring you to give you a heart attack. I don't know. I just, I'm just telling you the feeling I had. Kylie, this is what I deal with. This is what I deal with. This is my feeling, like,
Starting point is 00:06:49 I don't want to give them too much personal information. I don't mind it on writing things and like, that are low stakes. But how do you, but then now we put lots of stuff on the regular internet. When do you imagine that crossover for the, how should the average person feel about that? Because we don't know what this stuff is being used for, correct? Like there's not as much transparency as there needs to be. No, there is not as much transparency. And they claim that it's because of, like Scott said, the CCP or, you know, anti-competitive reasons. I think, you know, they've already hoovered up the entire Internet. There is no going back from there. All these models have hoovered up the
Starting point is 00:07:24 entire Internet. It's something I've kind of reckoned with when they said, you know, Photobucket signed over all of its data to train large language models. I was like, dang, like whatever I put on Photobucket that was stupid when I was 11 is now going to be used to train large language models. And 11-year-old me wouldn't have known that. So I think it's kind of, it's a really tough position. People on Instagram, celebrities were like, you know, Meta does not have the right to use my
Starting point is 00:07:53 photos if I post this story. I think people are really protective over, you know, the lives that they have shared with the internet that they have been encouraged by these large companies to share with the internet. And it's just, I think they feel like it's being taken away from them and used to train these black box models. I think people have different opinions on it. I personally feel I have the heebie-jeebies about it because I have, you know, I grew up with the internet, with Facebook launching when I was a young teen. So I think it's a very tough position. I think some people are like, I don't care. So do you think that people are more worried? Because a recent study found that one in nine Americans use AI every day at work. That's a very small number,
Starting point is 00:08:34 like at this point, right? Where are we in that? Do you think everyone's just going to do it? Like, not do it, it's going to be done to them, and that'll just be foisted upon them by Apple Intelligence or whatever. You know, do you want to get an Uber? You're at the airport. That seems like a good thing, for example. Totally. I think automating, you know, rote tasks is not a bad thing. When it comes to the workplace, I think a lot of workplaces are well aware of the data privacy issues. And they're like, please don't upload our internal documents to OpenAI. That's been a problem. It's banned in a lot of workplaces. So one in nine doesn't surprise me. I think it's going to continue to grow. I just published a story
Starting point is 00:09:10 today about AI agents, which is just like the new next thing. So sort of an AI assistant. And where I'm seeing this a lot is in SaaS products, like Salesforce released a CRM agent. Microsoft has co-pilots, stuff that they believe will increase efficiency amongst their staff. But I think it's going to be hard for that number to grow so long as there's transparency issues and that trust has to grow. Okay, let's take a quick break. When we come back, we'll talk about where we're already using AI without realizing it, and what we should not be using it for. Fox Creative. This is advertiser content from Zelle.
Starting point is 00:10:00 When you picture an online scammer, what do you see? For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night. And honestly, that's not what it is anymore. That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more like crime syndicates than individual con artists. And they're making bank. Last year, scammers made off with more than $10 billion. It's mind-blowing to see the kind of
Starting point is 00:10:27 infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem, we can protect people better. One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says one of our best defenses is simple. We need to talk to each other. We need to have those awkward conversations around what do you do if you have text messages you don't recognize? What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a, thank goodness, a smaller dollar scam, but he fell victim.
Starting point is 00:11:16 And we have these conversations all the time. So we are all at risk and we all need to work together to protect each other. Learn more about how to protect yourself at vox.com slash Zelle. And when using digital payment platforms, remember to only send money to people you know and trust. Support for this episode comes from AWS. AWS Generative AI gives you the tools to power your business forward with the security and speed of the world's most experienced cloud. Scott, we're back with our special series on AI. We're talking to a senior AI reporter for The Verge, Kylie Robison. Huge leaps have been made in AI over the last couple of years.
Starting point is 00:12:02 Talk about how we're using it without realizing it. It's also been around for a while, right? Where are people not realizing they're using it today? Totally. I think in your career, Kara, you've probably covered AI. It's been around forever. I think, you know, your Netflix algorithm, that's AI. Automated self-driving cars, Waymos. I live in San Francisco.
Starting point is 00:12:22 Waymos are everywhere. That's AI. It is used in, you know, TikTok algorithms. That's AI. It's everywhere. And it has been working in the background quite a bit. I think you'll hear companies, especially as a reporter, they're like, we've been in the AI business for two decades, which is, it's not facetious, but it is different than the frontier models we're seeing OpenAI and Anthropic release. But it is those algorithms you're used to using.
Starting point is 00:12:51 So, break down or do a quick kind of J.D. Powers review of the biggest LLMs from your favorite to your least favorite. Favorites to least favorites. Scott likes Claude, just so you know. I do like Claude. Claude is really good. It's surprisingly good. I just started using it recently and I messaged a coworker. I was like, I'm a bad AI reporter because this is way better than I anticipated. It also has, I don't know if
Starting point is 00:13:15 you've noticed this, Scott, kind of intense guardrails. I asked some questions about AI. It's like, well, as an AI, I can't exactly answer these questions. Whereas ChatGPT would have just spit it out. Just so people know, it's by Anthropic, which was a group of people who thought OpenAI was not safe enough and started Anthropic. Exactly. It's backed by Amazon. Exactly. Yes. And Google has a smaller cloud share for Anthropic.
Starting point is 00:13:40 But yes. So Anthropic is a competitor to OpenAI. OpenAI's latest frontier model is GPT-4o. They've also released a reasoning model called o1, but they consider that, for lack of a better word, kind of dumber than their frontier model. And frontier models are basically the biggest, the best, you know, models that are out there. So frontier models are like, the next one is like the next iPhone, basically. So I would say Claude is amazing. Claude Opus is amazing. I think the thing is, they're all building the same thing
Starting point is 00:14:12 with the same training data, which is the entire internet. So they're going to continue just leapfrogging over each other. So it's hard to compare because it's, you know, five major companies with some of the best researchers in the world with all of the same training data, all building the same thing. Do you use Grok? Don't laugh. Do you use Grok? I'm not getting in any of the new whatever cyber taxis get. I'm not getting in and I'm not putting any information into any of his properties.
Starting point is 00:14:40 I don't trust him personally. I used Grok when it first came out. I think I made, you know, Kamala with a gun. The Verge put out a story about it, because Grok, for the listener, is available on X, formerly Twitter, which is owned by Elon Musk. And he owns xAI, which created this chatbot. And it has what feels like no guardrails. So you can make a lot of photos that break all sorts of copyright laws. No, I don't use Grok, and I don't necessarily find it to be top of the line. If I were to rank models, they're not at the top.
Starting point is 00:15:18 So I'll go back to my question. What's your favorite LLM? I would say Opus, Claude. It's really intelligent and incredible. I think, you know, it's enviable from other labs what they've built. And then what, can you name some long tail LLMs or AI apps that are sort of fun, that maybe haven't gotten very much attention? Any sort of undiscovered gems out there? Undiscovered gems. I mean, if you go to Hugging Face, if you're into open source,
Starting point is 00:15:45 there are hundreds of thousands of open source LLMs that people can mess with. I mean, that's the cool part about open source LLMs, which is a very hot debate. But, you know, developers are creating all sorts of cool shit with the open source LLMs available on Hugging Face. So there's almost too many to choose from,
Starting point is 00:16:06 but none of them are mainstream in the way, because it costs so much money. Hence why OpenAI just raised the most money that anyone's ever raised ever. It's a lot of money. Yeah, just for people: Hugging Face is an AI community, a platform where they collaborate and they do different things. And, you know, this will be very much like the early app days or the early internet days where there was suddenly websites and things. And then there was Yahoo that compiled them into Yet Another Hierarchical Officious Oracle. I have a follow-up. I find that AI is very politically correct. That it will say, to answer this question, make sure that you check with law enforcement or- That's not politically correct.
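To make Kylie's Hugging Face point concrete, here is a small sketch of running an open-source model locally with the transformers library. The model id and the prompt are arbitrary examples rather than picks from the episode, and the weights download on the first run.

```python
# A small sketch of "messing with" an open-source model from Hugging Face locally, using the
# transformers library. Assumes `pip install transformers torch`; the model id is just one
# small instruction-tuned example, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed example model; swap in any other
)

prompt = "List three low-stakes ways someone could try an AI chatbot this week."
outputs = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```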
Starting point is 00:16:49 Oh, I find it very politically correct. Please don't steal from the jewelry store. All right. No, but it'll come back and say, well, this might reflect bias or you should. I just find it's very, I'm looking for an AI that'll say, that's a stupid fucking question. Or your question, I want it as a friend. He wants the shame AI. Shame AI.
Starting point is 00:17:12 Makes sense. Hit me harder. Call me daddy, you bitch. Oh, my God. No, but I do find it's very, that they've put it, they're so worried about it going weird places. It has gone weird places. It's constantly preconditioning and qualifying all its answers and being very gentle. And I find it's very overly sensitive and, quite frankly, politically correct.
Starting point is 00:17:33 So I'll start with Kylie. Do you find that to be the case or do you think that's just they're putting in appropriate guardrails? I think they're putting in the appropriate guardrails because it's so nascent. I mean, why start off crazy? I feel like we can work our way up. I feel like we can work our way up to getting you a sadistic chatbot. But for now, it's so...
Starting point is 00:17:50 Go on! It's just such a nascent technology. So I think being overly safe and correct and nervous about what it's going to output to millions of people, I think that's a good move. Yeah, Scott, come on. I think that I'm going to output to millions of people. I think that's a good move. Yeah, Scott, come on. I think that I'm going to answer this.
Starting point is 00:18:07 I know you want to please bitch AI, but one of the things that's really important is it doesn't sexually harass people. It doesn't like start... Okay, you're taking this to an even darker place than I would go. But I'm just saying, it has. It's been, the original ones were racist.
Starting point is 00:18:22 If you ask it a simple question, it'll start conditioning everything and telling you to check this and make sure that you talk. And it's a sort of, just give me the goddamn answer. I get that. But they're never going to do that because they literally, the first time they put out some Microsoft stuff, it was racist, right? It started to say racist things. So they really can't have. One of the things that I think I tell a lot of people is I met these two guys on the street yesterday and they were creating an AI. They just
Starting point is 00:18:50 ran up to me. They love Pivot. They're creating an AI that goes on, speaking of odd and unusual things, that goes on top of 911 calls that they'll be selling into cities. And it'll translate, say, Spanish immediately because not every person, there's a delay there because the person who's taking the call is not Spanish, and they have to go get a Spanish-speaking dispatcher. And so it's doing all kinds of things that groks it and sends things out really quick. I thought it was a great idea. I thought it was a really interesting way of use of AI. And I said, but you know what something is? You can't make a mistake even if human dispatchers do. So I think they have to be unusually careful with all these things as this AI is shoving us around the planet. I don't know.
Starting point is 00:19:30 I just feel like that's okay. You can take it, Scott. But I'm going to get someone to make you a mean AI, overlord, or please bitch, or something like that. I learned this morning from AI that a group of flamingos is called a flamboyance. That's true. And I love that. How awesome is that? That's also in the dictionary. A flamboyance? You're a flamboyance. Anyway, last question.
Starting point is 00:19:52 We just sort of covered the idea of what things we should not use it for. It's going to be used for everything, just FYI. But any predictions, last question, on things AI can't do yet, but will be able to do for us in the next three years? Use your, you know, imagination hat here, things you're seeing
Starting point is 00:20:08 I think in the next three years, again, these are so hard to tell because you need so much money and so much compute. So if we just continue on that exponential curve that these companies are hoping for, I see, you know, probably more accurate and natural voice interactions. That's something that they're building
Starting point is 00:20:24 that they really want, you know, the her movie style reality. I do think that that will get better, especially as they release all of this to the public and people test it and they train on people using it. I think those will naturally get better. Advanced code generation and debugging, that's something they're already really good at. And if these reasoning models from OpenAI and others continue to get better, it's going to be better at coding and debugging, which will be really cool. And they're all building agents, which again, are like little AI assistants. That's sort of the high stakes tasks that they want to access. Hence why all these guardrails are so tough because they want these high stakes tasks like running your life and booking you flights and, you know, having access to all of this.
Starting point is 00:21:06 So I do see them building out agents, but it would require so much compute and so much money to get there. So, you know, I'd be curious what you guys think, because I get asked all the time, like, is the bubble going to pop? Is OpenAI just going to crash? Which I think, you know, it's so hard for me to tell. You guys have been doing this for so long. It will, but no. No, no, no. It's like when the internet crashed.
Starting point is 00:21:31 This is a big deal. This is a change in computing. It's yet another great change in computing. This is not crypto. This is not, you know, some of the little bubbles. But a bubble, I guess, but it's directionally correct. It's directionally. And it's going to be huge and encompass everything. Scott? We've talked about this. We think that relative to its size and leadership position, OpenAI, in my view, is actually at 12 times revenues,
Starting point is 00:22:07 is actually probably the best value because some of the long-tail ones you talked about who have almost no revenues and no real visible business model yet still get $2, $10, $20, $50 billion valuations. So it's going to be a wild ride. It's like I would describe as like late 90s internet.
Starting point is 00:22:28 We don't know if it's 97 or 99 now, but we know that by 2005, it's going to be much bigger than it is now. That's a long-winded way, Kylie of saying, I have no idea. Yeah, he does.
Starting point is 00:22:40 It's up and to the right eventually. Anyway, thank you, Kylie. We really appreciate it. You can read Kylie on The Verge. She does amazing work on this topic and breaks a lot of stories. A colleague. A colleague. She's a scoopster.
Starting point is 00:22:54 She's a scoopster. She's a scoopster, and she's a great one at it. Anyway, okay, Scott, that's it for our AI Basics episode. Please read us out. Today's show was produced by Lara Naiman, Zoe Marcus, and Taylor Griffin. Ernie and her dad engineered this episode.
Starting point is 00:23:08 Thanks also to Drew Burrows and Mio Severio. Nishat Kharat is Vox Media's executive producer of audio. Make sure you subscribe to the show wherever you listen to the podcast. Thanks for listening to Pivot
Starting point is 00:23:18 from New York Magazine and Vox Media. You can subscribe to the magazine at nymag.com slash pod. We'll be back next week for another breakdown of all things tech and business.
