Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 07: The Seductive Appeal of AI with @JCEFidel

Episode Date: October 6, 2020

Stephen Foskett is joined by Josh Fidel, a technologist and futurist who has been inspired by applications for AI. Starting with a discussion of GPT-3, the AI text generation engine, the discussion ranges widely from AI Weirdness to James Yu's Singular to technological determinism to the Melbourne Monolith. Applications of AI are everywhere today, and Fidel's background as an enterprise technologist and futurist gives him a unique perspective. AI is generating compelling content, and we all must be ready to absorb and understand it. How will businesses use machine-generated text? It is likely that we will have to push back on too-broad uses of AI technology in the interest of truth and usefulness, rather than simply applying AI to every task at hand.

This episode features: Stephen Foskett, publisher of Gestalt IT and organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Josh Fidel, futurist and technical architect. Find Josh on Twitter at @JCEFidel.

Date: 10/06/2020
Tags: @SFoskett, @JCEFidel

Transcript
Starting point is 00:00:00 Welcome to Utilizing AI, the podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence technology. Each episode brings experts in enterprise infrastructure together to discuss applications of AI in today's data center. Today, we're discussing the seductive power of GPT-3 and other automatic
Starting point is 00:00:26 text generation and sort of output generation algorithms. I'm your host, Stephen Foskett. I'm the organizer of Tech Field Day and the founder of Gestalt IT. You can find me on Twitter at sfoskett and you can find my writing at gestaltit.com. Now let's meet our guest. Josh, why don't you go ahead and introduce yourself? Thank you, Stephen. My name is Josh Fidel. I am a principal solutions architect with Advisex. You can find me online at thevfidel.com, and you can find me on Twitter at J-C-E Fidel, F-I-D-E-L. And I was really actually thankful that you invited me to this. I've been doing some work with AI and machine learning with some of my manufacturing customers. And generally, they're utilizing it as a, they're working on utilizing it as a QA system, right? Utilizing
Starting point is 00:01:23 IoT cameras to take pictures of manufacturing lines and to do image comparison to make sure that the products that are created are acceptable. So AI to me is a fascinating topic and I'm already seeing it in the workplace, in the marketplace. Yeah, I think that that's one of the sort of low hanging fruit applications for AI is sort of needle finding in haystacks. You know, you train the system to, you know, this is what's normal. Tell me when something not normal happens. And that's actually a really fun and useful application. But, you know, Josh and I, you know, you and I have talked at length on the On-Premise IT Roundtable podcast about issues of social justice. And I know that you're something of a futurist and that you're very excited about sort of where things that you brought up was this GPT-3 engine and the amazing and
Starting point is 00:02:28 quite lovely and meaningful text that it generates. Can you give us a little bit of background? What is GPT-3 and what comes out of it and what got your attention? GPT-3 is a transformer-based neural network. It became popular around two or three years ago. It's the basis for the NLP model BERT and GPT-3's predecessor, GPT-2. The cool thing about GPT-3 is it was trained on the largest language model data set that has ever been utilized. And when I say a large data set, it actually had about 175 billion data points. I mean, this thing trolled the internet. It was fed massive texts. It's basically, you know, what is 175 billion? Multiple, multiple copies of the Library of Alexandria, right?
Starting point is 00:03:38 Prior to this, GPT-2, well, it was 100 times less. I think it was like 7 billion or something like that, 7.5 billion. So what's really exciting about it, you know, if you're utilizing another language model like BERT, you have to have this really elaborate fine-tuning step where you gather thousands of examples of, let's say, different sentence pairs in different languages to teach the model how to do translation. GPT-3 already has that, so you don't have to train it. And we could get down into the specifics about, you know, what's a zero hit, what's a few hit, what's a multiple hit, you know, when you feed a model samples. A zero hit is, you don't give it any examples and you tell it to do a task.
Starting point is 00:04:42 A few hit is you give it a few. Multiple is you give it a bigger data set. GPT-3, and I forget the number. My brain kind of messes it up. It's either 89% or 98%, one of those two. But it has this incredible correct output for zero hit functions that has never been done before previously you always had to feed it something gpc3 you don't that's what makes it really cool and and that's the coolest thing so one of my favorite toys um related to ai is actually ai Weirdness, the amazing, wonderful blog. And she does the most remarkably fun things with AI. You know, is this a giraffe is a is a classic as are the Sherwin
Starting point is 00:05:35 Williams paint colors, I recommend checking that out. You know, she recently did quarantine houses, what appliances are in your quarantine house, and use GPT-3 to generate this. And so here's an example of, you know, sort of what Josh is saying. GPT-3 just put this together out of nothing. So it said, this house contains an espresso machine, a fondue pot, a kitchen robot, and a wormhole. And it's like, that sounds like something Douglas Adams would write. You know what I mean? And there's so many wonderful, beautiful, amusing examples of what does a quarantine house have in the future. It has a cybernetic limb, superhuman intelligence, an Android, and a battle tank. AI came up with that list. And I think Twitter couldn't do any better.
Starting point is 00:06:26 I mean, it's just fun. And we've seen things, Josh was telling me about a novel that's partly written by GPT-3. The thing that's fun about this is because it is so credible and compelling and even insightful. Why don't you talk a little bit, you were talking to me a little bit ago about that novel. How did that kind of, don't get weird, but how did that novel touch you in a way because of AI? I was amazed. And I believe the author's name is James Yu, Y-U. And I wouldn't say it was a novel necessarily, it was more like a novella. It was a smaller piece of writing, it wasn't huge. And I read through it and the only way I could tell which parts of this little novella that the AI had written was because the author was kind enough to actually
Starting point is 00:07:28 put the text in a different color. That's the only way I could tell. It was so good. And I read voraciously, especially sci-fi and fantasy, you know, being a futurist. And it was amazing because some of the things that the AI wrote, it didn't just quote verbatim from different literary texts. It instead took the ideas encapsulated in things like the Buddhist Dharma. It actually, it took some of its character building from one of my favorite authors, M.K. Gibson. It described one of his characters from a previous book, took that, translated it, and turned it into a character inside this story. And it's very obvious that there's heavy influence here.
Starting point is 00:08:23 But all authors have heavy influence it it was so amazing to me and then the the best part after reading the novel uh the author had had discussions he had had q a sessions with the gpt3 withT-3 role-playing characters from the writing and the discussions they had were amazing because he was the author was asking about the you know what's your drive why are you saying this why are you writing this if you were this character in this story what are you saying this? Why are you writing this? If you were this character in this story, what are you thinking about? And the conversation was like reading a conversation between an actor and an interviewer. It was not a, if you didn't know that it was computer generated, you wouldn't know. And to me, that's the Turing test, right? Can a computer produce something that makes it so you don't know it's a computer? If I hadn't have known, I wouldn't have known. I think we finally destroyed that. That's one thing that just amazes me about GPT-3. Yeah. And so I'll just point out, yeah, James J. Yu on Twitter and the novel that we're discussing is singular, possible futures of the singularity in conversation with GPT-3.
Starting point is 00:09:53 And he even says in conversation with, as his co-writer, it wasn't that he tasked it to write, it's that he tasked it to have a conversation. And this is what came out. And AI Weirdness is AIweirdness.com or Lewis and Quark on Twitter. So I think that this is really the heart of what I wanted to talk to you about today, Josh. And that is, as you can probably tell by listening to us, these outputs are not just seductive and correct, but compelling. They make us say, this is a valuable work. This is a valuable thing that has been produced. And one of the things I think that, you know, when you see something like that, when you see something so incredibly compelling,
Starting point is 00:10:41 you start thinking about all the different things you can do with it. But my concern, I want to slam on the brakes here. So I'm excited by this. But I am also completely terrified by this because I can see businesses looking at this and saying, aha, that's a shortcut to do X. And so one of the things, for example, that has come up, you know, in our chats is, you know, this is something that could, you know, maybe it can write other kinds of business documents. Maybe it could write legal contracts. Maybe it could write tech support documents. Maybe it could do customer service interactions. And my answer is not just no, but hell no. We can't have something
Starting point is 00:11:28 that's going to produce something unexpected and wonderful writing our tech support documents. Are you kidding me? What's going to come out of that? So what do you say to this, Josh? How do you feel when we're driving down the road and I'm like, yeah, yeah, Josh, this is great. And then I slam on the brakes and I say, no way. I think this is the first time I've ever heard you express borderline Luddite sympathies, Stephen. And okay, I agree with you. Humans are incredible at creating tools. I mean, that has been the whole basis of our evolution. You know, we start with the jawbone as a weapon, right? We move up to pointy sticks. We make arrowheads. And now we've moved up to something where we're creating something that's as smart as we are. But if you look at mankind's inventions, not always have they been for good. We have nuclear power, which is awesome. Nuclear energy is great. We also have the atomic bomb. It's about the application of the tool and how the tool is used. I do like the fact that with GPT-3, it's not open access. You have to apply to open AI, you have to submit what projects you're working on, and
Starting point is 00:12:55 that is good. We created guns and guns are useful. You know, they're great for hunting. We let those guns go in the wild and now we have school shootings. Same thing with GPT-3. We have something amazing that we could use, but if we use it poorly, things can go bad. Like I said, I do like the fact that opening eyes is keeping a wrap on this. And really what this turns into is it almost turns into an arms race. If you look at
Starting point is 00:13:34 what's going on with China's AI development compared to the United States AI development, I don't know all the specifics, but I do know that China's, they're outpacing us in AI development. I don't know what that will mean for international geopolitics. GPT-3, we talk about the things it creates. You talk about, oh, it could write business contracts. The other things it could do, it could write computer code. I mean, it could code business contracts the other things it could do it could write computer code I mean it could code applications for us but here's the thing you can't just trust the output from this thing you have to filter you have to you have to have good intentions going in which I certainly hope open AI holds to. You have to get that output and you have to review that output.
Starting point is 00:14:31 Can it write business contracts? Absolutely. I would like to see nothing more than different copies of AIs working for different companies that write the basis of a bunch of legal contracts, save you money on lawyers. But at the same time, you're also going to have to have someone review those contracts. It's to me, it's the open source model. It is the peer review model in science. You have to have multiple sets of eyes on anything. Because as we all know, we all have bias, regardless of what our bias is. The only way to get away from that bias is to open it up to observation, to be transparent,
Starting point is 00:15:15 to have review. And, you know, hopefully the more of us that look at a thing, the fewer problems will exist in it, because we'll all find those problems. And I would say that the same should be applied to AI output, peer review, open source, letting people see the product, and not just the product necessarily, but also the code behind it, because maybe the code that's generating this output is biased. Again, open source. So one of the things that you said in there, I just want to get back to, and I really want to, I don't that's almost apples and oranges.
Starting point is 00:16:05 I mean, this AI is perhaps as good as some of us at generating interesting output. But is that smart? Or is that just good at generating interesting output? Maybe it is good it's just just generating output but but here here would be here would be my counterpoint find me an individual who has 175 billion literary training points inside their brain. That's basically saying you've read 175 billion books. I've read a lot of books, but it's not 175. I would never get anything done. Um, when it comes to code, I can code. I know a lot of really good coders. I know a lot of
Starting point is 00:17:06 really good coders who can tear stuff apart. I don't know any coder that is proficient in hundreds of programming languages. Does that make it smarter than us? Or maybe it just gives it a deeper field of reference, which humans don't necessarily have. But here's the thing is, as a human, I can do lots of things. I can throw a ball. I can play a cello. I can ride a bike. AI cannot do that.
Starting point is 00:17:40 AI does not have that multifunction ability, right? AI is very good for one, two, three things. I don't think it'll ever replace humans, but I do think in some ways it can actually surpass what we can do. I absolutely agree with you on that, that in some ways it can. And like you said, I mean, no human can have written, have read this many books. No human can have, you know, this many experiences. Similarly, no human has the patience or focus to watch every line of a log file and find the one that doesn't fit and doesn't you know that they out of the ordinary one so certainly I can absolutely feel that
Starting point is 00:18:31 that AI is already better than us at some tasks but again I want to I want to get back to this this core idea it's seductive you You know, again, not to sound, I don't want to get too crazy here. Many people have heard of a American economist, sociologist named Thorstein Veblen, because Veblen came up with this idea of Veblen goods and conspicuous consumption, the idea that some things are more valuable because they're more expensive, not necessarily because of other externalities. Veblen also came up with the concept of technological determinism, which basically says that technology itself drives the use and the thought about technology. And this is something that I'm going to come back
Starting point is 00:19:26 to, I think, in future episodes of Utilizing AI, this idea that, as Veblen said, that the machine causes us to have certain patterns and habits of thought. In other words, the availability of technology and the availability of such compelling technology makes us want to use that technology in some way. And I really want to challenge listeners and enterprise people, and I guess I'm not refuting Josh here because I think he said roughly the same thing. I really wanted you to challenge and think about what the right use of technology is, not what the possible use of technology is.
Starting point is 00:20:11 Do we want this thing to write text for us? Sure, but then we have to go back and check that text and make sure that it doesn't say something totally wacky. Because it could, because we don't know how this thing is going to work. We don't know what it's going to come up with. It's all well and good to come up with an amusing character in a novel, but it's not so well and good to come up with an amusing sentence in a corporate, you know, blog post. You know, there are reasons that we want to have things checked. And I think that we shouldn't get too excited about the possibilities of any AI system, especially machine learning system.
Starting point is 00:20:53 Because really machine learning, the way that it works is it's just, I've said this in a previous episode, it's just driving using the rear view mirror. It just knows what it's seen and it's continuing to do that thing in a compelling manner. But if we don't have guardrails on the road, it might drive us right off a cliff if there comes to be a turn. So, you know, I mean, how does this strike you? I know, Josh, again, you are somebody who is very thoughtful on issues like this. How does all this strike you? I mean, I've brought in basically Marxism in AI. What do you think? Well, it's a good thing I'm an anarchist. Anyway, let me ask you this. You have a piece of metal and that piece of metal has a very sharpened edge on it. And that piece of metal is very useful because it reduces the amount of labor you need
Starting point is 00:21:55 to cut up food, to tear up food and put it in your mouth. And that's really useful, right? You've got a knife. Here's the problem. You can also take that knife and stab someone. It's not about the tool necessarily. It is about how the tool is utilized. And I can't drive that point home enough. And here's, you know, you're worried about it, Stephen. Are you familiar with scissor statements? Tell me more. This was, it was something I actually learned about through work I'm doing with a think tank in England. And the title of the article is Sort by Controversial.
Starting point is 00:22:44 And I highly suggest everyone read this article. It's on SlateStarCodex.com. Sort by Controversial basically says that there are statements that are so divisive, they can tear apart society. And you won't even know that you're involved in a scissor statement because you're so caught up in defending your side of the scissor that you don't realize what's happening and and I could even make a case in point because I'm seeing it happen in real time right now if you look at the Kenosha Wisconsin shootings that just occurred at the protest you have some people who are saying this kid is a murderer he was
Starting point is 00:23:38 illegally carrying a weapon blah blah blah blah you have another side of this argument that is saying he was acting in self-defense. They threw things at him. He was trying to keep himself safe. This is a scissor statement because it is causing a rift in society. You look at politics in the United States, it's a scissor statement, right? You're either for this person or you're against this person. That's the problem with duopolies, but we can talk about that later. If someone were able to utilize GPT-3, if the output was not checked and a malicious actor slipped scissor statements into its news articles, into its contracts, that quite literally could cause a collapse. And that's dangerous.
Starting point is 00:24:30 And that's why- And not only that, but my concern would be that it would learn to do that. It would learn. So let's say you were trying to write news stories and you ranked it on popularity or number of clicks or something, it would rapidly learn to make things as controversial and insane as possible,
Starting point is 00:24:51 because that's how you get clicks. And without any kind of guardrails on morality or truthfulness or anything, it would, I mean, this is basically what you and I are describing here would be basically a nuclear weapon of thought in that it would rapidly progress to be as divisive as possible because that's where the attention comes from. And, you know, again, I want to turn back toward AI and toward utilizing AI. The message here is not this stuff is terrible or this stuff is dangerous. The message here is simply don't go getting too excited about taking something from one area into another, because it could very quickly turn against you and it could very quickly embarrass you, right?
Starting point is 00:25:45 It could very quickly embarrass you, could very quickly cause some very serious ramifications. Yeah. The cool thing about technology is we're making better and better tools that are faster and they're able to achieve outcomes quicker. And that's really great. That's awesome, because that increases the efficiency in which humanity as a civilization operates. But again, when you have AI like GPT-3, I cannot praise open AI enough. They have this massive knife, and they are keeping their hand
Starting point is 00:26:23 firmly on the handle and making sure it is only used in the proper ways question is what happens when people without that i don't know what you call it morality ethical whatever um instead of using that knife to cut their food they're going to start throwing it around the room or instead of using this stuff to cut their food, they're going to start throwing it around the room. Or instead of using this stuff for entertainment, they're going to start using it for productivity without questioning whether it's ready for that. You know, I'll just throw out one more here. We got to wrap here in a second, but I'll just throw out one more example that people should really look up. So the latest version of Microsoft Flight Simulator, everybody's super excited about it because it includes the entire world. And that's really cool.
Starting point is 00:27:10 You can fly your plane anywhere on Earth. But the 3D terrain that it's using was all generated by AI. And in many cases, that's awesome. And in some cases, it's ludicrous and amazing. So for example, the obelisk of Melbourne or the Melbourne monolith is a 200-story building in downtown Melbourne that stands up to the stratosphere. You can land a plane on top of it because somebody typoed something and there was no guardrail and the AI said, okay, 200 stories. Or the structure of Bergen, Norway, which is, you know, it's a very
Starting point is 00:27:55 hilly place. In flight simulator, it's the buildings that are hilly. And so it's just really odd. But there's so many instances here where AI has just, it's developed amusing things that Microsoft then has to go back and fix. But they shipped it. And they, and it's fine. It's a game. And it's fun. Honestly, it's hilarious to go through and find, you know, ice cliffs in the Arctic that are, you know, a thousand feet tall. I mean, it's something from Game of Thrones or something. That's fun, but it's only fun, because can you imagine if they ship this in a commercial aviation, you know, whatever, can you imagine like all the commercial planes avoiding downtown Melbourne because they don't want to hit that building? You know, it's just so weird what can come out of AI. And I think that I just want to leave everybody with a warning not to assume that only good things are going to come out.
Starting point is 00:29:00 So, Josh, like I said, we do have to wrap. I've enjoyed the conversation thoroughly as always. You want to wrap up and tell us where we can find a little bit more of your work? Absolutely. So you can find me on the vfidel.com and you can find me on Twitter at JCE Fidel. And I really look forward to more of these discussions because AI and machine learning to me is just fascinating. This podcast was brought to you by gestaltit.com, your home for IT coverage across the enterprise. For show notes and more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI. Thanks, and we'll see you next time.
