Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x13: AI is a Creativity Maximizer with Ben Taylor of DataRobot

Episode Date: December 7, 2021

Many of the tasks we perform on a daily basis are beneath our abilities, and these are the ideal targets for AI. In this episode, Ben Taylor of DataRobot joins Frederic Van Haren and Stephen Foskett to talk about AI as a creativity maximizer. Business people too often get stuck in a process rather than innovating, from office work to manufacturing to R&D, and all of these can be augmented by AI-based tools. The most successful companies have hundreds or thousands of AI initiatives across the entire business to help identify opportunities for the technology to help employees be more successful. There is an inherent push and pull between small tactical projects and big strategic ones, and we have to consider the level of effort and the impact of AI projects.

Three Questions

Frederic: When will we see a full self-driving car that can drive anywhere, any time?
Stephen: Is AI just a new aspect of data science or is it truly a unique field?
Mike O'Malley, SVP, Seneca Global: Can you give an example where an AI algorithm went terribly wrong and gave a result that clearly wasn't correct?

Guests and Hosts

Ben Taylor, Chief AI Evangelist, DataRobot and host of the More Intelligent Tomorrow podcast. Connect with Ben on LinkedIn or on Twitter at @BenTaylorData.
Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Tags: @DataRobot, @BenTaylorData, @FredericVHaren, @SFoskett

Transcript
Starting point is 00:00:00 I'm Stephen Foskett. I'm Frederic Van Haren. And this is the Utilizing AI podcast. Welcome to another episode of Utilizing AI, the podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence topics. If you've been listening to Utilizing AI much, you know one of our, I don't know, go-to sayings, and that's the idea that AI is my co-pilot.
Starting point is 00:00:30 Even though in our three-question segment, we often ask about jobs ruined and lives destroyed by artificial intelligence, and usually not the robot uprising, but we do sometimes, I think, wonder about whether AI is something that replaces people or helps people. And that's the topic here today. Yeah, I think you're absolutely right. I mean, I see AI as something to augment people, right, to help them out, not necessarily to replace them. And there are a couple of ways to do that: either with data, collecting data, processing it, and building models, or by providing automation tools that will help humans with AI. And that's why we wanted to invite today's guest in, because this is something that Ben Taylor talks about quite a lot. So, Ben, why don't you introduce yourself for a minute, and then we'll dive in. Sure. Thanks for having me. And I love this topic. So I'm Ben Taylor. I'm the Chief AI Evangelist for DataRobot. And my career started very technical. I started out focused on chemical
Starting point is 00:01:36 engineering, worked at Intel Micron and applied semiconductor building models. I worked as a quant at a hedge fund and I was the Chief Data Scient scientist for HireVue. So I managed a team of PhD physicists building out new technologies, and that continued through a startup. And now today I work in marketing, and I love storytelling. I still do innovation, but I have fallen in love with the creative side. So I get to work with other creatives and marketing just like you guys do on the video team and on the design team. And I, and I think there's an intersection here. That's really fascinating when you bring technology together with creativity. Um, but I also understand the sales side, what happens when we sell AI technologies into companies, do employees get laid off? Do they have more creative cycles? What does that look like? So happy to be here. Yeah, that's, and that's something that
Starting point is 00:02:24 definitely has come up repeatedly. In fact, I mean, I was just talking to somebody from Intel about that. And we were talking about basically the ways in which they use AI internally to help everything from design to sales to engineering. And frankly, it's all about trying to build tools that augment people's own creativity. I think that's really what you're talking about, right? The fact that an AI system can help your brain kind of fill in the blanks, right? Yeah. And to pull on that, I think there's actually something that's a little sad today when you look at some of the tasks and things that
Starting point is 00:03:03 people do, even evaluate ourselves. What is something you did this week that was maybe beneath your abilities where you could have mentored high school you or someone else to read an email and respond? And so I see it as an amplifier. And the thing I want the audience to think about is what fraction of your day do you truly need your full potential? And by your full potential, like all of your past experience to do something innovative or creative or difficult. And usually when you do that type of an audit, you realize it's a very small fraction of your time. And part of that is because you're overwhelmed with mundane tasks,
Starting point is 00:03:39 filling out expense reports, answering emails, maybe doing some chores around the house. And all of that is starting to go away, but we will live in a reality where that's all gone. So if you don't have to do those chores and tasks, and if you can augment yourself, what are you going to be working on? And I think that's where things get really exciting. This doesn't have to be anti-human, end of the world type of scenario. I think we're just tip of the iceberg on innovation. So what do you think are the challenges for enterprises and individuals to be creative around AI? I'm reminded of that. There's that classic cartoon with the square wheels and there's someone working on a circular wheel and they're saying they're too busy. And I think you,
Starting point is 00:04:22 in a business, you get a lot of people to get stuck in a process. And by stuck in a process, I mean, this is a process that's existed for a decade, whether they're underwriters for insurance or they're doing loan approvals or there's something else there. And if you're stuck in a process, you don't have time to innovate. Manufacturing is a great example
Starting point is 00:04:39 because they have hair on fire, work on progress, QA, they have to meet deliverables. They typically aren't very friendly when it comes to live R&D in a manufacturing setting because they have other priorities. And so those are some big challenges. How do you allow people to take a break and step back from the process? And I think AI enables that. We bring AI into a process and the humans are not fired.
Starting point is 00:05:05 What happens is the humans, I really celebrate the SMEs, the subject matter experts, the people that work the process are brilliant because they understand edge cases. They have a lot of past experience. They begin to inform these AI systems. They begin to direct them. And that's where you get this beautiful augmentation. So there's that initial dance that I think is really exciting, but eventually you graduate the process. Eventually, you reach full automation. And then that human is on to a new challenge. They're on to a new process.
Starting point is 00:05:34 Yeah, I think when a lot of people talk about innovation and transformation of AI, but a lot of them get stuck just with the idea, not with the concept. Somehow, I feel like that competition between companies is really kind of the driver for AI innovation in companies, as opposed to innovation from within. Is that something you see as well? Absolutely. Gartner did an executive survey, I think in 2018, and I'd love to go find the link to show it for people that are listening. But one of the biggest concerns that came back was growth. How do executives continue growth?
Starting point is 00:06:13 Because if you're a good CEO, you're not flatlining, you're growing the company. And the biggest concerns are it's harder to innovate with regulation. And also with some of these newer technologies with digital disruption, you can have a newcomer into your market that can, and we see this happen multiple times where a young newcomer startup will get funding and then they'll take off. And now your business is on the defense. And anytime you're on the defense, you have a problem. There's some strong fighting analogies in business. If you're innovating and
Starting point is 00:06:46 you're on the offense, that's a better situation to be when you become defensive because you have a hard time maintaining top talent. If you're not innovating, we kind of know how that story ends. So I think you're exactly right. Digital transformation with AI being a part of it is more existential. But there's still caution. We're past the hype cycle. People are not jumping, spending millions of dollars on hopes and dreams of what this can accomplish because 85% of AI projects fail. And I think people know that.
Starting point is 00:07:19 And so there's some justified skepticism on where should they go? Where should they start? But yeah, you have the market pressure to do it. Yeah, and I think it's a concern because large organizations I talk to, they fully understand the 85% failure rate. So they have a tendency to say, let the startups do the innovation for us and the failing. And if it works out somehow, we will acquire them. And that kind of causes kind of a shockwave in the market
Starting point is 00:07:49 where startups are being bought for a lot of money. The other thing which I also kind of see in the market is that a lot of people think that an AI transformation is something they can do for a while and then step away from it. A lot of them don't understand that AI really is about learning and it's a continuous process, right? Is that something you also see that people kind of do not understand the value of AI, which is the continuous learning concept? Absolutely. So I like to compare applied AI to experimental or brittle AI.
Starting point is 00:08:25 And you're talking about a lot of people think this one and done mindset. Good, we got our data together. We built a model. That model's being consumed by an API. We're essentially done. We've demonstrated some AI capabilities, but that's not how the real world works. You constantly want to retrain models, but the most successful people we see in the market, they don't just have one AI initiative or one AI project or model.
Starting point is 00:08:48 They have hundreds and thousands. And so I've become a big believer that every department, every individual, doesn't matter what your job is, every single individual should be able to frame or scope an AI opportunity within their field. They don't have to know how to solve it, but they should know how to identify it to say that I think this is an opportunity for AI. So I see AI being distributed through the entire organization, every department, sales, marketing, finance, et cetera. And I think that's not the mindset that's existed in the industry. It's felt more like a luxury than a core requirement.
Starting point is 00:09:27 So what do you see the limiting factor for enterprises to do AI? Is it access to good data? Is it the mindset, meaning the business not understanding what they can do? Is it a lack of tools, maybe a combination of all of it? There's, yeah, there's so many problems that kind of fall under this umbrella. So not having a lack of having data ready data in a centralized place with access. People also make mistakes by hiring junior talent. So when I look at the cost of principal level AI talent, let's say I'm a company, I haven't taken the step. If I realize what it costs to hire a principal level seasoned AI manager, and I compare that to the PhD that just graduated, who is very ambitious, but they lack a lot of experience.
Starting point is 00:10:13 We see a lot of companies that make that mistake where they hire that individual. And in the common joke in the industry, which is sad because it's got more truth to it, this individual, as soon as their tuition has reached a level of transformation for them, they will leave the company. So I've had multiple executives text me and say, nine months in, we started doing something interesting with deep learning and I lost my talent. They went somewhere else and they're going somewhere else for not a little bit more money for a lot more money. And so that's very discouraging for smaller businesses that, um, that are trying to hire. Um, but failure is failure. Failure is kind of the moat in this industry because I'd be very cautious for someone coming in to run an AI strategy or deliver AI models who hasn't been burned, who hasn't made mistakes,
Starting point is 00:11:05 because there's so many mistakes to be made. A model in production can drift over time. You can have feature drift excursions. You can have wild predictions, introducing new sources of data. You can have bias that's amplified. There's a lot of problems that I think people are naive to. Value is one that still amuses me why people miss on this. So I've even heard really bad AI projects from executives. And so I like to joke and say, if you're ever starting a project with, wouldn't it be cool if, I don't need to hear the rest of it because I know it's not worth our time and our talent. Because some of the most valuable projects, they're not cool. They don't always have to be cool. And that's the danger with AI. It's the shiny new toy. So given that the number of AI projects, as you're saying, should be a large
Starting point is 00:11:58 number and therefore they should be smaller projects, slicing things thinner, as they say, in order to make more of an impact. I guess it seems inevitable that there would be failures, but when you have lots of small projects and initiatives instead of one large, giant AI omnibus for the whole company, it's much more likely, I think, that you would succeed. And I think this also matches the sort of, in a way, sort of sandbox approach that we're seeing with a lot of AI products as well, where it's sort of, you know, it's something you can build yourself or quickly get into production and then iterate and innovate. We've talked to companies that are doing many things like that as well in order to have these initiatives get off the ground quickly, have an impact quickly, and then literally be tossed away quickly. Is that what you're seeing as well, that there's sort of this rapid life cycle of create, use it, and then iterate it with a new version or a new idea pretty quickly.
Starting point is 00:13:06 And by tossed away or new idea, you mean the initial project is, it's abandoned where they move on to the next? Yeah, they move on to the next. Maybe they create a version two, or maybe they say, you know what? It would have been better if we had done it a different way and we're going to kind of toss it away and do a different thing. Yeah. So that's interesting because I think we do see people building AI models and getting
Starting point is 00:13:29 things out there, but maybe they don't have the right ecosystem or the value is not there for it to actually persist. Because I would argue if you really delivered on value, if you found something that was worth $10 million a year to your company, that will persist. Even if you get fired, that thing will persist because the value is there. And so I'm a big fan of, if you want to start an AI project, you should create a scatter plot of feasibility. So how hard is this to do?
Starting point is 00:13:55 Is this something, can we get a win this week, this quarter, or is this a moonshot? And I don't want to be negative on the moonshots. You should have those too, but you need to get that momentum. You need to get the quick wins. And the other axis would be how out valuable. This is to the company. And I think for some data scientists that can be a little confusing because I think I'm asking for an exact number. I'm not asking for an exact number. Is this $0 signs? And by zero, I'd mean like worth less than 50,000 a year. Like I'm going to round down pretty quick, or is it $1 sign two
Starting point is 00:14:23 or three and three would be transformational. Like the board's going to hear about this one. And I, I just want you finger in the wind to tell me is what, what tier is this? And then what you're trying to do is you're trying to find the quick wins feasible with maximum value. And so I think what you hit on Steven is it, they might be failing on the value side. They're looking for quick wins, things that they can show some activity around, but it's not something that gets flown at the flagpole as a major win. But communication is a big issue in this space
Starting point is 00:14:53 that a lot of data scientists really struggle to communicate with managers, which I used to be one of those data scientists. I've had executives kind of push back on me and get frustrated that we can't communicate. I'm much better now. I know to leave the jargon at home and, and try to get in their headspace. The more you understand an executive's headspace and their life cycle, just the things they're dealing with, the better you can meet them, the better you can actually help with their quarter, help with their KPIs, make them look good.
Starting point is 00:15:25 The highest bar to have is you should fight, fight to get your end customer promoted. Like that's the ultimate bar. Like if you're actually fighting for that, I'm sure you're aligned on value. I'm sure you're aligned on priorities. I'd say any company should really, any kind of service provider in the B2B space should have that same philosophy that, you know, your job is to make your customer look good, not necessarily to achieve the explicit goals laid out in the statement of work. Yeah, absolutely. And I'm reminded, Stephen, with a company that I had, we built out this deep learning platform. And I remember when it's time for pricing, the pricing kept going up. And I remember I'd feel uncomfortable. So we'd go through the 10 slide pitch deck on how we're going to change your whole organization. And then you'd see the price
Starting point is 00:16:09 and it'd be $250,000. And that would make me feel uncomfortable, which is really funny because now I understand that better. Why would I feel uncomfortable if I'm asking you for a quarter million dollars? Well, it's because I, I have some doubt that I'm going to deliver a healthy multiplier on that because I'm giving you a I'm gonna deliver a healthy multiplier on that because I'm giving you a platform and I'm helping you. I'm hoping that you'll be successful. And if you have the value approach where you have a laser focus on value,
Starting point is 00:16:34 I can charge you whatever I want. Obviously you need a multiplier. But if I say, hey, you can't write this check, it's 5 million a year. If the value's there, everyone's happy. We're going to figure out how to write that check because if the value's been proven, I mean, this is real concrete value. Someone's performed a proof of value. It's black and white. It's obvious. So that was a good learning point for me to understand. If you're selling value, then sales becomes easier. If
Starting point is 00:17:03 you're selling promises, the sky's the limit, which we see a lot in AI, imagine what you could do with our platform. That's harder to sell that for good reason. Yeah. I think upselling the value, it also has changed in the last couple of years. AI used to be living in the R&D world of an organization, kind of far away from the CIO, maybe closer to the CTO. And as you mentioned earlier, since we're going horizontal and multiple verticals, the CIOs now are really becoming more interested in how AI works. And from a value perspective, the same thing, right? $250,000 can be a lot or cannot be a lot depending on what you've been doing. But for a traditional CIO, $250,000 is a lot because they have a tendency just to buy commodity equipment and, you know, $250,000 of commodity
Starting point is 00:17:58 equipment, that's a lot of equipment. And it's not, you know, AI transformation is not replacing C with Python or the other way around, right? It's a lot more complex than that. And I think finally there is that when an organization goes in too quickly into an AI prototype or experimenting, they don't know the difference between a data engineer and a data scientist, right? And when they're trying to hire, they might hire a data scientist, but then they would ask that person to do data engineering work on top of that. And I
Starting point is 00:18:32 think that's one thing I saw. I've been talking to a large organization and they completely failed because of that reason, right? They were hiring one type of person, loading them up, and then in the end, that person left because they said, well, it's whatever I was hired for doesn't match with what I'm doing. Yeah. And I think you're hitting on that. It takes a team effort. Like a data scientist is never going to own DevOps and deployment and SLAs and scalable AI, because that's not their skill set. And that's why that you do need a team to succeed, but it, you know, it is scary for new executives to kind of venture into this unknown. They don't want to, they don't want to have this be a black mark on their record. And they, you know, their job's always on the line as well.
Starting point is 00:19:16 And if they have to, if a CIO has to come back a couple of years later to show that they completely failed on an AI initiative, There's consequences for that. I also chuckle too, thinking about the big, big data hype cycle. So you guys remember 2011, 2012, like you have to buy big data or you have to build out a Hadoop cluster because if you don't, your competitors will. So they're selling on fear, which I think is, it's so funny that, that, you know, there was so much buzz around building up these data lakes and they became data swamps and and very few people found the value that was promised well as people in in it for a while i think that we've seen situations like this in software development and other areas
Starting point is 00:19:57 of it uh infrastructure as well and and and it seems like a lot of the time there i don't know we're just we're just, it's the same old thing over and over again. As somebody who's been through this for a while, Ben, do you see any differences, any inherent differences to AI and big data projects that maybe we didn't see previously in software development projects and software development methodologies and so on? Oh, definitely. I think that's a great question because we see a lot of similar patterns. Like, hmm, this kind of feels like the big data hype cycle or other hype cycles we've seen. But the thing that is so unique about AI for the companies that really, really adopt it,
Starting point is 00:20:37 and by really adopt it, I mean it becomes part of their core DNA for the company, we see transformational growth. So companies I'm thinking of that definitely happened at HireVue when I was there. It also happened, another company I'm familiar with, Branded Entertainment Networks. They saw explosive growth after they became, after they made AI a core part of their business. And that's because it allowed them to compete in new markets. So that's not something people really talk about a lot, how AI can actually transform your business completely. You can spill over into a new vertical or a new market that you were helpless to compete in before, but now that you've brought AI into your systems and into your business. So yeah, I think that's how it's different. It can actually completely reinvent
Starting point is 00:21:21 your company. And this isn't, sometimes people get excited about that. They think, oh, if I just put an AI stamp on my company, it'll give me a better multiplier. Like if you're like a VC backed company. So I'm not talking about that because that's not really, that's not true transformation. I'm talking about like, if you live and breathe AI all the way up through the executive team, we see examples of transformation that are remarkable. So when I look at enterprises and the amount of effort it takes to build up the knowledge and hire people, do you think that it would accelerate AI in enterprises if instead of hiring their own teams, they would rely more on organizations that do this for a living, meaning delivering AI transformation
Starting point is 00:22:07 over and over and over, as opposed to build up that knowledge themselves. Yeah. And we see this firsthand. So I believe eight or nine of the top 10 banks work with us. And then we're working with some of the biggest companies in the world. And the interesting thing there is, do you think they can hire top talent? Can they hire a million dollar a year plus data scientists? Of course they can. If they wanted to, they could do that. But I think what you're hitting on is something I definitely agree with.
Starting point is 00:22:39 The act of transforming multiple companies through multiple industries, the teams that are tasked with doing that, they bring a level of experience that you can't buy. Like it's technically, you can always buy anything, right. But like they bring a level of experience that I think you would be reckless to buy. So why, you know, why try to be in the news for a new record comp when you could just work with a team through a platform or a partner who's done this before. So, um, that that's generally my recommendation because these teams will come in and there's other, I don't want this to be too centric on the company I work for now. Like there are other teams in the industry, they'll come in
Starting point is 00:23:14 and they will fight to transform your company in a quarter. And so for executives, they like that. Good. You should transform my company. You should prove value in a quarter. In the past, it's been kind of this hope and a prayer, 12, 24 months, good luck. We'll circle back. And that's where things get really dicey. Well, yeah, that's the, I mean, my background is in consulting. So that's exactly it, right? You come in as a consultant, you make some recommendations and honestly, the good consultants leave saying, I really, really hope that they're able to succeed with this advice. But the smart consultants kind of know that sometimes that doesn't happen. And especially, it seems like sometimes the bigger, the bigger the task, the less likely it is to succeed.
Starting point is 00:24:01 And that's kind of what I was trying to get at earlier when we were talking about the size of AI projects. Sometimes I think people think that it has to be a moonshot and they don't realize that it's okay to just shoot for something less. Yeah. And Stephen, I think some of the talent that comes into help, it's almost like they're biased to make it worse. And what I mean by that is if you're hiring PhDs in STEM coming out of academia, they don't want to deploy a Bayesian model. But from a business perspective, wow, we got 80% of our value today. They want to work on a project that they can go, not that they're going to write a white paper about, but they want to work on a project that they can go, that they can go, you know, not that they're going to write a white paper about, but they want to work on a project that they can go tell their data science peers that this was ambitious. This was impressive.
Starting point is 00:24:50 And I, I think there's a disconnect between that camp and people that know the business. And, and usually they try to bring in a manager that knows both where they know how to herd the cats, but they also know that every quarter matters and attribution is real and everyone has a multiplier. And if I don't know what your multiplier is, then you're on the short list to be out of the company. Yeah, I've heard that. Not all professionals and consultants are the same, right? lot of enterprises kind of holding off on AI because they tried to hire a group of consultants who, like you said, they were doing something specifically, but not really matching what that enterprise wanted to do.
Starting point is 00:25:34 And then it was a failure. And then they backed off for a year just because they didn't realize how complicated it was. And at the same time, I mean, it's kind of a joke, but when the public clouds came around, people were saying, hey, I want to buy a cloud. I see this effect with AI as well, right? What do I do to buy an AI transformation? What is that kind of cost? What do I do to get that?
Starting point is 00:25:58 And I prefer to have it by next week, Friday. I like the next week by Friday. It's funny because in the sales process, I interact with a bunch of different characters and types and personas my favorite are the executives so when i do get a chance to talk to the c-suite i like talking to them because they don't care about how it works and how it's built they care about what the outcome is how much it costs what the timeline is i think too often we will run into antagonists where they want to like go to nerd hell to debate like some tensorflow architecture or something that really doesn't matter but they to them it matters um but to
Starting point is 00:26:38 the business they could care less right so when we talk about creativity, who brings the creativity to the table? Is that the enterprise asking for the transformation or is that the team that question, creativity is fascinating because how do we become more creative? So if you as an individual decide, I want to be more creative next week, well, how do you become more creative? It's going to be an exercise of breadth of experience. So if you only do one thing, that's a problem. And so going back to your question, teams that transform multiple companies, they are going to be more creative because of their past experience. They are, they have more experience to draw on. And sometimes maybe this is a general statement in tech. When you look at the biggest disruptors, a lot of times for different industries,
Starting point is 00:27:35 they come outside the industry. Like look at Elon Musk in space, like, you know, some of the early NASA astronauts were very apprehensive. They did not think that SpaceX had any chance of transformation, but he's bringing concepts outside of that industry, doing 3D printing on rocket engines and different things that that industry would not have done. And so I actually celebrate, I'm kind of torn in half here because I have so much respect for this me, who's the human that's worked this process for 10, 20 years. I want them in the room for this entire project, but I also have a lot of hope for people that aren't from your industry. If I can bring people in who aren't
Starting point is 00:28:17 from your industry, they'll see analogies, they'll have different creativity and perspectives. And I'm, I would, I'm more likely to bet on them to transform your industry than people that are stuck in it. One of the things that came up in a previous discussion related to this was this idea that by not having the inherent biases, and I don't mean like the capital B biases, but I just, just by looking at data in a more objective way, I guess that's anthropomorphizing it, but essentially by looking at data in a more objective way, artificial intelligence systems can surface things that we humans might have rejected. And I think this goes as well to the
Starting point is 00:28:55 same topic that you're talking about in terms of having people from outside, having different voices, having new energy brought in. AI can bring that as well because it can make unexpected connections and even illogical connections that we personally would have rejected as out of hand. Yeah, and I completely agree with that. And I think one of the key things that you're reminding me of is, sure, when you say AI's objective,
Starting point is 00:29:22 which it is, that kind of hints of like a lack of creativity. But what people don't realize is it can surface concepts and topics. So traditionally when humans do innovation, it's light bulb goes off in my head. I have a hypothesis and I'm going to go test it. With AI, AI can actually create the light bulb. AI with clustering and different things and doing some of this analysis on data, it can suggest an idea to you. And this isn't science fiction. This is stuff that happens to today where you as the human are now inspired by that light bulb. And you're going to go do some project that you would have never done. And so it definitely feeds this creative engine,
Starting point is 00:29:59 but also the there's the classic things we've already talked about on this episode, like time savings is key. Anyone that's listening. If I could allow you to triple the time you spent on creative work, that would have a big impact for you and your business. Like just having more creative cycles to think on processes and what data sets. I think innovation too is scary. Like there's always some risk associated with innovation and that's a
Starting point is 00:30:24 constant challenge for companies. How much do you spend on innovation? How much do you risk on innovation? And how many projects are you willing to allow to fail doing innovation? Amazon's famous for their 90%, like their VC model, which most companies think that's insane. They could never support that. But if you're targeting zero points of failure failure then i would say you're not actually trying right so is it fair to say then that disruptive thinking accelerates innovation just as a final thought from my perspective absolutely because you need you need to be willing to push through a wall break through a wall And if you're not willing to do that,
Starting point is 00:31:05 then you're going to be stuck in linear thinking. And this is something I'm really passionate about today is you have to have courage to innovate. You have to be disruptive. You have to have vision and spirit to believe in the impossible. And if you don't, then you're going to get more of the same. And that's hard for these industries. Like you look at insurance and banking, like they have traditionally been rewarded for conservative, conservative thinking.
Starting point is 00:31:30 And so just come in and say, Hey, you guys need nonlinear thinking, and you need to be breaking down these walls and working with regulators to introduce new technology components that have never been regulated. That's, that's, that's hard. And I think that's why they tend to rely on partners for that because the potential mistakes, I love AI applications where the stakes are high because I feel like that's where you get the biggest impact, but that's also where you get the biggest consequence when things don't work well. I think that's a great summary. I mean, of the whole, the whole thought. So thank you so much, Ben. Now has the time on our podcast though, when we shift gears a little bit. So I hope you're ready for this. So now it's time for three questions. This is a tradition we started
Starting point is 00:32:19 in season two. Our guest has not been prepared for these questions ahead of time. So we're going to just get his off the cuff answers on, uh, on what he thinks. So, um, we're also changing things up a little bit and then we're going to have Frederick ask a question, me, and then we're also going to have one of our previous guests ask a question. So Frederick, you can go first. Sure. So when we will, when we will see a full self-driving car that can drive anywhere, anytime. Anywhere, anytime. And so I'm going to, I'm going to take your,
Starting point is 00:32:50 your question definitions pretty literally. So like out in Idaho, in a farm, no paved road, broken stop sign with graffiti, 15 or 20 years. I think you've got this long tail problem, highways, cities. I'm more optimistic. Elon Musk always has tighter timescales. I don't fault him for that because I'm a fan of what he does, but yeah, I'd say 15, 15 years, which normally I'm a bigger optimist. I'm actually surprised that I'm even throwing out a 15-year number rather than something like five years or seven years, but we'll see. And we'll see what happens. He certainly is an optimist on that.
Starting point is 00:33:33 Next question for me. In your opinion, is AI a unique field or is it just another aspect of data science? Oh, that's a tricky one. So is AI, you might have to kind of steer me on this one, a little one. So is AI a unique field or is it, so you're trying to say, is it, is it a separate field compared to just your traditional data science? Yeah. Is it just a new flavor of data science? Just more of the same? I think it's more ambitious. So when I think of AI, the beautiful thing about AI, so data science, I would actually kind of throw that under the bus
Starting point is 00:34:11 and just say, it's like statistical supervised learning, building statistical models, nothing that inspiring. But when I think of AI, I think of the intersection between neuroscience and philosophy and psychology and all these things intersecting on what makes us human. How do babies learn? Babies learn a new language. I've got three kids. I didn't go to school to train me how to teach them English. They just learned through experience. And so I think there's a journey there in AI that is unbelievably exciting because it's infinite. And so that is very different to me than I think of traditional data science.
Starting point is 00:34:47 I think of this infinite. So think of like applications in healthcare, AI applications in healthcare. We could all be old and gray and we would never run out of ideas for what's next, what's next? How can we save human life? What's next, what's next?
Starting point is 00:34:59 It's infinite. And so AI is a field, I see it as being infinite innovation, which is, can you think of other I see it as being infinite innovation, uh, which is, uh, can you think of other fields where you could say infinite innovation, like chemical engineering feels like this is what we've done for the last 50 years. Sure. You get into like biopharmaceuticals or something, you can push the envelope, but AI feels truly
Starting point is 00:35:17 infinite. Well, that's actually an inspiring answer. I appreciate that. Um, next, um, as promised, we've got a question from a previous guest. Michael Malley from Seneca Global comes in now. So take it away, Mike. This is Michael Malley, SVP of Marketing and Sales for Seneca Global. And my question is, can you give an example where an AI algorithm went terribly wrong and gave a result that clearly wasn't correct. I'd love to hear that.
Starting point is 00:35:48 Oh, so many. Tay.ai, that was that NLP bot that learned through Twitter experience and started saying terrible, terrible genocide, racist tweets, like stuff so terrible. I don't want to quote them on this podcast, but you can go Google it. Beauty.ai had a racist beauty model. The top 50 most attractive people were all white. So I had racial bias. So like there's so many examples where AI can make
Starting point is 00:36:18 mistakes that are obviously wrong and terrible and where it's the amplifier. And so that goes back to the augmentation. If you're doing anything like a life and death decision, think of like AI and surgery, which I think is a great application, but not a great application. If it's on its own, you want a surgeon to be reviewing, working with AI to make the final decision. Um, cause it's, it needs us. It, it, yeah. So kind of a long rambling answer, but yes, lots of examples of that. I think of the tay.ai Twitter bot as being the classic example
Starting point is 00:36:53 of what people see as being evil, but it learned it from human biases or human experience. Well, thanks so much for those answers. And it's kind of fun that you managed to bring home the topic with the final question as well. So appreciate it.
Starting point is 00:37:06 We're also looking forward to what your question might be for a future guest. And if any of our listeners want to join in on the fun, just send an email to host at utilizing-ai.com and we'll record your question for a future guest. So thank you so much for joining us today, Ben. Where can people connect with you and follow your thoughts on these topics? So I'm also a host on a podcast. It's called The More Intelligent Tomorrow. So please go take a listen. We've had guests like General Petraeus, former Congressman Will Hurd. We had Hannah Fry on. We just had Bob Work. He wrote the 765-page U.S. Strategy on AI on, he'll be coming out soon. So yeah, find me there. And then Ben Taylor data on Twitter and LinkedIn, very active on, on LinkedIn. Great. Thanks. And I do recommend the podcast as well. As for me, I'm busy planning the next AI field day event coming up here in
Starting point is 00:38:00 April. And of course, recording,ilizing AI for Weekly Consumption. How about you, Fred? So I'm helping enterprises with data management and definitely building out large scale GPU clusters, which is very popular now with the demand for billions of parameters. And I can be found on LinkedIn and Twitter as Fredrik V. Heron.
Starting point is 00:38:26 Well, thank you very much for joining us for Utilizing AI. If you enjoyed this discussion, please do subscribe. We're available in pretty much every podcast application now. And also maybe give us a rating and review and share this episode with your friends. This podcast is brought to you by gestaltit.com, your home for IT coverage across the enterprise. For show notes and more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI. Thanks for listening and we'll see you next time.
