CyberWire Daily - Retirement plan breach shakes financial giant.

Episode Date: May 1, 2024

A breach at J.P. Morgan Chase exposes data of over 451,000 individuals. President Biden signs a National Security Memorandum to strengthen and secure U.S. critical infrastructure. Verizon's DBIR is out. Cornell researchers unveil a worm called Morris II. A prominent newspaper group sues OpenAI. Marriott admits to using inadequate encryption. A Finnish man gets six years in prison for hacking a psychotherapy center. Qantas customers had unauthorized access to strangers' travel data. The Feds look to shift hiring requirements toward skills. In our Industry Voices segment, Steve Riley, Vice President and Field CTO at Netskope, discusses generative AI and governance. Major automakers take a wrong turn on privacy.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Today on Industry Voices, Steve Riley, Vice President and Field CTO at Netskope, discusses generative AI and governance. For more of Steve's insights into gen AI, check out his article in Forbes.

Selected Reading
Breach at J.P. Morgan Exposes Data of 451,000 Plan Participants (PLANADVISER)
White House releases National Security Memorandum on critical infrastructure security and resilience (Industrial Cyber)
DBIR Report 2024 - Summary of Findings (Verizon)
Experimental Morris II worm can exploit popular AI services to steal data and spread malware (Computing)
Major U.S. newspapers sue OpenAI, Microsoft for copyright infringement (Axios)
Marriott admits it falsely claimed for five years it was using encryption during 2018 breach (CSO Online)
Finnish hacker imprisoned for accessing thousands of psychotherapy records and demanding ransoms (AP News)
Qantas Airways Says App Showed Customers Each Other's Data (GovInfo Security)
Agencies to turn toward 'skill-based hiring' for cyber and tech jobs, ONCD says (CyberScoop)
Carmakers lying about requiring warrants before sharing location data, Senate probe finds (The Record)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to the CyberWire Network, powered by N2K. Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me. I have to say, Delete.me is a game changer. Within days of signing up, they started removing my personal information from hundreds of data brokers. I finally have peace of mind knowing my data privacy is protected. Delete.me's team does all the work for you with detailed reports so you know exactly what's been done. Take control of your data and keep your private life private. Get 20% off by going to JoinDeleteMe.com slash N2K and using promo code N2K at checkout. The only way to get 20% off is to go to JoinDeleteMe.com slash N2K and enter code N2K at checkout. That's JoinDeleteMe.com slash N2K, code N2K. A breach at JPMorgan Chase exposes data of over 451,000 individuals. President Biden signs a national security memorandum to strengthen and secure U.S. critical infrastructure.
Starting point is 00:01:43 Verizon's DBIR is out. Cornell researchers unveil a worm called Morris 2. A prominent newspaper group sues OpenAI. Marriott admits to using inadequate encryption. A Finnish man gets six years in prison for hacking a psychotherapy center. Qantas customers had unauthorized access to strangers' travel data. The feds look to shift hiring requirements towards skills. In our Industry Voices segment,
Starting point is 00:02:10 Steve Riley, vice president and field CTO at Netscope, discusses generative AI and governance. And major automakers take a wrong turn on privacy. It's Wednesday, May 1st, 2024. I'm Dave Bittner, and this is your CyberWire Intel Briefing. JPMorgan Chase has reported a data breach impacting over 451,000 retirement plan participants, according to a filing with the Maine Attorney General. The breach exposed names, addresses, social security numbers, payment details, and bank account information linked to direct deposits. This incident, discovered on February 26th, was due to a software flaw and not a cyber attack.
Starting point is 00:03:15 There's no evidence of data misuse. The breach occurred when three users linked to J.P. Morgan customers accessed unauthorized information through system reports between August 2021 and February 2024. The bank responded by updating the software to prevent future breaches and is offering two years of identity theft protection through Experian to affected individuals. President Joe Biden has signed a National Security Memorandum, NSM-22, aimed at securing and enhancing the resilience of America's critical infrastructure, replacing a previous policy from President Barack Obama's era. This comprehensive strategy focuses on protecting infrastructure against current and future threats
Starting point is 00:04:02 by refining federal roles in security, resilience, and risk management and implementing a coordinated national approach. NSM-22 establishes minimum security requirements, accountability mechanisms, and leverages federal agreements to enforce these standards. It also improves intelligence collection and sharing related to infrastructure threats involving federal, state, local, tribal, territorial, private sector, and international partners to facilitate risk mitigation. Additionally, the memorandum promotes investments in technologies that mitigate risks from evolving threats and strengthens international collaborations for global infrastructure security. The Department of Homeland Security, led by the Cybersecurity and Infrastructure Security Agency, will spearhead this whole-of-government effort, with CISA acting as the national coordinator for security and resilience. The initiative reaffirms the designation of 16 critical infrastructure sectors, each managed by a
Starting point is 00:05:06 specific sector risk management agency responsible for risk management and coordination. The policy directs federal departments and agencies to implement these strategies while respecting privacy, civil rights, and civil liberties. It also sets timelines for risk management plans and sector-specific assessments, emphasizing the need for a robust federal framework to combat the complex and frequent threats facing critical infrastructure, ensuring national vigilance, security, and resilience. Verizon's 2024 Data Breach Investigations Report, the DBIR, is out, highlighting key trends in cybercrime. It reveals a significant 180% increase in attacks exploiting vulnerabilities compared to last year. This surge was predominantly due to ransomware and extortion-related threats, with web applications being the common entry point. Ransomware and other extortion techniques contributed to one-third of all breaches,
Starting point is 00:06:08 with pure extortion attacks making up 9% of incidents. The report also notes a growing trend of breaches involving third-party vulnerabilities and errors, with errors now accounting for 28% of breaches. Additionally, phishing remains a critical concern, with users typically succumbing to phishing emails in less than a minute. Financial losses from ransomware and extortion attacks vary greatly, with a median loss of $46,000. The report aims to guide organizations in enhancing their security measures
Starting point is 00:06:43 against evolving cyber threats. Researchers at Cornell Tech have developed MORRIS-2, a generative AI worm that poses significant risks by spreading through interconnected AI systems. Named after the infamous 1988 Morris worm, MORRIS-2 can hijack generative AI email assistants to exfiltrate data and distribute spam. It exploits large language models by using an adversarial self-replicating prompt technique, which compels the AI to generate a prompt that further spreads the malicious code. The researchers demonstrated Morris 2's capabilities
Starting point is 00:07:25 by sending emails containing these prompts, which then poisoned AI systems like ChatGPT and Gemini by leveraging retrieval augmented generation. This could potentially jailbreak AI services, allowing unauthorized access to data. OpenAI recognized the vulnerabilities highlighted and is working to fortify its systems against these sorts of attacks.
Starting point is 00:07:50 Eight prominent U.S. newspapers owned by Alden Global Capital, including the New York Daily News and Chicago Tribune, are suing OpenAI and Microsoft for copyright infringement, claiming the tech giants used their articles to train AI models without permission. This lawsuit, filed in the Southern District of New York, builds on a similar case by the New York Times and emphasizes the growing legal challenges AI companies face from publishers. Unlike others who have negotiated paid deals,
Starting point is 00:08:23 these newspapers allege that their content was used unlawfully to enhance AI products like ChatGPT and Copilot, also accusing the firms of reputational damage through AI-generated inaccuracies. The outcome could reshape compensation structures for news content in the AI era. for news content in the AI era. Marriott admitted in a course case about a 2018 data breach that it had used the outdated Secure Hash Algorithm 1, that's SHA-1, rather than the more secure AES-128 encryption it previously claimed. This disclosure came during a hearing in the U.S. District Court for the District of Maryland. Marriott's misrepresentation of its security measures could have serious legal and financial implications, including potential lawsuits from its insurance carrier and impacts on its stock prices.
Starting point is 00:09:18 The revelation also complicates the breach's fallout, as SHA-1's vulnerabilities to quick hacking could mean that sensitive data was not as secure as stakeholders were led to believe. Marriott has faced scrutiny for not disclosing this correction more transparently, only adding a brief update to an old webpage rather than issuing a new statement. A Finnish court has sentenced a 26-year-old man, Aleksanteri Kivimaki, to over six years in prison for a major cybercrime involving the hacking of around 33,000 patient records from the Vastamo Psychotherapy Center. Kivimaki's crimes included an aggravated data breach, nearly 21,000 aggravated blackmail attempts, and over 9,200 instances of aggravated dissemination of private information. He was arrested in France in 2023 and deported to Finland for trial.
Starting point is 00:10:18 The court described his actions as ruthless and very damaging, particularly given the vulnerable psychological state of the victims. Some victims even succumbed to suicide due to the breach. After the center refused his ransom demands, Kivamaki leaked the data on the dark web and directly extorted patients. Australian airline Qantas experienced a data mishap where customers logging into the airline's app inadvertently accessed other users' information, including names, upcoming flight details, and frequent flyer points. The incident, reported widely on social media, occurred over two periods on Wednesday
Starting point is 00:10:59 and allowed some customers to view and interact with bookings not their own, even leading to accidental cancellations. The airline attributed the issue to a technology issue related to recent system updates, clarifying that it was not a cybersecurity breach. Despite the exposure, Qantas assured that no financial data was compromised and that no unauthorized boarding occurred. The airline has since advised customers to re-log in to their accounts and is closely monitoring the situation.
Starting point is 00:11:32 Federal agencies are set to adopt skill-based hiring for IT roles by next summer, focusing on actual proficiencies rather than traditional qualifications like degrees or years of experience. This shift, announced by National Cyber Director Harry Coker, aims to fill nearly 100,000 IT positions and address a wider cyber job gap currently leaving about 500,000 roles vacant across the U.S. This new hiring approach, which also applies to federal contractors, is part of a broader initiative to diversify the cybersecurity workforce, often underrepresented by women and people of color, and to bolster the nation's defenses against escalating cyber threats. The move aligns with the Biden administration's strategy to leverage federal influence to drive
Starting point is 00:12:24 private sector change, particularly in critical areas like cybersecurity. Coming up after the break, Steve Riley, Vice President and Field CTO at Netscope, discusses generative AI and governance. Couple trying to beat the winter blues. We could try hot yoga. Too sweaty. We could go skating. Too icy. We could book a vacation. Like somewhere hot.
Starting point is 00:13:10 Yeah, with pools. And a spa. And endless snacks. Yes! Yes! Yes! With savings of up to 40% on Transat South packages, it's easy to say, so long to winter.
Starting point is 00:13:20 Visit Transat.com or contact your Marlin travel professional for details. Conditions apply. Air Transat.com or contact your Marlin travel professional for details. Conditions apply. Air Transat. Travel moves us. Do you know the status of your compliance controls right now? Like, right now? We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks. But get this, more than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta. Here's the gist. Vanta brings automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001. They also centralize key workflows like policies, Thank you. off Vanta when you go to vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off.
Starting point is 00:14:38 And now a message from Black Cloak. Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home? Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives. Because when executives
Starting point is 00:15:00 are compromised at home, your company is at risk. In fact, over one-third of new members discover they've already been breached. Protect your executives are compromised at home, your company is at risk. In fact, over one-third of new members discover they've already been breached. Protect your executives and their families 24-7, 365, with Black Cloak. Learn more at blackcloak.io. Steve Riley is Vice President and Field CTO at Netscope. And in this sponsored Industry Voices segment, we discuss generative AI and governance.
Starting point is 00:15:35 Sometimes it's tempting to think, oh, governance and security are the same thing, and they really aren't. Security is an aspect of governance. Governance would include also things like managing hallucinations. These tools are sometimes quite spectacularly wrong and they don't know that. What can we do to train our people so that they are aware of this, can detect it, and not be duped by something that sounds really good. Another aspect I would say includes deepfakes.
Starting point is 00:16:12 Just in the news last Friday was something about LastPass, how an employee was targeted with a deepfake, but actually managed to avoid falling into the trap because LastPass apparently has some pretty good training so that their people can detect and reject that sort of manipulation. I'd include data privacy in this list, not only protecting a company's data, also what about other companies' personal data that might leak into a public generative AI model? personal data that might leak into a public generative AI model? Is there something we could do as a company using GenII to ensure that the responses we receive to various prompts might be purged of clearly sensitive information from somebody
Starting point is 00:17:00 else who might have accidentally uploaded it elsewhere? That notion might also apply to copyright as well. Again, these models maybe don't really know the difference between public domain and copyrighted works. How can we, as companies who use Gen AI, ensure that the information we receive from the public services isn't filled with copyrighted information that might set us up for some kind of a lawsuit or something for copyright infringement.
Starting point is 00:17:31 And then finally, I'd put ethical policies and practices in the governance too, including things like responsible use and mitigating bias. You know, I've heard people say that when it comes to things like hallucinations, before you ask a generative AI to give you an answer about something you don't know about, ask it a question about something that you have a lot of expertise on, because that answer could help shape the way that you perceive and the degree to which you take seriously the answers about the things that you don't know a whole lot about.
Starting point is 00:18:09 That's an interesting idea. Yeah, I like that. But what if the questions are completely unrelated? You ask the GenIO tool something you are an expert in and you receive a response that you can validate as useful because of your expertise.
Starting point is 00:18:24 And then the next question is something that you aren't an expert in at all. How do you balance the usefulness of these tools? I mean, I guess the direction I'm coming from is that I don't think it's realistic to have your governance say, simply don't use these tools because they can be so powerful. There is so much potential there. And I think people are so captivated by them. How do you strike that balance between allowing people to make good use of them, but also putting useful guardrails around them? Well, you know, I've actually thought about this in the past because I teach at a university here in Seattle,
Starting point is 00:19:12 and I'm confident that the students are at least finding ways to use Gen-AI to help them. I don't believe yet that a complete Gen-AI assignment has been submitted to me, but that's not generally the way I structure courses I teach. And, you know, it reminds me, and I'm sure you might have heard this story about the comparison of Gen AI to calculators in school. Have you? Oh, yes. Yeah. But please, for our listeners, go ahead. Yeah. It used to be when calculators first emerged that math teachers hated them. And so they said,
Starting point is 00:19:43 no, you're not allowed to use them. And eventually the teachers adjusted so that the calculator becomes a tool that helps the students still use their own minds to arrive at an answer to a problem. So it required changing teaching methods. I know that over time, I'm probably going to need to change some of my teaching methods at the university, especially if I have courses or assignments that are like writing paper. I'm going to need to ensure that my students know how to use the tool as a helper, not to write the whole paper on their own. So I think that we need to recognize skepticism from some folks aside that these tools do have a utility. I guess that kind of comes back to some of the things I mentioned earlier about privacy, copyright, ethical practices.
Starting point is 00:20:27 Tools are amoral. People can do good things and people can do bad things with them. So I want to ensure that as AI becomes more and more pervasive, the companies develop that sense of ethics around how to use the tool. how to use the tool. How do you suppose a company can measure success here to know that the governance that they've applied to something like generative AI is actually being effective? Well, I think maybe it might be useful to first recognize that not everybody even feels they have to do that yet.
Starting point is 00:21:00 Last year, in 2023, ISACA surveyed a number of companies and some rather stark results came back. I'll just enumerate these for you. 41% of respondents provided no AI education or training for employees. Even though that number was low, 54% believe AI ethics receive insufficient attention. So that's good. At least some people think that they need to do better, but are they actually doing better? Less than one third of companies responded, consider AI risk an immediate priority.
Starting point is 00:21:38 And only 10% of respondents have a formal policy for generative AI. percent of respondents have a formal policy for generative AI. Maybe we're still in the phase where it's important to, dare I say it, evangelize the need for thinking more about AI risk and governance. And then, of course, like you said, we also need to have ways to measure success. we also need to have ways to measure success. I think that's going to be a little difficult right now because there's no widely recognized guidelines. We don't really have in the United States something from a regulatory agency even. So companies are inventing their own strategies,
Starting point is 00:22:19 which feels kind of risky in these early days. However, I did find some instances of examples that might help folks. The World Economic Forum as an AI governance alliance. It seems like a decent start. It's geared more toward government and society, but some of the recommendations appear suitable to companies using regenerative AI. And the big consulting firms, you know, I'm thinking like PwC, E&Y, and Deloitte, they're providing governance frameworks that seem like decent starting points for companies using Gen AI.
Starting point is 00:22:52 So I would encourage companies, instead of just trying to invent from whole scratch, use these examples because they might include things you might not have thought of. As for what a Gen AI governance strategy might look like, here are some ideas. First, determine your company's goals for incorporating Gen AI into the business workflows. Next, you've got to discover all the existing Gen AI.
Starting point is 00:23:17 Who's using it? What kind of data is going into it? What is the purpose for that? Then next, I would recommend setting a criteria for which Gen AI apps are approved or merely tolerated or actually unapproved. And IT shouldn't make this decision by itself. It should involve all of the stakeholders, businesses, business units, maybe even customers and suppliers. Be sure to communicate
Starting point is 00:23:43 this to employees, and then you want to enforce it with security tools, like I mentioned in the beginning of our conversation. Don't forget, though, to adjust as necessary, because user behaviors will change. They'll show more responsibility over time. New AI applications will appear, so follow the criteria you set before to put that in an approved or tolerated or unapproved bucket. And then finally, I would recognize that companies who want to train models of sensitive information should pay for privacy. Absolutely. Do not use the public models for that. I want to switch gears with you and talk about this notion of platforming. There's been a lot of debate in security right now about platforming.
Starting point is 00:24:26 And I'd love to get what your take is, your perspective from you and your colleagues at Netscope, kind of like we did with generative AI and governance. Can we start off by just sort of defining what we're talking about here? Well, most vendors in any technology space really start out with one product, like us. We started out as a CASB vendor. But as vendors acquire more and more customers, and as customers acquire or ask for more and more functionality, And as the market sort of demands that growth overall, which means not only more money and more customers, but also more capabilities, many vendors seek to add extra capabilities to what they already have. You can do that by building or buying. And that sometimes means building or buying something in another market, not your original market. Our first move to do more than just a CASB
Starting point is 00:25:27 was to add a secure web gateway, a SWIG. Now, some vendors are happy with just acquiring different things, putting them on a price sheet, and selling them as is. They might call that a platform, but is it really a platform? Is it one console? Is it one policy framework? Those sorts of things. If they're not, I'd call those like portfolios, right?
Starting point is 00:25:58 Almost a dashboard, right? Well, but maybe not even one dashboard, right? If you've got dashboards for the different products you required, or two or three different kinds of agents that might have to be installed on endpoints, two or three reporting mechanisms. What separates platforms from portfolios is a greater degree of convergence. So if, and I'm going to use Nescope as an example, we can take data that goes to and comes from SAS applications,
Starting point is 00:26:32 which was the CASB, to and from the general web, which is the secure web gateway or SWIG, to and from private applications published through our ZTNA, Zero Trust Network Access capability. But these all work together. We have one policy framework, one agent, one form of content inspection for all lanes of traffic. We have one reporting mechanism. So a platform is something that is intentionally designed or has been redesigned to ensure that differences in data types or differences in data sources don't mean you change the way you administer or operate. Where do you suppose we're headed with this? As this platform trend continues, and I think it's fair to say
Starting point is 00:27:26 that this is the direction things are headed, what does the future look like here? I see the future maybe for a little while becoming kind of a battle. The vendors who want to have a single, grand, unified platform that does everything, which seems a little unrealistic to me, that does everything, which seems a little unrealistic to me. Or that just feels like an aspiration more than something that is reality. How can one vendor do everything equally well? I do wonder if that might actually be a bit of a weak stance as opposed to vendors choosing one of the four platforms, being really well at that,
Starting point is 00:28:08 and figuring out ways to interoperate with others. It's going to be interesting over the next few years, particularly as we know that more and more companies are seeking ways to consolidate. Vendor consolidation is huge. We see evidence for that. There is no longer the appetite to manage 67 different security tools, which according to various surveys is the average that companies are dealing with right now. They want to reduce that. But how much reduction is the thing we're going to observe in the next couple of years at least? Should folks be cautious of having all their eggs in one basket? I mean, is there a risk of a monoculture here where maybe you don't want to
Starting point is 00:28:46 have everything under one roof? You know, that argument has been around since the early days of firewalls. I can remember early in my InfoSec career being given the guidance that if you're going to build a three-tier network, you know, the int corp net, a DMZ, and the internet access, you want to have two different firewalls. Your internet-facing firewall should be a different brand than your DMZ-facing firewall. And the thinking was that vulnerabilities in one might not be present in another.
Starting point is 00:29:19 But in reality, it turns out that that was a less safe configuration than using the same firewalls in both places. Because if you use the same firewall in both places, you can become an expert at one as opposed to trying to be an expert at two. And the likelihood that you will make configuration mistakes is reduced. I wish I could find this report. It's 20 years old now, and I can't find it anymore. Carnegie Mellon University did some research and determined that like 90% of successful attacks launched from the internet were the result of a configuration mistake, not a code mistake. So I want to do everything I can to minimize configuration mistakes. Therefore, I'm going to use the same brand of firewall in both
Starting point is 00:30:00 places. Now, how much do you extend this notion, right? Does it mean you go to one vendor for all of your security products, for all of your IAM, EPP, SSE, and SEM and SOAR? I feel like that's taking the one vendor approach too far Because I do not personally believe that one vendor can do all four of those equally well. That's Steve Riley, Vice President and Field CTO at Netscope. Thank you. trusted by businesses worldwide. ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default deny approach can keep your company safe and compliant. And finally, a Senate probe spearheaded by Senators Ron Wyden and Ed Markley has exposed hypocrisy among major car manufacturers.
Starting point is 00:31:46 They've been sharing drivers' location data with law enforcement without requiring warrants, blatantly contradicting their public pledges of privacy protection. Despite previously committing to needing a warrant or court order before disclosing such sensitive information, only five out of the 14 automakers surveyed actually adhere to this practice, and just one, Tesla, informs customers of law enforcement requests. This deception has led the senators to demand a Federal Trade Commission investigation into these misleading practices and the automakers' data retention policies. The Alliance for Automotive Innovation's statements on commitment to privacy starkly contrast the reality of these findings, further fueling frustrations over the lack of transparency and respect for consumer privacy.
Starting point is 00:32:37 This breach of trust highlights the need for stringent regulatory oversight to ensure that automakers live up to their promises and protect consumer data as they claim to. Maybe what we really need is a little dashboard light that comes on every time your car is ratting you out to law enforcement. And that's The Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Starting point is 00:33:11 We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like this show, please share a rating and review in your podcast app. If you like this show, please share a rating and review in your podcast app. Please also fill out the survey in the show notes or send an email to cyberwire at n2k.com. We're privileged that N2K Cyber Wire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment, your people.
Starting point is 00:33:53 We make you smarter about your teams while making your teams smarter. Learn how at n2k.com. This episode was produced by Liz Stokes. Our mixer is Trey Hester with original music and sound design by Elliot Peltzman. Our executive producer is Jennifer Iben. Our executive editor is Brandon Karp. Simone Petrella
Starting point is 00:34:09 is our president. Peter Kilpie is our publisher. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow. Your business needs AI solutions that are not only ambitious, but also practical and adaptable. Thank you. Secure AI agents connect, prepare, and automate your data workflows, helping you gain insights, receive alerts, and act with ease through guided apps tailored to your role.
Starting point is 00:35:11 Data is hard. Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.
