CyberWire Daily - Critical GoAnywhere bug fuels ransomware wave.
Episode Date: October 7, 2025

Microsoft tags a critical vulnerability in Fortra's GoAnywhere software. A critical Redis vulnerability could allow remote code execution. Researchers tie BIETA to China's MSS technology enablement. Competing narratives cloud the Oracle E-Business Suite breach. An Ohio-based vision care firm will pay $5 million to settle phishing-related data breach claims. "Trinity of Chaos" claims to be a new ransomware collective. LinkedIn files a lawsuit against an alleged data scraper. This year's Nobel Prize in Physics recognizes pioneering research into quantum mechanical tunneling. On today's Industry Voices segment, we are joined by Alastair Paterson from Harmonic Security, discussing shadow AI and the new era of work. Australia's AI-authored report gets a human rewrite.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
On today's Industry Voices segment, we are joined by Alastair Paterson, CEO and Co-Founder of Harmonic Security, discussing shadow AI and the new era of work. You can hear the full conversation with Alastair here.

Selected Reading
Microsoft: Critical GoAnywhere Bug Exploited in Medusa Ransomware Campaign (Infosecurity Magazine)
Redis warns of critical flaw impacting thousands of instances (Bleeping Computer)
BIETA: A Technology Enablement Front for China's MSS (Recorded Future)
Well, Well, Well. It's Another Day. (Oracle E-Business Suite Pre-Auth RCE Chain - CVE-2025-61882) (Labs)
EyeMed Agrees to Pay $5M to Settle Email Breach Litigation (Govinfo Security)
Ransomware Group "Trinity of Chaos" Launches Data Leak Site (Infosecurity Magazine)
LinkedIn sues ProAPIs for using 1M fake accounts to scrape user data (Bleeping Computer)
The Nobel Prize for physics is awarded for discoveries in quantum mechanical tunneling (NPR)
Deloitte refunds Australian government over AI in report (The Register)

Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.

Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com.

The CyberWire Daily podcast is a production of N2K Networks, your source for critical industry insights, strategic intelligence, and performance-driven learning products. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
And now a word from our sponsor.
The Johns Hopkins University Information Security Institute is seeking qualified applicants
for its innovative Master of Science in Security Informatics degree program.
Study alongside world-class interdisciplinary experts
and gain unparalleled educational research and professional experience in information security and assurance.
Interested U.S. citizens should consider the Department of Defense's Cyber Service Academy program,
which covers tuition, textbooks, and a laptop, as well as providing a $34,000 additional annual stipend.
Apply for the fall 2026 semester and for this scholarship by February 28th.
Learn more at cs.jhu.edu slash MSSI.
Researchers tie BIETA to China's MSS technology enablement.
Competing narratives cloud the Oracle E-Business Suite breach.
An Ohio-based vision care firm will pay $5 million to settle phishing-related data breach claims.
Trinity of Chaos claims to be a new ransomware collective.
LinkedIn files a lawsuit against an alleged data scraper.
This year's Nobel Prize in Physics recognizes pioneering research into quantum mechanical tunneling.
In today's Industry Voices segment, we're joined by Alastair Paterson from Harmonic Security,
discussing Shadow AI and the new era of work.
And Australia's AI authored report gets a human rewrite.
It's Tuesday, October 7, 2025.
I'm Dave Bittner, and this is your Cyberwire Intel briefing.
Thanks for joining us here today.
It's great as always to have you with us.
A critical vulnerability in Fortra's GoAnywhere managed file transfer software is being exploited in ransomware attacks.
Microsoft has warned the flaw, with a maximum CVSS score of 10, allows attackers to
bypass license signature verification and achieve remote code execution on vulnerable systems.
Exploitation requires no authentication if attackers can forge or intercept valid license responses,
posing significant risk to internet-facing instances.
Microsoft linked the zero-day activity to the threat group Storm-1175,
which used legitimate remote monitoring tools, network scanners, and Cloudflare tunnels for command and control,
before deploying Medusa ransomware.
Though Fortra patched the flaw on September 18th,
hundreds of exposed GoAnywhere servers remain.
Microsoft urged immediate patching, network perimeter reviews,
and running endpoint defenses in block mode.
A critical vulnerability in Redis could allow attackers to gain remote code execution on affected systems.
Redis, short for remote dictionary server,
is an open-source in-memory data structure store that's widely used as a database,
cache, and message broker. The flaw, with a CVSS score of 10, stems from a 13-year-old use-after-free
bug in Redis's Lua scripting feature, which is enabled by default. Authenticated attackers
can exploit it to escape the Lua sandbox, trigger memory corruption, and establish a reverse shell
for persistent access. Researchers at Wiz, who discovered the issue and dubbed it RediShell,
warned that over 330,000 Redis instances are exposed online, with at least 60,000 requiring no
authentication. Exploited systems risk data theft, ransomware, or crypto mining. Redis has issued
patches for all supported versions and urges immediate updates, especially for internet-facing servers.
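For teams triaging exposure, the guidance boils down to: patch, and until then restrict Lua scripting to trusted users. As a rough sketch of the version-triage step, here is a small Python helper that flags servers still on an unpatched build. The minimum patched versions in the table are illustrative assumptions, not the authoritative list; consult Redis's official advisory for your branch.

```python
# Sketch: flag Redis servers that may still be exposed to the Lua
# use-after-free bug. The patch floors below are ASSUMED for illustration;
# verify against Redis's advisory before relying on them.

PATCHED = {  # (major, minor) branch -> first patched patch level (assumed)
    (7, 2): 11,
    (7, 4): 6,
    (8, 0): 4,
    (8, 2): 2,
}

def parse_version(v: str) -> tuple[int, int, int]:
    """Turn a redis_version string like '7.2.4' into an int tuple."""
    major, minor, patch = (int(x) for x in v.split(".")[:3])
    return major, minor, patch

def is_patched(v: str) -> bool:
    """True if this version's branch carries the fix, per the assumed table."""
    major, minor, patch = parse_version(v)
    floor = PATCHED.get((major, minor))
    if floor is None:
        # Unknown or end-of-life branch: treat as unpatched, investigate manually.
        return False
    return patch >= floor

if __name__ == "__main__":
    for v in ("7.2.4", "8.2.2"):
        print(v, "patched" if is_patched(v) else "VULNERABLE")
```

A version check alone is not a mitigation; the interim hardening steps are restricting who can run Lua scripts (for example via ACLs), requiring authentication, and keeping instances off the open internet.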
A new report from Recorded Future's Insikt Group says the Beijing Institute of Electronics, Technology,
and Application, or BIETA, is almost certainly affiliated with China's Ministry of State Security.
Researchers assess BIETA is very likely MSS-led and likely a public front for the MSS First Research Institute.
Public sources indicate BIETA researches steganography, communications, and forensics,
and collaborates with the MSS-run University of International Relations.
Personnel histories, including links to CNITSEC, reinforce the assessment.
Activities likely aid intelligence, counterintelligence, and military missions.
The research concludes BIETA almost certainly forms part of a broader MSS enablement network.
Engagement with them risks technology transfer, covert communication support,
and strengthened cyber-espionage tradecraft.
Export-control authorities, academia, and vendors should review ties and conduct strict due diligence.
Reports of active exploitation targeting Oracle E-Business Suite have sparked widespread confusion
and competing narratives across the cybersecurity community.
Over the past week, vendors and researchers have offered conflicting explanations,
ranging from password issues to credential reuse to an alleged zero day,
each claiming to have identified the true root cause.
Analysis by watchTowr Labs claims that the attacks involve a remotely exploitable flaw
that allows unauthenticated code execution across multiple Oracle EBS versions.
The report calls for restraint, criticizing speculation that fueled panic and misinformation
before Oracle's official advisory.
The incident highlights how rumor and premature attribution can undermine coordinated response
during active exploitation.
Clear communication and evidence-based reporting remain vital as security teams assess exposure
and await further clarification from Oracle and trusted researchers.
Ohio-based EyeMed Vision Care will pay $5 million to settle a class action lawsuit
over a 2020 phishing-related data breach affecting its email system.
The settlement provides compensation for affected members,
including up to $10,000 for documented losses
and smaller payments for time and inconvenience.
EyeMed will also implement new security controls
such as enhanced multi-factor authentication,
stricter password policies,
employee training, and third-party HIPAA risk assessments.
The company denies wrongdoing, but agreed
to improve its cybersecurity posture as part of the resolution.
A new Tor-hosted leak site run by the Trinity of Chaos Ransomware Collective,
allegedly tied to Lapsus$, Scattered Spider, and ShinyHunters,
lists 39 major companies and claims more than 1.5 billion records across 760 firms,
Resecurity reports.
Rather than announcing fresh intrusions, the group published previously
disclosed data from past breaches and has threatened Salesforce, alleging massive corporate data
holdings. Salesforce denies new vulnerabilities. Sample data reportedly contains significant personally
identifiable information, but few passwords, suggesting access via stolen OAuth tokens and
vishing tied to third-party integrations. The FBI issued an alert to help detect similar
compromises. The leak site faces DDoS attacks, and the group has set an October 10th negotiation deadline.
Experts warn further releases could spur phishing, identity theft, and AI-driven abuse.
LinkedIn has filed a lawsuit against Delaware-based ProAPIs Incorporated and its founder,
accusing them of creating over 1 million fake accounts to scrape user data and sell access via a tool
called iScraper API.
The company seeks a permanent injunction,
data deletion, and damages.
LinkedIn alleges ProAPIs charged up to $15,000 per month
for large-scale scraping,
violating its terms of service.
The suit also names a Pakistan-based partner, NetSwift.
LinkedIn says it will continue aggressive legal action
to protect member data.
John Clarke, Michel Devoret,
and John Martinis have been awarded the 2025 Nobel Prize in Physics
for pioneering research into quantum mechanical tunneling,
a phenomenon fundamental to quantum computing and modern electronics.
Clarke of UC Berkeley said the award was the surprise of his life,
adding that their collective work underpins technologies like smartphones.
The Nobel Committee praised their discoveries
for advancing quantum cryptography, computing, and sensing,
calling them vital to the next generation of digital innovation.
This year's physics prize is the 119th to be awarded,
carrying a cash prize of about $1.2 million.
Other Nobel announcements continue throughout the week
with the award ceremony set for December 10th in Stockholm.
Sadly, there's still no Nobel for podcasting.
Coming up after the break, my conversation with Alastair Paterson from Harmonic Security.
We're discussing Shadow AI and the new era of work.
And Australia's AI authored report gets a human rewrite.
Stay with us.
At Thales, they know cybersecurity can be tough, and you can't protect everything. But with Thales, you can secure what
matters most. With Thales's industry-leading platforms, you can protect critical applications,
data, and identities, anywhere and at scale, with the highest ROI. That's why the most trusted
brands and largest banks, retailers, and healthcare companies in the world rely on Thales
to protect what matters most. Applications, data, and identity. That's Thales. T-H-A-L-E-S.
Learn more at thalesgroup.com slash cyber.
What's your 2 a.m. security worry?
Is it, do I have the right controls in place?
Maybe are my vendors secure?
Or the one that really keeps you up at night,
how do I get out from under these old tools and manual processes?
That's where Vanta comes in.
Vanta automates the manual work so you can stop sweating over spreadsheets,
chasing audit evidence, and filling out endless questionnaires.
Their trust management platform continuously monitors your systems,
centralizes your data, and simplifies your security at scale.
And it fits right into your workflows,
using AI to streamline evidence collection, flag risks,
and keep your program audit ready all the time.
With Vanta, you
get everything you need to move faster, scale confidently, and finally, get back to sleep.
Get started at vanta.com slash cyber. That's V-A-N-T-A dot com slash cyber.
Alastair Paterson is from Harmonic Security, and in today's sponsored Industry Voices segment,
we discuss shadow AI and the new era of work.
So today we are tracking some of the changes that we're seeing in workplaces,
particularly as a result of AI-driven tools.
And I know you and your colleagues there make the point that there's kind of a new presence on people's desktops these days.
It's not just Microsoft Word and a browser anymore.
Yeah, that's right.
I grew up in that world where everyone got their work done
essentially on the Office suite and email, and then SaaS came along.
And then I think, you know, the biggest change that I've ever seen is occurring right now,
which is that, you know, a lot of people start and finish a number of work activities
in these AI chatbots and agents and other applications that are coming along very fast.
I think this is a generational shift, of course, as many have said before me,
with a lot of profound implications, both for how we work, but also how we think about security.
Well, for the employees themselves, how does this shift show up in their day-to-day work?
Yeah, I mean, I think previously, you know, you'd have the Google search bar there.
You'd be writing a document or an email, and you would use, you know, that standard set of tools that we've all got so used to.
But I think now, you know, first of all, the likes of ChatGPT, it's just been the most incredible growth that we've seen through the workplace.
And whether employers facilitating the adoption of AI or not, it is happening everywhere.
And what that means, typically in most people's days, I'm sure it is in yours and mine,
one of the things we think about first when we've got a new problem to solve is,
hey, you know, can I use an AI for this that might save me a whole bunch of time
in whatever it is that I'm doing, whether it's researching something or summarizing something
or even being a sparring partner in learning something, I find, you know,
it can be very, very effective.
So, you know, we see this in the activity that we monitor.
There's just a great shift underway where very much more of our job is interacting with
these AI agents and chatbots and other applications that are being built for the enterprise.
Well, let's dig into some of the security implications here.
I know you've said that there's no true control plane for AI usage today.
What does that mean in practice for organizations?
Organizations are in a tough spot because they clearly are under a lot of pressure to adopt AI as
fast as possible and not be left behind competitively. And so every board, every CEO is sort of
carrying that same message of being an AI leader and pushing into AI. But then at some point
the security team and trust and compliance get involved and they start to think more about,
where does our sensitive data go in this scenario? You know, what is being adopted, and where
are our employees putting our data? That tension exists everywhere right now. And the problem is that
traditional controls are just not set up for this era. I mean, we went through the, obviously,
the SaaS era most recently. We have, you know, web gateways and SASE/CASB capabilities. But
they were designed for a different era. And the problem now is that they typically don't see
the prompt level data, the use cases around that. And, you know, how AI is being used by
employees and where the data is going beyond the list of URLs.
And so we're really trying to understand contextually what are the employees doing and, you know,
is that something that's high risk or not?
And where, you know, where's the ROI on my tools even is something that most companies
struggle with.
They're sort of rushing into deployment or I think worse is when they actually just try to block
access and then employees find ways around anyway.
Well, are you finding that companies are trying to retrofit old security
technologies for AI?
Yeah, I think it's the natural first place to look, right?
Because nobody wants yet another tool if they can avoid it.
And so they'll look at the, you know, SASE/CASB world, first of all,
and maybe they want to revisit DLP as a control plane,
which strikes, you know, fear into the heart of most security professionals
for many reasons, as you know.
And then they, you know, they figure out that trying to get visibility there is challenging as well
because you get a kind of URL list and not much more.
And then the other area is, you know, Microsoft will say, hey, go and label everything
with purview. And that's a pretty big challenge for most security teams, you know, to find all
the data and label it. And even if you do, the challenge is, well, what's going into the prompt
data, which is not necessarily, you know, files that can be easily labeled. And when we try and
apply the last era's DLP-style, you know, PII detection, credit cards and, you know,
social security numbers and things like that that are easily matchable, that only
tells a small part of the story here. There's lots of other very sensitive business information
that's getting put into these chat applications that in aggregate, in particular, it could be
very damaging. It could be outlining M&A events, or it could be, you know, legal action, or it could
be to do with layoffs and personnel changes and HR issues. And every industry has some slightly
different nuances to it. But essentially, there's a ton of sensitive corporate data that's
getting put into these engines.
And we also, we shouldn't be trying to stop it here, because there's huge benefits
to be gained in letting your employees use these tools too.
So it's finding that balance.
But I think for sure, the tools that were designed for the last era are not fit for
this era, as we've seen time and time again in security.
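Paterson's point about pattern-based detection can be made concrete with a toy sketch. The regexes and the helper below are illustrative, not any vendor's actual rules: a classic DLP pattern fires on a well-formed SSN, but a prompt leaking deal strategy in plain prose sails straight through.

```python
import re

# Toy illustration of the limits of pattern-based DLP: regexes catch
# well-formed identifiers (SSNs, card numbers) but are blind to sensitive
# business context expressed in ordinary prose.

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # 123-45-6789 style
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")        # loose card-number shape

def classic_dlp_flags(prompt: str) -> list[str]:
    """Return the pattern-based detections that fire on a prompt."""
    hits = []
    if SSN.search(prompt):
        hits.append("ssn")
    if CARD.search(prompt):
        hits.append("card")
    return hits

# A prompt containing a matchable identifier is flagged...
assert classic_dlp_flags("Employee SSN is 123-45-6789") == ["ssn"]

# ...but a prompt leaking deal strategy goes undetected.
assert classic_dlp_flags(
    "Summarize our plan to acquire Acme Corp before the Q3 announcement"
) == []
```

The second prompt is exactly the "M&A, legal, HR" category Paterson describes: individually unmatchable, but damaging in aggregate, which is why prompt-level context rather than identifier matching is the argument here.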
Well, some organizations are trying to block AI tools altogether.
And you make the case that that's unsustainable.
Absolutely.
I mean, I think it's just
so apparent that if you try to stop employees using these tools. I mean, they're all using
it in their personal lives now. And so when they come in the workplace, they expect to be able to
get access to this. And the strategy that I see from a lot of companies is to say, well, okay,
here's our AI policy, part one. Part two is we put in place our AI steering committee, okay,
but it hasn't typically got very good visibility into how AI is being used and adopted, because
it's usually just looking at the SASE tool. And then part three, when it comes to control,
Well, no one really wants to put DLP in place or deal with labeling if they can avoid it.
So they do often go with that blocking approach.
And the problem is the employees tend to find ways around those controls.
They get frustrated.
The security team ends up in exception hell, having to approve lots of apps for different teams in different ways.
And we're back to security being the Department of No again.
And to give you just one anecdote, I was talking recently to the head of AI, actually,
at a pretty major insurance company in the US.
And he said to me, hey, Al, I don't have access to ChatGPT,
and I'm the head of AI.
And I said, well, you know, what do you do?
And he held up to the camera his, you know, a laptop.
And he said, well, I use this laptop instead,
which is his personal laptop.
And then he said, you know, and so does my team.
And that was his way of dealing with that corporate block.
But, you know, we see it everywhere.
And no one really wants to use the corporate mandated version.
necessarily. I mean, there's one other customer we're working with at this time in Europe where they
deployed Harmonic and discovered that they'd mandated Microsoft Copilot as the AI tool of choice
and tried to point all the employees at that. And they bought a lot of licenses as well, so spending
a lot of money. They had four times as many users of the free ChatGPT edition than they had of
the corporate Copilot. It's just staggering, right? And we also see, interestingly, even where you've
got paid ChatGPT, about 40 to 50% of the data loss we see is going through personal accounts
into ChatGPT. So even when the corporate one's available, often employees are opting to use
their Gmail personal logins for their own accounts, maybe because they have other information
in there already and they're used to it and that sort of thing. But yeah, it's very interesting
how this adoption journey is going. And I think just blocking things is never going to be the right
answer. Yeah, it's interesting that, you know, we've always talked about Shadow IT, but I guess
Shadow AI is kind of a subset of that now. That's right. I mean, and I think AI is, is everywhere.
I sort of reject the notion that, you know, as you have in the SASE world, this sort of
AI category of 300 apps that's supposedly all things AI, because I think, you know, essentially
every enterprise app is building LLMs in the back end at this point. So I think about it being,
well, we were pre-GenAI era and now we're kind of post-GenAI era.
And this is just the new reality and how we handle it is the next question.
Well, looking at the companies who are having success here,
can you describe what sort of things they're doing?
What does it look like?
Yeah, I think the winning approach here is to try to work with the employees
and meet them where they are, right?
Understand the use cases, understand why
they're using certain tools and get that visibility, that picture of what's going on so that you
can put the appropriate controls in place. I think the blanket block is not good because it just
pushes people outside of your monitoring and they inevitably adopt these tools anyway and it
causes all the frustration that I talked about. Equally, just having a completely permissive policy
isn't great either because you're accepting a pretty major risk, whether it's customer data
going to China-hosted apps or your IP becoming part of someone else's training
data set. Those are often risks that companies are pretty concerned about.
And so I think the best thing to do here is to try to get the visibility into how AI is
being adopted in use today. You may find, for example, people are instead of using something
like Copilot, they're using, let's say, Gamma AI for presentation generation, or Beautiful AI,
or Napkin AI. In every category, there's a lot of these new apps that are popping
up, and then go and find out why, right? What's the requirements gap? Do we need a dedicated
control in that space? Can we standardise on one and put an enterprise agreement around it? Or do we
want to block and redirect them in this case? Because we think really they should be using
Copilot or whatever it might be. But at least then you're having the conversation with them,
meeting them where they are, we're not the Department of No anymore. We're facilitating and
hopefully accelerating AI adoption. You know, I think that's one area that is interesting to me psychologically
is that some employees self-censor.
So instead of leaning into these tools,
they're worried about using them at work
and they're sort of holding back a little bit.
And so I think you've kind of got to give them permission,
encourage them to lean in and meet them where they are.
And that way, I think, yeah, security can become an enabler again
and the business is going to benefit overall.
What about for the IT teams and the security folks at the organization
to encourage them to have a helpful approach here,
as you say, to not be the Department of No?
That's right.
I mean, I think there's a bit of education here.
It's pretty tough because it's not as if security teams
weren't overstretched already, right?
There was enough going on.
And now they have to become AI experts on the side as well,
which is not great.
There's just so much buzz around this space as well.
You know, there's the threats from AI,
which is one area.
There's, you know, using AI for security in the SOC,
which is another area.
And there's the area around building your own
AI and trying to protect that, and your own apps and so on. But I think the fourth area is where
I think it's the most immediately interesting and applicable, which is, you know, how do we enable
the employees to use AI safely and securely and put those appropriate guardrails around?
And I think that, again, starts with really visibility and then understanding the needs of
the business and meeting the business at that point, and making sure that instead of the
Department of No we're saying, yes, use AI, but do it with appropriate
guardrails, you know, and that starts with a policy. I think everyone's got a policy now, pretty much.
And then, you know, you get your steering committee together, but I think you need to feed the steering
committee some, you know, some proper data and visibility into what's going on so that you can then
make the appropriate controls and guardrails and have those in place around the business.
Are there any common mistakes that you see organizations making here?
There's probably four buckets in all that I see. I think that there's a set of companies
that are just very permissive
and they don't particularly
care about their data. There's not too many
of those, but they are out there and I think
they're pretty wide open and
you know, that is
what it is. Then there's
the opposite extreme, you've got the ones that are
just in heavy block
mode and just saying no to all things
AI and trying to block everything. And I think
outside of kind of national security
areas, it's probably
overkill in most cases and ends up
being counterproductive.
because I think, as I said,
you're going to drive the behavior just outside your monitoring,
which is not helpful.
People use their own devices.
They disable or go around the controls,
and that's not a good place to be either.
And then I think in the middle,
it's the ones that are either, you know,
right now they're very permissive,
but they're worried about the risk
and they want to put some controls around it.
I think that makes some sense that you've got to lean in, right?
You're trying to enable your employees.
You've always had that attitude,
but you are worried about the risk.
So that's kind of bucket three.
And then bucket four I see probably the most, which is companies that are currently in block mode,
but are desperately trying to become more progressive while managing the risk.
And I think that's probably the key category.
Companies that care about their data deeply, but they don't want to sit out this whole AI transformation.
They've got to get in the middle of that.
And again, that comes back to, I think, having the right guardrails, putting the right controls in place,
but ultimately leaning in and enabling the employees.
As we look towards the future here, where do you suppose this is going to take us?
What do you suppose people's working relationship with AI is going to look like a few years down the line?
You know, of course there's a lot of hype around agents right now.
I think the reality, I think for me, of both agents and AI more generally,
is that rather than companies themselves building tons of this stuff in-house,
I really think this is going to be mostly a use-of-third-parties
challenge, because I think you've got so many well-funded, dedicated teams in Silicon Valley
and elsewhere that are building out for every conceivable kind of vertical and use case
at the moment in using AI and agents more generally. And so I think probably what we're going to
see is, yeah, employees are going to be making use of agents, but it's going to be mostly third-party
stuff. I think the enterprise thinks it can dictate how AI is getting deployed. But I think the
reality is that the employees are going to be mostly dictating that by what they use.
like what applications and services they use externally.
I think the majority of that is going to go through the browser, as it has done so far.
You know, if you look at the use of AI and agents today, it's almost all browser-based.
And we even have the agentic browsers now, like Comet and DIA and others,
which are a good first step in this direction.
So, yeah, I think it's going to be more browser-based usage by employees of third-party AI agents
and apps would be my quick summary of that.
And then I think there's a whole other debate around where
engineering is going from here. And I think for sure there's a place for these, obviously,
the AI engineering environments, with Cursor in the lead, but Windsurf and many others in the
mix too. That's Alastair Paterson from Harmonic Security.
Presale tickets can score you a spot trackside, so being a fan for life turns into the trip of a lifetime.
That's the powerful backing of Amex. Presale tickets for future events subject to availability and vary by race.
Terms and conditions apply. Learn more at amex.ca slash y-amex.
And finally, Deloitte has agreed to refund part of a $440,000 Australian government contract after admitting that a
report it produced was, shall we say, a little too imaginative. The Department of Employment and
Workplace Relations discovered that its commissioned analysis contained fake citations, phantom footnotes,
and even a fabricated court judgment, courtesy of a large language model enlisted to tidy up the
paperwork. Officials insist the substance remains intact, though the confession reads like a case
study in modern due diligence gone missing. Increasingly, AI is slipping into serious policy work,
performing assistive tasks that somehow leave fingerprints of fiction. The irony, of course,
is that this technology is being sold as a tool for efficiency and truth, yet keeps
demonstrating a flair for creative writing. The quiet weekend upload of the corrected version
suggests that the machines aren't the only ones
generating artful evasions these days.
And that's the Cyberwire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com.
N2K's senior producer is Alice Carruth. Our Cyberwire producer is Liz Stokes. We're mixed by Tré Hester, with original music by Elliott Peltzman.
Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave Bittner. Thanks for
listening. We'll see you back here tomorrow.
Cyber Innovation Day is the premier event for cyber startups, researchers, and top VC firms building trust into tomorrow's digital world.
Kick off the day with unfiltered insights and panels on securing tomorrow's technology.
In the afternoon, the eighth annual DataTribe Challenge takes center stage as elite startups pitch for exposure, acceleration, and
funding. The Innovation Expo runs all day, connecting founders, investors, and researchers around
breakthroughs in cybersecurity. It all happens November 4th in Washington, D.C. Discover the
startups building the future of cyber. Learn more at cid.datatribe.com.
