The Good Tech Companies - Business Pros Underestimate AI Risks Compared to Tech Teams, Social Links Study Shows

Episode Date: June 20, 2025

This story was originally published on HackerNoon at: https://hackernoon.com/business-pros-underestimate-ai-risks-compared-to-tech-teams-social-links-study-shows. Business Professionals Are Half as Concerned as Technical Teams About AI-Driven Threats, Social Links Report Reveals. Check more stories related to cybersecurity at: https://hackernoon.com/c/cybersecurity. You can also check exclusive content about #cybersecurity, #ai-risks, #cybersecurity-for-businesses, #social-links, #darkside-ai, #employee-footprint-risk, #phishing, #good-company, and more. This story was written by: @pressreleases. Learn more about this writer by checking @pressreleases's about page, and for more stories, please visit hackernoon.com. New York, NY, June 13, 2024—A new study from Social Links, a leader in open-source intelligence solutions, reveals a gap between business and technical professionals when it comes to recognizing the risks posed by AI-powered cyberattacks. Despite the rapid rise in threat sophistication, business respondents appear significantly less concerned than their tech colleagues. This fact highlights a potential blind spot in organizational preparedness.

Transcript
Starting point is 00:00:00 This audio is presented by Hacker Noon, where anyone can learn anything about any technology. Business pros underestimate AI risks compared to tech teams, Social Links study shows, by Hacker Noon press releases. A new study from Social Links, a leader in open-source intelligence solutions, reveals a gap between business and technical professionals when it comes to recognizing the risks posed by AI-powered cyberattacks. Despite the rapid rise in threat sophistication, business respondents appear significantly less concerned than their tech colleagues. This fact highlights a potential blind spot in organizational preparedness. The survey gathered insights from 237 professionals,
Starting point is 00:00:39 from CEOs and technical C-level executives to security specialists and product managers, across various industries, including financial services, technology, manufacturing, retail, healthcare, logistics, and government, among others. The results showed that just 27.8% of business professionals, those in non-technical, business-oriented roles, identified the use of AI to generate fake messages as one of the most relevant cyber threats. In contrast, 53.3% of technical professionals flagged it as a top concern, nearly double the level of alarm. A similar pattern emerged around deepfake technology: 46.7% of technical staff expressed concern, compared to just 27.8% of business respondents. This gap underscores a critical vulnerability in organizational security.
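The "half as concerned" and "nearly double the level of alarm" framing is simple ratio arithmetic on the percentages reported above; the minimal Python sketch below reproduces it. The variable names and rounding are illustrative assumptions, not part of the study.

```python
# Illustrative arithmetic only, using the percentages reported in the Social Links survey.
fake_messages = {"business": 27.8, "technical": 53.3}  # % naming AI-generated fake messages a top threat
deepfakes = {"business": 27.8, "technical": 46.7}      # % concerned about deepfake technology

for label, figures in [("AI-generated fake messages", fake_messages), ("deepfakes", deepfakes)]:
    ratio = figures["technical"] / figures["business"]
    print(f"{label}: technical concern is {ratio:.2f}x business concern")

# Prints roughly 1.92x and 1.68x, which is why the report describes business
# professionals as about half as concerned as technical teams about AI-driven threats.
```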
Starting point is 00:01:30 Business professionals, who often make prime targets for sophisticated AI-driven social engineering and deepfake schemes, show notably lower levels of concern or awareness about these threats. At the same time, the most vulnerable departments for cyber threats identified by respondents were Finance and Accounting (24.1%), IT and Development (21.5%), HR and Recruitment (15.2%), and Sales and Account Management (13.9%). "This is no longer a question of if. AI-powered threats are already here and evolving quickly," says Ivan Shkvarun, CEO of Social Links. "We're seeing a clear gap between those building defenses and those most likely to be targeted.
Starting point is 00:02:15 Bridging that gap requires not just better technical tools, but broader awareness and education across all levels of an organization." Key insights from the research. Traditional vs. AI-driven threats: while phishing and email fraud remained the most cited threats (69.6%), followed by malware and ransomware (49.4%), AI-driven attacks are gaining ground: 39.2% of respondents identified the use of AI to craft fake messages and campaigns as a major concern, and 32.9% pointed to deepfakes and synthetic identities, confirming that generative technologies are
Starting point is 00:02:53 now a recognized part of the corporate threat landscape. "Traditional threats like phishing and malware still dominate the charts. But what we're seeing now is that AI isn't replacing these risks, it's supercharging them, turning generic scams into tailored operations: fast, cheap, and more convincing. That's the real shift: automation and personalization at scale," explains Ivan. Employee footprint risk:
Starting point is 00:03:21 60.8% of respondents report that employees use corporate accounts for personal purposes, such as posting on forums, engaging on social media, or updating public profiles. 59.5% also link publicly available employee data, e.g. LinkedIn bios and activity on forums and blogs, to real cyber incidents, identifying it as a recurring entry point for attacks. Unregulated AI adoption: over 82% of companies let employees use AI tools at work, yet only 36.7% have a formal policy that controls how those tools are used.
Starting point is 00:03:58 This gap fuels "shadow AI": the unsanctioned adoption of chatbots, code assistants, or other AI services without IT oversight, which can leak sensitive data and create hidden security and compliance risks. "You can't really stop people from using work accounts or data when they're active online. The same goes for AI tools. People will use them to save time or get help with tasks, whether there's a policy or not. But all this activity leaves digital traces. And those traces can make it easier for scammers to find and target employees. What actually helps is teaching people how to spot the risks and giving them the right tools to stay safe, instead of just saying
Starting point is 00:04:40 'don't do it,'" explains Ivan. The research emphasizes that effective cybersecurity in the AI era requires a holistic approach that extends beyond technical controls to include comprehensive human-centric security programs. Employee training on safe AI use was overwhelmingly perceived by survey respondents as the most effective mitigation measure for shadow AI (72.2%), followed by the development of internal policies (46.8%). Social Links is committed to addressing these evolving challenges and has recently launched the DarkSide AI initiative, aimed at further exploring and mitigating the risks posed by advanced AI-driven threats. About Social Links
Starting point is 00:05:20 Social Links is a global provider of open-source intelligence (OSINT) solutions, recognized as an industry leader by Frost & Sullivan. Headquartered in the United States, the company also has an office in the Netherlands. Social Links brings together data from over 500 open sources covering social media, messengers, blockchains, and the dark web, enabling users to visualize and analyze a comprehensive informational picture and streamline investigations. Its solutions support essential processes across various sectors, including law enforcement, national security, cybersecurity, due diligence, banking, and more. Companies from the S&P 500 and public organizations in over 80 countries rely on Social Links
Starting point is 00:06:04 products every day. Contacts: Email: Social Links at perform.it.com. Website: Social Links. Thank you for listening to this Hacker Noon story, read by Artificial Intelligence. Visit HackerNoon.com to read, write, learn, and publish.
