David Kellerman is the Field CTO at Cymulate, and a senior technical customer-facing professional in the field of information and cyber security. David leads customers to success and high-security standards.
Cymulate is a cybersecurity company that provides continuous security validation through automated attack simulations. Its platform enables organizations to proactively test, assess, and optimize their security posture by simulating real-world cyber threats, including ransomware, phishing, and lateral movement attacks. By offering Breach and Attack Simulation (BAS), exposure management, and security posture management, Cymulate helps businesses identify vulnerabilities and improve their defenses in real time.
What do you see as the primary driver behind the rise of AI-related cybersecurity threats in 2025?
AI-related cybersecurity threats are rising because of AI’s increased accessibility. Threat actors now have access to AI tools that can help them iterate on malware, craft more believable phishing emails, and upscale their attacks to increase their reach. These tactics aren’t “new,” but the speed and accuracy with which they are being deployed has added significantly to the already lengthy backlog of cyber threats security teams need to address. Organizations rush to implement AI technology, while not fully understanding that security controls need to be put around it, to ensure it isn’t easily exploited by threat actors.
Are there any specific industries or sectors more vulnerable to these AI-related threats, and why?
Industries that are consistently sharing data across channels between employees, clients, or customers are susceptible to AI-related threats because AI is making it easier for threat actors to engage in convincing social engineering schemes Phishing scams are effectively a numbers game, and if attackers can now send more authentic-seeming emails to a wider number of recipients, their success rate will increase significantly. Organizations that expose their AI-powered services to the public potentially invite attackers to try to exploit it. While it is an inherited risk of making services public, it is crucial to do it right.
What are the key vulnerabilities organizations face when using public LLMs for business functions?
Data leakage is probably the number one concern. When using a public large language model (LLM), it’s hard to say for sure where that data will go – and the last thing you want to do is accidentally upload sensitive information to a publicly accessible AI tool. If you need confidential data analyzed, keep it in-house. Don’t turn to public LLMs that may turn around and leak that data to the broader internet.
How can enterprises effectively secure sensitive data when testing or implementing AI systems in production?
When testing AI systems in production, organizations should adopt an offensive mindset (as opposed to a defensive one). By that I mean security teams should be proactively testing and validating the security of their AI systems, rather than reacting to incoming threats. Consistently monitoring for attacks and validating security systems can help to ensure sensitive data is protected and security solutions are working as intended.
How can organizations proactively defend against AI-driven attacks that are constantly evolving?
While threat actors are using AI to evolve their threats, security teams can also use AI to update their breach and attack simulation (BAS) tools to ensure they are safeguarded against emerging threats. Tools, like Cymulate’s daily threat feed, load the latest emerging threats into Cymulate’s breach and attack simulation software daily to ensure security teams are validating their organization’s cybersecurity against the most recent threats. AI can help automate processes like these, allowing organizations to remain agile and ready to face even the newest threats.
What role do automated security validation platforms, like Cymulate, play in mitigating the risks posed by AI-driven cyber threats?
Automated security validation platforms can help organizations stay on top of emerging AI-driven cyber threats through tools aimed at identifying, validating, and prioritizing threats. With AI serving as a force multiplier for attackers, it’s important to not just detect potential vulnerabilities in your network and systems, but validate which ones post an actual threat to the organization. Only then can exposures be effectively prioritized, allowing organizations to mitigate the most dangerous threats first before moving on to less pressing items. Attackers are using AI to probe digital environments for potential weaknesses before launching highly tailored attacks, which means the ability to address dangerous vulnerabilities in an automated and effective manner has never been more critical.
How can enterprises incorporate breach and attack simulation tools to prepare for AI-driven attacks?
BAS software is an important element of exposure management, allowing organizations to create real-world attack scenarios they can use to validate security controls against today’s most pressing threats. The latest threat intel and primary research from the Cymulate Threat Research Group (combined with information on emerging threats and new simulations) is applied daily to Cymulate’s BAS tool, alerting security leaders if a new threat was not blocked or detected by their existing security controls. With BAS, organizations can also tailor AI-driven simulations to their unique environments and security policies with an open framework to create and automate custom campaigns and advanced attack scenarios.
What are the top three recommendations you would give to security teams to stay ahead of these emerging threats?
Threats are becoming more complex every day. Organizations that don’t have an effective exposure management program in place risk falling dangerously behind, so my first recommendation would be to implement a solution that allows the organization to effectively prioritize their exposures. Next, ensure that the exposure management solution includes BAS capabilities that allow the security team to simulate emerging threats (AI and otherwise) to gauge how the organization’s security controls perform. Finally, I would recommend leveraging automation to ensure that validation and testing can happen on a continuous basis, not just during periodic reviews. With the threat landscape changing on a minute-to-minute basis, it’s critical to have up-to-date information. Threat data from last quarter is already hopelessly obsolete.
What developments in AI technology do you foresee in the next five years that could either exacerbate or mitigate cybersecurity risks?
A lot will depend upon how accessible AI continues to be. Today, low-level attackers can use AI capabilities to uplevel and upscale their attacks, but they aren’t creating new, unprecedented tactics – they’re just making existing tactics more effective. Right now, we can (mostly) compensate for that. But if AI continues to grow more advanced and remains highly accessible, that could change. Regulations will play a role here – the EU (and, to a lesser extent, the US) have taken steps to govern how AI is developed and used, so it will be interesting to see whether that has an effect on AI development.
Do you anticipate a shift in how organizations prioritize AI-related cybersecurity threats compared to traditional cybersecurity challenges?
We’re already seeing organizations recognize the value of solutions like BAS and exposure management. AI is allowing threat actors to quickly launch advanced, targeted campaigns, and security teams need any advantage they can get to help stay ahead of them. Organizations that are using validation tools will have a significantly easier time keeping their heads above water by prioritizing and mitigating the most pressing and dangerous threats first. Remember, most attackers are looking for an easy score. You may not be able to stop every attack, but you can avoid making yourself an easy target.
Thank you for the great interview, readers who wish to learn more should visit Cymulate.
Credit: Source link