It’s not exactly breaking news to say that AI has dramatically changed the cybersecurity industry. Attackers and defenders alike are turning to artificial intelligence to uplevel their capabilities, each striving to stay one step ahead of the other. This cat-and-mouse game is nothing new—attackers have been trying to outsmart security teams for decades, after all—but the emergence of artificial intelligence has introduced a fresh (and often unpredictable) element to the dynamic. Attackers across the globe are rubbing their hands together with glee at the prospect of leveraging this new technology to develop innovative, never-before-seen attack methods.
At least, that’s the perception. But the reality is a little bit different. While it’s true that attackers are increasingly leveraging AI, they are mostly using it to increase the scale and complexity of their attacks, refining their approach to existing tactics rather than breaking new ground. The thinking here is clear: why spend the time and effort to develop the attack methods of tomorrow when defenders already struggle to stop today’s? Fortunately, modern security teams are leveraging AI capabilities of their own—many of which are helping to detect malware, phishing attempts, and other common attack tactics with greater speed and accuracy. As the “AI arms race” between attackers and defenders continues, it will be increasingly important for security teams to understand how adversaries are actually deploying the technology—and to ensure that their own efforts are focused in the right place.
How Attackers Are Leveraging AI
The idea of a semi-autonomous AI being deployed to methodically hack its way through an organization’s defenses is a scary one, but (for now) it remains firmly in the realm of William Gibson novels and other science fiction fare. It’s true that AI has advanced at an incredible rate over the past several years, but we’re still a long way off from the sort of artificial general intelligence (AGI) capable of perfectly mimicking human thought patterns and behaviors. That’s not to say today’s AI isn’t impressive—it certainly is. But generative AI tools and large language models (LLMs) are most effective at synthesizing information from existing material and generating small, iterative changes. They can’t create something entirely new on their own—but make no mistake, the ability to synthesize and iterate is incredibly useful.
In practice, this means that rather than developing new methods of attack, adversaries can uplevel their current ones. Using AI, an attacker might be able to send millions of phishing emails instead of thousands. They can also use an LLM to craft a more convincing message, tricking more recipients into clicking a malicious link or downloading a malware-laden file. Tactics like phishing are effectively a numbers game: the vast majority of people won’t fall for a phishing email, but if millions of people receive it, even a 1% success rate can result in thousands of new victims. If LLMs can bump that 1% success rate up to 2% or more, scammers can double the effectiveness of their attacks with little to no extra effort. The same goes for malware: if small tweaks to malware code can effectively camouflage it from detection tools, attackers can get far more mileage out of an individual malware program before they need to move on to something new.
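To make that numbers game concrete, here is a back-of-the-envelope sketch. The email volumes and success rates are hypothetical, chosen only to show how campaign scale and conversion rate multiply together.

```python
# Back-of-the-envelope illustration of phishing as a numbers game.
# All volumes and success rates below are hypothetical, for scale only.

def expected_victims(emails_sent: int, success_rate: float) -> int:
    """Expected number of recipients who fall for the phish."""
    return int(emails_sent * success_rate)

# A manually run campaign: thousands of emails at a 1% success rate.
print(expected_victims(10_000, 0.01))      # 100 victims

# An AI-scaled campaign: millions of emails, plus LLM-crafted copy
# that nudges the success rate from 1% to 2%.
print(expected_victims(2_000_000, 0.01))   # 20,000 victims from scale alone
print(expected_victims(2_000_000, 0.02))   # 40,000: doubled again by better copy
```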
The other element at play here is speed. Because AI-driven attacks are not subject to human limitations, they can often execute an entire attack sequence far faster than a human operator could. That means an attacker could potentially break into a network and reach the victim’s crown jewels—their most sensitive or valuable data—before the security team even receives an alert, let alone responds to it. If attackers can move faster, they don’t need to be as careful—which means they can get away with noisier, more disruptive activities without being stopped. They aren’t necessarily doing anything new here, but by pushing forward with their attacks more quickly, they can outpace network defenses in a potentially game-changing way.
This is the key to understanding how attackers are leveraging AI. Social engineering scams and malware programs are already successful attack vectors—but now adversaries can make them even more effective, deploy them more quickly, and operate at an even greater scale. Rather than fighting off dozens of attempts per day, organizations might be fighting off hundreds, thousands, or even tens of thousands of fast-paced attacks. And if they don’t have solutions or processes in place to quickly detect those attacks, identify which ones represent real, tangible threats, and effectively remediate them, they are leaving themselves dangerously open to attackers. Instead of wondering how attackers might leverage AI in the future, organizations should deploy AI solutions of their own with the goal of handling existing attack methods at a greater scale.
Turning AI to Security Teams’ Advantage
Security experts at every level of both business and government are seeking out ways to leverage AI for defensive purposes. In August, the U.S. Defense Advanced Research Projects Agency (DARPA) announced the finalists for its AI Cyber Challenge (AIxCC), which awards prizes to security research teams working to train LLMs to identify and fix code-based vulnerabilities. The challenge is supported by major AI providers, including Google, Microsoft, and OpenAI, all of whom provide technological and financial support for these efforts to bolster AI-based security. Of course, DARPA is just one example—you can hardly throw a stone in Silicon Valley without hitting a dozen startup founders eager to tell you about their advanced new AI-based security solutions. Suffice it to say, finding new ways to leverage AI for defensive purposes is a high priority for organizations of all types and sizes.
But like attackers, security teams often find the most success when they use AI to amplify their existing capabilities. With attacks happening at an ever-increasing scale, security teams are often stretched thin—both in terms of time and resources—making it difficult to adequately identify, investigate, and remediate every security alert that pops up. There simply isn’t the time. AI solutions are playing an important role in alleviating that challenge by providing automated detection and response capabilities. If there’s one thing AI is good at, it’s identifying patterns—and that means AI tools are very good at recognizing abnormal behavior, especially when that behavior conforms to known attack patterns. And because AI can review vast amounts of data far more quickly than humans can, it allows security teams to scale their operations in a significant way. In many cases, these solutions can even automate basic remediation processes, countering low-level attacks without the need for human intervention. They can also be used to automate security validation, continuously poking and prodding network defenses to ensure they are functioning as intended.
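As a rough illustration of that pattern-recognition idea, the sketch below trains a simple anomaly detector on synthetic “normal” activity and flags a burst that deviates from it. It uses scikit-learn’s IsolationForest; the features, numbers, and contamination setting are invented for illustration and stand in for the far richer telemetry and tuned models a real detection platform would use.

```python
# Minimal sketch of AI-assisted anomaly detection on security telemetry.
# Features and values are hypothetical stand-ins for real signals
# (auth logs, process trees, network flows).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior per user-hour: [logins, MB transferred, distinct hosts]
normal = rng.normal(loc=[5, 20, 2], scale=[1, 5, 0.5], size=(1000, 3))

# Train on the baseline so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new events: one routine, one resembling fast lateral movement
# (a burst of logins, heavy data transfer, many distinct hosts).
events = np.array([
    [5, 22, 2],     # routine activity
    [60, 900, 25],  # suspicious burst
])
for event, label in zip(events, model.predict(events)):
    verdict = "ALERT" if label == -1 else "ok"
    print(f"{event} -> {verdict}")
```

In practice, the value comes less from any single model than from wiring detections like this into triage and automated response, so that analysts only see the alerts worth their time.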
It’s also important to note that AI doesn’t just allow security teams to identify potential attack activity more quickly—it can also dramatically improve their accuracy. Instead of chasing down false alarms, security teams can be far more confident that when an AI solution alerts them to a potential attack, it is worthy of their immediate attention. This is an element of AI that doesn’t get talked about nearly enough—while much of the discussion centers on AI “replacing” humans and taking their jobs, the reality is that AI solutions are enabling humans to do their jobs better and more efficiently, while also alleviating the burnout that comes with performing tedious and repetitive tasks. Far from having a negative impact on human operators, AI solutions are handling much of the perceived “busywork” associated with security positions, allowing humans to focus on more interesting and important tasks. At a time when burnout is at an all-time high and many businesses are struggling to attract new security talent, improving quality of life and job satisfaction can have a massive positive impact.
Therein lies the real advantage for security teams. Not only can AI solutions help them scale their operations to effectively combat attackers leveraging AI tools of their own—they can also keep security professionals happier and more satisfied in their roles. That’s a rare win-win for everyone involved, and it should help today’s businesses recognize that the time to invest in AI-based security solutions is now.
The AI Arms Race Is Just Getting Started
The race to adopt AI solutions is on, with both attackers and defenders finding different ways to leverage the technology to their advantage. As attackers use AI to increase the speed, scale, and complexity of their attacks, security teams will need to fight fire with fire, using AI tools of their own to improve the speed and accuracy of their detection and remediation capabilities. Fortunately, AI solutions are providing critical information to security teams, allowing them to better test and evaluate the efficacy of their own defenses while also freeing up time and resources for more mission-critical tasks. Make no mistake, the AI arms race is only getting started—but the fact that security professionals are already using AI to stay one step ahead of attackers is a very good sign.