Defining Shadow AI and Identifying Its Causes
‘Workers, in the absence of structured advice or oversight, are looking to gain access to something that has a strategic benefit to them being able to perform their jobs every day,’ says Barracuda CIO Siroui Mushegian. The challenge that agency leaders face is that AI use is advancing rapidly, and governance can’t keep up.
‘Different teams across an organization may adopt AI tools independently to enhance productivity, analyze data, or drive innovation without going through formal IT approval processes,’ says Cristian Rodriguez, field CTO for the Americas at CrowdStrike. ‘This can stem from pressure to stay competitive, a lack of awareness about security protocols, or insufficient enterprise AI solutions. Without visibility into these deployments, organizations lose control over how data is accessed, processed, and stored.’
Why Shadow AI Is a Unique Security Problem
The rapid advancement of AI technology presents a new challenge as few established norms around security exist. ‘We’ve been dealing with cloud security for a long time, so we know what secure looks like with that,’ says Mitch Herckis, global head of government affairs at Wiz. ‘With AI, we are much less certain. The vulnerabilities and risks aren’t as well known. There’s no common understanding of the risk it presents.’
The unsanctioned use of new workplace technology may go unnoticed by agency leaders and security teams. ‘The understanding of shadow AI as an issue has not broken through at the C-level,’ Herckis adds. ‘It hasn’t received the attention it deserves because people are still adopting it; it’s still novel. Leaders are busy struggling with many of the traditional problems.’
Unnecessary Risks and Costs Stemming From Shadow AI
When onboarding new technologies, agencies typically follow a rigorous vetting process that weighs several factors, security among them. Staff who use unsanctioned AI tools bypass that vetting and risk ‘introducing potential vulnerabilities’ into their projects.
‘These tools may lack encryption, secure data storage, or compliance with regulatory standards, exposing sensitive information,’ Rodriguez warns. ‘With adversaries increasingly targeting AI models and the sensitive data they process, shadow AI can accelerate risks of breaches and leaks.’
Moreover, unapproved AI tools lead to duplicated effort across teams and unnecessary costs. Rodriguez notes, ‘This duplication not only increases licensing fees and support costs but also complicates the organization’s ability to standardize AI operations. Disparate AI systems may produce conflicting results, requiring additional effort to reconcile outputs. This inefficiency hampers productivity and delays decision-making.’
Tools and Strategies for Managing the Shadow AI Threat
Security teams can draw from their experience with shadow IT to combat shadow AI effectively. To start, a complete inventory of the agency’s IT environment, including AI, is essential. ‘Getting a technical inventory of AI models and technology through automated means is critical for understanding the situation,’ says Herckis. ‘There are AI security posture management tools available, such as Wiz’s AI-SPM, that can help agencies with this.’
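Commercial AI security posture management platforms automate this discovery, but the underlying idea can be illustrated with a simple sketch. The hypothetical Python example below scans a web proxy log for traffic to well-known generative AI endpoints to produce a first-pass inventory of unsanctioned use; the log format, file path, column names, and domain list are illustrative assumptions, not features of any particular product.

```python
import csv
from collections import Counter

# Hypothetical list of domains associated with public generative AI
# services; a real deployment would maintain a much larger, curated feed.
AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

def inventory_ai_traffic(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI domain) from a CSV proxy log.

    Assumes a simple export with 'user' and 'dest_host' columns;
    adjust the field names to match your proxy's actual log format.
    """
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row.get("dest_host", "").lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Surface the heaviest users of unsanctioned AI services first.
    for (user, host), count in inventory_ai_traffic("proxy.csv").most_common(10):
        print(f"{user:20} {host:25} {count:5} requests")
```

A first pass like this will not catch desktop tools or embedded AI features, which is why dedicated AI-SPM products correlate multiple signals rather than relying on network logs alone.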
Once a technical inventory has been established, security teams must implement controls around AI technologies. ‘Agencies can then set up correct permissions, giving the right people access to the right data sources,’ Herckis adds. ‘Ensuring that you identify AI technologies and isolate them appropriately is critical. AI security posture management serves to continually address these concerns.’
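To make the permissioning step concrete, here is a minimal, hypothetical sketch of the kind of allowlist check a gateway might perform before an AI tool touches a data source. The policy structure, tool names, and roles are invented for illustration and are not drawn from any specific AI-SPM product.

```python
# Hypothetical policy mapping sanctioned AI tools to the data sources
# and roles allowed to use them; in practice this would live in a
# policy engine or identity provider, not a hard-coded dict.
POLICY = {
    "approved-summarizer": {
        "allowed_sources": {"public-docs", "press-releases"},
        "allowed_roles": {"analyst", "communications"},
    },
}

def authorize(tool: str, data_source: str, role: str) -> bool:
    """Return True only if the tool is sanctioned, the data source is
    on its allowlist, and the requesting role is permitted."""
    rules = POLICY.get(tool)
    if rules is None:  # Unknown tool means shadow AI: deny by default.
        return False
    return (data_source in rules["allowed_sources"]
            and role in rules["allowed_roles"])

# Example: an analyst may summarize public documents, but nothing
# reaches an unsanctioned tool or a sensitive data source.
assert authorize("approved-summarizer", "public-docs", "analyst")
assert not authorize("approved-summarizer", "case-files", "analyst")
assert not authorize("random-chatbot", "public-docs", "analyst")
```

The deny-by-default branch is the essential design choice: any AI tool absent from the policy is treated as shadow AI and isolated until it has been vetted.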
‘By automating threat detection, vulnerability management, and policy enforcement, AI-SPM solutions like CrowdStrike Falcon Cloud Security AI-SPM allow organizations to remediate risks in real time while preventing unauthorized tools from entering the ecosystem,’ Rodriguez explains. ‘Cybersecurity professionals agree that integrating safety and privacy controls is key to unlocking generative AI’s potential, highlighting the importance of governance in creating a secure and innovative AI environment.’
Establishing AI Use Policies and Understanding Use Cases
Besides deploying security measures, agencies should establish policies and governance to manage AI use thoughtfully. The aim shouldn’t be outright prohibitions on AI, but to create guardrails for its adoption to maximize benefits. ‘Agencies should develop an AI policy that is communicated clearly within the organization so people understand the framework,’ Mushegian recommends. ‘This must be written from a risk-based approach by the agency’s legal head.’
Establishing a steering committee on AI may provide better insights on use cases and concerns within the organization. ‘An AI council allows leaders to gather diverse perspectives, facilitating a comprehensive understanding of team needs,’ Mushegian says. ‘It is crucial to include leadership from security and product teams along with general counsel.’
Ultimately, agency leaders must recognize AI as both a valuable resource and a potential threat. By providing safe avenues for AI use, leadership can ensure that teams meet their objectives without compromising security. ‘It is the responsibility of leadership to provide the right tools to workers,’ Herckis emphasizes. ‘If you don’t supply the right tools for people to do their jobs, they will seek alternatives, leading to an unfavorable risk posture.’