What We Do in the (AI) Shadows

Shining a light on Shadow AI without stifling innovation

If you have been around security since the early days of cloud adoption, you have probably dealt with Shadow IT – trying to control and govern the unsanctioned use of unknown cloud services by your users. Thanks to the rapid adoption of GenAI, another shadow is expanding fast – Shadow AI.

Shadow AI typically falls into two types. The first, and most familiar, is employees using SaaS-based GenAI applications like ChatGPT or Perplexity without prior approval from their organizations. The second, and more challenging for security, is data science teams evaluating or experimenting with LLMs or AI frameworks that have not been properly vetted and have no enterprise governance. The first case, casual AI users, can be addressed much like Shadow IT – with monitoring and, in some cases, restricted access. This blog focuses on the second case, involving builders of AI applications – Shadow AI in MLOps/LLMOps environments.

As with Shadow IT, the risks of Shadow AI include data leaks, compliance issues, security vulnerabilities, and operational risks. AI adds specific concerns on top of these: inaccurate or biased results from unvalidated models, legal and ethical issues, resource drains, and financial risks driven by the high cost of deploying models and the potential for compliance failures.

A recent McKinsey survey found that 65% of organizations are regularly using generative AI, nearly double the share from the prior year. Another survey found that 81% of respondents are worried about the security implications of generative AI. Despite these concerns, AI adoption is exploding, and the problem of Shadow AI is here to stay. But given the diverse nature of MLOps and LLMOps platforms, governing it is not simple.

Organizations can mitigate these risks by implementing stricter IT governance, encouraging transparency about the tools being used, and fostering collaboration between departments to ensure all AI initiatives align with company policies and regulations. Regular audits and a centralized AI management system can also help identify and manage Shadow AI. To make this work, governance must be automated and integrated seamlessly into existing MLOps/LLMOps and application security processes – for example, as a check that runs inside the build or deployment pipeline.
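
To make this concrete, here is a minimal sketch of what an automated governance gate in a pipeline could look like. The file names (approved_models.txt, models_in_use.txt) and the file-based approach are illustrative assumptions, not part of any specific product; a real deployment would query a model registry and a policy service instead.

```python
# Minimal sketch of an automated governance gate for an MLOps pipeline.
# Assumption: models in use are listed in a project manifest file and the
# approved models live in an allow-list file. Both file names are
# hypothetical; a real tool would pull this data from a model registry.

import sys
from pathlib import Path


def load_list(path: str) -> set[str]:
    """Read one model identifier per line, ignoring blanks and comments."""
    lines = Path(path).read_text().splitlines()
    return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}


def main() -> int:
    approved = load_list("approved_models.txt")
    in_use = load_list("models_in_use.txt")

    unapproved = sorted(in_use - approved)
    if unapproved:
        print("Governance check failed. Unapproved models detected:")
        for model in unapproved:
            print(f"  - {model}")
        return 1  # non-zero exit code fails the CI/CD stage

    print("Governance check passed: all models are on the approved list.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a CI/CD stage, a check like this turns the governance policy into a routine, automated step rather than a manual review that teams are tempted to bypass.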

We should also ask a more fundamental question – why do new technologies typically start in the shadows? It happened in the early 2000s with web browsers, in the 2010s with cloud services, and now in the 2020s with AI. There are typically two factors casting shadows:

  1. Developers being innovative while lacking awareness of risks or governance processes,
  2. Security teams taking a “just say no” approach, trying to block what they don’t understand.

The first needs to be addressed carefully and thoughtfully, with new tools that automate discovery, visibility, posture management, and governance. The second approach of simply blocking sounds easier, but it will inevitably fail, deepening the shadows and increasing security risks.
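
As a small illustration of the discovery side, the sketch below inventories AI-related Python packages installed in a single environment. The list of package names is an assumption chosen for illustration; a real discovery tool would use a maintained catalog and scan many environments, containers, and hosts, not just the local interpreter.

```python
# Minimal sketch of environment discovery for Shadow AI visibility.
# Assumption: the set of "AI-related" package names below is illustrative
# only; it is not a complete or authoritative catalog.

from importlib import metadata

AI_PACKAGE_HINTS = {
    "torch", "tensorflow", "transformers", "langchain",
    "openai", "anthropic", "llama-cpp-python", "vllm",
}


def discover_ai_packages() -> dict[str, str]:
    """Return {package_name: version} for installed AI-related packages."""
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in AI_PACKAGE_HINTS:
            found[name] = dist.version
    return found


if __name__ == "__main__":
    inventory = discover_ai_packages()
    if inventory:
        print("AI/ML packages found in this environment:")
        for name, version in sorted(inventory.items()):
            print(f"  {name}=={version}")
    else:
        print("No known AI/ML packages detected in this environment.")
```

Even a simple inventory like this gives security teams something better than “just say no”: a factual picture of what is actually in use, which is the starting point for posture management and governance.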

At the end of the day, we need to foster innovation while providing the tools and frameworks that shine a light on the shadows, so we can all move forward safely and confidently.

For more information about how AppSOC addresses the challenges of Shadow AI, please see our additional blogs, web content, videos, and demos.