Thinking AI Security: Understanding the Shared Responsibility Model

Providing clarity on shared responsibility across the AI usage and applications

Willy Leichter

October 16, 2024

Thinking AI Security: Understanding the Shared Responsibility Model

Subscribe to our Blogs

Get weekly updates on the latest industry news, thought leadership, and other security topics in our blogs.

The adoption rate of artificial intelligence (AI) across industries is unprecedented. Soon, almost every application will incorporate some form of AI integration or AI-generated code. We’re already witnessing AI’s impact in fields such as copywriting, translation, medical diagnostics, legal summarization, infrastructure management, fraud detection, and many more yet to be imagined.

However, this rapid, unmanaged growth has led to significant challenges. Organizations are increasingly concerned about visibility, security, and governance to prevent AI from becoming a vector for cyberattacks and data leaks. Another outcome of this fast-paced adoption is a general confusion on how to begin addressing AI security. As end-users explore tools like ChatGPT, businesses roll out copilots, and data scientists rush to demonstrate new concepts, security and compliance teams are left navigating how to establish controls that promote AI confidence without hindering innovation.

To help address these challenges, we’re launching a series of blogs with practical guidance on getting started in AI security.

AI Builders vs. Consumers

For those not deeply embedded in AI technology, tools like ChatGPT introduced an exciting new world, offering immediate access to generative AI capabilities powered by large language models (LLMs). However, the rapid adoption of these SaaS tools by millions has raised concerns about “Shadow AI,” mirroring the “Shadow IT” issues that emerged when SaaS became widespread in the 2010s.

AI Runtime Defense

Most concerns around AI involve potential data leaks, inappropriate content, bias, or inaccurate responses. Addressing these issues will require a new generation of tools to analyze prompts and responses, blocking, masking data, or coaching users as needed. Real-time or retrospective data access models are in development, including API calls to SaaS providers, code insertion, or inline proxies—similar to today’s web or email gateways. Each approach has its own set of pros and cons, which we’ll explore in upcoming blogs.

 AI Application Builders

As businesses and developers recognize AI’s potential, they’re building or customizing models to enhance specific applications. To maintain a competitive edge, organizations are training models in their unique environments or constructing entirely new AI infrastructures.

The DevOps ecosystem now includes MLOps and LLMOps, with tools from Databricks, AWS, Microsoft, Google, and others providing environments for building AI applications, introducing LLMs, training or fine-tuning models, and using retrieval-augmented generation (RAG) systems to enrich LLM outputs with relevant data outside their training scope.

 AI Shared Responsibility Models

The SaaS boom introduced shared responsibility models, clarifying that even with cloud-hosted infrastructure, organizations are still accountable for securing their data, access controls, and more. Microsoft and others have updated this model to outline shared responsibilities across AI usage, applications, and platforms.

AI Usage: This involves using SaaS services from external AI providers, where end-users have minimal control over model fine-tuning or application customization. Like other SaaS services, this doesn’t absolve organizations of responsibility for managing access, identity, and data, ensuring that sensitive or regulated data remains secure. Runtime controls are essential at this level.

  • AI Applications: Many organizations are integrating external AI tools with platforms like Salesforce, Workday, or internal sources. While setting up these API connections is simple, they often lack inherent security and can expose new vulnerabilities. This integration level connects legacy applications to the AI landscape, demanding heightened security and governance.
  • AI Platforms: At the platform level, AI developers bear the most responsibility for the security of models, training data, and sometimes even the AI compute infrastructure. The “bring-your-own” model approach is common here, as seen on platforms like Hugging Face, which offer extensive models and datasets with minimal security guarantees. This flexibility fosters innovation but requires diligent monitoring and governance to prevent security incidents.

Next Steps

In following blogs, we’ll introduce the AppSOC AI Security & Governance platform's three main modules that address various layers of AI security:

  • AI Discovery: Detection tools, integrated with LLMOps platforms, provide visibility and governance for AI assets, including models, datasets, clusters, plugins, API connectors, and more.
  • AI Security Posture Management: With deep integration into AI platforms (such as Azure AI, Databricks, AWS SageMaker, Google Vertex), this module ensures secure setup and deployment, automatically detecting misconfigurations, controlling access, protecting against data leaks, and managing vulnerabilities through automated remediation workflows integrated with Jira, ServiceNow, Slack, or PagerDuty.
  • AI Model Scanning: This includes static and dynamic scanning of models, notebooks, and applications, identifying vulnerabilities in formats, libraries, and third-party API calls.
  • AI Runtime Defense: This module provides tools to monitor prompts and responses, intercepting traffic through SDKs, application agents, provider APIs, or proxies. These tools detect threats like prompt injection, jailbreaking, and malicious code, enforcing policies to block, redact, or mask sensitive data.

Through this blog series, we’ll explore practical solutions to help you safeguard your AI initiatives, starting with foundational tools for AI discovery, posture management, model scanning, and runtime defense. By implementing these layers of protection, organizations can confidently embrace AI, knowing they have the right security framework to mitigate risks and unlock the full potential of artificial intelligence. Stay tuned as we continue this journey, offering insights and tools to help secure your path to AI adoption. Please contact us at any time to schedule a demo or our AI governance and application security platform.