The UK Code of Practice for AI Cyber Security: What it Means and Why it’s Important

This voluntary standard defines key baseline security standards across AI systems

The UK Code of Practice for AI Cyber Security: What it Means and Why it’s Important

As AI adoption grows, both its benefits and risks are becoming more apparent. Governments worldwide are drafting guidelines and, in some cases, enacting legislation to regulate AI security and usage. In response to these challenges, the UK government has introduced a Code of Practice for the Cyber Security of AI, establishing baseline security standards. While this voluntary framework lacks the legal weight of the EU AI Act, it is among the first government-backed security guidelines specifically for AI. Organizations like OWASP and MITRE have developed AI risk and control frameworks, but U.S. government efforts have stalled due to disruptions affecting agencies like NIST and CISA.

This initiative highlights the UK's commitment to securing AI development and deployment without compromising technological progress. At AppSOC, we prioritize standards-based security, embedding robust controls into our AI Security Platform based on multiple emerging frameworks. The new UK guidelines define 13 security principles, 10 of which align directly with AppSOC’s built-in capabilities—all technical principles except for awareness, training, and human interactions. Below is an overview of these principles and how AppSOC supports each one. At the end, we also compare the UK guidelines to the EU AI Act.

Overview of the UK's AI Cyber Security Code of Practice

Published in January 2025, the Code of Practice addresses the distinct cybersecurity risks associated with AI, such as data poisoning, model obfuscation, and indirect prompt injection. It serves as an addendum to the existing Software Code of Practice, emphasizing the need for AI-specific security measures. The Code is structured around several key principles designed to guide organizations in securing their AI systems:

Principle 1: Raise Staff Awareness of AI Security Threats and Risks

Organizations should educate their staff about potential AI security threats to foster a culture of security awareness.

Principle 2: Design AI Systems for Security as well as Functionality and Performance

Security considerations should be integral to the design process of AI systems, ensuring that functionality and performance do not overshadow security requirements.

AppSOC integrates security into the full MLOps/LLMOps pipeline, ensuring a balance between security, functionality, and performance.

Principle 3: Evaluate the threats and manage the risks to your AI system

Organizations should conduct comprehensive threat modeling to identify and mitigate potential risks to their AI systems.

AppSOC takes a risk-based approach to managing AI security, incorporating threat intelligence, exploitability, and business context, enabling organizations to identify and prioritize the most critical risks.

Principle 4: Enable human responsibility for AI systems

When designing an AI system, Developers and/or System Operators should incorporate and maintain capabilities to enable human oversight.

AppSOC provides visibility into AI tools, processes, and risks, enabling oversight and governance for enterprise AI applications.

Principle 5: Identify, Track, and Protect Your AI System’s Assets and Dependencies Organizations should maintain an inventory of AI assets and their dependencies to safeguard them effectively.

AppSOC provides comprehensive tools to discover, inventory and monitor AI assets and dependencies, ensuring visibility and protection.

Principle 6: Secure Development and Training Environments

Development and testing environments should be secured to prevent unauthorized access and potential compromises.

AppSOC integrates directly with AI development tools to harden them against misconfiguration and other errors, testing LLMs, and detecting unauthorized access risks.

Principle 7: Secure the software supply chain

Organizations should assess and manage risks arising from the use of third-party AI components to ensure overall system security.

AppSOC provides full stack visibility into supply chains for AI models, datasets, and other assets, along with third-party libraries and code that can deliver malware. No other AI security platform integrates AI and application supply chain security.

Principle 8: Document Your Data, Models, and Prompts

Organizations should maintain comprehensive documentation of the data used, AI models developed, and prompts applied to ensure transparency and security.

AppSOC provides automated discovery, and documents data lineage, model changes, and prompt modifications, ensuring organizations have clear records for audit and security compliance.

Principle 9: Conduct Appropriate Testing and Evaluation

AI systems and models must undergo rigorous testing and evaluation to detect vulnerabilities, biases, and performance issues before deployment.

AppSOC includes built-in AI security testing, Red Teaming, adversarial robustness assessments, and bias detection tools to help organizations validate the security and reliability of their AI models.

Principle 10: Communication and Processes Associated with End-users

Organizations should establish clear communication channels to inform end-users and affected entities about AI system behaviors, risks, and changes.

While AppSOC focuses on technical security, it also includes reporting and alerting functionalities to support organizations in maintaining transparency and trust with stakeholders.

Principle 11: Maintain Regular Security Updates, Patches, and Mitigations

AI systems should be regularly updated with security patches and mitigations to address emerging vulnerabilities.

AppSOC automates security management and provides real-time alerts for new vulnerabilities, ensuring organizations can quickly apply necessary mitigations.

Principle 12: Monitor Your System’s Behavior

Continuous monitoring of AI systems is essential to detect anomalies, security incidents, and unexpected behavior.

AppSOC delivers real-time monitoring, anomaly detection, and logging to track system behavior and flag security threats before they escalate.

Principle 13: Ensure Proper Data and Model Disposal

Organizations should implement secure data deletion and model retirement processes to prevent unauthorized access to outdated AI assets.

AppSOC keep an inventory of all AI assets and can send out alerts for models and data that are obsolete or have been retired.

Comparing the UK’s AI Cyber Security Code with the EU AI Act

While the UK’s AI Cyber Security Code focuses specifically on cybersecurity principles for AI development and deployment, the EU AI Act takes a broader regulatory approach to AI governance. The key differences between these frameworks include:

  • Scope: The UK’s Code of Practice is a set of voluntary security guidelines primarily aimed at AI developers and operators. In contrast, the EU AI Act is a comprehensive, legally binding framework that regulates AI systems based on their risk levels, imposing strict obligations on high-risk AI applications.
  • Focus on Cybersecurity: The UK’s guidelines prioritize the technical security aspects of AI, ensuring resilience against cyber threats, data manipulation, and system vulnerabilities. The EU AI Act, while addressing security concerns, also emphasizes ethical considerations, such as fairness, bias mitigation, and fundamental rights protection.
  • Regulatory Enforcement: Compliance with the UK’s AI Cyber Security Code is encouraged but not mandatory, whereas the EU AI Act enforces penalties for non-compliance, with fines reaching up to 6% of global annual turnover for severe violations.
  • AI System Categorization: The EU AI Act classifies AI systems into four risk categories—unacceptable, high-risk, limited risk, and minimal risk—each with corresponding compliance requirements. The UK’s approach does not classify AI systems by risk but instead provides overarching cybersecurity principles applicable across different AI use cases.
  • Business Impact: Companies operating in the UK can adopt the Code of Practice to strengthen their AI security posture without facing immediate legal repercussions. However, organizations doing business in the EU must comply with the AI Act’s stringent regulations, especially if they develop or deploy high-risk AI systems.

Both frameworks play a crucial role in shaping AI security and governance. The UK’s approach fosters a flexible, industry-driven model for AI cybersecurity, whereas the EU’s AI Act enforces a comprehensive regulatory structure with mandatory compliance.

Conclusion

The UK's Code of Practice for the Cyber Security of AI represents a significant step towards establishing clear and effective guidelines for AI security. By aligning with these principles, organizations can enhance the security of their AI systems, fostering trust and reliability in AI technologies. AppSOC is dedicated to supporting organizations in this endeavor, offering solutions that align with the Code's technical principles to ensure secure AI development and deployment.