Keeping Up with Global Regulations Around AI: A Complicated Map

Compliance with these regulations will be a challenging moving target

As the use of AI explodes and new AI applications proliferate, a growing list of regulations, frameworks, and recommendations around the globe is creating a complex regulatory environment. Previous regulatory waves, from HIPAA through breach-disclosure laws to the GDPR, have had a substantial impact on how organizations manage security, privacy, and compliance. Just keeping up with all these regulations will be challenging, and establishing what’s required for compliance will be a moving target for some time.

This blog explores the emerging global regulations around AI, including key frameworks, regional approaches, and the potential impacts on businesses and innovation.

The Need for AI Regulations

AI's rapid advancement brings numerous benefits, such as improved healthcare, efficient supply chains, and enhanced customer experiences. However, it also poses significant risks, including bias in decision-making, privacy infringements, and potential job displacement. These risks necessitate a regulatory approach that balances innovation with safeguards against misuse.

Key Global Frameworks and Principles

1. OECD Principles on AI (2019):

The Organisation for Economic Co-operation and Development (OECD) adopted the first intergovernmental standard on AI in May 2019. These principles emphasize AI’s ethical use, transparency, robustness, and accountability. They advocate for AI systems that respect human rights and democratic values, providing a foundational framework that many countries use as a reference.

2. G20 AI Principles:

Following the OECD’s lead, the G20 adopted a set of AI principles aimed at fostering trust in AI technologies while promoting innovation. These principles focus on inclusive growth, sustainable development, and well-being, encouraging the use of AI for social good.

Regional Approaches to AI Regulation

European Union (EU)

The EU is at the forefront of AI regulation with its comprehensive approach to creating a trustworthy AI environment. Its proposed Artificial Intelligence Act is one of the most ambitious regulatory frameworks to date. Key aspects include:

  • Risk-Based Classification: AI systems are classified into four tiers: minimal risk, limited risk, high risk, and unacceptable risk. High-risk systems, like those used in critical infrastructure or law enforcement, will be subject to stringent requirements; a rough sketch of this tiering appears after the list.
  • Mandatory Requirements: High-risk AI systems must comply with requirements related to data governance, documentation, transparency, human oversight, and robustness.
  • Prohibited Practices: Certain AI applications, like social scoring by governments and real-time biometric identification in public spaces, are banned due to their intrusive nature.
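
To make the tiering concrete, below is a minimal, hypothetical Python sketch of how an organization might tag its own AI use cases against the Act’s four tiers. The tier names mirror the Act, but the use-case mappings and obligation summaries are illustrative assumptions, not the legal text.

```python
# Hypothetical sketch: tagging internal AI use cases against the EU AI Act's
# four risk tiers. Tier names mirror the Act; the mappings and obligation
# summaries are illustrative assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring)
    HIGH = "high"                  # allowed, but with strict obligations
    LIMITED = "limited"            # transparency duties (e.g., disclose chatbots)
    MINIMAL = "minimal"            # no extra obligations

# Illustrative internal inventory: use case -> assumed tier.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "resume screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> list[str]:
    """Very rough summary of what each tier might imply for a deployer."""
    return {
        RiskTier.UNACCEPTABLE: ["do not deploy"],
        RiskTier.HIGH: ["data governance", "documentation",
                        "human oversight", "robustness testing"],
        RiskTier.LIMITED: ["disclose AI use to end users"],
        RiskTier.MINIMAL: [],
    }[tier]

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

Keeping such an inventory as data makes it straightforward to re-run the mapping as the Act’s final text and accompanying guidance evolve.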

United States

The United States adopts a more decentralized approach to AI regulation. While there is no overarching federal AI law, several initiatives are shaping AI governance:

  • The National AI Initiative Act (2020): This act coordinates AI research, development, and deployment across various federal agencies, emphasizing innovation and leadership in AI.
  • Blueprint for an AI Bill of Rights (2022): Introduced by the White House Office of Science and Technology Policy, this blueprint outlines five principles to protect civil rights in the AI era: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback.

China

China aims to become a global leader in AI by 2030 and has been proactive in developing AI regulations:

  • New Generation AI Development Plan (2017): This plan outlines China’s strategy for AI development, emphasizing advancements in core technologies and the integration of AI across various sectors.
  • AI Ethics Guidelines: China’s guidelines emphasize the principles of fairness, transparency, and accountability, with a strong focus on aligning AI development with national interests and social stability.

Other Regions

  • Canada: The Canadian government’s Directive on Automated Decision-Making sets requirements for transparency, accountability, and impact assessment of AI systems used in federal operations.
  • Japan: Japan’s AI Strategy 2019 focuses on creating a human-centered society through AI, emphasizing ethical considerations, human rights, and collaboration between government, industry, and academia.

Impact on Businesses and Innovation

Emerging AI regulations present both challenges and opportunities for businesses. Compliance with diverse regulatory frameworks can be complex and costly, especially for multinational companies operating in multiple jurisdictions. However, clear regulations also provide a predictable environment for innovation, helping to build public trust in AI technologies.

Businesses need to stay informed about regulatory developments and proactively adapt their AI strategies. This includes:

  • Investing in Compliance: Establishing robust data governance and ethical AI practices to meet regulatory requirements.
  • Collaborating with Regulators: Engaging with policymakers to contribute to the development of balanced regulations that consider industry perspectives.
  • Adopting Ethical AI Frameworks: Implementing ethical AI frameworks and best practices to enhance transparency, fairness, and accountability.

Deploying AI Systems Securely: Best Practices from the CSI Report

The complexity and potential vulnerabilities of AI systems necessitate a secure deployment approach. The joint Cybersecurity Information Sheet (CSI) “Deploying AI Systems Securely” offers best practices for organizations to deploy and operate AI systems securely, emphasizing the need for robust governance, secure configurations, and continuous protection measures.

Key recommendations include:

  • Managing Deployment Environment Governance: Ensuring the AI deployment aligns with organizational IT standards and assessing the threat landscape to document applicable risks.
  • Ensuring Robust Deployment Environment Architecture: Establishing security protections for IT environment boundaries, applying secure by design principles, and using Zero Trust frameworks to manage risks.
  • Hardening Deployment Environment Configurations: Applying existing security best practices to the deployment environment, such as sandboxing ML models, configuring firewalls, and using phishing-resistant multifactor authentication.
  • Continuous Protection and Monitoring: Implementing detection and response capabilities, securing exposed APIs, and monitoring model behavior to quickly identify and mitigate potential security breaches; a minimal endpoint-hardening sketch follows this list.
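
As one concrete illustration of the “securing exposed APIs” and monitoring recommendations, here is a minimal, hypothetical sketch of a hardened model-inference endpoint. FastAPI is assumed purely for illustration, and the endpoint name, token scheme, and run_model stub are placeholders; the CSI report does not prescribe a specific stack.

```python
# Hypothetical sketch of a hardened inference endpoint (FastAPI assumed);
# illustrates token auth and boundary logging, not the CSI report's own code.
import hmac
import logging
import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from pydantic import BaseModel

app = FastAPI()
bearer = HTTPBearer()
logger = logging.getLogger("inference-audit")

API_TOKEN = os.environ["INFERENCE_API_TOKEN"]  # secret injected at deploy time

class PredictRequest(BaseModel):
    text: str

def run_model(text: str) -> str:
    """Stand-in for the real model call (an assumption for this sketch)."""
    return "positive" if "good" in text.lower() else "negative"

def authorize(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> None:
    # Constant-time comparison avoids leaking token prefixes via timing.
    if not hmac.compare_digest(creds.credentials, API_TOKEN):
        raise HTTPException(status_code=403, detail="invalid token")

@app.post("/predict")
def predict(req: PredictRequest, _: None = Depends(authorize)) -> dict:
    # Log at the trust boundary so anomalous usage patterns can be detected.
    logger.info("predict called, input_chars=%d", len(req.text))
    return {"label": run_model(req.text)}
```

In production this would additionally sit behind TLS, rate limiting, and the phishing-resistant MFA and Zero Trust controls the report recommends; the point here is simply that authentication and boundary logging belong at the model’s API surface.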

AI Security Frameworks from OWASP and MITRE

AI security is a rapidly evolving field, and frameworks from organizations like OWASP and MITRE provide essential guidelines for securing AI systems.

1. OWASP AI Security and Governance Checklist:

The Open Worldwide Application Security Project (OWASP) provides a comprehensive checklist for securing and governing AI systems. It covers best practices for securing data, managing vulnerabilities, and ensuring transparency in AI operations.
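
One pragmatic way to operationalize a checklist like this is to encode it as data, so that each review produces an auditable pass/fail report. The sketch below shows that generic pattern; the item identifiers and descriptions are illustrative placeholders, not the official OWASP checklist wording.

```python
# Hypothetical pattern: a governance checklist encoded as data, so reviews
# produce an auditable pass/fail report. Item texts are placeholders, not
# OWASP's official wording.
from dataclasses import dataclass

@dataclass
class CheckItem:
    item_id: str
    description: str
    passed: bool
    evidence: str = ""

checklist = [
    CheckItem("GOV-01", "AI use cases inventoried and risk-ranked", True,
              "inventory spreadsheet v3"),
    CheckItem("SEC-01", "Training data sources vetted for provenance", False),
    CheckItem("TRX-01", "Users notified when interacting with AI", True,
              "UI disclosure banner"),
]

failed = [c for c in checklist if not c.passed]
print(f"{len(checklist) - len(failed)}/{len(checklist)} checks passing")
for c in failed:
    print(f"OPEN: {c.item_id} - {c.description}")
```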

2. MITRE ATLAS:

MITRE's Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) matrix identifies potential threats and attack vectors specific to AI systems. It offers a structured approach to understanding and mitigating risks associated with AI, focusing on adversarial machine learning and other AI-specific vulnerabilities.
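
To make the adversarial-ML risk concrete, the toy example below runs an FGSM-style evasion attack against a simple logistic model: a small perturbation aligned against the model’s weights flips its score. This is a generic illustration of one attack class ATLAS catalogs, not code from ATLAS itself; the model, input, and epsilon are assumptions.

```python
# Toy FGSM-style evasion: a small input perturbation flips a logistic model's
# score. A generic illustration of an ATLAS-catalogued attack class; the
# model, input, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights
x = rng.normal(size=8)   # a benign input

def score(v: np.ndarray) -> float:
    """Probability of the positive class under the toy logistic model."""
    return float(1.0 / (1.0 + np.exp(-(v @ w))))

# The gradient of the logit with respect to the input is just w, so a small
# step against sign(w) (or with it) pushes the score toward the other class.
eps = 0.6
direction = -np.sign(w) if score(x) > 0.5 else np.sign(w)
x_adv = x + eps * direction

print(f"clean score: {score(x):.3f}  adversarial score: {score(x_adv):.3f}")
```

Even this toy case shows why monitoring model behavior matters: the perturbed input looks nearly identical to the clean one, so defenses must watch score distributions and usage patterns, not just raw inputs.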

Conclusion

The landscape of AI regulation is rapidly evolving, with significant regional variations reflecting different cultural, political, and economic priorities. As AI continues to permeate various aspects of life, the need for comprehensive, balanced, and forward-looking regulations becomes increasingly critical. By fostering innovation while safeguarding against potential harms, these emerging regulations aim to ensure that AI serves humanity's best interests.

Staying ahead in this dynamic regulatory environment requires businesses, policymakers, and technologists to work collaboratively, ensuring that AI technologies are developed and deployed in ways that are ethical, transparent, and beneficial for all. Incorporating best practices from frameworks like those provided by OWASP and MITRE, and adhering to guidelines for secure AI deployment as outlined in the CSI report, can significantly enhance the security and resilience of AI systems.

Through proactive compliance, collaboration with regulators, and the adoption of robust security measures, organizations can navigate the complex regulatory landscape while harnessing the transformative potential of AI responsibly and ethically.