* Watch the Video Blog *
This week marks the one-year anniversary of the White House Executive Order on AI (October 30, 2023), and the Biden Administration has released a new Memorandum on Advancing the United States’ Leadership in Artificial Intelligence, which sets more specific objectives around national security and the safety, security, and trustworthiness of artificial intelligence.
While government agencies globally have been moving relatively quickly to draft AI guidelines and regulations, the pace of AI adoption across many sectors has exploded. In the last 18 months, 80% of companies began using or experimenting with AI technologies.
As AI systems have become more widespread, they have also become a larger target for cyber threats. Recently, Forbes reported that over 3,000 examples of malware were detected on the massive Hugging Face repository of open-source AI models. The need for stronger AI governance and security measures in both the public and private sectors has never been clearer.
With this backdrop, let’s compare the original Executive Order with the new Memorandum to gauge what progress has been made, and where considerable work still needs to be done.
Recap of the 2023 Executive Order on AI
The October 2023 White House order laid the groundwork for a comprehensive national strategy around AI, setting the stage for innovation while also addressing its risks. The executive order focused on three key areas:
- AI Safety Standards: Agencies were tasked with developing clear safety guidelines for AI systems, ensuring transparency and risk mitigation. This includes the requirement for AI developers to share safety testing data with the government, which would help identify and address potential vulnerabilities.
- Ethical AI Development: The 2023 order also emphasized responsible AI development, encouraging collaborations between public and private sectors to advance ethical standards while fostering innovation.
- Workforce Development: Recognizing the transformative power of AI, the executive order called for initiatives to train the American workforce in AI skills to maintain the U.S.’s competitive edge in this domain.
2024 National Security Memorandum: Focus on Defense and Security
Building on the 2023 order, the October 2024 memorandum takes a more focused approach, particularly addressing the security implications of AI. As AI systems evolve, they are playing increasingly central roles in military applications, from cybersecurity to counterintelligence. The NSM specifically emphasizes the need to adopt cutting-edge AI capabilities within U.S. defense agencies to maintain a competitive advantage over adversaries like China, which has made significant strides in AI-driven military technologies.
The 2024 AI Memorandum’s key goals include:
- Accelerated AI Adoption in National Security: The U.S. government is directing its national security agencies to incorporate AI into their operations, with an emphasis on defense and intelligence. This includes increasing access to frontier AI systems that could be leveraged to enhance cybersecurity and military intelligence.
- Ethical and Responsible AI Use in Defense: Much like the original executive order, the NSM reiterates the importance of ethical AI use, ensuring that AI applications respect civil liberties and do not perpetuate biases or violate privacy rights.
- Public Trust and Safeguards: In response to concerns raised by civil society groups, the NSM introduces frameworks for transparency and governance in AI use, particularly in law enforcement and intelligence contexts. This aims to build public trust while preventing the misuse of AI technologies.
A Year of Progress: What’s Been Achieved?
In the year since the original executive order, progress has been made in embedding AI into federal operations. Key milestones include:
- Strengthened AI Governance: Agencies have developed clear frameworks for AI use, focusing on safety, transparency, and ethical standards. These frameworks are designed to mitigate the risks associated with AI deployment in critical sectors like national security.
- AI in National Security: The U.S. has made strides in adopting AI within its defense agencies, ensuring that the technology is being used to enhance national security capabilities, particularly in counteracting adversarial AI threats from nations like China.
- Collaboration Between Public and Private Sectors: There has been increased collaboration between AI innovators and the federal government, with partnerships focused on ensuring AI’s safe and responsible use. These partnerships have been instrumental in pushing the boundaries of AI innovation while addressing its potential risks.
Ongoing Challenges
Despite these advancements, challenges remain. AI’s rapid evolution means that regulatory frameworks are still catching up. Cybercriminals are increasingly exploiting vulnerabilities in AI systems, and many organizations are still unprepared to deal with these risks. According to recent studies, over 90% of organizations using AI admit they are not fully equipped to manage its security implications.
On top of this, international competition continues to heat up. China’s growing AI capabilities, particularly in the military domain, underscore the need for the U.S. to accelerate its AI adoption and strengthen its defenses. The 2024 NSM reflects this urgency by directing national security agencies to incorporate AI more rapidly and effectively into their operations.
Looking Ahead
While progress has clearly been made, the journey is just beginning. The explosive growth of AI has brought with it unprecedented opportunities and challenges, particularly in the realm of national security. The 2024 NSM highlights the U.S.’s commitment to staying ahead in the AI race, ensuring that the technology is used responsibly while addressing the emerging cyber threats that exploit AI vulnerabilities.
Moving forward, the focus will need to remain on balancing innovation with security. As AI continues to reshape industries and national defense, the U.S. must continue to lead in both the development of AI technologies and the creation of robust frameworks to safeguard against their misuse.
At AppSOC, we are committed to developing effective and practical tools that enable enterprises to innovate with AI while ensuring the governance and security of AI systems through discovery, model scanning, AI security posture management, and runtime defense.