In a collaborative effort, 18 countries, including the United States, the United Kingdom, and Australia, have unveiled comprehensive global guidelines aimed at fortifying artificial intelligence (AI) models against potential tampering. The joint initiative underscores the imperative for AI companies to prioritize security throughout the design, development, launch, and monitoring phases of AI models.
Released on November 26, the 20-page document provides a roadmap for AI firms, emphasizing a “secure by design” approach in an industry where security considerations often take a backseat to the rapid pace of innovation. Its recommendations include maintaining tight control over the infrastructure supporting AI models, monitoring models for tampering throughout their lifecycle, and training staff on cybersecurity risks.
While the guidelines primarily offer general advice, they notably sidestep contentious issues in the AI realm, such as the regulation of image-generating models, concerns around deepfakes, and the ethical use of data in training models, a topic that has prompted copyright infringement claims against multiple AI firms.
U.S. Secretary of Homeland Security Alejandro Mayorkas emphasized the pivotal role of cybersecurity in shaping safe, secure, and trustworthy AI systems, characterizing the current stage in AI development as an “inflection point” with far-reaching consequences.
This collaborative effort follows other recent governmental initiatives, including the UK’s AI Safety Summit, where governments and AI firms worked toward a consensus on AI development. Meanwhile, the European Union is finalizing the details of its AI Act to regulate the sector, and U.S. President Joe Biden’s October executive order established standards for AI safety and security, despite opposition from industry groups citing potential constraints on innovation.
Alongside the United States, the United Kingdom, and Australia, the signatories include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore. Prominent AI firms, including OpenAI, Microsoft, Google, Anthropic, and Scale AI, also played a role in shaping the “secure by design” guidelines.
