The United States National Institute of Standards and Technology (NIST) and the Department of Commerce are taking a significant step toward AI safety by inviting members to join their newly established Artificial Intelligence Safety Institute Consortium. In a document released on November 2, NIST announced the consortium's formation and officially requested applications from qualified individuals and organizations.
NIST’s objective in forming this consortium is to collaborate with non-profit organizations, universities, government agencies, and technology companies to tackle the challenges associated with AI development and deployment. The focus is on ensuring a human-centered approach to AI safety and governance, with the goal of creating and implementing specific policies and measurements.
The members of this consortium will play a vital role in a range of activities, including developing measurement and benchmarking tools, offering policy recommendations, and conducting red-teaming exercises, psychoanalysis, and environmental analyses.
These initiatives respond to a recent executive order from President Joseph Biden, which established six new standards for AI safety and security. While these standards are not yet legally enforceable, they represent a step toward creating specific policies for AI governance in the United States.
Compared to many European and Asian countries, the United States has lagged in implementing comprehensive AI policies covering user and citizen privacy, security, and potential unintended consequences. President Biden's executive order and the formation of the Safety Institute Consortium are important steps toward closing that gap. However, there is still no clear timeline for implementing AI-related laws in the U.S., and existing regulations are often seen as inadequate for the rapidly evolving AI sector.