The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, is soliciting input from AI companies and the general public through a request for information. The initiative follows the recent presidential executive order emphasizing the secure and responsible development and use of artificial intelligence (AI).
The request is open for public input until February 2, 2024; NIST aims to collect insights that will inform testing procedures for ensuring the safety of AI systems. Secretary of Commerce Gina Raimondo said the effort responds to President Joe Biden’s October executive order, which instructs NIST to formulate guidelines for evaluation, red-teaming, consensus-based standards, and the establishment of testing environments for assessing AI systems. The overarching goal is to give the AI community a framework for developing AI safely, reliably, and responsibly.
NIST’s request for information focuses on generative AI risk management and on mitigating the risks of AI-generated misinformation. Generative AI, which can produce text, photos, and videos in response to open-ended prompts, has sparked both excitement and concern, including fears of job displacement, electoral disruption, and the possibility that the technology could surpass human capabilities with unforeseen consequences.
In addition to generative AI, the request seeks input on where “red-teaming” is most valuable in AI risk assessment and on establishing best practices for it. Red-teaming, a technique derived from Cold War simulations, has a group known as the red team simulate adversarial scenarios or attacks to expose vulnerabilities and weaknesses in a system, process, or organization. The approach is widely used in cybersecurity to uncover potential risks.
NIST’s move follows the first U.S. public AI red-teaming evaluation event, held in August at a cybersecurity conference organized by AI Village, SeedAI, and Humane Intelligence. In November, NIST announced the formation of a new AI consortium and solicited applicants with relevant credentials. The consortium aims to develop and implement policies and measures that ensure U.S. lawmakers adopt a human-centered approach to AI safety and governance.
