In a groundbreaking move, the Financial Stability Oversight Council (FSOC), chaired by U.S. Treasury Secretary Janet Yellen, has highlighted the potential perils of artificial intelligence (AI) for the first time in its annual financial stability report, released on December 14.
While acknowledging the transformative potential of AI in boosting innovation and efficiency within financial institutions, the FSOC emphasized the need for heightened supervision due to the breakneck pace of technological advancements.
The report specifically outlined AI-related risks, pinpointing concerns such as cybersecurity and model risk. It underscored the need for both companies and regulators to deepen their knowledge and capabilities so they can monitor how AI evolves and is applied, and identify emerging risks early.
A critical concern highlighted in the report is the complexity of certain AI tools, which makes them difficult for institutions to understand and monitor. The FSOC warned that without such an understanding, companies and regulators might inadvertently overlook biased or inaccurate results generated by AI models.
Moreover, the report shed light on the increasing reliance of AI tools on expansive external data sets and third-party vendors, raising significant concerns related to privacy and cybersecurity.
Regulators, including the U.S. Securities and Exchange Commission (SEC), are already examining firms’ use of AI. The White House has also taken action, issuing an executive order aimed at addressing and mitigating AI risks.
Notably, global figures such as Pope Francis, Elon Musk, and Steve Wozniak have voiced apprehensions about the rapid progress of AI. Pope Francis, in a letter on December 8, called for an international treaty to ethically regulate AI development, cautioning against the potential emergence of a “technological dictatorship” in the absence of proper controls.
Tech leaders, including Musk and Wozniak, echoed these concerns in March 2023, signing an open letter urging a “pause” in the development of AI systems more powerful than GPT-4. Their collective emphasis was on the profound societal and humanitarian risks posed by AI advancements.
