Amid growing concerns over the potential misuse of AI chatbots, Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, is urging the government to explore new laws that would hold individuals accountable for the output of AI bots they create or train.
Hall detailed his experiences with AI chatbots on the Character.AI platform in a recent op-ed for the Telegraph. In his experiments, user-created chatbots that were freely accessible on the platform generated messages mimicking terrorist rhetoric and recruitment pitches. One bot, created by an anonymous user, declared allegiance to the “Islamic State” and attempted to recruit Hall to the cause.
Character.AI, based in California, is reportedly seeking substantial new funding, even as Hall doubts the company’s ability to effectively monitor all of its chatbots for extremist content. The company’s terms of service explicitly prohibit terrorist and extremist content, and users must accept those terms before using the platform.
In response to Hall’s concerns, a Character.AI spokesperson emphasized the company’s commitment to user safety, saying it applies a range of training interventions and content moderation techniques to keep its models from producing harmful content.
Hall criticized the wider AI industry’s moderation efforts as ineffective at deterring users from creating bots that promote extremist ideologies. He contends that liability for such conduct should extend to the major tech platforms that host it, under terrorism and online safety laws updated for the AI era.
While Hall’s op-ed stops short of formal recommendations, it highlights the inadequacy of existing UK laws, such as the Online Safety Act 2023 and the Terrorism Acts of 2000 and 2006, in addressing problems specific to generative AI technologies and chatbots.
Similar debates are unfolding in the United States, where proposals to hold humans legally accountable for AI-generated content have drawn mixed reactions. Last year, the U.S. Supreme Court declined to narrow Section 230 protections, and some observers have argued that excluding AI-generated content from those protections could discourage AI development, given the unpredictable, “black box” nature of large models.
As the intersection of AI and legal accountability continues to evolve, policymakers grapple with the challenge of regulating technology while fostering innovation.
