MIT is tracking the potential dangers posed by AI, and it has found that most adoption frameworks designed to promote safe use of the technology overlook key risks.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have joined forces with colleagues at the University of Queensland, the Future of Life Institute, KU Leuven, and Harmony Intelligence to create the AI Risk Repository.
This is a database of more than 700 AI-related risks that researchers identified by examining 43 existing frameworks. From these, the researchers have developed taxonomies that classify the risks.
Dr. Neil Thompson, head of the MIT FutureTech Lab and one of the lead researchers on the project, stated, “The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database.”
He added, “It is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches.”
The AI Risk Repository was created because researchers observed that individuals utilizing AI were identifying some, but not all, risks. The goal was to consolidate existing research, analysis, and safety work into one resource for academics, policymakers, and businesses.
Dr. Peter Slattery, an incoming postdoc at the MIT FutureTech Lab and current project lead, expressed concern: “Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots.”
The AI Risk Repository team cautioned that despite such efforts, the database may still not capture every potential risk, owing to researchers’ biases, emerging challenges, and domain-specific issues.
Thompson elaborated, “We are starting with a comprehensive checklist to help us understand the breadth of potential risks. We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”
How the MIT AI Risk Repository Works
The database categorizes risks by cause (how they occur), domain (such as misinformation), and subdomain (such as false or misleading information).
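To make that categorization concrete, here is a minimal, hypothetical sketch of how a single entry in such a repository might be represented; the field names and example values are illustrative assumptions, not the repository’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one repository entry; field names and values
# are illustrative only, not the AI Risk Repository's real schema.
@dataclass
class RiskEntry:
    cause: str      # how the risk occurs, e.g. attributed to the AI system or to humans
    domain: str     # broad category, e.g. "Misinformation"
    subdomain: str  # finer-grained category, e.g. "False or misleading information"
    source: str     # the framework or paper the risk was extracted from

example = RiskEntry(
    cause="AI system, post-deployment",
    domain="Misinformation",
    subdomain="False or misleading information",
    source="Example framework (illustrative)",
)
print(example.domain, "->", example.subdomain)
```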
Most of the risks analyzed were attributed to AI systems (51%) rather than humans (34%), and most emerged after deployment (65%) rather than during development (10%), suggesting that the machine’s behavior was often unforeseen.
The frameworks were most likely to address risks in system safety, socioeconomic or environmental harms, discrimination and toxicity, privacy/security, and malicious actors and misuse. They were less likely to consider human-computer interaction (41%) or misinformation (44%).
On average, these frameworks mentioned only 34% of the 23 subdomains, and a quarter of the frameworks covered just a fifth of potential sources of risk. No single risk assessment document considered all 23 subdomains, and the one with the most comprehensive coverage only managed 70%.
This indicates that assessments, frameworks, and other studies of the dangers posed by AI are failing to consider all aspects of risk.
Soroush Pour, CEO and co-founder of Harmony Intelligence, an AI safety evaluations and red-teaming company, remarked: “It becomes much more likely that we miss something by simply not being aware of it.”
Next, the project will have external experts rank the risks and apply those rankings to public documents from AI developers and companies, which will help determine whether companies are doing enough to mitigate risks in AI development.