AI Models Ranked by Risk: Researchers Discover Surprising Variability

In Short:

Bo Li, an associate professor at the University of Chicago, advises consulting firms on the risks posed by AI models. He co-developed a benchmark that evaluates how well AI models comply with regulations and usage policies. The researchers found that government rules are less comprehensive than many companies’ own policies, and that models often fall short of even those. Understanding these risks is crucial for businesses deploying AI, especially in sensitive areas like customer service.


Bo Li, an associate professor at the University of Chicago who specializes in stress-testing AI models to expose their misbehavior, has become a sought-after resource for several consulting firms. These consultancies are increasingly focused less on how intelligent AI models are and more on the legal, ethical, and regulatory problems they may pose.

Collaboration with Industry Leaders

Li, together with researchers at several other universities, Virtue AI (a company Li co-founded), and Lapis Labs, has developed a taxonomy of AI risks, along with a benchmark that reveals how often different large language models (LLMs) break established rules. “We need some principles for AI safety, in terms of regulatory compliance and ordinary usage,” Li says.
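The article does not reproduce the taxonomy itself, but conceptually such a scheme maps broad risk areas to concrete prohibited behaviors drawn from regulations and usage policies. A minimal sketch of that shape follows; the category and behavior names are hypothetical placeholders, not the ones defined in the paper.

```python
# Hypothetical sketch of a hierarchical AI-risk taxonomy. The category and
# behavior names below are illustrative placeholders, not the paper's.
RISK_TAXONOMY = {
    "security": ["malware generation", "phishing assistance"],
    "privacy": ["doxxing", "reidentifying anonymized individuals"],
    "harmful_content": ["nonconsensual sexual imagery", "targeted harassment"],
}

def iter_risk_labels(taxonomy: dict[str, list[str]]):
    """Yield (category, behavior) pairs for every leaf in the taxonomy,
    e.g. to tag benchmark prompts with the rule they probe."""
    for category, behaviors in taxonomy.items():
        for behavior in behaviors:
            yield category, behavior
```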

Regulatory Analysis and Benchmarking

The researchers analyzed government AI regulations and guidelines in the United States, China, and the European Union, and reviewed the usage policies of sixteen major AI companies around the world. From this analysis they built AIR-Bench 2024, a benchmark that uses thousands of prompts to measure how different AI models handle specific risks. The results show, for instance, that Anthropic’s Claude 3 Opus ranks highly when it comes to refusing to generate cybersecurity threats, while Google’s Gemini 1.5 Pro ranks highly for avoiding the generation of nonconsensual sexual nudity.
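The article does not detail AIR-Bench’s scoring pipeline, but the general shape of a prompt-based safety benchmark is simple: send each risk-labeled prompt to a model and record whether it refuses. Below is a minimal sketch under stated assumptions: `query_model` is a hypothetical stand-in for whatever API is being tested, and the keyword-based refusal check is a crude placeholder for the judge models real benchmarks typically use.

```python
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm unable to")

def looks_like_refusal(response: str) -> bool:
    """Crude keyword heuristic; real benchmarks usually score with a judge model."""
    return response.strip().lower().startswith(REFUSAL_MARKERS)

def refusal_rates(prompts_by_risk: dict[str, list[str]], query_model) -> dict[str, float]:
    """Compute the fraction of prompts refused per risk category.

    prompts_by_risk maps each risk category to its test prompts (assumed
    non-empty); query_model is a hypothetical callable returning a reply.
    """
    rates = {}
    for risk, prompts in prompts_by_risk.items():
        refused = sum(looks_like_refusal(query_model(p)) for p in prompts)
        rates[risk] = refused / len(prompts)
    return rates
```

On harmful prompts, a higher refusal rate corresponds to a stronger safety score; that is the kind of measurement behind the Claude 3 Opus and Gemini 1.5 Pro results cited above.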

Model Performance Insights

At the other end of the scale, DBRX Instruct, a model developed by Databricks, scored worst across the board. When the model was released in March, Databricks said it would keep improving DBRX Instruct’s safety features.

Understanding the Risk Landscape

As AI sees wider deployment, understanding the risk landscape, along with the particular strengths and weaknesses of specific models, will be crucial for companies looking to deploy AI in a given market or use case. A company planning to use an LLM for customer service, for instance, may care more about a model’s propensity to produce offensive language when provoked than about its raw technical capability.

Regulatory Gaps and Developer Responsibilities

Li sees significant gaps between how AI is being developed and how it is being regulated. The research shows that government regulations tend to be less comprehensive than the policies companies set for themselves, which suggests there is room for regulation to be tightened. The analysis also indicates that many companies could do more to ensure their models are safe. “If you test some models against a company’s own policies, they are not necessarily compliant,” Li says. “This means there is a lot of room for them to improve.”

Ongoing Research and Future Directions

Other researchers are working to bring order to a messy and confusing AI risk landscape. Two MIT researchers recently introduced their own database of AI dangers, compiled from 43 different risk frameworks. Neil Thompson, a research scientist at MIT, noted that many organizations are still in the early stages of adopting AI and therefore need guidance on the potential perils.

As the field of AI develops, efforts to catalog and measure its risks will need to evolve as well. Li stresses the importance of examining emerging issues, such as the emotional impact of AI models. A recent analysis of Meta’s Llama 3.1 found that although the model is more capable than its predecessor, it is not significantly safer, an observation that reflects a broader trend. “Safety is not really improving significantly,” Li says.
