
Enhancing AI Reliability Through Game Theory


In Short:

Researchers faced a tough challenge with the game of Diplomacy because of its complexity. Meta’s AI program Cicero achieved human-level play in the game. Building on that work, a team developed the consensus game, in which a generator must answer a question correctly or incorrectly depending on a coin toss, and a discriminator judges its response. The game rewards agreement between the generator and discriminator, encouraging answers consistent with each player’s initial beliefs. The approach aims to make language models’ responses more reliable.


AI researchers faced a substantial challenge with the game of Diplomacy, known for its complexity and its requirement that seven players negotiate with one another. Meta’s AI program Cicero achieved “human-level play” over 40 games in 2022, placing in the top 10 percent among human participants.

During the project, Jacob, who worked with the Meta team, noticed that Cicero relied on a language model to generate its dialogue with the other players. This insight led him to explore how to build a better language model for gameplay.

Consensual Interactions

In 2023, Jacob collaborated with Yikang Shen, Gabriele Farina, and his adviser Jacob Andreas at MIT to develop the consensus game. The game is designed to bring the generator and discriminator components of a language model into agreement in a conversational, question-answering setting.

In the consensus game, the generator receives a question along with candidate responses, and a coin toss determines whether it must answer correctly or incorrectly. The discriminator then judges the response, and points are awarded when its judgment matches the generator’s assignment, incentivizing agreement while drawing on the knowledge each player already brings to the game.
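To make the setup concrete, here is a minimal sketch of a single round of such a game in Python. The `generator` and `discriminator` callables, their signatures, and the scoring are illustrative assumptions standing in for the researchers’ actual formulation, not an implementation of it.

```python
import random

def play_consensus_round(question, candidate_answers, generator, discriminator):
    """One illustrative round of a consensus-game-style interaction.

    `generator` and `discriminator` are hypothetical callables standing in
    for generative and discriminative queries of a language model; the names,
    signatures, and scoring are assumptions made for this sketch.
    """
    # A coin toss tells the generator whether to aim for a correct or an
    # incorrect answer in this round.
    target = random.choice(["correct", "incorrect"])

    # The generator picks one of the candidate answers, conditioned on the
    # question and the correctness instruction it was given.
    answer = generator(question, candidate_answers, target)

    # The discriminator sees only the question and the chosen answer, and
    # judges whether the answer is correct.
    judgment = discriminator(question, answer)

    # Both players are rewarded only when the discriminator's judgment
    # matches the generator's hidden instruction, so agreement pays off.
    reward = 1 if judgment == target else 0
    return answer, judgment, reward


# Toy stand-ins for the two players (assumed behavior, for demonstration only).
gen = lambda q, cands, target: cands[0] if target == "correct" else cands[-1]
disc = lambda q, a: "correct" if a == "Paris" else "incorrect"
print(play_consensus_round("What is the capital of France?", ["Paris", "Rome"], gen, disc))
```

The “initial beliefs” mentioned above correspond to an additional ingredient not shown in this sketch: each player is also discouraged from straying too far from the answers it would have given on its own, which keeps the agreement anchored to what the model already knows.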
