
Enhancing AI helpers by mimicking human irrational behavior | MIT News


In Short:

Researchers from MIT and the University of Washington have developed a model that helps AI systems understand and predict human behavior by accounting for computational constraints. By analyzing an agent's previous actions, the model can infer its "inference budget" and use that budget to predict future decisions. The model outperforms alternatives at tasks such as inferring navigation goals from past routes and predicting moves in chess matches, and the approach could lead to more effective AI collaborators.


Modeling Human Behavior to Improve AI Collaboration

To develop AI systems that can effectively collaborate with humans, understanding human behavior is crucial. However, humans often make suboptimal decisions due to computational constraints.

Researchers from MIT and the University of Washington have created a model that can account for these unknown computational constraints when analyzing the behavior of an agent, whether human or machine.

Inferring Computational Constraints

Their model can automatically infer an agent's computational constraints, which the researchers call its "inference budget," by observing just a few traces of the agent's previous actions. The inferred budget can then be used to predict the agent's future behavior.

In a recent paper, the researchers demonstrated the effectiveness of their method in inferring navigation goals from past routes and predicting players’ moves in chess matches. The technique showed superior performance compared to other modeling methods.

A Vision for AI Assistants

This research could enhance AI systems' understanding of human behavior, leading to more effective collaboration. An AI agent that can predict human mistakes and adapt to human weaknesses is better positioned to offer useful guidance to its human counterparts.

Authors and Presentation

The paper was authored by Athul Paul Jacob, an MIT EECS graduate student, along with Abhishek Gupta of the University of Washington and Jacob Andreas of MIT. The research will be presented at the International Conference on Learning Representations (ICLR).

How the Model Works

The model developed by the researchers draws inspiration from chess, where the depth of a player's planning tends to track their skill. By inferring an agent's planning depth from its past actions, the model can characterize how that agent makes decisions.

The framework runs a problem-solving algorithm and compares its intermediate solutions to the agent's observed behavior; the point at which they best align reveals the agent's inference budget, which the model then uses to predict the agent's actions in similar scenarios.
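The idea can be sketched in a toy setting. Everything below (the line-world problem, reward values, and function names) is an illustrative assumption, not the authors' actual algorithm or code: an agent on a line of states can take a small immediate reward ("exit") or keep moving toward a distant larger reward, and only a sufficiently deep planner sees the payoff of continuing. Matching depth-limited policies against observed actions recovers the agent's planning depth.

```python
# Toy sketch (hypothetical, not the paper's code): infer an agent's
# "inference budget" -- modeled here as a planning depth -- from actions.

GOAL = 5           # reaching state 5 ends the task
GOAL_REWARD = 10.0  # large delayed reward at the goal
EXIT_REWARD = 1.0   # small immediate reward for exiting early

def depth_limited_value(state, depth):
    """Best reward reachable from `state` with `depth` lookahead steps."""
    if depth == 0:
        return 0.0  # no lookahead left: the agent sees no reward ahead
    right_val = (GOAL_REWARD if state + 1 == GOAL
                 else depth_limited_value(state + 1, depth - 1))
    return max(EXIT_REWARD, right_val)

def policy(state, depth):
    """Greedy action under depth-limited lookahead."""
    right_val = (GOAL_REWARD if state + 1 == GOAL
                 else depth_limited_value(state + 1, depth - 1))
    return "right" if right_val > EXIT_REWARD else "exit"

def infer_budget(traces, max_depth=6):
    """Pick the planning depth whose policy best matches observed actions."""
    best_depth, best_score = 1, -1
    for d in range(1, max_depth + 1):
        score = sum(policy(s, d) == a for s, a in traces)
        if score > best_score:
            best_depth, best_score = d, score
    return best_depth

# Observed behavior of an agent that plans 2 steps ahead: it exits when
# the goal is out of its lookahead horizon, and heads right when close.
observed = [(0, "exit"), (1, "exit"), (2, "exit"), (3, "right"), (4, "right")]
print(infer_budget(observed))  # → 2
```

Here the depth-2 policy is the only candidate that reproduces all five observed state-action pairs, so the inferred budget is 2; that budget can then be plugged back into `policy` to predict the same agent's choices in states it has not yet visited.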

Future Applications

The researchers plan to apply this approach to other domains, such as reinforcement learning, with the goal of creating more effective AI collaborators.

This research was supported by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
