Agent Modeling is the ability of an agent to reason about other agents, from observations of those agents. It is a key capability for human team training and evaluation, network security, distributed application monitoring and visualization, failure detection and identification, teamwork and coordination, opponent modeling and adversarial planning, human-computer interaction, and many other applications. Buzzwords often associated with agent modeling include: plan recognition, user modeling, agent tracking, behavior recognition, agent monitoring, observation-based coordination, etc.
My focus is primarily on multi-agent modeling--allowing an agent to reason about groups and teams as a whole. I have been conducting my research in this area in several challenging, real-world applications: distributed internet-based software agents, high-fidelity virtual environments for training, and RoboCup soccer. I'm currently expanding my research into new domains and agent modeling problems: adversarial planning, automated team-training, and teamwork analysis, in both computational and human teams.
I investigate domain-independent theories, principles, architectures, and computational techniques that employ agent modeling capabilities in autonomous agents. I place heavy emphasis on systems and theories that are of practical interest, and that tackle challenging real-world problems. My research explores a variety of agent modeling tasks (e.g., visualization, failure detection, prediction, recognition, opponent modeling), and a variety of settings (e.g., centralized/distributed, offline/online). I build systems that tackle modeling in a growing number of dynamic, complex, multi-agent domains, and aim to provide both empirical evidence and analytical guarantees on the utility of the techniques and theories that emerge.
A central problem in Agent Modeling is the Monitoring Selectivity Problem: On one hand, it has been shown that an agent cannot know everything about other agents (e.g., due to perceptual and bandwidth limitations), and is therefore uncertain about the state of other agents. On the other hand, it has been repeatedly shown that an agent needs to know about other agents in order to be able to achieve its goals, and thus uncertainty about others can lead to failures.
My research focuses on tackling the monitoring selectivity problem in multi-agent systems, where an agent models multiple other agents. The problem is particularly severe in such settings, because the uncertainty in modeling others grows with the number of agents (e.g., the number of modeling hypotheses may grow exponentially in the number of agents).
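The exponential growth mentioned above is easy to see concretely. A minimal sketch (with made-up numbers, not from any particular system): if each of n monitored agents could plausibly be executing any of k plans, modeling the agents independently yields k^n joint hypotheses.

```python
# Illustrative only: joint hypothesis-space size when each monitored agent
# is modeled independently. k plans per agent, n agents => k**n hypotheses.

def joint_hypotheses(num_agents: int, plans_per_agent: int) -> int:
    """Size of the joint modeling-hypothesis space."""
    return plans_per_agent ** num_agents

for n in (1, 2, 5, 10):
    print(n, joint_hypotheses(n, plans_per_agent=3))
# with only 3 plans per agent, a 10-agent team already has 59049 hypotheses
```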
My approach to tackling this problem is Socially-Attentive Monitoring. It utilizes knowledge about the social structure of the group of agents being monitored: the relationships between the agents modeled (e.g., the fact that they are collaborating on a task), and the procedures that the agents utilize to maintain these relationships (e.g., the fact that they communicate in a particular way to carry out this collaboration). Such knowledge is available from the designer of a system, from theory (there is a growing literature on domain-independent social coordination structures), or by learning it from observing the agents prior to modeling. See below for examples of socially-attentive monitoring systems which I have built.
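To make the idea concrete, here is a minimal sketch of how knowledge of a teamwork relationship can drive failure detection. The assumption (hypothetical, not taken from any deployed system) is that teammates jointly executing a team plan should all be inferable as executing that same plan; a failure is flagged when no single plan explains everyone's observed behavior.

```python
# Hedged sketch of socially-attentive failure detection. The social
# knowledge used: teammates should share a common team plan. All names
# and data here are illustrative.

def detect_teamwork_failure(inferred_plans: dict) -> bool:
    """inferred_plans maps each teammate to the set of plans consistent
    with its observed behavior. Returns True when no single team plan
    can explain every teammate's behavior."""
    plan_sets = list(inferred_plans.values())
    common = set.intersection(*plan_sets) if plan_sets else set()
    return len(common) == 0

detect_teamwork_failure({
    "scout": {"attack", "regroup"},
    "leader": {"attack"},
})  # False: "attack" explains everyone

detect_teamwork_failure({
    "scout": {"regroup"},
    "leader": {"attack"},
})  # True: no shared team plan remains
```

Note that the relationship knowledge (agreement on a team plan) lets the monitor flag a failure without knowing which agent is actually at fault; diagnosis is a separate step.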
For the first part of my Ph.D. research, I have looked (with Milind Tambe) at one of the key goals of Agent Modeling: recognizing failures in the behavior of interacting agents. I have investigated socially-attentive monitoring techniques, in which members of a team of cooperating agents autonomously detect and diagnose coordination and teamwork failures. This investigation covers both distributed and centralized failure detection, and explicitly addresses issues of uncertainty about the monitored agents. It resulted in analytical results, and in deployed monitoring systems in two challenging domains: ModSAF, a high-fidelity distributed virtual battlefield simulation, and RoboCup, a soccer simulation environment for AI research. A good reference for these results is provided in an article in the Journal of Artificial Intelligence Research, which you can find in my publications list.
Another important goal of Agent Modeling is visualization: identifying the state of agents for a human operator. I investigated (with David Pynadath and Milind Tambe) techniques for visualizing internet-based distributed applications, in which the only information available is the infrequent communications that take place among the distributed agents that together form the application. I have developed novel socially-attentive visualization techniques, which utilize knowledge about the coordination mechanisms which the visualized agents employ. I have also developed a new probabilistic plan-recognition representation which is particularly useful for monitoring communications. Together, these techniques led to very significant boosts in visualization accuracy. This research has been evaluated in one specific domain, and is currently being extended and evaluated in additional domains (see my JAIR 2002 paper).
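The core of probabilistic plan recognition from observed messages can be illustrated with a plain Bayes update; this sketch shows the generic task, not the specific representation developed in that research, and all plan names and probabilities below are made up.

```python
# Generic sketch: update a belief over an agent's plans after observing
# one message it sent. likelihood[p] = P(message | plan p). Illustrative
# numbers only.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Posterior over plans after one observed message."""
    unnorm = {p: prior[p] * likelihood.get(p, 0.0) for p in prior}
    z = sum(unnorm.values())
    return {p: v / z for p, v in unnorm.items()} if z else dict(prior)

prior = {"startup": 0.5, "shutdown": 0.5}
post = bayes_update(prior, {"startup": 0.8, "shutdown": 0.2})
# post["startup"] == 0.8: the message makes "startup" four times as likely
```

Because messages in a distributed application are infrequent, each one carries substantial evidence, which is part of what makes communication-based monitoring feasible at all.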
An important issue in agent modeling in multi-agent settings is scaling the techniques up to many agents. Together with David Pynadath and Milind Tambe, I've developed the YOYO family of monitoring algorithms, which are explicitly designed for plan-recognition tasks with many agents. YOYO algorithms provide an interesting trade-off between expressivity and scalability: they can represent only certain classes of modeling hypotheses, but scale up very well (almost independently of the number of agents). In particular, the YOYO teamwork failure-detection algorithm guarantees sound failure detection in constant space and linear time, while the YOYO* visualization algorithm sacrifices the ability to visualize coordination failures to provide visualization in linear space and linear time. With Mike Bowling, I continued developing YOYO and analytically examining lower bounds on the number of agents that must be monitored in a team to maintain coherence (see my AAMAS 2002 paper).
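The space saving behind this trade-off can be sketched as follows (a hedged illustration of the general idea, not the published algorithms): rather than tracking per-agent hypotheses and their exponential joint combinations, maintain a single set of team-level plan hypotheses and intersect it with the plans consistent with each new observation, so storage is bounded by the plan library rather than by the team size.

```python
# Illustrative sketch of team-level hypothesis tracking. One hypothesis
# set for the whole team; each observation prunes it. Names are made up.

def update_team_hypotheses(team_plans: set, observation: tuple) -> set:
    """Intersect current team-plan hypotheses with the plans consistent
    with one agent's observed behavior. observation = (agent, consistent_plans)."""
    _agent, consistent = observation
    return team_plans & consistent

hyps = {"attack", "defend", "regroup"}
hyps = update_team_hypotheses(hyps, ("a1", {"attack", "defend"}))
hyps = update_team_hypotheses(hyps, ("a2", {"attack"}))
# hyps == {"attack"}; an empty set here would signal a coordination failure
```

The price of this compactness is exactly the expressivity loss noted above: hypotheses in which different teammates follow conflicting plans cannot be represented, which is why YOYO* cannot visualize coordination failures.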