Over the next few years, traditional single-agent architectures will increasingly be replaced by true multi-agent systems whose components have growing autonomy and computational power. This transformation has already begun, with prominent examples including power networks in which each node is an active energy generator, robotic swarms of unmanned aerial vehicles, software agents that trade and negotiate on the Internet, and robot assistants that must interact with other robots or with humans. The number of agents in these systems can range from a few complex agents to several hundred or even thousands of typically much simpler entities.
Multi-agent systems exhibit many beneficial properties, such as robustness, scalability, parallelization, and the ability to accomplish a larger set of tasks than centralized, single-agent architectures. However, the use of multi-agent architectures represents a major paradigm shift in systems design. To use such systems efficiently, effective approaches to planning, learning, inference, and communication are required. Agents need to plan with only a local view of the world and to coordinate at multiple levels. They also need to reason about the knowledge, observations, and intentions of other agents, which may in turn be cooperative or adversarial. Multi-agent learning algorithms must inherently cope with non-stationary environments and find valid policies for interacting with the other agents.
Many of these requirements are inherently hard problems, and computing their optimal solutions is intractable. Yet such problems can become tractable again by considering approximate solutions that exploit particular properties of a multi-agent system. Examples of such properties are sparse interactions that occur only between neighboring agents, or limited information available for decision making (bounded rationality).
The fundamental challenges of this paradigm shift span many areas, such as machine learning, robotics, game theory, and complex networks. This workshop will serve as an inclusive forum for discussing ongoing or completed work on both theoretical and practical issues related to the learning, inference, and control aspects of multi-agent systems.
The workshop will also serve as a platform to bring researchers from the different relevant communities together and to foster discussion about the developments needed next for multi-agent systems. It will consist of five to six invited talks, a few contributed talks, and a poster session. Topics of interest include:
- Multi-Agent Reinforcement Learning
- POMDPs, Dec-POMDPs and Partially Observable Stochastic Games
- Multi-Agent Robotics, Human-Robot Collaboration, Swarm Robotics
- Game Theory, Algorithms for Computing Nash Equilibria and other Solution Concepts
- Swarm Intelligence
- Evolutionary Dynamics
- Complex Networks
- Mechanism Design
- Ad hoc teamwork