Social learning in a multi-agent system
Noble, Jason and Franks, Daniel W. (2004) Social learning in a multi-agent system. Computing and Informatics, 22, (6), 561-574.
In a persistent multi-agent system, it should be possible for new agents to benefit from the accumulated learning of more experienced agents. Parallel reasoning can be applied to the case of newborn animals, and thus the biological literature on social learning may aid in the construction of effective multi-agent systems. Biologists have looked at both the functions of social learning and the mechanisms that enable it. Many researchers have focused on the cognitively complex mechanism of imitation; we will also consider a range of simpler mechanisms that could more easily be implemented in robotic or software agents. Research in artificial life shows that complex global phenomena can arise from simple local rules. Similarly, complex information sharing at the system level may result from quite simple individual learning rules. We demonstrate in simulation that simple mechanisms can outperform imitation in a multi-agent system, and that the effectiveness of any social learning strategy will depend on the agents' environment. Our simple mechanisms have obvious advantages in terms of robustness and design costs.
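The paper's own simulations are not reproduced here, but the kind of comparison it describes — imitation versus a simpler social learning mechanism, under environments that may or may not change — can be sketched in a toy model. Everything below is an assumption for illustration: the option-payoff environment, the agent count, the `imitate` strategy (copy the choice of the currently best-scoring agent) and the `local` strategy (a crude stimulus-enhancement rule: sample another agent's option and keep it only if it pays better than your own) are invented here, not taken from the paper.

```python
import random

random.seed(0)

N_OPTIONS = 10
# Hidden quality of each option; agents discover it only by using an option.
PAYOFFS = [random.random() for _ in range(N_OPTIONS)]

def run(strategy, n_agents=50, steps=200, change_every=None):
    """Mean payoff per agent-step for a population using one strategy.

    strategy 'imitate': copy the option of the best-scoring agent (costly,
      cognitively complex imitation stands in for this).
    strategy 'local':   stimulus enhancement -- try the option a random other
      agent is using, keep it only if it beats your current one.
    change_every: if set, the environment's payoffs are reshuffled every
      that many steps, modelling a changing environment.
    """
    payoffs = PAYOFFS[:]
    choice = [random.randrange(N_OPTIONS) for _ in range(n_agents)]
    last = [payoffs[c] for c in choice]  # each agent's most recent payoff
    total = 0.0
    for t in range(steps):
        if change_every and t % change_every == 0:
            payoffs = [random.random() for _ in range(N_OPTIONS)]
        for i in range(n_agents):
            if strategy == 'imitate':
                best = max(range(n_agents), key=lambda j: last[j])
                choice[i] = choice[best]
            else:  # 'local'
                seen = choice[random.randrange(n_agents)]
                if payoffs[seen] > payoffs[choice[i]]:
                    choice[i] = seen
            last[i] = payoffs[choice[i]]
            total += last[i]
    return total / (n_agents * steps)

# Compare the two mechanisms in a static and a changing environment.
print('static, imitate:', run('imitate'))
print('static, local:  ', run('local'))
print('changing, local:', run('local', change_every=20))
```

In a static environment both strategies converge on the best option; with `change_every` set, the relative performance of the strategies shifts — a toy analogue of the paper's point that the effectiveness of any social learning strategy depends on the agents' environment.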
Keywords: Multi-agent systems, social learning, imitation, artificial life, biology
Divisions: Faculty of Physical and Applied Science > Electronics and Computer Science > Agents, Interactions & Complexity
Date Deposited: 18 Feb 2007
Last Modified: 18 Aug 2012 04:07
Contributors: Noble, Jason (Author); Franks, Daniel W. (Author)
ISI Citation Count: 1