
3 Agents


"agent (a'jent) n. a person who acts on behalf of another person, business or government, etc. [C15th: From Latin agent-, noun use of the present participle of agere to do]" - The Collins English Dictionary

3.1 What is an Agent?

Giving a complete description of an agent is difficult, owing to the wide range of tasks that agents can perform and the differing perceptions of the user community. However, in its broadest sense, all of the following scenarios could be attributed to the work of an `agent' (adapted from (Wooldridge et al., 1995)):

Although some of the technology needed to support computer systems at this level of sophistication is not yet available, research in these areas is under way. The key computer-based element common to each of the above scenarios is termed an agent.

At an elementary and conceptual level, agents can be regarded as entities that perform actions on a person's behalf. For example, we visit a travel agent and book our holidays; the travel agent is given the task of organising the flight details, the hotel bookings and the insurance arrangements on our (the holiday maker's) behalf. We do this because it would be tiresome to have to organise all of the details ourselves; we delegate responsibility to an agency we know can perform the task.

An analogous description in the software world could be that an agent is `some software that performs tasks on a user's behalf'. However, in blithely drawing this parallel and believing that the analogy translates correctly, we take a number of fundamental assumptions for granted. In our real-world scenario, we trust the travel agent to book the flight, and we believe that they will book the hotel for the correct dates. In other words, not only have we placed our trust in the ability of the travel agent to complete the task that we have conferred upon them, but we also believe that they will execute it correctly.

The usefulness of an agent is therefore directly related to the amount of trust and believability it generates, which is possibly more important than the actual task it performs; agents that fulfil their tasks badly (either incorrectly or incompletely) are of little use, since rather than use them again we would employ the services of another agent, or simply complete the task ourselves.

These concepts of trust and believability need to be identified and embodied in software agents (hereafter referred to simply as agents) before their tasks can be accorded the same amount of trust that is granted to real-life agents.

3.2 Notions of Agency

Agency is concerned with the concepts and attributes that can be assigned to agents to determine their nature and to predict their behaviour. An agent whose nature is well defined and whose behaviour is predictable is more likely to be of use and to be trusted by the user. Wooldridge and Jennings (Wooldridge et al., 1995) detail two notions of agency, weak and strong, that are useful when describing the nature and actions of agents.

3.2.1 A Weak Notion for Describing Agents

An agent is considered to adhere to a weak notion of agency if it possesses the following properties:

- autonomy: the agent operates without the direct intervention of humans or others, and has some kind of control over its actions and internal state;
- social ability: the agent interacts with other agents (and possibly humans) via some kind of agent-communication language;
- reactivity: the agent perceives its environment and responds in a timely fashion to changes that occur in it;
- pro-activeness: the agent does not simply act in response to its environment, but is able to exhibit goal-directed behaviour by taking the initiative.

Describing agents using these attributes is often referred to as agent-based software engineering (Genesereth et al., 1994). This programming model supports the concept of an agent as a self-contained, concurrently executing process that contains and controls some internal state, and accesses its environment and other agents through a message-passing protocol.
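
A minimal sketch of this model in Python, using a thread for the concurrently executing process and a queue as the message-passing channel; the protocol and all names here are illustrative, not taken from the cited work:

    import queue
    import threading

    class Agent(threading.Thread):
        """A self-contained, concurrently executing process that owns its
        internal state and interacts with other agents only via messages."""

        def __init__(self, name):
            super().__init__(daemon=True)
            self.name = name
            self.inbox = queue.Queue()   # messages from other agents arrive here
            self.state = {}              # private internal state, never accessed directly

        def send(self, other, content):
            other.inbox.put((self.name, content))

        def run(self):
            while True:
                sender, content = self.inbox.get()   # block until a message arrives
                if content == "stop":
                    break
                self.state[sender] = content         # update state in response

    # Usage: two agents exchanging messages
    a, b = Agent("a"), Agent("b")
    a.start(); b.start()
    a.send(b, "hello")
    a.send(b, "stop")
    b.join()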

3.2.2 A Stronger Notion for Describing Agents

Stronger notions of agency describe agents in terms that carry more specific meaning than weak agency; that is, agents are attributed characteristics and tasks that would normally be ascribed to humans. Shoham (Shoham, 1993) describes the mentalistic notions of knowledge, belief, intention and obligation that might be attributed to strong agents, above and beyond those defined for a weak agent. Dennett (Dennett, 1987) describes such an agent as an intentional system.

An intentional system is a system that is best described by the intentional stance: the ascription of abstract, mentalistic notions to systems for the purpose of describing how they work. For example, although the technical description of a computer system may be available, it is too complex to use when describing, say, why a menu appears when a mouse button is clicked over a certain area of the display. The intentional notions described by Shoham are useful for providing convenient and familiar ways of describing, explaining and predicting the behaviour of complex systems; indeed, Dennett suggests that strong agents are best described by the intentional stance.
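
As an illustration of how such mentalistic notions might appear in software, here is a sketch in Python; the data structures and the deliberation rule are hypothetical, and are not drawn from Shoham's agent-oriented programming work:

    from dataclasses import dataclass, field

    @dataclass
    class StrongAgent:
        """Illustrative only: mentalistic notions held as explicit data."""
        beliefs: set = field(default_factory=set)       # facts the agent holds true
        intentions: list = field(default_factory=list)  # tasks it has committed to
        obligations: list = field(default_factory=list) # commitments to other agents

        def deliberate(self):
            # Commit to each obligation that is consistent with current beliefs.
            for task in self.obligations:
                if ("cannot", task) not in self.beliefs:
                    self.intentions.append(task)

    agent = StrongAgent(beliefs={("cannot", "fly")},
                        obligations=["fly", "book hotel"])
    agent.deliberate()
    print(agent.intentions)   # ['book hotel']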

Bates takes the concept of strong agents into more anthropomorphic areas by considering the implications of believable agents, that is, agents that try to model a human approach to their interaction with the user by displaying emotions (Bates, 1994). Additionally, Maes talks about representing agents visually by attaching an icon or a face to them, associating them with cartoon or computer characters (Maes, 1994). These types of agents are being used both in Human Computer Interaction (HCI) scenarios, to help the social interaction between a user and their agents, and in the computer gaming community, to produce virtual characters that react in believable and human ways to given situations.

3.2.3 Intelligence and Learning

Intelligence is an attribute that can be given to both weak and strong agents and determines how agents will react to situations and events. In most agent communities, intelligence is seen as the key factor that separates agents from ordinary pieces of software.

However, as Brooks stated (Brooks, 1990; Brooks, 1991; Brooks, 1991b), intelligence is `in the eye of the beholder', since the question `what is intelligence?' is as difficult to answer as `what is an agent?'! He goes on to make two key points about intelligence:

The problems of artificial intelligence seem to lie in two areas (Shardlow, 1990): the translation of the real world into an adequate description in time for it to be useful (transduction); and the representation of complex systems and entities, and how to reason about that information in time for it to be useful (reasoning).

Edwards (Edwards, 1995) argues that it is not enough for agents to react intelligently to their environment; they must also be able to adapt to changes by learning. An agent that can learn through exposure to given situations and examples could be more useful to a user than an agent whose intelligence is fixed. However, it is far more difficult to predict the behaviour of an agent that can learn, since it is not possible to determine exactly what it will learn and how it will apply that information.
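
A sketch of the difference in Python; the weight-update rule here is a hypothetical illustration, not a particular published algorithm:

    class LearningAgent:
        """Adjusts its behaviour from user feedback rather than fixed rules."""

        def __init__(self):
            self.weights = {}   # learned degree of interest per topic

        def score(self, topics):
            return sum(self.weights.get(t, 0.0) for t in topics)

        def feedback(self, topics, liked, rate=0.5):
            # Reinforce or penalise every topic of the judged item.
            delta = rate if liked else -rate
            for t in topics:
                self.weights[t] = self.weights.get(t, 0.0) + delta

    agent = LearningAgent()
    agent.feedback({"agents", "ai"}, liked=True)
    agent.feedback({"sports"}, liked=False)
    print(agent.score({"agents"}) > agent.score({"sports"}))   # True

Note that the agent's eventual scores depend entirely on the history of feedback it receives, which is precisely why its behaviour is harder to predict than that of a fixed agent.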

3.2.4 Other Attributes of Agents

A number of other attributes can be given to both weak and strong agents to augment or temper their functionality. These include, but are not limited to (Wooldridge et al., 1995):

- mobility: the ability of an agent to move around an electronic network;
- veracity: the assumption that an agent will not knowingly communicate false information;
- benevolence: the assumption that agents do not have conflicting goals, and that every agent will therefore always try to do what is asked of it;
- rationality: the assumption that an agent will act in order to achieve its goals, and will not act in such a way as to prevent them from being achieved.

Goodwin formally defines these and other agent attributes using the Z formal specification method (Goodwin, 1993).
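
To give a flavour of such a formalisation, the veracity attribute might be captured by a predicate of the following form, written here in Z-style notation as an illustration rather than as Goodwin's actual definition:

    \forall a : Agent; \; m : Message \mid sends(a, m) \bullet believes(a, content(m))

that is, an agent only ever sends messages whose content it believes to be true.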

3.2.5 Summary

Agent classifications are useful in illustrating some of the ideas that lie behind the development of agents for both theoretical and real-world applications. Whether embodying weak or strong notions of agency, an agent needs to act in a manner that forges a trust relationship with the user. If an agent cannot complete a task, the user must be informed of what went wrong and why. Similarly, if the agent successfully completes a task, the user must be informed of how this was accomplished and of the results generated, so that they can be verified.

Intelligence and learning appear to offer methods for building trust and believability into agents; intelligence allows agents to fulfil tasks competently, and learning allows agents to react to new situations in an intelligent manner. To ensure that an intelligent, learning agent does not grow outside of its remit, other agent attributes, such as benevolence, veracity and rationality, may need to be incorporated.

3.3 The Differing Views on Agents

The following is a brief taxonomy of the various perceptions that differing computer science disciplines hold about agents (Wooldridge et al., 1995).

3.3.1 The Traditional Agent

The traditional concept of agents began, not surprisingly, with the artificial intelligence community. It is a view based around agents being systems that can take input data about their environment, reason about it and (possibly) generate appropriate output responses (Kurzweil, 1990).
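
In outline, this view amounts to a sense-reason-act cycle; the following Python fragment sketches one pass of such a cycle, with all details (the readings, the threshold, the action) purely illustrative:

    def sense(environment):
        """Input stage: extract raw observations from the environment."""
        return environment.get("readings", [])

    def reason(observations):
        """Reasoning stage: derive a response, which may be empty."""
        if any(r > 30 for r in observations):
            return "cool down"
        return None   # generating no response is also a valid outcome

    def act(response):
        if response is not None:
            print("action:", response)

    # One pass of the cycle
    act(reason(sense({"readings": [22, 35]})))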

The ultimate goal of AI agents is to provide intelligence and reasoning capabilities that are comparable to that of human beings. As McCarthy (McCarthy, 1978) puts it:

"[Artificial Intelligence is] the science of making computers do things which if done by humans would require intelligence."
However, due to the difficulties of capturing and describing the essential qualities of intelligence, the AI community have come to recognise that agents are a suitable vehicle for expressing the desirable properties of artificial intelligence. Indeed, a theme common to most AI computer scientists is that agents' capacity for intelligence differentiates them from other, normal pieces of software.

Traditional AI architectures are generally based around three core philosophies:

3.3.2 The Interface Agent

An interface agent is described by Maes (Maes, 1994b) as:

"...a personal assistant who is collaborating with the user in the same work environment."
Thus, interface agents assist the user in whatever tasks they are performing, perhaps by providing insight into specific situations or alternative material from related areas of work.

To support this line of reasoning, the AI Laboratory at MIT has developed a prototype interface agent called News Tailor, or NewT (Sheth, 1994; Maes, 1994). A NewT agent is a USENET news filter that can be `trained' by giving it a series of examples showing the kinds of articles in which the user is interested. From these, the NewT agent searches all news articles to find others that are similar to the ones initially indicated by the user. When the agent presents the articles that it has found, the user gives feedback according to their applicability; thus, the NewT agent can widen or restrict its searching next time.
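
The underlying technique, training by example followed by relevance feedback, can be sketched as follows. This is an illustration of the idea only, not NewT's actual implementation; the word-overlap similarity measure and the threshold adjustment are assumptions made for the example:

    class NewsFilter:
        """Trained on example articles; tuned by positive/negative feedback."""

        def __init__(self, examples, threshold=0.2):
            # Profile = bag of words taken from the training examples.
            self.profile = {w for text in examples for w in text.lower().split()}
            self.threshold = threshold

        def similarity(self, article):
            words = set(article.lower().split())
            return len(words & self.profile) / len(words | self.profile)

        def relevant(self, article):
            return self.similarity(article) >= self.threshold

        def feedback(self, article, liked):
            if liked:     # widen: absorb the liked article's vocabulary
                self.profile |= set(article.lower().split())
            else:         # restrict: raise the acceptance threshold
                self.threshold = min(1.0, self.threshold + 0.05)

    f = NewsFilter(["mobile agents roam networks"])
    print(f.relevant("agents and mobile code"))   # True: enough word overlap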

Other interface agent systems include NewsWeeder (Lang, 1995), UNA and LAW (Green et al., 1995), WebWatcher (Armstrong et al., 1995) and LIRA (Balabanovic et al., 1995).

3.3.3 The Information Agent

An information agent is one that has access to a number of information resources and is able to collect and manipulate that information. Typically, it can communicate across the network to locate information resources to query or manipulate. For example, an information agent asked to find a particular paper might search a number of information resources and present the user with FTP sites and WWW addresses where the paper can be found.

The key qualities of information agents lie in their ability to communicate with a wide range of information resources, ensuring that the widest possible range of information is processed so as to provide the user with the best results for their original request.
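
The following sketch illustrates this behaviour; the two resource functions are hypothetical stand-ins for real search interfaces such as FTP archives and WWW indexes:

    def ftp_search(query):
        """Hypothetical FTP archive interface."""
        return ["ftp://archive.example/" + query + ".ps"]

    def web_search(query):
        """Hypothetical WWW index interface."""
        return ["http://www.example/papers/" + query + ".html"]

    class InformationAgent:
        """Queries every known resource and merges the results."""

        def __init__(self, resources):
            self.resources = resources   # callables: query -> list of locations

        def find(self, query):
            results = []
            for resource in self.resources:
                try:
                    results.extend(resource(query))
                except Exception:
                    continue   # an unreachable resource must not stop the search
            return sorted(set(results))  # de-duplicate across resources

    agent = InformationAgent([ftp_search, web_search])
    print(agent.find("mobile-agents"))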

Theoretical studies on how agents can utilise the information that they receive from different resources are presented by Levy (Levy et al., 1994) and Gruber (Gruber, 1991). A more practical application has been presented by Voorhees (Voorhees, 1994), who describes a prototype system called the Information Retrieval Agent (IRA) which can search for loosely specified articles from differing document repositories.

3.3.4 The Distributed Agent

Distributed agents (also known as multi-agent systems) are collections of agents which together operate at the macro (social) level, rather than the micro (individual agent) level. Distributed AI (DAI) (Bond et al., 1988) looks at how problems at the macro level can be broken down into agents at the micro level, and how those agents can be made to co-operate and co-ordinate their activities to ensure that the problems are solved efficiently.
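
The macro-to-micro decomposition and subsequent co-operation can be sketched as follows; the problem (a simple sum) and the three-way split are purely illustrative:

    from concurrent.futures import ThreadPoolExecutor

    def decompose(problem):
        """Macro level: split a problem into independent sub-problems."""
        return [problem[i::3] for i in range(3)]

    def agent_solve(subproblem):
        """Micro level: one agent's contribution."""
        return sum(subproblem)

    def solve(problem):
        subproblems = decompose(problem)
        # Co-operation: the agents work concurrently on their sub-problems.
        with ThreadPoolExecutor(max_workers=len(subproblems)) as pool:
            partials = list(pool.map(agent_solve, subproblems))
        return sum(partials)   # co-ordination: combine the partial answers

    print(solve(list(range(10))))   # 45, the same answer as solving it whole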

DAI agent technology is being employed in many real-world situations, for example, air traffic control (Steeb et al., 1988), particle accelerator control (Jennings et al., 1993) and telecommunications network management (Weihmayer et al., 1994). However, a key problem with DAI is ensuring that problem decomposition, and the subsequent communication and discussion between communities of agents, can take place quickly enough to produce useful and achievable results.

3.4 Summary

Although useful for classifying agents, the views put forward by the various computer science disciplines do not necessarily comply with the notions of weak and strong agency described by Wooldridge and Jennings.

Most developers of agent technology realise that the more attributes an agent possesses, the more complex the task of specifying, designing and implementing that agent becomes. This helps to explain the general trend over the past ten years away from the dreams of AI (the HAL computer from 2001, for example) towards more realistic areas of actual applicability.

Agent technology as a whole seems to be moving towards agents that are useful to the user in everyday activities. It is hoped that by starting in the small, with relatively easily specified agents that have limited capabilities and limited intelligence and learning, the experience gained will show the way to computing with agents in the large.

General progress has been made in recent years, especially in the area of information agents, and this appears to be where future research is heading. The following chapter discusses a particular aspect of agent technology, mobility, and describes how applicable current mobile agent systems are to distributed information management.





