3 Agents
"agent (a'jent) n. a person who acts on behalf of another person, business or government, etc. [C15th: From Latin agent-, noun use of the present participle of agere to do]" - The Collins English Dictionary
3.1 What is an Agent?
The problem of giving a complete description of an agent is difficult, due to the wide range of tasks that an agent can perform and the differing perceptions held by the user community. However, in its broadest sense, all of the following scenarios could be attributed to the work of an `agent' (adapted from (Wooldridge et al., 1995)):
- Upon logging into your computer system, you are greeted by your personal digital assistant (PDA), which presents you with a summary of activities that have occurred since you last logged in. The PDA has sorted your email messages into order of importance and has automatically cross-referenced them to find any related email messages. One email in particular is from your supervisor requesting a meeting to discuss your work; the assistant has checked your calendar and already negotiated a suitable time and date with the PDA of your supervisor. In addition to this, your PDA has also searched the network news groups and presents you with a list of articles that it knows you are interested in; the assistant draws your attention to a thread which describes work that is very closely related to your own. In anticipation that this will be of interest to you, the PDA has already obtained a technical report from an FTP site and compiled a list of other references which are available and detail this work further.
- You are editing a file when your PDA requests your attention; an email message has arrived that contains notification of acceptance of a paper that you submitted to a conference. Without prompting, your PDA has looked into travel arrangements and has determined all of the planning requirements, from taxis to aeroplanes. In a short while, you are presented with a summary of the most convenient travel options. Once you have confirmed a suitable option, your PDA makes reservations on your behalf, marks your calendar accordingly and negotiates with the conference site about locating your data locally for the duration of the conference. While you are in transit to the conference, the assistant transfers your data (including the files necessary for your presentation) to the conference site, redirects your information sources (email, for example) and informs the PDA of the conference supervisor of your impending arrival.
Although some of the technology needed to support computer systems of this level of sophistication is not yet available, research is being undertaken in these areas. However, the key computer-based element that occurs in each of the above scenarios is termed an agent.
At an elementary and conceptual level, agents can be regarded as entities that perform actions on a person's behalf. For example, we visit a travel agent and book our holidays; the travel agent is given the task of organising the flight details, the hotel bookings and the insurance arrangements on our (the holiday maker's) behalf. We do this because it would be tiresome to have to organise all of the details ourselves; we delegate responsibility to an agency we know can perform the task.
An analogous description in the software world could be that an agent is `some software that performs tasks on a user's behalf'. However, in blithely making this parallel and believing that the analogy translates correctly, we take a number of fundamental assumptions for granted. In our real-world scenario, we trust a travel agent to book the flight and we believe that they will book the hotel for the correct dates. In other words, we have not only placed our trust in the ability of the travel agent to complete the task that we have conferred upon them, but we also believe that they will execute it correctly.
Therefore, the usefulness of an agent is directly related to the amount of trust and believability it generates, which is possibly more important than the actual task it can perform; agents that fulfil their tasks badly (either incorrectly or incompletely) are of little use to us, since rather than use them again we would employ the services of another agent or simply complete the task ourselves.
These concepts of trust and believability need to be identified and embodied in software agents (hereafter referred to simply as agents), before their tasks can be accorded the same amount of trust that is granted to real-life agents.
3.2 Notions of Agency
Agency is concerned with the concepts and attributes that can be assigned to agents to determine their nature and to predict their behaviour. An agent whose nature is well defined and whose behaviour is predictable is more likely to be of use and to be trusted by the user. Wooldridge and Jennings (Wooldridge et al., 1995) detail two notions of agency, weak and strong, that are useful when describing the nature and actions of agents.
3.2.1 A Weak Notion for Describing Agents
An agent is considered to adhere to a weak notion of agency if it possesses the following properties:
- Autonomy. Once launched with the information describing the bounds and limitations of its task, an agent should be able to operate independently of its user, that is, autonomously in the background (Castelfranchi, 1995). To this end, an agent needs to have control over its actions so that it can determine what to do when an action succeeds or fails. Moreover, an agent must be able to maintain and update its internal state so that it can make rational decisions based upon the information that it has gathered.
- Social ability. To effect changes or interrogate its environment, an agent must possess the ability to communicate with the outside world (Genesereth et al., 1994; Mayfield et al., 1995). This interaction can exist at a number of levels depending upon the remit of the agent, but typically an agent would need to communicate with other agents and the local environment (to maintain/discover information) and with users (to apprise them of its progress).
- Reactivity. Agents need to be able to perceive their environment and respond to changes in it in a timely fashion, depending upon their remit. For example, an agent's task could be to monitor a local file system, informing the user when changes occur to a particular file set. This implies that the agent has an awareness of the appropriate filing system and how to interrogate it; agents need to be aware not only of their environment, but also of what the state of that environment and changes to it mean, and how to react to them.
- Pro-activeness. To help differentiate an agent from another piece of software, agents need to be able to exhibit pro-activeness, that is, the ability to effect actions to achieve their goals by taking the initiative. This means that an agent needs to appreciate the state of its environment and to decide how best to fulfil its mission.
The use of these attributes to describe agents is often referred to as agent-based software engineering (Genesereth et al., 1994). This programming model supports the concept of an agent as a self-contained, concurrently executing process that contains and controls some internal state and accesses its environment and other agents through a message passing protocol.
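To make this programming model concrete, the following Python sketch (illustrative only; the class, the directory being watched and the polling interval are assumptions made here, not part of any system discussed in this chapter) shows a minimal agent in the weak sense: a concurrently executing process that maintains internal state, perceives a small slice of its environment (a directory of files, echoing the reactivity example above), acts autonomously in the background and reports to its user through message passing.

```python
# A minimal, hypothetical sketch of the weak notion of agency: a self-contained,
# concurrently executing process with internal state and message passing.

import os
import queue
import threading
import time


class FileMonitorAgent(threading.Thread):
    """Watches a directory and reports changes to its user via a message queue."""

    def __init__(self, directory, outbox, poll_seconds=2.0):
        super().__init__(daemon=True)
        self.directory = directory      # the slice of the environment it perceives
        self.outbox = outbox            # social ability: messages to the user/other agents
        self.poll_seconds = poll_seconds
        self.known = {}                 # internal state: last seen modification times
        self.running = True

    def perceive(self):
        """Reactivity: sample the current state of the environment."""
        return {
            name: os.path.getmtime(os.path.join(self.directory, name))
            for name in os.listdir(self.directory)
        }

    def run(self):
        """Autonomy: operate in the background without further prompting."""
        while self.running:
            current = self.perceive()
            for name, mtime in current.items():
                if self.known.get(name) != mtime:
                    # Pro-activeness: take the initiative and inform the user.
                    self.outbox.put(f"'{name}' was added or modified")
            self.known = current
            time.sleep(self.poll_seconds)


if __name__ == "__main__":
    messages = queue.Queue()
    agent = FileMonitorAgent(".", messages)
    agent.start()
    print(messages.get())   # blocks until the agent has something to report
```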
3.2.2 A Stronger Notion for Describing Agents
Stronger notions of agency attribute to agents characteristics with more specific meaning than those of weak agency, that is, characteristics and tasks that would normally be ascribed to humans. Shoham (Shoham, 1993) describes the mentalistic notions of knowledge, belief, intention and obligation that might be attributed to strong agents above and beyond those defined for a weak agent. Dennett (Dennett, 1987) describes such an agent as an intentional system.
An intentional system is a system that can best be described by the intentional stance: the ascription of abstract notions to systems for the purpose of describing how they work. For example, although the technical description of a computer system may be available, it is too complex to use when describing, say, why a menu appears when a mouse button is clicked over a certain area of the display. The intentional notions as described by Shoham are useful for providing convenient and familiar ways of describing, explaining and predicting the behaviour of complex systems; Dennett suggests that strong agents are best described by the intentional stance.
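As a purely illustrative sketch of the intentional stance (the data structures and the deliberation rule below are assumptions made for exposition, not Shoham's actual formalism), an agent's behaviour can be described and predicted in terms of its beliefs, obligations and intentions rather than its low-level implementation:

```python
# Hypothetical mentalistic state: beliefs, obligations and intentions explain
# *why* the agent acts, without reference to its internal machinery.

from dataclasses import dataclass, field


@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)        # what the agent takes to be true
    obligations: list = field(default_factory=list)  # commitments made to others
    intentions: list = field(default_factory=list)   # actions it has decided to perform


def deliberate(state: MentalState) -> MentalState:
    """Adopt an intention for every obligation whose precondition is believed to hold."""
    for task, precondition in state.obligations:
        if precondition in state.beliefs and task not in state.intentions:
            state.intentions.append(task)
    return state


state = MentalState(
    beliefs={"paper_accepted"},
    obligations=[("book_travel", "paper_accepted"), ("cancel_trip", "paper_rejected")],
)
print(deliberate(state).intentions)   # ['book_travel'] -- explained by belief + obligation
```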
Bates takes this concept of strong agents into more anthropomorphic areas by considering the implications of believable agents, that is, agents that try to model a human approach to their interaction with the user by displaying emotions (Bates, 1994). Additionally, Maes talks about representing agents visually by attaching an icon or a face to associate them with cartoon or computer characters (Maes, 1994). These types of agents are being used both in Human Computer Interaction (HCI) scenarios, to help the social interaction between a user and their agents, and in the computer gaming community, to produce virtual characters that react in believable and human ways to given situations.
3.2.3 Intelligence and Learning
Intelligence is an attribute that can be given to both weak and strong agents and determines how agents will react to situations and events. In most agent communities, intelligence is seen as the key factor that separates agents from ordinary pieces of software.
However, as Brooks stated (Brooks, 1990; Brooks, 1991; Brooks, 1991b), intelligence is `in the eye of the beholder', since the question `what is intelligence?' is as difficult to define as `what is an agent?'! He goes on to make two key points about intelligence: that `real' intelligence is situated in the world rather than in disembodied systems such as theorem provers or expert systems, and that intelligent behaviour emerges from an agent's interaction with its environment rather than being an innate, isolated property.
The problems of artificial intelligence seem to lie in two areas (Shardlow, 1990): the translation of the real world into an adequate description in time for it to be useful (transduction), and the representation of complex systems and entities and how to reason about this information in time for it to be useful (reasoning).
Edwards (Edwards, 1995) argues that it is not enough for agents to react intelligently to their environment, but they must also be able to adapt and alter to changes by learning. An agent that can learn through exposure to given situations and examples could be more useful to a user than an agent whose intelligence is fixed. However, it is far more difficult to predict the behaviour of an agent that can learn, since it is not possible to determine exactly what it will learn and how it will apply that information.
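The following toy sketch (the situations, actions and update rule are invented here for illustration and are not taken from Edwards) contrasts fixed intelligence with learning: the agent's future choice of action depends on the feedback it has received, which is precisely why its behaviour is harder to predict in advance.

```python
# Hypothetical learning agent: preferences over (situation, action) pairs are
# adjusted from feedback, so behaviour changes with experience.

from collections import defaultdict
import random


class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.scores = defaultdict(float)   # internal state: learned preferences

    def choose(self, situation):
        """Pick the best-scoring action so far; break ties randomly."""
        return max(self.actions,
                   key=lambda a: (self.scores[(situation, a)], random.random()))

    def feedback(self, situation, action, reward):
        """Learning: adjust preferences from exposure to examples and outcomes."""
        self.scores[(situation, action)] += reward


agent = LearningAgent(actions=["file_report", "ignore", "alert_user"])
agent.feedback("disk_nearly_full", "alert_user", +1.0)
agent.feedback("disk_nearly_full", "ignore", -1.0)
print(agent.choose("disk_nearly_full"))   # 'alert_user' after this feedback
```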
3.2.4 Other Attributes of Agents
A number of other attributes can be given to both weak and strong agents to augment or temper their functionality. These include, but are not limited to (Wooldridge et al., 1995):
- Mobility. The ability of an agent to move around an electronic network.
- Veracity. The assumption that an agent will not knowingly communicate false information.
- Benevolence. The assumption that an agent does not hold goals that conflict with those of its user and will always try to do what is asked of it.
- Rationality. The assumption that an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals from being achieved.
Goodwin formally defines these and other agent attributes using the Z formal specification method (Goodwin, 1993).
3.2.5 Summary
Agent classifications are useful in illustrating some of the ideas that lie behind the development of agents for both theoretical and real-world applications. Whether embodying weak or strong notions of agency, an agent needs to act in a manner that forges a trust relationship with the user. If an agent cannot complete a task, the user must be informed of what went wrong and why. Similarly, if the agent successfully completes a task, then the user must be informed of the details of how this was accomplished and the results generated, so that they can be verified.
Intelligence and learning appear to offer methods for building trust and believability into agents; intelligence allows agents to fulfil tasks competently, and learning allows agents to react to new situations in an intelligent manner. To ensure that an intelligent, learning agent does not grow outside of its remit, other agent attributes, such as benevolence, veracity and rationality, may need to be incorporated.
3.3 The Differing Views on Agents
The following is a brief taxonomy of the various perceptions that differing computer science disciplines hold about agents (Wooldridge et al., 1995).
3.3.1 The Traditional Agent
The traditional concept of agents began, not surprisingly, with the artificial intelligence community. It is a view based around agents being systems that can take input data about their environment, reason about it and (possibly) generate appropriate output responses (Kurzweil, 1990).
The ultimate goal of AI agents is to provide intelligence and reasoning capabilities that are comparable to that of human beings. As McCarthy (McCarthy, 1978) puts it:
"[Artificial Intelligence is] the science of making computers do things which if done by humans would require intelligence."
However, due to the difficulties of capturing and describing the essential qualities of intelligence, the AI community have come to recognise that agents are a suitable vehicle for expressing the desirable properties of artificial intelligence. Indeed, a common theme shared by most AI researchers is that it is this capacity for intelligence that differentiates agents from other, normal pieces of software.
Traditional AI architectures are generally based around three core philosophies:
- Symbolic. The world is reduced to a representation built from symbols, which can be combined to form structures and operated upon by processes that follow symbolically coded sets of instructions (Newell et al., 1976). Decisions regarding actions to perform are made through logical reasoning, based upon pattern matching and symbolic manipulation. Example systems include Homer (Vere et al., 1990), a robot submarine which explores a two-dimensional Seaworld, can respond to an 800-word spoken vocabulary and has a limited episodic memory, and Grate* (Jennings, 1992), a simulation of electricity transportation management where agents act in an individual, selfish and cooperative manner to achieve their intentions.
- Reactive. A major problem with symbolic AI is the processing power required to analyse the information about the real world, plan a suitable solution and then implement a chosen action. Critics of symbolic AI have advocated the use of reactive architectures: architectures where there is no complex representation of the real world in symbolic terms and where no symbolic reasoning is performed. The most vociferous critic, Rodney Brooks, has developed an approach based around a reactive model, called the subsumption architecture (Brooks, 1986). A subsumption architecture is a hierarchy of behaviours designed to accomplish tasks. Each behaviour competes with other behaviours in order to influence the actions of the agent; lower layers in the hierarchy represent primitive behavioural styles (for example, avoiding obstacles in the Homer Seaworld) and higher layers represent more abstract behaviours (for example, collecting an object and returning it to a given location). A simplified sketch of this layered arbitration is given after this list. The amount of computation required for these systems is very small when compared with symbolic AI systems. Another reactive architecture, based along the same lines as the subsumption architecture, is Pengi (Agre et al., 1987), a computer simulated game where routine tasks are encoded in low-level structures and are only updated to handle new problems that develop. The idea is that most decisions are routine and can therefore be performed quickly and efficiently.
- Hybrid. These types of architectures attempt to marry the best qualities of both symbolic and reactive approaches to AI. The method consists of building an agent system out of two (or more) subsystems: a symbolic world model which develops plans and makes decisions, and a reactive subsystem which is capable of reacting quickly to events without having to resort to complex symbolic manipulation. Examples of such hybrid systems are Touring Machines (Ferguson, 1992) and InteRRaP (Müller, 1995). In these architectures a layered model is employed; a symbolic AI engine sits at the top of the model handling long-term goals and a reactive AI engine sits at the bottom handling low-level reactions. The problem with such systems is how to manage the interactions between the layers.
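The following simplified Python sketch (an illustration only, not Brooks' actual subsumption implementation; the behaviours and percepts are invented) captures the layered, reactive style referred to in the Reactive bullet above: each behaviour may propose an action directly from the current percept, a fixed arbitration order decides which proposal controls the agent, and no symbolic world model or planning is involved.

```python
# Hypothetical layered reactive control: behaviours compete to influence the
# agent's actions, with primitive behaviours taking precedence.

def avoid_obstacle(percept):
    """Primitive, low-level behaviour."""
    if percept.get("obstacle_ahead"):
        return "turn_left"
    return None


def collect_object(percept):
    """More abstract behaviour: head for the target object."""
    if percept.get("object_visible"):
        return "move_towards_object"
    return None


def wander(percept):
    """Default behaviour when nothing else applies."""
    return "move_forward"


# Arbitration order: survival-critical behaviours override more abstract ones.
BEHAVIOURS = [avoid_obstacle, collect_object, wander]


def act(percept):
    for behaviour in BEHAVIOURS:
        action = behaviour(percept)
        if action is not None:
            return action


print(act({"obstacle_ahead": True, "object_visible": True}))   # 'turn_left'
print(act({"object_visible": True}))                           # 'move_towards_object'
```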
3.3.2 The Interface Agent
An interface agent is described by Maes (Maes, 1994b) as:
"...a personal assistant who is collaborating with the user in the same work environment."
Thus, interface agents assist the user in whatever tasks they are performing, perhaps by providing insight into specific situations or by suggesting alternative material from related areas of work.
To support this line of reasoning, the AI Laboratory at MIT has developed a prototype interface agent called News Tailor, or NewT (Sheth, 1994; Maes, 1994). A NewT agent is a USENET news filter that can be `trained' by giving it a series of examples that show which kinds of articles the user is interested in. From this, the NewT agent can search all news articles to try to find other articles which are similar to the ones initially indicated by the user. When the agent presents the other articles that it has found, the user gives feedback according to their applicability; thus, the NewT agent can widen or restrict its searching next time.
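A toy illustration of this training-by-example approach is sketched below (this is not the actual NewT algorithm or code; the keyword-profile representation is an assumption made purely for exposition): the agent builds a weighted keyword profile from example articles, scores new articles against that profile, and adjusts the profile according to the user's feedback.

```python
# Hypothetical relevance-feedback news filter in the spirit of the description
# above: learn a keyword profile from examples, score articles, adapt to feedback.

from collections import Counter


class NewsFilterAgent:
    def __init__(self):
        self.profile = Counter()   # internal state: learned keyword weights

    def train(self, example_article, weight=1.0):
        """Learn from an example article the user says they are interested in."""
        for word in example_article.lower().split():
            self.profile[word] += weight

    def score(self, article):
        """Similarity of an article to the learned profile."""
        return sum(self.profile[word] for word in article.lower().split())

    def feedback(self, article, liked):
        """Widen or restrict future searching according to the user's reaction."""
        self.train(article, weight=1.0 if liked else -1.0)


agent = NewsFilterAgent()
agent.train("mobile agents for distributed information management")
articles = ["recipes for pasta", "a survey of mobile agents"]
best = max(articles, key=agent.score)
print(best)                        # 'a survey of mobile agents'
agent.feedback(best, liked=True)   # reinforces the profile for next time
```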
Other interface agent systems include NewsWeeder (Lang, 1995), UNA and LAW (Green et al., 1995), WebWatcher (Armstrong et al., 1995) and LIRA (Balabanovic et al., 1995).
3.3.3 The Information Agent
An information agent is one that has access to a number of information resources and is able to collect and manipulate that information. Typically, it can communicate across the network to locate information resources to query or manipulate. For example, an information agent asked to find a particular paper might search a number of information resources and present the user with relevant FTP sites and WWW addresses.
The key quality of information agents lies in their ability to communicate with a wide range of information resources, ensuring that as much information as possible is processed to provide the user with the best results for their original request.
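The pattern can be sketched as follows (the resource names, URLs and query functions below are placeholders invented for illustration, not real services): the agent fans the user's request out to each information resource it knows about and merges whatever each one returns.

```python
# Hypothetical information agent: query several resources and combine the results.

def search_ftp_index(query):
    # Stand-in for querying an FTP archive's index.
    return [f"ftp://archive.example.org/{query}.ps"]


def search_web_catalogue(query):
    # Stand-in for querying a WWW catalogue.
    return [f"http://catalogue.example.org/papers?title={query}"]


RESOURCES = [search_ftp_index, search_web_catalogue]


def find_paper(query):
    """Ask every known resource and present the combined answers to the user."""
    results = []
    for resource in RESOURCES:
        results.extend(resource(query))
    return results


print(find_paper("software-agents"))
```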
Theoretical studies on how agents can utilise the information that they receive from different resources are presented by Levy (Levy et al., 1994) and Gruber (Gruber, 1991). A more practical application has been presented by Voorhees (Voorhees, 1994), who describes a prototype system called the Information Retrieval Agent (IRA) which can search for loosely specified articles from differing document repositories.
3.3.4 The Distributed Agent
Distributed agents (also known as multi-agent systems) are collections of agents that are considered together at the macro (social) level, rather than individually at the micro (agent) level. Distributed AI (DAI) (Bond et al., 1988) looks at how problems at a macro level can be broken down into agents at the micro level and how those agents can be made to co-operate and co-ordinate their activities to ensure that the problems are solved efficiently.
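A deliberately small sketch of this decomposition idea is given below (the problem, the splitting strategy and the worker agents are all invented for illustration): a macro-level task is divided into sub-tasks, each delegated to a micro-level agent, and the partial answers are then coordinated into a single result.

```python
# Hypothetical macro-to-micro decomposition: delegate sub-tasks to agents running
# concurrently, then combine their partial results.

from concurrent.futures import ThreadPoolExecutor


def worker_agent(sub_task):
    """A micro-level agent solving its own piece of the problem."""
    return sum(sub_task)   # stand-in for real problem solving


def solve(problem, n_agents=4):
    """Macro level: decompose, delegate to agents, then combine their answers."""
    chunk = max(1, len(problem) // n_agents)
    sub_tasks = [problem[i:i + chunk] for i in range(0, len(problem), chunk)]
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        partial_results = list(pool.map(worker_agent, sub_tasks))
    return sum(partial_results)   # coordination: merge the partial solutions


print(solve(list(range(100))))   # 4950
```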
DAI agent technology is being employed in many real-world situations, for example, air traffic control (Steeb et al., 1988), particle accelerator control (Jennings et al., 1993) and telecommunications network management (Weihmayer et al., 1994). However, a key problem with DAI is ensuring that problem decomposition and the subsequent communication and discussion between communities of agents can take place quickly enough to produce useful and achievable results.
3.4 Summary
Although useful for classifying agents, the views put forward by the differing computer science disciplines are not necessarily consistent with the notions of weak and strong agency as described by Wooldridge and Jennings.
Most agent technologies recognise that the more attributes an agent possesses, the more complex the task of specifying, designing and implementing that agent becomes. This helps to explain why there has been a general trend over the past ten years away from AI dreams (the HAL computer from 2001, for example) towards more realistic areas of actual applicability.
Agent technology as a whole appears to be moving towards agents that are useful to the user in everyday activities. It is hoped that by starting in the small, with relatively easily specified agents that have limited capabilities and limited intelligence/learning, the experience gained will show the way towards computing with agents in the large.
General progress has been made over recent years, especially in the area of information agents, and this appears to be where future research is heading. The following chapter discusses a particular aspect of agent technology, mobility, and describes how applicable current mobile agent systems are for distributed information management.