The functional role of anthropomorphism in interfaces
Stuart Watt
Now at the Robert Gordon University, School of Computing, St. Andrew Street, Aberdeen.
Anthropomorphism is a key element in the ascription of agency in a computer interface. While a few studies have been carried out on the effect of using human faces in interfaces, there is, as yet, no principled theory of the causes or effects of anthropomorphism in a human-computer interface. This paper brings together psychological theory and experimental studies of anthropomorphism in psychology and HCI to provide such a theory and to develop a methodology for the design of expressive interfaces. The theory and methodology are illustrated through two worked examples.
As computers take more of the burden in assisting users, there has been a shift from manipulation of objects to management of agents (Kay & Goldberg, 1977). There has also been a corresponding tendency to endow these interfaces with specifically human-like traits to make this management easier. Because of this, anthropomorphism, the ascription of human qualities to non-human systems, has become a topic for study in interface design.
Implementation techniques which add human-like characteristics to interfaces are now widespread. Empirical studies, however, tell a different story: interfaces are not made easier to use simply by adding a human face (Walker, Sproull, & Subramani, 1994).
A key problem in the design of such expressive interfaces is that, to date, there is no theory of anthropomorphism as a psychological phenomenon in its own right, although it is quite freely adopted as an interface technique (Laurel, 1990, 1991).
In this paper we will develop a model of anthropomorphism that can be applied in interface design. We will draw on three main sources for this model: first, the HCI literature on anthropomorphism in interfaces; second, psychological studies of the phenomenon of anthropomorphism itself; and third, the literature on Turing testing. Turing tests are perhaps the most relevant, because they provide a framework for studying the ascription of a complex human quality, intelligence, to a machine.
The psychology of anthropomorphism
The psychological origins of anthropomorphism have not yet been adequately studied (Caporael, 1986). The strongest suggestion seems to be that it is rooted in Humphrey’s (1976) “natural psychology.” Humphrey’s suggestion is that the main extension of human minds over those of most other animals is a naturally evolved social, or transactional, kind of thinking which allows them to guess the thoughts of others. Unfortunately, this sometimes goes too far: “through a long history, men have, I believe, explored the transactional possibilities of countless of the things in their environment, and sometimes, Pygmalion-like, the things have come alive.” It is this that is the source of anthropomorphism: a natural human tendency to “conjure up social relationships with even the most unpromising material.”
This provides the first stage of our model: the tendency to anthropomorphise an object is roughly proportional to the human transactional predictiveness of that object. This is complicated by immediate and social contexts, but is generally less ‘logical’ than it might appear.
There have been a few psychological experiments on anthropomorphism. Eddy et al. (1993) showed that there is a correlation between the ascription of cognitive abilities and genetic similarity. They also showed that for common pet animals such as cats and dogs this ascription was increased.
We can identify two factors in this: we can call them similarity and familiarity, and formulate an initial prediction of when anthropomorphism will happen in an interface accordingly.
This anthropomorphic principle can be stated initially as follows: the amount by which the user anthropomorphises an agent depends on the perceived similarity of the agent to a human and on the familiarity of the user with the agent.
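Although the principle as stated claims only a qualitative dependency, it can be sketched as a relation; the functional form below is an illustrative assumption, not part of the original claim:

    A(u, a) = f\big(S(a),\, F(u, a)\big), \qquad \partial f / \partial S > 0, \quad \partial f / \partial F > 0

where A(u, a) is the degree to which user u anthropomorphises agent a, S(a) is the perceived similarity of the agent to a human, F(u, a) is the familiarity of the user with the agent, and f is assumed only to increase monotonically in both of its arguments.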
Turing tests allow these models to be taken further. Turing’s original proposal (Turing, 1950) was for a test which could replace the question “can machines think?” His test replaced this with a human-computer interaction experiment: in a teletype-mediated dialogue, can an observer distinguish between a computer and a human? Here, following Laurel (1990), we do not suggest that an agent interface which can pass the test is thereby usable; we use thought experiments derived from Turing’s to highlight some other factors that affect the ascription of human cognitive qualities.
Following Turing’s article, the most problematic variation on the test was Searle’s “Chinese Room” argument, which still leaves turbulence in artificial intelligence (Searle, 1980). In Searle’s variation, we are told what is inside the system (namely a human following a precise set of rules and facts) and we end up identifying with the human in the room. This shows a third major factor: if we know what is inside a system, and it is something we can anthropomorphise more easily, then we lose our ability to anthropomorphise the whole system. In HCI terms, we will find it harder to anthropomorphise a computer with agents inside than one without, because anthropomorphising the agents inside will prevent us from seeing the computer as an agent in its own right.
One running theme in replies to Searle was the “Robot Reply” (Harnad, 1991). Here, the room is put into the head of a robot, and the direct connection of sensors changes the interpretation of Searle’s challenge. For our purposes the most important property of this reply is that an increase in the richness of the sensory modality increases the tendency to anthropomorphise. Linguistic communication alone is certainly not enough for the ascription of human cognitive capabilities; paralinguistic communication may be more important, even for humans.
The third and last factor to emerge from the Turing tests is the effect of the social context, noted even by Turing (1950) himself. Harnad (1991) observed that “bodily appearance clearly matters far less now than it did in Turing’s day.” Time changes the human as well as the computer.
Table 1. Component factors affecting anthropomorphism
Combining the two factors from the psychological studies with these additional factors, we can derive a model of anthropomorphism involving five related factors. Table 1 shows these factors and their interactions derived from the studies outlined above.
It is from this table, and from the observation embodied in the anthropomorphic principle that ascription of cognitive states happens in proportion to these factors, that the methodology can be developed. These factors enable a reasonably accurate prediction of the level of cognitive complexity that will be attributed to an interface, and this can be balanced against the actual cognitive complexity of the system behind the interface. Getting this balance right prevents the mismatching which inhibits natural interaction.
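As a rough illustration of this balancing step, the sketch below scores an interface on the five factors and compares the predicted attribution with the actual capability of the system. The factor names follow the model; the 0-1 scales, the weights, and the linear combination are assumptions introduced purely for illustration, since the model claims only that attribution rises or falls with each factor.

    from dataclasses import dataclass

    @dataclass
    class InterfaceProfile:
        # All values on an illustrative 0-1 scale (hypothetical, not taken from the paper).
        similarity: float            # perceived human-likeness of the agent
        familiarity: float           # the user's familiarity with (or transferred to) the agent
        interaction_richness: float  # richness of the communication medium
        internals_visible: float     # how much the user knows about the components inside
        social_context: float        # how strongly the context invites social interpretation

    def predicted_attribution(p: InterfaceProfile) -> float:
        """Rough score for the cognitive complexity users will ascribe to the interface."""
        positive = (p.similarity + p.familiarity + p.interaction_richness + p.social_context) / 4
        return max(0.0, positive - 0.5 * p.internals_visible)  # visible internals inhibit ascription

    def mismatch(p: InterfaceProfile, actual_capability: float) -> float:
        """Gap between what users will come to expect and what the system can actually deliver."""
        return abs(predicted_attribution(p) - actual_capability)

    # A talking-head assistant backed by a simple rule base: high predicted attribution,
    # low actual capability, and therefore a large mismatch.
    talking_head = InterfaceProfile(similarity=0.9, familiarity=0.3, interaction_richness=0.8,
                                    internals_visible=0.1, social_context=0.7)
    print(mismatch(talking_head, actual_capability=0.2))  # roughly 0.43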
The similarity and familiarity factors are those which show up in empirical studies. One further complexity to note is that familiarity can be transferred. My familiarity with my cat means that I will transfer some of that familiarity to interface components which look like my cat. I will then tend to apply cat-management techniques to this interface which may not be wholly appropriate.
The medium of interaction also has an effect: human communication media promote ascription of cognitive processes, machine communication media inhibit it. The positive effect has been shown in HCI (Walker et al., 1994), and the negative effect in reverse Turing test studies, where a human pretends to be a machine (Hofstadter, 1985). The class running this experiment “was willing to view anything on a video terminal as mechanically produced, no matter how sophisticated.”
The negative effect of the user’s knowledge may be surprising. The point is that anthropomorphism can only happen at one level at a time. As the user becomes more familiar with the components of the system compared to the system as a whole, there is a tendency for the anthropomorphic effect to be shifted onto the system components.
Anthropomorphism in traditional interfaces
Anthropomorphism can happen even when the interface designer hasn’t intended it (Nass, Moon, Fogg, Reeves, & Dryer, 1995). Tognazzini (1992) notes of the Macintosh desktop interface: “the painful appearance of that trash can compels people to relieve it of its suffering when emptying it is the last thing they should be doing.” The trash can is almost unique in the Macintosh interface in that it changes shape not as a result of the user’s direct manipulation, but is changed by the system as a secondary effect. This, together with the almost biological nature of the change in appearance, is enough to induce an anthropomorphic effect which reduces the effectiveness of the interface.
These principles show that the ascription of human characteristics depends on five major factors. Of these, only three (similarity, familiarity, and interaction richness) can be controlled to any degree by an interface designer. This ascription is not a simple choice, but happens to a degree dependent upon these factors.
Anthropomorphism in an interface is not simply a problem in its own right: the problems occur when there is a mismatch between the user’s predictions, generated by that anthropomorphism, and the actual behaviour of the system. A methodological application of anthropomorphism is possible by finding the levels of similarity, familiarity, and interaction richness which minimise this mismatch. With today’s applications, this means that interfaces which use human faces and human-like interaction modalities are unlikely to work well. Animal faces and slightly dampened interaction are more likely to be successful, although animated versions of inanimate objects are probably most appropriate.
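A minimal sketch of this selection step, under the same illustrative assumptions as before (the candidate scores and the 0-1 scales are hypothetical, not measurements): several candidate presentations of the same underlying agent are compared, and the one whose predicted attribution lies closest to the agent’s actual capability is chosen.

    # Candidate presentations for the same underlying agent, scored (hypothetically) on
    # similarity, familiarity, and interaction richness: the three designer-controlled factors.
    candidates = {
        "human face":      (0.9, 0.3, 0.8),
        "animal face":     (0.5, 0.6, 0.5),
        "animated object": (0.2, 0.7, 0.3),
    }
    actual_capability = 0.2  # e.g. a simple mail-filtering agent

    def predicted(similarity: float, familiarity: float, richness: float) -> float:
        # A crude average; the model claims only that attribution increases with each factor.
        return (similarity + familiarity + richness) / 3

    best = min(candidates, key=lambda name: abs(predicted(*candidates[name]) - actual_capability))
    print(best)

With these illustrative numbers the animated object gives the smallest mismatch, in line with the suggestion above that animated versions of inanimate objects are probably the most appropriate choice for today’s applications.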
Even so, interfaces which might be anthropomorphised, whether or not they are intended to be so by their designers, should be carefully evaluated to ensure that problems like those associated with the Macintosh trash can do not arise.
Anthropomorphism is a complex psychological phenomenon in its own right, and one which needs more study before its effects on interface design can be predicted to any degree of accuracy. In the absence of a complete theory of anthropomorphism this study collates evidence from a number of studies in different disciplines to sketch a model of five principal factors. This model can successfully account for a wide range of the problems caused to interfaces by anthropomorphism. We therefore believe that it can be useful as a predictive tool in the interface design process.
The model shows two aspects of interface design. First, anthropomorphism can cause problems even in interfaces which do not set out to use human-like forms or media. Second, as interfaces of the future are developed which are more conversational and transactional, anthropomorphism is more likely to happen with or without the intervention of the designer. The balancing and focusing of this anthropomorphism will be key to the success or failure of these interfaces, and models like this one may have a valuable role in determining this.
References
Caporael, L. R. (1986). Anthropomorphism and Mechanomorphism: Two Faces of the Human Machine. Computers in Human Behavior, 2(3), 215-234.
Eddy, T. J., Gallup, G. G., & Povinelli, D. J. (1993). Attribution of Cognitive States to Animals: Anthropomorphism in Comparative Perspective. Journal of Social Issues, 49(1), 87-101.
Harnad, S. (1991). Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines, 1(1), 43-54.
Hofstadter, D. R. (1985). Metamagical Themas: Questing for the Essence of Mind and Pattern. New York: Basic Books.
Humphrey, N. K. (1976). The Social Function of Intellect. In P. P. G. Bateson & R. A. Hinde (Eds.), Growing Points in Ethology. Cambridge: Cambridge University Press.
Kay, A., & Goldberg, A. (1977, March). Personal Dynamic Media. Computer, 10, 31-41.
Laurel, B. (1990). Interface Agents: Metaphors With Character. In B. Laurel & S. J. Mountford (Eds.), The Art of Human-Computer Interface Design (pp. 355-365). Addison-Wesley.
Laurel, B. (1991). Computers as Theatre. Reading, Massachusetts: Addison-Wesley.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., & Dryer, C. (1995). Can Computer Personalities Be Human Personalities? Paper presented at CHI ’95.
Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3, 417-424.
Tognazzini, B. (1992). Tog on Interface. Addison-Wesley.
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433-460.
Walker, J. H., Sproull, L., & Subramani, R. (1994). Using a Human Face in an Interface. Paper presented at CHI ’94.
Contact: sw (at) comp.rgu.ac.uk; tel: +44 (0)1224 26 2723; fax: +44 (0)1224 26 2727