

Image: eyeless Real Doll

Essay: Anthropomorphobia

Are you familiar with the affliction? Anthropomorphobia is the fear of recognizing human characteristics in non-human objects. The term is a hybrid of two Greek-derived words: ‘anthropomorphic’ means ‘of human form’ and ‘phobia’ means ‘fear’. Although anthropomorphobia was originally rare – complaints were limited to fairs and amusement parks, with their moving dummies that laughed at visitors – the blurring boundary between people and products is leading to increasing problems. Complaints can be accompanied by irrational panic attacks, disdain, revulsion, and confusion about what it means to be human. Will anthropomorphobia eventually become public disease number one? Or can anthropomorphobia serve as a guiding principle in the evolution of humanity? Herewith, an exploration.

By KOERT VAN MENSVOORT

Exploring the Twilight between Person and Product

Luxury cars with blinking headlight eyes. Perfume bottles shaped like beautiful ladies. Grandma’s face stretched smooth. Carefully selected designer babies. The Senseo coffeemaker shaped – subtly, but nonetheless – like a serving butler. And, of course, there are the robots, mowing grass, vacuuming living rooms, and even caring for elderly people with dementia. Today more and more products are designed to exhibit anthropomorphic – that is, human – behaviour. At the same time, as a consequence of increasing technological capabilities, people are being more and more radically cultivated and turned into products. This essay will investigate the blurring of the boundary between people and products. My ultimate argument will be that we can use our relationship to anthropomorphobia as a guiding principle in our future evolution.

Introduction: Anthropomorphism for Dummies

Before we take a closer look at the tension between people and products, here is a general introduction to anthropomorphism, that is, the human urge to recognise people in practically everything. Researchers distinguish various types of anthropomorphism (DiSalvo, Gemperle and Forlizzi, 2007). The most obvious examples – cartoon characters, faces in clouds, teddy bears – fall into the category of 1) structural anthropomorphism, evoked by objects that show visible physical similarities to human beings. Alongside structural anthropomorphism, three other types are identified. 2) Gestural anthropomorphism has to do with movements or postures that suggest human action or expression. An example is provided by the living lamp in Pixar’s short animated film, which does not look like a person but becomes human through its movements. 3) Character anthropomorphism relates to the exhibition of humanlike qualities or habits – think of a stubborn car that doesn’t want to start. The last type, 4) aware anthropomorphism, has to do with the suggestion of a human capacity for thought and intent. Famous examples are provided by the HAL 9000 spaceship computer in the film 2001: A Space Odyssey and the intelligent car KITT in the TV series Knight Rider.

Besides being aware that anthropomorphism can take different forms, we must keep in mind that it is a human characteristic, not a quality of the anthropomorphised object or creature per se: the fact that we recognise human traits in objects in no way means those objects are actually human, or even designed with the intention of seeming that way. Anthropomorphism is an extremely subjective business. Research has shown that how, and to what degree, we experience anthropomorphism is extremely personal – what seems anthropomorphic to one person may not to another, or may seem much less so (Gooren, 2009).

Blurring The Line Between People And Products

To understand anthropomorphobia – the fear of human characteristics in non-human objects – we must begin by studying the boundary between people and products. Our hypothesis will be that anthropomorphobia occurs when this boundary is transgressed. This can happen in two ways: 1) products or objects can exhibit human behaviour, and 2) people can act like products. We will explore both sides of this front line, beginning with the growing phenomenon of humanoid products.

Products as People

To understand anthropomorphobia we must begin by studying the boundary between people and products.

The question of whether and how anthropomorphism should be applied in product design has long been a matter of debate among researchers and product designers. “You shouldn’t anthropomorphise computers, they don’t like it” is a classic and frequently made joke among interaction designers; the punch line rests on the knowledge that people will always, to a greater or lesser degree, ascribe human attributes to products, whether or not they are designed to exhibit them – evidently it is simply human nature to project our own characteristics onto just about everything (Reeves & Nass, 1996).

Some researchers argue that the deliberate evocation of anthropomorphism in product design must always be avoided because it generates unrealistic expectations, makes human-product interaction unnecessarily messy and complex, and stands in the way of the development of genuinely powerful tools (Shneiderman, 1992). Others argue that the failure of anthropomorphic products is simply a consequence of poor implementation and that anthropomorphism, if applied correctly, can offer an important advantage because it makes use of social models people already have access to (Harris and Loewen, 2002; Murano, 2006; DiSalvo & Gemperle, 2003). A commonly used guiding principle among robot builders is the so-called uncanny valley theory (Mori, 1970), which, briefly summarised, says people can deal fine with anthropomorphic products as long as they’re obviously not fully fledged people – e.g., cartoon characters and robot dogs. However, when a humanoid robot looks too much like a person and we can still tell it’s not one, an uncanny effect arises, causing strong feelings of revulsion – in other words, anthropomorphobia (Macdorman et al., 2009).

Most researchers agree that anthropomorphism can be advantageous as well as dangerous. On the one hand, it can encourage an empathic relationship between the user and the product. If the expectations raised are not met, however, disappointment and incomprehension can result. Personality, cultural background and specific context can also influence one’s perception of the product, increasing the chance of miscommunication further.

Although no consensus exists on the application of anthropomorphism in product design and there is no generally accepted theory on the subject, technology cheerfully marches on. We are therefore seeing increasing numbers of advanced products that, whether or not as a direct consequence of artificial intelligence, show ever more anthropomorphic characteristics. The friendly soft drink machine; the coffeemaker that says good morning and politely lets you know when it needs cleaning. A robot that scrubs the floor, and one that looks after the children. Would you entrust your kids to a robot? Maybe you’d rather not. But why? Is it possible that you’re suffering from a touch of anthropomorphobia? Consciously or unconsciously, many people feel uneasy when products act like people. Anthropomorphobia is evidently a deep-seated human response – but why? Looking at the phobia as it relates to ‘products becoming people’, broadly speaking, we can identify two possible causes:

1) Anthropomorphobia is a reaction to the inadequate quality of the anthropomorphic products we encounter.

2) People fundamentally dislike products acting like humans because it undermines our specialness as people: if an object can be human, then what am I good for?

The coffeemaker that says good morning and politely lets you know when it needs cleaning. A robot that scrubs the floor, and one that looks after the children.

Champions of anthropomorphic objects – such as the people who build humanoid robots – will subscribe to the first explanation, while opponents will feel more affinity for the second. What’s difficult about the debate is that neither explanation is easy to prove or to disprove. Whenever an anthropomorphic product makes people uneasy, the advocates simply respond that they will develop a newer, cleverer version soon that will be accepted. Conversely, opponents will keep finding new reasons to reject anthropomorphic products. Take the chess computer – an instance of aware anthropomorphism, like HAL 9000. Thirty years ago, people thought that when a chess computer was able to beat a grandmaster, it would mean computers had achieved a human level of intelligence. But when world champion Garry Kasparov was finally vanquished in 1997 by IBM’s monster computer Deep Blue, the opponents calmly moved the goalposts, proposing that chess required merely a limited kind of intelligence and human intelligence as a whole entailed much more than that – emotional intelligence, bodily intelligence, creative intelligence, and so on. Never fear: computers couldn’t touch human beings, even if they could beat us at chess! Then again, the nice thing about this game of leapfrog is that through our attempts to create humanoid products we continue to refine our definition of what a human being is – in copying ourselves, we come to know ourselves.

Where will it all end? We can only speculate. Researcher David Levy (2007) predicts that marriage between robots and humans will be legal by the end of the 21st century. To people born in the 20th century, this may sound highly strange. And yet, if we think for a minute, we realise the idea of legal gay marriage might have sounded equally impossible and undesirable to our great-grandparents born in the 19th century. Boundaries are blurring; norms are shifting. I’m not personally interested in hopping into bed with a sophisticated sex doll, but nor am I especially bothered if other people are. Robot sex has been a secret fantasy of both men and women for decades, and although I don’t expect it will go mainstream anytime soon, I think we should allow each other our placebos. Actually, I’m more worried about something else: whether marrying a normal person will still be possible at the end of the 21st century. Because if we look at the increasing technologization of human beings and extrapolate into the future, it seems far from certain that normal people will still exist by then. This brings us to the second cause of anthropomorphobia.

People as Products

We have seen that more and more products in our everyday environment are being designed to act like people. As described earlier, the boundary between people and products is also being transgressed in the other direction: people are behaving as if they were products. I use the term ‘product’ in the sense of something that is functionally designed, manufactured, and carefully placed on the market.

It is becoming less and less taboo to consider the body as a medium, something that must be shaped, upgraded and produced.

The contemporary social pressure on people to design and produce themselves is difficult to overestimate. Have you put together a personal marketing plan yet? If not, I wouldn’t wait too long. Hairstyles, fashion, body corrections, smart drugs, Botox and Facebook profiles are just a few of the self-cultivating tools people use in the effort to design themselves − often in new, improved versions.

It is becoming less and less taboo to consider the body as a medium, something that must be shaped, upgraded and produced. Photoshopped models in lifestyle magazines show us how successful people are supposed to look. Performance-enhancing drugs help to make us just that little bit more alert than others. Some of our fellow human beings are even going so far in their self-cultivation that others are questioning whether they are still actually human − think, for example, of the uneasiness provoked by excessive plastic surgery.

The ultimate example of the commodified human being is the so-called designer baby, whose genetic profile is selected or manipulated in advance in order to ensure the absence or presence of certain genetic traits. Designer babies are a rich subject for science fiction, but to an increasing degree they are also science fact. “Doctor, I’d like a child with blond hair, no Down’s Syndrome and a minimal chance of Alzheimer’s, please”. An important criticism of the practice of creating designer babies concerns the fact that these (not-yet-born) people do not get to choose their own traits but are born as products, dependent on parents and doctors, who are themselves under various social pressures.

In general, the cultivation of people appears chiefly to be the consequence of social pressure, implicit or explicit. The young woman with breast implants is trying to measure up to visual culture’s current beauty ideal. The Ritalin-popping ADHD child is calmed down so he or she can function within the artificial environment of the classroom. The ageing lady gets Botox injections in conformance with society’s idealisation of young women. People cultivate themselves in all kinds of ways in an effort to become successful human beings within the norms of the societies they live in. What those norms are is heavily dependent on time and place.

Ever Met a Normal Human Being? What Did You Think of Them?

At the beginning of the 1990s, shortly after the fall of the Berlin Wall, I was in a European airport. The Cold War had just ended. Waiting to check in, I was standing between two queues for other flights, one of which was going to the United States – Los Angeles, I think – and the other to Bucharest, Romania. The striking difference between the people in the two queues made a powerful impression on me. In the queue for the United States stood a film crew and a Hollywood actor, who had been in picturesque Europe filming a romantic comedy whose name I have since forgotten. There were slightly too thin, stretched-tight yet elegantly dressed “Hello, how are you?” women and friendly yet superficially smiling white-toothed men like the ones I had seen in Gillette commercials. The whole thing made a sophisticated yet somewhat artificial Barbie-and-Ken-like impression. The contrast with the queue for the Eastern European flight was enormous. The latter consisted of proud but bony people in grey fur coats with grown-out haircuts and too many suitcases – many times more authentic, but shabby verging on animal (I know that today Bucharest is a hip, fashionable city, but in 1990, just after the fall of the Wall, things were different).

As a Western European (deodorant, highlights, no Botox yet), I felt somewhere in between, with enough distance to reflect. Never before had I been so keenly aware of how relative our ideas about what a ‘normal’ human being is really are. Someone from the Middle Ages probably would have considered the Romanians’ suitcases and fur coats unbelievably sophisticated. From the perspective of a cave-dweller, we would scarcely be recognisable as humans. I wouldn’t be surprised if a caveperson experienced strong feelings of anthropomorphobia at the sight of the lines in the airport and presumed it was a landing zone for post-human aliens from a faraway planet.

Humans As Mutants

Throughout our history, to a greater or lesser degree, all of us human beings have been cultivated, domesticated, made into products. This need to cultivate people is probably as old as we are, as is opposition to it. It’s tempting to think that, after evolving out of the primordial soup into mammals, then upright apes, and finally the intelligent animals we are today, we humans have reached the end of our development. Of course, this is not the case. Evolution never ends. It will go on, and people will continue to change in the future. But that does not mean we will cease to be people, as is implied in terms like ‘transhuman’ and ‘posthuman’ (Ettinger, 1974; Warwick, 2004; Bostrom, 2005). It is more likely that our ideas about what a normal human being is will change along with us.

We should prevent people from becoming unable to recognize each other as human.

The idea that technology will determine our evolutionary future is by no means new. During its evolution over the past two hundred thousand years, Homo sapiens has distinguished itself from other, now extinct humanoids, such as Homo habilis, Homo erectus, Homo ergaster and the Neanderthal, by its inventive, intensive use of technology. This has afforded Homo sapiens an evolutionary advantage that has led us, rather than the stronger and more solidly built Neanderthal, to become the planet’s dominant species. From this perspective, for technology to play a role in our evolutionary future would not be unnatural but in fact completely consistent with who we are. Since the dawn of our existence, human beings have been coevolving with the technology they produce. Or, as Arnold Gehlen (1961) put it, we are by nature technological creatures.

Because only one humanoid species walks the earth today, it is difficult to imagine what kind of relationships, if any, different kinds of humans living contemporaneously in the past might have had with each other. Perhaps Neanderthals considered Homo sapiens feeble, unnatural, creepy nerds, wholly dependent on their technological toys. A similar feeling could overcome us when we encounter technologically ‘improved’ individuals of our own species. There is a good chance that we will see them in the first place as artificial individuals degraded to the status of products and that they will inspire violent feelings of anthropomorphobia. This, however, will not negate their existence or their potential evolutionary advantage.

Human Enhancement

If the promises around up-and-coming bio-, nano-, info-, and neurotechnologies are kept, we can look forward to seeing a rich assortment of mutated humans. There will be people with implanted RFID chips (there already are), people with fashionably rebuilt bodies (they, too, exist and are becoming the norm in some quarters), people with tissue-engineered heart valves (they exist), people with artificial blood cells that absorb twice as much oxygen (expected on the cycling circuit), test-tube babies (exist), people with tattooed electronic connections for neuro-implants (not yet the norm, although our depilated bodies are ready for them), natural-born soldiers created for secret military projects (rumour has it they exist), and, of course, clones – Mozarts to play music in holiday parks and Einsteins who will take your job (science fiction, for now, and perhaps not a great idea).

It is true that not everything that can happen has to, or will. But when something is technically possible in countless laboratories and clinics in the world (as many of these technologies are), a considerable number of people view them as useful, and drawing up enforceable legislation around them is practically impossible, then the question is not whether but when and how it will happen (Stock, 2002). It would be naive to believe we will reach a consensus about the evolutionary future of humanity. We will not. The subject affects us too deeply, and the various positions are too closely linked to cultural traditions, philosophies of life, religion and politics. Some will see this situation as a monstrous thing, a terrible nadir, perhaps even the end of humanity. Others will say, “This is wonderful. We’re at the apex of human ingenuity. This will improve the human condition”. The truth probably lies somewhere in between. What is certain is that we are playing with fire, and that not only our future but also our descendants’ depends on it. But we must realise that playing with fire is simply something we do as people, part of what makes us human.

While the idea that technology should not influence human evolution constitutes a denial of human nature, it would fly in the face of human dignity to immediately make everything we can imagine reality. The crucial question is: How can we chart a course between rigidity and recklessness with respect to our own evolutionary future?

Anthropomorphobia as a Guideline

Let us return to the kernel of my argument. I believe the concept of anthropomorphobia can help us to find a balanced way of dealing with the issue of tinkering with people. There are two sides to anthropomorphobia that proponents as well as opponents of tinkering have to deal with. On the one hand, transhumanists, techno-utopians, humanoid builders, and fans of improving humanity need to realise that their visions and creations can elicit powerful emotional reactions and acute anthropomorphobia in many people. Not everyone is ready to accept being surrounded by humans with plastic faces, electrically controlled limbs and microchip implants – if only because they cannot afford these upgrades. Alongside the improvements to the human condition that proponents assume, we should realise that the uncritical application of people-enhancing technologies can cause profound alienation between individuals, leading overall to a worsening rather than an improvement of the human condition.

Understanding anthropomorphobia can guide us in our evolutionary future.

On the other hand, those who oppose all tinkering must realise anthropomorphobia is a phobia. It is a narrowing of consciousness that can easily be placed on the same list as xenophobia, racism and discrimination. Just as various evolutionary explanations can be proposed for anthropomorphobia as well as xenophobia, racism and discrimination, it is the business of civilisation to channel these feelings. Acceptance and respect for one’s fellow human beings are at the root of a well-functioning society.

In conclusion, I would like to argue that understanding anthropomorphobia can guide us in our evolutionary future. I would like to propose a simple general maxim: Prevent anthropomorphobia where possible. We should prevent people from having to live in a world where they are constantly confused about what it means to be human. We should prevent people from becoming unable to recognise each other as human.

The mere fact that an intelligent scientist can make a robot clerk to sell train tickets doesn’t mean a robot is the best solution. A simple ticket machine that doesn’t pretend to be anything more than what it is could work much better. An ageing movie star might realise she will alienate viewers if she does not call a halt to the unbridled plastic surgeries that are slowly but surely turning her into a life-sized Barbie – her audience will derive much more pleasure from seeing her get older and watching her beauty ripen. The 17-year-old boy who loses his legs in a tragic accident should think carefully before getting measured for purple octopus attachments, although that doesn’t mean he should necessarily get the standard flesh-toned prosthesis his overbearing anthropomorphobic mother would prefer. Awareness and discussion around anthropomorphobia can provide us with a framework for making decisions about the degree to which we wish to view the human being as a medium we can shape, reconstruct and improve – about which limits it is socially acceptable to transgress, and when.

I can already hear critics replying that although the maxim ‘prevent anthropomorphobia’ may sound good, anthropomorphobia is impossible to measure and therefore the maxim is useless. It is true that there is no ‘anthropomorphometric’ for objectively measuring how anthropomorphic a specific phenomenon is and how uneasy it makes people. But I would argue that this is a good thing. Anthropomorphobia is a completely human-centred term, i.e., it is people who determine what makes them uncomfortable and what doesn’t. Anthropomorphobia is therefore a dynamic and enduring term that can change with time, and with us. For we will change – that much is certain.

This essay was originally published in the Next Nature Book. Image via Amusing Planet

References

Bostrom, N. ‘In Defence of Posthuman Dignity’. Bioethics, Vol. 19, No. 3, 2005, pp. 202–214.

DiSalvo, C. and Gemperle, F. ‘From Seduction to Fulfillment: The Use of Anthropomorphic Form in Design’. Proceedings of the 2003 International Conference on Designing Pleasurable Products and Interfaces (DPPI ’03), ACM, 2003.

DiSalvo, C., Gemperle, F., and Forlizzi, J. Imitating the Human Form: Four Kinds of Anthropomorphic Form. 2007.

Duffy, B.R. ‘Anthropomorphism and the Social Robot’, Robotics and Autonomous Systems, vol. 42, 2003, pp. 177–190.

Ettinger, R. Man into Superman. Avon, 1974.

Gehlen, A. Man: His Nature and Place in the World. Columbia University Press, 1988.

Gooren, D. Anthropomorphism & Neuroticism: Fear and the Human Form. Eindhoven: Eindhoven University of Technology, 2009.

Haraway, D. Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, 1991.

Harris, R. and Loewen, P. (Anti-) Anthropomorphism and Interface Design. Toronto: Canadian Association of Teachers of Technical Writing, 2002.

Levy, D. Love and Sex with Robots: The Evolution of Human-Robot Relationships. Harper, 2007.

Macdorman, K., Green, R., Ho, C. and Koch, C. ‘Too Real for Comfort? Uncanny Responses to Computer Generated Faces’. Computers in Human Behavior, vol. 25, 2009, pp. 695–710.

Murano, P. ‘Why Anthropomorphic User Interface Feedback Can Be Effective and Preferred by Users’. Enterprise Information Systems, vol. VII, 2006, pp. 241–248.

Mori, M. ‘The Uncanny Valley’. Energy, vol. 7, no. 4, 1970, pp. 33–35.

Reeves, B. and Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge: Cambridge University Press, 1996.

Shneiderman, B. ‘Anthropomorphism: From Eliza to Terminator’, Proceedings of CHI ’92, 1992, pp. 67–70.

Stock, G. Redesigning Humans: Our Inevitable Genetic Future. Boston: Houghton Mifflin, 2002.

Warwick, K. I, Cyborg. University of Illinois Press, 2004.
