What is art for? Science shows it’s in the eye—and brain—of the beholder
The Nobel Prize-winning neuro-psychiatrist Eric Kandel explains what happens when we look at art
To what extent is a portrait shaped by the way in which it is perceived by the viewer? Photo: Thomas Ricker via Flickr
What is art for? In the most general sense, it is for the beholder. Let me put this issue into perspective.
At the turn of the century, in Vienna in 1900, Alois Riegl, the leader of the Vienna School of Art History, argued that art history would die unless it became more scientific, that the science it ought to relate itself to was psychology, and that the question it ought to address was the beholder’s share. The artist creates the painting, Riegl argued, and the beholder responds to it. Without the response of the beholder, art is incomplete. This is an obvious point, but it had not been stated in precisely these terms, nor highlighted as a key issue for experimental investigation.
A later generation of Riegl disciples, the Viennese trained art historians Ernst Kris and Ernst Gombrich, took up this challenge. They appreciated that understanding the viewer’s response to art—what Gombrich called the beholder’s share—is the natural bridge between the sciences and art, between psychology and portraiture.
Kris and Gombrich now asked the next logical question. To what extent is a portrait shaped by the way in which it is perceived by the viewer? To what degree is beauty in the eyes of the beholder? Kris argued that when any two of us look at the same painting we each respond to that same work in a slightly different way, because our brain is not a camera but a creativity machine. Thus, the beholder undergoes a creative experience that recapitulates in a minor way the creative experience of the artist.
Gombrich advanced this idea further by familiarising himself with the psychological literature on visual perception. He read Bishop Berkeley and realised that when we look at a face, all the information our retinas receive is the photons bouncing off that face. Yet despite this paucity of information, we have no difficulty recognising a face, and our friends recognise the same face pretty much the way we do. Clearly there must be other sources of information besides the photons bouncing off the face.
It was Hermann von Helmholtz, one of the pioneers of modern psychology, who first pointed out, at the end of the 19th century, that there are two other sources of information: bottom-up and top-down. Bottom-up information is supplied by conserved perceptual mechanisms, which the human brain has evolved over roughly the past six million years. There are certain built-in perceptual givens: approximations and guesses that are highly successful and that all of us use routinely. For example, when we see a source of light, we immediately assume it comes from above, because the sun is above us. But in addition to bottom-up information there is top-down information. Each of us has had different experiences, learned different things and seen different images of art, and therefore each of us responds to art in a different way.
These insights made several of us realise that it is possible to merge the psychology of the beholder’s share with its underlying biology (see, for example, Kandel 2012). This merger has been made possible over the past several decades by advances in the biology of perception, emotion, empathy and memory that began in the 1960s and continue to this day. Let me give you just one example, from portraiture.
A beginning in the understanding of the beholder’s share of portraiture came in 1947 from the German neurologist Joachim Bodamer. He treated three patients who had acquired face blindness through an injury to the inferior temporal cortex. He named this disorder prosopagnosia, from the Greek for face (prosopon) and lack of knowledge (agnosia).
Picking up on the work of Bodamer on the one hand and Hubel and Wiesel on the other, Charles Gross began in 1969 to examine single cells in the inferior temporal cortex of monkeys. Gross found, amazingly, that some cells responded specifically to people's hands, while other cells responded to their faces. The cells that responded to faces were not selective for any unique face but for the general category of faces. This suggested to Gross that a particular face, a particular grandmother, is represented by a small, specialised collection of nerve cells—an ensemble of grandmother cells, or proto-grandmother cells.
In 1992, Justine Sergent and her colleagues at the Montreal Neurological Institute used PET imaging and found that when normal subjects look at faces, the fusiform gyrus and the anterior temporal cortex of both hemispheres are activated. In 1997, Nancy Kanwisher at MIT used fMRI and also delineated a region in the inferior temporal lobe specialised for face recognition. This region, which she called the fusiform face area, becomes active when an average person looks at a face. When the same person looks at a house, the region does not respond, although a different region of the brain does. The fusiform face area even becomes active when the person simply imagines a face. In fact, Kanwisher could tell whether a person was thinking about a face or a house by observing which region of the brain became active.
To explore face recognition further, Doris Tsao and Winrich Freiwald combined the approaches of Kanwisher and those of Gross. In 2006, they used both fMRI and electrical recordings of individual nerve cells in the brain of monkeys. They used fMRI to determine which areas of the inferior temporal lobe become active when a monkey looks at a face, and they used electrical recordings to determine how the nerve cells in those areas respond to a face.
With fMRI, they pinpointed six regions in the monkey’s inferior temporal lobe that responded only to faces. They called these areas face patches. Face patches are small, about 3 millimetres in diameter, and they are arranged along an axis from the back of the inferior temporal lobe to the front, suggesting that they may be organised into a hierarchy. Tsao and Freiwald next positioned electrodes in each of the six regions to record signals from individual nerve cells. They found that cells in the face patches are specialised for processing faces; moreover, in the two middle face patches, an amazing 97% of the cells respond only to faces.
Tsao and Freiwald next studied the connections among the six face patches by imaging all six simultaneously and electrically stimulating just one of them. They found that activating one of the middle face patches caused nerve cells within the remaining five areas to become active also. This finding implies that all of the face recognition regions in the temporal lobe of the brain are interconnected: they seem to form a unified network that processes information about different aspects of the faces they see. The entire network of face patches appears to constitute a dedicated system for processing one high-level object category: faces.
Freiwald and Tsao then asked: What types of visual information does each of these six face patches process? To answer the question, they focused on the two middle face patches and found that the neurons there detect and differentiate faces using a strategy that combines parts-based and holistic, Gestalt principles. They showed monkeys drawings and pictures of faces with different shapes and orientations, and they discovered that the middle face patches are tuned to the geometry of facial features: that is, they detect the shape of the face. In addition, cells in these two regions respond to the orientation of the head and face: they are specialised for whole, upright faces.
Underscoring the behavioural finding that the brain most easily recognises upright faces, Freiwald and Tsao next found that, just as in humans, cells in the face patches of the temporal lobe respond more weakly and less specifically when a face is presented upside down than when it is presented right side up. Moreover, when the eyes are exaggerated, as in a cartoon, the cells respond more strongly.
The face patch system discovered by Tsao and Freiwald is one of the most important and surprising advances in analysing the visual system since Hubel and Wiesel’s classic contribution on the early stages of visual processing. It illustrates how brain science is beginning to give us some initial insight into the biological mechanisms of the beholder’s share.