
The New York Times, May 11, 2010, by Paul Bloom  –  Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.

This incident occurred in one of several psychology studies that I have been involved with at the Infant Cognition Center at Yale University in collaboration with my colleague (and wife), Karen Wynn, who runs the lab, and a graduate student, Kiley Hamlin, who is the lead author of the studies. We are one of a handful of research teams around the world exploring the moral life of babies.

Like many scientists and humanists, I have long been fascinated by the capacities and inclinations of babies and children. The mental life of young humans not only is an interesting topic in its own right; it also raises — and can help answer — fundamental questions of philosophy and psychology, including how biological evolution and cultural experience conspire to shape human nature. In graduate school, I studied early language development and later moved on to fairly traditional topics in cognitive development, like how we come to understand the minds of other people — what they know, want and experience.

But the current work I’m involved in, on baby morality, might seem like a perverse and misguided next step. Why would anyone even entertain the thought of babies as moral beings? From Sigmund Freud to Jean Piaget to Lawrence Kohlberg, psychologists have long argued that we begin life as amoral animals. One important task of society, particularly of parents, is to turn babies into civilized beings — social creatures who can experience empathy, guilt and shame; who can override selfish impulses in the name of higher principles; and who will respond with outrage to unfairness and injustice. Many parents and educators would endorse a view of infants and toddlers close to that of a recent Onion headline: “New Study Reveals Most Children Unrepentant Sociopaths.” If children enter the world already equipped with moral notions, why is it that we have to work so hard to humanize them?

A growing body of evidence, though, suggests that humans do have a rudimentary moral sense from the very start of life. With the help of well-designed experiments, you can see glimmers of moral thought, moral judgment and moral feeling even in the first year of life. Some sense of good and evil seems to be bred in the bone. Which is not to say that parents are wrong to concern themselves with moral development or that their interactions with their children are a waste of time. Socialization is critically important. But this is not because babies and young children lack a sense of right and wrong; it’s because the sense of right and wrong that they naturally possess diverges in important ways from what we adults would want it to be.

Smart Babies
Babies seem spastic in their actions, undisciplined in their attention. In 1762, Jean-Jacques Rousseau called the baby “a perfect idiot,” and in 1890 William James famously described a baby’s mental life as “one great blooming, buzzing confusion.” A sympathetic parent might see the spark of consciousness in a baby’s large eyes and eagerly accept the popular claim that babies are wonderful learners, but it is hard to avoid the impression that they begin as ignorant as bread loaves. Many developmental psychologists will tell you that the ignorance of human babies extends well into childhood. For many years the conventional view was that young humans take a surprisingly long time to learn basic facts about the physical world (like that objects continue to exist once they are out of sight) and basic facts about people (like that they have beliefs and desires and goals) — let alone how long it takes them to learn about morality.

I am admittedly biased, but I think one of the great discoveries in modern psychology is that this view of babies is mistaken.

A reason this view has persisted is that, for many years, scientists weren’t sure how to go about studying the mental life of babies. It’s a challenge to study the cognitive abilities of any creature that lacks language, but human babies present an additional difficulty, because, even compared to rats or birds, they are behaviorally limited: they can’t run mazes or peck at levers. In the 1980s, however, psychologists interested in exploring how much babies know began making use of one of the few behaviors that young babies can control: the movement of their eyes. The eyes are a window to the baby’s soul. As adults do, when babies see something that they find interesting or surprising, they tend to look at it longer than they would at something they find uninteresting or expected. And when given a choice between two things to look at, babies usually opt to look at the more pleasing thing. You can use “looking time,” then, as a rough but reliable proxy for what captures babies’ attention: what babies are surprised by or what babies like.
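
(A note for the technically minded: the group-level logic of a looking-time study is simple enough to sketch in a few lines of code. What follows is purely illustrative — the numbers are hypothetical, Python with the SciPy library is assumed, and none of the video-coding machinery of a real infant lab is shown — but it captures the question the method puts to the data: do babies, as a group, look reliably longer at one kind of event than at another?

    # Illustrative sketch only: hypothetical looking times, not data from any study.
    # Each baby contributes one looking time for an "expected" event and one for
    # an "unexpected" event; a paired test asks whether the unexpected events
    # hold attention longer across the group.
    from scipy import stats

    expected = [6.1, 5.4, 7.0, 4.8, 5.9, 6.3, 5.1, 6.7]     # seconds
    unexpected = [9.2, 7.8, 8.5, 7.1, 9.9, 8.0, 7.4, 10.3]  # seconds

    result = stats.ttest_rel(unexpected, expected)
    print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

The point of the pairing is that each baby serves as his or her own control; what matters is not how long any individual baby looks but whether the difference is consistent across babies.)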

The studies in the 1980s that made use of this methodology were able to discover surprising things about what babies know about the nature and workings of physical objects — a baby’s “naïve physics.” Psychologists — most notably Elizabeth Spelke and Renée Baillargeon — conducted studies that essentially involved showing babies magic tricks, events that seemed to violate some law of the universe: you remove the supports from beneath a block and it floats in midair, unsupported; an object disappears and then reappears in another location; a box is placed behind a screen, the screen falls backward into empty space. Like adults, babies tend to linger on such scenes — they look longer at them than at scenes that are identical in all regards except that they don’t violate physical laws. This suggests that babies have expectations about how objects should behave. A vast body of research now suggests that — contrary to what was taught for decades to legions of psychology undergraduates — babies think of objects largely as adults do, as connected masses that move as units, that are solid and subject to gravity and that move in continuous paths through space and time.

Other studies, starting with a 1992 paper by my wife, Karen, have found that babies can do rudimentary math with objects. The demonstration is simple. Show a baby an empty stage. Raise a screen to obscure part of the stage. In view of the baby, put a Mickey Mouse doll behind the screen. Then put another Mickey Mouse doll behind the screen. Now drop the screen. Adults expect two dolls — and so do 5-month-olds: if the screen drops to reveal one or three dolls, the babies look longer, in surprise, than they do if the screen drops to reveal two.

A second wave of studies used looking-time methods to explore what babies know about the minds of others — a baby’s “naïve psychology.” Psychologists had known for a while that even the youngest of babies treat people differently from inanimate objects. Babies like to look at faces; they mimic them, they smile at them. They expect engagement: if a moving object becomes still, they merely lose interest; if a person’s face becomes still, however, they become distressed.

But the new studies found that babies have an actual understanding of mental life: they have some grasp of how people think and why they act as they do. The studies showed that, though babies expect inanimate objects to move as the result of push-pull interactions, they expect people to move rationally in accordance with their beliefs and desires: babies show surprise when someone takes a roundabout path to something he wants. They expect someone who reaches for an object to reach for the same object later, even if its location has changed. And well before their 2nd birthdays, babies are sharp enough to know that other people can have false beliefs. The psychologists Kristine Onishi and Renée Baillargeon have found that 15-month-olds expect that if a person sees an object in one box, and then the object is moved to another box when the person isn’t looking, the person will later reach into the box where he first saw the object, not the box where it actually is. That is, toddlers have a mental model not merely of the world but of the world as understood by someone else.

These discoveries inevitably raise a question: If babies have such a rich understanding of objects and people so early in life, why do they seem so ignorant and helpless? Why don’t they put their knowledge to more active use? One possible answer is that these capacities are the psychological equivalent of physical traits like testicles or ovaries, which are formed in infancy and then sit around, useless, for years and years. Another possibility is that babies do, in fact, use their knowledge from Day 1, not for action but for learning. One lesson from the study of artificial intelligence (and from cognitive science more generally) is that an empty head learns nothing: a system that is capable of rapidly absorbing information needs to have some prewired understanding of what to pay attention to and what generalizations to make. Babies might start off smart, then, because it enables them to get smarter.

Nice Babies
Psychologists like myself who are interested in the cognitive capacities of babies and toddlers are now turning our attention to whether babies have a “naïve morality.” But there is reason to proceed with caution. Morality, after all, is a different sort of affair than physics or psychology. The truths of physics and psychology are universal: objects obey the same physical laws everywhere; and people everywhere have minds, goals, desires and beliefs. But the existence of a universal moral code is a highly controversial claim; there is considerable evidence for wide variation from society to society.

In the journal Science a couple of months ago, the psychologist Joseph Henrich and several of his colleagues reported a cross-cultural study of 15 diverse populations and found that people’s propensities to behave kindly to strangers and to punish unfairness are strongest in large-scale communities with market economies, where such norms are essential to the smooth functioning of trade. Henrich and his colleagues concluded that much of the morality that humans possess is a consequence of the culture in which they are raised, not their innate capacities.

At the same time, though, people everywhere have some sense of right and wrong. You won’t find a society where people don’t have some notion of fairness, don’t put some value on loyalty and kindness, don’t distinguish between acts of cruelty and innocent mistakes, don’t categorize people as nasty or nice. These universals make evolutionary sense. Since natural selection works, at least in part, at a genetic level, there is a logic to being instinctively kind to our kin, whose survival and well-being promote the spread of our genes. More than that, it is often beneficial for humans to work together with other humans, which means that it would have been adaptive to evaluate the niceness and nastiness of other individuals. All this is reason to consider the innateness of at least basic moral concepts.

In addition, scientists know that certain compassionate feelings and impulses emerge early and apparently universally in human development. These are not moral concepts, exactly, but they seem closely related. One example is feeling pain at the pain of others. In his book “The Expression of the Emotions in Man and Animals,” Charles Darwin, a keen observer of human nature, tells the story of how his first son, William, was fooled by his nurse into expressing sympathy at a very young age: “When a few days over 6 months old, his nurse pretended to cry, and I saw that his face instantly assumed a melancholy expression, with the corners of his mouth strongly depressed.”

There seems to be something evolutionarily ancient to this empathetic response. If you want to cause a rat distress, you can expose it to the screams of other rats. Human babies, notably, cry more to the cries of other babies than to tape recordings of their own crying, suggesting that they are responding to their awareness of someone else’s pain, not merely to a certain pitch of sound. Babies also seem to want to assuage the pain of others: once they have enough physical competence (starting at about 1 year old), they soothe others in distress by stroking and touching or by handing over a bottle or toy. There are individual differences, to be sure, in the intensity of response: some babies are great soothers; others don’t care as much. But the basic impulse seems common to all. (Some other primates behave similarly: the primatologist Frans de Waal reports that chimpanzees “will approach a victim of attack, put an arm around her and gently pat her back or groom her.” Monkeys, on the other hand, tend to shun victims of aggression.)

Some recent studies have explored the existence of behavior in toddlers that is “altruistic” in an even stronger sense — like when they give up their time and energy to help a stranger accomplish a difficult task. The psychologists Felix Warneken and Michael Tomasello have put toddlers in situations in which an adult is struggling to get something done, like opening a cabinet door with his hands full or trying to get to an object out of reach. The toddlers tend to spontaneously help, even without any prompting, encouragement or reward.

Is any of the above behavior recognizable as moral conduct? Not obviously so. Moral ideas seem to involve much more than mere compassion. Morality, for instance, is closely related to notions of praise and blame: we want to reward what we see as good and punish what we see as bad. Morality is also closely connected to the ideal of impartiality — if it’s immoral for you to do something to me, then, all else being equal, it is immoral for me to do the same thing to you. In addition, moral principles are different from other types of rules or laws: they cannot, for instance, be overruled solely by virtue of authority. (Even a 4-year-old knows not only that unprovoked hitting is wrong but also that it would continue to be wrong even if a teacher said that it was O.K.) And we tend to associate morality with the possibility of free and rational choice; people choose to do good or evil. To hold someone responsible for an act means that we believe that he could have chosen to act otherwise.

Babies and toddlers might not know or exhibit any of these moral subtleties. Their sympathetic reactions and motivations — including their desire to alleviate the pain of others — may not be much different in kind from purely nonmoral reactions and motivations like growing hungry or wanting to void a full bladder. Even if that is true, though, it is hard to conceive of a moral system that didn’t have, as a starting point, these empathetic capacities. As David Hume argued, mere rationality can’t be the foundation of morality, since our most basic desires are neither rational nor irrational. “ ’Tis not contrary to reason,” he wrote, “to prefer the destruction of the whole world to the scratching of my finger.” To have a genuinely moral system, in other words, some things first have to matter, and what we see in babies is the development of mattering.

Moral-Baby Experiments
So what do babies really understand about morality? Our first experiments exploring this question were done in collaboration with a postdoctoral researcher named Valerie Kuhlmeier (who is now an associate professor of psychology at Queen’s University in Ontario). Building on previous work by the psychologists David and Ann Premack, we began by investigating what babies think about two particular kinds of action: helping and hindering.

Our experiments involved having children watch animated movies of geometrical characters with faces. In one, a red ball would try to go up a hill. On some attempts, a yellow square got behind the ball and gently nudged it upward; in others, a green triangle got in front of it and pushed it down. We were interested in babies’ expectations about the ball’s attitudes — what would the baby expect the ball to make of the character who helped it and the one who hindered it? To find out, we then showed the babies additional movies in which the ball either approached the square or the triangle. When the ball approached the triangle (the hinderer), both 9- and 12-month-olds looked longer than they did when the ball approached the square (the helper). This was consistent with the interpretation that the former action surprised them; they expected the ball to approach the helper. A later study, using somewhat different stimuli, replicated the finding with 10-month-olds, but found that 6-month-olds seem to have no expectations at all. (This effect is robust only when the animated characters have faces; when they are simple faceless figures, it is apparently harder for babies to interpret what they are seeing as a social interaction.)

This experiment was designed to explore babies’ expectations about social interactions, not their moral capacities per se. But if you look at the movies, it’s clear that, at least to adult eyes, there is some latent moral content to the situation: the triangle is kind of a jerk; the square is a sweetheart. So we set out to investigate whether babies make the same judgments about the characters that adults do. Forget about how babies expect the ball to act toward the other characters; what do babies themselves think about the square and the triangle? Do they prefer the good guy and dislike the bad guy?

Here we began our more focused investigations into baby morality. For these studies, parents took their babies to the Infant Cognition Center, which is within one of the Yale psychology buildings. (The center is just a couple of blocks away from where Stanley Milgram did his famous experiments on obedience in the early 1960s, tricking New Haven residents into believing that they had severely harmed or even killed strangers with electrical shocks.) The parents were told about what was going to happen and filled out consent forms, which described the study, the risks to the baby (minimal) and the benefits to the baby (minimal, though it is a nice-enough experience). Parents often asked, reasonably enough, if they would learn how their baby does, and the answer was no. This sort of study provides no clinical or educational feedback about individual babies; the findings make sense only when computed as a group.

For the experiment proper, a parent will carry his or her baby into a small testing room. A typical experiment takes about 15 minutes. Usually, the parent sits on a chair, with the baby on his or her lap, though for some studies, the baby is strapped into a high chair with the parent standing behind. At this point, some of the babies are either sleeping or too fussy to continue; there will then be a short break for the baby to wake up or calm down, but on average this kind of study ends up losing about a quarter of the subjects. Just as critics describe much of experimental psychology as the study of the American college undergraduate who wants to make some extra money or needs to fulfill an Intro Psych requirement, there’s some truth to the claim that this developmental work is a science of the interested and alert baby.

In one of our first studies of moral evaluation, we decided not to use two-dimensional animated movies but rather a three-dimensional display in which real geometrical objects, manipulated like puppets, acted out the helping/hindering situations: a yellow square would help the circle up the hill; a red triangle would push it down. After showing the babies the scene, the experimenter placed the helper and the hinderer on a tray and brought them to the child. In this instance, we opted to record not the babies’ looking time but rather which character they reached for, on the theory that what a baby reaches for is a reliable indicator of what a baby wants. In the end, we found that 6- and 10-month-old infants overwhelmingly preferred the helpful individual to the hindering individual. This wasn’t a subtle statistical trend; just about all the babies reached for the good guy.

(Experimental minutiae: What if babies simply like the color red or prefer squares or something like that? To control for this, half the babies got the yellow square as the helper; half got it as the hinderer. What about problems of unconscious cueing and unconscious bias? To avoid this, at the moment when the two characters were offered on the tray, the parent had his or her eyes closed, and the experimenter holding out the characters and recording the responses hadn’t seen the puppet show, so he or she didn’t know who was the good guy and who the bad guy.)
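
(More minutiae, this time statistical: with a two-alternative choice, the analysis reduces to coin-flip logic. A minimal sketch, again assuming Python with SciPy and using hypothetical counts rather than the actual data from these studies:

    # Illustrative sketch only: hypothetical counts, not the studies' results.
    # If babies reached at random, each choice would be a fair coin flip;
    # a binomial test asks how improbable the observed split would be by chance.
    from scipy.stats import binomtest

    n_babies = 16      # hypothetical sample size
    chose_helper = 14  # hypothetical number who reached for the helper

    result = binomtest(chose_helper, n_babies, p=0.5, alternative="greater")
    print(f"{chose_helper}/{n_babies} chose the helper, p = {result.pvalue:.4f}")

Even a split of 14 out of 16 is far beyond what chance alone would be likely to produce, which is what a phrase like “just about all the babies reached for the good guy” amounts to in statistical terms.)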

One question that arose with these experiments was how to understand the babies’ preference: did they act as they did because they were attracted to the helpful individual or because they were repelled by the hinderer or was it both? We explored this question in a further series of studies that introduced a neutral character, one that neither helps nor hinders. We found that, given a choice, infants prefer a helpful character to a neutral one; and prefer a neutral character to one who hinders. This finding indicates that both inclinations are at work — babies are drawn to the nice guy and repelled by the mean guy. Again, these results were not subtle; babies almost always showed this pattern of response.

Does our research show that babies believe that the helpful character is good and the hindering character is bad? Not necessarily. All that we can safely infer from what the babies reached for is that babies prefer the good guy and show an aversion to the bad guy. But what’s exciting here is that these preferences are based on how one individual treated another, on whether one individual was helping another individual achieve its goals or hindering it. This is preference of a very special sort; babies were responding to behaviors that adults would describe as nice or mean. When we showed these scenes to much older kids — 18-month-olds — and asked them, “Who was nice? Who was good?” and “Who was mean? Who was bad?” they responded as adults would, identifying the helper as nice and the hinderer as mean.

To increase our confidence that the babies we studied were really responding to niceness and naughtiness, Karen Wynn and Kiley Hamlin, in a separate series of studies, created different sets of one-act morality plays to show the babies. In one, an individual struggled to open a box; the lid would be partly opened but then fall back down. Then, on alternating trials, one puppet would grab the lid and open it all the way, and another puppet would jump on the box and slam it shut. In another study (the one I mentioned at the beginning of this article), a puppet would play with a ball. The puppet would roll the ball to another puppet, who would roll it back, and the first puppet would roll the ball to a different puppet who would run away with it. In both studies, 5-month-olds preferred the good guy — the one who helped to open the box; the one who rolled the ball back — to the bad guy. This all suggests that the babies we studied have a general appreciation of good and bad behavior, one that spans a range of actions.

A further question that arises is whether babies possess more subtle moral capacities than preferring good and avoiding bad. Part and parcel of adult morality, for instance, is the idea that good acts should meet with a positive response and bad acts with a negative response — justice demands the good be rewarded and the bad punished. For our next studies, we turned our attention back to the older babies and toddlers and tried to explore whether the preferences that we were finding had anything to do with moral judgment in this mature sense. In collaboration with Neha Mahajan, a psychology graduate student at Yale, Hamlin, Wynn and I exposed 21-month-olds to the good guy/bad guy situations described above, and we gave them the opportunity to reward or punish either by giving a treat to, or taking a treat from, one of the characters. We found that when asked to give, they tended to choose the positive character; when asked to take, they tended to choose the negative one.

Dispensing justice like this is a more elaborate conceptual operation than merely preferring good to bad, but there are still-more-elaborate moral calculations that adults, at least, can easily make. For example: Which individual would you prefer — someone who rewarded good guys and punished bad guys or someone who punished good guys and rewarded bad guys? The same amount of rewarding and punishing is going on in both cases, but by adult lights, one individual is acting justly and the other isn’t. Can babies see this, too?

To find out, we tested 8-month-olds by first showing them a character who acted as a helper (for instance, helping a puppet trying to open a box) and then presenting a scene in which this helper was the target of a good action by one puppet and a bad action by another puppet. Then we got the babies to choose between these two puppets. That is, they had to choose between a puppet who rewarded a good guy versus a puppet who punished a good guy. Likewise, we showed them a character who acted as a hinderer (for example, keeping a puppet from opening a box) and then had them choose between a puppet who rewarded the bad guy versus one who punished the bad guy.

The results were striking. When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior.

All of this research, taken together, supports a general picture of baby morality. It’s even possible, as a thought experiment, to ask what it would be like to see the world in the moral terms that a baby does. Babies probably have no conscious access to moral notions, no idea why certain acts are good or bad. They respond on a gut level. Indeed, if you watch the older babies during the experiments, they don’t act like impassive judges — they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events (remember the toddler who smacked the bad puppet). The babies’ experiences might be cognitively empty but emotionally intense, replete with strong feelings and strong desires. But this shouldn’t strike you as an altogether alien experience: while we adults possess the additional critical capacity of being able to consciously reason about morality, we’re not otherwise that different from babies — our moral feelings are often instinctive. In fact, one discovery of contemporary research in social psychology and social neuroscience is the powerful emotional underpinning of what we once thought of as cool, untroubled, mature moral deliberation.

Is This the Morality We’re Looking For?
What do these findings about babies’ moral notions tell us about adult morality? Some scholars think that the very existence of an innate moral sense has profound implications. In 1869, Alfred Russel Wallace, who along with Darwin discovered natural selection, wrote that certain human capacities — including “the higher moral faculties” — are richer than what you could expect from a product of biological evolution. He concluded that some sort of godly force must intervene to create these capacities. (Darwin was horrified at this suggestion, writing to Wallace, “I hope you have not murdered too completely your own and my child.”)

A few years ago, in his book “What’s So Great About Christianity,” the social and cultural critic Dinesh D’Souza revived this argument. He conceded that evolution can explain our niceness in instances like kindness to kin, where the niceness has a clear genetic payoff, but he drew the line at “high altruism,” acts of entirely disinterested kindness. For D’Souza, “there is no Darwinian rationale” for why you would give up your seat for an old lady on a bus, an act of nice-guyness that does nothing for your genes. And what about those who donate blood to strangers or sacrifice their lives for a worthy cause? D’Souza reasoned that these stirrings of conscience are best explained not by evolution or psychology but by “the voice of God within our souls.”

The evolutionary psychologist has a quick response to this: To say that a biological trait evolves for a purpose doesn’t mean that it always functions, in the here and now, for that purpose. Sexual arousal, for instance, presumably evolved because of its connection to making babies; but of course we can get aroused in all sorts of situations in which baby-making just isn’t an option — for instance, while looking at pornography. Similarly, our impulse to help others has likely evolved because of the reproductive benefit that it gives us in certain contexts — and it’s not a problem for this argument that some acts of niceness that people perform don’t provide this sort of benefit. (And for what it’s worth, giving up a bus seat for an old lady, although the motives might be psychologically pure, turns out to be a coldbloodedly smart move from a Darwinian standpoint, an easy way to show off yourself as an attractively good person.)

The general argument that critics like Wallace and D’Souza put forward, however, still needs to be taken seriously. The morality of contemporary humans really does outstrip what evolution could possibly have endowed us with; moral actions are often of a sort that have no plausible relation to our reproductive success and don’t appear to be accidental byproducts of evolved adaptations. Many of us care about strangers in faraway lands, sometimes to the extent that we give up resources that could be used for our friends and family; many of us care about the fates of nonhuman animals, so much so that we deprive ourselves of pleasures like rib-eye steak and veal scaloppine. We possess abstract moral notions of equality and freedom for all; we see racism and sexism as evil; we reject slavery and genocide; we try to love our enemies. Of course, our actions typically fall short, often far short, of our moral principles, but these principles do shape, in a substantial way, the world that we live in. It makes sense then to marvel at the extent of our moral insight and to reject the notion that it can be explained in the language of natural selection. If this higher morality or higher altruism were found in babies, the case for divine creation would get just a bit stronger.

But it is not present in babies. In fact, our initial moral sense appears to be biased toward our own kind. There’s plenty of research showing that babies have within-group preferences: 3-month-olds prefer the faces of the race that is most familiar to them to those of other races; 11-month-olds prefer individuals who share their own taste in food and expect these individuals to be nicer than those with different tastes; 12-month-olds prefer to learn from someone who speaks their own language over someone who speaks a foreign language. And studies with young children have found that once they are segregated into different groups — even under the most arbitrary of schemes, like wearing different colored T-shirts — they eagerly favor their own groups in their attitudes and their actions.

The notion at the core of any mature morality is that of impartiality. If you are asked to justify your actions, and you say, “Because I wanted to,” this is just an expression of selfish desire. But explanations like “It was my turn” or “It’s my fair share” are potentially moral, because they imply that anyone else in the same situation could have done the same. This is the sort of argument that could be convincing to a neutral observer and is at the foundation of standards of justice and law. The philosopher Peter Singer has pointed out that this notion of impartiality can be found in religious and philosophical systems of morality, from the golden rule in Christianity to the teachings of Confucius to the political philosopher John Rawls’s landmark theory of justice. This is an insight that emerges within communities of intelligent, deliberating and negotiating beings, and it can override our parochial impulses.

The aspect of morality that we truly marvel at — its generality and universality — is the product of culture, not of biology. There is no need to posit divine intervention. A fully developed morality is the product of cultural development, of the accumulation of rational insight and hard-earned innovations. The morality we start off with is primitive, not merely in the obvious sense that it’s incomplete, but in the deeper sense that when individuals and societies aspire toward an enlightened morality — one in which all beings capable of reason and suffering are on an equal footing, where all people are equal — they are fighting with what children have from the get-go. The biologist Richard Dawkins was right, then, when he said at the start of his book “The Selfish Gene,” “Be warned that if you wish, as I do, to build a society in which individuals cooperate generously and unselfishly toward a common good, you can expect little help from biological nature.” Or as a character in the Kingsley Amis novel “One Fat Englishman” puts it, “It was no wonder that people were so horrible when they started life as children.”

Morality, then, is a synthesis of the biological and the cultural, of the unlearned, the discovered and the invented. Babies possess certain moral foundations — the capacity and willingness to judge the actions of others, some sense of justice, gut responses to altruism and nastiness. Regardless of how smart we are, if we didn’t start with this basic apparatus, we would be nothing more than amoral agents, ruthlessly driven to pursue our self-interest. But our capacities as babies are sharply limited. It is the insights of rational individuals that make a truly universal and unselfish morality something that our species can aspire to.

Paul Bloom is a professor of psychology at Yale. His new book, “How Pleasure Works,” will be published next month.

Why do some men and women cheat on their partners while others resist the temptation?

The New York Times, May 11, 2010, by Tara Parker-Pope  –  To find the answer, a growing body of research is focusing on the science of commitment. Scientists are studying everything from the biological factors that seem to influence marital stability to a person’s psychological response after flirting with a stranger.

Their findings suggest that while some people may be naturally more resistant to temptation, men and women can also train themselves to protect their relationships and raise their feelings of commitment.

Recent studies have raised questions about whether genetic factors may influence commitment and marital stability. Hasse Walum, a biologist at the Karolinska Institute in Sweden, studied 552 sets of twins to learn more about a gene related to the body’s regulation of the brain chemical vasopressin, a bonding hormone.

Over all, men who carried a variation in the gene were less likely to be married, and those who had wed were more likely to have had serious marital problems and unhappy wives. Among men who carried two copies of the gene variant, about a third had experienced a serious relationship crisis in the past year, double the number seen in the men who did not carry the variant.

Although the trait is often called the “fidelity gene,” Mr. Walum called that a misnomer: his research focused on marital stability, not faithfulness. “It’s difficult to use this information to predict any future behavior in men,” he told me. Now he and his colleagues are working to replicate the findings and conducting similar research in women.

While there may be genetic differences that influence commitment, other studies suggest that the brain can be trained to resist temptation.

A series of unusual studies led by John Lydon, a psychologist at McGill University in Montreal, has looked at how people in a committed relationship react in the face of temptation. In one study, highly committed married men and women were asked to rate the attractiveness of people of the opposite sex in a series of photos. Not surprisingly, they gave the highest ratings to people who would typically be viewed as attractive.

Later, they were shown similar pictures and told that the person was interested in meeting them. In that situation, participants consistently gave those pictures lower scores than they had the first time around.

When they were attracted to someone who might threaten the relationship, they seemed to instinctively tell themselves, “He’s not so great.” “The more committed you are,” Dr. Lydon said, “the less attractive you find other people who threaten your relationship.”

But some of the McGill research has shown gender differences in how we respond to a cheating threat. In a study of 300 heterosexual men and women, half the participants were primed for cheating by imagining a flirtatious conversation with someone they found attractive. The other half just imagined a routine encounter.

Afterward, the study subjects were asked to complete fill-in-the-blank puzzles like LO_AL and THR__T.

Unbeknownst to the participants, the word fragments were a psychological test to reveal subconscious feelings about commitment. (Similar word puzzles are used to study subconscious feelings about prejudice and stereotyping.)

No pattern emerged among the study participants who imagined a routine encounter. But there were differences among men and women who had entertained the flirtatious fantasy. In that group, the men were more likely to complete the puzzles with the neutral words LOCAL and THROAT. But the women who had imagined flirting were far more likely to choose LOYAL and THREAT, suggesting that the exercise had touched off subconscious concerns about commitment.

Of course, this does not necessarily predict behavior in the real world. But the pronounced difference in responses led the researchers to think women might have developed a kind of early warning system to alert them to relationship threats.

Other McGill studies confirmed differences in how men and women react to such threats. In one, attractive actors or actresses were brought in to flirt with study participants in a waiting room. Later, the participants were asked questions about their relationships, particularly how they would respond to a partner’s bad behavior, like being late and forgetting to call.

Men who had just been flirting were less forgiving of the hypothetical bad behavior, suggesting that the attractive actress had momentarily chipped away at their commitment. But women who had been flirting were more likely to be forgiving and to make excuses for the man, suggesting that their earlier flirting had triggered a protective response when discussing their relationship.

“We think the men in these studies may have had commitment, but the women had the contingency plan — the attractive alternative sets off the alarm bell,” Dr. Lydon said. “Women implicitly code that as a threat. Men don’t.”

The question is whether a person can be trained to resist temptation. In another study, the team prompted male students who were in committed dating relationships to imagine running into an attractive woman on a weekend when their girlfriends were away. Some of the men were then asked to develop a contingency plan by filling in the sentence “When she approaches me, I will __________ to protect my relationship.”

Because the researchers could not bring in a real woman to act as a temptation, they created a virtual-reality game in which two out of four rooms included subliminal images of an attractive woman. The men who had practiced resisting temptation gravitated toward those rooms 25 percent of the time; for the others, the figure was 62 percent.

But it may not be feelings of love or loyalty that keep couples together. Instead, scientists speculate that your level of commitment may depend on how much a partner enhances your life and broadens your horizons — a concept that Arthur Aron, a psychologist and relationship researcher at Stony Brook University, calls “self-expansion.”

To measure this quality, couples are asked a series of questions: How much does your partner provide a source of exciting experiences? How much has knowing your partner made you a better person? How much do you see your partner as a way to expand your own capabilities?

The Stony Brook researchers conducted experiments using activities that stimulated self-expansion. Some couples were given mundane tasks, while others took part in a silly exercise in which they were tied together and asked to crawl on mats, pushing a foam cylinder with their heads. The study was rigged so the couples failed the time limit on the first two tries, but just barely made it on the third, resulting in much celebration.

Couples were given relationship tests before and after the experiment. Those who had taken part in the challenging activity posted greater increases in love and relationship satisfaction than those who had not experienced victory together.

Now the researchers are embarking on a series of studies to measure how self-expansion influences a relationship. They theorize that couples who explore new places and try new things will tap into feelings of self-expansion, lifting their level of commitment.

“We enter relationships because the other person becomes part of ourselves, and that expands us,” Dr. Aron said. “That’s why people who fall in love stay up all night talking and it feels really exciting. We think couples can get some of that back by doing challenging and exciting things together.”

The New York Times, by Tara Parker-Pope

The bodies of many older Americans are practically bionic: more than 770,000 hip and knee replacements are performed each year in the United States.

Now another aging joint is fast becoming a candidate for replacement. This year, 4,400 patients are expected to undergo surgery to replace arthritic or injured ankles with artificial joints made of metal alloys and lightweight plastic, according to industry estimates.

Four models are commonly used in the United States, with Food and Drug Administration approval. And demand is expected to grow as more and more baby boomers hobble into their 60s and 70s with debilitating ankle pain.

Ankle replacement has been around for three decades, but it has been slow to catch on. Problems with early devices left surgeons and patients wary. The operation is complex, and many foot and ankle surgeons lack experience. While Medicare pays for ankle replacement, many private insurers do not.

Each year about two million Americans visit the doctor for ankle pain from arthritis or fracture. An estimated 50,000 people a year experience end-stage ankle arthritis, in which the ankle cartilage has worn away completely, causing painful bone-on-bone contact and some level of disability.

Until lately, such patients have had only one surgical option: ankle fusion surgery, in which the worn-out part of the joint is removed and the bones are permanently locked together with screws and plates. The procedure usually relieves pain, but the patient loses mobility in the ankle, leading to changes in gait and, ultimately, additional wear and tear and arthritic pain in other parts of the foot. About 25,000 ankle fusions were performed in the United States last year.

Andrew Keaveney, now 73, shattered his ankle in a fall from a truck while hanging flags as an American Legion volunteer. Surgery repaired the broken bones, but he continued to have severe pain.

Doctors suggested ankle fusion, but he found a surgeon who offered total ankle replacement. He had the operation in December 2008, and now he says the ankle is “99 percent.”

“Before the surgery, I couldn’t sleep at night,” said Mr. Keaveney, of Locust Valley, N.Y. “Now I’m able to climb ladders. I have absolutely no pain. I was even playing soccer with my grandkids a few months ago.”

His surgeon, Dr. Craig S. Radnay, an associate at the Insall Scott Kelly Institute for Orthopedics and Sports Medicine in New York City and on Long Island, says he is now a “big believer” in ankle replacement for certain patients.

“For an ankle replacement you have to be a little more picky in who you select for those cases,” he said. “But I can’t tell you how many patients come in, and I mention this option they don’t even know exists.” (Dr. Radnay, who says he has performed more than 100 ankle replacements using an Inbone device from Wright Medical Group of Arlington, Tenn., is now a paid consultant to the company, helping to gather data on long-term success rates.)

The ideal patient is around 60 years old and of normal weight, although doctors consider older patients, depending on their health. People with diabetes may not be good candidates because they may risk complications as a result of poor blood circulation.

Dr. Jonathan T. Deland, chief of the foot and ankle service at the Hospital for Special Surgery in Manhattan, said that while the devices had improved, he remained cautious about offering the operation. (Dr. Deland is helping to develop a new ankle replacement device for Zimmer of Warsaw, Ind., which may be submitted for F.D.A. approval this year.)

“The big concern about ankle replacement is how often do they fail and how often do they loosen,” he said.

Complications can include slow healing, as well as infection. Severe complications are rare, but they can result in amputation. Still, Dr. Deland said, “we’re getting fewer and fewer failures.”

The new models require that less bone be removed, so the bone to which the device is affixed is stronger. In addition, instruments used to guide surgeons in aligning the artificial joint have improved. Dr. Deland cited data showing that for some recent models, 90 percent of ankle replacements were still in place after an average of eight and a half years.

Though the four devices in common use have technical differences in design and in how they are implanted, doctors say the choice of device matters far less than the experience of the surgeon. The procedure is among the most difficult that foot and ankle surgeons perform, and one of the biggest challenges is getting proper alignment of the replacement joint.

Dr. Brian Donley, an orthopedic surgeon who is director of the foot and ankle center at the Cleveland Clinic, says patients should always ask their doctor to disclose any financial interest in a device. (He performed the first United States operation using the Salto Talaris device from Tornier of Minneapolis, and receives consulting fees from the company.)

Even with a successful implantation, patients should not necessarily expect to have the same ankles they did at 18. They should not try to return to activities like basketball and distance running. But golf and walking, and sometimes even skiing, are typically allowed.

“My happiest patients I have in my practice are my ankle replacement patients,” Dr. Donley said. “They are so appreciative about how their life has been changed. They can go to their grandchild’s wedding and get up and have a dance.”