
MIT, March 1, 2011  —  When your brain encounters sensory stimuli, such as the scent of your morning coffee or the sound of a honking car, that input gets shuttled to the appropriate brain region for analysis. The coffee aroma goes to the olfactory cortex, while sounds are processed in the auditory cortex.

That division of labor suggests that the brain’s structure follows a predetermined, genetic blueprint. However, evidence is mounting that brain regions can take over functions they were not genetically destined to perform. In a landmark 1996 study of people blinded early in life, neuroscientists showed that the visual cortex could participate in a nonvisual function — reading Braille.

Now, a study from MIT neuroscientists shows that in individuals born blind, parts of the visual cortex are recruited for language processing. The finding suggests that the visual cortex can dramatically change its function — from visual processing to language — and it also appears to overturn the idea that language processing can only occur in highly specialized brain regions that are genetically programmed for language tasks.

“Your brain is not a prepackaged kind of thing. It doesn’t develop along a fixed trajectory; rather, it’s a self-building toolkit. The building process is profoundly influenced by the experiences you have during your development,” says Marina Bedny, an MIT postdoctoral associate in the Department of Brain and Cognitive Sciences and lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Feb. 28.

Flexible connections

For more than a century, neuroscientists have known that two specialized brain regions — called Broca’s area and Wernicke’s area — are necessary to produce and understand language, respectively. Those areas are thought to have intrinsic properties, such as specific internal arrangement of cells and connectivity with other brain regions, which make them uniquely suited to process language.

Other functions — including vision and hearing — also have distinct processing centers in the sensory cortices. However, there appears to be some flexibility in assigning brain functions. Previous studies in animals (in the laboratory of Mriganka Sur, MIT professor of brain and cognitive sciences) have shown that sensory brain regions can process information from a different sense if the input is surgically rewired to them early in life. For example, connecting the eyes to the auditory cortex can prompt that brain region to process images instead of sounds.

Until now, no such evidence existed for flexibility in language processing. Previous studies had shown some activity in the left visual cortex of congenitally blind subjects during some verbal tasks, such as reading Braille, but no one had shown that this might indicate full-fledged language processing.

Bedny and her colleagues, including senior author Rebecca Saxe, assistant professor of brain and cognitive sciences, and Alvaro Pascual-Leone, professor of neurology at Harvard Medical School, set out to investigate whether visual brain regions in blind people might be involved in more complex language tasks, such as processing sentence structure and analyzing word meanings.

To do that, the researchers scanned blind subjects (using functional magnetic resonance imaging) as they performed a sentence comprehension task. The researchers hypothesized that if the visual cortex was involved in language processing, those brain areas should show the same sensitivity to linguistic information as classic language areas such as Broca’s and Wernicke’s areas.

They found that was indeed the case — visual brain regions were sensitive to sentence structure and word meanings in the same way as classic language regions, Bedny says. “The idea that these brain regions could go from vision to language is just crazy,” she says. “It suggests that the intrinsic function of a brain area is constrained only loosely, and that experience can have a really big impact on the function of a piece of brain tissue.”

Bedny notes that the research does not refute the idea that the human brain needs Broca’s and Wernicke’s areas for language. “We haven’t shown that every possible part of language can be supported by this part of the brain [the visual cortex]. It just suggests that a part of the brain can participate in language processing without having evolved to do so,” she says.


One unanswered question is why the visual cortex would be recruited for language processing, when the language processing areas of blind people already function normally. According to Bedny, it may be the result of a natural redistribution of tasks during brain development.

“As these brain functions are getting parceled out, the visual cortex isn’t getting its typical function, which is to do vision. And so it enters this competitive game of who’s going to do what. The whole developmental dynamic has changed,” she says.

This study, combined with other studies of blind people, suggests that different parts of the visual cortex get divvied up for different functions during development, Bedny says. A subset of (left-brain) visual areas appears to be involved in language, including the left primary visual cortex.

It’s possible that this redistribution gives blind people an advantage in language processing. The researchers are planning follow-up work in which they will study whether blind people perform better than sighted people in complex language tasks such as parsing complicated sentences or performing language tests while being distracted.

The researchers are also working to pinpoint more precisely the visual cortex’s role in language processing, and they are studying blind children to figure out when during development the visual cortex starts processing language.

Journal Reference:

1. Marina Bedny, Alvaro Pascual-Leone, David Dodell-Feder, Evelina Fedorenko and Rebecca Saxe. Language processing in the occipital cortex of congenitally blind adults. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1014818108

Massachusetts Institute of Technology (2011, March 1). Parts of brain can switch functions: In people born blind, brain regions that usually process vision can tackle language. ScienceDaily. Retrieved March 1, 2011, from /releases/2011/02/110228163143.htm


Massachusetts Institute of Technology  —  A new paper from MIT neuroscientists, in collaboration with Alvaro Pascual-Leone at Beth Israel Deaconess Medical Center, offers evidence that it is easier to rewire the brain early in life. The researchers found that a small part of the brain’s visual cortex that processes motion became reorganized only in the brains of subjects who had been born blind, not those who became blind later in life.

The new findings, described in the Oct. 14 issue of the journal Current Biology, shed light on how the brain wires itself during the first few years of life, and could help scientists understand how to optimize the brain’s ability to be rewired later in life. That could become increasingly important as medical advances make it possible for congenitally blind people to have their sight restored, said MIT postdoctoral associate Marina Bedny, lead author of the paper.

In the 1950s and ’60s, scientists began to think that certain brain functions develop normally only if an individual is exposed to relevant information, such as language or visual information, within a specific time period early in life. After that, they theorized, the brain loses the ability to change in response to new input.

Animal studies supported this theory. For example, cats blindfolded during the first months of life are unable to see normally after the blindfolds are removed. Similar periods of blindfolding in adulthood have no effect on vision.

However, there have been indications in recent years that there is more wiggle room than previously thought, said Bedny, who works in the laboratory of MIT assistant professor Rebecca Saxe, also an author of the Current Biology paper. Many neuroscientists now support the idea of a period early in life after which it is difficult, but not impossible, to rewire the brain.

Bedny, Saxe and their colleagues wanted to determine if a part of the brain known as the middle temporal complex (MT/MST) can be rewired at any time or only early in life. They chose to study MT/MST in part because it is one of the most studied visual areas. In sighted people, the MT region is specialized for motion vision.

In the rare cases in which patients have lost MT function in both hemispheres of the brain, they have been unable to sense motion in a visual scene. For example, if someone poured water into a glass, they would see only a standing, frozen stream of water.

Previous studies have shown that in blind people, MT is taken over by sound processing, but those studies didn’t distinguish between people who became blind early and late in life.

In the new MIT study, the researchers studied three groups of subjects — sighted, congenitally blind, and those who became blind later in life (age nine or older). Using functional magnetic resonance imaging (fMRI), they tested whether MT in these subjects responded to moving sounds — for example, approaching footsteps.

The results were clear, said Bedny. MT reacted to moving sounds in congenitally blind people, but not in sighted people or people who became blind at a later age.

This suggests that in late-blind individuals, the visual input they received in early years allowed the MT complex to develop its typical visual function, and it couldn’t be remade to process sound after the person lost sight. Congenitally blind people never received any visual input, so the region was taken over by auditory input after birth.

“We need to think of early life as a window of opportunity to shape how the brain works,” said Bedny. “That’s not to say that later experience can’t alter things, but it’s easier to get organized early on.”

Bedny believes that by better understanding how the brain is wired early in life, scientists may be able to learn how to rewire it later in life. There are now very few cases of sight restoration, but if it becomes more common, scientists will need to figure out how to retrain the patient’s brain so it can process the new visual input.

“The unresolved question is whether the brain can relearn, and how that learning differs in an adult brain versus a child’s brain,” said Bedny.

Bedny hopes to study the behavioral consequences of the MT switch in future studies. Those would include whether blind people have an advantage over sighted people in auditory motion processing, and if they have a disadvantage if sight is restored.

Journal Reference:

1. Marina Bedny, Talia Konkle, Kevin Pelphrey, Rebecca Saxe and Alvaro Pascual-Leone. Sensitive period for a multi-modal response in human MT/MST. Current Biology, October 14, 2010; DOI: 10.1016/j.cub.2010.09.044

Massachusetts Institute of Technology (2010, October 22). Younger brains are easier to rewire — brain regions can switch functions.

Go Easy on Yourself, a New Wave of Research Urges


The New York Times, March 1, 2011, by Tara Parker-Pope  —  Do you treat yourself as well as you treat your friends and family?

That simple question is the basis for a burgeoning new area of psychological research called self-compassion — how kindly people view themselves. People who find it easy to be supportive and understanding to others, it turns out, often score surprisingly low on self-compassion tests, berating themselves for perceived failures like being overweight or not exercising.

The research suggests that giving ourselves a break and accepting our imperfections may be the first step toward better health. People who score high on tests of self-compassion have less depression and anxiety, and tend to be happier and more optimistic. Preliminary data suggest that self-compassion can even influence how much we eat and may help some people lose weight.

This idea does seem at odds with the advice dispensed by many doctors and self-help books, which suggest that willpower and self-discipline are the keys to better health. But Kristin Neff, a pioneer in the field, says self-compassion is not to be confused with self-indulgence or lower standards.

“I found in my research that the biggest reason people aren’t more self-compassionate is that they are afraid they’ll become self-indulgent,” said Dr. Neff, an associate professor of human development at the University of Texas at Austin. “They believe self-criticism is what keeps them in line. Most people have gotten it wrong because our culture says being hard on yourself is the way to be.”

Imagine your reaction to a child struggling in school or eating too much junk food. Many parents would offer support, like tutoring or making an effort to find healthful foods the child will enjoy. But when adults find themselves in a similar situation — struggling at work, or overeating and gaining weight — many fall into a cycle of self-criticism and negativity. That leaves them feeling even less motivated to change.

“Self-compassion is really conducive to motivation,” Dr. Neff said. “The reason you don’t let your children eat five big tubs of ice cream is because you care about them. With self-compassion, if you care about yourself, you do what’s healthy for you rather than what’s harmful to you.”

Dr. Neff, whose book, “Self-Compassion: Stop Beating Yourself Up and Leave Insecurity Behind,” is being published next month by William Morrow, has developed a self-compassion scale: 26 statements meant to determine how often people are kind to themselves, and whether they recognize that ups and downs are simply part of life.

A positive response to the statement “I’m disapproving and judgmental about my own flaws and inadequacies,” for example, suggests lack of self-compassion. “When I feel inadequate in some way, I try to remind myself that feelings of inadequacy are shared by most people” suggests the opposite.
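
To make the scoring concrete, here is a minimal sketch of how a questionnaire of this kind is typically tallied, assuming a 1-to-5 Likert response format in which negatively worded items are reverse-scored before averaging; the item wording and scoring details below are illustrative assumptions, not Dr. Neff’s published instrument.

```python
# Illustrative scoring sketch for a self-compassion-style questionnaire.
# Assumes 1-5 Likert responses; negatively worded items are reverse-scored
# (a 5 becomes a 1, and so on) so that higher always means more self-compassion.
# The items and scoring rules here are assumptions, not the published scale.

ITEMS = [
    ("I'm disapproving and judgmental about my own flaws and inadequacies.", True),
    ("When I feel inadequate, I remind myself that most people share such feelings.", False),
]

def score(responses):
    """responses: one integer (1-5) per item, in ITEMS order; returns the mean item score."""
    total = 0
    for (_, reverse_scored), answer in zip(ITEMS, responses):
        if not 1 <= answer <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (6 - answer) if reverse_scored else answer
    return total / len(ITEMS)  # 1 = low self-compassion, 5 = high

print(score([2, 4]))  # -> 4.0: mostly self-compassionate answers
```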

For those low on the scale, Dr. Neff suggests a set of exercises — like writing yourself a letter of support, just as you might to a friend you are concerned about. Listing your best and worst traits, reminding yourself that nobody is perfect and thinking of steps you might take to help you feel better about yourself are also recommended.

Other exercises include meditation and “compassion breaks,” which involve repeating mantras like “I’m going to be kind to myself in this moment.”

If this all sounds a bit too warm and fuzzy, like the Al Franken character Stuart Smalley (“I’m good enough, I’m smart enough, and doggone it, people like me”), there is science to back it up. A 2007 study by researchers at Wake Forest University suggested that even a minor self-compassion intervention could influence eating habits. As part of the study, 84 female college students were asked to take part in what they thought was a food-tasting experiment. At the beginning of the study, the women were asked to eat doughnuts.

One group, however, was given a lesson in self-compassion with the food. “I hope you won’t be hard on yourself,” the instructor said. “Everyone in the study eats this stuff, so I don’t think there’s any reason to feel real bad about it.”

Later the women were asked to taste-test candies from large bowls. The researchers found that women who were regular dieters or had guilt feelings about forbidden foods ate less after hearing the instructor’s reassurance. Those not given that message ate more.

The hypothesis is that the women who felt bad about the doughnuts ended up engaging in “emotional” eating. The women who gave themselves permission to enjoy the sweets didn’t overeat.

“Self-compassion is the missing ingredient in every diet and weight-loss plan,” said Jean Fain, a psychotherapist and teaching associate at Harvard Medical School who wrote the new book “The Self-Compassion Diet” (Sounds True publishing). “Most plans revolve around self-discipline, deprivation and neglect.”

Dr. Neff says that the field is still new and that she is just starting a controlled study to determine whether teaching self-compassion actually lowers stress, depression and anxiety and increases happiness and life satisfaction.

“The problem is that it’s hard to unlearn habits of a lifetime,” she said. “People have to actively and consciously develop the habit of self-compassion.”

Nori Dried Seaweed

By Mehmet Oz, MD, and Michael Roizen, MD, March 1, 2011

You know those pills that block fat absorption? There may be a natural snack that offers a similar benefit: toasted nori.

This crispy Japanese munchie — made of thin sheets of seaweed that have been roasted or toasted and lightly salted — could help your body block fat calories. In a new study, a special fiber found in seaweed appeared to inhibit fat absorption by over 75 percent!

Natural Fat-Blocker
The fat-blocking fiber in seaweed is called alginate. And in a recent lab study using an artificial gut, alginate interfered with a key enzyme responsible for breaking down dietary fat. The likely result in a real gut? The undigested fat would just pass right through and get expelled, which means fewer fat calories to use or store. Another small study using alginate-spiked drinks provides additional evidence of the fat-blocking effect. In that study, getting just 1.5 grams per day of alginate fiber caused a reduction in calorie intake over the course of a week.

The Road to Alginate
More studies are needed before alginate can be recommended as a weight loss aid. But the research is a good reason to be more adventurous in your eating. Start enjoying dishes made out of — or seasoned with — edible seaweed and you’ll get not only a fiber boost but also a healthy dose of calcium, iron, magnesium, and B vitamins. Toasted nori snacks — available in many Asian markets — are just one way to enjoy produce from the sea. You can also make soups or salads with wakame, a slightly sweet leafy sea vegetable.

Btw, I am now eating 2 sheets of nori a day, and they’re delicious. Hope to be thinner by the end of spring. Joyce Hays

BIG NEW IDEA: Parthasarathy Ranganathan and his prototype of a data center.


The New York Times, March 1, 2011, by John Markoff, PALO ALTO, Calif. — Hewlett-Packard researchers have proposed a fundamental rethinking of the modern computer for the coming era of nanoelectronics — a marriage of memory and computing power that could drastically limit the energy used by computers.

Today the microprocessor is at the center of the computing universe, and information is moved, at heavy energy cost, first to be used in computation and then to be stored. The new approach would marry processing to memory, cutting down the transport of data and reducing energy use.

The semiconductor industry has long warned about a set of impending bottlenecks described as “the wall,” a point at which more than five decades of progress in continuously shrinking the size of transistors used in computation will end. If progress stops, it will not only slow the rate of consumer electronics innovation but also end the exponential increase in the speed of the world’s most powerful supercomputers — 1,000 times faster each decade.

However, in an article published in IEEE Computer in January, Parthasarathy Ranganathan, a Hewlett-Packard electrical engineer, offers a radical alternative to today’s computer designs that would permit new designs for consumer electronics products as well as the next generation of supercomputers, known as exascale processors.

Today, computers constantly shuttle data back and forth among faster and slower memories. The systems keep frequently used data close to the processor and then move it to slower and more permanent storage when it is no longer needed for the ongoing calculations.

In this approach, the microprocessor is at the center of the computing universe, but in terms of energy costs, moving the information, first to be computed upon and then stored, dwarfs the energy used in the actual computing operation.

Moreover, the problem is rapidly worsening because the amount of data consumed by computers is growing even more quickly than the increase in computer performance.

“What’s going to be the killer app 10 years from now?” asked Dr. Ranganathan. “It’s fairly clear it’s going to be about data; that’s not rocket science. In the future every piece of storage on the planet will come with a built-in computer.”

To distinguish the new type of computing from today’s designs, he said that systems will be based on memory chips he calls “nanostores” as distinct from today’s microprocessors. They will be hybrids, three-dimensional systems in which lower-level circuits will be based on a nanoelectronic technology called the memristor, which Hewlett-Packard is developing to store data. The nanostore chips will have a multistory design, and computing circuits made with conventional silicon will sit directly on top of the memory to process the data, with minimal energy costs.
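
One way to see the intended benefit is to compare how much data must cross a memory bus under the two organizations. The sketch below is a toy model of that contrast, in which compute placed atop the memory ships only a small result off-chip; all of the sizes in it are illustrative assumptions, not figures from HP’s design.

```python
# Toy model of off-chip data traffic: processor-centric vs. memory-centric.
# In the conventional design, every record crosses the memory bus to reach the
# CPU; in a nanostore-style design, compute sits on top of the memory stack and
# only the final result travels. All sizes below are illustrative assumptions.
records = 1_000_000        # records scanned by some aggregate query
record_bytes = 256         # assumed size of each record
result_bytes = 8           # e.g., a single 64-bit aggregate value

processor_centric_traffic = records * record_bytes  # all data moves to the CPU
nanostore_traffic = result_bytes                    # only the answer moves

print(processor_centric_traffic / nanostore_traffic)  # ~32 million times less traffic
```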

Within seven years or so, experts estimate that one such chip might store a trillion bytes of memory (about 220 high-definition digital movies) in addition to containing 128 processors, Dr. Ranganathan wrote. If these devices become ubiquitous, it would radically reduce the amount of information that would need to be shuttled back and forth in future data processing schemes.
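
As a rough check on the movie figure, the arithmetic works out if one assumes about 4.5 gigabytes per high-definition film; the per-movie size here is our assumption, not the article’s.

```python
# Back-of-the-envelope check of the nanostore capacity claim.
chip_bytes = 1e12     # one trillion bytes of storage per chip (from the article)
movie_bytes = 4.5e9   # assumed size of one high-definition digital movie
print(chip_bytes / movie_bytes)  # ~222, consistent with "about 220" movies
```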

For years, computer architects have been saying that a big new idea in computing was needed. Indeed, as transistors have continued to shrink, rather than continuing to innovate, computer designers have simply adopted a so-called “multicore” approach, in which more processors are added as chip real estate becomes available.

The absence of a major breakthrough was underscored by a remarkable confrontation that took place two years ago at Hot Chips, an annual computer design conference held each summer at Stanford University.

John L. Hennessy, the president of Stanford and a computer design expert, stood before a panel of some of the world’s best computer designers and challenged them to present one fundamentally new idea. He was effectively greeted with silence.

“What is your one big idea?” he asked the panel. “I believe that the next big idea is going to come from someone who is considerably younger than the average age of the people in this room.”

Dr. Ranganathan, who was 36 at the time, was there. He said he took Dr. Hennessy’s criticism as an inspiration for his work, and he believes the nanostore chip design is an example of the kind of big idea that has been missing.

It is not just Dr. Hennessy who has been warning about the end of the era of rapidly increasing computer performance. In 2008, Darpa, the Defense Advanced Research Projects Agency, assembled a panel of the nation’s best supercomputer experts and asked them to think about ways in which it might be possible to reach an exascale computer — a supercomputer capable of executing one quintillion mathematical calculations in a second, about 1,000 times faster than today’s fastest systems.

The panel, which was led by Peter Kogge, a University of Notre Dame supercomputer designer, came back with pessimistic conclusions. “Will the next decade see the same kind of spectacular progress as the last two did?” he wrote in the January issue of IEEE Spectrum. “Alas, no.” He added: “The party isn’t over, but the police have arrived and the music has been turned way down.”

One reason is computing’s enormous energy appetite. A 10-petaflop supercomputer — scheduled to be built by I.B.M. next year — will consume 15 megawatts of power, roughly the electricity consumed by a city of 15,000 homes. An exascale computer built with today’s microprocessors would require 1.6 gigawatts. That would be roughly one and a half times the amount of electricity produced by a nuclear power plant.
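
Those power figures follow from simple linear scaling, as the short calculation below shows; the assumption, implied by the article, is that an exascale machine built from today’s parts would be 100 times a 10-petaflop system with no gain in energy efficiency.

```python
# Linear power scaling from the 10-petaflop IBM system to exascale,
# assuming no improvement in energy efficiency (the article's implied premise).
ten_petaflop_watts = 15e6    # 15 MW for the 10-petaflop machine
scale = 1e18 / 10e15         # an exaflop is 100x a 10-petaflop system
print(ten_petaflop_watts * scale / 1e9)  # -> 1.5 GW, close to the quoted 1.6 GW
```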

The panel did, however, support Dr. Ranganathan’s memory-centric approach. It found that the energy cost of a single calculation is about 70 picojoules (a picojoule is one millionth of one millionth of a joule; keeping a 100-watt bulb lit for an hour takes 360,000 joules). But when the energy cost of moving the data needed for a single calculation — moving 200 bits of data in and out of memory multiple times — is included, the real cost of that calculation might be anywhere from 1,000 to 10,000 picojoules.
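
To see why data movement dominates, it helps to put the numbers side by side. In the sketch below, the 70-picojoule compute cost and the 200-bits-moved figure come from the article; the per-bit movement cost and the number of trips through the memory hierarchy are illustrative assumptions chosen to land inside the article’s 1,000-to-10,000-picojoule range.

```python
# Energy accounting for one calculation versus the data movement around it.
compute_pj = 70          # energy of the arithmetic itself (from the article)
bits_moved = 200         # bits shuttled in and out of memory (from the article)
pj_per_bit_per_trip = 5  # assumed cost of moving one bit once
trips = 4                # assumed trips through the memory hierarchy

movement_pj = bits_moved * pj_per_bit_per_trip * trips  # = 4,000 pJ
print(compute_pj + movement_pj)   # ~4,070 pJ total, inside the 1,000-10,000 range
print(movement_pj / compute_pj)   # data movement is ~57x the compute energy
```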

A range of other technologies are being explored to allow the continued growth of computing power, including ways to build electronic switches smaller than 10 nanometers — thought to be the minimum size for current chip-making techniques.

Last month, for example, researchers at Harvard and Mitre Corporation reported the development of nanoprocessor “tiles” based on electronic switches fabricated from ultrathin germanium-silicon wires.

I.B.M. researchers have been pursuing so-called phase-change memories, based on the ability to use an electric current to switch a material from a crystalline to an amorphous state and back again. This technology was commercialized by Samsung last year. More recently, I.B.M. researchers have said that they are excited about the possibility of using carbon nanotubes as a partial step toward building hybrid systems that straddle the nanoelectronic and microelectronic worlds.

Veteran computer designers note that whichever technology wins, the idea of moving computer processing closer to memory has been around for some time, and it may simply be the arrival of nanoscale electronics that finally makes the new architecture possible.

An early effort was iRAM, a research project at the University of California, Berkeley, in the late 1990s. Today, pressure for memory-oriented computing is coming both from the computing challenges posed by smartphones and from the data center, said Christoforos Kozyrakis, a Stanford University computer scientist who worked on the iRAM project in graduate school.