The New York Times, November 9, 2010, By Carl Zimmer

Scientists can’t say what they’ll be discovering 10 years from now. But they do pay careful attention to the direction in which their fields are moving, and they have some strong hunches about where they are headed in the year ahead. Here are prognostications for science in 2011 from 10 leading figures in 10 widely scattered disciplines, from genomics to mathematics to earth science. Regardless of whether they prove true next year, they offer a glimpse into the kinds of possibilities that get scientists excited.

TARGET HEALTH INC. comments: “Of all the predictions below, the first one, by Professor Steven Strogatz at Cornell, will take your breath away! Not only are computers relieving humans of their jobs, but in the near future they will relieve humans of the excitement of discovery and will change forever the nature of science itself. Humans will be forced to enhance their brain power in order to retain some semblance of control.”

Professor Steven Strogatz, Cornell University


Jennifer S. Altman for The New York Times

“We’re going to see scientific results that are correct, that are predictive, but are without explanation. We may be able to do science without insight, and we may have to learn to live without it. Science will still progress, but computers will tell us things that are true, and we won’t understand them.”

Steven Strogatz

Computers have been taking over more and more of the things humans used to do, including getting driving directions and operating subway trains. They’ve even started making serious inroads into the heart of science. Rather than just churning out simulations or pretty pie charts, computers can do what scientists have traditionally done: find mathematical equations that explain complicated data. Eureqa, for example, is an “automated scientist” created by a Cornell engineer, Hod Lipson, and his students. In 2009, they reported that simply by observing a pendulum, Eureqa can rediscover some of Newton’s laws of physics.

In 2011, automated scientists are poised to make major contributions to science. Dr. Lipson and his students are looking for hidden patterns in the networks of proteins that break down food in cells, for example, and they’ve set up a Web site where people can download Eureqa free of charge and discover laws of nature for themselves.

Automated scientists may speed up the pace of discovery, but in the process they may change the nature of science itself. For centuries, scientists have solved problems with flashes of insight. But while the equations that automated scientists offer are very good at making predictions, they are often inscrutable to human scientists. We may have to program computers to explain their discoveries to us. Otherwise they will become more like oracles than scientists, handing down mysterious utterances to us mere mortals.
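Eureqa’s underlying technique, symbolic regression, searches a space of candidate equations for one that best fits observed data. A minimal sketch of that idea in Python (the data, the candidate list, and the scoring below are illustrative assumptions, not Eureqa’s actual code):

```python
import math

# Synthetic "observations": pendulum length (m) vs. period (s),
# generated from T = 2*pi*sqrt(L/g), the law we hope to rediscover.
g = 9.81
data = [(L, 2 * math.pi * math.sqrt(L / g)) for L in (0.25, 0.5, 1.0, 2.0)]

# A tiny hypothesis space of candidate laws. Real symbolic regression
# evolves expression trees; here we simply enumerate a few guesses.
candidates = {
    "T = L":              lambda L: L,
    "T = 2*L":            lambda L: 2 * L,
    "T = sqrt(L)":        lambda L: math.sqrt(L),
    "T = 2*pi*sqrt(L/g)": lambda L: 2 * math.pi * math.sqrt(L / g),
}

def sse(f):
    """Sum of squared errors of a candidate law against the observations."""
    return sum((f(L) - T) ** 2 for L, T in data)

# Keep the candidate that explains the data best.
best = min(candidates, key=lambda name: sse(candidates[name]))
print(best)  # the correct pendulum law wins with zero error
```

The point of the sketch is the inscrutability problem in miniature: the search returns whichever formula scores best, with no accompanying explanation of why that formula is the right one.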

Dr. Strogatz is the Jacob Gould Schurman Professor of Applied Mathematics at Cornell University.

Charles M. Vest, past president of MIT


Brendan Smialowski for The New York Times

“We’re going to see in surprisingly short order that biological inspiration and biological processes will become central to engineering real systems. It’s going to lead to a new era in engineering.”

Charles M. Vest

In the 20th century, engineers and biologists dwelt in different universes. The biologists picked apart cells and tissues to see how they worked, while the engineers designed bridges, buildings and factories based on what they understood about physics and chemistry.

In recent years, however, engineers have begun paying very close attention to life. Evolution has fine-tuned living things for billions of years, giving them many of the properties — efficiency, strength, flexibility — that engineers love. Now biologically inspired engineering is taking hold in many engineering departments. In some cases, engineers are trying to mimic nature. In other cases, they are actually incorporating living things into their designs.

Researchers at Delft University in the Netherlands, for example, are developing bacteria-laced concrete. When cracks form, the bacteria wake from dormancy and secrete limestone, in effect healing the concrete. Next year, Dr. Vest expects, more of these lifelike designs will come to light, and they will keep coming for many years.

Dr. Vest is also president emeritus, Massachusetts Institute of Technology.

André A. Fenton


Fred R. Conrad/The New York Times

“I expect we will see the physical organization of a memory within the brain.”

André A. Fenton

For decades, neuroscientists searched the brain for physical markers of memories. They found evidence that memories form through the contacts between neurons: neurons grow new branches to communicate with their neighbors, and existing branches become stronger or weaker. In just the past few years, Dr. Fenton and other researchers have discovered that one molecule present in those branches, known as PKMzeta, maintains memories. Block PKMzeta, and the memory vanishes.

This discovery opens up an exciting prospect. Scientists could train animals to perform some simple task and then compare the brains of the animals that learned with those of the ones that didn’t. There should be a unique sprinkling of PKMzeta molecules in the animals that formed the new memory. Scientists could then map all the neurons and their branches that were required for the animals to remember what they learned. For the first time in history, scientists would be able to see a memory.

Rob Carlson


Stuart Isett for The New York Times

“It seems pretty likely within this year someone will show how to go from an adult peripheral blood draw to pluripotent stem cells. It means anyone who wants to try to make stem cells will be able to give it a whirl.”

Rob Carlson

The cells in an embryo can give rise to any kind of tissue in the adult body. But once they commit to being muscle cells, neurons or some other type of cell, there’s usually no going back. A huge amount of research has gone into finding a way to induce adult cells to turn back into so-called pluripotent stem cells. Someday it might be possible to use them to grow back damaged organs from a person’s own cells.

In September, Derrick J. Rossi and his colleagues at Harvard Medical School created artificial versions of RNA molecules, the templates that cells use to build proteins. They bathed human cells in a cocktail of five kinds of RNA molecules. The cells took in the RNA and made proteins that reprogrammed them into pluripotent stem cells.

Rossi’s method has a drawback, however: he and his colleagues used a type of cell called a fibroblast. Gathering these cells requires an invasive biopsy, followed by culturing to produce enough fibroblasts for the experiment.

This July, Dr. George Q. Daley of Harvard Medical School and his colleagues had success with a different route: they drew blood from healthy human donors and genetically reprogrammed the cells to become pluripotent stem cells. Next year, Dr. Carlson predicts, scientists will combine these methods: they will draw a little blood, place it in a cocktail of RNA and — voilà! — stem cells. This advance would make producing stem cells cheap, fast and relatively easy. In fact, it may even be possible for dedicated amateurs to set up stem cell labs in their own garages.

Dr. Carlson is also the author of “Biology Is Technology: The Promise, Peril, and New Business of Engineering Life.”

David Haussler


Peter DaSilva for The New York Times

“You’ll have a number of reports where people will have their genome sequenced, but there will be new types of genomes being read. We can read genomes from your immune cells. They adapt throughout your lifetime so they can protect you from diseases. Reading those genomes will be important, and you’re going to hear a lot about them next year.”

David Haussler

It took 15 years and $3 billion to sequence the first human genome. Today the cost is down to $20,000, and is expected to continue to drop in years to come. As the price falls, scientists are sequencing human genomes at a faster rate. Strictly speaking, however, each of us carries many different genomes, rather than just one. Every time a cell divides, there’s a small chance that it will make a mistake in copying its genes. The mutations that cancer cells acquire, for example, are often crucial for their ability to spread and resist chemotherapy.

Immune cell genomes change as well, but most of the time those changes keep us healthy rather than make us sick. By rearranging certain stretches of their DNA, immune cells can create new genes for antibodies and receptors. The International Cancer Genome Consortium started up in April, with the goal of sequencing 25,000 genomes from a wide range of cancers. Next year, we will see some of the first fruits of this collaboration.

Dr. Haussler also predicts that the first immune cell genomes will make their debut. Both lines of research could lead to different kinds of genome-based medicine.

Cancer genomics could allow doctors to select drugs with the best chances of killing a tumor. Immune genomics could let them survey the state of the immune system as it battles infections or as it learns to tolerate a transplanted organ.

Dr. Haussler directs the Center for Biomolecular Science and Engineering at the University of California, Santa Cruz.

Heidi B. Hammel

Space Science

Christopher Capozziello for The New York Times

“The Dawn spacecraft will get to orbit around a very large asteroid called Vesta in July. It’s going to be fascinating to see what it looks like up close. We’re going to be able to start answering very broad questions about the history of asteroids.”

Heidi B. Hammel

Asteroids first formed at the birth of the solar system 4.6 billion years ago, developing into small proto-planets. The biggest, like the 330-mile-wide Vesta, might even have grown into full-blown planets if not for the pull of Jupiter’s powerful gravitational field. Since then, collisions have blasted asteroids apart into smaller bodies. Astronomers want to know just how planet-like asteroids such as Vesta became. It’s possible, for example, that Vesta developed a heavy core and might even have a magnetic field. Once Dawn takes a close look at Vesta, it will move on to another giant asteroid, Ceres, which has water-bearing minerals and perhaps even a weak atmosphere. By comparing the two asteroids, astronomers hope to learn about early planet formation.

Stuart L. Pimm

Conservation Ecology

Alex di Suvero for the New York Times

“I think this year, the time will come to get a sense of just how much marine biodiversity is out there. And that will be really exciting, because for a long time we really haven’t known. There hasn’t been a sense of what’s happening out there.”

Stuart L. Pimm

In 2000, an international network of 2,700 scientists began the Census of Marine Life, the most ambitious attempt in scientific history to catalog the life dwelling in the world’s oceans. After a decade of trawling the seas and making 30 million observations, the project came to an end this year. The researchers unveiled dazzling photographs of some of the 6,000 or so new species they discovered.

Now they are busy crunching the data to come up with estimates of how many species of animals and other organisms are in the oceans. Next year, they may start to offer rough estimates, as well as hypotheses for why the diversity is high in some places and low in others. Dr. Pimm is particularly interested in what the census researchers will discover about how widespread species are in the ocean. On land, much of the world’s diversity is made up of species with very small ranges. Their limited habitats also make them vulnerable to human disturbances, which is why so many animals and plants on land are threatened. If the diversity of the oceans is also built on narrow-range species, such a finding might raise concern about the risk of extinction in the oceans as well.

Jane McGonigal

Game Design

Jim Wilson/The New York Times

“We’re going to see games tackling women’s rights. We’re going to see games around climate change. We’re going to see games around medical innovation, that doctors are going to play.”

Jane McGonigal

In August, the journal Nature published a paper on protein folding with 56,000 co-authors. Researchers at the University of Washington had set up a program that ran on people’s idle computers, using their spare computing power to search for the correct shapes of proteins. But the researchers eventually realized that the people who owned the computers could themselves help nudge the molecules into their proper shapes. The scientists took advantage of this crowd-sourced intelligence with a game, called Foldit, that lets people compete to become champion protein folders. Foldit’s community of online gamers exploded, and they’ve driven the science of protein folding forward accordingly.

The growth of broadband Internet access and computer speed has made online games a force to be reckoned with. The world spends three billion hours a week on online games, and that investment is only going to grow. Many people play war games and medieval adventures, but Dr. McGonigal predicts that in 2011 unconventional games with real-world impact will become much more prominent.

Dr. McGonigal is also the author of “Reality Is Broken: Why Games Make Us Better and How They Can Change the World” (Penguin, January 2011).

Michael J. McPhaden

Ocean Science

Isaac Brekken for The New York Times

“We’re going to deploy lots and lots of new kinds of instruments in the Indian Ocean that will be out there for decades. The Indian Ocean has got these tentacles that reach across the globe, and the data we’re collecting is going to revolutionize our understanding of the system.”

Michael J. McPhaden

The oceans, covering 70 percent of the planet, remain a barely explored world. And in that world, the Indian Ocean has been particularly mysterious. Early oceanographers paid more attention to the Atlantic and Pacific Oceans; today, pirates can put some parts of the Indian Ocean off limits to research. Nevertheless, ocean scientists have been finding evidence that the Indian Ocean is a very interesting place. In fact, the circulation of the ocean and its changing temperature can drive major changes in the atmosphere that can affect the entire planet.

Every month or two, fluctuations in the ocean’s temperature and circulation send up towering clouds that travel east. Depending on the time of year, these disturbances can affect the monsoons over India, the rainfall in the northwestern United States or hurricanes forming in the Atlantic. Known as the Madden-Julian oscillation, this phenomenon is so powerful that its winds can even speed up and slow down the rotation of the Earth.

In October 2011, an international team of scientists will be converging on the Indian Ocean for a campaign called Dynamo (short for Dynamics of the Madden-Julian Oscillation). Deploying sensors across much of the ocean, they will try to track an oscillation from its earliest stages. If Dynamo is a success, it will help scientists understand the conditions that trigger a new oscillation and how to predict its effects far and wide.

Dr. McPhaden is also president of the American Geophysical Union.

Ken Caldeira

Climate Change

Jim Wilson/The New York Times

“We’ll get to see whether the climate models have really improved since the last set of I.P.C.C. reports. My wager is that most of the improvements will be modest, and not represent a quantum leap in predictive capability.”

Ken Caldeira

Every few years, the Intergovernmental Panel on Climate Change publishes reports on the state of climate science and what we can expect from the climate in the future. In its latest report, published in 2007, the group concluded that most of the observed increase in global average temperatures since the mid-20th century is very likely due to the billions of tons of greenhouse gases that humans have pumped into the atmosphere.

The panel also made projections into the future, basing each one on different assumptions about how human society will change over the next century. If the population peaks around 2050 and the world’s economies shift toward information technology and service industries, the panel projects that by 2100 the average global temperature will likely rise between 1.1 and 2.9 degrees C. If, on the other hand, the world continues to rely on fossil fuels for its economic growth, the panel projects a likely rise of 2.4 to 6.4 degrees.

The climate models the I.P.C.C. used in 2007 were substantially more sophisticated than previous ones. But climate scientists at the time could see plenty of room for improvement. The 2007 I.P.C.C. report shied away from giving an upper boundary for sea level rise, for example. By the time the 2007 report was published, climate scientists were already developing a new set of models for the next I.P.C.C. report.

The Tianhe-1A Supercomputer, NVIDIA, October/November 2010, by Clay Dillow — Earlier this week China unveiled the world’s fastest bullet train, and today it boasts the world’s fastest supercomputer. Unveiled earlier today, the Tianhe-1A supercomputer has set a new performance record of 2.507 petaflops via 7,168 NVIDIA GPUs and 14,336 CPUs, unseating the Cray XT5 Jaguar at Oak Ridge National Laboratory as the world record holder.

Tianhe-1A was designed by the National University of Defense Technology in China, but like the XT5 Jaguar it will be operated as an open access system for high-powered, large scale scientific computations. Costing $88 million, Tianhe-1A weighs 155 tons and consumes 4.04 megawatts of electricity.

That sounds like a lot of power, but for what Tianhe-1A is capable of, it’s actually pretty efficient. Integrating GPUs (graphics processing units) alongside CPUs (central processing units, your basic microprocessors) cuts power consumption substantially, making the system three times more efficient than a CPU-only computer with the same performance (such a computer would require more than 50,000 CPUs, according to NVIDIA).
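The efficiency claim can be sanity-checked with a quick back-of-the-envelope calculation, assuming the quoted figures of 2.507 petaflops and 4.04 megawatts:

```python
petaflops = 2.507   # Tianhe-1A sustained performance (quoted above)
megawatts = 4.04    # Tianhe-1A power consumption (quoted above)

# Performance per watt: 2.507e15 flops / 4.04e6 W, roughly 620 Mflops per watt.
flops_per_watt = (petaflops * 1e15) / (megawatts * 1e6)
print(round(flops_per_watt / 1e6), "Mflops per watt")

# A CPU-only system one third as efficient would need roughly three times
# the power, about 12 MW, to deliver the same performance.
cpu_only_megawatts = megawatts * 3
print(round(cpu_only_megawatts, 2), "MW for a CPU-only equivalent")
```

This per-watt figure is the same metric the Green500 rankings use to compare supercomputer efficiency.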

Where can China get that much power? Maybe from the Three Gorges Dam, the world’s largest hydroelectric project, which also reached maximum generating capacity this week. There’s no word if China plans to build a space elevator, perfect the cold fusion process, and win the World Series before the week is out, but we suspect the Chinese have people diligently working on it.

By Robert Lanza MD

Imagine watching TV without a screen, or communicating with friends without a phone or Facebook. Would you have an implant to have virtual sex with anyone you wanted — or to be stronger or smarter? What’s the status of the science? When do humans become obsolete?

It’s not a matter of if, but rather when it’s going to happen. We already know how to clone entire organisms — for instance, our team has cloned herds of cows and even the first human embryos and endangered species (Science 294, 1893, 2001), we’ve reversed aging at the cellular level (Science 288, 665, 2000), and we’ve made progress growing replacement tissues for every organ system of the body, including the heart and kidney (Nature Biotechnology 20, 689, 2002). However, there’s one organ that’s a far greater challenge: the brain.

I remember a journey I took with my dog Shepp. I’d wandered miles when, from the trees, came the sound of a train. Clatter-clatter-rap-rap! To Shepp, still a puppy and a few days out of the pound, the steel caterpillar that rounded the corner, thunder billowing out of its nostrils, might well have been an extraterrestrial. It seemed so alive. Shepp let out a yelp. You can scarcely imagine his expression as it rushed toward us, rattling the earth. “It’s not alive,” I said, more to myself than to Shepp. How could I convey that it was only a lump of metal and quite unconscious — that it was only a machine with sliding bars and wheels hauling TV sets into the city? A loud whoosh and it vanished into the trees.

When the vibrations ceased, Shepp crawled out from the bushes. For myself, I stood there for some minutes, picturing the metal caterpillar moving beneath the tree-tops. As a biologist I could easily list the differences between a machine and a living organism. The anatomy of a train is not unlike the human body. There are moving parts, and within its huge round body, a carburetor that takes in air and fuel, and wires sending electrical impulses to the spark plugs.

It seems natural that someday we’ll make machines that’ll think and act like people. Already, there are scientists at MIT who say the interactions between our neurons can be duplicated with silicon chips. As a boy I worked in the laboratory of Stephen Kuffler — the pre-eminent neurophysiologist and founder of Harvard’s neurobiology department — watching scientists probe the neurons of caterpillars. Kuffler was the brilliant author of From Neuron to Brain, the textbook I used later as a medical student. In fact, so intrigued was I by the sensory-motor system that I returned to Harvard to work with the psychologist B.F. Skinner. However, I’ve since come to believe that the questions can’t all be solved by a science of behavior. What is consciousness? Why does it exist? There’s a kind of blasphemy in asking these questions, a personal betrayal of the memory of that gentle yet proud old man who took me into his confidence so many years ago. Perhaps it was the train, that insensate machine rolling down the tracks.

“The tools of neuroscience,” cautioned David Chalmers, “cannot provide a full account of conscious experience, although they have much to offer.” The mystery is plain. Neuroscientists have developed theories that help explain how information — such as the shape and smell of a flower — is merged in the brain into a coherent whole. But they’re theories of structure and function. They tell us nothing about how these functions are accompanied by conscious experience. Yet the difficulty in understanding consciousness lies precisely here, in understanding how a subjective experience emerges from a physical process at all. Even the Nobel physicist Steven Weinberg concedes that there’s a problem with consciousness, and that its existence doesn’t seem to be derivable from physical laws.

Physicists believe the “Theory of Everything” is hovering around the corner, and yet I’m struck that consciousness is still a mystery. We assume the mind is totally controlled by physical laws, but there’s every reason to think that the observer who opens Schrödinger’s box has a capacity greater than that of other physical objects. The difference lies not in the gray matter of the brain, but in the way we perceive the world. How are we able to see things when the brain is locked inside a sealed vault of bone? Information in the brain isn’t woven together automatically any more than it is inside a computer. Time and space are the manifold that gives the world its order. We instinctively know they’re not things, objects you can feel and smell. There’s a peculiar intangibility about them. According to biocentrism, they’re merely the mental software that, like in a CD player, converts information into 3D.

And this brings me back to the train hauling TVs into the city. I suspect that in some years there might even be a robot in the conductor’s seat, blowing the whistle that warns pedestrians to get off the track. In the 1950s, the neurophysiologist W. Grey Walter built a device that reacted to its environment. This primitive robot had a photoelectric cell for an eye, a sensing device to detect objects, and motors that allowed it to maneuver. Since then, robots have been developed using advanced technology that allows them to “see,” “speak,” and perform tasks with greater precision and flexibility. Eventually we may even be able to build a machine that can reproduce and evolve.

“Can we help but wonder,” asked Isaac Asimov, “whether computers and robots may not eventually replace any human ability? Whether they may not replace human beings by rendering them obsolete? Whether artificial intelligence, of our own creation, is not fated to be our replacement as dominant entities on the planet?” These are the questions that I pondered along the railroad tracks that day, and that trouble me when I see cyborgs on TV.

However, for an object — a machine, a computer — there’s no other principle but physics, and the chemistry of the atoms that compose it. Unlike us, they can’t have a unitary sense experience, or consciousness, for this must occur before the mind constructs a spatial-temporal reality. Eventually science will understand these algorithms well enough to create ‘thinking’ machines and enhancements to ourselves (both biological and artificial) that we can’t even fathom. And after over 200,000 years of evolution, Homo sapiens, as a distinct species, may go extinct, not by a meteor or nuclear weapons, but by our desire to achieve perfection.

Robert Lanza has published extensively in leading scientific journals. His book ‘Biocentrism’ lays out the scientific argument for his theory of everything.

Supercomputers. There probably isn’t a tech geek out there who doesn’t find them intriguing. Huge, hulking computers with performance that’s ages ahead of what we have on our desktops. They are the most powerful computing devices on the planet.

But where in the world do we find these supercomputers? Where are the fastest ones located? Which countries have the most of them? Read on to find out.

(Fun trivia #1: The Connection Machine 5 is one of the coolest-looking supercomputers of all time. It’s from the early ’90s and was featured in the movie Jurassic Park. Unfortunately, these machines are no longer in use.)

Location of the 10 fastest supercomputers

It turns out that eight of the ten fastest supercomputers are in the United States. The other two are located in Germany and China.

Top 10 supercomputers and their locations
Performance ranking Computer (name) Country
1 Jaguar United States
2 Roadrunner United States
3 Kraken XT5 United States
4 Jugene Germany
5 Tianhe-1 China
6 Pleiades United States
7 BlueGene/L solution United States
8 BlueGene/P solution United States
9 Ranger United States
10 Red Sky United States

For those interested, the complete list of the top 500 supercomputers is available at

So, 80% of the 10 fastest supercomputers are in the United States. What if we expand our scope a bit and look at the top 500 supercomputers out there?

Supercomputers by country

Not only does the United States dominate the top 10, it also dominates the list of the top 500 supercomputers in the world, although not to the same extent. More than half of the top 500 supercomputers are located in the United States.

Another observation here is that the top ten countries hold 89% of the top 500 supercomputers in the world.
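The tallying behind these share figures is straightforward; here is a sketch using hypothetical stand-in counts (the per-country numbers below are illustrative placeholders, not the actual Top500 breakdown):

```python
from collections import Counter

# Hypothetical stand-in for the Top500 country column. The real list has
# 500 entries; these per-country counts are invented for illustration.
entries = (["United States"] * 277 + ["China"] * 42 + ["Germany"] * 26 +
           ["United Kingdom"] * 24 + ["France"] * 26 + ["Japan"] * 17 +
           ["Other"] * 88)

counts = Counter(entries)
us_share = counts["United States"] / len(entries)
print(f"US share of the top 500: {us_share:.1%}")  # prints 55.4%
```

With these stand-in numbers the United States holds just over half the list, matching the shape of the real statistics described above.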

Us being Swedes, we would have loved seeing Sweden make it into this top ten list, and we almost made it. With seven supercomputers, Sweden comes in at number 11.

(Fun trivia #2: Seven of New Zealand’s eight supercomputers belong to Weta Digital, Peter Jackson’s visual effects company. Creating visual effects for movies like the Lord of the Rings trilogy, King Kong, Avatar, and more, they apparently need a ton of number-crunching power. With all the business he’s bringing into New Zealand, we wouldn’t be surprised if Peter Jackson is crowned King of Wellington any day now.)

Supercomputers by region

Where are the fastest supercomputers if you don’t look at countries, but at regions instead?

Granted, we know already that North America will have the biggest piece of the pie since the United States has so many supercomputers, but it’s interesting to see how the other regions of the world are doing.

Eight of the nine supercomputers in Oceania are in New Zealand; the other is in Australia. Four of the supercomputers in the Middle East are in Saudi Arabia, and the other two are in Israel. The one in Africa is in South Africa, while the South American one is in Brazil.

More supercomputer stats

While we were going through the supercomputer information on the site (the data source for this article), a few other things struck us as particularly noteworthy.

Most popular supercomputer OS

Here below you can see the division by operating system family across the top 500 supercomputers.

1.    Linux (89.2%)

2.    Unix (5.0%)

3.    Mixed (4.6%)

4.    Windows (1.0%)

5.    BSD based (0.2%)

As you can see, Linux has close to 90% of the supercomputer market. Quite an impressive feat. We wrote about the triumph of Linux as a supercomputer OS last summer. It’s an area where it dominates almost completely. Windows, on the other hand, is only used by 1% (5 out of 500).
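As a sanity check, these percentage shares convert cleanly back into whole machine counts out of 500 (the OS names and shares are taken from the list above):

```python
# OS family shares of the top 500 supercomputers, as percentages.
shares = {"Linux": 89.2, "Unix": 5.0, "Mixed": 4.6,
          "Windows": 1.0, "BSD based": 0.2}

# Convert each share back into a whole number of machines out of 500.
counts = {os_name: round(share / 100 * 500) for os_name, share in shares.items()}

print(counts)  # Linux: 446 machines, Windows: 5, etc.
```

The counts sum to exactly 500, so the published shares account for every system on the list.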

Most popular supercomputer vendors

You might ask yourself where you go to buy one of these huge computer systems, and here you go, the top ten vendors of supercomputers (again, based on the top 500).

1.    Hewlett-Packard (41.8%)

2.    IBM (37.0%)

3.    Cray (3.8%)

4.    SGI (3.8%)

5.    Dell (3.2%)

6.    Sun Microsystems (2.2%)

7.    Appro International (1.2 %)

8.    Fujitsu (1.0%)

9.    Bull SA (1.0%)

10.    Hitachi (0.6%)

The remaining vendors make up 4.4% altogether.

It should be noted that although HP is the leader in terms of sheer quantity, IBM and Cray have the two fastest systems.

So, now you know who to speak to if you want a supercomputer… All you need now is a very big wallet, a huge room, and one heck of a power supply.

Final words

Supercomputers are used for a ton of different purposes, often scientific in nature. They are for example a valuable resource to many universities. As we have shown here they are spread all over the world, albeit with a heavy bias towards the United States and Europe.

However, these stats show that the supercomputing hotbed is definitely the United States. The fact that the country has more than half of the top 500 supercomputers in the world, and eight of the ten fastest, is all the more impressive when you consider that the United States accounts for less than 4.5% of the world population and about 13% of internet users.