Cancer cells that have broken away from the main tumor can spread the disease. Now scientists are developing better ways to find them.

MIT Technology Review, October 12, 2011  –  One way that cancer spreads through the body is through circulating tumor cells. These are cancer cells that have broken away from the main tumor and begun to circulate in the blood. A new tumor can form if they become embedded elsewhere in the body and begin to grow.

So spotting circulating tumor cells is an important goal in the treatment of cancer.

Here’s the problem, though: circulating tumor cells are extremely difficult to find. A single millilitre of blood typically contains several billion red blood cells and several million white blood cells, but fewer than ten circulating tumor cells.

And there is only one way to find them. Although the cells can be made to look different from normal blood cells, somebody still has to spot them. So you need a highly trained cell biologist with a microscope and plenty of time. The words needle and haystack don’t do this task justice.

Various groups are looking for better ways to find circulating tumor cells and their efforts fall essentially into two categories. The first is biochemical: trapping the cells using antibodies that bond to them. The second is mechanical: filtering them out.

Both of these methods have drawbacks. Antibodies can only bond to the cells if they can get close enough. And although circulating tumor cells are bigger than red blood cells, they are about the same size as white blood cells, so filters have had limited success.

Today, Markus Gusenbauer at St. Poelten University of Applied Sciences in Austria and a few buddies make some progress in this area. These guys have developed a computer model of the way blood flows through a bed of magnetic beads.

When a magnetic field is applied to such a bed, the beads line up into strings that together form a filter with a specific gap size. Whether a cell can pass through depends on its size and also its flexibility.

The Austrian team’s model takes into account the size and flexibility of both red blood cells and circulating tumor cells to show how this kind of switchable filter could catch the bad guys.
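
To make the pass-or-trap logic concrete, here is a minimal sketch in Python. This is our illustration of the size-and-flexibility intuition, not the team’s actual cellular and fluid-dynamics model, and the cell dimensions and deformability factors are assumed round numbers:

```python
# Toy pass-or-trap rule for a tunable magnetic-bead filter.
# This is NOT the team's cellular/fluid-dynamics model -- just the
# size-and-flexibility intuition reduced to a single inequality.

def passes_filter(diameter_um: float, deformability: float, gap_um: float) -> bool:
    """A cell squeezes through if its deformed diameter fits the gap.

    deformability: 0.0 = rigid sphere, approaching 1.0 = highly flexible
    (red blood cells are famously deformable; tumor cells are stiffer).
    """
    effective_diameter = diameter_um * (1.0 - deformability)
    return effective_diameter <= gap_um

# Assumed, rough figures: red blood cells ~8 um across and very flexible;
# circulating tumor cells ~20 um and comparatively rigid.
gap = 6.0  # gap size set by the applied magnetic field, in micrometres
print(passes_filter(8.0, 0.7, gap))   # red blood cell -> True (passes)
print(passes_filter(20.0, 0.2, gap))  # tumor cell     -> False (trapped)
```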

The idea here is that the beads would also be covered in an antibody that latches onto the circulating tumor cells, keeping them trapped even when the magnetic field is switched off. This method uses both of the current techniques to overcome their drawbacks.

The plan would be to store the beads in a chamber in a microfluidic lab-on-a-chip device. A blood sample containing a handful of circulating tumor cells but billions of other types is pumped into the chamber and the magnetic field switched on.

This causes the beads to line up in a filter that traps the biggest cells. The antibodies on the beads then latch on to their targets, trapping them for later study.
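
Putting the pieces together, the envisioned capture sequence might look like the following sketch, which reuses the toy passes_filter() rule above. All names are hypothetical and the antibody chemistry is reduced to a boolean:

```python
# Hypothetical lab-on-a-chip capture sequence, reusing the toy
# passes_filter() rule above. Names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class Cell:
    kind: str             # "RBC", "WBC", or "CTC"
    diameter_um: float
    deformability: float

def capture_run(sample: list, gap_um: float) -> list:
    """Field on: filter by size; field off: keep only antibody-bound cells."""
    trapped = [c for c in sample
               if not passes_filter(c.diameter_um, c.deformability, gap_um)]
    # The antibody coating binds tumor cells specifically, so only they
    # stay put once the field -- and hence the bead filter -- is removed.
    return [c for c in trapped if c.kind == "CTC"]

sample = [Cell("RBC", 8.0, 0.7) for _ in range(1000)]
sample += [Cell("WBC", 12.0, 0.4), Cell("CTC", 20.0, 0.2)]
print(len(capture_run(sample, gap_um=6.0)))  # -> 1 tumor cell retained
```

Note that even this toy run surfaces the caveat discussed below: the white blood cell is trapped by size just like the tumor cell, and only the antibody step tells them apart.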

That’s the theory anyway. In reality, these guys have a lot more work to do before such a system can be made to work. For a start, circulating tumor cells come in a number of different flavours and the mechanical characteristics of each will need to be worked out.

More serious is the problem with white blood cells. Being a similar size to circulating cancer cells means they could easily clog these kinds of filters.

But the reality is that this kind of problem can only be solved by understanding what’s going on at the level of individual cells and engineering a solution that works on this scale. That’s why this kind of simulation is a useful first step.

Ref: arxiv.org/abs/1110.0995: A Tunable Cancer Cell Filter Using Magnetic Beads: Cellular And Fluid Dynamic Simulations

Wired Petri Dish Gives Real-Time Updates

A smart petri dish: Cells are grown directly on top of ePetri’s image sensor, the same type used in cell phones.

Researcher says “it’s like getting continuous tweets from the cells rather than an occasional postcard.”

MIT Technology Review, October 12, 2011, by Katharine Gammon  –  A new prototype petri dish can create an image of what’s growing on it and send that information to a laptop, all from inside an incubator. The prototype, dubbed the ePetri, was created from Lego blocks and a cell-phone image sensor, and uses light from a Google Android smart phone.

“Normally, one leaves the cells in an incubator and just checks up on them from time to time,” says Michael Elowitz, a professor of biology at Caltech, who coauthored the paper. “With ePetri, it’s like getting continuous tweets from the cells rather than an occasional postcard.”

A sample is placed on top of a small image-sensor chip, which uses an Android phone’s LED screen as a light source. The whole device is placed in an incubator, and the image-sensor chip connects to a laptop outside through a wire. As the image sensor snaps pictures of the cells growing in real time, the laptop stitches hundreds of images together to create a high-resolution picture of what is happening on the dish.
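
The actual ePetri reconstruction is more sophisticated, but the core idea of combining many slightly shifted low-resolution frames into one finer image can be sketched as follows. The function names, frame sizes, and the simple interleaving scheme are all our illustrative assumptions:

```python
# Toy version of the acquisition loop described above: the sensor
# captures a sequence of low-resolution frames (random stand-ins here)
# and the laptop interleaves them onto a finer grid. This illustrates
# the shift-and-combine idea only; it is not the actual ePetri algorithm.

import numpy as np

def acquire_frames(n_shifts: int, lowres_shape=(64, 64)):
    """Stand-in for reading one frame per position of the moving
    light spot on the phone's screen."""
    return [np.random.rand(*lowres_shape) for _ in range(n_shifts ** 2)]

def stitch(frames, n_shifts: int):
    """Interleave the shifted low-res frames onto an n_shifts-times
    denser pixel grid."""
    h, w = frames[0].shape
    hires = np.zeros((h * n_shifts, w * n_shifts))
    for idx, frame in enumerate(frames):
        dy, dx = divmod(idx, n_shifts)
        hires[dy::n_shifts, dx::n_shifts] = frame
    return hires

frames = acquire_frames(n_shifts=4)
image = stitch(frames, n_shifts=4)
print(image.shape)  # (256, 256): 16 shifted 64x64 frames -> one denser image
```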

The resolution is similar to a traditional microscope—fine enough to see the contents of cell nuclei, says senior author Changhuei Yang, professor of electrical engineering and bioengineering at Caltech. The prototype was described in a paper appearing online this week in the Proceedings of the National Academy of Sciences.

Peering into cells while they stay in the incubator has a number of benefits. For one, each device is its own lens-free microscope, meaning that many samples can be monitored at once automatically on the laptop. In addition, instead of using a microscope that can only focus on one tiny part of a sample, researchers get a picture of what’s happening on the entire petri dish at the same time—something that would help a lot with stem cells, which often change into different types of cells and move around.

The team is also working on a self-contained system with its own incubator that could eventually stay as a desktop diagnostic tool in a doctor’s office, so bacterial samples wouldn’t have to be sent out to a lab for testing.

“The low cost allows you to think creatively about how this will be used in the future,” says Jeffrey Morgan, a professor at Brown University who was not involved in the study. For example, the new device could cut down on time and cost for high-throughput drug screening, and create cheaper diagnostic tools.

The Petri Dish Gets a Makeover

The pore man’s solution: This nanopore membrane placed atop an agar plate allows nutrients and individual cells to move to the surface, growing bacterial microcolonies more quickly and enabling faster, more accurate diagnostics.
Credit: Nanologix

A nanopore membrane creates faster, surer cultures for everything from hospital diagnostics to water-quality checks.

MIT Technology Review, Summer/Fall 2011, by Lauren Gravitz  –  A new type of diagnostic could let hospital laboratories identify the presence of dangerous bacteria up to five times faster than conventional methods. The test could reduce unnecessary antibiotic use and provide more reliable water-quality test results. The key to the process is a membrane with nanosized pores, which enable rapid growth and identification of live organisms.

Current methods of identifying bacterial infections in hospitals seem almost antiquated: Swab, rub on a petri dish filled with agar, and wait. Some bacteria can take 48 hours or more to grow into visible colonies. But the new technology, developed by Hubbard, Ohio-based Nanologix, speeds up the process. Bacteria, and potentially viruses, move through the pores of its membrane and grow there. Then the membrane is plucked off the agar and placed on a staining plate.

“People knew for decades that microcolonies would be present in culture, but there was no way to transfer them or stain them in a way to make them visible,” says Nanologix CEO Bret Barnhizer. But the company’s technology—”bionanopore” membranes and “bionanofilters”—is sensitive enough to detect a single cell. And when the nanofilter is saturated with antibodies specific to a particular bacterium or virus, it can quickly indicate whether a particular offender is present.

The first test of the Nanologix system has been completed by a group of researchers at the University of Texas Health Sciences Center on a bacterium known as group B streptococcus. Also known as GBS, it can cause feeding, breathing, and other problems in a newborn baby if its mother is infected with it at the time of childbirth. Because of this, most pregnant women are tested for GBS about a month before their due date, with a culture test that yields results in two to three days. If the results are positive, antibiotics can eliminate the infection before the baby is born.

But if a woman arrives at a hospital in labor and has never been tested for GBS, she’s assessed for GBS risk and often given large doses of broad-spectrum antibiotics, just in case. Because the risk assessment basically consists of the physician’s best guess, some patients who need the antibiotics won’t get them, and some who don’t need them will.

A study published online this month in the American Journal of Perinatology by the University of Texas researchers shows that the Nanologix test can yield reliable results in as few as four hours. It’s not fast enough to prevent antibiotic administration to untested women already in labor, but it’s fast enough to know whether their new babies should be monitored for signs of infection. “It would be great to have a faster-turnaround test,” says Kristin Brigger, a Houston private-practice obstetrician and gynecologist who was not involved in the research. “For patients in the academic setting, patients without good prenatal care and high risk of preterm labor, it would be really good.”

Other, more advanced technology already offers much faster turnaround than the Nanologix plates. Polymerase chain reaction machines can identify infection in as little as 30 minutes, fast enough for use between onset of labor and delivery. But such machines can be pricey, as can the individual tests, and the technology isn’t always available in community hospitals. In contrast, the Nanologix test kits cost between $5 and $10, just slightly more than the customary test, and can be done in any hospital lab. Barnhizer says the company also has developed kits that can detect E. coli, salmonella, listeria, and more; Nanologix plans to submit the first test (for GBS and other gram-positive bacteria) to the U.S. Food and Drug Administration later this year, with hopes for approval by the first quarter of 2012.

Nanologix is also working with the U.S. Environmental Protection Agency to develop kits that can be used during outbreaks for faster, more reliable detection of waterborne microorganisms such as E. coli and cryptosporidium. “It’s a really good technique, because it shortens a process of 12 to 18 hours to five or six, so we can get an answer about whether our target bacterium is there or not within a day,” says Gerard Stelma Jr., a senior microbiologist with the EPA in Cincinnati. Not only are currently available tests slow, he says, but their effectiveness also varies from day to day. “When I saw what they were doing, I thought this is the most novel new method I have seen in a number of years.”

Paul Allen: The Singularity Isn’t Near

The Singularity Summit takes place this weekend in New York. But the Microsoft cofounder and a colleague say the singularity itself is a long way off.

MIT Technology Review, October 12, 2011, by Paul Allen and Mark Greaves  –  Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they’ll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It’s heady stuff.

While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn’t just a progression of steadily increasing capability, but is in fact exponentially accelerating—what Kurzweil calls the “Law of Accelerating Returns.” He writes that:

So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity … [1]

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.
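
To see where figures of this magnitude come from, here is a back-of-the-envelope version of the compounding (our simplification, not Kurzweil’s actual model): assume the rate of progress doubles every decade and add up the century decade by decade.

```python
# Back-of-the-envelope compounding: assume the rate of progress doubles
# every decade (a simplification of the argument, not Kurzweil's model).

rate = 1.0    # progress per calendar year, in "year-2000-equivalent years"
total = 0.0
for decade in range(10):   # the 21st century, decade by decade
    total += rate * 10     # ten calendar years at the current rate
    rate *= 2              # the rate itself doubles each decade
print(total)  # 10230.0 -- ten millennia of equivalent progress
```

Different baselines and doubling times push the total toward Kurzweil’s 20,000 years, but the shape of the argument is the same: everything hinges on the doubling assumption continuing to hold.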

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.

Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these “laws” will work until they don’t. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different than the Moore’s Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil’s timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn’t behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reëvaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don’t support the overall Moore’s Law-style acceleration needed to get to the singularity on Kurzweil’s schedule.

The Complexity Brake

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end—the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain “bottom up” from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person’s living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren’t even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain’s neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing “start.”

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. “Brain duplication” strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn’t address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird’s ability to fly in various conditions, simply having a complete diagram of bird anatomy isn’t sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism’s behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.
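
The bird analogy can be made concrete with even the simplest textbook neuron model. In the leaky integrate-and-fire sketch below, the “wiring” and the input are identical in both runs; only functional parameters that no structural scan reveals (the membrane time constant and firing threshold, given arbitrary illustrative values here) separate a neuron that fires repeatedly from one that never fires at all:

```python
# Leaky integrate-and-fire neuron, a standard textbook model. Two runs
# with identical "wiring" and input but different functional parameters
# (values here are arbitrary illustrations) behave completely differently,
# so a structural map alone cannot pin down function.

def count_spikes(tau_ms: float, threshold: float, input_current: float,
                 t_ms: float = 200.0, dt: float = 0.1) -> int:
    """Integrate the membrane equation and count threshold crossings."""
    v, spikes = 0.0, 0
    for _ in range(int(t_ms / dt)):
        v += dt * (input_current - v) / tau_ms  # leaky integration
        if v >= threshold:                      # fire and reset
            spikes += 1
            v = 0.0
    return spikes

print(count_spikes(tau_ms=10.0, threshold=1.0, input_current=1.5))  # fires repeatedly
print(count_spikes(tau_ms=10.0, threshold=2.0, input_current=1.5))  # never fires: 0
```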

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM’s Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle—their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can’t leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson’s performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn’t happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can’t create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcan’s director for knowledge systems.

[1] Kurzweil, “The Law of Accelerating Returns,” March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM’s BlueGene/P that was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain’s neurons, though currently it happens many times more slowly than would happen in an actual brain.
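
For scale, the raw-capability claim in this footnote can be reproduced with round textbook figures (all of them assumptions, not measurements):

```python
# Rough arithmetic behind the footnote, using round textbook figures
# (all assumed, not measured):
neurons = 1e11          # ~100 billion neurons in a human brain
synapses_per = 1e4      # ~10,000 synapses per neuron
spike_rate_hz = 100     # generous average firing rate
ops_per_event = 10      # arithmetic ops to update one synapse per spike

ops_per_second = neurons * synapses_per * spike_rate_hz * ops_per_event
print(f"{ops_per_second:.0e}")  # 1e+18: exascale territory
```

On these assumptions the requirement lands around an exaflop, which is why exaflop-class machines plausibly supply the raw capability even though, as the footnote notes, current simulations still run far slower than real time.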