Craig Venter and colleagues compare consumer genetic tests and suggest ways to make them more useful

MIT Technology Review, October 16, 2009, by David Ewing Duncan  —  Geneticist Craig Venter and colleagues have tested two of the leading consumer genomics services and declared the fledgling industry to be promising, but still very early in terms of how useful the information might be.

Venter, the founder of the J. Craig Venter Institute in San Diego, CA, and collaborators from that institute and from Scripps Translational Science Institute in La Jolla, CA, sent the saliva of five individuals–they don’t say whose spit they used–to 23andMe and Navigenics, two major online DNA-testing companies, both based in the San Francisco Bay area. Venter’s team then analyzed and compared the results to see if the sites provided consistent information.

The team compared the five sets of DNA results for the risk of developing 13 diseases, including colon cancer, lupus, type 2 diabetes, and restless legs syndrome.

The results are published today in a commentary in the journal Nature, along with suggestions on how to improve genetic testing as a direct-to-consumer product.

Venter’s group found that the raw genetic sequencing data supplied by each company was almost 100 percent consistent. That is, for the 500,000 to 1 million genetic markers tested for each person, the As, Cs, Ts, and Gs were almost exactly the same.
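A consistency check of this kind is simple to sketch. The snippet below is purely illustrative: the marker IDs, genotype calls, and dictionary format are invented for the example, and the article does not describe either company's actual data export.

```python
# Hypothetical sketch: measuring concordance between two companies' raw
# genotype calls for the same person. Marker IDs and genotypes below are
# made up; real exports contain 500,000 to 1 million markers.

def concordance(calls_a, calls_b):
    """Fraction of shared markers with identical genotype calls."""
    shared = set(calls_a) & set(calls_b)
    if not shared:
        return 0.0
    matches = sum(1 for rsid in shared if calls_a[rsid] == calls_b[rsid])
    return matches / len(shared)

company_a = {"rs123": "AG", "rs456": "CC", "rs789": "TT"}
company_b = {"rs123": "AG", "rs456": "CC", "rs789": "TC"}

print(f"{concordance(company_a, company_b):.2%}")  # 2 of 3 shared calls agree
```

On real data from the study, this fraction came out at nearly 100 percent for every subject.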

How the genetic testing sites interpreted the data was less consistent. Each used studies in the scientific literature that have scanned human populations for DNA markers associated with risk factors in order to predict whether a person will succumb to a particular disease. A person might have, say, a 30 percent increased risk for type 2 diabetes if they have a particular version of the relevant genetic marker.

For seven of the 13 diseases, only about half of the risk predictions provided by 23andMe and Navigenics agreed for the five subjects. For instance, for lupus and type 2 diabetes, three of the five subjects received conflicting results.

Digging deeper, the researchers found that some of the individual risk predictions were strikingly different. For psoriasis, 23andMe reported a relative risk of 4.02 (about four times the average risk) for one individual, while Navigenics reported only 1.25 (25 percent above average)–roughly a threefold difference.

In my own experiments, I compared results delivered by several online genetic-testing websites: 23andMe and Navigenics, and also Iceland-based deCODEme. For heart attack, the three sites produced three different overall risk levels–high from Navigenics, medium from 23andMe, and low from deCODEme. I found several less striking contradictions for diabetes, macular degeneration, and other traits.

The differences arise for two main reasons: because different companies sometimes use different markers, and combinations of markers, to determine an overall risk score for a disease, and because the algorithms the sites use differ in how they weigh the risk factors for different genetic markers.
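Both effects can be seen in a toy calculation. The sketch below is not either company's actual algorithm; it only illustrates one common approach–multiplying per-marker relative risks into a composite score–and every marker name and risk value in it is made up.

```python
# Illustrative only: how two services scoring the SAME genotype can
# disagree if they use different marker panels. All values are invented.

def composite_risk(per_marker_risks):
    """Multiply per-marker relative risks into one overall score."""
    score = 1.0
    for risk in per_marker_risks:
        score *= risk
    return score

# Hypothetical relative risk contributed by each marker this person carries.
genotype = {"rs1": 1.3, "rs2": 0.9, "rs3": 1.4}

site_a = composite_risk([genotype[m] for m in ("rs1", "rs2", "rs3")])
site_b = composite_risk([genotype[m] for m in ("rs1", "rs3")])  # smaller panel

print(round(site_a, 2), round(site_b, 2))  # same DNA, two different scores
```

Different weighting schemes for the same markers would shift the scores further still, which is why the companies' reports can diverge even when their raw data agree.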

The Venter study notes that, in some cases, companies define the average population disease risk differently. “Navigenics distinguishes population disease risk between men and women (for example, men are more likely to have heart attacks than women), whereas 23andMe primarily takes into account age (for example, incidence of rheumatoid arthritis increases with age),” the authors write. “This ambiguity in the definition of a ‘population’ underscores the caution one must exercise when interpreting absolute risk results.”

The Venter study makes several recommendations for direct-to-consumer genetic-testing companies. These include a greater focus on the genetic variations that have a high impact on disease risk, and a call to use more markers that provide information on risks associated with taking medications, such as the blood-thinner warfarin and cholesterol-lowering statins. (23andMe does provide a marker that can indicate a risk of side effects for warfarin.) The researchers also suggest that sites better explain how their markers fit into the overall impact of genes on a disease (for example, whether a given marker represents 5 percent, 20 percent, or 100 percent of the genetic impact on a disease).

Recommendations to the genetics community include conducting clinical studies to validate genetic markers for disease and for behavioral traits in actual patients tracked over time, and giving more attention to non-Caucasian ethnicities.

23andMe cofounder Anne Wojcicki and Navigenics cofounder David Agus both agree with the recommendations. “There is a distinct need for transparency and quality in genetic association studies,” says Agus. “We are giving critical information to individuals to help them with their personal health. That information needs to be correct, or we have done a disservice.”

“We plan on working with 23andMe to codevelop standards for the field,” Agus adds. This is a voluntary effort that has so far faltered. 23andMe science and policy liaison Andro Hsu says there might be a need for a neutral third party to create standards–he suggested the Centers for Disease Control. Others have talked about the Food and Drug Administration, or even the National Institute of Standards and Technology.

What Venter and company didn’t mention is the word “regulation”–which is likely to happen in some form if the industry is to ensure that information is accurate and consistent. The Nature commentary also doesn’t call for an aggressive and comprehensive effort to validate genetic tests by running studies in the clinic to see which of these markers really do predict disease–a project that may be needed to accelerate the day when these tests become more medically useful for individuals.

Venter does, however, suggest that the latest DNA-sequencing technology is rapidly producing a more accurate and thorough alternative to the single DNA markers used by these companies. In just the past few months, sequencing the entire six billion nucleotides of a person’s genome has become inexpensive enough that it may soon replace the chips currently used by testing companies. Existing tests scan a person’s genome for up to one million markers covering most genetic variations associated with human disease, but they come nowhere near covering them all.

Just two years ago, it cost $1 million to sequence an entire genome; now the price is rapidly dropping to under $50,000, and may be as little as $5,000 by next year.

“Once we have at least 10,000 human genomes and the complete phenotype [disease profiles] with these genomes, we will be able to make correlations that are impossible to do at the present time,” says Venter. “At that stage, genetic testing will be a good investment for private companies and the government.”


Biological parts: Ginkgo BioWorks, a synthetic-biology startup, is automating the process of building biological machines. Shown here is a liquid-handling robot that can prepare hundreds of reactions.
Credit: Ginkgo BioWorks

Ginkgo BioWorks aims to push synthetic biology to the factory level

MIT Technology Review, October 19, 2009, by Emily Singer  —  In a warehouse building in Boston, wedged between a cruise-ship drydock and Au Bon Pain’s corporate headquarters, sits Ginkgo BioWorks, a new synthetic-biology startup that aims to make biological engineering easier than baking bread. Founded by five MIT scientists, the company offers to assemble biological parts–such as strings of specific genes–for industry and academic scientists.

“Think of it as rapid prototyping in biology–we make the part, test it, and then expand on it,” says Reshma Shetty, one of the company’s cofounders. “You can spend more time thinking about the design, rather than doing the grunt work of making DNA.” A very simple project, such as assembling two pieces of DNA, might cost $100, with prices increasing from there.

Synthetic biology is the quest to systematically design and build novel organisms that perform useful functions, such as producing chemicals, using genetic-engineering tools. The field is often considered the next step beyond metabolic engineering because it aims to completely overhaul existing systems to create new functionality rather than improve an existing process with a number of genetic tweaks.

Scientists have so far created microbes that can produce drugs and biofuels, and interest among industrial chemical makers is growing. While companies already exist to synthesize pieces of DNA, Ginkgo assembles synthesized pieces of DNA to create functional genetic pathways. (Assembling specific genes into long pieces of DNA is much cheaper than synthesizing that long piece from scratch.)

Ginkgo will build on technology developed by Tom Knight, a research scientist at MIT and one of the company’s cofounders, who started out his scientific career as an engineer. “I’m interested in transitioning biology from being sort of a craft, where every time you do something it’s done slightly differently, often in ad hoc ways, to an engineering discipline with standardized methods of arranging information and standardized sets of parts that you can assemble to do things,” says Knight.

Scientists generally create biological parts by stitching together genes with specific functions, using specialized enzymes to cut and sew the DNA. The finished part is then inserted into bacteria, where it can perform its designated task. Currently, this process is mostly done by a lab technician or graduate student; consequently, the process is slow, and the resulting construct isn’t optimized for use in other projects. Knight developed a standardized way of putting together pieces of DNA, called the BioBricks standard, in which each piece of DNA is tagged on both sides with DNA connectors that allow pieces to be easily interchanged.

“If your part obeys those rules, we can use identical reactions every time to assemble those fragments into larger constructs,” says Knight. “That allows us to standardize and automate the process of assembly. If we want to put 100 different versions of a system together, we can do that straightforwardly, whereas it would be a tedious job to do with manual techniques.” The most complicated part that Ginkgo has built to date is a piece of DNA with 15 genes and a total of 30,000 DNA letters. The part was made for a private partner, and its function has not been divulged.
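The rule Knight describes can be expressed in a few lines of code. This is a toy illustration of the idea only: real assembly happens in enzymatic reactions, not string operations, and the connector sequences below are placeholders, not the actual BioBricks prefix and suffix.

```python
# Toy sketch of the BioBricks idea: every part carries standard connector
# sequences, so one identical operation joins any two conforming parts.
# PREFIX/SUFFIX are placeholders, not the real BioBricks connectors.

PREFIX = "AAAA"   # stand-in for the standard upstream connector
SUFFIX = "TTTT"   # stand-in for the standard downstream connector

def is_biobrick(seq):
    """A part is assemblable only if it carries both connectors."""
    return seq.startswith(PREFIX) and seq.endswith(SUFFIX)

def assemble(part_a, part_b):
    """Join two standard parts into a larger standard part."""
    if not (is_biobrick(part_a) and is_biobrick(part_b)):
        raise ValueError("part does not follow the assembly standard")
    # Drop the inner connectors; the composite keeps one prefix and one
    # suffix, so it can itself be fed back into the same reaction.
    return part_a[:-len(SUFFIX)] + part_b[len(PREFIX):]

gene1 = PREFIX + "ATGCCC" + SUFFIX
gene2 = PREFIX + "GGGTAA" + SUFFIX
construct = assemble(gene1, gene2)
assert is_biobrick(construct)  # the composite is again a standard part
```

Because the output obeys the same standard as the inputs, assemblies can be chained and automated–which is exactly what makes the process robot-friendly.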

Assembling parts is only part of the challenge in building biological machines. Different genes can have unanticipated effects on each other, interfering with the ultimate function. “One of the things we’ll be able to do is to assemble hundreds or thousands of versions of a specific pathway with slight variations,” says Knight. Scientists can then determine which version works best.

So far, Knight says, the greatest interest has come from manufacturing companies making chemicals for cosmetics, perfumes, and flavorings. “Many of them are trying to replace a dirty chemical process with an environmentally friendly, biologically based process,” he says.

Ginkgo is one of just a handful of synthetic-biology companies. Codon Devices, a well-funded startup that synthesized DNA, ceased operations earlier this year. “The challenge now is not to synthesize genes; there are a few companies that do that,” says Shetty. “It’s to build pathways that can make specific chemicals, such as fuels.” And unlike Codon, Ginkgo is starting small. The company is funded by seed money and a $150,000 loan from Lifetech Boston, a program to attract biotech to Boston. Its lab space is populated with banks of PCR machines, which amplify DNA, and liquid-handling robots, mostly bought on eBay or from other biotech firms that have gone out of business. And the company already has a commercial product–a kit sold through New England Biolabs that allows scientists to put together parts on their own.

“If successful, they will be providing a very important service for synthetic biology,” says Chris Voigt, a synthetic biologist at the University of California, San Francisco. “There isn’t anybody else who would be characterizing and providing parts to the community. I think that this type of research needs to occur outside of the academic community–at either a company or a nonprofit institute.”

By Gabe Mirkin MD  —  Several studies have shown that exercise is beneficial for people with varicose veins; a regular exercise program may be the most effective treatment.

Veins contain valves that keep blood from flowing backward. When those valves cannot close properly, blood backs up and the veins become varicose, widening until they look like blue snakes underneath the skin. Since varicose veins swell because blood pools in them, the best treatment is to empty blood from the veins. When you exercise, your leg muscles alternately contract and relax, squeezing blood back toward the heart, so running, walking, cycling, skiing, skating, and dancing are ideal treatments, while standing or sitting increases blood pooling and widens the veins.

Varicose veins are caused by a genetic weakness in the valves or by an obstruction of blood flow, such as from obesity, pregnancy, tumors, clots, or heart disease. Superficial varicose veins that you can see can cause a feeling of heaviness or aching, but they are rarely painful. Most varicose veins are best left alone. Special injections and laser burning remove only small veins. If you don’t like the way that large veins look, you can have a surgeon make a cut through the skin above and below the veins, attach a wire, and pull the vein out from underneath your skin. People with varicose veins should not stand around for long periods, and they should wear support hose when they stand or walk slowly, but they don’t need them when they exercise. Leg ulcers associated with varicose veins are best treated with a bacterial culture and injections of massive doses of the appropriate antibiotic. Surgery is rarely curative.

If you have varicose veins and develop severe pain, usually in the veins in your calf muscles, you have to worry about a clot. Clots in veins are usually dangerous only if they break loose and travel to your lungs, where they obstruct the flow of blood. So doctors order tests for obstruction of venous blood flow in people who develop sudden, severe pain deep in their muscles. If a clot is present, doctors look for clotting disorders, such as those caused by tumors and antiphospholipid antibodies.

By Gabe Mirkin, M.D., for CBS Radio News

Checked 10/14/09 – www.DrMirkin.com


Sign of the Times: This giant Manhattan billboard displays a running tally of greenhouse gases in the earth’s atmosphere. Credit: Brandon Barrett

MIT analysis finds global-warming projections more dire 

MIT Technology Review, October 2009, by David Chandler  —  It’s getting hotter faster: a sobering new MIT study on the odds of global warming shows that without drastic action, the planet is likely to heat up twice as much this century as previously projected.

Researchers in the Joint Program on the Science and Policy of Global Change reached these conclusions using the MIT Integrated Global Systems Model, a detailed computer simulation of global economic activity and climate change that they’ve been refining since the 1990s. They ran the model 400 times, varying the values of the input parameters each time in such a way that each run had about an equal probability of being correct. This process reflects the fact that each of dozens of parameters–the rate at which the ocean’s surface waters mix with deeper waters, the rate at which new coal plants will be built–has its own range of uncertainty. The multiple runs show how these parameters, whatever their specific values, interact to affect overall climate.
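The ensemble technique described above can be sketched in miniature. The toy "climate model" and parameter ranges below are invented purely for illustration; the real MIT model simulates coupled economic and physical systems with dozens of uncertain parameters, not two.

```python
# Sketch of an uncertainty ensemble: run a model many times, drawing each
# uncertain parameter from its own range, so the runs together map the
# spread of possible outcomes. The model and ranges here are made up.

import random

def toy_model(ocean_mixing, coal_growth):
    """Stand-in for a full simulation: returns a warming estimate in °C."""
    return 2.0 + 3.0 * coal_growth - 1.5 * ocean_mixing

random.seed(0)  # reproducible draws for this example
runs = []
for _ in range(400):  # 400 runs, as in the study
    params = {
        "ocean_mixing": random.uniform(0.2, 0.8),  # uncertain rate
        "coal_growth": random.uniform(0.5, 1.5),   # uncertain rate
    }
    runs.append(toy_model(**params))

runs.sort()
median = runs[len(runs) // 2]
print(f"median projected warming over {len(runs)} runs: {median:.1f} °C")
```

Sorting the outcomes also yields the percentiles behind a chart like the team's "roulette wheel," which assigns odds to each band of temperature rise.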

Study coauthor Ronald Prinn, ScD ’71, the program’s codirector, says the MIT model is the only one that looks at the effects of economic activity in concert with the effects of changes in atmospheric, oceanic, and biological systems. The new study, published in the American Meteorological Society’s Journal of Climate, takes into account recent changes in projections about the growth of economies such as China’s, as well as new data on some physical processes, such as the rate at which oceans take up both heat and carbon dioxide. When results of the different scenarios are averaged together, the median projection shows land and ocean surfaces warming 5.2 °C by 2100. The researchers’ 2003 study, which was based on results from an earlier version of the model, projected an increase of just 2.4 °C.

Prinn and the team used the new results to update their “roulette wheel,” a pie chart representing the relative odds of various levels of temperature rise (see “Wheel of Global Fortune,” January/February 2008). The new version of the wheel reflects a much higher probability of greater temperature increases if no actions are taken to reduce greenhouse-gas emissions.

To help raise general awareness of how serious the problem is, the MIT team also developed a method for estimating the current output of greenhouse gases around the world, which is used to continuously update a “carbon counter” on a giant billboard outside New York’s Madison Square Garden. Sponsored by Deutsche Bank, the 70-foot-tall display is similar to the famed national-debt clock in Times Square.

But there’s good news from the model as well: if substantial measures to curb greenhouse-gas emissions are put in place soon, the new projections show, the expected warming will be no worse than the earlier studies suggested. “This increases the urgency for significant policy action,” Prinn says.

The-Scientist.com, October 19, 2009, by Alla Katsnelson  —  A two-year battle between the US Patent and Trademark Office (USPTO) and biopharma over a much-contested set of patent rules ended on October 8, when the USPTO rescinded the rules altogether.

“These regulations have been highly unpopular from the outset and were not well received by the applicant community,” said David Kappos, director of the USPTO, in a statement. “In taking the actions we are announcing [October 8], we hope to engage the applicant community more effectively on improvements that will help make the USPTO more efficient, responsive, and transparent to the public.”

The rules, released by the USPTO in 2007 to streamline the patent-approval process, limited the number of times an applicant could file a continuation application, which adds claims to an existing patent. In addition, inventors could include only one request for a continued examination, which they file after the patent office has rejected their patent application. The rules also limited the number of claims in a single patent submission to 25. With the rules now rescinded, there are once again no limits on the number of continuation requests or on the number of claims a single patent can include.

The biopharma community objected that these limitations would make it more difficult to protect intellectual property in the life sciences, because the scope of biological discoveries so often expands with additional research.

“By the PTO’s own numbers, biotech relies to a greater extent than other industries on so-called continuing patent applications and a variety of patent claims, all of which would have been constrained by the proposed rules,” Hans Sauer, associate general counsel for intellectual property for the Biotechnology Industry Organization, told GenomeWeb Daily News. “Accordingly, biotech always felt particularly impacted by these rules.”

The new rules were supposed to take effect on November 1, 2007, but GlaxoSmithKline filed an 11th-hour lawsuit on the grounds that the USPTO did not have the authority to institute them. A district court ruled in favor of the company in April 2008. The USPTO appealed, and in March of this year a federal court panel ruled that the agency did have the authority to make the rules; this summer, the court scheduled further hearings on the case. Now, though, GSK and the USPTO say they will file a joint motion for the case to be dismissed.


VP of Engineering Mike Schroepfer reveals the tricks that keep the world’s biggest social network going

MIT Technology Review, October 14, 2009, by Erica Naone  —  Last week, the world’s biggest social network, Facebook, announced that it had reached 300 million users and is making enough money to cover its costs.

The challenge of dealing with such a huge number of users has been highlighted by hiccups suffered by some other social-networking sites. Twitter was beleaguered with scaling problems for some time and became infamous for its “Fail Whale”–the image that appears when the microblogging site’s services are unavailable.

In contrast, Facebook’s efforts to scale have gone remarkably smoothly. The site handles about a billion chat messages each day and, at peak times, serves about 1.2 million photos every second.

Facebook vice president of engineering Mike Schroepfer will appear on Wednesday at Technology Review’s EmTech@MIT conference in Cambridge, MA. He spoke with assistant editor Erica Naone about how the company has handled a constant flow of new users and new features.

Technology Review: What makes scaling a social network different from, say, scaling a news website?

Mike Schroepfer: Almost every view on the site is a logged-in, customized page view, and that’s not true for most sites. So what you see is very different than what I see, and is also different than what your sister sees. This is true not just on the home page, but on everything you look at throughout the site. Your view of the site is modified by who you are and who’s in your social graph, and it means we have to do a lot more computation to get these things done.

TR: What happens when I start taking actions on the site? It seems like that would make things even more complex.

MS: If you’re a friend of mine and you become a fan of the Green Day page, for example, that’s going to show up in my homepage, maybe in the highlights, maybe in the “stream.” If it shows me that, it’ll also say three of [my] other friends are fans. Just rendering that home page requires us to query this really rich interconnected dataset–we call it the graph–in real time and serve it up to the users in just a few seconds or hopefully under a second. We do that several billion times a day.

TR: How do you handle that? Most sites deal with having lots of users by caching–calculating a page once and storing it to show many times. It doesn’t seem like that would work for you.

MS: Your best weapon in most computer science problems is caching. But if, like the Facebook home page, it’s basically updating every minute or less than a minute, then pretty much every time I load it, it’s a new page, or at least has new content. That kind of throws the whole caching idea out the window. Doing things in or near real time puts a lot of pressure on the system because the live-ness or freshness of the data requires you to query more in real time.

We’ve built a couple systems behind that. One of them is a custom in-memory database that keeps track of what’s happening in your friends network and is able to return the core set of results very quickly, much more quickly than having to go and touch a database, for example. And then we have a lot of novel system architecture around how to shard and split out all of this data. There’s too much data updated too fast to stick it in a big central database. That doesn’t work. So we have to separate it out, split it out, to thousands of databases, and then be able to query those databases at high speed.
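The splitting Schroepfer describes can be illustrated with a minimal sketch. This is the general hash-sharding pattern, not Facebook's actual implementation; the shard count and user IDs below are assumptions (the interview says only "thousands of databases").

```python
# Hedged sketch of hash-based sharding: map each user ID to a stable
# shard number so reads and writes spread across many databases.
# NUM_SHARDS and the IDs are invented for illustration.

import hashlib

NUM_SHARDS = 4096  # assumption; the real count is described only as "thousands"

def shard_for(user_id: int) -> int:
    """Map a user ID to a stable shard number."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same user always lands on the same shard, so rendering a home page
# knows exactly which databases hold each friend's recent updates.
friends = [101, 202, 303]
shards_to_query = {shard_for(uid) for uid in friends}
print(f"querying {len(shards_to_query)} shard(s) for {len(friends)} friends")
```

In practice the hard part is fanning those per-shard queries out in parallel and merging the results within the sub-second budget Schroepfer mentions.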

TR: What happens when you add new features to the site?

MS: Adding or changing a feature can pretty dramatically affect the behavior of the user, which has pretty dramatic implications on the system architecture. I’ll give a very simple example. We added the “Like” feature in February of this year. It’s a single-button thumbs up so the user can say, “I like this thing.” There was a long debate internally about whether the “Like” feature was going to cannibalize commenting. It turned out to be additive; the commenting rate stayed the same and “Like” became one of the most common actions in the system.

This sounds really trivial, but one of the challenges of building complex, scalable systems is always that [it’s easier to retrieve data from a database than to store it there]. Every time I click on that “Like” button, we have to record that somewhere persistently. If [we built the system assuming that we’d be mostly retrieving data], we just blew that assumption by changing the features of the product. I think we try pretty hard to not be too set on any of those assumptions and be ready to revisit them as we change the core product. That’s pretty critical.

TR: And how about hooking these new features into the existing architecture?

MS: I think one of the most interesting things is that we can turn a feature on. Going from zero users to 300 million users in an afternoon for a brand-new feature is pretty crazy. And we can do that because, generally speaking, we share all of the infrastructure. You can turn it on and have it go from 1 percent adoption to 100 percent adoption in a day without much or any perceived downtime.

TR: But you don’t just have a problem with change and complexity–there’s also the issue of storage. Facebook serves tons of photos. Was that system always built to scale?

MS: Now especially–with camera phones and direct integration via [smartphone applications]–there’s just a tremendous wealth of photos being uploaded and shared on the site. We built the first version of our photo storage using off-the-shelf network-attached storage devices with Web servers in front of them. That was functional but not functional enough, and it was also expensive. We did some tuning on that system to improve the performance and got it five or six times faster than the original version. Then we went and built our own storage system called Haystack that’s completely built on top of commodity hardware. It’s all SATA drives and an Intel box with a custom stack on top of it that allows us to store and then serve the photos from the storage tier. That’s significantly faster than the off-the-shelf solutions and also significantly cheaper. We’ve invested a lot of energy in storing photos because the scale is just astounding.
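The interview gives no implementation details, but the general pattern behind a store like Haystack can be sketched: append every photo to one large file and keep a small in-memory index of offsets, so a read costs one seek instead of a filesystem lookup per photo. Everything below is an illustrative assumption, not Facebook's code; `io.BytesIO` stands in for a large on-disk file.

```python
# Illustrative append-only blob store with an in-memory offset index.
# This is the generic design pattern, not Haystack's actual implementation.

import io

class AppendOnlyPhotoStore:
    def __init__(self):
        self._log = io.BytesIO()   # stands in for one large file on disk
        self._index = {}           # photo_id -> (offset, length), kept in memory

    def put(self, photo_id, data: bytes):
        offset = self._log.seek(0, io.SEEK_END)  # always append at the end
        self._log.write(data)
        self._index[photo_id] = (offset, len(data))

    def get(self, photo_id) -> bytes:
        offset, length = self._index[photo_id]   # lookup touches no disk
        self._log.seek(offset)
        return self._log.read(length)

store = AppendOnlyPhotoStore()
store.put("p1", b"\x89PNG...")
store.put("p2", b"\xff\xd8JPEG...")
assert store.get("p1") == b"\x89PNG..."
```

Appending sequentially and serving reads with a single seek is what makes commodity SATA drives fast enough for this workload.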

TR: Do you always know that you’re going to be able to pull off the changes you try to make to the architecture?

MS: There’s been a couple cases where we’ve taken on a project where we weren’t actually sure we could do it–there’s one I can’t talk about because we’ll announce it later in the year. There are cases where we’re going to try to do something that lots of other people have tried before, but we think we can do it better. I think the courage and the willingness to make the investment are actually the most critical parts of this, because without that, all the great planning in the world isn’t going to get you there.