SingularityHub.com, by Peter Murray, April 10, 2012  —  Begun in 2008, the “1000 Genomes Project” aims to sequence 1000 genomes and gain a deeper understanding of what genetic variations may put people at risk for disease.

When the Human Genome Project got underway in 1990, it was expected to take 15 years to sequence the more than 3 billion chemical base pairs that spell out our genetic code. In true Moore’s Law tradition, the emergence of faster and more efficient sequencing technologies along the way led to the Project’s early completion in 2003. Today, 22 years after scientists first committed to the audacious goal of sequencing the genome, the next generation of sequencers is setting its sights much higher.

About a thousand times higher.

The 1000 Genomes Project, as its name suggests, is a joint public-private effort to sequence 1000 genomes. Begun in 2008, the Project’s main goal is to create an “extensive catalog of human genetic variation that will support future medical research studies.” The 1000 Genomes Consortium is headed by the NIH’s National Human Genome Research Institute, which in turn is collaborating with research groups in the US, UK, China, and Germany.

That might not sound like much. Thanks in large part to companies like the Silicon Valley start-up Complete Genomics, perhaps as many as 30,000 complete genomes around the world have already been sequenced. What is unique about the 1000 Genomes Project is that its genomes will be made available to the public for free and stored in a place where the world can access the data easily and interact with it.

The original effort to sequence the human genome, while a triumph, is of limited use when it comes to linking genetic sequence to disease. Because it involved DNA from just a small number of individuals (the fifth personal genome, that of Korean researcher Seong-Jin Kim, was completed only in 2008), the data cannot be used to correlate genetic variations with diseases. The 1000 Genomes Consortium hopes that its sample size will be large enough to catalog all genetic variants that occur in at least 1 percent of the population.

Among the 3 billion base pairs contained in the human genome, scientists have already identified more than 1.4 million single nucleotide polymorphisms, or SNPs (pronounced “snips”): single-base variations that differ from person to person. By characterizing which people carry which SNPs, scientists hope to identify the variants that predispose people to diseases such as cancer or heart disease. Smartly, the Consortium is not limiting itself to any particular population, which would bias the catalog toward that population’s genetic variability. The equivalent of 1,000 genomes will actually be assembled from the incomplete sequences of 2,661 people drawn from 26 different “populations” around the world.
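To make that 1 percent target concrete, here is a minimal sketch, in Python, of the kind of bookkeeping it implies: count how often the alternate allele of each SNP appears across the sampled chromosomes and keep only the variants at or above the threshold. The genotype coding, SNP identifiers, and data structures are invented for illustration; this is not the Consortium’s actual pipeline.

```python
# Illustrative sketch: cataloging SNPs that occur in at least 1% of sampled
# chromosomes. Genotypes are coded 0 (reference), 1 (heterozygous), 2
# (homozygous alternate); each diploid individual contributes two chromosomes.

def allele_frequency(genotypes):
    """Fraction of chromosomes carrying the alternate allele."""
    alt_alleles = sum(genotypes)            # each het adds 1, each hom-alt adds 2
    total_chromosomes = 2 * len(genotypes)
    return alt_alleles / total_chromosomes

def common_variants(snp_table, threshold=0.01):
    """Keep SNPs whose alternate-allele frequency is at least `threshold`."""
    catalog = {}
    for snp_id, genotypes in snp_table.items():
        freq = allele_frequency(genotypes)
        if freq >= threshold:
            catalog[snp_id] = freq
    return catalog

# Toy data: three made-up SNPs genotyped in five hypothetical individuals.
snp_table = {
    "rs0000001": [0, 1, 0, 0, 0],   # 1 alt allele / 10 chromosomes = 10%
    "rs0000002": [0, 0, 0, 0, 0],   # absent in this sample
    "rs0000003": [2, 1, 1, 0, 0],   # 4 / 10 = 40%
}

print(common_variants(snp_table))   # {'rs0000001': 0.1, 'rs0000003': 0.4}
```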

Understanding which single nucleotide polymorphisms increase risk for disease will not only help treat the disease, but may also contribute to a cure.

Just as advances in sequencing technologies throughout the ‘90s galvanized the Human Genome Project, advances in the last decade have put 1000 genomes within reach. So-called “next-gen” sequencing platforms reduced the cost of DNA sequencing by more than two orders of magnitude in just a three-year span. The lower cost meant that individual labs could get in on the sequencing act and contribute to the kind of large-scale sequencing that had previously been the domain of major genome centers. And not only was more data being generated, but techniques to verify the quality of the sequences also improved significantly.

Sequencing technology will undoubtedly continue to improve, and the next “next-gen” sequencing platforms will allow us to sequence even faster and more cheaply. But the current swell of DNA data has put pressure on another technology to keep pace.

The amount of data generated by DNA sequencing is prodigious. The Project has already amassed over 200 terabytes of data, equivalent to 30,000 standard DVDs or 16 million file cabinets filled with text. According to the NIH, it is the largest set of data on human genetic variation. Undaunted by a few hundred terabytes, Amazon announced last week that the 1000 Genomes Project data is now stored on its Amazon Web Services cloud and is publicly available. The cloud currently contains sequence data from about 1,700 people; sequencing the remaining 900 or so samples is expected to be completed by the end of the year.

Information on how to access the data is available from the 1000 Genomes Project.
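For readers who want to explore the data programmatically, here is a hedged sketch of one way in, assuming the dataset is exposed through a public Amazon S3 bucket named 1000genomes (the bucket name and key prefix are assumptions to verify against the Project’s access instructions). Anonymous requests are enough for a public bucket.

```python
# Sketch: listing a slice of the public 1000 Genomes data on Amazon S3.
# Assumes the data lives in a public bucket named "1000genomes"; verify the
# bucket name and key layout against the Project's own access instructions.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned (anonymous) requests are fine for a public bucket.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

response = s3.list_objects_v2(Bucket="1000genomes", Prefix="release/", MaxKeys=20)
for obj in response.get("Contents", []):
    print(f'{obj["Size"]:>12}  {obj["Key"]}')

# Once you know a file's key, download it locally (key shown is hypothetical):
# s3.download_file("1000genomes", "release/.../variants.vcf.gz", "variants.vcf.gz")
```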

As if to answer the call for improved data-handling tools, the Obama Administration last week launched its “Big Data Research and Development Initiative”, which spreads $200 million across six federal science agencies to fund R&D of technologies that “access, store, visualize, and analyze” enormous sets of data. The 1000 Genomes Project is part of the White House initiative.

The National Human Genome Research Institute, rightly so, calls the Human Genome Project “one of the great feats of human exploration in history – an inward voyage of discovery rather than an outward exploration of the planet or the cosmos.” For the first time we were able to map our entire genome from end to end. Our estimate of the total number of genes was whittled down to between 20,000 and 25,000, we gained a better understanding of our relatedness to other species, and we discovered gene mutations associated with breast cancer, muscle disease, deafness, and other illnesses. Who knows what the 1000 Genomes Project has yet to reveal about our DNA and ourselves. We’ve painted a digital portrait of our DNA; now we begin to add the finer strokes.

image credits: National Geographic and DNA Sequencing Service

PopSci.com, April 10, 2012  —  Research libraries are facing an unexpected challenge: too many books. Despite digitization, bound collections continue to grow. Some libraries house their stacks offsite, which can create multi-day delays between request and retrieval. Last June, the Mansueto Library at the University of Chicago, which accumulates about 150,000 books every year, introduced a system of robotic stacks capable of holding 3.5 million volumes in one seventh the space required by conventional stacks. The trick: librarians sort books by size rather than by the Dewey decimal system. Engineers from Dematic, a firm that builds automated parts- and storage-retrieval systems for Boeing, Ford and IBM, designed a five-story underground storage area managed by five robotic cranes. Dematic has built 17 automated library systems worldwide, but the University of Chicago’s is the most complex. The company has three more libraries under construction.

The basic unit of the system is the bin—the storage area contains 24,000 of them stacked on twelve 50-foot-high metal racks. Most bins are 18 inches by two feet by four feet, subdivided into several compartments, and hold about 100 average-size books. Larger items, such as manuscripts and atlases, are stored on two double-wide rows of racks that face the center aisle. The book vault is kept at optimal conditions for paper preservation: 60°F and 30 percent relative humidity.

1. REQUEST

When a library user requests a book through the online card catalog, the catalog shares the request with the Dematic system, which pulls up a book’s bin and compartment information along with the bin’s current location on the racks.
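A rough sketch of what that hand-off might look like, with an invented index: the catalog resolves a bar code to a bin and compartment, and a second lookup reports where that bin currently sits on the racks. None of these structures come from Dematic; they are illustrative only.

```python
# Illustrative sketch of step 1: mapping a requested book to its bin,
# compartment, and current rack location. The index structures, identifiers,
# and bar codes are assumptions, not the actual Dematic interface.

# bar code -> (bin id, compartment)
book_index = {
    "39076005437281": ("BIN-000123", "C"),
}

# bin id -> (rack, column, level) on the storage racks
bin_locations = {
    "BIN-000123": (3, 41, 9),
}

def resolve_request(bar_code):
    """Return everything a crane needs to fetch the right bin."""
    bin_id, compartment = book_index[bar_code]
    rack, column, level = bin_locations[bin_id]
    return {
        "bin_id": bin_id,
        "compartment": compartment,
        "rack": rack,
        "column": column,
        "level": level,
    }

print(resolve_request("39076005437281"))
```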

2. RETRIEVE

In the book vault, four robotic cranes serve two rows of bins apiece and one crane serves the two double-wide rows. All cranes traverse the length of the building on rails built into the floor. A programmable logic controller, originally developed to guide automotive assembly lines, coordinates the cranes’ movements and guides them to the appropriate bin. Cranes can move horizontally and vertically at the same time. “It’s like a big matrix,” says Todd Hunter, the head of document management at Dematic. Once the crane reaches the proper bin, it extends two pins that catch metal handles on the container. With the pins engaged, the crane pulls the bin onto its platform. It then transfers the bin to a lift that delivers it through one of five openings in the vault’s ceiling to the circulation desk.
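The “big matrix” remark suggests a simple model, sketched below under the assumption (not stated in the article) that a crane’s travel time is roughly the longer of its horizontal and vertical legs, since both axes move at once. The speeds and positions are invented numbers.

```python
# Illustrative sketch of step 2: because a crane moves horizontally and
# vertically at the same time, its travel time to a bin is roughly the
# longer of the two axis times, not their sum. All numbers are invented.

HORIZONTAL_SPEED_FT_PER_S = 10.0   # assumed aisle speed
VERTICAL_SPEED_FT_PER_S = 5.0      # assumed hoist speed

def travel_time(crane_pos, bin_pos):
    """Estimated seconds for a crane to reach a bin.

    Positions are (feet along the aisle, feet above the floor).
    """
    horizontal = abs(bin_pos[0] - crane_pos[0]) / HORIZONTAL_SPEED_FT_PER_S
    vertical = abs(bin_pos[1] - crane_pos[1]) / VERTICAL_SPEED_FT_PER_S
    return max(horizontal, vertical)   # both axes move simultaneously

# A crane parked at the near end of the aisle, floor level, fetching a bin
# 180 feet down the aisle and 40 feet up the rack:
print(round(travel_time((0.0, 0.0), (180.0, 40.0)), 1), "seconds")
```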

3. DELIVER

When the bin arrives at the circulation desk, the librarians receive an alert on their computer screens identifying the book title requested and the bin compartment in which it is located. A librarian sorts through the compartment to find the book—a process that generally takes 10 to 15 seconds—and scans the bar code, which prompts the system to send a ready-for-pickup e-mail to the customer. The time between request and retrieval is usually about five minutes.

4. RETURN

After a customer returns a book, a librarian requests a bin holding similarly sized titles. The librarian places the book in the proper compartment, scans it, and presses a function key to indicate that the bin should be lowered back into the vault. Sorting the books by size has another advantage over the Dewey decimal system. “Most libraries lose or misshelve 2 to 3 percent of their collection every year,” Hunter says. “With this system, that loss is virtually eliminated.”
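Here is a hedged sketch of the size-sorting idea behind the return step: a returned book is routed to a bin of similarly sized titles rather than to a call-number location. The size classes and bin inventory are invented for illustration.

```python
# Illustrative sketch of step 4: picking a return bin by book height rather
# than by call number. Size classes and the bin inventory are invented.

SIZE_CLASSES = [                 # (class name, max height in inches)
    ("small", 8.0),
    ("medium", 11.0),
    ("large", 15.0),
]

# size class -> bins currently accepting returns of that class
bins_by_size = {
    "small": ["BIN-000987"],
    "medium": ["BIN-000123", "BIN-000456"],
    "large": ["BIN-002041"],
}

def size_class(height_inches):
    """Smallest class whose limit fits the book; oversized items go to 'large'."""
    for name, max_height in SIZE_CLASSES:
        if height_inches <= max_height:
            return name
    return "large"

def pick_return_bin(height_inches):
    """Choose a bin of similarly sized titles for a returned book."""
    return bins_by_size[size_class(height_inches)][0]

print(pick_return_bin(9.5))   # a 9.5-inch book goes to a "medium" bin
```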

MANSUETO LIBRARY, UNIVERSITY OF CHICAGO

Book capacity 3.5 million
Number of book bins 24,000
Height of retrieval cranes 55 feet
Delivery time 5 minutes
Cost $10 million

Story by Kalee Thompson
Illustration by Graham Murdoch

Credit: AP photo
Politico.com, April 10, 2012, by Donovan Slack  —  A GOP trustee on the board overseeing Medicare financing released a report on Tuesday concluding that the health care law will add $340 billion to the deficit, the Washington Post reports.

The report, by conservative policy analyst Charles Blahous, spurred the White House late Monday night to issue a prebuttal of sorts, claiming Blahous is using some form of “new math” and that the law will actually decrease the deficit.

“In another attempt to refight the battles of the past, one former Bush Administration official is wrongly claiming that some of the savings in the Affordable Care Act are ‘double-counted’ and that the law actually increases the deficit. This claim is false,” Jeanne Lambrew, deputy assistant to the president for health policy, wrote in a White House blog post.

The Blahous report argues that savings from the law that flow into a Medicare trust fund must be reserved for benefits and cannot also be counted toward expanding coverage to the uninsured, as current cost estimates of the law assume.

“This isn’t just a persnickety point about the intricacies of budget law,” Blahous told the Post.

The White House maintained that the Congressional Budget Office and the Office of Management and Budget did not double-count the Medicare savings, that is, count them toward both the trust fund and the expansion of insurance coverage.

Paul Krugman responds to the Blahous report featured in WaPo:

Another Bogus Attack on Health Reform

Paul Krugman Blog, April 10, 2012  —  Oh, boy. It turns out that the WaPo featured on its front page a report by Charles Blahous of the (yes, Koch-funded) Mercatus Center — although the Post describes him as a Medicare trustee, giving the impression that this is somehow an official document — claiming that the Affordable Care Act will actually increase the deficit. Jonathan Chait does the honors:

You may wonder what methods Blahous used to obtain a more accurate measure of the bill’s cost. The answer is that he relies on a simple conceptual trick. Medicare Part A has a trust fund. By law, the trust fund can’t spend more than it takes in. So Blahous assumes that, when the trust fund reaches its expiration, it would automatically cut benefits.

The assumption is important because it forms the baseline against which he measures Obama’s health-care law. He’s assuming that Medicare’s deficits will automatically go away. Therefore, the roughly $500 billion in Medicare savings that Obama used to help cover the uninsured is money that Blahous assumes the government wouldn’t have spent anyway. Without the health-care law, in other words, we would have had Medicare cuts but no new spending on the uninsured. Now we have the Medicare cuts and new spending on the uninsured. Therefore, the new spending in the law counts toward increasing the deficit, but the spending cuts don’t count toward reducing it.

So saving Medicare money isn’t a deficit reduction, because Medicare is going to run out of money and cut benefits anyway. Right?

OK, this is crazy. Nobody, and I mean nobody, tries to assess legislation against a baseline that assumes that Medicare will just cut off millions of seniors when the current trust fund is exhausted. And in general, you almost always want to assess legislation against “current policy”, not “current law”; there are lots of things that legally are supposed to happen, but that everyone knows won’t, because new legislation will be passed to maintain popular tax cuts, sustain popular programs, and so on.
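To make the baseline dispute concrete, here is a toy calculation with round, illustrative figures: the roughly $500 billion in Medicare savings comes from the passage quoted above, while the coverage-spending number is a stand-in rather than an official estimate. The same law scores very differently depending on whether you assume those Medicare savings would have happened anyway.

```python
# Toy illustration of the baseline argument. Figures are round numbers for
# exposition only: ~$500B in Medicare savings (cited above) and an assumed
# $500B in new coverage spending. Positive = adds to the deficit.
MEDICARE_SAVINGS = 500        # $ billions, from the passage above
NEW_COVERAGE_SPENDING = 500   # $ billions, illustrative stand-in

# Conventional "current policy" scoring: without the law, Medicare keeps
# paying full benefits, so the savings count against the new spending.
conventional_score = NEW_COVERAGE_SPENDING - MEDICARE_SAVINGS

# Blahous-style baseline: the trust fund would have forced those cuts anyway,
# so the savings are assumed to happen with or without the law and only the
# new spending is left on the ledger.
blahous_score = NEW_COVERAGE_SPENDING - 0

print(f"conventional baseline: {conventional_score:+} $B to the deficit")
print(f"trust-fund-exhaustion baseline: {blahous_score:+} $B to the deficit")
```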

To take the really big example: on current law, the whole of the Bush tax cuts will expire at the end of this year. If that’s your baseline, then plans like the Ryan budget, which not only maintains those tax cuts but adds another $4.6 trillion to the pot, are wildly deficit-increasing — in fact, the Ryan plan would be a huge budget-buster even if hell freezes over and his secret loophole-closers turn out to be real. Somehow, though, I suspect we won’t get a front-page WaPo story about that insight.

So this is basically a sick joke that doesn’t pass the laugh test. Unfortunately, it seems that some news organizations don’t have mandatory laugh-testing.