FierceBiotechIT.com, September 19, 2011, by Ryan McBride  —  A new front is opening up in the ongoing effort to identify bad drug reactions, a major sore spot for drug companies and healthcare providers. Harvard doctors are using supercomputing technology to predict adverse reactions to drugs and hospital readmissions in patients with congestive heart failure, with an eye toward finding new ways to prevent such problems.

Cambridge, MA-based GNS Healthcare is providing the advanced analytics technology for the work, which is being done in collaboration with Dr. David Bates and Harvard-affiliated Brigham and Women’s Hospital, where Bates directs the Center for Patient Safety Research and Practice. GNS’s technology has found an audience in drug development and healthcare because it can extract causal relationships–such as how networks of genes drive diseases–from complex datasets.

In its latest collaboration, the company plans to analyze data from the electronic health records of patients treated in the health system that includes Brigham and Women’s, along with pharmacy data and claims information, to find which factors or combinations of factors contribute to adverse drug reactions and hospitalizations. Since GNS’s system crunches huge amounts of complex data in an unbiased or “hypothesis-free” fashion, doctors are hoping to glean new insights about what plays a role in patients getting sick from the drugs they take.

“GNS Healthcare has developed supercomputer-driven, hypothesis-free technologies to extract actionable insights from large, complex healthcare datasets,” Bates said in a statement. “The hypothesis-free approach represents an exciting way to identify non-obvious combinations of conditions, drugs and other factors that lead to adverse events, and reveal what activities can mitigate them.”

In the name of pharmacovigilance, governments and researchers have been employing new IT-enabled methods such as reporting systems and data-mining experiments to provide early identification of, or rapid responses to, adverse drug events. What makes the GNS-Brigham effort stand out, however, is the use of the firm’s reverse engineering/forward simulation (REFS) technology to home in on the causes of such problems directly from electronic patient data, potentially leading to new ways to stop these unfortunate episodes from happening.

Supercomputers

AFP/Getty Images/File: Hundreds of ethernet cables are connected to rows of laptops for Flashmob 1, the first supercomputer...

The United States has long been rated number one for having the fastest computer ever made, and it is home to more than half of the top 500 supercomputers. China now claims to have taken over that number-one spot!

Sustaining a computing speed of 2,507 trillion calculations per second makes it the fastest computer known. This high-speed supercomputer, Tianhe-1, is named after the Milky Way.

According to the New York Times, it is 1.4 times faster than what had been the world’s fastest supercomputer, which is housed at a US national laboratory in Tennessee.
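
As a quick check on those figures, the arithmetic below derives the implied speed of the previous record holder from the numbers quoted here; the result is an inference from the stated 1.4x ratio, not a figure reported in the article.

```python
# Rough arithmetic from the figures quoted above (illustrative only).
tianhe_1 = 2507e12        # Tianhe-1: 2,507 trillion calculations per second
speedup_claimed = 1.4     # reported ratio over the previous number-one system

previous_top = tianhe_1 / speedup_claimed
print(f"Tianhe-1: {tianhe_1 / 1e15:.3f} petaflops")                  # ~2.507 PF
print(f"Implied previous #1: {previous_top / 1e15:.2f} petaflops")   # ~1.79 PF
```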

Built mostly with chips designed by United States companies, Tianhe-1 does its high-speed work at the National Center for Supercomputing in the northern port city of Tianjin.

Trials of the advanced computer have begun at the Tianjin Meteorological Bureau and the National Offshore Oil Corporation data centre.

Liu Guangming, director of the supercomputing centre, told state news agency Xinhua that the computer can also serve the animation industry and help in bio-medical research.

Japan is also working on a supercomputer, called the “K computer,” in hopes of claiming the number-one spot.

China has submitted the new technical data on Tianhe-1 to the world TOP500 list, which will be released in November.

 

Huffington Post

Interestingly, “rare earth” minerals are needed to build such computers, and the minerals essential to making such a machine come largely from China. China recently boasted that it is considering restricting its rare-earth exports to Europe and the United States, which would make it extremely difficult, if not impossible, to produce solar panels, computer chips and other valuable technologies here in the United States. Most of the needed minerals can also be found in the US, although largely in our national parks. It is estimated that these US deposits are worth over 3.5 trillion dollars, and there have even been discussions about developing our national parks, and even tearing down monuments, to get at these precious minerals.

China Has the World’s Fastest Supercomputer

 

Supercomputer Tianhe-1A

Larger Than Life, Supercomputers can be Tagged as Fast and Furious

 

Photo from newsfactor.com

Almost all of us are now familiar with personal computers; in fact, we and even our kids use them every day. Computers in most parts of the world are already woven into our way of life. We use them at work and at home to perform different tasks such as building spreadsheets, preparing documents and presentations, desktop publishing and more. With computers we can listen to music, watch movies, download everything from songs to data for our research, play games and much more.

Computers are famously versatile and prolific, but there is something far more powerful: the supercomputer, which offers finesse and pizzazz in the form of blazing calculating speed and the capacity to store huge amounts of data. Elusive and rare, supercomputers are many notches above any personal computer available in the world. They are faster and more sophisticated, packed with enormous features that let them make calculations several hundred times better than any current personal computer in our homes; in raw speed, a supercomputer can be 100,000 times faster than any other PC. Supercomputers are also very expensive and require ample space to house them. One of the most powerful supercomputers in the world, IBM’s Blue Gene/L (which placed a very strong second in the TOP500 list), has run at 136 trillion operations per second and, after upgrades, at 478.2 teraflops, a world record. It is expected to undergo further major upgrades that will enable it to perform twice as fast as its already very fast predecessor.

 

Teraflops computers. Photo from flickr.com

The Blue Gene/L supercomputer getting wired. Photo from: www.flickr.com/photos/llnl/2841635792/

The norm used to measure the speed of a PC is the megahertz (a unit of frequency, the hertz, named in honor of the prolific scientist Heinrich Hertz, is one cycle per second, or one complete sine wave in electronics). For supercomputers, the term “petaflop” is used to name the unit of speed (still mostly aspirational, so to speak): supercomputers operate in the teraflop range, and a petaflop equals one thousand teraflops. A supercomputer can hold the output of every Hollywood film ever produced and even all the books in the US Library of Congress, making it a true powerhouse, especially in its capability to store huge amounts of data.
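
To make the units above concrete, here is a minimal sketch, in Python and purely for illustration, of the flops hierarchy and how long a given workload would take at teraflop versus petaflop speeds; the workload size is made up for the example.

```python
# Unit ladder for floating-point performance (operations per second).
UNITS = {
    "megaflops": 1e6,
    "gigaflops": 1e9,
    "teraflops": 1e12,
    "petaflops": 1e15,   # 1 petaflop = 1,000 teraflops
}

def time_to_finish(total_operations: float, flops: float) -> float:
    """Seconds needed to perform `total_operations` at a rate of `flops`."""
    return total_operations / flops

# Hypothetical workload: 10^18 floating-point operations (illustrative number).
work = 1e18
print(time_to_finish(work, 1 * UNITS["teraflops"]))   # ~1,000,000 s (about 11.6 days)
print(time_to_finish(work, 1 * UNITS["petaflops"]))   # ~1,000 s (under 17 minutes)
```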

The Japanese unveiled the Earth Simulator in 2002; it is tapped to make weather predictions, track sea temperatures and rainfall globally, and anticipate natural disasters in the years, or even centuries, ahead.

Supercomputers are used by commercial drug manufacturers to determine the long-term effects of their products on humans, and they may also open the possibility of discovering new treatments for various ailments. In the tire-making industry, Michelin now creates new and better tires with the help of supercomputers. Previously, engineers relied on physical samples, hiring drivers to test them for a couple of months or more and refining the design after a long series of research and tests; it would take Michelin several years to develop a tire. Now things are completely different: thanks to the emergence of supercomputers, a prototype of a sturdy, well-designed tire can be produced in a matter of a few weeks. Supercomputers can simulate crashes and offer flexibility in exploring different designs, a real cutting edge in the competition with other tire makers.

Versatile and lightning quick, a supercomputer can cost as much as one billion dollars to build. Beyond the expense, a machine of this magnitude takes a great deal of time and effort to construct, and there are other factors to consider: the electricity to keep it running without letup, and the cooling system to keep it performing at its best, since supercomputers employ layers of processors that are sensitive to heat and can be damaged almost instantly when exposed to excessive temperatures. But it is worth all the gargantuan effort, because at the end of the day these superb machines help humanity in a great many ways.

HubPages.com, September 19, 2011  —  Watson is the brainchild of IBM, in conjunction with many faculty and students from Carnegie Mellon University, the University of Massachusetts, MIT, the University of Texas, the University of Trento, the Rensselaer Polytechnic Institute, and the University of Southern California/Information Sciences Institute. Watson is an artificial intelligence program, named for the founder of IBM, and designed to use natural language to communicate with humans.

But Watson isn’t just a “computer program”; he is what artificial intelligence textbooks call an intelligent agent. According to IBM, Watson is “an application of advanced natural language processing, information retrieval, knowledge representation, reasoning, and machine learning”. Watson perceives his environment, calculates odds, acts in a way that maximizes his success, and learns from his mistakes. The latter is arguably the most exciting facet of Watson’s intelligence. The field of artificial intelligence, or AI, has been around for over 50 years, but one of its central problems has been getting a machine to learn and reason in a less linear fashion: in other words, to understand the subtleties of human language and respond intelligently. Up until now this hasn’t been possible, but because of Watson’s ability to “learn” from his mistakes, he can communicate with humans, something that, in the past, has been the purview only of myth and science fiction.

Watson was fed millions of documents, including encyclopedias, reference materials, dictionaries, the Bible, religious doctrines, books, and information on movies, to name just a few, and Watson used these texts to build his knowledge base. He was given examples from every conceivable area of the human condition in an attempt to help him “learn” to understand interactions between humans and speak their language. When posed a question, Watson uses thousands of algorithms (step-by-step sets of well-defined instructions) to understand the question and seek the best answer. He does this by looking for patterns in a stream of input (unsupervised learning) and deciding what category something belongs in after seeing a number of examples. For example, if you asked Watson “what are beef hot dogs made of?”, Watson would search his extensive database and come up with a short list of answers; he would then calculate the odds of each answer being correct, and the answer with the best odds is the one Watson gives. If he is wrong, he analyzes both the wrong answer and the right answer so that he can give the correct answer the next time. Watson applies this same process to every problem or question he is faced with, and in this way Watson “learns”.
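
The candidate-scoring idea described above can be sketched in a few lines. This is not IBM’s actual DeepQA code, just a toy illustration of generating candidate answers, scoring each one with several weak pieces of evidence, and picking the answer with the best odds; every function, scorer, and weight here is hypothetical.

```python
from typing import Dict, List

def score_candidates(question: str,
                     candidates: List[str],
                     evidence_scorers: List) -> Dict[str, float]:
    """Combine many weak evidence scores into one confidence per candidate.

    Toy stand-in for the 'thousands of algorithms' described above:
    each scorer returns a value in [0, 1] for a (question, candidate) pair.
    """
    scores = {}
    for cand in candidates:
        votes = [scorer(question, cand) for scorer in evidence_scorers]
        scores[cand] = sum(votes) / len(votes)   # simple average of the evidence
    return scores

def best_answer(scores: Dict[str, float]) -> str:
    """Return the candidate with the highest confidence."""
    return max(scores, key=scores.get)

# Hypothetical usage with two dummy evidence scorers.
scorers = [
    lambda q, c: 1.0 if "beef" in c else 0.2,        # keyword overlap (toy)
    lambda q, c: 0.9 if c.endswith("meat") else 0.4  # answer-type match (toy)
]
candidates = ["beef and pork meat", "plastic", "soy protein"]
confidences = score_candidates("What are beef hot dogs made of?", candidates, scorers)
print(best_answer(confidences), confidences)
```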

According to IBM, Watson is so powerful, with his 2,880 processing cores, that he can do in three seconds what would take a home computer two hours, even if that computer had Watson’s language capabilities.
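
That claim implies a concrete speed-up factor; the quick calculation below simply converts the quoted “two hours versus three seconds” into a ratio.

```python
home_computer_seconds = 2 * 60 * 60   # two hours, as quoted
watson_seconds = 3                    # three seconds, as quoted

speedup = home_computer_seconds / watson_seconds
print(f"Implied speed-up: {speedup:.0f}x")   # 2400x
```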

The possible applications for Watson are innumerable, especially in the health care field, and for IBM the sky’s the limit. According to David Ferrucci, principal investigator of Watson technology at IBM, the technology used to create Watson is advancing exponentially, and it’s not out of the realm of possibility that Watson’s processing power will be in devices that fit into the palms of our hands within the next decade. For now, though, Watson’s 10 racks of computer circuits are definitely not what you would call a mobile app.

Who knows what the future will hold, but for now Watson will have to be content with showing off his abilities on national television. In February 2011, Watson competed on the popular quiz show Jeopardy against Ken Jennings, the winner of 74 straight games, and Brad Rutter, the all-time biggest money winner. Watson beat them both. Now IBM has created an even more intelligent robot that can work as a PA (physician’s assistant).

Quantum Computers

Source: Techknowbutler, computer repair services in Frederick, Md.

For you sci-fi fans, it might sound like a giant computer that is kept on a mother ship somewhere deep in space. As for the rest of us, we have no idea!  A quantum computer is any device that exploits quantum mechanical phenomena to run algorithms.

The data in a quantum computer is stored in what are called qubits, instead of bits. In an ordinary computer, data is read from tiny magnetic regions on a hard disk or electrical states in a memory chip; in a quantum computer, the data is represented by the quantum properties of a given molecule or set of molecules.

Quantum mechanics is fundamentally uncertain in nature. When an algorithm runs, the information is carried by the molecule itself; the molecule is then measured to read out the end result, and because of that inherent uncertainty, the result of any single run comes out biased.

Because of this bias in the results, the algorithm has to be run multiple times, and with each run the average outcome is weighed to approach the correct answer the computer is looking for. What makes quantum computers so important is that they possess capabilities regular computers lack, such as quick factorization of large numbers (an explicit threat to conventional cryptographic techniques), more accurate simulation of quantum phenomena, and very efficient database search.
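
Here is a minimal simulation of the “run it many times and combine the outcomes” idea described above, assuming a single qubit whose measurement behaves like a biased coin flip; the probability and trial count are made up for illustration.

```python
import random
from collections import Counter

def measure_qubit(p_one: float) -> int:
    """Simulate measuring a qubit that yields 1 with probability p_one."""
    return 1 if random.random() < p_one else 0

def run_algorithm(trials: int, p_one: float = 0.8) -> int:
    """Repeat the 'algorithm' many times and keep the most frequent outcome."""
    outcomes = Counter(measure_qubit(p_one) for _ in range(trials))
    return outcomes.most_common(1)[0][0]

# A single run is unreliable; many repeated runs converge on the likeliest answer.
print("single shot:", measure_qubit(0.8))
print("majority of 1001 shots:", run_algorithm(1001))
```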

Not only that, but quantum computers offer a speed-up in search times compared with regular computers, a huge advantage when tackling larger problems. It is not yet possible to conceive of all the applications of mature quantum computers. To date, the largest number of qubits ever contained within one quantum computing system is 7.
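
The “very efficient database search” mentioned above is usually illustrated with Grover’s quadratic speed-up: an unstructured search that takes on the order of N queries classically needs only on the order of the square root of N quantum queries. The snippet below simply tabulates that scaling; it is a back-of-the-envelope illustration, not a simulation of the algorithm.

```python
import math

# Order-of-magnitude query counts for unstructured search over N items.
for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n                      # ~N lookups in the worst case
    quantum = round(math.sqrt(n))      # ~sqrt(N) Grover iterations
    print(f"N={n:>13,}  classical~{classical:>13,}  quantum~{quantum:>9,}")
```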

While there is no definitive answer to what quantum computers will be able to accomplish in the future, there are millions of dollars in funding to help figure it out. Before long the future may be measured in qubits, and bits will be ancient history.

NASA Supercomputers

 

A closer look at Pleiades and other supercomputers on show at the space agency’s Ames Research Center

By Daniel Terdiman   —   NASA’s advanced supercomputing facility, at its Ames Research Center in California, houses Pleiades, which at a current measurement of 973 teraflops – or 973 trillion floating point operations per second – is the sixth most powerful computer in the world.

Pleiades is used by NASA personnel across the agency for research in earth and space sciences, and for conducting giant simulations. The machine is almost fully subscribed – meaning it’s in use 24 hours per day, seven days per week.

Inside the computing centre, the agency maintains rack after rack of the SGI machines that make up Pleiades, most of which have 512 cores, or about six teraflops. But recently, the centre added 32 new racks with 768 cores – some of which are seen here.

Things move fast in the world of supercomputers. When Pleiades debuted in November 2008, it was measured at 487 teraflops and was the third-most powerful computer. Now, almost a year and a half later, it has dropped to sixth place on the list but has doubled its power.
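
The per-rack and total figures quoted above imply some simple per-core and growth numbers; the arithmetic below is derived only from the figures in this article and rounds aggressively.

```python
# Figures quoted above for Pleiades (all approximate).
rack_cores_old, rack_teraflops_old = 512, 6.0
total_now_tf, total_2008_tf = 973.0, 487.0

per_core_gflops = rack_teraflops_old * 1e3 / rack_cores_old
growth = total_now_tf / total_2008_tf

print(f"~{per_core_gflops:.1f} gigaflops per core in the older racks")  # ~11.7
print(f"Performance grew ~{growth:.1f}x since November 2008")           # ~2.0x
```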

Photo credit: Daniel Terdiman/CNET

Cielo Supercomputer, Los Alamos National Laboratory

 

Named Cielo (Spanish for “sky”), this petascale (more than one quadrillion floating point operations per second) supercomputer will help NNSA ensure the safety, security, and effectiveness of the nuclear stockpile while maintaining the moratorium on testing.

Cielo is the next-generation capability-class platform for the Laboratory’s Advanced Simulation and Computing Program. Cielo will enable scientists to increase their understanding of complex physics, as well as improve confidence in the predictive capability for stockpile stewardship.

The supercomputer is housed at the Nicholas C. Metropolis Center for Modeling and Simulation, where both Los Alamos and Sandia National Laboratories share day-to-day operations.

In its primary role, Cielo will run the largest and most demanding workloads involving modeling and simulation. Cielo will be primarily utilized to perform milestone weapons calculations.

Following the Las Conchas Fire, with the Lab reopening July 6, 2011, Cielo won’t be restarted until technical experts make sure that the power is clean and the ventilation and data storage systems are operating correctly.

Photo by Richard Robinson.

Cray Supercomputer Wires

Cray XMP/216 – A legacy supercomputer – Alabama was the first state in the nation to create a supercomputer network for the benefit of its citizens. Very forward thinking then, as it is now.

From 1987 to 1993, this machine was in ’round-the-clock use, supporting scientific, educational, governmental, engineering and medical research throughout Alabama using high-speed telecom lines. In December 1993, it was replaced with a Cray C94/264, which is three times more powerful.

Numerous SGI, Dell and other machines with several terabytes of memory, along with many multi-core, multi-processor systems, are now in use in the facility, which processes data for the Alabama Criminal Justice Information System, the Alabama Department of Finance Information Services Division and numerous other state agencies.

Located in Huntsville

On May 24, 2011, Cray announced the Cray XK6 hybrid supercomputer. The Cray XK6 system, capable of scaling to 500,000 processors and 50 petaflops of peak performance, combines Cray’s Gemini interconnect, AMD’s multi-core scalar processors, and NVIDIA’s many-core GPU processors.
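
Taking the quoted scaling limits at face value, the sketch below divides the 50-petaflop peak by the 500,000-processor count; this is naive average arithmetic that ignores how the work is split between the CPUs and GPUs in the hybrid design.

```python
peak_flops = 50e15          # 50 petaflops peak performance, as quoted
processors = 500_000        # maximum processor count, as quoted

per_processor = peak_flops / processors
print(f"~{per_processor / 1e9:.0f} gigaflops per processor on average")  # ~100
```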

Supercomputer Simulations

 

Earthquake Simulation Studies

Supercomputer simulations help scientists develop and validate predictive models of earthquake processes and then use these models to better understand seismic hazard. Knowledge gained from these simulations may help building designers and emergency planners worldwide.

In this image: Peak horizontal ground velocities derived from the M8 simulation reveal regions at risk during a magnitude-8 quake. Image by Geoffrey Ely, University of Southern California

For more information, visit the Argonne Leadership Computing Facility (ALCF) website.

Image courtesy of Argonne National Laboratory.

Supercomputing in Nanoseconds

 

Elucidation of Stress Corrosion Cracking Mechanisms

Petascale quantum mechanical-molecular dynamics simulations on Argonne’s Blue Gene/P supercomputer encompass large spatiotemporal scales (multibillion atoms for nanoseconds and multimillion atoms for microseconds). They are improving our understanding of atomistic mechanisms of stress corrosion cracking of nickel-based alloys and silica glass—essential for advanced nuclear reactors and nuclear-waste management.

In this image: Fracture simulations for nanocrystalline nickel without and with amorphous sulfide grain-boundary phases reveal a transition from ductile, transgranular tearing to brittle, intergranular cleavage. Image courtesy Hsiu-Pin Chen of USC et al., Physical Review Letters 104, 155502.

For more information, visit the Argonne Leadership Computing Facility (ALCF) website.

Image courtesy of Argonne National Laboratory.

NERSC Supercomputer


This is the first phase of the Department of Energy’s National Energy Research Scientific Computing Center’s (NERSC) next-generation supercomputer, which was delivered to Lawrence Berkeley National Laboratory’s Oakland Scientific Facility this month. NERSC awarded the contract for this system to Cray Inc. in August 2009.

The system that was delivered is a Cray XT5™ massively parallel processor supercomputer, which will be upgraded to a future-generation Cray supercomputer. When completed, the new system will deliver a peak performance of more than one petaflops, equivalent to more than one quadrillion calculations per second.

 

This machine is named Hopper, after Rear Admiral Grace Murray Hopper, an American computer scientist and United States Navy officer.

NERSC Center currently serves thousands of scientists at national laboratories and universities across the country researching problems in climate modeling, computational biology, environmental sciences, combustion, materials science, chemistry, geosciences, fusion energy, astrophysics, and other disciplines. NERSC is managed by Lawrence Berkeley National Laboratory under contract with DOE.

For more information about the system and the contract, please visit: www.lbl.gov/cs/Archive/news080509.html

For more information about computing sciences at Berkeley Lab, please visit: www.lbl.gov/cs

For more information about Science at NERSC, please visit: www.nersc.gov/projects

credit: Lawrence Berkeley Nat’l Lab – Roy Kaltschmidt, photographer


 

NERSC Supercomputer, delivered and put together

New York IBM Blue Gene/L Supercomputer

New York Blue Supercomputer

New York Blue/L is an 18-rack IBM Blue Gene/L massively parallel supercomputer located at Brookhaven National Laboratory (BNL) in Upton, Long Island, New York. It is the centerpiece of the New York Center for Computational Sciences (NYCCS), a cooperative effort between BNL and Stony Brook University that will also involve universities throughout the state of New York. Each of the 18 racks consists of 1,024 compute nodes (a total of 18,432 nodes), with each node containing two 700 MHz PowerPC 440 core processors and 1 GB of memory (a total of 36,864 processors and 18.4 TB of memory).
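
The node, processor, and memory totals quoted above follow directly from the per-rack figures; this snippet just re-derives them.

```python
racks = 18
nodes_per_rack = 1024
processors_per_node = 2      # two 700 MHz PowerPC 440 cores per node
memory_per_node_gb = 1

nodes = racks * nodes_per_rack                   # 18,432 nodes
processors = nodes * processors_per_node         # 36,864 processors
memory_tb = nodes * memory_per_node_gb / 1000    # ~18.4 TB (decimal terabytes)

print(nodes, processors, round(memory_tb, 1))
```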

QCDOC Supercomputer


Known as QCDOC machines, for quantum chromodynamics (QCD) on a chip, these supercomputers perform the complex calculations of the theory that describes the interactions of quarks and gluons and the force that holds atomic nuclei together.