Cell block: Up to 50 percent of patients with metastatic melanoma have a mutation in the BRAF protein (its structure is shown here) that renders the tumor susceptible to certain drugs. Scientists have now discovered how some tumor cells evolve resistance to those drugs. Credit: Wikipedia user Emw / Creative Commons
Tumors can evolve resistance to powerful new cancer drugs. But scientists are now learning why, giving hints at how to stop it
MIT Technology Review, November/December 2010, by Emily Singer — Last August, oncologist Keith Flaherty and colleagues at Massachusetts General Hospital published a study of a targeted drug that gave hope to patients with metastatic melanoma. But the good news was tempered by a serious caveat: in most patients, the drug eventually stopped working, anywhere from months to years after treatment began.
This issue of drug resistance has plagued the new generation of so-called targeted cancer therapies, designed to block the effects of genetic mutations that drive the growth of cancer. In two new studies published last week in Nature, researchers from the Dana-Farber Cancer Institute in Boston and the University of California, Los Angeles, uncovered how some melanoma tumors fight back against these drugs. They say the insight will aid in the design of new drugs and drug combinations that will allow targeted therapies to work longer and perhaps even overcome resistance altogether.
“If we can understand and anticipate the full spectrum of ways cancers can get around these drugs, we can come up with formulas for combinations of drugs that could have lasting control,” says Levi Garraway, an oncologist and scientist at Dana-Farber.
In one study, Garraway, Flaherty, and collaborators analyzed the effects of 600 different protein kinases, a class of enzymes, on melanoma tumor cells growing in a dish. They found that over-activity in nine of the kinases made the cells resistant to the type of drug that had been so promising in Flaherty’s melanoma study. One of the enzymes had never previously been implicated in cancer. The researchers confirmed the findings by analyzing tissue samples from melanoma patients whose tumors had evolved resistance to the drug.
It’s not yet clear how common this particular mechanism of drug resistance is. But Flaherty says that, based on the findings, he is very optimistic about targeted therapies. “It’s not chaos that creates resistance, it’s the same rational cell and molecular biology that led to the development of these therapies in the first place,” he says. “We don’t need to invoke some phenomenally complex network biology to figure this out.”
In a related paper in Nature, Roger Lo, a physician and scientist at UCLA’s Jonsson Comprehensive Cancer Center, found changes similar to those in Garraway’s study. Lo agrees that the results will help scientists figure out more effective drug combinations. He likened the approach to those used to eradicate stubborn viruses. “A cocktail would be designed to cut off any possible escape route,” he says. However, “it’s more daunting to cover all grounds for a cancer cell,” because such cells tend to be very “plastic,” or capable of change.
Lo also cautions that researchers have studied relatively few patients, so it’s not yet clear how broadly these findings will apply. (One problem is that tissue samples are hard to come by—researchers need tissue from the same patient both before and after treatment.) The researchers found resistance mechanisms in about 40 percent of the drug-resistant patients they studied, and they are now looking for explanations for the remaining 60 percent.
How scenario planning and forecasting tools can help organizations prepare for the worst—or seize entirely new opportunities.
MIT Technology Review, December 2, 2010, by Peter Schwartz & David Babington — Four years ago, the threat of an avian-flu pandemic catapulted up the agenda of governments, global health agencies, and companies. The outbreak of an earlier virus, which caused a disease called SARS, had illuminated what a fast-spreading global virus could do to travel, commerce, and public well-being. As a shipping company, UPS took the flu warnings seriously. The head of strategy assembled 20 managers from different areas of the company for several workshops that explored how the disease might affect UPS’s ability to serve its customers. The objective was to examine and rehearse responses to various scenarios. Participants came up with five of them, each of which described the possible origins of a pandemic, the consequences, and the contingency plans that UPS might implement.
Luckily, the avian-flu pandemic did not materialize. But in April of 2010, an unknown (and unpronounceable) little volcano in Iceland began spewing tons of ash into the air, disrupting travel across Europe and forcing the UPS air hub in Cologne, Germany, to shut down. UPS recognized that just as in some of the pandemic scenarios, air travel would be impossible in certain regions. And because it understood the consequences, it was able to work backwards, adapting its flu contingency plans to the volcanic eruption. The company rerouted flights from affected European hubs to Istanbul, Turkey, and directed its network of trucks to deliver packages over long distances on the ground. Service was not interrupted.
This incident illustrates the changing nature of the crises that companies—and economies—face in our increasingly complex, interconnected, fast-moving world. UPS was not just an independent actor facing a problem. Rather, it was a critical component of the global commerce infrastructure. Had the volcanic eruption been bigger or more prolonged, or had UPS been unable to respond quickly and effectively, the consequences could have triggered—and amplified—further crises across the shipping and airline industries, the countless businesses depending on them, and ultimately the global economy.
Of course, we’ve faced serious crises before. The 1970s, for example, brought the OPEC embargo and the resulting gas lines; war in the Middle East, Vietnam, and Cambodia; the Pentagon Papers; Watergate; a presidential resignation; stagflation; the near bankruptcy of New York City; and the Iranian revolution and hostage crisis. But today the crises seem to come more frequently, develop more rapidly, and affect more players.
In 2008, the subprime-loan disaster in the United States quickly rippled out from local markets, toppling national investment banks that were packaging the faulty mortgages as complex bonds. As panic spread through the mass media and the Internet, the interconnected nature of the world set off a cascading effect. Much of the damage was contained by swift action on the part of national governments, but we didn’t escape a plunging stock market, unprecedented bailouts that led to potentially crippling national debt, and rising unemployment—effects that have stretched across regions and countries.
This 21st-century phenomenon of unending crisis actually began in the late 1990s, when the Web took off. Thanks to massive increases in computational power and the expansion of the global knowledge economy, the world is now densely and almost instantaneously interconnected. And thanks to ubiquitous communication, we all know about a crisis as soon as it happens, making the local instantly global. Just recall how fast the videocam images of oil gushing from BP’s broken well in the Gulf of Mexico flashed across the world last spring.
In this environment, planning, learning, and reflection are all too often replaced by sheer reaction. Our leaders in business and government rush from crisis to crisis, putting off strategic agendas to deal with every new surprise. Indeed, cascading crises have become the agenda. And this creates significant operational and strategic challenges. Amid such complexity, it’s no wonder that the average tenure of a CEO at a large U.S. company is only six years—and falling.
This month, Business Impact will look at a variety of predictive models and simulation methods that enable business and technology leaders to anticipate surprise and gain an edge. Computer scientists and mathematicians are teaming up with experts in every field to create models of the future that range from tracking how a pandemic might spread to using search data and Twitter feeds to anticipate what consumers will buy.
Some of the models aim to answer big questions in unexpected areas, such as how classroom education could be improved or whether a marriage will survive. Others target multibillion-dollar conundrums facing the world’s biggest industries; experts have devised models that forecast traffic on wireless networks to avoid future meltdowns and anticipate whether space junk will crash into the satellites responsible for global communication. Through it all, it’s important to bring a skeptical eye. Can the future really be predicted at all, given the high degree of uncertainty and complexity in play? And if so, why weren’t we better warned about catastrophes such as the 2008 financial meltdown? Many of us agree that abundant signs of trouble were out there and the tools were working, but we failed to take them seriously enough.
Scenario planning is just one of these tools, though it doesn’t try to predict the future; instead, it presents several plausible alternatives to challenge our assumptions and strategies. Others include data mining, business analytics, crowdsourcing, and neural networks. Used correctly, these tools can help organizations develop the capacity to anticipate crises, recognize and track them as they occur, and prepare contingency plans to deal with them. Ideally, they can also enable companies to seize opportunities before competitors even see them coming.
Peter Schwartz is cofounder and chairman of Global Business Network (GBN), a member of Monitor Group, and author of five books, including The Art of the Long View. David Babington, a Monitor consultant, is the 2010 Futures Scholar at GBN.
from MIT Technology Review
Inside the Fortified, Nuke-Proof Bunker that’s Now Hosting Wikileaks
Images from inside the Bahnhof data center show a world of Cold War-era security repurposed for the cyberwarfare of the 21st century.
Christopher Mims 12/01/2010
If Wikileaks founder Julian Assange is trying to turn himself into a Bond villain, he’s succeeded: the ongoing distributed denial-of-service attack against Wikileaks has forced his minions to move the site to a fortified data center housed in a Cold War-era, nuke-proof bunker carved into bedrock. Really.
The host is called Bahnhof. Considering that the attacks against Wikileaks already forced its original host, PRQ, to boot the site, and that its second host, Amazon.com, bowed to political pressure to do the same, one wonders why the Swedish company would take on the challenge of hosting a site that will probably be under attack for the foreseeable future.
Unless it’s for the PR value: Bahnhof has hosted Wikileaks before. In which case, let the gallery begin.
Flickr user Antony Antony had a chance to take pictures inside the Bahnhof data center despite its usual no-pictures policy. All the images that follow are Creative Commons-licensed by him: