20090731-2

New research has proposed a genetic explanation for the evolution of a bizarre method used by male butterflies to ensure the success of their sperm

The-Scientist.com, July 29, 2009, by Jef Akst  –  The sperm of male butterflies has a strange property. About 90% of it is non-fertile — essentially filler for the females’ sperm storage organs that tricks females into thinking they have all the sperm they need to fertilize their eggs. The males’ ploy reduces the likelihood that their mates will take another suitor, thereby ensuring their own paternity. A study published online today (July 29) in Biology Letters suggests that an intense battle of the sexes drove the evolution of non-fertile sperm.

“This study is an elegant and important advance in the understanding of this fascinating male:female co-evolution,” evolutionary biologist Matthew Gage of University of East Anglia in England, who was not involved in the research, wrote in an email to The Scientist.

Nonfertile sperm — or “kamikaze” sperm, as Gage calls them, because they “protect the male’s fertile sperm from competition” — are less costly to produce than fertile sperm. Previous work has suggested that males evolved them specifically to trigger the stretch receptors in the female sperm storage organs that allow them to monitor the amount of sperm in storage, Gage said. This is to the benefit of the male who deposited it, increasing the likelihood he will father a female’s offspring. But it’s to the detriment of the female, limiting the number of large, nutritious spermatophores she receives as gifts from her mates.

Such a conflict of interest between males and females of a species, an evolutionary predicament known as sexual conflict, often leads to genetic connections between the sexually antagonistic traits — in this case, between non-fertile sperm production and the number of mates a female takes. Such a connection might then facilitate the evolution of one or both of the traits. Previous work showed that males produced varying amounts of non-fertile sperm, and that females’ ability to store it varied as well, but until now, there was no evidence for a genetic tie between these two traits.

By comparing butterfly siblings from 25 different families as well as half siblings that shared a father, evolutionary biologist Nina Wedell of the University of Exeter in England and her colleagues found that males that produced more non-fertile sperm had sisters that mated less frequently. These results showed that the two traits are genetically correlated, “a hallmark of sexual conflict,” Wedell said in an email. Furthermore, “the existence of a genetic correlation between sperm production and storage means that, provided the benefit to one sex is larger than the cost in the other sex, the trait can rapidly be elaborated,” Wedell added. That may explain how non-fertile sperm came to compose such an enormous percentage of the ejaculate.

However, the interpretation of Wedell’s results requires some assumptions about the costs and benefits of non-fertile sperm production and storage that have not yet been confirmed, cautioned evolutionary biologist Darryl Gwynne of the University of Toronto, who was not involved in the work. The nutritious gifts that females receive upon mating are likely a strong incentive for them to mate many times, but there are often costs associated with mating as well. It is therefore unclear how often females should mate to maximize their fitness.

“This paper addresses a really neat potential conflict situation in these butterflies,” Gwynne said, “[but you] need to show [that] by filling her sperm storage organs with these non-fertile sperm and increasing her refractory period, you’re actually impacting her fitness.” In other words, demonstrating that this is a case of sexual conflict requires showing that females incur a cost by storing non-fertile sperm.

In addition to identifying the fitness consequences for females, Wedell and her colleagues want to investigate the variation in male ejaculates. “These butterflies are born with all their sperm,” Wedell explained. “What we would like to know is, how do they decide how much sperm to deliver at each mating?”

“I spend 40 percent of my time away from my patients doing paperwork and getting prior authorizations,” said Jim King, MD, a family physician in Selmer, Tenn. “We need to start taking the barriers that are between me and my patients away.”

 

The U.S. has a catastrophically fragmented system that provides incentives for sick care instead of prevention. The system is in dire need of reform – reform to save lives, to save families and to save money for both patients and the American health care system.

 

It’s time to put our health back where it belongs: out of the grasp of profit-driven insurance companies and back into your hands and your doctor’s. It’s time to stand with more than 450,000 doctors who support health care reform.

 

When our friends at The American Academy of Family Physicians (AAFP) and Herndon Alliance (a nonpartisan coalition of more than 200 health-care provider organizations including the AARP, Mayo Clinic and Families USA) asked for our help, we produced this video featuring the doctors your family relies on for care. They are urging Americans to ‘Heal Health Care Now’.

 

Now it’s your turn to stand with more than 450,000 doctors who support health care reform. Make your voice heard and call Congress to reform health care: (202) 224-3121.

 

On Thursday, July 30th at 12:15 PM ET, Senate Majority Leader Harry Reid and Senators Durbin, Schumer and Murray will hold a press conference on what doctors think about this issue.

20090731-1

The-Scientist.com, July 29, 2009, by Jef Akst  –  Researchers report a step forward in understanding the pathology of Alzheimer’s disease. Two genes that are commonly mutated in the early-onset form of Alzheimer’s may cause the disorder by altering how presynaptic neurons release neurotransmitters, according to a study published this week in Nature.

The mechanism may apply to other neurodegenerative disorders as well, the researchers say.

“This is a new concept that’s interesting to know,” said molecular neurobiologist Ilya Bezprozvanny of the Southwestern Medical Center at Dallas, who was not involved in the work.

 

More than 100 different mutations in two genes coding for the proteins presenilin 1 and 2 are associated with early-onset Alzheimer’s disease, but the exact effects of these mutations on neural function are still unclear. “It’s the first [study] suggesting that presenilins play a presynaptic role,” Bezprozvanny said.

In 2007, molecular geneticist and neuroscientist Jie Shen of Harvard Medical School and her colleagues created knockout mice that lacked both presenilin genes and found memory deficits and neurodegeneration in the brain — two key features of Alzheimer’s disease. In the current study, Shen set out to determine on which side of the synapse presenilins exert their effect by creating two strains of knockouts: one that lacked both presenilin genes only in the presynaptic neurons of a synapse in the hippocampus — a brain region that plays an important role in memory — and another in which the genes were knocked out only postsynaptically.

Measuring neural activity in dissected brain sections from the two strains of knockout mice, the researchers were able to compare the effects of presenilins on pre- and postsynaptic activity. In presynaptic presenilin knockout mice, the researchers found drastically reduced long-term potentiation (LTP) — a physiological measure of memory formation. In postsynaptic presenilin knockouts, however, LTP was normal.

Knocking out presynaptic presenilins also altered other aspects of neuronal function and reduced the probability of neurotransmitter release. “Our earlier work led us to focus on NMDA receptors, which are postsynaptic receptors,” Shen said. “This [work] led us to see the importance of the presynaptic function.”

Neurotransmitter release depends on increases in calcium levels within the neuron. By blocking the release of calcium from the endoplasmic reticulum — an intracellular source of calcium — the researchers mimicked the effects of the presynaptic presenilin knockouts in control mice. This result points to intracellular calcium release as a possible mechanism by which presenilins regulate neuronal function.

“The main take home message is there is a difference in the way neurons process calcium levels in absence of presenilins, and that has an effect on synaptic [function],” said Bezprozvanny. However, he cautioned, it’s not clear how directly the findings can be applied to Alzheimer’s disease. “This is not an Alzheimer’s mouse model. This is a presenilin knockout.”

Researchers studying neurodegenerative diseases have long debated whether knocking out these genes is a good model for Alzheimer’s because the exact role of the many presenilin mutations associated with the disease is unclear. Some argue that these mutations result in a ‘loss of function,’ in which case a knockout model would appropriately represent the changes that occur in Alzheimer’s patients. Others argue that these mutations result in a ‘gain of function,’ or a change that cannot be replicated by knocking out the genes entirely.

If patients with Alzheimer’s disease do indeed have non-functioning presenilin genes, the results of this study may suggest that presynaptic neurotransmitter release is a more general mechanism of neurodegenerative diseases. For example, Shen and her colleagues found a presynaptic effect in a mouse model of Parkinson’s disease. These changes “might be a precursor to neurodegeneration,” Shen said, and might therefore provide new targets for disease therapies.

by Gabe Mirkin MD  –  If you need proof that exercise helps to keep you young, look at the exciting study from King’s College in London, England, reported in the Archives of Internal Medicine (January 28, 2008). The researchers showed that people who exercise regularly have telomeres in the DNA of their white blood cells that are longer than those of couch potatoes. White blood cell telomeres shorten over time and serve as a marker of a person’s biological age.

The ends of the genetic material in cells are capped by protective structures called telomeres. If they weren’t, the exposed ends of the genetic material would stick to anything nearby and the cells would die. However, each time a cell divides to make two cells, a little bit of the telomere is removed. Eventually the telomere is gone, the ends of the genetic material stick together, and the cell can no longer divide, so it dies without replacing itself. The longer the telomeres, the more divisions it takes to use them up, so the cells remain viable longer.

The study compared physical activity, smoking and socioeconomic status in 2,401 sets of twins. Those who were more active had longer leukocyte telomeres than those who were less active. The researchers concluded that “The mean difference in leukocyte telomere length between the most active (who performed an average of 199 minutes of physical activity per week) and the least active (16 minutes of physical activity per week) subjects was 200 nucleotides, which means that the most active subjects had telomeres the same length as sedentary individuals 10 years younger, on average.” – www.DrMirkin.com
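To see what those numbers imply, here is a minimal back-of-the-envelope sketch in Python. The shortening rate is simply inferred from the 200-nucleotide gap and the 10-year equivalence quoted above; the starting length and per-division loss in the second part are placeholders for illustration, not measured values.

    # Rough arithmetic based on the figures quoted above.
    # Assumption: the ~200-nucleotide gap between the most and least active
    # subjects corresponds to roughly 10 years of biological aging, which
    # implies an average shortening rate of about 20 nucleotides per year.

    telomere_gap_nt = 200     # difference reported between most and least active subjects
    equivalent_years = 10     # "same length as sedentary individuals 10 years younger"

    rate_nt_per_year = telomere_gap_nt / equivalent_years
    print(f"Implied leukocyte telomere shortening: ~{rate_nt_per_year:.0f} nucleotides/year")

    # Hypothetical illustration of the division-limit idea described above:
    # each division trims a fixed amount, and when the telomere is gone the
    # cell can no longer divide.  These two figures are placeholders only.
    telomere_length_nt = 8000
    loss_per_division_nt = 50
    divisions = 0
    while telomere_length_nt > 0:
        telomere_length_nt -= loss_per_division_nt
        divisions += 1
    print(f"Divisions before the placeholder telomere is exhausted: {divisions}")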

DrMirkin.com, July 31, 2009, by Gabe Mirkin MD  –  In North America, more than 35 percent of the population becomes diabetic, and most cases of diabetes could be prevented with exercise. A high rise in blood sugar levels causes sugar to stick to the surface of cells. Once there, the sugar can never get off and is eventually converted to sorbitol, which destroys the cell, causing the side effects of diabetes such as heart attacks, strokes, arteriosclerosis, nerve damage and so forth (even in people who have not been diagnosed as diabetic). So anything that prevents frequent high rises in blood sugar helps to prevent cell damage.

This month, a study showed that exercise lowered high blood sugar levels in diabetics far more when done AFTER eating dinner than before eating (Journal of the American Medical Directors Association, July 2009). Muscle contractions drive sugar into cells with little or no insulin. These people were out-of-shape diabetics who walked slowly and for only 20 minutes. Longer and more intense exercise lowers insulin and sugar levels even more and would be even more beneficial.

Another new study shows that you should exercise BEFORE you eat because it lowers blood sugar levels the next morning (Medicine and Science in Sports and Exercise, August 2009). Nine healthy postmenopausal women exercised two hours on a treadmill twice a day. Those who exercised an hour before meals had a much lower rise in blood sugar at 16 hours after eating, compared to those who exercised an hour after their meals.

Humans must use their muscles to stay healthy. Exercising after eating helps to prevent the rise in blood sugar that follows meals, and contracting muscles before eating helps to keep blood sugar levels low the next morning. Of course, many people do not have the time to exercise both before and after meals, but you will benefit from exercising WHENEVER you can, because lowering blood sugar and blood fats helps to prolong life and prevent diseases such as diabetes.  www.DrMirkin.com

Also Helps Cholesterol — But More Than A Sprinkle Required

WebMD.com, by Jeanie Lerche Davis  –  Several years ago a study showed that cinnamon can improve glucose and cholesterol levels in the blood. For people with type 2 diabetes, and those fighting high cholesterol, it’s important information.

Researchers have long speculated that foods, especially spices, could help treat diabetes. In lab studies, cinnamon, cloves, bay leaves, and turmeric have all shown promise in enhancing insulin’s action, writes researcher Alam Khan, PhD, with the NWFP Agricultural University in Peshawar, Pakistan. His study appears in the December issue of Diabetes Care.

Botanicals such as cinnamon can improve glucose metabolism and the overall condition of individuals with diabetes — improving cholesterol metabolism, removing artery-damaging free radicals from the blood, and improving function of small blood vessels, he explains. Onions, garlic, Korean ginseng, and flaxseed have the same effect.

In fact, studies with rabbits and rats show that fenugreek, curry, mustard seeds, and coriander have cholesterol-improving effects.

But this is the first study to actually pin down the effects of cinnamon, writes Khan. Studies have shown that cinnamon extracts can increase glucose metabolism, triggering insulin release — which also affects cholesterol metabolism. Researchers speculated that cinnamon might improve both cholesterol and glucose. And it did!

The 60 men and women in Khan’s study had a diagnosis of type 2 diabetes for an average of 6 1/2 years but were not yet taking insulin. The participants in his study had been on antidiabetic drugs that cause an increase in the release of insulin. Each took either wheat-flour placebo capsules or 500-milligram cinnamon capsules.

  • Group 1 took 1 gram (two capsules equaling about one-quarter of a teaspoon) for 20 days.
  • Group 2 took 3 grams (six capsules, equaling a little less than one teaspoon) for 20 days.
  • Group 3 took 6 grams (12 capsules, equaling about one and three-quarters teaspoons) for 20 days.
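For clarity, the capsule-to-gram arithmetic behind those doses is tallied in this minimal Python sketch; only the 500-milligram capsule size and 20-day duration stated above are used, and the group labels simply mirror the list.

    # Dose arithmetic for the three cinnamon groups described above.
    CAPSULE_MG = 500   # capsule size stated in the article
    DAYS = 20          # duration of each dosing period

    for group, capsules_per_day in [("Group 1", 2), ("Group 2", 6), ("Group 3", 12)]:
        grams_per_day = capsules_per_day * CAPSULE_MG / 1000
        total_grams = grams_per_day * DAYS
        print(f"{group}: {capsules_per_day} x {CAPSULE_MG} mg capsules = "
              f"{grams_per_day:g} g/day, {total_grams:g} g over {DAYS} days")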

Blood samples were taken at each level of the study.

Cinnamon made a difference! Twenty days after the cinnamon was stopped, there were significant reductions in blood glucose levels in all three groups that took cinnamon, ranging from 18% to 29%. But there was one peculiar finding that researchers don’t understand at this point: only group 1, which consumed the lowest amount of cinnamon, continued to show significantly improved glucose levels. The placebo groups showed no significant changes.

Taking more cinnamon also seems to improve blood levels of fats called triglycerides. All the patients taking cinnamon had better triglyceride levels in their 40-day tests, with reductions of 23% to 30%. Those taking the most cinnamon had the best levels.

In groups taking cinnamon pills, blood cholesterol levels also went down, ranging from 13% to 26%; LDL cholesterol, also known as “bad” cholesterol, went down by 10% to 24%, but only in the 3- and 6-gram groups after 40 days. Effects on HDL (“good” cholesterol) were minor.

Cinnamon should be part of our daily diet — whether we have type 2 diabetes or not, writes Khan. However, for the best effects, just a sprinkle isn’t enough.

 

                      and from www.consumerreports.org

 

Consuming about one-half teaspoon of cinnamon a day for 40 days reduced blood levels of both glucose and triglycerides, a potentially artery-clogging fat, by about 25 percent in adults with type 2 diabetes, a USDA clinical trial found. Cinnamon also cut “bad” LDL cholesterol by nearly 20 percent. And the benefits persisted for up to three weeks after people stopped taking it.

http://blogs.consumerreports.org/health/2008/11/cinnamon-diabet.html


20090730-10

The New York Times, July 27, 2009, by John Markoff  –  A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

 

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

 

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

 

The researchers – leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California – generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

 

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

 

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

 

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

 

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

 

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

 

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

 

The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

 

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.”

 

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

 

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

 

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

 

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

 

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.

 

“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

 

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

 

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

 

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”


20090730-9

Updated: July 24, 2009

Intelligence officials call unmanned aerial vehicles, often referred to as drones, their most effective weapon against Al Qaeda. The remotely piloted planes are used to transmit live video from Iraq, Afghanistan and Pakistan to American forces, and to carry out air strikes.

Predator spy planes were first used in Bosnia and Kosovo in the 1990s. The Air Force’s fleet has grown quickly in recent years, and consists of 195 Predators – which are 27 feet long and cost $4.5 million apiece – and 28 Reapers, a new, more heavily armed drone. Unmanned drones fly 34 surveillance patrols each day in Iraq and Afghanistan, up from 12 in 2006. They are also transmitting 16,000 hours of video each month, some of it directly to troops on the ground.

In addition, Army units have used hand-launched models, which look like toy planes, to peer over hills or buildings. Other drones monitor the seas and eavesdrop from high altitudes, much like the storied U-2 spy planes.

Despite their popularity, the drones have many shortcomings that have resulted from the rush to deploy them. Air Force officials acknowledge that more than a third of their Predators have crashed. Complaints about civilian casualties, particularly from strikes in Pakistan, have also stirred some concerns among human rights advocates.

In July 2009, the Air Force released a report that envisions building, over the next several decades, larger drones that could do the work of bombers and cargo planes, and even tiny ones that could spy inside a room.

The Air Force also said it could eventually field swarms of drones to attack enemy targets. And it will have to be ready to defend against the same threat, which could become another inexpensive way for insurgents to attack American forces.

The report envisions a family ranging from “nano”-size drones that could flit inside buildings like moths to gather intelligence, to large aircraft that could be used as strategic bombers or aerial refueling tankers. Midsize drones could act like jet fighters, attacking other planes or ground targets and jamming enemy communications.

Perhaps the most controversial is the idea of drones swarming on attack. Advances in computing power could enable them to mount preprogrammed attacks on their own, though that would be a difficult legal and ethical barrier for the military to cross.

But before long, even a single insurgent could dispatch several small drones at once. Referring to the improvised explosive devices that insurgents have planted like mines in Iraq and Afghanistan, the report warned that the next inexpensive threat to American troops could be “an airborne I.E.D.”

20090730-7

2008-2009 Study

Association for the Advancement of Artificial Intelligence

Co-chairs: Eric Horvitz and Bart Selman

Panel: Margaret Boden, Craig Boutilier, Greg Cooper, Tom Dean, Tom Dietterich, Oren Etzioni, Barbara Grosz, Eric Horvitz, Toru Ishida, Sarit Kraus, Alan Mackworth, David McAllester, Sheila McIlraith, Tom Mitchell, Andrew Ng, David Parkes, Edwina Rissland, Bart Selman, Diana Spears, Peter Stone, Milind Tambe, Sebastian Thrun, Manuela Veloso, David Waltz, Michael Wellman

Terms of Reference

The AAAI President has commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel will consider the nature and timing of potential AI successes, and will define and address societal challenges and opportunities in light of these potential successes. On reflecting about the long term, panelists will review expectations and uncertainties about the development of increasingly competent machine intelligences, including the prospect that computational systems will achieve “human-level” abilities along a variety of dimensions, or surpass human intelligence in a variety of ways. The panel will appraise societal and technical issues that would likely come to the fore with the rise of competent machine intelligence. For example, how might AI successes in multiple realms and venues lead to significant or perhaps even disruptive societal changes?

The committee’s deliberation will include a review and response to concerns about the potential for loss of human control of computer-based intelligences and, more generally, the possibility for foundational changes in the world stemming from developments in AI. Beyond concerns about control, the committee will reflect about potential socioeconomic, legal, and ethical issues that may come with the rise of competent intelligent computation, the changes in perceptions about machine intelligence, and likely changes in human-computer relationships.

In addition to projecting forward and making predictions about outcomes, the panel will deliberate about actions that might be taken proactively over time in the realms of preparatory analysis, practices, or machinery so as to enhance long-term societal outcomes.

On issues of control and, more generally, on the evolving human-computer relationship, writings, such as those by statistician I.J. Good on the prospects of an “intelligence explosion” followed up by mathematician/science fiction author Vernor Vinge’s writings on the inevitable march towards an AI “singularity,” propose that major changes might flow from the unstoppable rise of powerful computational intelligences. Popular movies have portrayed computer-based intelligence to the public with attention-catching plots centering on the loss of control of intelligent machines. Well-known science fiction stories have included reflections (e.g., the “Laws of Robotics” described in Asimov’s Robot Series) on the need for and value of establishing behavioral rules for autonomous systems. Discussion, media, and anxieties about AI in the public and scientific realms highlight the value of investing more thought as a scientific community on perceptions, expectations, and concerns about long-term futures for AI. The committee will study and discuss these issues and will address in their report the myths and potential realities of anxieties about long-term futures. Beyond reflection about the validity of such concerns by scientists and lay public about disruptive futures, the panel will reflect about the value of formulating guidelines for guiding research and of creating policies that might constrain or bias the behaviors of autonomous and semi-autonomous systems so as to address concerns.

Focus groups:

  • Pace, Concerns, Control, Guidelines

Chair: David McAllester

  • Potentially Disruptive Advances: Nature and timing

Chair: Milind Tambe

  • Ethical and Legal Challenges

Chair: David Waltz

 

Asilomar meeting, February 2009


Attendees at Asilomar, Pacific Grove, February 21-22, 2009 (left to right): Michael Wellman, Eric Horvitz, David Parkes, Milind Tambe, David Waltz, Thomas Dietterich, Edwina Rissland (front), Sebastian Thrun, David McAllester, Margaret Boden, Sheila McIlraith, Tom Dean, Greg Cooper, Bart Selman, Manuela Veloso, Craig Boutilier, Diana Spears (front), Tom Mitchell, Andrew Ng.

Feedback on study: aifutures @ aaai.org
