Decision Making at the Cellular Level

 

Signal Transduction Networks. Mammalian Cell Engineering

Dr. Jason Haugh

 

 jason_haugh@ncsu.edu
919-513-3851 (phone)
919-515-3465 (fax)
Suite 3100, Partners II (office)

Immunomodulation & membrane receptors

Cell–cell interactions mediated by membrane receptors are crucial for the development of innate and adaptive immune responses. Modulating these interactions to increase or decrease an immune response is called "immunomodulation". Research teams are investigating small synthetic molecules that regulate membrane receptor interactions during immune responses. Therapeutic applications in cancer and autoimmune-disease immunotherapies, as well as in vaccines, could emerge from the development of such molecules. The first target is CD40, a key molecule in the development of the adaptive immune response that belongs to the TNF receptor superfamily. Synthetic molecules mimicking the CD40L homotrimer have already been developed and are currently being evaluated in animal models.


Image: scanning electron micrograph showing the interaction between a T cell and a dendritic cell.


Current Projects
A fundamental property of living cells is their ability to respond and adapt to stimuli, yet we have only begun to appreciate cell decision-making processes at the molecular level; the mechanisms that underlie them are known as signal transduction networks. While a general understanding of how intracellular signaling molecules interact in pathways has evolved in recent years, we are still unable to predict and control cell responses quantitatively under various conditions. The central tenet of our research is that the ability to manipulate cell behavior uniquely follows from understanding signal transduction as a complex chemical system. Our interdisciplinary approach, which combines mathematical modeling and analysis with molecular biology, cell biochemistry, and fluorescence imaging methods, has implications for cancer, immune regulation, and wound healing.


Molecular Crosstalk in Life and Death Signaling
Signaling pathways seldom operate in isolation. Interactions between molecules in different pathways imply the existence of a distributed signaling network that serves as a system of checks and balances; however, multiple mutations can combine to short-circuit the system, forming the molecular basis for cancer and other diseases. Further, interventions that target intracellular enzymes can have effects that propagate through the network, affecting drug efficacy. In particular, specific signaling pathways are required for cell proliferation and survival of many cell types, and the extensive crosstalk between these pathways suggests that cell life and death are co-regulated.

We are currently studying such networks in fibroblasts stimulated with platelet-derived growth factor (PDGF) and in T cells stimulated with interleukin-2 and -4; these model systems are important for tissue homeostasis and the immune response, respectively. We have developed quantitative, high-throughput biochemical assays to measure activation of the key molecular intermediates, and both genetic and pharmacological approaches are used to manipulate their activation states independently of the extracellular stimulus. This allows us to 'open' the control structure of the network and isolate specific intermolecular interactions. Kinetic models unify our observations and predict the effects of molecular interventions in combination, and pathway outcomes are correlated with cell proliferation and survival metrics to elucidate powerful design principles for engineering the cell life and death switch at the molecular level.


Intracellular Gradients and Directed Cell Migration
In wound healing, PDGF is secreted by platelets as they form clots in damaged blood vessels. This stimulates directed migration of fibroblasts from the connective tissue to the wound, where they secrete, remodel, and contract the extracellular matrix, rebuilding the tissue. Animal cells detect chemical gradients by spatial sensing, in which a cell can differentiate signaling at its front from that at its rear. We have demonstrated that PDGF gradients stimulate asymmetric production of specific lipid second messengers in the cell membrane, which apparently act as a cellular compass to signal migration in the appropriate direction.

We use total internal reflection fluorescence microscopy (TIRFM) to quantitatively image the production, lateral diffusion, and turnover of these membrane lipids in individual, living cells in real time and at ~100 nm resolution. This technique is used in conjunction with reaction-diffusion models that allow us to parse out these concurrent molecular processes under uniform and gradient stimulation with PDGF. We are currently extending this approach to stimulation with both soluble and surface-immobilized factors, to examine the relationships among intracellular signaling, cell-substratum adhesion, and the speed and orientation of cell migration. A quantitative understanding of spatial sensing will pave the way for novel wound healing therapies, with controlled delivery of PDGF and signal transduction-modifying agents to optimize the migration and proliferation of effector cells.


Strategies for cellular decision-making


Theodore J Perkins & Peter S Swain

 

Ottawa Hospital Research Institute, Ottawa, Ontario, Canada

Centre for Systems Biology at Edinburgh, University of Edinburgh, Edinburgh, UK

Correspondence to: Peter S Swain, Centre for Systems Biology at Edinburgh, University of Edinburgh, Mayfield Road, Edinburgh, Scotland EH9 3JD, UK. Tel.: +44 131 650 5451; Fax: +44 131 651 9068; Email: peter.swain@ed.ac.uk

 

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits distribution and reproduction in any medium, provided the original author and source are credited. Creation of derivative works is permitted but the resulting work may be distributed only under the same or similar license to this one. This license does not permit commercial exploitation without specific permission.


Abstract

Stochasticity pervades life at the cellular level. Cells receive stochastic signals, perform detection and transduction with stochastic biochemistry, and grow and die in stochastic environments. Here we review progress in going from the molecular details to the information-processing strategies cells use in their decision-making. Such strategies are fundamentally influenced by stochasticity. We argue that cellular decision-making can only be probabilistic and occurs at three levels. First, cells must infer from noisy signals the probable current and anticipated future state of their environment. Second, they must weigh the costs and benefits of each potential response, given that future. Third, cells must decide in the presence of other, potentially competitive, decision-makers. In this context, we discuss cooperative responses where some individuals can appear to sacrifice for the common good. We believe that decision-making strategies will be conserved, with comparatively few strategies being implemented by different biochemical mechanisms in many organisms. Determining the strategy of a decision-making network provides a potentially powerful coarse-graining that links systems and evolutionary biology to understand biological design.


Introduction

Definition of STOCHASTIC: Involving a random variable (a stochastic process) and/or involving chance or probability (a stochastic model of radiation-induced mutation)

Life at the cellular level is stochastic. Diffusion, gene expression, signal transduction, the cell cycle, and the extracellular environment are all stochastic processes that change in time in ways that can be difficult to predict (Raj and van Oudenaarden, 2008; Shahrezaei and Swain, 2008). While a cell’s environment determines its response, information on the environment comes from different, fluctuating, and perhaps contradictory, signals. This information is processed using biochemical networks whose components themselves fluctuate in concentration and intracellular location. By coming together into a multicellular organism, cells can reduce stochastic effects in their immediate environment, but even in humans signals and the cellular response to signals can be substantially stochastic (Geva-Zatorsky et al, 2006; Sigal et al, 2006; Feinerman et al, 2008).

In our opinion, such conditions imply that the cell's internal model of its environment can only be probabilistic. We propose that a biochemical network performing decision-making has three main tasks: it should infer from noisy, incoming stimuli the probable state or states of the extracellular environment and, potentially, the probable future states; given the most probable states, it must decide an appropriate response by weighing the advantages and disadvantages of each potential response; and it must implement these functions using a strategy that is evolutionarily stable, allowing a population of cells to outcompete their rivals and survive environmental catastrophes (Figure 1). Such a division has been made in other fields, from economics to artificial intelligence and neuroscience. Statistical inference is the discipline concerned with inferring a quantity we cannot observe directly (the quantity is hidden) from a quantity we can observe, but which is only correlated with the quantity of interest. Decision theory provides a means to find the optimum response given uncertain information by weighing appropriately the costs and benefits of each potential response. Finally, evolutionary theory considers scenarios where decisions are not made in isolation but with other competing decision-makers.

Figure 1: Factors influencing cellular decision-making. A cell senses signals generated by a change in the environment and must decide an appropriate response. This decision-making can depend on the cell's predictions for the current and future state of the environment based on the signals it has sensed, the short-term history of the cell, the expected benefits and the costs of each potential response, the actions of other cells, which may be competitive or cooperative, and the time taken to both decide and generate the response. The response may be a change in internal state or an action that changes the environment itself.

 

Here we survey recent work showing that techniques from these fields can explain not just qualitatively but quantitatively the behavior of cellular networks, suggesting that cells may have evolved to biochemically implement such methods. We will try to place into one framework the strategies adopted by cells to detect, process, and respond to extracellular changes. By strategy we mean how a particular signaling network detects and analyses information not in terms of the details of biochemistry, but in terms of the functions of information processing that biochemistry performs. We believe that it is at this level of information processing that we shall discover evolutionarily conserved principles, whether we consider a stem cell deciding between different fates or a bacterium deciding between expressing and not expressing a particular operon. We will begin by investigating the strategies adopted to benefit individual cells before discussing strategies that are best understood at the level of populations of cells.


How do cells interpret noisy signals?

Cells are confronted with a fundamental problem: their biochemical decision-making machinery is intracellular, but their behavior should be determined by the extracellular environment. The environment may contain, for example, energy resources or a mating partner or predator. Signals detected on the cell surface and transduced intracellularly, however, are stochastic and can never present a complete picture of the environment. What strategy should cells adopt to interpret and make use of such noisy extracellular signals?

 

One possibility is statistical inference: the cell may use extracellular signals to explicitly estimate, or infer, the state of the extracellular environment. To a human reasoner, estimating states is natural. When a doctor diagnoses a patient, she will have several possible physiological states of the patient in mind and will use observations and tests to determine which state is most likely. Similarly, a cell might be interested in the state of its environment, even though it cannot observe the state directly, because knowing the probable state can be much more beneficial than knowing several environmental parameters. For example, a rise in temperature might mean that a bacterium has become exposed to the sun or that it has entered a host organism—two different environmental states that require very different responses. From measuring extracellular signals, such as the local concentration of metabolites or hormones, cells ought to estimate the most likely state of their environment before deciding an appropriate response.

Cells that do estimate the state of their environment must infer the state, or the likely future state, from signals that are only correlated with the state. The optimum way to perform such inference is Bayesian inference; at least, it can be proved to be so if we accept a set of axioms that any form of inference ought to obey (Cox, 1946). We conjecture, then, that cells compute the likelihood of different possible environmental states, E, based on signals they sense, S, according to Bayes's rule:

$$P(E|S) = \frac{P(S|E)\,P(E)}{P(S)} \qquad (1)$$


This computation assumes that several forms of ‘prior knowledge’ are available to the cell. First, it assumes knowledge of the possible environments, E, and their relative likelihoods, or prior probabilities, P(E). This prior knowledge may be uninformative—for example, that a mating partner is equally likely to be in any direction before pheromone is detected—or more restrictive—for example, that concentrations of an extracellular sugar should be in one of two states, either high or low, with a low state twice as likely as a high state. Second, it requires the probabilities of observing different signals in different environments, P(S|E). The third term, P(S), describes the overall likelihood of sensing a signal S for all possible states of the environment. The result of the computation is the posterior probability, P(E|S)—an inference about the likelihood of different environmental states given the prior knowledge and the signals that have been sensed. In Box 1, we give an example of using Bayes’s rule. The posterior probability, P(E|S), is a function of the magnitude of the signal sensed, S, and we next discuss the common shapes that this function takes.

 

Box 1 – Inferring changes in the environment—Bayes’s rule



Cells may infer the state of their environment

Many signal transduction and genetic networks with very different biochemistry have dose–response functions that are sigmoidal. A sigmoidal function is often considered advantageous because it prevents fluctuations in the input signal from affecting the response when the input is below a threshold value, at which the response increases sharply. Near the threshold value, however, a sigmoidal response can amplify fluctuations because a small change in input generates a large change in output. If the signal S is continuous and the environment can only be in two states, then equation (1) describes the posterior probability that the environment is in one of these states. Viewed as a function of S, such a posterior probability is often smooth and sigmoidal, raising the possibility that biochemical networks generate sigmoidal responses because they are solving inference problems (Libby et al, 2007). In the simplest case, the output of a decision-making network could be proportional to the posterior probability of the extracellular environment being in a particular state. This inference about the probable state of the environment can then be processed by downstream networks to decide an appropriate response.
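To make this concrete, here is a minimal numerical sketch of equation (1) for a two-state environment. It is an illustration, not a model from the paper: the Gaussian noise distributions and all parameter values are assumptions chosen only to show that the posterior is sigmoidal in S.

```python
import numpy as np

def posterior_high(s, mu_low=20.0, mu_high=60.0, sigma=10.0, p_high=0.5):
    """Posterior P(high | S=s) for a two-state environment.

    Assumes (illustratively) Gaussian distributions of intracellular
    sugar S in each environmental state: P(S|low) and P(S|high).
    """
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    p_s_high = gauss(s, mu_high)   # P(S|high)
    p_s_low = gauss(s, mu_low)     # P(S|low)
    # Bayes's rule, with P(S) expanded over the two states:
    return p_s_high * p_high / (p_s_high * p_high + p_s_low * (1 - p_high))

s = np.linspace(0, 100, 11)
print(np.round(posterior_high(s), 3))
# The values rise smoothly from ~0 to ~1 around the midpoint between
# the two distributions: a sigmoidal dose-response.
```

Changing the prior p_high shifts the threshold of the sigmoid, a point we return to below when discussing memory.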

 

For example, Libby et al (2007) asked whether it is possible to design a genetic network that can infer the state of the environment from noisy concentrations of an intracellular signal—in essence, implementing a Bayesian computation using genetic-regulatory machinery. They considered a bacterium in an environment with just two states: one rich in a metabolite, say a sugar, and one poor in sugar. These states could correspond to the gut of a host organism and the soil. To regulate the genes for metabolism of sugars, many bacteria employ transcription factors directly as sensors: sugar enters the cell, interacts with a transcription factor, and consequently influences gene expression. Libby et al, therefore, treat intracellular sugar as the environmental signal. Each state of the environment implies a different amount of intracellular sugar, although this amount is stochastic because of fluctuations in the transport of sugar, its consumption in the cell, and other factors (Figure 2). We write P(S|high) for the distribution of intracellular sugar, S, when the environment has a high concentration of extracellular sugar and P(S|low) for the distribution of intracellular sugar when the environment has a low concentration of extracellular sugar.


Figure 2: Biochemical networks may use statistical inference to infer the probable state of the extracellular environment. (A) Inference in an environment with two states corresponding to low and high amounts of extracellular sugar S. The state low in sugar generates the blue distribution of intracellular sugar; the state high in sugar generates the red intracellular distribution. The environmental state is ambiguous for intracellular concentrations of sugar lying in the overlap between the two distributions. The Bayesian posterior probability of the state high in sugar given intracellular levels of S is the black, sigmoid-like curve. (B) The response function of the lac operon, measured by its rate of transcription in populations of E. coli as a function of the chemical IPTG, a non-hydrolysable version of the sugar lactose, and cyclic AMP (cAMP), whose concentration in vivo is inversely proportional to the concentration of glucose (Makman and Sutherland, 1965). Data taken from Setty et al (2003). In the interpretation of Libby et al, the extracellular environment has two states: one high in lactose (IPTG) and low in glucose (high cAMP), and the other low in lactose and high in glucose (low cAMP). (C) An example of the probability distributions for lactose and cAMP in the two extracellular states (Libby et al, 2007). If the extracellular environment has two states with these distributions, then the response function measured in panel B is similar to the posterior probability of the state high in lactose and low in glucose (high in cAMP). (D) The posterior probability of the state high in lactose and low in glucose given the two distributions in panel C. Compare with the measured response function in panel B.

Bayes's rule then states that the posterior probability of the state high in sugar depends on the concentration of intracellular sugar through

$$P(\text{high}|S) = \frac{P(S|\text{high})\,P(\text{high})}{P(S|\text{high})\,P(\text{high}) + P(S|\text{low})\,P(\text{low})} \qquad (2)$$

where we have expanded P(S) over the two states of the environment. Analyzing models of single-component gene regulation, Libby et al showed that even networks consisting of just one gene controlled by an allosteric transcription factor can transcribe at a rate that tightly matches the posterior probability of the state high in sugar for many distributions of intracellular sugar.

 

This interpretation of a biochemical network as a network that performs inference is consistent with measurements of the regulatory response in vivo. We can use an experimentally measured response to determine the underlying distributions for the input stimulus—the equivalent of P(S|high) and P(S|low) in equation (2)—that would give rise to the measured response if this response is proportional to the posterior probability of an environmental state high in the input stimulus (Figure 2B–D). These distributions are part of the organism's internal model of its environment. They describe what the organism expects in different environmental states and, as a consequence, underlie its decision-making strategies.

Improving inference over time

 

The inference described by Libby et al depends only on the steady-state concentration of sugar. It therefore requires the network to reach steady state within the lifetime of a fluctuation in extracellular sugar if the network is not to average over fluctuations in sugar. In situations where the signal fluctuates substantially over time, the cell might be expected to continually update its beliefs. Andrews et al (2006) have proposed that the network generating bacterial chemotaxis performs such real-time inference. To chemotax along a gradient of a signal, Escherichia coli estimates a time derivative of the signal (Berg and Brown, 1972). The signal is detected by its binding to receptors at the plasma membrane, which is a stochastic process (Korobkova et al, 2004). Andrews et al assume that, before estimating the time derivative, the cell first infers the concentration of the signal at the cell membrane from the concentration of receptors bound by signal. Using simulation, they show that the inference implemented by the chemotactic network strongly resembles a Kalman filter (Kalman, 1960; Kalman and Bucy, 1961), an inference technique from control theory that tracks the dynamics of a hidden variable (here the concentration of the signal) from noisy measurements of a correlated variable (the concentration of receptors bound by the signal). A Kalman filter falls within the Bayesian framework. It performs updating through a sequential application of Bayes's rule: the current posterior probability of the extracellular state becomes the prior probability of the extracellular state at the next time step, and Bayes's rule is then applied again to find the updated posterior probability (Barker et al, 1995). Intuitively, sequential updating allows a cell to base its decisions not just on the current signals it is receiving, but also on their recent history. In chemotaxis, such inference leads to optimum low-pass filtering of the concentration of the signal, reducing the effects of stochastic biochemistry and of rotational diffusion of the chemotaxing cell, while maintaining a response sufficiently fast to allow the bacterium to detect changes in the gradient of the signal in real time (Andrews et al, 2006).
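The sketch below illustrates the kind of sequential updating a Kalman filter performs; it is not the chemotaxis model of Andrews et al. A hidden concentration performs a random walk, the cell observes it through noisy measurements, and a scalar Kalman filter applies Bayes's rule at each time step. All dynamics and noise variances are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not fitted to chemotaxis data)
q = 0.1   # variance of the random walk of the true concentration
r = 4.0   # variance of the measurement noise (receptor binding)

# Simulate the hidden signal and its noisy measurements
T = 200
truth = np.cumsum(rng.normal(0, np.sqrt(q), T)) + 10.0
meas = truth + rng.normal(0, np.sqrt(r), T)

# Kalman filter: track the hidden concentration from the measurements
est, var = 10.0, 1.0          # initial belief: mean and variance
estimates = []
for z in meas:
    var += q                  # predict: uncertainty grows with the walk
    k = var / (var + r)       # Kalman gain: how much to trust the new datum
    est += k * (z - est)      # update: posterior mean (Bayes's rule)
    var *= (1 - k)            # posterior variance shrinks
    estimates.append(est)

print("mean squared error, raw measurements:",
      round(np.mean((meas - truth) ** 2), 2))
print("mean squared error, Kalman estimates:",
      round(np.mean((np.array(estimates) - truth) ** 2), 2))
# The filtered estimate tracks the hidden concentration with lower error
# than the raw, noisy measurements: optimal low-pass filtering.
```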

 

Similar real-time inference may also occur in the system for sugar metabolism described above. For example, once exposed to a high extracellular state of sugar, another state high in sugar is perhaps more likely, at least over some period of time, because the bacteria are probably in the human gut. Such memory naturally fits into Bayesian inference through the prior probabilities of the states high and low in sugar, P(high) and P(low). After exposure to a state high in sugar, P(high) could increase and P(low) will correspondingly decrease. With this new prior probability, the posterior probability of the state high in sugar will still be a sigmoidal function of S, but will be larger at low concentrations of sugar. The change in the prior probability, P(high), could be biochemically implemented in E. coli through the concentration at the plasma membrane of the lactose permease, LacY, which is known to remain at an elevated concentration for generations after an initial exposure to lactose (Novick and Weiner, 1957). Increasing the concentration of the permease will increase the rate of the transcriptional response in a manner similar to the change in the posterior probability because more lactose will be transported into the cell for the same concentration of extracellular lactose. In the eukaryote Saccharomyces cerevisiae, a similar epigenetic memory of prior exposure to galactose is created through concentrations of the cytosolic enzyme Gal1p (Zacharioudakis et al, 2007). This increase in concentration also has the effect of enhancing the transcriptional response of the GAL regulon to low concentrations of galactose (Kundu et al, 2007). Chromatin modification is another eukaryotic epigenetic mechanism that has the potential to biochemically implement changes in prior probabilities of environmental states (Houseley et al, 2008). Such learning is often referred to as adaptive sensitization (Ginsburg and Jablonka, 2009).
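In terms of the earlier sketch, such memory is simply a change of prior. Reusing the hypothetical posterior_high function from above (again, purely illustrative numbers):

```python
# Posterior with a neutral prior versus an elevated prior after a
# previous exposure to the high-sugar state (values are illustrative).
naive = posterior_high(35.0, p_high=0.5)
primed = posterior_high(35.0, p_high=0.8)
print(round(naive, 3), round(primed, 3))
# The primed cell assigns a higher probability to the high-sugar state
# for the same intracellular sugar level, mimicking the effect of
# elevated LacY or Gal1p concentrations after a previous induction.
```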

 

These examples show that cells have the potential to implement sophisticated statistical calculations to infer changes in their environment despite stochastic signals and stochastic sensing networks. Over evolutionary time scales, the signaling and decision-making networks should evolve to encode the properties of the different possible environmental states. If environmental characteristics change, then the networks should alter to match this change (Tagkopoulos et al, 2008; Mitchell et al, 2009).


Cells anticipate changes in the state of the environment

Cells are continually sensing signals from a multitude of sources. Integrating this information has the potential to improve inference and, consequently, the fitness of the organism. While inferring the current environmental state can be advantageous, so equally is anticipating future changes. Tagkopoulos et al (2008) have shown that E. coli appears to infer from a sudden increase in temperature that it has left the soil and is now in a host organism. Consequently, as the bacteria pass into the gut of the host, they will experience a reduction in available oxygen. Using microarrays, Tagkopoulos et al demonstrated that the transcriptional response to an increase in temperature overlaps with the response to a loss of oxygen, even if the temperature change occurs at maximal oxygen levels. Having inferred from the increase in temperature that they are now in a host, the bacteria predict an imminent loss of oxygen and respond appropriately in advance (Tagkopoulos et al, 2008). Such anticipation is learnt over evolutionary time scales. Using microevolution experiments in which an increase in temperature was unnaturally followed by an increase in oxygen, Tagkopoulos et al evolved bacteria in which the association between oxygen and temperature was substantially reduced. Another example can be found in the expression of the sugar operons of E. coli. During passage along the human gut, lactose appears earlier than maltose, and, indeed, anticipating future exposure to maltose, E. coli expresses the genes for metabolizing maltose upon exposure to lactose (Mitchell et al, 2009). This response is adaptive: activation of the maltose operon is lost if bacteria are grown in an environment where lactose is not followed by maltose, and alternative sugars cannot substitute for lactose and induce expression. Similar anticipatory responses also occur in S. cerevisiae (Mitchell et al, 2009).

 

Biochemical networks have also been proposed that learn on the time scale of the lifetime of the organism (Gandhi et al, 2007; Ginsburg and Jablonka, 2009; Fernando et al, 2009). In such an associative learning framework, learning requires both memory and recall. Upon responding to a stimulus, an organism must record aspects of the stimulus and of its response. When the stimulus stops, the organism should also stop responding, but, through recall of its previous exposure, the threshold of stimulus at which future responses occur will change (Ginsburg and Jablonka, 2009). A classic example is Pavlov's dog, which learnt to associate a bell chime with feeding through the simultaneous occurrence of the chime and the sight of food. Genetic and signal transduction networks have been designed in silico which, although they initially respond only to stimulus A and not to stimulus B, learn upon simultaneous exposure to both stimuli to associate the two and then respond to stimulus B when it is applied alone (Gandhi et al, 2007; Fernando et al, 2009). Both networks work through a molecule that enhances the response to stimulus B, but is only synthesized when both stimuli occur simultaneously. Such associative learning, despite its adaptive potential, has not yet been discovered in cells.
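A toy discrete-time sketch of this design (my own illustration, not the published networks of Gandhi et al or Fernando et al): a 'memory' molecule M is synthesized only when stimuli A and B coincide, and accumulated M lets B alone trigger the response.

```python
def step(a, b, m, decay=0.95, gain=1.0):
    """One time step of a toy associative network (illustrative).

    a, b: stimulus levels (0 or 1); m: memory molecule concentration.
    M is produced only on coincident stimulation and decays slowly.
    """
    m = decay * m + gain * (a * b)            # coincidence detector
    response = 1 if (a or (b and m > 0.5)) else 0
    return m, response

m = 0.0
print("B alone, untrained:", step(0, 1, m)[1])    # 0: no response
for _ in range(10):                               # pair A with B
    m, _ = step(1, 1, m)
print("B alone, trained:  ", step(0, 1, m)[1])    # 1: learnt response
```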


Weighing costs and benefits

Once a cell has inferred the most probable state of its environment, it needs to decide an appropriate response. The anticipated costs and benefits of each potential response, given the probable environmental state and the probable future environmental states, must be compared to choose both the most advantageous response and the level at which to respond. For new gene expression, for example, one expected cost is the expenditure of cellular energy in the synthesis of RNA and proteins; the expected benefits will depend on the environment and the properties and quantities of the proteins synthesized. These costs and benefits will be biochemically encoded into decision-making networks over evolutionary time-scales.


Cost and benefit in terms of fitness

In many situations, it may be hard to quantify or even identify the various costs and benefits to a cell of a particular response, particularly for cells in multicellular organisms. For unicellular organisms, however, the situation is simpler because much of their physiology appears optimized to allow as rapid a reproduction as possible, at least for laboratory strains. An appropriate measure of fitness, therefore, is cellular growth rate, an experimentally accessible quantity. Perhaps the simplest cellular decision is when and at what level a cell should express a particular set of genes. Dekel and Alon (2005) elegantly studied precisely this decision in the bacterium E. coli by measuring the effects on cellular growth rate of expressing the lac operon in different extracellular concentrations of the sugar lactose. The lac operon encodes enzymes to metabolize lactose, and we will use Z to denote their intracellular concentration. By inducing the operon to different extents in an environment without lactose and measuring the reduction in growth rate of a population of bacteria as compared with a control population that does not express the operon, Dekel and Alon estimated the cost of this decision (Figure 3A). They found that the reduction in growth rate increased more than linearly with the amount of enzymes produced because, they argued, high synthesis rates of some proteins can deplete cellular resources and so impact cell growth super-linearly (Dekel and Alon, 2005)—a form of opportunity cost where one decision precludes another. In this environment low in sugar, they found empirically that the growth rate g_low is reduced from the growth rate of the control population, g_c, by

$$g_{\text{low}} = g_c\,\left[1 - \text{cost}(Z)\right] \qquad (3)$$

Figure 3: Cells may evolve to make optimal decisions. (A) The cost of expressing the lac operon in E. coli is measured by the reduction in relative growth rate when the operon is expressed in environments without lactose. The red curve is given by a fit of equation (4). (B) Once the cost has been found, the benefit of expressing the operon can be obtained by measuring the increase in the relative growth rate when the operon is fully expressed in environments with different amounts of extracellular lactose. The red curve is given by a fit of equation (5) with equation (6). Data in panels A and B are from Dekel and Alon (2005). (C) The level of expression of the lac operon, Z, under conditions of zero glucose. Z_WT is the level of expression of the operon when fully induced. Data are from Kalisky et al (2007). The red curve is the predicted level of expression of the operon found by Kalisky et al by maximizing the benefit minus the cost as a function of the extracellular concentration of lactose (equation (7)). Bars indicate standard errors throughout.

 

where the cost of expression is calculated as

$$\text{cost}(Z) = \eta_0 Z\left(1 + \frac{Z}{\ell_0}\right) \qquad (4)$$

for positive constants η₀ and ℓ₀. Cost is a quadratic function of the quantity of enzymes synthesized, Z, at least for the range of Z tested. Given this cost, they estimated the benefit of expression in different extracellular concentrations of lactose by measuring the increase in growth rate for cells fully expressing the operon as compared with control cells that did not express the operon. Any increase in growth rate is determined by the surplus energy gained by the metabolism of lactose despite the synthesis of the enzymes required (Figure 3B). In this environment, where the concentration of extracellular sugar can be high, the growth rate is

$$g_{\text{high}} = g_c\,\left[1 - \text{cost}(Z) + \text{benefit}(Z, S)\right] \qquad (5)$$

where the increase in growth rate from sugar metabolism can be described by

$$\text{benefit}(Z, S) = \delta Z\,\frac{S}{K_Y + S} \qquad (6)$$

for positive constants δ and K_Y. Dekel and Alon (2005) postulate that this Michaelis–Menten form arises from the action of LacY permeases, which import lactose into the cell.


Decisions to optimize fitness

To make a decision, a cell should compare the fitness of each potential response given the expected extracellular environment. We define the fitness of a response as the expected benefit to the growth rate minus the expected cost. Such comparisons happen often in our own reasoning. To decide between one treatment and another, a doctor weighs the cost and efficacy of each treatment against the seriousness of the disease. Dekel and Alon (2005) showed that the level of expression of the lac operon appears to have evolved to optimize a similar trade-off. Given their measured costs and benefits of expression, they used decision theory to ask what concentration of enzymes, Z, E. coli should synthesize to optimize its fitness. By assuming an extracellular environment in just one state with a constant concentration of extracellular lactose, they argue, and show with microevolution experiments, that bacteria maximize their growth rate as a function of Z. The optimum concentration of Z, Z_opt, satisfies

$$g(Z_{\text{opt}}, S) \geq g(Z, S) \qquad (7)$$


for a fixed concentration of the sugar lactose, S, and for all other concentrations of Z. We assume that the concentration of intracellular lactose is proportional to the extracellular concentration. The optimum Z is sigmoidal in S (Figure 3C). Below a critical concentration of sugar, the cost of expression outweighs the benefit, and the optimal expression level is zero. Above this concentration, the optimal expression increases with S, although it eventually saturates because of both diminishing benefit and increasing cost.
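Given cost and benefit functions of the forms in equations (4)–(6), the optimum can be found numerically. The sketch below uses made-up parameter values, not Dekel and Alon's fitted constants, and a simple grid search; it reproduces the qualitative behavior just described: Z_opt is zero below a critical S, then rises and saturates.

```python
import numpy as np

# Illustrative constants (assumptions, not the fitted values of Dekel and Alon)
eta0, ell0 = 0.05, 0.5      # cost parameters
delta, k_y = 0.3, 0.4       # benefit parameters

def cost(z):
    return eta0 * z * (1 + z / ell0)

def benefit(z, s):
    return delta * z * s / (k_y + s)

z_grid = np.linspace(0, 1.5, 301)    # Z in units of full induction
for s in [0.0, 0.05, 0.1, 0.5, 2.0, 10.0]:
    fitness = benefit(z_grid, s) - cost(z_grid)
    z_opt = z_grid[np.argmax(fitness)]
    print(f"S = {s:5.2f}  ->  Z_opt = {z_opt:.2f}")
# Below a critical S the cost outweighs the benefit and Z_opt = 0;
# above it, Z_opt rises with S and eventually saturates.
```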

 

Surprisingly, considering a two-state environment—one state low and one state high in sugar—with each state producing some distribution of intracellular sugar S, we can use decision theory and Dekel and Alon’s measurements to optimize the expected growth rate and derive Bayes’s rule (Box 2).

 

Box 2 – Deciding by optimizing fitness—a derivation of Bayes’s rule as an optimal response

 


 

Considering benefit minus cost as a measure of fitness may, however, be too simple. The expression for the lac operon predicted by Dekel and Alon from equation (7) does not match in detail the measured level of expression (for bacteria grown in the absence of glucose; Kalisky et al, 2007). The predicted optimal curve rises higher than that for wild-type expression, although with a gentler slope (Figure 3C). By allowing an environment with a probability distribution for concentration of sugar, Kalisky et al have improved the prediction by averaging over this distribution. For their best comparisons, they use a bimodal distribution similar to a superposition of the two distributions generated by the two environmental states proposed by Libby et al (2007). They improve their fit further by considering stochastic fluctuations in the concentration of the transcription factor controlling the response (LacI). Such stochasticity reduces fitness, but only if the regulatory proteins have concentrations near those that optimize the growth rate. Otherwise, fluctuations can be beneficial because cells that by chance happen to grow faster will dominate the population (Tanase-Nicola and ten Wolde, 2008). Near the optimal growth rate, the deleterious effect of fluctuations in the concentration of the transcription factor can be minimized if the cellular response is saturated at those concentrations of sugar that are most frequent (Kalisky et al, 2007). The DNA-binding site of the transcription factor is consequently either always occupied or always free. Typical fluctuations in the concentration of free transcription factor are buffered either by high concentration of the inducer lactose or by a large number of active transcription factors (Elowitz et al, 2002).


Other definitions of cost and benefit

That cells may do more than optimize their growth rate is also well known in ecology. There, a distinction is drawn between r and K selection (r and K are variables in the logistic equation, which models the growth of populations: r is the maximum possible growth rate and K is the carrying capacity or maximum size of the population; MacArthur and Wilson, 1967). A typical organism undergoing r selection grows quickly and usually lives in stochastic environments where extensive environmental calamities can occur, but there is little competition. A typical organism undergoing K selection lives in a competitive environment and maximizes its competitive abilities rather than its growth rate (Pianka, 1970). An r strategy, therefore, maximizes the expected growth rate, whereas a K strategy could minimize the extinction rate or perhaps the variance in the growth rate. Such issues become more complex when we consider the effects of decisions made by other cells.
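For reference, the logistic equation in which r and K appear is

$$\frac{\mathrm{d}N}{\mathrm{d}t} = rN\left(1 - \frac{N}{K}\right),$$

where N is the population size: growth is exponential at rate r while N is far below the carrying capacity K, and stalls as N approaches K.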

 

Although reproduction ultimately decides fitness, we can also examine the effectiveness of biochemical networks that do not directly affect growth. Chemotaxis is an attractive system because its goal—to chemotax towards or away from a source of a chemical—can be identified. Stochasticity affects the diffusion of the signal, binding of the signal to any receptors at the cell surface, signal transduction, and potentially the motion of the chemotaxing cell itself. Some organisms, such as E. coli, move by swimming at constant speed with abrupt stops where they re-orient in a random direction—a process known as tumbling—and then begin swimming again. To swim up a chemical gradient, the cell senses the current concentration of the chemical and compares it to the concentration sensed earlier (Schnitzer et al, 1990). If the concentration is increasing, the cell is swimming in the right direction and tumbling is suppressed. If the concentration is decreasing, the cell is swimming in the wrong direction and tumbling happens more often. In principle, cells could sense concentrations by allowing the chemical to enter the cytosol and interact with signaling molecules or transcription factors, as in the examples of sugar metabolism discussed previously. However, a more accurate strategy is for cells to degrade the signal at their surface and so prevent re-measurement of previously observed molecules (Endres and Wingreen, 2008). Cells may also use stochasticity to improve their chemotaxis: bacteria swimming in the wrong direction may re-orient faster by rotational diffusion rather than by actively changing their motion (Strong et al, 1998).
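A minimal run-and-tumble sketch (an illustration of the strategy just described, with invented speeds, tumbling probabilities, and gradient) shows how simply modulating the tumbling probability by the sign of the temporal change in concentration biases an otherwise random walk up the gradient:

```python
import numpy as np

rng = np.random.default_rng(1)

def concentration(x, y):
    return 0.1 * x                      # linear gradient along x (illustrative)

x = y = 0.0
angle = rng.uniform(0, 2 * np.pi)
c_prev = concentration(x, y)
for _ in range(5000):
    x += np.cos(angle)                  # run at constant speed
    y += np.sin(angle)
    c = concentration(x, y)
    # Tumble rarely when the concentration is increasing (right direction),
    # often when it is decreasing (wrong direction).
    p_tumble = 0.02 if c > c_prev else 0.2
    if rng.random() < p_tumble:
        angle = rng.uniform(0, 2 * np.pi)   # tumble: random reorientation
    c_prev = c

print(f"final x = {x:.0f}")   # strongly positive: net drift up the gradient
```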

 

The accuracy of a decision can also be used to quantify cost. Andrews and Iglesias (2007) have modeled decision-making and the chemotactic response of slime moulds. In their model, the state of the environment is the true angle of a chemical gradient, s, up which a slime mould wishes to chemotax. A chemotaxing cell senses and responds stochastically with a movement angle r. If r does not equal s, the cell does not chemotax towards the source and receives a cost in fitness, which Andrews and Iglesias suggest obeys the equation

$$\text{cost}(s, r) = 1 - \cos(s - r) \qquad (8)$$

Equation (8) is minimal when s=r and maximal when s and r are 180 degrees apart. Using a Bayesian approach, they calculate the expected cost as

$$\langle \text{cost} \rangle = \int\!\!\int P(s)\,P(r|s)\,\text{cost}(s, r)\,\mathrm{d}s\,\mathrm{d}r$$

where P(r|s) is a probability distribution describing the stochastic behavior of the chemotactic network—the tighter this distribution is around s, the better the chemotaxis—and the distribution P(s) is the cell's prior knowledge of the location of the source of the signal. Andrews and Iglesias asked how accurately r needs to reflect s if the expected cost is to be less than some threshold D, a standard information-theoretic calculation (Cover and Thomas, 2006). To predict behavior, they use the distribution P(r|s) that has the maximum allowed cost of D and so minimizes the correlation (or, more correctly, the mutual information) required between r and s. Interpreting the degree of polarization of the cell's morphology as proportional to the cell's degree of prior knowledge, their predictions are quantitatively consistent with observations of the slime mould Dictyostelium discoideum. Unpolarized cells respond as if they have no a priori assumptions and, for example, change directions more frequently than polarized cells (Andrews and Iglesias, 2007).
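As a numerical illustration of the expected cost (a Monte Carlo sketch assuming the cost function of equation (8), a uniform prior over s, and an invented Gaussian response distribution P(r|s), not Andrews and Iglesias's model), tighter response distributions give lower expected cost:

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_cost(sigma, n=100_000):
    """Monte Carlo estimate of <cost> for cost(s, r) = 1 - cos(s - r),
    a uniform prior over s, and r = s + Gaussian error of width sigma
    (an illustrative response distribution P(r|s)). By symmetry, the
    uniform prior drops out and only the response error matters."""
    error = rng.normal(0, sigma, n)
    return np.mean(1 - np.cos(error))

for sigma in [0.1, 0.5, 1.0, 2.0]:
    print(f"response width {sigma:.1f} rad -> expected cost "
          f"{expected_cost(sigma):.3f}")
# A tight response (small sigma) keeps the expected cost near 0; a broad,
# poorly informed response pushes it towards 1, the cost of a random angle.
```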


Decisions at the level of populations

So far we have looked at decision-making strategies as they benefit isolated individuals, but cells and organisms typically exist in populations. Interactions between organisms or between organisms and their environment can change the fitness of different strategies over time. Decision theories that assume a single decision-maker and fixed costs and benefits are no longer appropriate (Nowak and Sigmund, 2004). Although we have argued that decision-making strategies can be understood as maximizing or near-maximizing the reproductive success of the individual, competition may force organisms to use strategies that appear suboptimal. For example, Pfeiffer et al (2001) have argued that a trade-off exists between the yield of ATP and its rate of production during the metabolism of sugars. Fermentation can produce ATP at a faster rate than respiration, but at a lower yield: it produces fewer ATP molecules per sugar molecule. In situations where organisms are competing for a common, extracellular resource, they should, therefore, use fermentation. When metabolizing internal resources, they should use respiration. This prediction is borne out for some microorganisms, such as S. cerevisiae, which use fermentation to produce ATP while decomposing organic matter even in the presence of oxygen (Pfeiffer et al, 2001). A strategy with lower fitness in environments without competition—fermentation is an inefficient use of a resource—can become successful in environments with competition.


Optimizing inclusive fitness

Such phenomena, where the fitness of a strategy depends on the strategies adopted by the rest of the population, are best analyzed using ideas from evolutionary theory. Natural selection can be viewed as maximizing not the fitness of an organism, but its inclusive fitness (Hamilton, 1964). The reproductive success of an organism need not come only through the individual organism's own reproduction, but also through the reproduction of related organisms, because they share the genes of the individual. Inclusive fitness includes the direct fitness of an organism (offspring generated by the organism's own behavior) and its indirect fitness (the offspring of neighbors that survive because of the actions of the organism, weighted by the degree of relatedness of those offspring to the organism). Decision-making strategies that appear suboptimal because of a suboptimal direct fitness of the individual can be understood as optimal because of their contribution to increasing the individual's indirect fitness. Such cooperative strategies, which benefit other cells in the population, but are possibly detrimental to the decision-maker, can be described by Hamilton's rule (Hamilton, 1964; Box 3). If b is the benefit to the fitness of the cooperation's recipient, c is the cost to the fitness of the cooperator, and r is a measure of the genetic relatedness of the recipient and the cooperator, then a cooperative strategy will be favored by selection if

$$rb > c$$

 

Box 3 – Decisions in populations—Hamilton’s rule



A cooperative strategy can only be selected if there is genetic relatedness (r>0) and if the benefit is sufficiently high and the cost is sufficiently low.

 

Equally influential is the concept of an evolutionarily stable strategy (Maynard Smith and Price, 1973), which formalizes what we mean by an optimal strategy. A population implementing an evolutionarily stable strategy is optimal in that it cannot be invaded by a small number of organisms implementing an alternative strategy.

 

Research has focused on understanding the decision-making strategies in populations of microorganisms (Keller and Surette, 2006; West et al, 2006). Experimentally determining the costs and benefits appearing in Hamilton's rule is difficult, but it is relatively straightforward to change the degree of relatedness in populations of microorganisms: for example, by seeding the population with either a single clone or two clones with opposing phenotypes, or by either preventing or allowing mixing of growing subpopulations. Such manipulations should, however, not alter the cost and benefit of a cooperative strategy. It is important, though, to be aware that generalizing from results obtained under laboratory conditions may not always be appropriate. The effects of stochasticity have also attracted attention, and we will begin by looking at such effects in bet-hedging decisions.


Bet-hedging strategies

A bet-hedging strategy is usually one in which different individuals of an isogenic population persistently exhibit different phenotypes. It can be defined as a phenotypic polymorphism that reduces the variance in fitness of a population of cells while possibly increasing the variance in fitness for certain individuals within the population (Seger and Brockmann, 1987). How are such strategies implemented by the cell? Biochemically, the gene or protein network that determines the phenotype must be bi-stable or, more generally, multi-stable. It must have several distinct, heritable steady states. One example is phase variation in bacteria, where cells decide between expressing different phenotypes or ‘phases’. Although the biochemistry generating the phenotypes is diverse, ranging from site-specific rearrangements of DNA to epigenetic mechanisms, the strategy of phase variation is an example of convergent evolution having been adopted by many bacterial species (Avery, 2006).

 

Stochastic fluctuations in a multi-stable network can be both advantageous and disadvantageous. Too large, and they can undermine the dynamical stability of each steady state, causing cells to fluctuate too rapidly from one phenotype to another (Hasty et al, 2000; Acar et al, 2005). For example, much of the genetic regulation active in the lysogenic state of phage lambda is believed to reduce stochastic fluctuations out of that state (Aurell et al, 2002; Santillan and Mackey, 2004). Yet, in general, decision theory predicts that random strategies can outperform deterministic strategies whenever some aspect of the environment is unobserved (Bertsekas, 2005). A cell can never accurately sense all relevant variables in the environment, suggesting that the potential for stochastic behavior is high, and not present under only special conditions. Indeed, without such fluctuations it may be impossible to generate different phenotypes within the population at all.

Bacteria exploit stochastic fluctuations to generate phase variation and to determine the lifetime of each phase, but how much stochasticity is necessary, and how should this stochasticity relate to variation in the environment? Surprisingly, even fully stochastic switching with no sensing of the environment can be evolutionarily stable, but only if the environment changes infrequently (Kussell and Leibler, 2005). Assuming an alternative strategy that continuously senses the environment with a concomitant continuous cost in metabolic energy, Kussell and Leibler showed mathematically that stochastic switching without sensing is stable provided the state of the environment is not too uncertain, and related the extent of that uncertainty to the environment's entropy. The cost of sensing then outweighs the benefit because the environment changes rarely and most sensing is superfluous. In agreement with earlier predictions (Lachmann and Jablonka, 1996; Thattai and van Oudenaarden, 2004; Wolf et al, 2005), they proved that the optimal level of stochasticity or, more exactly, the optimal rate of switching is proportional to the probability of a change in the state of the environment and inversely proportional to the average lifetime of an environmental state (Kussell and Leibler, 2005). Such a choice balances the advantages of quickly switching to the optimum phenotype for the current environmental state against the disadvantages of quickly switching away from this optimum phenotype before the state of the environment changes (Wolf et al, 2005). Wolf et al (2005) included stochastic sensing, allowing environmental transitions to be unobserved, observed only after long delays, or the environmental state to be incorrectly identified. When the costs of sensing are negligible, they found that the strategy of fully stochastic switching is only evolutionarily stable if the stochasticity impeding sensing is strong enough to effectively prevent sensing of environmental transitions: for example, if the delay in signal transduction is sufficiently long that the measured environmental state no longer corresponds to the current environmental state (Wolf et al, 2005).
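A toy simulation in the spirit of these analyses (my own construction, with invented growth factors and rates, not the models of Kussell and Leibler or Wolf et al) illustrates the prediction that blind stochastic switching performs best when the switching rate is comparable to the rate of environmental change:

```python
import numpy as np

rng = np.random.default_rng(3)

def long_term_growth(switch_rate, env_flip=0.01, steps=20_000):
    """Toy model: two phenotypes, two environments. A phenotype grows
    (factor 1.0 per step) in its matching environment and is penalized
    (factor 0.2) otherwise. Cells switch phenotype blindly at
    switch_rate per step; the environment flips with probability
    env_flip. Returns the average log growth rate per step."""
    n = np.array([0.5, 0.5])          # phenotype abundances
    env = 0
    log_growth = 0.0
    for _ in range(steps):
        growth = np.array([1.0, 0.2]) if env == 0 else np.array([0.2, 1.0])
        n = n * growth
        # blind stochastic switching between the two phenotypes
        n = np.array([(1 - switch_rate) * n[0] + switch_rate * n[1],
                      switch_rate * n[0] + (1 - switch_rate) * n[1]])
        total = n.sum()
        log_growth += np.log(total)
        n /= total                    # renormalize to avoid overflow
        if rng.random() < env_flip:
            env = 1 - env
    return log_growth / steps

for k in [0.0001, 0.001, 0.01, 0.1, 0.4]:
    print(f"switching rate {k:.4f} -> growth {long_term_growth(k):+.4f}")
# Growth peaks when the switching rate is of the order of the rate of
# environmental change (here 0.01), as the theory predicts.
```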

 

Many of these predictions have been verified experimentally. Using a synthetic bi-stable genetic network in E. coli, Kashiwagi et al (2006) showed that stochastic fluctuations can cause cells to switch into the state most favored by the current environment. Acar et al measured the growth rate of a yeast strain engineered to switch stochastically between two phenotypes in an environment that varies periodically between two states, each favoring the growth of one of the phenotypes. As predicted, they found that fast switchers grow faster in rapidly varying environments and that slow switchers grow faster in slowly varying environments (Acar et al, 2008). Natural examples include the slow-growing persister cells in isogenic bacterial colonies (Balaban et al, 2004; Kussell et al, 2005). Such cells are able to resist some antibiotics, and, after removal of the antibiotic, the surviving persisters give rise to a colony that again has a small fraction of persisters because stochastic transitions occur between the persister and the usual cellular state (Balaban et al, 2004). Another bet-hedging strategy is followed by Bacillus subtilis. Under poor nutrient conditions, most cells commit to sporulation, but a small fraction instead become 'competent' (Maamar and Dubnau, 2005; Smits et al, 2005; Suel et al, 2006). They are then able to take up DNA from the environment, betting that new DNA will enable growth despite the poor conditions. This decision to become competent is made stochastically: reducing intracellular stochasticity reduces the fraction of competent cells (Maamar et al, 2007; Suel et al, 2007).

 

Bet-hedging can usually be understood as cooperative behavior. Consider persister cells. While in the persister state, cells have the potential to survive some catastrophes, but they grow only very slowly, and there is no guarantee that a suitable catastrophe will ever occur. Why, then, should cells enter the persister state? A strain with a lower percentage of persisters could potentially invade because its faster growth may generate a greater number of persisters by the time of the next catastrophe. Bacterial cells, however, are usually surrounded by relatives in a clonal group. Although their direct fitness is low in the persister state, unless a catastrophe occurs, they increase their indirect fitness by freeing resources for other cells (Gardner et al, 2007). Indeed, modeling predicts that the number of persister cells should increase as resources become scarce: the cost to persister cells of their decision falls because the growth rate of non-persister cells decreases, and the benefit to non-persister cells rises because resources are limiting (Gardner et al, 2007).


The tragedy of the commons

Problems of cooperation occur most often when different organisms share a common resource (Hardin, 1968; Rankin et al, 2007). Organisms can ‘cheat’ by using the resource inefficiently or by not contributing as much to the resource as others and yet still receive almost as much benefit because the cost of their cheating is shared by all the organisms. Such cheaters can substantially lower the fitness of the population as compared with a population of cooperators (Figure 4). An example of this ‘tragedy of the commons’ is cancer.

 

Figure 4: The tragedy of the commons. Blue cooperator cells secrete enzymes, shown as a pentagon, which hydrolyze an extracellular metabolite, shown as two joined circles, into a form that cells can import (two separated circles). The enzymatic reaction is highlighted within the dotted circle. Green cheater cells benefit from the cooperative action of synthesizing the enzyme by importing the hydrolyzed molecules. They do not, however, pay the associated cost because they do not synthesize the enzyme themselves, and hence have a growth advantage. As the number of cheater cells grows, the resource is used less and less efficiently, and the fitness of the population of cells decreases.

 

Microorganisms often contribute to a common pool of molecules (West et al, 2006). For example, a cytosolic pool of viral proteins is created when many viruses infect a single host cell, but a mutant virus can evolve that sequesters proteins from the pool while contributing little, leading to a loss of fitness for all the viruses (Turner and Chao, 1999). Many bacteria communicate by releasing small, diffusible, autoinducer molecules (Keller and Surette, 2006). Detection of autoinducers often leads to expression of exoproducts, such as extracellular enzymes, nutrient-scavenging molecules, and toxins, and to further synthesis of the autoinducers. At high densities of cells, this positive feedback allows such quorum sensing to generate substantial production rates of exoproducts, a common resource, but quorum sensing too is vulnerable to mutants that avoid the cost of synthesizing the exoproducts, yet still benefit from them (Diggle et al, 2007). The fitness of such cheaters decreases with their frequency because the common pool shrinks as fewer and fewer individuals contribute. Nevertheless, by competing a wild-type and a cheater strain of Pseudomonas aeruginosa under different conditions of relatedness, Diggle et al (2007) showed that high relatedness favors the cooperative, quorum-sensing strategy. Cheaters, intriguingly, reduce virulence in P. aeruginosa because they decrease the rate of production of virulence factors (Rumbaugh et al, 2009).

 

Both cheaters and cooperators can stably coexist. Greig and Travisano (2004) considered the strategy of expressing the SUC genes of S. cerevisiae. These genes encode the enzyme invertase, which hydrolyses sucrose, but, unusually, this enzyme is secreted extracellularly and, therefore, potentially benefits all nearby cells. Greig and Travisano argue that the observed high degree of polymorphism, both in the number of SUC genes and in their activity, arises because of selection for cheaters, whereby some cells with one polymorphism do not synthesize invertase, but benefit, instead, from its expression by others with a different polymorphism. Gore et al (2009) extended these ideas. By competing two strains of yeast, a cooperator that expresses invertase and a cheater that does not, they demonstrated that small numbers of cooperating cells can invade a population of cheaters and that small numbers of cheaters can invade a population of cooperators. Both strategies can coexist: the evolutionarily stable strategy is a mixed strategy. Cooperators benefit slightly more from the invertase they express than nearby cheaters do and can more than recover the cost of synthesizing the enzyme. As the number of cooperators grows, more sucrose is available to the cheaters, whose growth rate overtakes that of the cooperators because the cheaters do not synthesize invertase. With many cheaters, however, their growth rate slows as compared with that of the cooperators because little hydrolysed sucrose is available. Invertase converts sucrose into glucose and fructose, and wild-type yeast cells repress the expression of invertase when extracellular glucose levels are sufficiently high. Consequently, a wild-type cell will cooperate in a population of cheaters and will cheat in a sufficiently large population of cooperators (Gore et al, 2009). A similar coexistence can occur with two strains of yeast competing for a common source of glucose. Following Pfeiffer et al (2001), MacLean and Gudelj (2006) competed a strain that was only able to respire against a strain that could both respire and ferment. Despite the fermenter strain consuming the glucose faster, the cooperating respirer strain was not outcompeted, because the fermenters are punished by the toxic by-products (mainly ethanol with some acetate) they excrete. Although these by-products diffuse away, they can accumulate rapidly when the density of fermenters is high (MacLean and Gudelj, 2006).

 

 

Cooperating with other cooperators: structured populations

A cooperative strategy is more likely to be evolutionarily stable if an organism is often surrounded by related organisms because this spatial structure increases r in Hamilton’s rule. Ackermann et al (2008) considered self-destructive cooperation where some cells decide stochastically on self-destructive behavior for the benefit of others. Such cooperation is in general not evolutionarily stable because cheaters that never act to benefit others can always invade and dominate. The situation changes, however, in a spatially structured environment. For example, pathogenic bacteria infect a population of hosts and each host is spatially isolated. Ackermann et al showed that if the number of cells infecting a host and the probability of cheating are both small, then cooperation is evolutionarily stable because cooperators are likely to find themselves with other cooperators. Cheater cells, if present, will dominate in any one host, but can then be invaded by cooperators in a new round of hosts.
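
For reference, Hamilton's rule, invoked here and again below, says a cooperative trait is favored by selection when

```latex
r\,b > c
```

where r is the genetic relatedness between actor and recipient, b is the fitness benefit to the recipient, and c is the fitness cost to the actor. Limited dispersal raises r, while local competition among relatives lowers the effective b, which is why spatial structure can cut both ways.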

 

A possible example is Salmonella typhimurium, which must displace the intestinal microflora that compete with it. To do so, S. typhimurium triggers an inflammatory response in the human gut by invading gut tissue. Cells that invade gut tissue are, therefore, benefiting other cells and behaving cooperatively. This cooperation is also self-destructive because those S. typhimurium that do invade are usually killed by the innate immune defenses of the intestine (Ackermann et al, 2008).

 

Stochasticity can enhance cooperation in structured populations. If subpopulations of cells grow independently, the global proportion of cooperating cells can increase even though the number of cooperators within each subpopulation decreases (Chuang et al, 2009). This apparently paradoxical situation arises if those subpopulations with a higher proportion of cooperators grow faster than those with a lower proportion because fast-growing subpopulations dominate global averages. Such a global increase in cooperators requires exponential growth and sufficient variance in the composition of the initial subpopulations (Chuang et al, 2009).
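
This is an instance of Simpson's paradox, and the arithmetic is easy to check. The subpopulation sizes and growth factors below are invented purely for illustration:

```python
# Two subpopulations as (cooperators, cheaters). Within each group the
# cooperator fraction falls, but the cooperator-rich group grows 10x while
# the cooperator-poor group grows only 2x, so the pooled fraction rises.
before = [(18, 2), (2, 18)]
after = [(170, 30), (2, 38)]

def coop_frac(groups):
    c = sum(g[0] for g in groups)
    total = sum(g[0] + g[1] for g in groups)
    return c / total

for b, a in zip(before, after):
    print(f"within group: {b[0]/sum(b):.2f} -> {a[0]/sum(a):.2f}")  # 0.90->0.85, 0.10->0.05
print(f"pooled: {coop_frac(before):.2f} -> {coop_frac(after):.2f}")  # 0.50 -> 0.72
```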

 

Although limited dispersal leading to structured populations where cells grow near their relatives can favor cooperation, it need not, because scarce resources can cause related cells to compete and so reduce b in Hamilton’s rule. Siderophores are molecules secreted by many microorganisms to scavenge iron. They form an extracellular, common pool, and mutant cheater cells can evolve that benefit from the secreted siderophores but do not synthesize their own. Griffin et al (2004) showed that cooperative production of siderophores by P. aeruginosa was favored both by higher relatedness among neighboring cells and by competition occurring globally rather than locally, which reduces competition between relatives.

 

 

Conclusion

We believe that ubiquitous stochasticity makes cellular decision-making probabilistic. Here, we have reviewed recent work showing that cells can, in principle, biochemically implement statistical inference for estimating environmental states and that such an interpretation is both qualitatively and quantitatively consistent with measured responses of gene-regulatory and signaling networks. Furthermore, cells can act with anticipation, making regulatory decisions that, although suboptimal for their current environment, are expected to be advantageous after an imminent environmental change. Key to decision-making are the relative costs and benefits of different responses, which allow the optimality of responses to be tested experimentally. Finally, evolutionary theory shows how interactions within populations of organisms can lead to suboptimal behaviors, both for some individuals and for the entire population. Together, these examples demonstrate that human-developed theories of decision-making under uncertainty apply at the cellular level as well. This approach to understanding cellular behavior is in its infancy, but we believe many discoveries are yet to come.

 

The conjecture that cellular networks have evolved to implement statistical and decision-theoretic computations is challenging to verify experimentally. Rather than focusing on characterizing one strategy, it is better to compare different strategies, perhaps through competition experiments, but developing, for example, bacterial strains with rival strategies is difficult. Often we know little of the environmental statistics that held sway during the evolution of an organism and to which its responses are tuned. One means to address this problem is microevolution experiments where, by controlling the environments that a population of cells experiences, we know the sensing and decision-making challenges the cells face (Dekel and Alon, 2005; Tagkopoulos et al, 2008; Mitchell et al, 2009).

 

The genomes of the evolved organisms can be sequenced to determine how the decision-making network has evolved or predictions of decision-making behaviors based on the presumed strategy of the cells can be verified (Dekel and Alon, 2005; Acar et al, 2008).

 

We can investigate the potential strategies implemented by cells by using microfluidic devices to determine which properties of time-varying signals cells measure (Bennett et al, 2008; Hersen et al, 2008; Mettetal et al, 2008).

 

Synthetic biology is another approach by which our understanding of cellular decision-making can be tested by synthesizing and analyzing in vivo a biochemical network with a desired decision-making strategy (Chuang et al, 2009).

 

Our approach to cellular decision-making highlights the importance of determining as best as possible the native environment of an organism and of studying both individual cells and populations. The results of Tagkopoulos et al (2008) and Mitchell et al (2009) show that cells may not respond to the actual signal sensed, but may instead respond in anticipation of some event historically correlated with the signal.

 

We need to investigate responses to signals of a magnitude that is appropriate to the cell’s environment. Such magnitudes are usually lower than those applied in the laboratory, and saturating laboratory stimuli can mask cooperation, where some cells may not respond to a low signal so that others can, or cheating, where cells use the signal rapidly to outcompete others even though a rapid response reduces the benefit gained by all. Similarly, we may need to mimic the spatial structure of the native environment to understand why some cooperative strategies persist.

 

The prevalence of stochasticity in cellular decision-making underscores the importance of studying single cells. In general, random strategies can outperform deterministic strategies if some aspect of the environment is unobserved, even without competition (Bertsekas, 2005). Such exploitation of stochasticity is difficult to detect in populations of cells because stochastic effects are averaged away at the level of the population. Conversely, cells often regulate away stochasticity in the signals they sense. To understand how this regulation occurs biochemically, we need to measure the responses of individual cells to signals that fluctuate as cells have evolved to expect.

 

 

With a few exceptions (Vilar et al, 2003; Tanase-Nicola and ten Wolde, 2008), a shortcoming of present research is that it rarely connects sensing strategies at the molecular level to decision-making strategies at the population level. Most studies on bet-hedging and cooperativity, for example, do not even consider the role of sensing. Such a link is necessary to unite systems biology with evolutionary biology and to fully understand biological design (Loewe, 2009). Cellular sensing strategies, for example, have evolved in environments where interactions with other organisms are important: S. cerevisiae, even though it does not secrete siderophores, still expresses receptors for siderophores synthesized by other microorganisms. Analyzing such inter-organism interactions is a strength of evolutionary biology.

 

Defining the limits of adaptation determined by biochemical networks and finding the functional form of the cost, benefit, and fitness of a decision-making strategy are necessary for an understanding at an evolutionary level, yet are all strengths of systems biology.

 

A number of other areas have received little attention. Both deterministic dynamics and infinite populations are often incorrectly assumed when determining evolutionarily stable strategies. Opportunity costs, where one decision can preclude another by consuming resources, are usually ignored. So too is the ability of cells to influence the state of their environment—often viewed as the main purpose of decision-making in artificial intelligence and control theory. Such abilities can generate systems with no evolutionarily stable strategy. For example, Kerr et al (2002) consider three strains of competing bacteria: a colicinogenic strain that can release a toxin, colicin, into the environment; a resister strain that has mutated the membrane proteins that translocate the toxin; and a sensitive strain. Under certain conditions, the evolutionary dynamics of this system oscillate with time. The resister strain can outgrow the colicinogenic strain because resisters do not carry the plasmid necessary to synthesize the toxin. The resister bacteria are themselves outgrown by the sensitive strain because this strain has fully functioning membrane proteins that, although sensitive to the toxin, also take up nutrients.

 

Finally, the sensitive strain can be outgrown by the colicinogenic strain because it is not resistant to colicin—a ‘rock-paper-scissors’ scenario (Kerr et al, 2002).
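
Kerr et al studied this system in spatially structured populations; purely as a caricature of the cyclic dominance, one can write well-mixed replicator dynamics with a hypothetical rock-paper-scissors payoff matrix and watch the strain frequencies cycle instead of settling on a stable strategy:

```python
# Hypothetical cyclic payoffs: colicinogenic (C) beats sensitive (S),
# S outgrows resistant (R), and R outgrows C.
A = [[0, -1, 1],   # C against C, R, S
     [1, 0, -1],   # R against C, R, S
     [-1, 1, 0]]   # S against C, R, S

x = [0.5, 0.3, 0.2]  # frequencies of C, R, S
dt = 0.01
for step in range(30001):
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    fbar = sum(x[i] * f[i] for i in range(3))
    x = [x[i] * (1 + dt * (f[i] - fbar)) for i in range(3)]  # replicator step
    if step % 6000 == 0:
        print([round(v, 2) for v in x])  # frequencies oscillate, no fixed winner
```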

 

 

We also do not know the fidelity required of sensing. Often a cell can improve fidelity by, for example, increasing the number of receptors, but energy and resources must be expended to synthesize, operate, and maintain more complex signaling networks. Fidelity can also be increased by taking more time to detect and analyze stochastic signals, but in rapidly fluctuating or competitive environments such time may not be available.

 

These trade-offs have been little explored, although the physics of sensing at least determines a lower bound on what is achievable (Berg and Purcell, 1977; Bialek and Setayeshgar, 2005; Tostevin et al, 2007).
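
For orientation, the Berg–Purcell argument (cited above) bounds the fractional error with which a sensor of linear size a can estimate a concentration c of molecules diffusing with coefficient D, given an integration time tau; up to a numerical prefactor of order one,

```latex
\frac{\delta c}{c} \;\sim\; \frac{1}{\sqrt{D\,a\,c\,\tau}}
```

so halving the error requires averaging four times longer, which quantifies the time cost of fidelity just described.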

 

A related line of research has focused on the reliability of a response by investigating its robustness or insensitivity to changes in the values of parameters (Rao et al, 2002; Stelling et al, 2004), particularly for chemotaxis (Barkai and Leibler, 1997; Alon et al, 1999; Yi et al, 2000; Kollmann et al, 2005), developmental networks (von Dassow et al, 2000; Eldar et al, 2002; Albert and Othmer, 2003; Manu et al, 2009), and the immune response (Feinerman et al, 2008).

 

Such changes result from differences in the intracellular environment between cells and in individual cells over time. For example, many parameters in models are often implicit functions of the concentration of another intracellular species, which itself undergoes fluctuations with its own characteristic lifetime (Shahrezaei et al, 2008).

 

 

Not all decision-making need be sophisticated. Cells contain, for example, many intracellular homeostatic mechanisms (Alberts et al, 2007). In other cases, the need to respond quickly may be overriding. For example, we reflexively pull our hand away from a hot stove without careful contemplation of the temperature of the stove or the likely damage our hand will receive. The importance of minimizing injury trumps all other concerns. Cells may have similar ‘reflexes’ for dealing with potentially dangerous situations. A possible example is the response to osmotic shock in S. cerevisiae (Hohmann, 2002).

 

Nevertheless, because stochasticity and incomplete information are so pervasive at the cellular level, we predict that strategies from statistics, decision theory, and evolutionary theory should be widely observed when cellular networks are viewed at the level of information processing and as such should hold much explanatory power.

 

Despite being implemented in different organisms and with different biochemistry, we believe that through functional conservation or convergent evolution, the number of such strategies will be comparatively small. Just as interactions between proteins and genes can be coarse-grained to a level of interacting functional modules (Hartwell et al, 1999), with a limited number of functions performed by those modules, a yet higher level of coarse-graining is to determine how these functional modules come together to create sensing and decision-making strategies, and, higher still, how these strategies are linked to produce adaptable and evolvable organisms.

 

………………………………………………………………………………

 

 

 


The New York Times, October 20, 2011
What Makes Free Will Free?

By GARY GUTTING

 

The Stone is a forum for contemporary philosophers on issues both timely and timeless.

The Stone is featuring occasional posts by Gary Gutting, a professor of philosophy at the University of Notre Dame, that apply critical thinking to information and events that have appeared in the news.

 

Could science prove that we don’t have free will?  An article in Nature reports on recent experiments suggesting that our choices are not free.  “We feel that we choose,” says the neuroscientist John-Dylan Haynes, “but we don’t.”

The experiments show that, prior to the moment of conscious choice, there are correlated brain events that allow scientists to predict, with 60 to 80 percent probability, what the choice will be.  Of course this might mean that the choices are partially determined by the brain events but still ultimately free.   But suppose later experiments predict our choices with 100 percent probability?   How could a choice be free if a scientist could predict it with certainty?

But my wife might be 100 percent certain that, given a choice between chicken livers and strip steak for dinner, I will choose steak.  Does that mean that my choice isn’t free?  Couldn’t she be sure that I will freely choose steak?

Perhaps, though, what’s important about the experiments is not that choices are predictable but that they are caused.   How could a choice that is caused be free?   Wouldn’t that mean that something made it happen?  On the other hand, how could a choice that was not caused be free?   If a choice has no cause at all, it is simply a random event, something that just occurred out of the blue.  Why say that a choice is mine if it doesn’t arise from something occurring in my mind (or brain)? And if a choice isn’t mine, how can we say I made it?

Following out this line of thought, David Hume, for example, argued that a free choice must be caused and that, therefore, freedom and causality must be compatible.  (This view of freedom is called “compatibilism.”) Of course, some ways of causing a choice do exclude freedom.  If I choose to remain indoors because I’m in the grip of a panic attack at the thought of going outside, then my choice isn’t free.  Here we might say that I’m not just caused to choose as I do, I’m compelled.   But perhaps I stay inside just because I want to continue reading an interesting book.  Here my desire to continue reading causes me to stay inside, but it seems wrong to say that it compels me.  So perhaps a choice is free when it’s caused by my desire rather than compelled (that is, caused against my desire).  A choice is free not when it’s uncaused but when it’s caused in the right sort of way.

Philosophers favoring compatibilism have worked out elaborate accounts of what’s involved in a choice’s being caused “in the right sort of way” and therefore free.  Other philosophers have argued that compatibilism is a blind alley, that unless our choices are ultimately uncaused they cannot be free.  These efforts have led to many important insights and distinctions, but there is still lively debate about just what is required for a choice to be free.

Figuring out what makes a choice free is essential for interpreting scientific experiments about freedom, but it does not itself involve making scientific observations.  This is because “What makes a choice free?” is not a question about facts but about meanings.  The fact that I raised my arm can be established by scientific observation—even by the impersonal mechanism of a camera.  But whether I meant to wave in greeting or to threaten an attack is a matter of interpretation that goes beyond what we can scientifically observe.  Similarly, scientific observations can show that a brain event caused a choice.  But knowing whether the choice was free requires knowing the meaning of freedom.  If we know that a free choice must be unpredictable, or uncaused, or caused but not compelled, then an experiment can tell us whether a given choice is free.  But an experiment cannot of itself tell us that a choice is free, any more than a photograph by itself can record a threat.

This is not necessarily because freedom is some mysterious immaterial quality that is beyond the ken of science.  That may be so, but the essential point is that, at present, we do not have a sufficiently firm idea of just what we mean by freedom to know how to design a test for it.  More precisely, we don’t know enough about the relation of free choice to the brain-events that typically precede it.  (By contrast, we do, for example, know enough to judge that a brain tumor that triggers psychotic behavior destroys free choice.)

The progress of brain science can give us specific information about how brain events affect our choices.  This allows our philosophical discussion of the conceptual relation between causality and freedom to focus on the real neurological situation, not just abstract possibilities.  It may well be that philosophers will never arrive at a full understanding of what, in all possible circumstances, it means for a choice to be free.  But, working with brain scientists, they may learn enough to decide whether the choices we make in ordinary circumstances are free.  In this way, science and philosophy together may reach a solution to the problem of free choice that neither alone would be able to achieve.

 

Readers’ Philosophical Comments

Danny P.
Warrensburg, MO
October 19th, 2011
11:04 pm
As far as I can figure, determinists say that a person’s decision is predetermined by the genetic predisposition and how it is affected by the environment over time. Nature + Nurture. Free will, in contrast, says that a person is free to evaluate their experiences and decide whether or not to resist their predispositions, aka they have experiences and predispositions related to their experiences and dispositions. Nature + Nurture.

I do not understand what the disagreement is about. Even if someone could watch my brain and see me make a decision, I don’t see how that means I didn’t make it; conversely, if you can’t see it, how does that somehow mean I did? Whichever side says a person makes decisions based on their genes and the experiences they’ve had, put me down for that.

 

 

martin weiss
mexico, mo
October 19th, 2011
11:04 pm
Severn Darden, in the character of Professor Hugo Von der Vogelviede, once proposed that free will is when “everything in the entire universe says you couldn’t do it, and you do it anyway”. Among options for the answer to your question are human nature, myriad options and inalienable rights. The brain may propose, but the heart may differ. Sartre famously stated there is “no exit” from responsibility for defining oneself. We are thus doomed and blessed to be free. There aren’t enough cops on earth to keep people driving between the lines. Only consent of the governed makes government viable. Enlightened self-interest and pure self-interest are conflicting points of view. Both are well-represented in behavior, but both have consequences. We are free to make the choice, but doomed to live with the consequences. As free entities, we accept or deny convention. Isn’t it ironic that wild geese are more free than human beings? Reminds me of the Dylan title: “Love Minus Zero– No Limit”.

 

 

Josh Brown Kramer
Lincoln, NE
October 19th, 2011
11:04 pm
I always look forward to reading Dr. Gutting’s posts, and this is another good one.

Dr. Gutting says “…’What makes a choice free?’ is not a question about facts but about meanings,” and later “It may well be that philosophers will never arrive at a full understanding of what, in all possible circumstances, it means for a choice to be free.” I wonder what Dr. Gutting believes about the ontological status of “what … it means for a choice to be free”. Maybe I’m wrong, but it seems to me that many philosophers treat “meanings” (i.e. definitions) like they are mathematical theorems: entities that may currently be murky or unclear, but which will eventually be uncovered after long hard work. But definitions are not like that. Definitions are linguistic shortcuts invented by humans for humans. A definition is a tool of convenience used to compress a complicated idea into a single word. Definitions cannot, in and of themselves, be wrong or right. Even mathematicians, who firmly believe in the existence of universal truths that can be discovered by hard work, do not believe that definitions can be uncovered. Mathematicians make definitions when they are useful for describing the facts they uncover.

Maybe I’m taking the language that philosophers use too seriously. Maybe philosophers are just trying to find useful definitions, not “right” definitions. But if that’s the case, why do so many philosophers seem to think, for instance, that the work they are doing to explicate the idea of free will could possibly have an effect on the question of whether we should punish criminals? Surely the answer to the question of punishing criminals is independent of whatever definitions we create to discuss this question. The definition of free will that we create cannot answer moral questions. Rather, we should judge our definition of free will on the basis of how well it facilitates precise and accurate discussion of morality.

 

 

Jim Kay
Taipei, Taiwan
October 19th, 2011
11:05 pm
I suspect you’ve missed the entire point of ‘free choice.’ ‘Free’ should be taken to mean ‘under conscious control’ while ‘not free’ should be taken to mean ‘outside of conscious control.’ (see Benjamin Libet – 1983)

 

 

turbot
Philadelphia, PA
October 19th, 2011
11:05 pm
The fact that there are pre-excitation neural discharges before we commit a voluntary act does not mean that there is no free will.

A baseball hitter chooses to swing or not, and if he swings, aims the bat at where he thinks the ball will be at the proper moment. His pre-motor cortex shows discharges of nerves preparing him to swing before he commits the act.

His act certainly shows free will.

 

 

Richard
Bozeman, MT
October 19th, 2011
11:05 pm
Took a while for Gutting to get to the point that we scarcely know what we mean by free will. My suspicion is that it is a feeling, nothing more. I think consciousness is also in this category. These and many other puzzles will persist for a long time. The root cause is always the same – a devotion to dualistic thinking. Somehow there is a secret homunculus inside us operating the controls. Then we must posit homunculi puppets all the way up. More and more mere facts will emerge supporting the view that there is no neat dividing line between a chooser and the fait accompli of an action.

 

 

Patricia
Pasadena, CA
October 20th, 2011
10:16 am
A lot of scientists don’t act like they have free will to begin with. That’s an autism spectrum issue. There’s a lot of mild autism in science. It can be very hard to get people like that to take responsibility for things they do. They tend to see themselves as victims of external forces, not individual agents who make choices about how they treat others. It doesn’t surprise me that these are the same kind of people who doubt the existence of free will. It’s exactly what I would predict from that crowd.

 

 

Patricia
Pasadena, CA
October 20th, 2011
10:16 am
If you think about evolution, it becomes obvious why we have to have free will. Our brains can’t possibly hold the correct response to every single survival challenge that will occur to our species in the future. We have to be able to come up with new answers to new problems. That’s only possible if we can unbind ourselves from what we’ve learned in the past and we do that by exercising free will.

 

 

Howard
Los Angeles
October 20th, 2011
10:16 am
What in the world is meant by this: “Scientific observations can show that a brain event caused a choice”? Surely all observation can show is that a brain event preceded what we call a choice. You need some sort of theory in order to infer causality. The sun shines on a stone; afterwards the stone gets warm. Those are observations. The sun causes the stone to get warm; that’s based on theories about the energy from the sun, the nature of heat, the behavior of material objects when they absorb energy.
Better figure out what causality is before you theorize about free choice and its lack.

 

 

SocraticGadfly
USA
October 20th, 2011
10:16 am
The essay’s not bad, but could have definitely been better.

Per the likes of Dan Dennett, why didn’t he address subselves?

Per the likes of Daniel Wegner, why didn’t he note that the lack of a Cartesian meaner would also mean the lack of a Cartesian free willer?

In other words, the intentionality that is the baseline for free will is itself in some way an illusion.

 

 

Frank De Canio
Union City, NJ
October 20th, 2011
10:17 am
The opposite of determinism is not free will, but random behavior. “How could a choice that is caused be free?” It’s free because it’s shaped by the preponderance of contingencies that inform the choice. Of course a choice in the grip of a panic attack is not “free” (arbitrary). It’s informed by prior contingencies! It’s free in the best sense of the word. And the freest will is the will that has a multiplicity of determinants of behavior to draw from. As for “compelled to stay indoors from fear” and doing so from a desire to read – this opposition is simply a function of semantics. Both choices are free and both are informed – by fear in the one case and desire in the other. The notion that free will is behavior that is “caused in the right sort of way” leads me to ask, “in relation to what or whom?” Is not the fear that causes one to stay indoors as much a part of the individual as the desire to read?

If free will does not exist in the romantic sense, what then is free will as we would like to understand it? Well, for example, take an addict who is blissful in his addiction. He “freely” chooses his addiction to the extent that he has no contrary determinants of behavior. Instill determinants of pride, a model of sobriety with all of the benefits that come with that sobriety: determinants that are not sufficient to offset the pull in the opposite direction, but sufficient to fight against that opposite direction – and we have the nucleus of “free will”. “Free will”, thus, would be the ability to choose an alternate behavior in the light of alternate models of being which the individual can then strive to attain in his behavior and choices. Free will is the ability to make the more difficult choices, to have a wider range of contingencies, which include a more ego-syntonic model of behavior. I guess that is the best we can hope for, short of reducing free will to random behavior.

 

 

Stephen
Cambridge, MA
October 20th, 2011
10:17 am
Gutting writes of the possibility that “choices are partially determined by the brain events,” and that “science can give us specific information about how brain events affect our choices.” But ‘brain events’ don’t determine or affect our choices, brain events are our choices. We needn’t worry about whether we – that is, our brains – have free will. Each of our brains is unique, and each of our brains – that is, ourselves – has the capacity to change itself through external experiences or internal housekeeping. Life is no less a wonderful adventure when you’re a brain bouncing along on your own unique course (unpredictable to you) than it would be were you some mystical free-will-endowed non-physical entity.

 

 

Gemli
Boston
October 20th, 2011
10:18 am
The apparently simple act of raising the hand is mediated by billions of neurons, each one evaluating multiple inputs and generating outputs that feed into other neurons in complex feedback loops. The sheer complexity of the system may make analysis impossible, but the ultimate result is to generate a behavior that the organism deems to be in its best interest, all things considered. Even animals with very simple nervous systems of just a few neurons appear to explore their world purposefully, evaluating choices and acting accordingly, and appearing not to fret about the metaphysical consequences of their actions.

The fascination with Free Will seems to arise when it’s viewed in a mystical sense, the implication being that there are invisible, unfathomable forces that guide our hand, or that we choose to resist. The brain is complex enough that mystical forces are not required to explain its operation. I think this is another example of our fledgling intellect suddenly becoming aware of enormously complex processes that took hundreds of millions of years to evolve, and becoming befuddled that it can’t immediately understand how it all works.

Dr. Gutting’s last sentence reveals that this column is taking another swing at trying to make philosophy and science equal partners in discovery, a persistent theme running through The Stone of late. I wonder if music, poetry, and rhythmic gymnastics will also want the same arrangement.

 

 

Brooks
New York, NY
October 20th, 2011
10:18 am
Here’s what Gutting, commenters above, and many others are missing:

Even if we assume (for the sake of argument or by actually accepting the premise) that 100% predictability is NOT inconsistent with free will, the key fact is that at no point in time is the individual able to alter the course leading to the pre-conditions that determine the decision or action in question.

Let’s say anyone who knows me well (and I myself) could predict with virtual certainty that, if I were handed a gun right now and given the option to shoot someone on the street, I would choose NOT to do so. That’s because of “the kind of person I am” (and the fact that I’m not in any altered/abnormal mental state), which some would argue means it’s still part of free will.

But “the kind of person I am” is not something over which I’ve ever had any control, and neither are any of the physical conditions (inside and outside my brain) immediately preceding and causing (determining) my decision. If you disagree, tell me at what point in time I could have asserted any control. We have the physical conditions existing immediately prior to (and causing) the decision, and those conditions were caused by the conditions immediately preceding that state, and so on all the way back to the twinkle in my father’s eye.

 

 

Raj
San Francisco, CA
October 20th, 2011
10:18 am
I’m going to call Occam’s razor on this. When we watch billiard balls bouncing around when hit, we don’t say they have free will. Instead, we say they’re following the laws of physics. When millions of electric pulses fire on a computer processor & generate a wide variety of results, we don’t say that our computers have free will. Instead, we say that the transistors are simply following the laws of science. So why is it that when we watch the human brain operate according to what physics & chemistry predict, compatibilists somehow ascribe free will to the process? Why the double standard?

In order for free will to exist, we would need to demonstrate that human consciousness & decision-making is something that can’t be predicted or explained by science. If you believe in this, that’s fine. But we need to realize that free will is a purely spiritual & religious concept. That it is an article of faith. There is absolutely no data or logical reason to believe that free will exists.

 

 

Bruce Crossan
Lebanon, OR
October 20th, 2011
10:18 am
So if a philosopher fails to argue his/her way out of a wet paper bag, then it must be a case of Thomistic ignorantia cultuvata. Oh wait, Mr. Thom wasn’t big on free will. Your examples, as usual, tank. Someone who suffers from acute agoraphobia (the open spaces kind), if not forced to choose, will stay inside, of their own free will; panic attacks being worse than the uninitiated can fathom. A better question would be, is making the agoraphobic person go out to the pharmacy to get his Lorazepam prescription filled, a matter of free will or coercion? But why bring the panic stricken into the conversation, in the first place? Would it not be sufficient to just lock an ordinary person in a room and say that they don’t go outside because they left their keys in their other pants?

Doesn’t someone who teaches at Notre Dame realize that the only interesting questions about free will involve God and whether or not you try to blame God for all of the evil things that happen, in the world? Or would a serious discussion of that get you fired from Notre Dame?

 

 

An Ordinary American
Prague
October 20th, 2011
10:46 am
I think a fundamental mistake in conventional thinking about “free will” is that we focus on the word “free” rather than the word “will”. There seems to be an assumption that “will” is a natural attribute like “vision” or “smell” rather than an acquired one like “personality” or “playing the piano”. But “will” is not an on/off attribute, in my view. Rather, it is something we can develop, a kind of potentiality. Some people develop it more than others. And it is grounded in something we call “intention”. We “intend” to do something (call it X), and if Nature+Nurture+Learning add up to permit it, we do X; if those factors do not add up to some necessary threshold, then we do not accomplish X. We may not control Nature, and Nurture often depends on what others have provided to us, but Learning is something we can provide ourselves, meaning there is some degree of personal agency (choice) involved. But the fuzzy issue of “free” may be more of a distraction than a necessity in discussing the matter.

 

 

Steve Agnew
Richland, WA
October 20th, 2011
10:46 am
Life is a series of actions and each action involves choice. Actions or choices in life are affected by previous choices and so the journey of life involves sequences of choice or will that are framed by a set of primal choices. Free will is then reduced to a set of three primal choices for: 1) Origin; 2) Purpose; 3) Demise. Each of us freely chooses to believe in an origin for the universe, in a purpose for our existence, and in an end to the universe. All other “choice” follows from these primal choices and those choices define our primal free will.

 

 

mt.sinai ny
October 20th, 2011
10:46 am
The older I get the less I believe in free will. There are just too many accidental, random events that shape us and determine the paths of our lives. But I do remain skeptical of any “scientific” solution to the stated conflict. This approach can only lead to a philosophical dead end, as we can see from other readers’ comments. It may be that there is no way to arrive at a definitive resolution to this problem; after all, it is impossible for us to step outside of ourselves and view the situation with complete objectivity.
Am I “determined” to believe in free will, am I “free” to choose determinism? A hard nut to crack, logically.

 

 

dresden, germany
October 20th, 2011
10:46 am
As far as I know, all the neurological experiments which, to date, challenge the idea of free will do so by showing that a subject’s actions can be predicted a fraction of a second (as much as three quarters of a second) before the action occurs.
Of course, first, this DOES NOT determine when the consciousness of the impending action took place. The subject may be conscious of the impending action some fraction of a second before the action occurs – somewhere between the neurologically readable predictive sign and the action itself.
Second, it is reasonable to surmise that it takes some time for any stimulus, even one generated by the brain itself, to reach consciousness – that is: consciousness as knowable by the SELF: there may be earlier forms of “consciousness”, before the self enters the picture.
The main point however is that we are talking about fractions of a second. Would it not be fruitful to speculate along the lines that free will is simply the ability to forestall action – to stubbornly (in the face of the pressure to choose) forestall it – significantly longer than this fraction of a second? When the proper choice is not immediately clear I resist further action toward the choice, I stop and don’t budge, and I give myself some amount of time – not some fraction of a second, but seconds, minutes… and wait for some desire – some pressure – to act or decide one way or another (even if the pressure is simply that I have no more time left and must now act or choose). Depending on the difficulty of the decision, I erect a threshold, below which I will simply not act.
What I am asking is, is it not possible to consider that this waiting, meaningless in itself – the same stubborn inaction, regardless of the specifics of the choice to be made – is not a kind of free will that I play against (before) the unfreedom of either a future coercion, or of the choice that “I” will ultimately make (by having waited for a desire to act one way or the other)?

 

 

Michael Jutras
Nanjing, China
October 20th, 2011
10:47 am
Prof. Gutting wrote “So perhaps a choice is free when it’s caused by my desire rather than compelled (that is, caused against my desire).”

I would add that it is the fact that we can ‘conceive of” and thus contain our own desires and inclinations which allows us to understand them and ‘own’ them as aspects of our being. We can and do modify and vary our desire-based goals according to value-judgments.

But it’s this sense of conceptual containment (bounded-domains of values) which, being exercised through the faculty of volition, bestows upon us the sense of freedom, the feeling that we willfully propel our own ends forward.

I believe any reasonable theory of free will must be part-and-parcel of a comprehensive and ‘pragmatic’ theory of moral-freedom. Metaphysical free will is a philosophical dead-end, in my opinion.

But as a practical concept, freedom, or more precisely the notion of free will, gives meaning and coherence to our sense of moral responsibility. Indeed, freedom and morality mutually imply each other.

 

 

dc lambert
nj
October 20th, 2011
10:47 am
But what exactly is free will? I think that’s the core question, not pseudo studies that narrow choices down to a highly artificial two, and then pretend this is significant data.

Obviously our decisions are constrained by any number of factors: environment (if it’s snowing I’m unlikely to go out in a T shirt), societal expectations (any number of behaviors are frowned on or expected or illegal or violent), our own personality and proclivities (I would NEVER jump from a parachute), and so on.

Is free will the ability to choose right or left when we feel like it? If so, then it is patently obvious we have free will. But as readers have already pointed out, the elusive thing is our conscious mind, what the thing is that is making the free choice. In Asimov’s famous Foundation series he posits that if you have a high enough number of people, you can predict with probabilities what that group will do. If free will is less obvious in mass movements, what then of our own individual free will? Can each of us have free will when added together we have far less free will?

And is the complete illusion of free will the same as free will, just as the illusion of consciousness is, for all intents and purposes, the same as actual consciousness?

I think this is a good question, but we need to go deeper. What is free will–who or what is actually making the decisions, in what parts of the brain, and if “I” make an apparently rational free decision for one reason but ‘really’ it was driven by a tiny portion of my irrational brain for an entirely different, predictable reason, and “I” remained unaware–does it matter? If so, what can we really do about it, given the constructs of our minds?

 

 

Victor Edwards
Holland, Mich.
October 20th, 2011
10:47 am
An uncaused choice is simply insanity. Below and prior to choice is one’s heart, for “out of the heart spring the issues of life.” Of course, that comes from Proverbs 4:23, from the One who created man as a free — and thus responsible — agent.

But sadly, the heart of man was ruined due to the fall of Adam, and now it can only issue forth evil. That, dear folks, is the bottom-line issue for every person. And that is also why one must be “born from above” to restore the ruined heart to life and goodness.

 

 

dennis speer
santa cruz, ca
October 20th, 2011
10:47 am
As there is no way to disprove my contention that we are merely characters in a dream being dreamed by a turtle sleeping on the bank of a river on a planet in a distant galaxy I am certain we will not be able to determine if we are acting out of free will.

And what does it matter if we are or are not making free choices. The way we walk and hold our heads and move and stand is all determined by nature and nurture so they are not free. Every physical move you make is determined by nature and nurture leading up to you being in that place at that time moving that way.

BTW: I got 17 on the head of my pin. How many did you get Gutting?

 

……………………………………………………………………………..

 

 

http://micro.magnet.fsu.edu/primer/java/scienceopticsu/powersof10/index.html

Click on the hot link, above, for a phenomenal slide tutorial that will put many philosophical arguments into perspective.

 

 

 

Secret Worlds: The Universe Within
View the Milky Way at 10 million light years from the Earth. Then move through space towards the Earth in successive orders of magnitude until you reach a tall oak tree just outside the buildings of the National High Magnetic Field Laboratory in Tallahassee, Florida. After that, begin to move from the actual size of a leaf into a microscopic world that reveals leaf cell walls, the cell nucleus, chromatin, DNA and finally, into the subatomic universe of electrons and protons.

 

 

 


 

 

Notice how each picture is actually an image of something that is 10 times bigger or smaller than the one preceding or following it. The number that appears on the lower right just below each image is the size of the object in the picture. On the lower left is the same number written in powers of ten, or exponential notation. Exponential notation is a convenient way for scientists to write very large or very small numbers. For example, compare the size of the Earth to the size of a plant cell, which is a trillion times smaller:

Earth = 12.76 × 10^6 = 12,760,000 meters wide
(12.76 million meters)

Plant Cell = 12.76 × 10^-6 = 0.00001276 meters wide
(12.76 millionths of a meter)
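
For readers who want to check such conversions, scientific-notation formatting in most programming languages performs the same arithmetic; a trivial example (values copied from above):

```python
earth_m = 12_760_000.0   # width of the Earth in meters
cell_m = 0.00001276      # width of the plant cell in meters

print(f"{earth_m:.2e}")          # 1.28e+07, i.e., 1.28 x 10^7
print(f"{cell_m:.2e}")           # 1.28e-05, i.e., 1.28 x 10^-5
print(round(earth_m / cell_m))   # 1000000000000: a trillion times smaller
```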

Scientists examine things in particular ways using a combination of very sophisticated equipment, everyday instruments, and many unlikely tools. Some phenomena that scientists want to observe are so tiny that they need a magnifying glass, or even a microscope. Other things are so far away that a powerful telescope must be used in order to see them. It is important to understand and be able to compare the size of things we are studying. To learn more about the relative sizes of things, visit our Perspectives: Powers of 10 activity site.

Note: – The sequence of images in this tutorial has been optimized for maximum visual impact. Due to the fact that discrete exponential increments are not always the most convenient interval for illustrating this concept, our artists and programmers have made dimensional approximations in some cases. As a consequence, the relative size and positioning of several objects in the tutorial reflect this fact.

The original concept underlying this tutorial was advanced by Dutch engineer and educator Kees Boeke, who first utilized powers to aid in visualization of large numbers in a 1957 publication entitled “Cosmic View, the Universe in 40 Jumps”. Several years later, in 1968, architect Charles Eames, along with his wife Ray, directed a “rough sketch” film of the same concept and finally completed the work (entitled the “Powers of Ten”) with the assistance of Philip Morrison in 1977. Other notable contributors to this effort include Philip’s wife Phylis, who has assisted in translation of the concept into several beautifully illustrated books that are still available through booksellers.


Contributing Authors

Matthew J. Parry-Hill, Christopher A. Burdett and Michael W. Davidson – National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.

 

Cells Crawling Along Crystals

 

 

 

Johns Hopkins School of Medicine  —  It’s a wonder cells make it through the day with the barrage of cues and messages they receive and transmit to direct the most basic and necessary functions of life. Such cell communication, or signal transduction, was long thought to be an “automatic” cascade of biochemical events. Now, however, a study reported in Nature by Johns Hopkins and Harvard scientists has found that even before a message makes it through the outer cell membrane to the inner nucleus, the cell is busy activating a molecular switch that guides how the message will be delivered in the first place.

“Our results add a layer of complexity to understanding how messages are communicated by cells,” says Mark Donowitz, M.D., professor of medicine at Hopkins and a co-author of the study. “But by the same token, the new layer offers an exciting new aspect of cellular circuitry that could lead to potential therapies for many serious disorders,” he says.

“This extra step in cell signaling actually lets the cell figure out how it’s going to communicate what it needs to,” says Donowitz. “Without this switchboard system, the cell would go crazy and overload because every stimulus that passed by would be forwarded to its interior.”

The two most common cellular signals are calcium and cyclic adenosine monophosphate, or cAMP. They are sometimes known as “second messengers” because they intercept messages from receptors on the cell surface and relay them to proteins within the cell, altering their shape and thus their behavior and that of the cell at large.

Donowitz and colleagues showed that a cell decides which signal to use, calcium or cAMP, based on the presence or absence of a specific protein called sodium/hydrogen exchanger regulatory factor 2, or NHERF2. Specifically, their experiments tested how the cell-surface receptor for parathyroid hormone, and for parathyroid hormone-related protein (also a hormone), signals the interior of the cell to perform specific functions. They found that signaling involves more than just the receptor and the proteins that latch onto it: it requires an additional class of proteins (of which NHERF2 is a member), called PDZ proteins, that determines whether to send the signal via calcium or via cAMP. If NHERF2 is present along with the parathyroid hormone receptor, the signal is sent via calcium. If there is no NHERF2, cAMP delivers the message.
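
In engineering terms, the finding amounts to a conditional switch on the presence of a scaffold protein. The sketch below is only a schematic of the reported logic, with invented function and argument names; it is not a model of the underlying biochemistry:

```python
def second_messenger(receptor_bound: bool, nherf2_present: bool):
    """Schematic of the reported switch: with the parathyroid hormone
    receptor engaged, the PDZ protein NHERF2 routes the signal through
    calcium; in its absence, the signal is relayed by cAMP."""
    if not receptor_bound:
        return None  # no ligand bound, no message to relay
    return "calcium" if nherf2_present else "cAMP"

print(second_messenger(True, True))   # calcium
print(second_messenger(True, False))  # cAMP
```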

The cell’s decision to use calcium or cAMP is important because each generates different responses from its target proteins, says Donowitz. For example, a signal relayed by cAMP might induce a kidney cell to release water or a bone cell to break bone down into its constituent minerals. Likewise, signals relayed by calcium could lead to the aggregation of blood platelets, which cause clots, or to the release of histamine, a major component of the allergic response.

“These results show that at the very earliest stage of cell signaling, called receptor binding, there is a switch that determines what kind of signal will be used,” says Donowitz. “To understand cell signaling, you really have to know the whole system.”

The receptor for parathyroid hormone, for example, is crucial for signaling and proper functioning of the parathyroid glands, intestinal cells and kidney cells. Parathyroid hormone and parathyroid hormone-related protein are vital to the normal functioning of the body. Disruptions in the regulation or amount of these substances can lead to serious ailments, including kidney stones, convulsions, decalcification of bones or “rubber bones,” and can interfere with the normal growth of bones and cartilage. Common diseases that are caused in part by faulty signaling in cells include cancer, diabetes and disorders of the immune system.

Other authors of the study are C. Chris Yun of Hopkins, Matthew J. Mahon (lead author) and Gino V. Segre (senior author), both of Massachusetts General Hospital and Harvard Medical School.

 

 

 

Major elements in chemical synaptic transmission.