
A Modest Proposal for a New Ecology Journal

I read the occasional ecology paper and ask myself how this particular paper ever got published when it is full of elementary mistakes and shows no understanding of the literature. But alas we can rarely do anything about this as individuals. If you object to what a particular paper has concluded because of its methods or analysis, it is usually impossible to submit a critique that the relevant journal will publish. After all, which editor would like to admit that he or she let a hopeless paper through the publication screen? There are some exceptions to this rule, and I list two examples below in the papers by Barraquand (2014) and Clarke (2014). But if you search the Web of Science you will find few such critiques of published ecology papers.

One solution jumped to mind for this dilemma: start a new ecology journal, perhaps entitled Misleading Ecology Papers: Critical Commentary Unfurled. Papers submitted to this new journal would be restricted to a total of 5 pages and 10 references, and all polemics and personal attacks would be forbidden. The key for submissions would be to state a critique succinctly, and suggest a better way to construct the experiment or study, a new method of analysis that is more rigorous, or key papers that were missed because they were published before 2000. These rules would potentially leave a large gap for some very poor papers to avoid criticism, papers that would require a critique longer than the original paper. Perhaps one very long critique could be distinguished as a Review of the Year paper. Alternatively, some long critiques could be published in book form (Peters 1991), and not require this new journal. The Editor of the journal would require all critiques to be signed by the authors, but would permit, in exceptional circumstances, authors to remain anonymous to prevent job losses or, in more extreme cases, execution by the Mafia. Critiques of earlier critiques would be permitted in the new journal, but an infinite regress would be discouraged. Book reviews could also be the subject of a critique; critical book reviews are another aspect of ecological science largely missing from the current publication blitz. This new journal would of course be electronic, so there would be no page charges, and all articles would be open access. All the major bibliographic databases like the Web of Science would be encouraged to catalog the publications, and a DOI would be assigned to each paper from CrossRef.

If this new journal became highly successful, it would no doubt be purchased by Wiley-Blackwell or Springer for several million dollars, and if this occurred, the profits would accrue proportionally to all the authors who had published papers to make this journal popular. The sale of course would be contingent on the purchaser guaranteeing not to cancel the entire journal to prevent any criticism of their own published papers.

At the moment, criticism of ecological science does not appear until several years after a poor paper is published, and by that time the Donald Rumsfeld Effect has occurred and the conclusions of the poor work have acquired the status of truth. For one example, most of the papers critiqued by Clarke (2014) were more than 10 years old. By making the feedback loop much tighter, certainly within one year of a poor paper appearing, budding ecologists could be intercepted before being led off course.

This journal would not be popular with everyone. Older ecologists often strive mightily to prevent any criticism of their prior conclusions, and some young ecologists make their careers by pointing out how misleading some of the papers of the older generation are. This new journal would assist in creating a more egalitarian ecological world by producing humility in older ecologists and a greater sense of achievement in young ecologists who must build up their status in the science. Finally, the new journal would be a focal point for graduate seminars in ecology by bringing together and identifying the worst of the current crop of poor papers in ecology. Progress would be achieved.

 

Barraquand, F. 2014. Functional responses and predator–prey models: a critique of ratio dependence. Theoretical Ecology 7(1): 3-20. doi: 10.1007/s12080-013-0201-9.

Clarke, P.J. 2014. Seeking global generality: a critique for mangrove modellers. Marine and Freshwater Research 65(10): 930-933. doi: 10.1071/MF13326.

Peters, R.H. 1991. A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171

 

Climate Change and Ecological Science

One dominant paradigm of the ecological literature at the present time is what I would like to call the Climate Change Paradigm. Stated in its clearest form, it holds that all temporal ecological changes now observed are explicable by climate change. The test of this hypothesis is typically a correlation between some event (a population decline, an invasion of a new species into a community, or the outbreak of a pest species) and some measure of climate. Given clever statistics and sufficient searching of many climatic measurements with and without time lags, these correlations are often sanctified by p < 0.05. Should we consider this progress in ecological understanding?

An early confusion in relating climate fluctuations to population changes began with labelling climate a density-independent factor within the density-dependent model of population dynamics. Fortunately, this massive confusion was sorted out by Enright (1976), but alas I still see this error repeated in recent papers about population changes. I think that much of the early confusion about climatic impacts on populations was due to classifying all climatic impacts as density-independent factors.

One’s first response might be that many of the changes we see in populations and communities are indeed related to climate change. But the key here is to validate this conclusion, and to do this we need to identify the mechanisms by which climate change is acting on our particular species or species group. The search for these mechanisms is much more difficult than the demonstration of a correlation. To become more convincing one might predict that the observed correlation will continue for the next 5 (10, 20?) years and then gather the data to validate the correlation. Many of these published correlations are so weak as to preclude any possibility of validation within the lifetime of a research scientist. So the gold standard must be the deciphering of the mechanisms involved.

And a major concern is that many of the validations of the climate change paradigm on short time scales are likely to be spurious correlations. Those who need a good laugh over the issue of spurious correlation should look at Vigen (2015), a book which illustrates all too well the fun of looking for silly correlations. Climate is a very complex variable, and a nearly infinite number of measurements can be concocted from temperature (mean, minimum, maximum), rainfall, snowfall, or wind, analyzed over any number of time periods throughout the year. We are always warned about data dredging, but it is often difficult to know exactly what the authors of any particular paper have done. The most extreme examples are possible to spot, and my favorite is this quotation from a paper a few years ago:

“A total of 864 correlations in 72 calendar weather periods were examined; 71 (eight percent) were significant at the p < 0.05 level. … There were 12 negative correlations, p < 0.05, between the number of days with (precipitation) and (a demographic measure). A total of 45 positive correlations, p < 0.05, between temperatures and (the same demographic measure) were disclosed…”
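The arithmetic behind this quotation is easy to reproduce: screening hundreds of weather-demography pairs yields roughly 5% “significant” correlations even when nothing whatever is related. Here is a minimal simulation of that screening process; the 30-year series length and the critical r value are my assumptions for illustration, not figures from the quoted paper.

```python
import random

random.seed(1)
n_tests, n_years = 864, 30
# critical |r| for p < 0.05 (two-tailed) with n = 30, i.e. 28 df, is about 0.361
R_CRIT = 0.361

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# correlate a random "weather" series with an unrelated "demographic" series,
# 864 times, and count how many pass the usual significance screen
false_positives = sum(
    abs(pearson_r([random.gauss(0, 1) for _ in range(n_years)],
                  [random.gauss(0, 1) for _ in range(n_years)])) > R_CRIT
    for _ in range(n_tests)
)
print(f"{false_positives} of {n_tests} correlations 'significant' "
      f"({100 * false_positives / n_tests:.1f}%) from pure noise")
```

The count comes out near the 8% reported in the quotation, which is exactly what data dredging alone would deliver.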

The climate change paradigm is well established in biogeography, and the major shifts in vegetation that have occurred in geological time are well correlated with climatic changes. But it is a large leap of faith to scale this well-established framework down to a local spatial scale and a short-term time scale. There is no question that local, short-term climate changes can explain many changes in populations and communities, but any analysis of these kinds of effects must consider alternative hypotheses and mechanisms of change. Berteaux et al. (2006) pointed out the differences between forecasting and prediction in climate models. We desire predictive models if we are to improve ecological understanding, and Berteaux et al. (2006) suggested that predictive models are successful if they follow three rules:

(1) Initial conditions of the system are well described (inherent noise is small);

(2) No important variable is excluded from the model (boundary conditions are defined adequately);

(3) Variables used to build the model are related to each other in the proper way (aggregation/representation is adequate).

Like most rules for models, whether these conditions are met is rarely known when the model is published, and we need subsequent data from the real world to see if the predictions are correct.

I am much less convinced that forecasting models are useful in climate research. Forecasting models describe an ecological situation based on correlations among the available measurements, with no clear mechanistic model of the ecological interactions involved. My concern was highlighted in a paper by Myers (1998), who investigated the success of published juvenile recruitment-environmental factor (typically temperature) correlations for fish populations and found that very few forecasting models were reliable when tested against additional data obtained after publication. It would be useful for someone to carry out a similar analysis for bird and mammal population models.
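Myers’ finding is easy to mimic in silico: if one screens many candidate environmental series and keeps the best correlate, that correlation largely evaporates on data collected after “publication”. A toy sketch with invented numbers, not real fishery data:

```python
import random

random.seed(7)

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

years = 40
# recruitment plus 20 candidate environmental series (temperature at various
# stations and lags, say); none is actually related to recruitment
recruitment = [random.gauss(0, 1) for _ in range(years)]
candidates = [[random.gauss(0, 1) for _ in range(years)] for _ in range(20)]

train, test = slice(0, 20), slice(20, 40)
# select the predictor with the strongest correlation in the first 20 years,
# as a forecasting study would, then check it on the next 20 years
best = max(candidates, key=lambda e: abs(corr(e[train], recruitment[train])))
r_train = abs(corr(best[train], recruitment[train]))
r_test = abs(corr(best[test], recruitment[test]))
print(f"selected predictor: |r| = {r_train:.2f} in training years, "
      f"|r| = {r_test:.2f} afterwards")
```

The correlation that looked publishable in the first half of the series collapses toward zero in the second half, which is the regression-to-the-mean trap behind many environment-recruitment forecasts.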

Small mammals show some promise for predictive models in some ecosystems. The analysis by Kausrud et al. (2008) illustrates a good approach to incorporating climate into predictive explanations of population change in Norwegian lemmings that involve interactions between climate and predation. The best approach in developing these kinds of explanations and formulating them into models is to determine how the model performs when additional data are obtained in the years to follow publication.

The bottom line is to avoid spurious climatic correlations by describing and evaluating mechanistic models that are based on observable biological factors. And then make predictions that can be tested in a realistic time frame. If we cannot do this, we risk publishing fairy tales rather than science.

Berteaux, D., et al. (2006) Constraints to projecting the effects of climate change on mammals. Climate Research, 32, 151-158. doi: 10.3354/cr032151

Enright, J. T. (1976) Climate and population regulation: the biogeographer’s dilemma. Oecologia, 24, 295-310.

Kausrud, K. L., et al. (2008) Linking climate change to lemming cycles. Nature, 456, 93-97. doi: 10.1038/nature07442

Myers, R. A. (1998) When do environment-recruitment correlations work? Reviews in Fish Biology and Fisheries, 8, 285-305. doi: 10.1023/A:1008828730759

Vigen, T. (2015) Spurious Correlations, Hyperion, New York City. ISBN: 978-031-633-9438

Fishery Models and Ecological Understanding

Anyone interested in population dynamics, fisheries management, or ecological understanding in general will be interested to read the exchanges in Science, 23 April 2016 on the problem of understanding stock changes in the cod (Gadus morhua) fishery in the Gulf of Maine. I think this exchange is important to read because it illustrates two general problems with ecological science – how to understand ecological changes with incomplete data, and how to extrapolate what is happening into taking some management action.

What we have here are sets of experts promoting a management view and others contradicting the suggested view. There is no question that ecologists have made much progress in understanding both marine and freshwater fisheries. Probably the total number of person-years of research on marine fishes like the northern cod would dwarf that on all other ecological studies combined. Yet we are still arguing about fundamental processes in major marine fisheries. You will remember that the northern cod in particular supported one of the largest fisheries in the world when it began to be exploited in the 16th century, and by the 1990s it had been driven to about 1% of its prior abundance, almost to the status of a threatened species.

Pershing et al. (2015) suggested, based on data on a rise in sea surface temperature in the Gulf of Maine, that cod mortality had increased with temperature and this was causing the fishery management model to overestimate the allowable catch. Palmer et al. (2016) and Swain et al. (2016) disputed their conclusions, and Pershing et al. (2016) responded. The details are in these papers and I do not pretend to know whose views are closest to be correct.

But I’m interested in two facts. First, Science clearly thought this controversy was important and worth publishing, even in the face of a 99% rejection rate for all submissions to that journal. Second, it illustrates that ecology faces a lot of questions when it makes conclusions that natural resource managers should act upon. Perhaps it is akin to medicine in being controversial, even though both are supposed to be evidence based. It is hard to imagine physical scientists or engineers arguing so publicly over the design of a bridge or a hydroelectric dam. Why is it that ecologists so often spend time arguing with one another over this or that theory or research finding? If we admit that our conclusions about the world’s ecosystems are so meager and uncertain, does that mean we have a very long way to go before we can claim to be a hard science? We would hope not, but what is the evidence?

One problem so well illustrated in these papers is the difficulty of measuring the parameters of change in marine fish populations and then tying these estimates to models that are predictive of the changes required for management actions. The combination of imprecise data and models that are overly precise in their assumptions could be deadly in the ecological management of natural resources.

Palmer, M.C., Deroba, J.J., Legault, C.M., and Brooks, E.N. 2016. Comment on “Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery”. Science 352(6284): 423-423. doi:10.1126/science.aad9674.

Pershing, A.J., Alexander, M.A., Hernandez, C.M., Kerr, L.A., Le Bris, A., Mills, K.E., Nye, J.A., Record, N.R., Scannell, H.A., Scott, J.D., Sherwood, G.D., and Thomas, A.C. 2016. Response to Comments on “Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery”. Science 352(6284): 423-423. doi:10.1126/science.aae0463.

Pershing, A.J., Alexander, M.A., Hernandez, C.M., Kerr, L.A., Le Bris, A., Mills, K.E., Nye, J.A., Record, N.R., Scannell, H.A., Scott, J.D., Sherwood, G.D., and Thomas, A.C. 2015. Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery. Science 350(6262): 809-812. doi:10.1126/science.aac9819.

Swain, D.P., Benoît, H.P., Cox, S.P., and Cadigan, N.G. 2016. Comment on “Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery”. Science 352(6284): 423-423. doi:10.1126/science.aad9346.

On Statistical Progress in Ecology

There is a general belief that science progresses over time, and given that the number of scientists is increasing, this is a reasonable first approximation. The use of statistics in ecology has been a history of ever-increasing improvement in methods of analysis, accompanied by bandwagons. It is one of these bandwagons that I want to discuss here by raising a general question:

Has the introduction of new methods of analysis in biological statistics led to advances in ecological understanding?

This is a very general question and could be discussed at many levels, but I want to concentrate on the top levels of statistical inference: old-style frequentist statistics, Bayesian methods, and information-theoretic methods. I am prompted to ask this question by my reviewing of many papers submitted to ecological journals in which the data are so buried by the statistical analysis that the reader is left confused about whether any progress has been made. Being amazed by the methodology is not the same as being impressed by the advance in ecological understanding.

Old-style frequentist statistics (see the Sokal and Rohlf textbook) has been criticized for concentrating on null hypothesis testing when everyone knows the null hypothesis is not correct. This has led to refinements in methods of inference that rely on effect size and predictive power, which are now standard in new statistical texts. Information-theoretic methods came in to fill the gap by making the data primary (rather than the null hypothesis) and asking which of several hypotheses best fits the data (Anderson et al. 2000). The key here was to recognize that one should have prior expectations, or several alternative hypotheses, in any investigation, as recommended in 1897 by Chamberlin. Bayesian analysis furthered the discussion not only by allowing several alternative hypotheses but also by the ability to use prior information in the analysis (McCarthy and Masters 2005). Implicit in both information-theoretic and Bayesian analysis is the recognition that all of the alternative hypotheses might be incorrect, and that the hypothesis selected as ‘best’ might have very low predictive power.
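As a concrete illustration of the information-theoretic approach, here is a sketch that weighs two Chamberlin-style alternative hypotheses against a toy data set: growth rate is constant (H0) versus growth rate declining linearly with density (H1). The data and the least-squares form of AIC used here are illustrative only, not drawn from any of the cited papers.

```python
import math

# Toy data (invented for illustration): per-capita growth rate vs density
density = [2, 4, 6, 8, 10, 12, 14, 16]
growth = [1.8, 1.5, 1.3, 0.9, 0.7, 0.4, 0.2, -0.1]
n = len(density)

def rss_constant(y):
    # H0: growth rate is a constant mean (no density dependence)
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def rss_linear(x, y):
    # H1: growth rate declines linearly with density (least-squares fit)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))

def aic(rss, k):
    # least-squares form of AIC: n*ln(RSS/n) + 2k,
    # where k counts fitted parameters including the error variance
    return n * math.log(rss / n) + 2 * k

aic_h0 = aic(rss_constant(growth), k=2)
aic_h1 = aic(rss_linear(density, growth), k=3)
print(f"AIC H0 (constant) = {aic_h0:.1f}; AIC H1 (linear) = {aic_h1:.1f}")
```

The lower AIC picks H1 here, but note the point made above: the ‘best’ of the candidate models may still predict poorly, since AIC ranks only the hypotheses we thought to offer it.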

Two problems have arisen as a result of this change of focus in model selection. The first is the problem of testability. There is an implicit disregard for the old idea that models or conclusions from an analysis should be tested with further data, preferably data obtained independently of the original data used to find the ‘best’ model. The assumption might be made that if we get further data, we should add it to the prior data and update the model so that it somehow begins to approach the ‘perfect’ model. This was the original definition of passive adaptive management (Walters 1986), which is now suggested to be a poor model for natural resource management. The second problem is that the model selected as ‘best’ may be of little use for natural resource management because it has little predictive power. In management issues for the conservation or exploitation of wildlife there may be many variables that affect population changes, and it may not be possible to conduct active adaptive management for all of these variables.

The take-home message is that the conclusions of our papers need a measure of progress in ecological insight, whatever statistical methods we use. The significance of our research will not be measured by the number of p-values, AIC values, BIC values, or complicated tables. The key question must be: What new ecological insights have been achieved by these methods?

Anderson, D.R., Burnham, K.P., and Thompson, W.L. 2000. Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64(4): 912-923.

Chamberlin, T.C. 1897. The method of multiple working hypotheses. Journal of Geology 5: 837-848 (reprinted in Science 148: 754-759 in 1965). doi:10.1126/science.148.3671.754.

McCarthy, M.A., and Masters, P.I.P. 2005. Profiting from prior information in Bayesian analyses of ecological data. Journal of Applied Ecology 42(6): 1012-1019. doi:10.1111/j.1365-2664.2005.01101.x.

Walters, C. 1986. Adaptive Management of Renewable Resources. Macmillan, New York.

 

On Critical Questions in Biodiversity and Conservation Ecology

Biodiversity can be a vague concept, with so many measurement variants that one wonders what it is exactly, and how to incorporate ideas about biodiversity into scientific hypotheses. Even if we take the simplest concept of species richness as the operational measure, many questions arise about the importance of the rare species that make up most of the biodiversity but so little of the biomass. How can we proceed to a better understanding of this nebulous ecological concept that we continually put before the public as needing their attention?

Biodiversity conservation relies on community and ecosystem ecology for guidance on how to advance scientific understanding. A recent paper by Turkington and Harrower (2016) articulates this very clearly by laying out 7 general questions for analyzing community structure for conservation of biodiversity. As such these questions are a general model for community and ecosystem ecology approaches that are needed in this century. Thus it would pay to look at these 7 questions more closely and to read this new paper. Here is the list of 7 questions from the paper:

  1. How are natural communities structured?
  2. How does biodiversity determine the function of ecosystems?
  3. How does the loss of biodiversity alter the stability of ecosystems?
  4. How does the loss of biodiversity alter the integrity of ecosystems?
  5. Diversity and species composition
  6. How does the loss of species determine the ability of ecosystems to respond to disturbances?
  7. How does food web complexity and productivity influence the relative strength of trophic interactions and how do changes in trophic structure influence ecosystem function?

Turkington and Harrower (2016) note that each of these 7 questions can be asked in at least 5 different contexts in the biodiversity hotspots of China:

  1. How do the observed responses change across the 28 vegetation types in China?
  2. How do the observed responses change from the low productivity grasslands of the Qinghai Plateau to higher productivity grasslands in other parts of China?
  3. How do the observed responses change along a gradient in the intensity of human use or degradation?
  4. How long should an experiment be conducted given that the immediate results are seldom indicative of longer-term outcomes?
  5. How does the scale of the experiment influence treatment responses?

There are major problems in all of this, as Turkington and Harrower (2016) and Bruelheide et al. (2014) have discussed. The first problem is to determine what the community is, or what the bounds of an ecosystem are. This is a trivial issue according to community and ecosystem ecologists: all one does is draw a circle around the particular area of interest for your study. But two points remain. Populations, communities, and ecosystems are open systems with no clear boundaries. In population ecology we can master this problem by analyses of the movements and dispersal of individuals. On a short time scale, plants in communities are fixed in position while their associated animals move on species-specific scales. Communities and ecosystems are not units but vary continuously in space and time, making their analysis difficult. The species present on 50 m² are not the same as those on another plot 100 m or 1000 m away, even if the vegetation types are labeled the same. So we replicate plots within what we define to be our community. If you are studying plant dynamics, you can experimentally place all the plant species selected in defined plots in a pre-arranged configuration for your planting experiments, but you cannot do this with animals except in microcosms. All experiments are place-specific, and if you consider climate change on a 100-year time scale, they are also time-specific. We can hope that generality is strong and our conclusions will apply in 100 years, but we do not know this now.

But we can do manipulative experiments, as these authors strongly recommend, and that brings a whole new set of problems, outlined for example in Bruelheide et al. (2014, Table 1, page 78) for a forestry experiment in southern China. Decisions about how many tree species to manipulate, in what size of plots, and at what planting density are all potentially critical to the conclusions we reach. But it is the time frame of hypothesis testing that is the great unknown. All these studies must be long-term, but whether this means 10 years or 50 years can only be found out in retrospect. Is it better to have, for example, forestry experiments around the world carried out with identical protocols, or to adopt a laissez-faire approach with different designs, since we have no idea yet of what design is best for answering these broad questions?

I suspect that this outline of the broad questions given in Turkington and Harrower (2016) is at least a 100-year agenda, and we need to be concerned about how we can carry it forward in a world where funding of research questions has a 3- or 5-year time frame. The only possible way forward, until we win the Lottery, is for all researchers to carry out short-term experiments on very specific hypotheses within this framework. So every graduate student thesis in experimental community and ecosystem ecology is important to achieving the goals outlined in these papers. Even if this 100-year time frame is optimistic, we can progress on a shorter time scale by a series of detailed experiments on small parts of the community or ecosystem at hand. I note that some of the broad questions listed above have been around for more than 50 years without being answered. If we redefine our objectives more precisely and do the kinds of experiments that these authors suggest, we can move forward, not so much with the solution of grand ideas as with detailed experimental data on very precise questions about our chosen community. In this way we keep the long-range goal posts in view but concentrate on short-term manipulative experiments that are place- and time-specific.

This will not be easy. Birds are probably the best-studied group of animals on Earth, and we now have many species that are changing in abundance dramatically over large spatial scales (e.g. http://www.stateofcanadasbirds.org/). I am sobered by asking avian ecologists why a particular species is declining or dramatically increasing. I never get a good answer, typically only a generally plausible idea, a hand-waving explanation based on correlations that are not measured or well understood. Species recovery plans are often based on hunches rather than good data, with few of the key experiments of the type requested by Turkington and Harrower (2016). At the moment the world is changing rather faster than our understanding of the ecological interactions that tie species together in communities and ecosystems. We are walking when we need to be running, and even the Red Queen is not keeping up.

Bruelheide, H. et al. 2014. Designing forest biodiversity experiments: general considerations illustrated by a new large experiment in subtropical China. Methods in Ecology and Evolution, 5, 74-89. doi: 10.1111/2041-210X.12126

Turkington, R. & Harrower, W.L. 2016. An experimental approach to addressing ecological questions related to the conservation of plant biodiversity in China. Plant Diversity, 38, 1-10. Available at: http://journal.kib.ac.cn/EN/volumn/current.shtml

Hypothesis testing using field data and experiments is definitely NOT a waste of time

At the ESA meeting in 2014, Greg Dwyer (University of Chicago) gave a talk titled “Trying to understand ecological data without mechanistic models is a waste of time.” This theme has recently been reiterated on Dynamic Ecology, Jeremy Fox, Brian McGill and Meghan Duffy’s blog (25 January 2016, https://dynamicecology.wordpress.com/2016/01/25/trying-to-understand-ecological-data-without-mechanistic-models-is-a-waste-of-time/). Some immediate responses to this blog post have raised such issues as “What is a mechanistic model?”, “What about the use of inappropriate statistics to fit mechanistic models?”, and “prediction vs. description from mechanistic models”. All of these are relevant and interesting issues in interpreting the value of mechanistic models.

The biggest fallacy, however, in this blog post, or at least in its title, is the implication that field ecological data are collected in a vacuum. Hypotheses are models, conceptual models, and it is only in the absence of hypotheses that trying to understand ecological data is a “waste of time”. Research proposals that fund field work demand testable hypotheses, and testing hypotheses advances science. Research using mechanistic models should also develop testable hypotheses, but mechanistic models are certainly not the only route to hypothesis creation or testing.

Unfortunately, mechanistic models rarely identify how the robustness and generality of the model output could be tested against ecological data, and they often fail to describe comprehensively the many assumptions made in constructing the model. In fact, they are often presented as complete descriptions of the ecological relationships in question, and methods for model validation are not discussed. Sometimes modelling papers include blatantly unrealistic functions to simplify ecological processes, without exploring the sensitivity of the results to those functions.

I can refer to my own area of research expertise, population cycles, for an example here. It is not enough, for example, to have a pattern of ups and downs with a 10-year periodicity to claim that a model is an acceptable representation of the cyclic population dynamics of, say, a forest lepidopteran or snowshoe hares. There are many ways to get cyclic dynamics in modeled systems. Scientific progress and understanding can only be made if the outcome of conceptual, mechanistic or statistical models defines the hypotheses that could be tested and the experiments that could be conducted to support the acceptance, rejection or modification of the model, and thus to inform understanding of natural systems.

How helpful are mechanistic models – the gypsy moth story

Given the implication of Dwyer’s blog post (or at least its title) that mechanistic models are the only way to ecological understanding, it is useful to look at models of gypsy moth dynamics, one of Greg’s areas of modeling expertise, with a view toward evaluating whether the model assumptions are compatible with real-world data (Dwyer et al. 2004, http://www.nature.com/nature/journal/v430/n6997/abs/nature02569.html).

Although there has been considerable excellent work on gypsy moth over the years, long-term population data are lacking. Population dynamics therefore are estimated from annual estimates of defoliation carried out by the US Forest Service in New England starting in 1924. These data show periods of non-cyclicity, two ten-year cycles (peaks in 1981 and 1991, which Dwyer uses for comparison to the modeled dynamics in a number of his mechanistic models), and harmonic 4-5 year cycles between 1943 and 1979 and since the 1991 outbreak. Based on these data, 10-year cycles are the exception, not the rule, for introduced populations of gypsy moth. Point 1. Many of the Dwyer mechanistic models were tested using the two outbreak periods and ignored over 20 years of subsequent defoliation data lacking 10-year cycles. Thus his results are limited in their generality.

As a further example, a recent paper, Elderd et al. (2013) (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3773759/), explored the relationship between alternating long and short cycles of gypsy moth in oak-dominated forests by speculating that inducible tannins in oaks modify the interactions between gypsy moth larvae and viral infection. Although previous field experiments (D’Amico et al. 1998, http://onlinelibrary.wiley.com/doi/10.1890/0012-9658(1998)079%5b1104:FDDNAW%5d2.0.CO%3b2/abstract) concluded that gypsy moth defoliation does not affect tannin levels sufficiently to influence viral infection, Elderd et al. (2013) proposed that induced tannins in red oak foliage reduce variation in viral infection levels and promote shorter cycles. In this study, an experiment was conducted using jasmonic acid sprays to induce oak foliage. Point 2. This mechanistic model is based on experiments using artificially induced tannins as a mimic of insect damage inducing plant defenses. However, earlier fieldwork showed that foliage damage does not influence virus transmission, and thus does not support the relevance of this mechanism.

In this model Elderd et al. (2013) use a linear relationship for viral transmission (infection as a function of baculovirus density) based on only two data points and a zero intercept. In past mechanistic models, and in a number of other systems, the relationship between viral transmission and host density is nonlinear (D’Amico et al. 2005, http://onlinelibrary.wiley.com/doi/10.1111/j.0307-6946.2005.00697.x/abstract;jsessionid=D93D281ACD3F94AA86185EFF95AC5119.f02t02?userIsAuthenticated=false&deniedAccessCustomisedMessage=; Fenton et al. 2002, http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2656.2002.00656.x/full). Point 3. The data are insufficient to describe accurately the viral transmission relationship used in the model.
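The identifiability problem here is easy to illustrate. With only two density-transmission points and a forced zero intercept, a linear model and a nonlinear alternative can both describe the data yet diverge sharply when extrapolated to outbreak densities. The sketch below uses a power law as one stand-in for the nonlinear forms reported by D’Amico et al. (2005) and Fenton et al. (2002); the numbers are hypothetical, not the Elderd et al. data.

```python
import numpy as np

# Two hypothetical (virus density, transmission) points with a forced
# zero intercept -- illustrative values, NOT the Elderd et al. data.
P = np.array([10.0, 20.0])
y = np.array([0.30, 0.45])

# Linear transmission through the origin: nu(P) = beta * P
beta = np.sum(P * y) / np.sum(P**2)            # least-squares slope

# Nonlinear (power-law) alternative: nu(P) = b * P**p
# Two free parameters, so it passes through both points exactly.
p = np.log(y[1] / y[0]) / np.log(P[1] / P[0])
b = y[0] / P[0]**p

def linear(x):
    return beta * x

def power_law(x):
    return b * x**p

# Inside the observed range the two models differ only modestly,
# but extrapolated to an outbreak density they diverge sharply.
print(linear(100.0), power_law(100.0))  # roughly 2.4 vs 1.2
```

Two points cannot discriminate between these curves, which is exactly why the linear transmission function in the model rests on insufficient data.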

Finally, the Elderd et al. (2013) model considers two types of gypsy moth habitat: one composed of 43% oaks, which are inducible, and the other of 15% oaks, with the remainder of the forest made up of adjacent blocks of non-inducible pines. Data show that gypsy moth outbreaks are limited to areas with high frequencies of oaks. In mixed forests, pines are fed on only by later-instar larvae, and only when the oaks are defoliated. In reality the pines would be interspersed among the oaks, not segregated in separate blocks as in the modeled population. Point 4. Patterns of forest composition that are crucial to the model’s results are unrealistic, and this undermines interpretation of the results.

Point 5 and conclusion. Because it can be very difficult to review someone else’s mechanistic model critically, since model assumptions are often hidden in supplementary material and hard to interpret, and because the relationships used in models are often chosen arbitrarily rather than based on available data, it would be easy to conclude that “mechanistic models are misleading and a waste of time”. But of course that wouldn’t be productive. So my final point is that closer collaboration between modelers and data collectors would be the best way to ensure that models are reasonable and accurate representations of the data. In this way both understanding and realistic prediction would be advanced. Unfortunately, the great push to publish high-profile papers works against this collaboration, and manuscripts of mechanistic models rarely include data-savvy referees.

D’Amico, V., J. S. Elkinton, G. Dwyer, R. B. Willis, and M. E. Montgomery. 1998. Foliage damage does not affect within-season transmission of an insect virus. Ecology 79:1104-1110.

D’Amico, V., J. S. Elkinton, J. D. Podgwaite, J. P. Buonaccorsi, and G. Dwyer. 2005. Pathogen clumping: an explanation for non-linear transmission of an insect virus. Ecological Entomology 30:383-390.

Dwyer, G., J. Dushoff, and S. H. Yee. 2004. The combined effects of pathogens and predators on insect outbreaks. Nature 430:341-345.

Elderd, B. D., B. J. Rehill, K. J. Haynes, and G. Dwyer. 2013. Induced plant defenses, host–pathogen interactions, and forest insect outbreaks. Proceedings of the National Academy of Sciences 110:14978-14983.

Fenton, A., J. P. Fairbairn, R. Norman, and P. J. Hudson. 2002. Parasite transmission: reconciling theory and reality. Journal of Animal Ecology 71:893-905.

On Improving Canada’s Scientific Footprint – Breakthroughs versus insights

In Maclean’s Magazine on November 25, 2015, Professor Lee Smolin of the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo, and a member of the Royal Society of Canada, wrote an article, “Ten Steps to Make Canada a Leader in Science” (http://www.macleans.ca/politics/ottawa/ten-steps-to-make-canada-a-leader-in-science/ ). Some of the general points in this article are very good, but others seem to support a view of science as big business, a view that leaves ecology and environmental science in the dust. We comment here on a few points of disagreement with Professor Smolin. The quotations are from the Maclean’s article.

  1. Choose carefully.

“Mainly invest in areas of pure science where there is a path to world leadership. This year’s Nobel prize shows that when we do this, we succeed big.” We suggest that the Nobel Prizes are possibly the worst available yardstick of scientific achievement, because of their disregard for the environment. This recommendation is at complete variance with how the environmental sciences advance.

  2. Aim for breakthroughs.

“No “me-too” or catch-up science. Don’t hire the student of famous Prof. X at an elite American university just because of the proximity to greatness. Find our own path to great science by recruiting scientists who are forging their own paths to breakthroughs.” But the essence of science has always been replication. Long-term monitoring is a critical part of good ecology, as Henson (2014) points out for oceanographic research. We do agree, however, on the need to recruit excellent young scientists in all areas.

  3. Embrace risk.

“Learn from business that it takes high risk to get high payoff. Don’t waste money doing low-risk, low-payoff science. Treat science like venture capital.” That advice would eliminate most of the ecologists who obtain NSERC funding. It is one more economic view of science. Besides, most successful businesses are built on hard work, sound financial practices, and insight into the needs of their customers.

  4. Recruit and invest in young leaders-to-be.

“Be savvy and proactive about choosing them…. Resist supporting legacies and entitlements. Don’t waste money on people whose best work is behind them.” We agree. Spending money to fund a limited number of middle-aged white males through the Canada Excellence Research Chairs was the antithesis of this recommendation. See “The Folly of Big Science” by Vinay Prasad (2015). Predicting in advance who will be a leader surely depends on diverse insights, and is best done by giving opportunities for success to many, from whom leaders will arise.

  5. Recruit internationally.

“Use graduate fellowships and postdoctoral positions as recruitment tools to bring the most ambitious and best-educated young scientists to Canada to begin their research here, and then target the most promising of these by creating mechanisms to ensure that their best opportunities to build their careers going forward are here.” This seems attractive, but it means Canadian scientists would have little hope of obtaining jobs here, since we are < 0.1% of the world’s scientists. A better idea: how about Canada producing the “best-educated” young scientists itself?

  6. Resist incrementalism.

“If you spread new money around widely, little new science gets done. Instead, double-down on strategic fields of research where the progress is clear and Canada can have an impact.” Fortin and Currie (2013) show that spreading the money around is exactly the way to go, since less is wasted and no one can predict where the “breakthroughs” will happen. This point also rests on one’s view of the world of the future and of what “breakthroughs” will contribute to the sustainability of the Earth.

  7. Empower ambitious, risk-taking young scientists.

“Give them independence and the resources they need to develop their own ideas and directions. Postdocs are young leaders with their own ideas and research programs.” This is an excellent recommendation, but it conflicts with the practice of many universities around the world of bringing in old scientists to establish institutes and giving incentives to established senior scientists.

  8. Embrace diversity.

“Target women and visible minorities. Let us build a Canadian scientific community that looks like Canada.” All agreed on this one.

  9. Speak the truth.

“Allow no proxies for success, no partial credit for “progress” that leaves unsolved problems unsolved. Don’t count publications or citations, count discoveries that have increased our knowledge about nature. We do research because we don’t know the answer; don’t force us to write grant proposals in which we have to pretend we do.” This confounds the scientists’ code of ethics with the requirements of bureaucracies like NSERC for accounting for taxpayers’ dollars. Surely publications record the increased knowledge about nature that Professor Smolin recommends.

  10. Consider the way funding agencies do business.

“We scientists know that panels can discourage risk-taking, encourage me-too and catch-up science, and reinforce longstanding entitlements and legacies. Such a system may incentivize low-risk, incremental work and limit the kind of out-of-the-box ideas that … leads to real breakthroughs. So create ambitious programs, empower the program officers to pick out and incubate the brightest and most ambitious risk-takers, and reward them when the scientists they invest in make real discoveries.” What is the evidence that program officers at NSERC or NSF have the vision to pick winners? This is difficult advice for ecologists, who are asked for opinions on support for research projects in fields that require long-term studies to produce increases in ecological understanding or better management of biodiversity. It seems like a recipe for scientific charlatans.

The bottom line: we think the good ideas in this article are overwhelmed by suggestions that are poor with regard to ecological research. We come from an ecological world faced with three critical problems that will determine the fate of the Earth: food security, biodiversity loss, and overpopulation. While we all like ‘breakthroughs’ that give us an iPhone 6S or an electric car, few of the discoveries that have increased our knowledge about nature would be considered breakthroughs. So do we say goodbye to taxonomic research, biodiversity monitoring, investigation of climate change impacts on Canadian ecosystems, or investment in the biological control of pests? Perhaps we can add the provocative word “breakthrough” to our ecological papers and media reports more frequently, but our real goal is to acquire greater insight into achieving a sustainable world.

As a footnote to this discussion, Dev (2015) raises the issue of the unsolved major problems in biology. None of them involve environmental or ecological issues.

Dev, S.B. (2015) Unsolved problems in biology—The state of current thinking. Progress in Biophysics and Molecular Biology, 117, 232-239.

Fortin, J.-M. & Currie, D.J. (2013) Big science vs. little science: How scientific impact scales with funding. PLoS ONE, 8, e65263.

Henson, S.A. (2014) Slow science: the value of long ocean biogeochemistry records. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 372 (2025). doi: 10.1098/rsta.2013.0334.

Prasad, V. (2015) The folly of big science awards. New York Times, October 2, 2015. (http://www.nytimes.com/2015/10/03/opinion/the-folly-of-big-science-awards.html?_r=0 )

The Volkswagen Syndrome and Ecological Science

We have all heard the reports that Volkswagen rigged its diesel cars with an engineering trick so that they showed low pollution levels in laboratory tests, while the actual pollution produced on the road is 10-100 times higher than the laboratory-predicted levels. I wonder if this is analogous to the situation we have in ecology when we compare laboratory studies and their conclusions to real-world situations.

The push in ecology has always been to simplify the system, first by creating models full of assumptions, and then by laboratory experiments that are greatly oversimplified compared with the real world. There are very good reasons to try to do this, since the real world is rather complicated, but I wonder if we should call a partial moratorium on such research and conduct a review of how far we have been led astray both by simple models and by simple laboratory studies of populations, communities, and ecosystems in microcosms and mesocosms. I can almost hear the screams coming up that of course this is not possible, since graduate students must complete a degree in 2 or 3 years and postdocs must do something in 2 years. If this is our main justification for models and microcosms, that is fair enough, but we ought to be explicit about stating it, and then evaluate how much we have been misled by such oversimplification.

Let me try to be clear about this problem. It is an empirical question whether studies in laboratory or field microcosms can give us reliable generalizations for much more extensive communities and ecosystems that are not in some sense space-limited or time-limited. I have a personal view on this question, heavily influenced by studies of small mammal populations in microcosms. But my experience may be atypical of the rest of natural systems, and this is an empirical question, not one on which we can simply state our opinions.

If the world is much more complex than our current understanding of it, we must conclude that an extensive list of climate change papers should be moved to the fiction section of our libraries. If we assume equilibrial dynamics in our communities and ecosystems, we fly in the face of almost all long-term studies of populations, communities, and ecosystems. The problem lies in the space and time vision of our science. Our studies are too short to give even a good representation of dynamics on a 100-year time scale, and landscape ecology highlights that what we see in patch A may be greatly influenced by whether patches B and C are close by. We see this darkly in a few small studies but are tempted to believe that such landscape effects are unusual or atypical. That may in fact be the case, but we need much more work to determine whether it is rare or common. And the broader issue is: what use do we as ecologists have for ecological predictions that cannot be tested without data for the next 100 years?

Are all our grand generalizations of ecology falling by the wayside without our noticing it? Prins and Gordon (2014) in their overview seem to feel that the real world is poorly reflected in many of our beloved theories. I think this is a reflection of the Volkswagen Syndrome: a failure to appreciate that the laboratory, in its simplicity, is so far removed from real-world community and ecosystem dynamics that we ought to start over and build an ecological edifice of generalizations or rules, with a strong appreciation that most generalizations have only limited validity until much more research has been done. The complications of the real world can be ignored in the search for simplicity, but one has to do this with the realization that predictions flowing from faulty generalizations can harm our science. We ecologists have very much research yet to do before we can establish secure generalizations that lead to reliable predictions.

Prins, H.H.T. & Gordon, I.J. (2014) Invasion Biology and Ecological Theory: Insights from a Continent in Transformation. Cambridge University Press, Cambridge. 540 pp. ISBN 9781107035812.

On the Use of “Density-dependent” in the Ecological Literature

The words ‘density-dependent’ and ‘density dependence’ appear very frequently in the ecological literature, and I write this blog as a plea never to use them unless you attach a very strong definition. If you have a spare day, count how many times these words appear in a single recent issue of Ecology or the Journal of Animal Ecology and you will get a dose of my dismay. A Web of Science search for these words in a general ecology context turns up about 1300 papers since 2010, or approximately one paper per day.

There is an extensive literature on what density dependence means. In the modeling world, the definition is simple and can be found in every introductory ecology textbook. But it is the usage of the words ‘density-dependence’ in the real world that I want to discuss in this blog.

The concept can be quite meaningless, as Murray (1982) pointed out many years ago. At its most modest extreme, it says only that, sooner or later, something happens when a population gets too large. Everyone could agree with that simple definition. But if you want to understand or manage population changes, you will need something much more specific. More specific might mean plotting a regression of some demographic variable against population density on the X axis. As Don Strong (1986) pointed out long ago, a more typical result is density-vagueness. So if and when you write about a density-dependent relationship, at least determine how well the data fit a straight or curved line, and if the correlation coefficient is 0.3 or less you should be concerned that density has little to do with your demographic variable. If you wish to understand population dynamics, you will need to understand mechanisms, and population density is not a mechanism.
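The check suggested above is easy to carry out. Here is a minimal sketch with simulated censuses (hypothetical numbers, not real field data) in which survival has only a weak density signal buried in noise, the typical ‘density-vague’ situation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated annual censuses: population density and per-capita survival
# with a weak density signal plus substantial noise ("density-vague"
# in Strong's sense). Hypothetical data, not a real field study.
density = rng.uniform(10, 100, size=30)
survival = 0.8 - 0.001 * density + rng.normal(0, 0.08, size=30)

# Regress survival on density and inspect the slope and the
# correlation coefficient before claiming density dependence.
slope, intercept = np.polyfit(density, survival, 1)
r = np.corrcoef(density, survival)[0, 1]

print(f"slope = {slope:.4f}, r = {r:.2f}")
# With |r| well below ~0.3, calling this relationship
# "density-dependent" says little, and nothing about mechanism.
```

Reporting the fitted slope and r alongside any density-dependence claim makes the strength (or vagueness) of the relationship explicit to the reader.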

Often the term density-dependent is used as shorthand to indicate that some measured variable, such as the amount of item X in the diet, is related to population density. In most of these cases it is more appropriate to say that item X is statistically related to population density, and to avoid all the baggage associated with the original term. Too often statements are made about mortality process X being ‘inversely density-dependent’ or ‘directly density-dependent’ with no data that support such a strong conclusion.

So if there is a simple message here, it is this: when you write ‘density-dependent’ in your manuscript, check whether it relates to the population regulation concept or is a simple statistical statement better described in plain statistical language. In either case, evaluate the strength of the evidence.

Ecology is plagued with imprecise words that can mean almost anything if they are not specified clearly, so statements about ‘biodiversity’, ‘ecosystems’, ‘resilience’, ‘diversity’, ‘metapopulations’, and ‘competition’ are fine to use so long as you indicate exactly what the word means operationally. ‘Density-dependence’ is one of these slippery words, best avoided unless you have some clear mechanism or process in mind.

Murray, B.G., Jr. (1982) On the meaning of density dependence. Oecologia, 53, 370-373.

Strong, D.R. (1986) Density-vague population change. Trends in Ecology and Evolution, 1, 39-42.

Was the Chitty Hypothesis of Population Regulation a ‘Big Idea’ in Ecology and was it successful?

Jeremy Fox in his ‘Dynamic Ecology’ blog has raised the eternal question of what the big ideas in ecology have been and whether they were successful, and this has stimulated me to write about the Chitty Hypothesis and its history since 1952. I write this from my personal observations, which can be faulty, and I will not bother to put in many references since this is a blog and not a formal paper.

In 1952, when Dennis Chitty at Oxford finished his thesis on vole cycles in Wales, he was considered a relatively young heretic because he saw no evidence in favour of the two dominant paradigms of population dynamics: that populations rise and fall because of food shortage or predation. David Lack vetoed the publication of his Ph.D. paper because he did not agree with Chitty’s findings (Lack believed that food supplies explained all population changes), and the 1952 thesis paper was published only through the intervention of Peter Medawar. Seeing no evidence for these two factors in his vole populations, Chitty began to suspect that social factors were involved in population cycles. He tested Jack Christian’s idea that social stress was a possible cause, since it was well known that some rodents are territorial and highly aggressive, but stress as measured by adrenal gland size did not fit the population trends very well. He then began to suspect that there might be genetic changes in fluctuating vole populations, and that the population processes occurring in voles and lemmings might occur in a wide variety of species, not just in a relatively small group of rodent species that everyone could dismiss as a special case of no generality. This thinking culminated in his 1960 paper in the Canadian Journal of Zoology, which stimulated many field ecologists to begin experiments on population regulation in small mammals.

Chitty’s early work contained a ‘big idea’: that population dynamics and population genetics might have something to contribute to each other, and that one could not assume that every individual had equal properties. These ideas were of course not his alone; Bill Wellington had many of the same ideas in studying tent caterpillar population fluctuations. When Chitty suggested these ideas during the late 1950s, he was told by several eminent geneticists, who must remain nameless, that his ideas were impossible and that ecologists should stay out of genetics because natural selection was so slow that nothing could be achieved in ecological time. Thinking has clearly changed on this point.

So if one could recognize these early beginnings as a ‘big idea’, it might be stated simply as ‘study individual behaviour, physiology, and genetics to understand population changes’. It was instrumental in adding another page to the many discussions of population change, which had previously included mostly predators, food supplies, and potentially disease. All this happened before the rise of behavioural ecology in the 1970s.

I leave others to judge the longer-term effects of Chitty’s early suggestions. At present the evidence is largely against any rapid genetic changes in fluctuating populations of mammals and birds, and maternal effects now seem a strong candidate for the non-genetic inheritance of traits that affect fitness in a variety of vertebrate species. And in a turn of fate, stress now seems a strong candidate for at least some maternal effects, and we are back to the early ideas of Jack Christian and Hans Selye from the 1940s, but with greatly improved techniques for measuring stress in field populations.

Dennis Chitty was a stickler for field experiments in ecology, a trend now long established, and he made many predictions from his ideas, often rejected later but always leading to more insight into what might be happening in field populations. He was a champion of discussing mechanisms of population change, and found little use for the dominant paradigm of the density-dependent regulation of populations. Was he successful? I think so, from my biased viewpoint. I note that he had less recognition in his lifetime than he deserved because he offended the powers that be. For example, he was never elected to the Royal Society, a victim of the insularity and politics of British science. But that is another story.

Chitty, D. (1952) Mortality among voles (Microtus agrestis) at Lake Vyrnwy, Montgomeryshire in 1936-9. Philosophical Transactions of the Royal Society of London, 236, 505-552.

Chitty, D. (1960) Population processes in the vole and their relevance to general theory. Canadian Journal of Zoology, 38, 99-113.