Tag Archives: long-term problems

On Caribou and Hypothesis Testing

Mountain caribou populations in western Canada have been declining for the past 10-20 years, and concern has mounted to the point where extinction of many populations could be imminent; the Canadian federal government is asking why this has occurred. This conservation issue has supported a host of field studies to determine what the threatening processes are and what we can do about them. A recent excellent summary of experimental studies in British Columbia (Serrouya et al. 2017) has stimulated me to examine this caribou crisis as an illustration of the art of hypothesis testing in field ecology. We teach all our students to specify hypotheses and alternative hypotheses as the first step to solving problems in population ecology, so here is a good example to start with.

From the abstract of this paper, here is a statement of the problem and the major hypothesis:

“The expansion of moose into southern British Columbia caused the decline and extirpation of woodland caribou due to their shared predators, a process commonly referred to as apparent competition. Using an adaptive management experiment, we tested the hypothesis that reducing moose to historic levels would reduce apparent competition and therefore recover caribou populations.”

So the first observation we might make is that much is left out of this approach to the problem. Populations can decline because of habitat loss, food shortage, excessive hunting, predation, parasitism, disease, severe weather, or inbreeding depression. In this case much background research has narrowed the field to predation as a major limitation, so we can begin our search with the predation factor (review in Boutin and Merrill 2016). In particular, Serrouya et al. (2017) focused their studies on the nexus of moose, wolves, and caribou and on the supposition that wolves feed preferentially on moose and only secondarily on caribou, so that if moose numbers are lower, wolf numbers will be lower and incidental kills of caribou will be reduced. So they proposed two very specific hypotheses – that wolves are limited by moose abundance, and that caribou are limited by wolf predation. The experiment proposed and carried out was relatively simple in concept: kill moose by allowing more hunting in certain areas and measure the changes in wolf and caribou numbers.

The experimental area contained 3 small herds of caribou (50 to 150 animals) and the unmanipulated area contained 2 herds (20 and 120 animals) when the study began in 2003. The extended hunting worked well, and moose in the experimental area were reduced from about 1600 animals to about 500 over the period from 2003 to 2014. Wolf numbers in the experimental area declined by about half over the experimental period because of dispersal out of the area and some starvation within it. So the two necessary conditions of the experiment were satisfied: moose numbers declined by about two-thirds from the additional hunting, and wolf numbers declined by about half on the experimental area. But the caribou populations on the experimental area showed mixed results, with one herd increasing slightly in numbers and the other two declining slightly. On the unmanipulated area both caribou populations continued a slow decline. On the positive side, the survival rate of adult caribou was higher on the experimental area, suggesting that the treatment hypothesis was correct.

From the viewpoint of caribou conservation, the experiment failed to change the caribou population from continuous slow declines to the rapid increase needed to recover these populations to their former greater abundance. At best it could be argued that this particular experiment slowed the rate of caribou decline. Why might this be? We can make a list of possibilities:

  1. Moose numbers on the experimental area were not reduced enough (to perhaps 300 rather than the 500 achieved). Lower moose numbers would have meant much lower wolf numbers.
  2. Small caribou populations are nearly impossible to recover because of chance events that affect small numbers. A few wolves or bears or cougars could be making all the difference to populations numbering 10-20 individuals.
  3. The experimental area and the unmanipulated area were not assigned treatments at random. This would mean to a pure statistician that you cannot make statistical comparisons between these two areas.
  4. The general hypothesis being tested is wrong, and predation by wolves is not the major limiting factor for mountain caribou populations. Many factors are involved in caribou declines, and we cannot determine what they are because they change from area to area and year to year.
  5. It is impossible to do these landscape experiments because for large landscapes it is impossible to find 2 or more areas that can be considered replicates.
  6. The experimental manipulation was not carried out long enough. Ten years of manipulation is not long for caribou, which have a generation time of 15-25 years.

Let us evaluate these 6 points.

#1 is fair enough; it would be hard to achieve a moose population this low, but it might be possible in a second experiment.

#2 is a worry because it is difficult to deal experimentally with small populations, but we have to take the populations as a given at the time we do a manipulation.

#3 is true if you are a purist but is silly in the real world where treatments can never be assigned at random in landscape experiments.

#4 is a concern, and it would be nice to include bears and other predators in the studies, but there is a limit to people and money. Almost all previous studies of mountain caribou declines have pointed the finger at wolves, so it is only reasonable to start with this idea. The multiple-factor idea is hopeless to investigate without infinite time and resources.

#5 is like #3 and it is an impossible constraint on field studies. It is a common statistical fallacy to assume that replicates must be identical in every conceivable way. If this were true, no one could do any science, lab or field.

#6 is correct but was impossible in this case because the management agencies forced this study to end in 2014 so that they could conduct a different experiment. There is always a problem deciding how long a study is sufficient, and the universal problem is that the scientists, or (more likely) the money, or the landscape managers run out of energy once the time exceeds about 10 years. The result is that one must qualify the conclusions to state that this is what happened in the 10 years available for study.

This study involved a heroic amount of field work over 10 years, and is a landmark in showing what needs to be done and the scale involved. It is a far cry from sitting at a computer designing the perfect field experiment on a theoretical landscape to actually carrying out the field work to get the data summarized in this paper. The next step is to continue to monitor some of these small caribou populations, the wolves and moose to determine how this food chain continues to adjust to changes in prey levels. The next experiment needed is not yet clear, and the eternal problem is to find the high levels of funding needed to study both predators and prey in any ecosystem in the detail needed to understand why prey numbers change. Perhaps a study of all the major predators – wolves, bears, cougars – in this system should be next. We now have the radio telemetry advances that allow satellite locations, activity levels, timing of mortality, proximity sensors when predators are near their prey, and even video and sound recording so that more details of predation events can be recorded. But all this costs money that is not yet here because governments and people have other priorities and value the natural world rather less than we ecologists would prefer. There is not yet a Nobel Prize for ecological field research, and yet here is a study on an iconic Canadian species that would be high up in the running.

What would I add to this paper? My curiosity would be satisfied by the number of person-years and the budget needed to collect and analyze these results. These statistics should be on every scientific paper. And perhaps a discussion of what to do next. In much of ecology these kinds of discussions are held informally over coffee, and students who want to know how science works would benefit from listening to how these informal discussions evolve. Ecology is far from simple. Physics and chemistry are simple, genetics is simple; ecology is the really difficult science.

Boutin, S. and Merrill, E. 2016. A review of population-based management of Southern Mountain caribou in BC. Unpublished review. Available at: http://cmiae.org/wp-content/uploads/Mountain-Caribou-review-final.pdf

Serrouya, R., McLellan, B.N., van Oort, H., Mowat, G., and Boutin, S. 2017. Experimental moose reduction lowers wolf density and stops decline of endangered caribou. PeerJ  5: e3736. doi: 10.7717/peerj.3736.


On Ecology and Economics

Economics has always been a mystery to me, so if you are an economist you may not like this blog. Many ecologists and some economists have written elegantly about the need for a new economics that includes the biosphere, and indeed the whole world rather than just Wall Street, and that brings together ecology and the social sciences (e.g. Daily et al. 1991, Daly and Farley 2011, Brown et al. 2014, Martin et al. 2016). Several scientists have proposed measures that indicate how our current use of natural resources is unsustainable (Wackernagel and Rees 1996, Rees and Wackernagel 2013). But few influential people and politicians appear to be listening, or if they are listening they are proceeding at a glacial pace while the problems that have been pointed out are racing ahead at breakneck speed. The operating paradigm seems to be ‘let the next generation figure it out’ or, more cynically, ‘we are too busy buying more guns to worry about the environment’.

Let me discuss Canada as a model system, from the point of view of an ecologist who thinks sustainability is something for the here and now. Start with a general law: no country can base its economy indefinitely on non-renewable resources. Canada subsists by mining coal, oil, natural gas, and metals that are non-renewable. It also makes ends meet by logging and agricultural production. And we have done well for the last 200 years doing just that. ‘Continue on, and to hell with the grandkids’ seems to be the prevailing view of the moment. Of course this is ecological nonsense and, as many have pointed out, not the path to a sustainable society. Even Canada’s supposedly sustainable industries are unsustainable. Forestry in Canada is a mining operation in many places, with a continuing need to log old-growth forest to remain a viable industry. Agriculture is not sustainable if soil fertility is continually falling, so that there is an ever-increasing need for fertilizer, and if more agricultural land keeps being lost to erosion and shopping malls. All these industries persist because of a variety of skillful proponents who dismiss long-term problems of sustainability. The oil sands of Alberta are a textbook case of a non-renewable resource industry that makes a lot of money while destroying both the Earth itself and the climate. Again, this makes sense short-term, but not for the grandkids.

So we see a variety of decisions that are great in the short term but a disaster in the long term. Politicians will not move unless the people lead them, and there is little courage shown and only slight discussion of the long-term issues. The net result is that it is difficult now to be an ecologist and be optimistic about the future, even for relatively rich countries. Global problems deserve global solutions, yet we must start with local actions and hope that they become global. We push ahead, but in every case we run into the roadblocks of exponential growth. We need jobs, we need food and water and a clean atmosphere, but how do we get from A to B when the captains of industry and the public at large focus on short-term results? As scientists we must push on toward a sustainable future and continue to remind those who will listen that the present lack of action is not a wise choice for our grandchildren.

Brown, J.H. et al. 2014. Macroecology meets macroeconomics: Resource scarcity and global sustainability. Ecological Engineering 65(1): 24-32. doi: 10.1016/j.ecoleng.2013.07.071.

Daily, G.C., Ehrlich, P.R., Mooney, H.A., and Ehrlich, A.H. 1991. Greenhouse economics: learn before you leap. Ecological Economics 4: 1-10.

Daly, H.E., and Farley, J. 2011. Ecological Economics: Principles and Applications. 2nd ed. Island Press, Washington, D.C.

Martin, J.-L., Maris, V., and Simberloff, D.S. 2016. The need to respect nature and its limits challenges society and conservation science. Proceedings of the National Academy of Sciences 113(22): 6105-6112. doi: 10.1073/pnas.1525003113.

Rees, W. E., and M. Wackernagel. 2013. The shoe fits, but the footprint is larger than Earth. PLoS Biology 11:e1001701. doi: 10.1371/journal.pbio.1001701

Wackernagel, M., and W. E. Rees. 1996. Our Ecological Footprint: Reducing Human Impact on the Earth. New Society Publishers, Gabriola Island, B.C. 160 p.

On Post-hoc Ecology

Back in the Stone Age, when science students took philosophy courses, a logic course was a common choice. Among the many logical fallacies, one of the most common was the Post Hoc Fallacy, in full “Post hoc, ergo propter hoc”: “After this, therefore because of this.” The Post Hoc Fallacy has the following general form:

  1. A occurs before B.
  2. Therefore A is the cause of B.

Many examples of this fallacy are given in the newspapers every day. “I lost my pencil this morning and an earthquake occurred in California this afternoon.” Therefore….. Of course, we are certain that this sort of error could never occur in the 21st century, but I would like to suggest to the contrary that its frequency is probably on the rise in ecology and evolutionary biology, and the culprit (A) is most often climate change.

Hilborn and Stearns (1982) pointed out many years ago that most ecological and evolutionary changes have multiple causes, and thus we must learn to deal with multiple causation in which a variety of factors combine and interact to produce an observed outcome. This point of view places an immediate dichotomy between the two extremes of ecological thinking – single factor experiments to determine causation cleanly versus the “many factors are involved” world view. There are a variety of intermediate views of ecological causality between these two extremes, leading in part to the flow chart syndrome of boxes and arrows aptly described by my CSIRO colleague Kent Williams as “horrendograms”. If you are a natural resource manager you will prefer the simple end of the spectrum to answer the management question of ‘what can I possibly manipulate to change an undesirable outcome for this population or community?’

Many ecological changes are going on in the world today: populations are declining or increasing, species are disappearing, geographical distributions are shifting toward the poles or to higher altitudes, and novel diseases are appearing in populations of plants and animals. The simplest explanation of all these changes is that climate change is the major cause, because in every part of the Earth some aspect of winter or summer climate is changing. This might be correct, or it might be an example of the Post Hoc Fallacy. How can we determine which explanation is correct?

First, for any ecological change it is important to identify a mechanism of change. Climate, or more properly weather, is itself a complex factor of temperature, humidity, and rainfall, and for climate to be considered a proper cause you must advance some information on physiology or behaviour or genetics that would link some specific climate parameter to the changes observed. Information on possible mechanisms makes the potential explanation more feasible. A second step is to make some specific predictions that can be tested either by experiments or by further observational data. Berteaux et al. (2006) provided a careful list of suggestions on how to proceed in this manner, and Tavecchia et al. (2016) have illustrated how one traditional approach to studying the impact of climate change on population dynamics could lead to forecasting errors.

A further critical focus must be on long-term studies of the population or community of interest. In particular, the 3-4 year studies common in Ph.D. theses must assume that their results are a random sample of annual ecological changes. Often this is not the case, and this can be recognized when longer-term studies are completed, or more easily if an experimental manipulation can be carried out on the mechanisms involved.

The retort to these complaints about ecological and evolutionary inference is that all investigated problems are complex and multifactorial, so that after much investigation one can conclude only that “many factors are involved”. AIC analysis attempts to blunt this criticism by asking: given the data (the evidence), which hypothesis is best supported? Hobbs and Hilborn (2006) provide a guide to the different methods of inference that can improve on the standard statistical approach. The AIC approach has always carried with it the awareness that the correct hypothesis may not be present in the list being evaluated, or that some combination of relevant factors cannot be tested because the available data do not cover a wide enough range of variation. Burnham et al. (2011) provide an excellent checklist for the use of AIC measures to discriminate among hypotheses. Guthery et al. (2005) and Stephens et al. (2005) carry the discussion forward in interesting ways. Cade (2015) discusses an interesting case in which inappropriate AIC methods led to questionable conclusions about habitat distribution preferences and use by sage-grouse in Colorado.
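
To make the logic concrete, here is a minimal sketch (simulated data, not from any of the studies cited here) of how candidate hypotheses, expressed as simple regression models, can be compared with AIC and Akaike weights:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
temperature = rng.uniform(0, 10, n)
rainfall = rng.uniform(0, 100, n)
# Hypothetical "truth": abundance is driven by temperature only, plus noise.
abundance = 5 + 2.0 * temperature + rng.normal(0, 3, n)

def gaussian_aic(y, predictors):
    """AIC = 2k - 2 ln L for an ordinary least-squares fit with Gaussian errors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1          # regression coefficients plus the error variance
    return 2 * k - 2 * loglik

candidates = {
    "intercept only": [],
    "temperature": [temperature],
    "rainfall": [rainfall],
    "temperature + rainfall": [temperature, rainfall],
}
aic = {name: gaussian_aic(abundance, preds) for name, preds in candidates.items()}
best = min(aic.values())
raw = {name: np.exp(-0.5 * (a - best)) for name, a in aic.items()}
total = sum(raw.values())
for name in candidates:
    print(f"{name:>25}  AIC = {aic[name]:7.1f}  Akaike weight = {raw[name] / total:.2f}")
```

The weights only rank the candidates supplied; as noted above, if the correct hypothesis is not on the list, the best-supported model can still be a poor one.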

If there is a simple message in all this, it is to think very carefully about what the problem is in any investigation, what the possible solutions or hypotheses are that could explain the problem, and then to use the best statistical methods to answer that question. Older statistical methods are not necessarily bad, and newer statistical methods are not automatically better for solving problems. The key lies in good data, relevant to the problem being investigated. And if you are a beginning investigator, read some of these papers.

Berteaux, D., et al. 2006. Constraints to projecting the effects of climate change on mammals. Climate Research 32(2): 151-158. doi: 10.3354/cr032151.

Burnham, K.P., Anderson, D.R., and Huyvaert, K.P. 2011. AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behavioral Ecology and Sociobiology 65(1): 23-35. doi: 10.1007/s00265-010-1029-6.

Cade, B.S. 2015. Model averaging and muddled multimodel inferences. Ecology 96(9): 2370-2382.

Guthery, F.S., Brennan, L.A., Peterson, M.J., and Lusk, J.J. 2005. Information theory in wildlife science: Critique and viewpoint. Journal of Wildlife Management 69(2): 457-465.

Hilborn, R., and Stearns, S.C. 1982. On inference in ecology and evolutionary biology: the problem of multiple causes. Acta Biotheoretica 31: 145-164. doi: 10.1007/BF01857238

Hobbs, N.T., and Hilborn, R. 2006. Alternatives to statistical hypothesis testing in ecology: a guide to self teaching. Ecological Applications 16(1): 5-19. doi: 10.1890/04-0645

Stephens, P.A., Buskirk, S.W., Hayward, G.D., and Del Rio, C.M. 2005. Information theory and hypothesis testing: a call for pluralism. Journal of Applied Ecology 42(1): 4-12. doi: 10.1111/j.1365-2664.2005.01002.x

Tavecchia, G., et al. 2016. Climate-driven vital rates do not always mean climate-driven population. Global Change Biology 22(12): 3960-3966. doi: 10.1111/gcb.13330.

On Ecological Predictions

The gold standard of ecological studies is the understanding of a particular ecological issue or system and the ability to predict the operation of that system in the future. A simple example is the masting of trees (Pearse et al. 2016). Mast seeding is synchronous and highly variable seed production among years by a population of perennial plants. One ecological question is what environmental drivers cause these masting years and what factors can be used to predict mast years. Weather cues and plant resource states presumably interact to determine mast years. The question I wish to raise here, given this widely observed natural history event, is how good our predictive models can be on a spatial and temporal scale.

On a spatial scale masting events can be widespread or localized, and this provides some clues to the weather variables that might be important. Assuming we can derive weather models for prediction, we face two often unknown constraints: space and time. If we can derive a weather model for trees in New Zealand, will it also apply to trees in Australia or California? Or, on a more constrained geographical view, if it applies on the South Island of New Zealand, will it also apply on the North Island? At the other extreme, must we derive models for every population of particular plants in different areas, so that predictability is spatially limited? We hope not, and we work on the assumption of more spatial generality than we can measure on our particular small study areas.

The temporal stability of our explanations is now particularly worrisome because of climate change. If we have a good model of masting for a particular tree species in 2017, will it still work in 2030, 2050 or 2100? A physicist would never ask such a question, since a “scientific law” is independent of time. But biology in general and ecology in particular are not time independent, both because of evolution and now because of changing climate. We have not faced up to whether we must check our “ecological laws” over and over again as the environment changes, and if we must, what the time scale of rechecking should be. Perhaps this question can be answered by determining the speed of potential evolutionary change in species groups. If viruses can evolve on a time scale of months or years, we must be eternally vigilant in asking whether the flu virus of 2017 is going to be the same as that of 2016. We should not stop virus research and declare that we have sorted out some universal model that is the equivalent of a law of physics.

The consequences of these simple observations are not simple. One consequence is that monitoring is an essential ecological activity. But in most ecological funding agencies monitoring is thought to be unscientific, not leading to progress, mere stamp collecting. So we have to establish that, just as every country supports a weather bureau, we need an equivalent ecological monitoring bureau. We do have such bureaus for some ecological systems that make money, like marine fisheries, but most other ecosystems are left in limbo with little or no funding, on the generalized assumption that “mother or father nature will take care of itself” or, expressed more elegantly by a cabinet minister who must remain nameless, “there is no need for more forestry research, as we know everything we need to know already”. The urge by politicians to cut research funding falls too heavily on environmental research.

But ecologists are not just ‘stamp collectors’, as some might think. We need to develop generality, but at a time scale and a spatial scale that are reliable and useful for the resolution of the problem that gave rise to the research. Typically for ecological issues this time scale would be 10-25 years, and a rule of thumb might be 10 generations of the organisms being studied. For many of our questions an annual scale might be most useful, but for long-lived plants and animals we must be thinking of decades or even centuries. Some practical examples from Pacifici et al. (2013): if you study field voles (Microtus spp.), you can typically complete a study of 10 generations in about 3.5 years. If you study red squirrels (Tamiasciurus hudsonicus), the same 10 generations will cost you 39 years, and if red foxes (Vulpes vulpes), 58 years. For wildebeest (Connochaetes taurinus) in the Serengeti, 10 generations will take you 80 years, and if you prefer red kangaroos (Macropus rufus) it will take about 90 years. All these estimates are very approximate, but they give you an idea of what the time scale of a long-term study might be. Except for the rodent example, all these study durations are nearly impossible to achieve, and the question for ecologists is this: should we be concerned about these time scales, or should we scale everything to the human research time scale?
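
The arithmetic behind these figures is simple enough to write down. The sketch below is a back-of-envelope illustration; the generation lengths are rough values implied by the durations quoted above, not numbers taken directly from the Pacifici et al. (2013) database:

```python
# "10 generations" rule of thumb: study duration = 10 x generation length.
# Generation lengths (years) are approximate, back-calculated from the text.
generation_length_years = {
    "field vole (Microtus spp.)": 0.35,
    "red squirrel (Tamiasciurus hudsonicus)": 3.9,
    "red fox (Vulpes vulpes)": 5.8,
    "wildebeest (Connochaetes taurinus)": 8.0,
    "red kangaroo (Macropus rufus)": 9.0,
}

GENERATIONS = 10
for species, gen in generation_length_years.items():
    print(f"{species}: about {GENERATIONS * gen:.0f} years for {GENERATIONS} generations")
```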

The spatial scale has expanded greatly for ecologists with the advent of radio transmitters and the possibility of satellite tracking. These technological advances allow many conservation questions regarding bird migration to be investigated (e.g. Oppel et al. 2015). But no matter what the spatial scale of interest in a research or management program, variation among individuals and sites must be analyzed by means of the replication of measurements or manipulations at several sites. The spatial scale is dictated by the question under investigation, and the issue of fragmentation has focused attention on the importance of spatial movements both for ecological and evolutionary questions (Betts et al. 2014).

And the major question remains: can we construct an adequate theory of ecology from a series of short-term, small area or small container studies?

Betts, M.G., Fahrig, L., Hadley, A.S., Halstead, K.E., Bowman, J., Robinson, W.D., Wiens, J.A. & Lindenmayer, D.B. (2014) A species-centered approach for uncovering generalities in organism responses to habitat loss and fragmentation. Ecography, 37, 517-527. doi: 10.1111/ecog.00740

Oppel, S., Dobrev, V., Arkumarev, V., Saravia, V., Bounas, A., Kret, E., Velevski, M., Stoychev, S. & Nikolov, S.C. (2015) High juvenile mortality during migration in a declining population of a long-distance migratory raptor. Ibis, 157, 545-557. doi: 10.1111/ibi.12258

Pacifici, M., Santini, L., Di Marco, M., Baisero, D., Francucci, L., Grottolo Marasini, G., Visconti, P. & Rondinini, C. (2013) Database on generation length of mammals. Nature Conservation, 5, 87-94. doi: 10.3897/natureconservation.5.5734

Pearse, I.S., Koenig, W.D. & Kelly, D. (2016) Mechanisms of mast seeding: resources, weather, cues, and selection. New Phytologist, 212 (3), 546-562. doi: 10.1111/nph.14114

University Conundrums

Universities in Canada and the United States, and probably in Australia as well, are bedeviled by not knowing what they should be doing. In general they all want to be ‘excellent’, but this is largely an advertising gimmick unless one is more specific: excellent in what? Excellent in French literature? Probably not. Excellent in the engineering that facilitates the military-industrial complex? Probably yes, but with little thought of the consequences for universities or for Planet Earth (Smart 2016). Excellence in medicine? Certainly, yes. But much of the advertisement about excellence is self-aggrandisement, and one can only hope that underneath the adverts there is some good planning and thinking about what a university should be (Lanahan et al. 2016).

There are serious problems in the world today, and the question is what the universities should be doing about these long-term, difficult problems. There are two polar views on this question. At one extreme, universities can say it is our mandate to educate students and not our mandate to solve environmental or social problems. At the other extreme, universities can devote their resources to solving problems, and thereby educate students in problem analysis and problem solving. But these universities will not be very popular, since for any serious issue like climate change many voters are at odds over what can and should be done, and governments do not like universities that produce scholarship that challenges their policies. So we must always remember the golden rule – “she that has the gold makes the rules”.

But there are constraints no matter what policies a university adopts, and there is an extensive literature on these constraints. I want to focus on one overarching constraint for biodiversity research in universities: graduate students have a very short time to complete their degrees. Given a 2-year or 3-year time horizon, students must concentrate on a short-term issue with a very narrow focus. This is good for the students and cannot be changed. But it is potentially lethal for ecological studies that are long-term and do not fit into the demands of thesis writing. A basic assumption I make is that the most important ecological issues of our day are long-term problems, at least in the 20-year time frame and more likely in the 50 to 100-year time frame. The solution most prevalent in the ecology literature now is to use short-term data to produce a model and then extrapolate into the indefinite future with a climate model, or any other model that allows extrapolation. The result of this conundrum is that the literature is full of studies making claims about ecological processes that are based on completely inadequate time frames (Morrison 2012). If this is correct, at least we ought to have the humility to point out the potential errors of extrapolation into the future. We make a joke about this situation in our comical advice to graduate students: “If you get an exciting result from your thesis research in year 1, stop, do no more work, and write your thesis, lest you get a different result if you continue into year 2.”

The best solution for graduate students is to work within a long-term project, so that your 2-3 years of work can build on past progress. But long-term projects are difficult to carry forward in universities now because research money is in short supply (Rivero and Villasante 2016). University faculty can piggy-back onto government studies that are well funded and long-term, but again this is not always possible. Conservation ecology is not often well funded by governments either, so we keep passing the buck. Collaboration here between governments and universities is essential, but it is not always strong at the level of individual projects. Some long-term ecological studies are led by federal and regional government research departments directly, but more seem to be led by university faculty. And the limiting resource is typically money. There is a set of long-term problems in ecology that are ignored by governments for ideological reasons. Some politicians work hard to avoid the many ecological problems that are ‘hot potatoes’ and, in their view, best left unstudied. Any competent ecologist can list for you 5 or more long-term issues in conservation biology that are not being addressed now for lack of money. I doubt that ideas are the limiting resource in ecology; funding is.

And this leads us back in a circle to the universities’ quest for ‘excellence’. Much here depends on the wisdom of a university’s leaders and on the controls on university funding provided by governments for research. In Canada, for example, research grant success is biased by university size (Murray et al. 2016). How then can we link the universities’ quest for excellence to the provision of adequate funding for long-term ecological issues? As one recommendation to the directors of funding programs within the universities, I suggest listing the major problems of your area and of the world at large, and then funding the research within your jurisdiction by how well the proposed research matches the major problems we face today.

Lanahan, L., Graddy-Reed, A. & Feldman, M.P. (2016) The Domino Effects of Federal Research Funding. PLoS ONE, 11, e0157325. doi: 10.1371/journal.pone.0157325

Morrison, M.L. (2012) The habitat sampling and analysis paradigm has limited value in animal conservation: A prequel. Journal of Wildlife Management, 76, 438-450. doi: 10.1002/jwmg.333

Murray, D.L., Morris, D., Lavoie, C., Leavitt, P.R. & MacIsaac, H. (2016) Bias in research grant evaluation has dire consequences for small universities. PLoS ONE, 11, e0155876. doi: 10.1371/journal.pone.0155876

Rivero, S. & Villasante, S. (2016) What are the research priorities for marine ecosystem services? Marine Policy, 66, 104-113. doi: 10.1016/j.marpol.2016.01.020

Smart, B. (2016) Military-industrial complexities, university research and neoliberal economy. Journal of Sociology, 52, 455-481. doi: 10.1177/1440783316654258

Biodiversity Conundrums

Conservation ecologists face a conundrum, as many have pointed out before. As scientists we do not make policy. Most conservation problems are essentially moral issues of dealing with conflicts in goals and allowable actions. Both the United States and Canada have endangered species legislation in which action plans are written for species of concern. In the USA species of concern are allotted some funding and more legal protection than in Canada, where much good material is written but funding for action or research is typically absent. What is interesting from an ecological perspective is the list of species that are designated as endangered or threatened. Most of them can be described colloquially as “charismatic megafauna”, species that are either large or beautiful or both. There are exceptions of course for some amphibians and rare plants, but by and large the list of species of concern is a completely non-random collection of the organisms that people see in their environment. Birds and butterflies and large mammals are at the head of the list.

All of this is fine and useful because it is largely political ecology, but it raises the question of what will happen should these rescue plans for threatened or endangered species fail. This question lands ecologists in the rather murky area of ecosystem function, which leads to the key question: how is ecosystem function affected by the loss of species X? The answer depends very much on how you define ecosystem function. If species X is a plant and the ecosystem function measured is the uptake of CO2 by the plant community, the answer could be a loss of function, no change, or indeed an increase in CO2 uptake if species X is replaced, for example, by a weed that is more productive than species X. The answer to this simple question is thus very complicated and requires much research. For a hypothetical example, plant X may be replaced by a weed that fixes more CO2, and thus ecosystem function is improved as measured by carbon uptake from the atmosphere. But the weed may deplete soil nitrogen, which could adversely affect other plants and soil quality. Again more data are needed to decide this. If the effect size is small, even much research could provide an ambiguous answer to the original question, since all measurement involves errors.
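
To illustrate why small effect sizes are so troublesome, here is a rough power calculation (a standard two-sample normal approximation, my own illustration rather than anything from the studies discussed) showing how the number of replicate plots needed per treatment grows as the standardized effect size shrinks:

```python
from scipy import stats

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sample comparison (normal approximation)."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# effect_size is the difference in mean ecosystem function (e.g. CO2 uptake)
# between communities, expressed in standard deviations of the measurement.
for d in (1.0, 0.5, 0.2, 0.1):
    print(f"standardized effect size d = {d}: about {n_per_group(d):.0f} plots per group")
```

An effect one-tenth of a standard deviation in size needs on the order of 1,500 plots per group to detect reliably, which is why modest field studies so often return ambiguous answers.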

So now we are in a box, a biodiversity conundrum. The simplest escape is to say that all species loss is undesirable in any ecosystem, a pontification that is more political than scientific. And, for a contrary view, if the species lost is a disease organism, or an insect that spreads human diseases, we will not mourn its passing. In practice we seem to agree with the public that the species of concern are not all of equal value for conservation. The most serious outcome of this consideration is that where the money goes for conservation is highly idiosyncratic. There are two major calls for funding that perhaps should not be questioned: first, for land (and water) acquisition and protection, and second, for providing the people whose livelihoods are affected by protected areas with compensation, jobs, and skills that improve their lives. The remaining funds need to be used for scientific research that will further the cause of conservation in the broad sense. The most useful principle at this stage is that all research should have a clear objective and a clear list of outcomes that can be used to judge its success. For conservation outcomes this judgement should be clear cut. Currently it is not.

When Caughley (1994) described the declining population paradigm and the small population paradigm, he clearly felt that the small population paradigm, while theoretically interesting, had little to contribute to most of the real-world problems of biodiversity conservation. He could not have imagined at the time how genetics would develop into a powerful set of methods for analyzing genomes. But with a few exceptions, the small population paradigm and all the elegant genetic work that has sprung from it have delivered a mountain of descriptive information with only a molehill of useful management options for real-world problems. Many will disagree with my conclusion, and it is clear that conservation genetics is a major growth industry. That is all well and good, but my question remains as to its influence on the solution of current conservation problems (Caro 2008; Hutchings 2015; Mattsson et al. 2008). Conservation genetics papers predicting extinctions in 100 years or more based on low levels of genetic variation are not scientifically testable and rely on a law of conservation genetics that is riddled with exceptions (Nathan et al. 2015; Robinson et al. 2016). Do we need more untestable hypotheses in conservation biology?

Caro, T. 2008. Decline of large mammals in the Katavi-Rukwa ecosystem of western Tanzania. African Zoology 43(1): 99-116. doi:10.3377/1562-7020(2008)43[99:dolmit]2.0.co;2.

Caughley, G. 1994. Directions in conservation biology. Journal of Animal Ecology 63: 215-244. doi: 10.2307/5542

Hutchings, J.A. 2015. Thresholds for impaired species recovery. Proceedings of the Royal Society. B, Biological sciences 282(1809): 20150654. doi:10.1098/rspb.2015.0654.

Mattsson, B.J., Mordecai, R.S., Conroy, M.J., Peterson, J.T., Cooper, R.J., and Christensen, H. 2008. Evaluating the small population paradigm for rare large-bodied woodpeckers, with implications for the Ivory-billed Woodpecker. Avian Conservation and Ecology 3(2): 5. http://www.ace-eco.org/vol3/iss2/art5/

Nathan, H.W., Clout, M.N., MacKay, J.W.B., Murphy, E.C., and Russell, J.C. 2015. Experimental island invasion of house mice. Population Ecology 57(2): 363-371. doi:10.1007/s10144-015-0477-2.

Robinson, J.A., Ortega-Del Vecchyo, D., Fan, Z., Kim, B.Y., and vonHoldt, B.M. 2016. Genomic flatlining in the endangered Island Fox. Current Biology 26(9): 1183-1189. doi:10.1016/j.cub.2016.02.062.

Climate Change and Ecological Science

One dominant paradigm of the ecological literature at the present time is what I would like to call the Climate Change Paradigm. In its clearest form, it states that all temporal ecological changes now observed are explicable by climate change. The test of this hypothesis is typically a correlation between some event, such as a population decline, an invasion of a new species into a community, or the outbreak of a pest species, and some measure of climate. Given clever statistics and a sufficient search of many climatic measurements, with and without time lags, these correlations are often sanctified by p < 0.05. Should we consider this progress in ecological understanding?

An early confusion in relating climate fluctuations to population changes arose from labelling climate a density-independent factor within the density-dependent model of population dynamics. Fortunately, this massive confusion was sorted out by Enright (1976), but alas I still see the error repeated in recent papers about population change. I think much of the early confusion about climatic impacts on populations was due to classifying all climatic impacts as density-independent factors.

One’s first response might be that many of the changes we see in populations and communities are indeed related to climate change. But the key is to validate this conclusion, and to do that we need to talk about the mechanisms by which climate change acts on our particular species or species group. The search for these mechanisms is much more difficult than the demonstration of a correlation. To become more convincing, one might predict that the observed correlation will continue for the next 5 (10, 20?) years and then gather the data to validate the correlation. Many of these published correlations are so weak as to preclude any possibility of validation within the lifetime of a research scientist. So the gold standard must be the deciphering of the mechanisms involved.

And a major concern is that many of the validations of the climate change paradigm on short time scales are likely to be spurious correlations. Those who need a good laugh over the issue of spurious correlation should look at Vigen (2015), a book which illustrates all too well the fun of looking for silly correlations. Climate is a very complex variable, and a nearly infinite number of measurements can be concocted from temperature (mean, minimum, maximum), rainfall, snowfall, or wind, analyzed over any number of time periods throughout the year. We are always warned about data dredging, but it is often difficult to know exactly what the authors of any particular paper have done. The most extreme examples are possible to spot, and my favorite is this quotation from a paper a few years ago:

“A total of 864 correlations in 72 calendar weather periods were examined; 71 (eight percent) were significant at the p < 0.05 level. …There were 12 negative correlations, p < 0.05, between the number of days with (precipitation) and (a demographic measure). A total of 45 positive correlations, p < 0.05, between temperatures and (the same demographic measure) were disclosed…..”
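
To see why results like this are expected from chance alone, here is a minimal simulation (my own illustration, not a reanalysis of the quoted paper) in which 864 random “weather” variables are correlated with a demographic measure that is pure noise; roughly 5% of the tests pass p < 0.05 anyway:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years, n_weather_vars = 20, 864
demographic_measure = rng.normal(size=n_years)   # random, unrelated to any weather

significant = 0
for _ in range(n_weather_vars):
    weather = rng.normal(size=n_years)           # one random "weather period"
    r, p = stats.pearsonr(weather, demographic_measure)
    significant += p < 0.05

print(f"{significant} of {n_weather_vars} correlations significant at p < 0.05 "
      f"({100 * significant / n_weather_vars:.0f}%, roughly the 5% expected by chance)")
```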

The climate change paradigm is well established in biogeography, and the major shifts in vegetation that have occurred in geological time are well correlated with climatic changes. But it is a large leap of faith to scale this well-established framework down to a local spatial scale and a short-term time scale. There is no question that local short-term climate changes can explain many changes in populations and communities, but any analysis of these kinds of effects must consider alternative hypotheses and mechanisms of change. Berteaux et al. (2006) pointed out the differences between forecasting and prediction in climate models. We need predictive models if we are to improve ecological understanding, and Berteaux et al. (2006) suggested that predictive models are successful if they follow three rules:

(1) Initial conditions of the system are well described (inherent noise is small);

(2) No important variable is excluded from the model (boundary conditions are defined adequately);

(3) Variables used to build the model are related to each other in the proper way (aggregation/representation is adequate).

Like most rules for models, whether these conditions are met is rarely known when the model is published, and we need subsequent data from the real world to see if the predictions are correct.

I am much less convinced that forecasting models are useful in climate research. Forecasting models describe an ecological situation based on correlations among the available measurements, with no clear mechanistic model of the ecological interactions involved. My concern was highlighted in a paper by Myers (1998), who investigated, for fish populations, the success of published correlations between juvenile recruitment and environmental factors (typically temperature) and found that very few forecasting models were reliable when tested against additional data obtained after publication. It would be useful for someone to carry out a similar analysis for bird and mammal population models.
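
The kind of check Myers advocated can be sketched very simply: fit the environment-recruitment correlation to the early years and see whether it predicts the later years. The code below uses simulated data and is only an illustration of the idea, not Myers’ actual analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = 30
temperature = rng.normal(10, 2, years)
# Suppose recruitment is only weakly related to temperature, with much noise.
recruitment = 100 + 1.0 * temperature + rng.normal(0, 20, years)

# "Publish" a correlation based on the first 15 years, then test it on the next 15.
train, test = slice(0, 15), slice(15, None)
slope, intercept, r_train, *_ = stats.linregress(temperature[train], recruitment[train])
predicted = intercept + slope * temperature[test]
r_test, p_test = stats.pearsonr(predicted, recruitment[test])

print(f"correlation in first 15 years: r = {r_train:.2f}")
print(f"out-of-sample correlation in next 15 years: r = {r_test:.2f} (p = {p_test:.2f})")
```

A correlation that looks convincing in the training years will often shrink badly, or vanish, when confronted with data collected after publication, which is exactly the pattern Myers documented for fish stocks.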

Small mammals show some promise for predictive models in some ecosystems. The analysis by Kausrud et al. (2008) illustrates a good approach to incorporating climate into predictive explanations of population change in Norwegian lemmings, explanations that involve interactions between climate and predation. The best approach in developing these kinds of explanations and formulating them into models is to determine how the model performs when additional data are obtained in the years following publication.

The bottom line is to avoid spurious climatic correlations by describing and evaluating mechanistic models that are based on observable biological factors. And then make predictions that can be tested in a realistic time frame. If we cannot do this, we risk publishing fairy tales rather than science.

Berteaux, D., et al. (2006) Constraints to projecting the effects of climate change on mammals. Climate Research, 32, 151-158. doi: 10.3354/cr032151

Enright, J. T. (1976) Climate and population regulation: the biogeographer’s dilemma. Oecologia, 24, 295-310.

Kausrud, K. L., et al. (2008) Linking climate change to lemming cycles. Nature, 456, 93-97. doi: 10.1038/nature07442

Myers, R. A. (1998) When do environment-recruitment correlations work? Reviews in Fish Biology and Fisheries, 8, 285-305. doi: 10.1023/A:1008828730759

Vigen, T. (2015) Spurious Correlations, Hyperion, New York City. ISBN: 978-031-633-9438

On Disease Ecology

One of the sleepers in population dynamics has always been the role of disease in population limitation and population fluctuations. Part of the reason for this is that disease studies need cooperation between skilled ecologists and skilled microbiologists. Another problem is the possibility of infinite regress in looking for disease agents as a cause of population change in natural populations – e.g. if it is not virus X, there are hundreds of other viruses that might be the culprit. In both North America and Europe one focus of concern has been the hantavirus group (Luis et al. 2010; Mills et al. 2010; Davis et al. 2005; Mills et al. 1999). Hantaviruses come in many different forms and are typically carried by rodent species. Some varieties produce hemorrhagic fever with renal syndrome in Europe, Asia and Africa, but in the Americas the main disease of concern is hantavirus pulmonary syndrome (HPS). It is no surprise that emerging diseases are often studied only because some humans die from them. As of 2016, 690 cases of hantavirus pulmonary syndrome had been recorded in the USA, and 36% of these cases resulted in death. The reverse question of what the disease is doing to the animal population typically gets rather less attention than the human disease problem. The example I want to discuss here is the Sin Nombre virus (SNV) in deer mice (Peromyscus spp.), widespread rodents in North America.

The hantavirus outbreak in the southwestern USA in the 1990s caused numerous human deaths and produced a number of field studies that showed a patchy pattern of infection among deer mice in Arizona and Colorado (Mills et al. 1999). Male mice were infected more often than females, and the suggestion was that males fighting for territories were infecting one another directly when population densities were high. The call for long-term studies went out, and several 3-5 year studies were carried out in the late 1990s, until infection of the human population became less of an issue compared with other diseases, such as Ebola, in other parts of the world. The shift in concern resulted in reduced funding for field studies in North America.

In 1994 Rick Douglass and his research team began long-term studies of Sin Nombre virus in deer mice using 18 live-trapping areas of 1 ha each, spread across Montana and placed in a variety of habitats (Douglass et al. 2001). Long-term for their study meant 15 years, all this at a time when 2-3 year studies were thought to be sufficient to unravel the nexus of infection and transmission. The idea was to complement in Montana similar rodent research in Arizona, New Mexico, and Colorado. The results are fascinating because they illustrate the value of long-term research and what a well-designed field study can produce.

Rightly, many of the hantavirus studies were focused on the human connection, but what I want to emphasize here is the impact of this virus on the rodent populations. Luis et al. (2012) estimated that seropositive male Peromyscus had their monthly survival rate reduced from 0.67 to 0.58, a 13% reduction, whereas females showed no effect of hantavirus on survival, so that infected and uninfected females survived equally well. Hantavirus does reduce the body growth rates of infected male mice. One consequence of these findings is that the growth rate of Peromyscus populations in Montana should be only slightly affected by hantavirus infections, since it is the female component of the population that drives numbers. There are limitations to these conclusions, since juveniles too young to live-trap could suffer mortality that at present cannot be measured. The threshold for hantavirus transmission in these Peromyscus populations was about 17 individuals per ha (Luis et al. 2015), implying that hantavirus would disappear in populations smaller than this because it would not be transmitted. The consequence for us is that human hantavirus infections in North America are much more likely when deer mouse populations are high, and by monitoring deer mice ecologists can broadcast warnings when there is an increased possibility of infection with this lethal disease.
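
The survival arithmetic is worth making explicit. The sketch below is my own calculation from the two monthly rates quoted above, not an analysis from Luis et al.; it shows the relative reduction and what the difference implies when compounded over 12 months:

```python
# Monthly survival rates for male deer mice, from the figures quoted in the text.
monthly_uninfected = 0.67
monthly_infected = 0.58

relative_reduction = (monthly_uninfected - monthly_infected) / monthly_uninfected
annual_uninfected = monthly_uninfected ** 12   # probability of surviving 12 months
annual_infected = monthly_infected ** 12

print(f"relative reduction in monthly survival: {100 * relative_reduction:.0f}%")
print(f"chance of surviving a full year: {annual_uninfected:.4f} (uninfected) "
      f"vs {annual_infected:.4f} (infected)")
```

Compounded over a year, the modest monthly difference becomes a several-fold difference in the chance that an infected male is still alive, yet because females are unaffected the population growth rate changes little.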

The details of the Sin Nombre hantavirus in North America are well covered in these and other papers. The most important general message from this research is the need for long-term studies to get at what might initially seem to be a simple population problem (Carver et al. 2015). There are a host of other viruses that infect rodent species and many other mammals and birds about which we know very little. The path to understanding the effects of these viruses on the animals they infect, and their potential for human transmission, will require much detailed work over a longer time period than the current funding horizon of our granting agencies. The Montana studies of the Sin Nombre virus required ecologists to trap for 20 years, with more than 851,000 trap nights to catch 16,608 deer mice and collect 10,572 blood samples to assess infections and gain an understanding of this virus disease. The problem too often is that it is easy to find ecologists and virologists keen to cooperate in these studies of disease, but it is not easy to find the long-term funding that addresses these ecological problems on a time scale of 10-20 years or more. We need much more long-term thinking about ecological problems, and the funding to support team efforts on difficult problems that are not soluble in a 3-year time frame.

Carver, S., Mills, J.N., Parmenter, C.A., Parmenter, R.R., Richardson, K.S., Harris, R.L., Douglass, R.J., Kuenzi, A.J., and Luis, A.D. 2015. Toward a mechanistic understanding of environmentally forced zoonotic disease emergence: Sin Nombre hantavirus. BioScience 65(7): 651-666. doi: 10.1093/biosci/biv047.

Davis, S., Calvet, E., and Leirs, H. 2005. Fluctuating rodent populations and risk to humans from rodent-borne zoonoses. Vector-Borne and Zoonotic Diseases 5(4): 305-314.

Douglass, R.J., Wilson, T., Semmens, W.J., Zanto, S.N., Bond, C.W., Van Horn, R.C., and Mills, J.N. 2001. Longitudinal studies of Sin Nombre virus in deer mouse-dominated ecosystems of Montana. American Journal of Tropical Medicine and Hygiene 65(1): 33-41.

Luis, A.D., Douglass, R.J., Hudson, P.J., Mills, J.N., and Bjørnstad, O.N. 2012. Sin Nombre hantavirus decreases survival of male deer mice. Oecologia 169(2): 431-439. doi: 10.1007/s00442-011-2219-2.

Luis, A.D., Douglass, R.J., Mills, J.N., and Bjørnstad, O.N. 2010. The effect of seasonality, density and climate on the population dynamics of Montana deer mice, important reservoir hosts for Sin Nombre hantavirus. Journal of Animal Ecology 79(2): 462-470. doi: 10.1111/j.1365-2656.2009.01646.x.

Luis, A.D., Douglass, R.J., Mills, J.N., and Bjørnstad, O.N. 2015. Environmental fluctuations lead to predictability in Sin Nombre hantavirus outbreaks. Ecology 96(6): 1691-1701. doi: 10.1890/14-1910.1.

Mills, J.N., Amman, B.R., and Glass, G.E. 2010. Ecology of hantaviruses and their hosts in North America. Vector-Borne and Zoonotic Diseases 10(6): 563-574. doi: 10.1089/vbz.2009.0018.

Mills, J.N., Ksiazek, T.G., Peters, C.J., and Childs, J.E. 1999. Long-term studies of hantavirus reservoir populations in the southwestern United States: a synthesis. Emerging Infectious Diseases 5(1): 135-142.

On Critical Questions in Biodiversity and Conservation Ecology

Biodiversity can be a vague concept, with so many measurement variants that one wonders what exactly it is, and how to incorporate ideas about biodiversity into scientific hypotheses. Even if we take the simplest concept of species richness as the operational measure, many questions arise about the importance of the rare species that make up most of the biodiversity but so little of the biomass. How can we proceed to a better understanding of this nebulous ecological concept that we continually put before the public as needing their attention?

Biodiversity conservation relies on community and ecosystem ecology for guidance on how to advance scientific understanding. A recent paper by Turkington and Harrower (2016) articulates this very clearly by laying out 7 general questions for analyzing community structure for the conservation of biodiversity. As such, these questions are a general model for the community and ecosystem ecology approaches that are needed in this century, so it pays to look at them more closely and to read this new paper. Here is the list of 7 questions from the paper:

  1. How are natural communities structured?
  2. How does biodiversity determine the function of ecosystems?
  3. How does the loss of biodiversity alter the stability of ecosystems?
  4. How does the loss of biodiversity alter the integrity of ecosystems?
  5. Diversity and species composition
  6. How does the loss of species determine the ability of ecosystems to respond to disturbances?
  7. How does food web complexity and productivity influence the relative strength of trophic interactions and how do changes in trophic structure influence ecosystem function?

Turkington and Harrower (2016) note that each of these 7 questions can be asked in at least 5 different contexts in the biodiversity hotspots of China:

  1. How do the observed responses change across the 28 vegetation types in China?
  2. How do the observed responses change from the low productivity grasslands of the Qinghai Plateau to higher productivity grasslands in other parts of China?
  3. How do the observed responses change along a gradient in the intensity of human use or degradation?
  4. How long should an experiment be conducted given that the immediate results are seldom indicative of longer-term outcomes?
  5. How does the scale of the experiment influence treatment responses?

There are major problems in all of this, as Turkington and Harrower (2016) and Bruelheide et al. (2014) have discussed. The first problem is to determine what the community is, or what the bounds of an ecosystem are. This is a trivial issue according to community and ecosystem ecologists: all one does is draw a circle around the particular area of interest for the study. But two points remain. First, populations, communities, and ecosystems are open systems with no clear boundaries. In population ecology we can master this problem by analyzing the movements and dispersal of individuals. On a short time scale plants in communities are fixed in position, while their associated animals move on species-specific scales. Second, communities and ecosystems are not units but vary continuously in space and time, making their analysis difficult. The species present on 50 m² are not the same as those on another plot 100 m or 1000 m away, even if the vegetation types are labeled the same. So we replicate plots within what we define to be our community. If you are studying plant dynamics, you can experimentally place all the selected plant species in defined plots in a pre-arranged configuration for your planting experiments, but you cannot do this with animals except in microcosms. All experiments are place specific, and if you consider climate change on a 100-year time scale, they are also time specific. We can hope that generality is strong and our conclusions will apply in 100 years, but we do not know this now.

But we can do manipulative experiments, as these authors strongly recommend, and that brings a whole new set of problems, outlined for example in Bruelheide et al. (2014, Table 1, page 78) for a forestry experiment in southern China. Decisions about how many tree species to manipulate, in what size of plot, and at what planting density are all potentially critical to the conclusions we reach. But it is the time frame of hypothesis testing that is the great unknown. All these studies must be long-term, but whether that means 10 years or 50 years can only be discovered in retrospect. Is it better to have, for example, forestry experiments around the world carried out with identical protocols, or to adopt a laissez-faire approach with different designs, since we have no idea yet which design is best for answering these broad questions?
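
A back-of-envelope calculation makes the design problem concrete. The numbers below are purely hypothetical and are not the design choices of Bruelheide et al. (2014):

```python
# Hypothetical factorial design: even modest choices multiply into many plots,
# before any site, block, or soil structure is added.
richness_levels = 4      # e.g. 1, 4, 8, 16 tree species per plot
planting_densities = 3
plot_sizes = 3
replicates = 5

plots = richness_levels * planting_densities * plot_sizes * replicates
print("plots required:", plots)   # 4 * 3 * 3 * 5 = 180
```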

I suspect that this outline of the broad questions given in Turkington and Harrower (2016) is at least a 100-year agenda, and we need to be concerned about how to carry it forward in a world where research funding operates on a 3- to 5-year time frame. The only possible way forward, until we win the lottery, is for all researchers to carry out short-term experiments on very specific hypotheses within this framework. So every graduate student thesis in experimental community and ecosystem ecology is important to achieving the goals outlined in these papers. Even if this 100-year time frame is optimistic, we can progress on a shorter time scale through a series of detailed experiments on small parts of the community or ecosystem at hand. I note that some of the broad questions listed above have been around for more than 50 years without being answered. If we define our objectives more precisely and do the kinds of experiments these authors suggest, we can move forward, not so much with the solution of grand ideas as with detailed experimental data on very precise questions about our chosen community. In this way we keep the long-range goal posts in view but concentrate on short-term manipulative experiments that are place- and time-specific.

This will not be easy. Birds are probably the best-studied group of animals on Earth, and many species are now changing in abundance dramatically over large spatial scales (e.g. http://www.stateofcanadasbirds.org/). I am sobered by asking avian ecologists why a particular species is declining or increasing so dramatically: I never get a good answer, only a generally plausible idea, a hand-waving explanation based on correlations that are not measured or well understood. Species recovery plans are often based on hunches rather than good data, with few of the key experiments of the kind requested by Turkington and Harrower (2016). At the moment the world is changing rather faster than our understanding of the ecological interactions that tie species together in communities and ecosystems. We are walking when we need to be running, and even the Red Queen is not keeping up.

Bruelheide, H. et al. 2014. Designing forest biodiversity experiments: general considerations illustrated by a new large experiment in subtropical China. Methods in Ecology and Evolution, 5, 74-89. doi: 10.1111/2041-210X.12126

Turkington, R. & Harrower, W.L. 2016. An experimental approach to addressing ecological questions related to the conservation of plant biodiversity in China. Plant Diversity, 38, 1-10. Available at: http://journal.kib.ac.cn/EN/volumn/current.shtml

Hypothesis testing using field data and experiments is definitely NOT a waste of time

At the ESA meeting in 2014 Greg Dwyer (University of Chicago) gave a talk titled “Trying to understand ecological data without mechanistic models is a waste of time.” This theme was recently reiterated on Dynamic Ecology, the blog of Jeremy Fox, Brian McGill and Meghan Duffy (25 January 2016, https://dynamicecology.wordpress.com/2016/01/25/trying-to-understand-ecological-data-without-mechanistic-models-is-a-waste-of-time/). Some immediate responses to that post raised questions such as “What is a mechanistic model?”, “What about the use of inappropriate statistics to fit mechanistic models?”, and “What about prediction versus description from mechanistic models?” All of these are relevant and interesting issues in judging the value of mechanistic models.

The biggest fallacy, however, in this blog post, or at least in its title, is the implication that field ecological data are collected in a vacuum. Hypotheses are models, conceptual models, and it is only in the absence of hypotheses that trying to understand ecological data is a “waste of time”. Research proposals that fund field work demand testable hypotheses, and testing hypotheses advances science. Research using mechanistic models should also develop testable hypotheses, but mechanistic models are certainly not the only route to hypothesis creation or testing.

Unfortunately, mechanistic models rarely identify how the robustness and generality of the model output could be tested with ecological data, and they often fail to describe comprehensively the many assumptions made in constructing the model. In fact, they are often presented as complete descriptions of the ecological relationships in question, and methods for model validation are not discussed. Sometimes modelling papers include blatantly unrealistic functions to simplify ecological processes, without exploring the sensitivity of the results to those functions.

I can refer to my own area of research expertise, population cycles, for an example here. It is not enough, for example, to have a pattern of ups and downs with a 10-year periodicity to claim that a model is an acceptable representation of the cyclic population dynamics of a forest lepidopteran or of snowshoe hares. There are many ways to get cyclic dynamics in modeled systems, as the sketch below illustrates. Scientific progress and understanding can only be made if the outcomes of conceptual, mechanistic, or statistical models define the hypotheses that could be tested and the experiments that could be conducted to support acceptance, rejection, or modification of the model, and thus inform understanding of natural systems.
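
To make this point concrete, here is a minimal sketch, not any published gypsy moth or snowshoe hare model, with purely hypothetical parameter values and illustrative helper functions: even a single-species model with delayed density dependence generates sustained multi-year cycles, so a cyclic trajectory on its own is weak evidence for any particular mechanism.

```python
# A toy single-species Ricker model with a lag in density dependence (hypothetical
# parameters).  No predators, pathogens, or plant defences are included, yet the
# output still cycles with a multi-year period.
import numpy as np

def delayed_ricker(r=1.0, k=100.0, lag=2, years=300):
    """N[t+1] = N[t] * exp(r * (1 - N[t-lag]/k)); the time lag generates the cycles."""
    n = [50.0] * (lag + 1)
    for t in range(lag, years):
        n.append(n[t] * np.exp(r * (1.0 - n[t - lag] / k)))
    return np.array(n)

def cycle_period(series, burn_in=100, max_lag=20):
    """Rough cycle length: first local maximum of the autocorrelation after lag 1."""
    x = series[burn_in:] - series[burn_in:].mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:] / np.dot(x, x)
    for lag in range(2, max_lag):
        if acf[lag] > acf[lag - 1] and acf[lag] > acf[lag + 1]:
            return lag
    return None

print("approximate cycle period (years):", cycle_period(delayed_ricker()))
```

A fitted curve of this kind can mimic the observed periodicity quite well, which is precisely why the model output must also specify testable hypotheses about the mechanism before it can inform understanding of the natural system.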

How helpful are mechanistic models – the gypsy moth story

Given the implication of the blog post (or at least of its title, taken from Dwyer’s talk) that mechanistic models are the only way to ecological understanding, it is useful to look at models of gypsy moth dynamics, one of Greg’s areas of modeling expertise, with a view toward evaluating whether the model assumptions are compatible with real-world data (Dwyer et al. 2004, http://www.nature.com/nature/journal/v430/n6997/abs/nature02569.html).

Although there has been considerable excellent work on gypsy moth over the years, long-term population data are lacking. Population dynamics are therefore estimated from annual estimates of defoliation carried out by the US Forest Service in New England starting in 1924. These data show periods of non-cyclicity, two ten-year cycles (with peaks in 1981 and 1991, which Dwyer uses for comparison with the dynamics of a number of his mechanistic models), and harmonic 4-5 year cycles between 1943 and 1979 and since the 1991 outbreak. Based on these data, 10-year cycles are the exception, not the rule, for introduced populations of gypsy moth. Point 1. Many of the Dwyer mechanistic models were tested against the two outbreak periods and ignored more than 20 years of subsequent defoliation data that lack 10-year cycles. His results are therefore limited in their generality.

As a further example, a recent paper by Elderd et al. (2013) (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3773759/) explored the relationship between alternating long and short cycles of gypsy moth in oak-dominated forests by speculating that inducible tannins in oaks modify the interaction between gypsy moth larvae and viral infection. Although previous field experiments (D’Amico et al. 1998, http://onlinelibrary.wiley.com/doi/10.1890/0012-9658(1998)079%5b1104:FDDNAW%5d2.0.CO%3b2/abstract) concluded that gypsy moth defoliation does not affect tannin levels sufficiently to influence viral infection, Elderd et al. (2013) proposed that induced tannins in red oak foliage reduce the variation in viral infection levels and promote shorter cycles. In this study an experiment was conducted using jasmonic acid sprays to induce oak foliage. Point 2. This mechanistic model is based on experiments using artificially induced tannins as a mimic of the plant defenses induced by insect damage. However, earlier fieldwork showed that foliage damage does not influence virus transmission, and thus does not support the relevance of this mechanism.

In this model Elderd et al. (2013) use a linear relationship for viral transmission (transmission of infection as a function of baculovirus density) based on two data points and a zero intercept. In past mechanistic models, and in a number of other systems, the relationship between viral transmission and host density is nonlinear (D’Amico et al. 2005, http://onlinelibrary.wiley.com/doi/10.1111/j.0307-6946.2005.00697.x/abstract; Fenton et al. 2002, http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2656.2002.00656.x/full). Point 3. The data are insufficient to describe accurately the viral transmission relationship used in the model.
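
To see why two data points and a forced zero intercept cannot identify the shape of the transmission function, consider this small numerical sketch (the numbers are hypothetical, not those of Elderd et al. 2013): a straight line through the origin and a saturating curve can both pass exactly through the same two points yet make very different predictions where no data exist.

```python
# Hypothetical illustration (not the Elderd et al. 2013 data): a linear, zero-intercept
# transmission function and a saturating one both fit the same two points exactly,
# yet they diverge sharply at virus densities where nothing was measured.

# Two hypothetical measurements: (baculovirus density, transmission rate)
points = [(0.0, 0.0), (50.0, 0.30)]

# Linear form through the origin: nu(P) = beta * P
beta = points[1][1] / points[1][0]          # 0.006

# One saturating alternative: nu(P) = a * P / (b + P), with a and b chosen so that
# the curve also passes exactly through (50, 0.30).
a, b = 0.60, 50.0                           # 0.60 * 50 / (50 + 50) = 0.30

for P in (25.0, 50.0, 200.0):
    print(f"P = {P:5.0f}:  linear = {beta * P:.3f}   saturating = {a * P / (b + P):.3f}")
```

With only two points and an assumed intercept, the data cannot distinguish between these functional forms, yet the choice between them can change the modeled dynamics substantially.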

Finally, the Elderd et al. (2013) model considers two types of gypsy moth habitat: one composed of 43% oaks, which are inducible, and the other of 15% oaks, with the remainder of the forest made up of adjacent blocks of non-inducible pines. Data show that gypsy moth outbreaks are limited to areas with high frequencies of oaks. In mixed forests, pines are fed on only by later-instar larvae, and only once the oaks have been defoliated, and the pines would be interspersed among the oaks, not arranged in separate blocks as in the modeled population. Point 4. The patterns of forest composition in the models, which are crucial to the result, are unrealistic, and this makes interpretation of the results impossible.

Point 5 and conclusion. Because it can be very difficult to review critically someone else’s mechanistic model, as model assumptions are often hidden in supplementary material and hard to interpret, and because the relationships used in models are often chosen arbitrarily rather than based on available data, it would be easy to conclude that “mechanistic models are misleading and a waste of time”. But of course that would not be productive. My final point, therefore, is that closer collaboration between modelers and data collectors would be the best way to ensure that models are reasonable and accurate representations of the data; in this way both understanding and realistic prediction would be advanced. Unfortunately the great push to publish high-profile papers works against this collaboration, and manuscripts describing mechanistic models are rarely reviewed by data-savvy referees.

D’Amico, V., J. S. Elkinton, G. Dwyer, R. B. Willis, and M. E. Montgomery. 1998. Foliage damage does not affect within-season transmission of an insect virus. Ecology 79:1104-1110.

D’Amico, V., J. S. Elkinton, J. D. Podgwaite, J. P. Buonaccorsi, and G. Dwyer. 2005. Pathogen clumping: an explanation for non-linear transmission of an insect virus. Ecological Entomology 30:383-390.

Dwyer, G., J. Dushoff, and S. H. Yee. 2004. The combined effects of pathogens and predators on insect outbreaks. Nature 430:341-345.

Elderd, B. D., B. J. Rehill, K. J. Haynes, and G. Dwyer. 2013. Induced plant defenses, host–pathogen interactions, and forest insect outbreaks. Proceedings of the National Academy of Sciences 110:14978-14983.

Fenton, A., J. P. Fairbairn, R. Norman, and P. J. Hudson. 2002. Parasite transmission: reconciling theory and reality. Journal of Animal Ecology 71:893-905.