
Is Community Ecology Impossible?

John Lawton, writing in 1999 about general laws in ecological studies, stated:

“…. ecological patterns and the laws, rules and mechanisms that underpin them are contingent on the organisms involved, and their environment…. The contingency [due to different species’ attributes] becomes overwhelmingly complicated at intermediate scales, characteristic of community ecology, where there are a large number of case histories, and very little other than weak, fuzzy generalizations….. To discover general patterns, laws and rules in nature, ecology may need to pay less attention to the ‘middle ground’ of community ecology, relying less on reductionism and experimental manipulation, but increasing research efforts into macroecology.” (Lawton 1999, page 177)

There are two generalizations here to consider: first, that macroecology is the way forward, and second, that community ecology is a difficult area that can lead only to fuzzy generalizations. I will leave the macroecology issue for later and concentrate on the idea that community ecology can never develop general laws.

The last 15 years of ecological research have partly justified Lawton’s skepticism, because progress in community ecology has largely rested on local studies and local generalizations. One illustration of the difficulty of devising generalities is the controversy over the intermediate disturbance hypothesis (Schwilk, Keeley & Bond 1997; Wilkinson 1999; Fox 2013a; Fox 2013b; Kershaw & Mallik 2013; Sheil & Burslem 2013). In their recent review, Kershaw and Mallik (2013) concluded that confirmation of the intermediate disturbance hypothesis across all studies was around 20%; for terrestrial ecosystems only, support was about 50%. What should we do with hypotheses that fail as often as they succeed? That is perhaps a key question in community ecology. Kershaw and Mallik (2013) take the position that the intermediate disturbance hypothesis applies only to grassland communities of moderate productivity. The details here are not important; what is critical is the strategy of limiting a supposedly general hypothesis to a small set of communities. We are back to the issue of generality. It is certainly progress to set limits on particular hypotheses, but it does leave land managers hanging. Kershaw and Mallik (2013) state that the rationale for current forest harvesting models in the boreal forest relies on the intermediate disturbance hypothesis being correct for this ecosystem. Does this matter or not? I am not sure.

Prins and Gordon (2014) evaluated a whole series of hypotheses that represented the conventional wisdom in community ecology and concluded that much of what is accepted as well supported community ecological theory has only limited support. If this is accepted (and Simberloff (2014) does not accept it) we are left in an era of chaos in which practical ecosystem management has few clear models for how to proceed unless studies are available at the local level.

Should we conclude that community ecology is impossible? Certainly not, but it may be much more difficult than our simple models suggest, and the results of studies may be more local in application than our current overarching theories, like the intermediate disturbance hypothesis, would imply.

The devil is in the details again, and the most successful community ecological studies have essentially been population ecology studies writ large for the major species in the community. Evolution rears its ugly head to confound generalization. There is not, for example, a generalized large mammal predator in every community, and the species of predators that have evolved on different continents do not all follow the same ecological rules. Ecology may be more local than we would like to believe. Perhaps Lawton (1999) was right about community ecology.

Fox, J.W. (2013a) The intermediate disturbance hypothesis is broadly defined, substantive issues are key: a reply to Sheil and Burslem. Trends in Ecology & Evolution, 28, 572-573.

Fox, J.W. (2013b) The intermediate disturbance hypothesis should be abandoned. Trends in Ecology & Evolution, 28, 86-92.

Kershaw, H.M. & Mallik, A.U. (2013) Predicting plant diversity response to disturbance: Applicability of the Intermediate Disturbance Hypothesis and Mass Ratio Hypothesis. Critical Reviews in Plant Sciences, 32, 383-395.

Lawton, J.H. (1999) Are there general laws in ecology? Oikos, 84, 177-192.

Prins, H.H.T. & Gordon, I.J. (eds.) (2014) Invasion Biology and Ecological Theory: Insights from a Continent in Transformation.  Cambridge University Press, Cambridge. 540 pp.

Schwilk, D.W., Keeley, J.E. & Bond, W.J. (1997) The intermediate disturbance hypothesis does not explain fire and diversity pattern in fynbos. Plant Ecology, 132, 77-84.

Sheil, D. & Burslem, D.F.R.P. (2013) Defining and defending Connell’s intermediate disturbance hypothesis: a response to Fox. Trends in Ecology & Evolution, 28, 571-572.

Simberloff, D. (2014) Book Review: Herbert H. T. Prins and Iain J. Gordon (eds.): Invasion biology and ecological theory. Insights from a continent in transformation. Biological Invasions, 16, 2757-2759.

Wilkinson, D.M. (1999) The disturbing history of intermediate disturbance. Oikos, 84, 145-147.

On Adaptive Management

I was fortunate to be on the sidelines at UBC in the 1970s when Carl Walters, Ray Hilborn, and Buzz Holling developed and refined the ideas of adaptive management. Working mostly in a fisheries context in which management is both possible and essential, they developed a new paradigm for how to proceed in the management of natural resources to reduce or avoid the mistakes of the past (Walters & Hilborn 1978). Somehow it was one of those times in science where everything worked because these three ecologists were a near-perfect fit for one another, full of new ideas and inspired guesses about how to put their ideas into action. Many other scientists joined in, and Holling (1978) put this collaboration together in a book that can still be downloaded from the website of the International Institute for Applied Systems Analysis (IIASA) near Vienna:
(http://www.iiasa.ac.at/publication/more_XB-78-103.php)

Adaptive management became the new paradigm, now taken up with gusto by many natural resource and conservation agencies (Westgate, Likens & Lindenmayer 2013). Adaptive management can be carried out in two different ways. Passive adaptive management involves having a single model of the system being managed and manipulating the system in a series of ways that improve the model fit over time. Active adaptive management entertains several different models and uses contrasting management manipulations to decide which model best describes how the system operates. Both approaches aim to reduce uncertainty about how the system works and so define the limits of management options.
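To make the active version concrete, here is a minimal sketch (my own illustration, not code from Walters, Hilborn, or Holling) of the logic: two hypothetical models of a harvested population start with equal degrees of belief, contrasting harvest rates are applied as probing manipulations, and Bayes’ rule updates the belief in each model from the observed responses. The models, harvest rates, and all numbers are invented for illustration.

```python
# A minimal sketch of 'active' adaptive management: keep a degree of belief
# for each alternative model of the managed system and update it with
# Bayes' rule after each management manipulation. The two candidate models,
# the harvest rates, and all parameter values are hypothetical.
import math
import random

random.seed(1)

def model_compensatory(n, harvest):
    """Candidate model 1: strong density-dependent growth compensates for harvest."""
    return n + 0.30 * n * (1 - n / 1000) - harvest * n

def model_additive(n, harvest):
    """Candidate model 2: weaker growth, so harvest mortality is largely additive."""
    return n + 0.15 * n * (1 - n / 1000) - harvest * n

def true_system(n, harvest):
    """The real system (unknown to the manager): model 2 plus process noise."""
    return model_additive(n, harvest) + random.gauss(0, 20)

def likelihood(observed, predicted, sd=25.0):
    """Normal likelihood of the observed abundance given a model's prediction."""
    return math.exp(-0.5 * ((observed - predicted) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

models = {"compensatory": model_compensatory, "additive": model_additive}
belief = {name: 0.5 for name in models}   # equal prior degree of belief in each model
n = 600.0                                 # current population estimate

# Deliberately alternate contrasting harvest rates: the 'probing' manipulations.
for year, harvest in enumerate([0.05, 0.25, 0.05, 0.25, 0.05, 0.25], start=1):
    predictions = {name: f(n, harvest) for name, f in models.items()}
    n = true_system(n, harvest)           # what actually happens after the manipulation
    weights = {name: belief[name] * likelihood(n, pred)   # Bayes update
               for name, pred in predictions.items()}
    total = sum(weights.values())
    belief = {name: w / total for name, w in weights.items()}
    summary = ", ".join(f"{name} {b:.2f}" for name, b in belief.items())
    print(f"year {year}: harvest {harvest:.2f}, N {n:6.1f}, belief -> {summary}")
```

In the passive version one would keep a single model, update its parameters from whatever management actions happen to be taken, and forgo the deliberate probing.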

The message was (as they argued) nothing more than common sense: learn by doing. But common sense is uncommonly used, as we see too often even in the 21st century. Adaptive management became very popular in the 1990s, but while many took up the banner, relatively few cases have been successfully completed (Walters 2007; Westgate, Likens & Lindenmayer 2013). There are many reasons for this (discussed well in these two papers), not the least of which is the communication gap between research scientists and resource managers. Research scientists typically wish to test an ecological hypothesis by means of a management manipulation, but the resource manager may not be able to use that particular manipulation in practice because it costs too much. To be useful in the real world, any management experiment needs careful, long-term monitoring to track its outcome, and management agencies rarely have the capacity to carry out extensive monitoring. The underlying cause is thus mainly financial: resource agencies rarely have an adequate budget to cover the important wildlife and fisheries issues they are supposed to manage.

If anything, reading this ‘old’ literature should remind ecologists that the problems discussed are inherent in management and will not go away as we move into the era of climate change. Let me stop with a few of the guideposts from Holling’s book:

Treat assessment as an ongoing process…
Remember that uncertainties are inherent…
Involve decision makers early in the analysis…
Establish a degree of belief for each of your alternative models…
Avoid facile and narcotic compression of indicators such as cost/benefit ratios that are generally inappropriate for environmental problems….

And probably remind yourself that there can be wisdom in the elders….

The take-home message for me in re-reading these older papers on adaptive management is that the problem is similar to the one we have with models in ecology. We can produce simple models, or in this case solutions to management problems on paper, but getting them to work properly in the real world, where social viewpoints, political power, and scientific information collide, is extremely difficult. This is no reason to stop doing the best science we can and trying to weld it into management agencies. But it is easier said than done.

Holling, C.S. (1978) Adaptive Environmental Assessment and Management. John Wiley and Sons, Chichester, UK.

Walters, C.J. (2007) Is adaptive management helping to solve fisheries problems? Ambio, 36, 304-307.

Walters, C.J. & Hilborn, R. (1978) Ecological optimization and adaptive management. Annual Review of Ecology and Systematics, 9, 157-188.

Westgate, M.J., Likens, G.E. & Lindenmayer, D.B. (2013) Adaptive management of biological systems: A review. Biological Conservation, 158, 128-139.

On Repeatability in Ecology

One of the elementary lessons of statistics is that every measurement must be repeatable, so that differences or changes in an ecological variable can be interpreted with respect to some ecological or environmental mechanism. So if we count 40 elephants in one year and 80 in the following year, we can conclude that population abundance has changed, and we do not have to entertain the possibility that the repeatability of our counting method is so poor that 40 and 80 could refer to the same population size. Both precision and bias come into the discussion at this point. Much of the elaboration of ecological methods involves attempts to improve the precision of methods such as those for estimating abundance or species richness. There is less discussion of the problem of bias.
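As a back-of-the-envelope illustration (my own, with invented numbers, not taken from any of the papers cited here), one can ask how often a counting method with a given detection probability and coefficient of variation would yield counts as different as 40 and 80 from a population that has not changed at all:

```python
# A toy illustration of count repeatability: the same, unchanged population is
# surveyed twice with an imperfect method, and we ask how often the two counts
# differ by 2-fold or more purely through sampling error. The detection
# probability (bias) and coefficient of variation (precision) are invented.
import random

random.seed(42)

TRUE_N = 1000        # true, unchanged population size
DETECTION_P = 0.05   # average fraction of animals detected per survey (bias)
CV = 0.4             # coefficient of variation of the counting method (precision)

def one_count():
    """One survey: expected count is p*N, with Gaussian noise of SD = CV * expected."""
    expected = DETECTION_P * TRUE_N
    return max(0.0, random.gauss(expected, CV * expected))

trials = 100_000
twofold = 0
for _ in range(trials):
    c1, c2 = one_count(), one_count()
    low, high = min(c1, c2), max(c1, c2)
    if low > 0 and high / low >= 2.0:      # e.g. counts of 40 and 80
        twofold += 1

print(f"Repeat surveys differing 2-fold with no real change: {twofold / trials:.1%}")
```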

The repeatability that is most crucial in forging a solid science is that associated with experiments. We should not simply do an important experiment in a single place and then assume the results apply world-wide. Of course we do this, but we should always remember that it is a gigantic leap of faith. Ecologists are often unwilling to repeat critical experiments, in contrast to scientists in chemistry or molecular biology. Part of this reluctance is understandable because the costs associated with many important field experiments are large, and funding committees must then judge whether to repeat the old or fund the new. But if we do not repeat the old, we can never discover the limits of our hypotheses or generalizations. Given a limited amount of money, experimental designs often constrain the potential generality of the conclusions. Should you have 2 or 4 or 6 replicates? Should you have more replicates and fewer treatment sites or levels of manipulation? When we can, we try one way and then another to see if we get similar results.
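The replicate question can at least be explored on a computer before any plots are laid out. The sketch below (my own, assuming numpy and scipy are available; the effect size and between-plot variability are arbitrary) simulates how often a simple control-versus-treatment comparison detects a fixed effect with 2, 4, or 6 replicates per treatment:

```python
# A minimal power sketch for the replicate question: with a fixed treatment
# effect and between-plot variability (both invented), how often does a simple
# control-vs-treatment t-test detect the effect with 2, 4, or 6 replicates?
# Assumes numpy and scipy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

CONTROL_MEAN = 100.0      # e.g. plant biomass in control plots (arbitrary units)
TREATMENT_EFFECT = 30.0   # hypothetical true effect of the manipulation
SD_BETWEEN_PLOTS = 25.0   # hypothetical between-replicate standard deviation
SIMULATIONS = 5000

for n_rep in (2, 4, 6):
    detected = 0
    for _ in range(SIMULATIONS):
        control = rng.normal(CONTROL_MEAN, SD_BETWEEN_PLOTS, n_rep)
        treated = rng.normal(CONTROL_MEAN + TREATMENT_EFFECT, SD_BETWEEN_PLOTS, n_rep)
        _, p_value = stats.ttest_ind(control, treated)
        if p_value < 0.05:
            detected += 1
    print(f"{n_rep} replicates per treatment: power ~ {detected / SIMULATIONS:.2f}")
```

The trade-off between more replicates and more treatment levels can be explored the same way before committing scarce field money.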

A looming issue now is climate change, which means that the ecosystem studied in 1980 may be rather different from the one you now study in 2014, and the place someone manipulated in 1970 is not the same community you manipulated this year. The worst-case scenario would be to find out that you have to repeat the same experiment every ten years to check whether the whole response system has changed. That is impossible with current funding levels. How can we develop a robust set of generalizations or ‘theories’ in ecology if the world is changing so fast that the food webs we so carefully described have now broken down? I am not sure what the answers are to these difficult questions.

And then you pile evolution into the mix and wonder whether organisms can change, like Donelson et al.’s (2012) tropical reef fish, so that climate change might be less significant than we currently think, at least for some species. The frustration that ecologists now face over these issues with respect to ecosystem management boils over in debates like those on “novel ecosystems” (Hobbs et al. 2014; Aronson et al. 2014), which can be viewed either as critical decisions about how to think about environmental change or as a discussion about angels on pinheads.

Underlying all of this is the global issue of repeatability, and whether our current perceptions of how to manage ecosystems are sufficiently reliable to sidestep the adaptive management scenarios that seem so useful in theory (Conroy et al. 2011) but are at present rare in practice (Keith et al. 2011). The need for action in conservation biology seems to trump the need for repeatability to test the generalizations on which we base our management recommendations. The same pressure is apparent in all the sciences that affect humans directly. In agriculture we release new varieties of crops with minimal long-term study of their effects on the ecosystem, or we introduce new methods such as no-till agriculture without adequate study of their impacts on soil structure and pest species. This kind of hubris does guarantee long-term employment in mitigating adverse consequences, but it is perhaps not an optimal way to proceed in environmental management. We cannot follow the Hippocratic Oath in applied ecology because all our management actions create winners and losers, and ‘harm’ then becomes an opinion about how we designate the ‘winners’ and ‘losers’. Using social science is one way out of this dilemma, but history gives sparse support for the idea that ‘expert’ opinion produces good environmental action.

Aronson, J., Murcia, C., Kattan, G.H., Moreno-Mateos, D., Dixon, K. & Simberloff, D. (2014) The road to confusion is paved with novel ecosystem labels: a reply to Hobbs et al. Trends in Ecology & Evolution, 29, 646-647.

Conroy, M.J., Runge, M.C., Nichols, J.D., Stodola, K.W. & Cooper, R.J. (2011) Conservation in the face of climate change: The roles of alternative models, monitoring, and adaptation in confronting and reducing uncertainty. Biological Conservation, 144, 1204-1213.

Donelson, J.M., Munday, P.L., McCormick, M.I. & Pitcher, C.R. (2012) Rapid transgenerational acclimation of a tropical reef fish to climate change. Nature Climate Change, 2, 30-32.

Hobbs, R.J., Higgs, E.S. & Harris, J.A. (2014) Novel ecosystems: concept or inconvenient reality? A response to Murcia et al. Trends in Ecology & Evolution, 29, 645-646.

Keith, D.A., Martin, T.G., McDonald-Madden, E. & Walters, C. (2011) Uncertainty and adaptive management for biodiversity conservation. Biological Conservation, 144, 1175-1178.

On Research Questions in Ecology

I have done considerable research in arctic Canada on questions of population and community ecology, and perhaps because of this I get e-mails about new proposals. This one just arrived from a NASA program called ABoVE that is just now starting up.

“Climate change in the Arctic and Boreal region is unfolding faster than anywhere else on Earth, resulting in reduced Arctic sea ice, thawing of permafrost soils, decomposition of long-frozen organic matter, widespread changes to lakes, rivers, coastlines, and alterations of ecosystem structure and function. NASA’s Terrestrial Ecology Program is in the process of planning a major field campaign, the Arctic-Boreal Vulnerability Experiment (ABoVE), which will take place in Alaska and western Canada during the next 5 to 8 years.”

“The focus of this solicitation is the initial research to begin the Arctic-Boreal Vulnerability Experiment (ABoVE) field campaign — a large-scale study of ecosystem responses to environmental change in western North America’s Arctic and boreal region and the implications for social-ecological systems. The Overarching Science Question for ABoVE is: ‘How vulnerable or resilient are ecosystems and society to environmental change in the Arctic and boreal region of western North America?’”

I begin by noting that Peters (1991) wrote at length about the problems with these kinds of ‘how’ questions. First of all, note that this is not a scientific question: there is no conceivable way to answer it. It contains a set of words that are meaningless to an ecologist who is interested in testing alternative hypotheses.

One might object that this is not a research question but a broad-brush agenda for more detailed proposals that will be phrased in such a way as to become scientific questions. Yet it boggles the mind to ask how vulnerable ecosystems are to anything unless one is very specific. One has to define an ecosystem (difficult if it is an open system), then define what vulnerable means operationally, and then define what types of environmental change should be addressed – temperature, rainfall, pollution, CO2. And all of that over the broad expanse of arctic and boreal western North America, a sampling problem on a gigantic scale. Yet an administrator or politician could reasonably ask at the end of this program, ‘Well, what is the answer to the question?’ The answer might be ‘quite vulnerable’, and then we could go on endlessly with meaningless questions and answers that might pass for science on Fox News but not, I would hope, at the ESA. We can in fact measure how primary production changes over time and how much CO2 is sequestered or released from the soils of the arctic and boreal zone, but how do we translate this into resilience, another empirically undefined ecological concept?

We could attack the question retrospectively by asking, for example: how resilient have arctic ecosystems been to the environmental changes of the past 30 years? We can document that shrubs have increased in abundance and biomass in some areas of the arctic and boreal zone (Myers-Smith et al. 2011), but what does that mean for the ecosystem, or for society in particular? We could also note that there are almost no data on these questions because funding for northern science has been pitiful. And if the changes we are asking about occur on a time scale of 30 or 50 years, how will we ever keep monitoring them over that time frame when research funding is doled out in 3- and 5-year blocks?

The problem of tying together ecosystems and society is that they operate on different time scales of change. Ecosystem changes in terrestrial environments of the North are slow, societal changes are fast and driven by far more obvious pressures than ecosystem changes. The interaction of slow and fast variables is hard enough to decipher scientifically without having many external inputs.

So perhaps in the end this Arctic-Boreal Vulnerability Experiment (another misuse of the word ‘experiment’) will just describe a long-term monitoring program and provide the funding for much clever ecological research, asking specific questions about exactly what parts of what ecosystems are changing and what the mechanisms of change involve. Every food web in the North is a complex network of direct and indirect interactions, and I do not know anyone who has a reliable enough understanding to predict how vulnerable any single element of the food web is to climate change. Like medieval scholars we talk much about changes of state, regime shifts, or tipping points, armed with a model of how the world should work but with little long-term data with which to even begin to answer these kinds of political questions.

My hope is that this and other programs will generate some funding that will allow ecologists to do some good science. We may be fiddling while Rome is burning, but at any rate we could perhaps understand why it is burning. That also raises the issue of whether or not understanding is a stimulus for action on items that humans can control.

Myers-Smith, I.H., et al. (2011) Expansion of canopy-forming willows over the 20th century on Herschel Island, Yukon Territory, Canada. Ambio, 40, 610-623.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp.

On Indices of Population Abundance

I am often surprised at ecological meetings by how many studies rely on indices rather than direct measures. The most obvious cases involve population abundance. Two common criteria for declaring a species endangered are that its population has declined more than 70% in the last ten years (or three generations) or that its population size is less than 2500 mature individuals. The criteria are many, and every attempt is made to make them quantitative. But too often the methods used to estimate changes in population abundance are based on an index of population size, and all too rarely is the index calibrated against known abundances. If an index increases 2-fold, e.g. from 20 to 40 counts, it is not at all clear that the population size has increased 2-fold. I think many ecologists begin their careers thinking that indices are useful and reliable, and end their careers wondering whether indices are providing a correct picture of population changes.

The subject of indices has been discussed many times in ecology, particularly among applied ecologists. Anderson (2001, p. 1295) challenged wildlife ecologists to remember that indices contain an unmeasured term, detectability:

“While common sense might suggest that one should estimate parameters of interest (e.g., population density or abundance), many investigators have settled for only a crude index value (e.g., “relative abundance”), usually a raw count. Conceptually, such an index value (c) is the product of the parameter of interest (N) and a detection or encounter probability (p): then c = pN.”

He noted that many indices used by ecologists rest on the large assumption that the probability of encounter is constant over time, space, and individual observers. Much of the subsequent discussion of detectability flowed from these early papers (Williams, Nichols & Conroy 2002; Southwell, Paxton & Borchers 2008). There is an interesting exchange over Anderson’s (2001) paper by Engeman (2003), followed by a retort by Anderson (2003) that ended with this blast at small mammal ecologists:

“Engeman (2003) notes that McKelvey and Pearson (2001) found that 98% of the small-mammal studies reviewed resulted in too little data for valid mark-recapture estimation. This finding, to me, reflects a substantial failure of survey design if these studies were conducted to estimate population size. … O’Connor (2000) should not wonder “why ecology lags behind biology” when investigators of small-mammal communities commonly (i.e., over 700 cases) achieve sample sizes <10. These are empirical methods; they cannot be expected to perform well without data.” (page 290)

Take that, you small mammal trappers!

The warnings about index data are clear. In some cases indices may be useful, but they should never be used as population abundance estimates without careful validation. Even by small mammal trappers like me.
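To see how easily an uncalibrated index misleads, take Anderson’s c = pN and let the detection probability drift between surveys; the abundances and detection probabilities below are invented for illustration:

```python
# Anderson's point in miniature: an index count c is the product of the true
# abundance N and a detection probability p (c = pN). If p drifts between
# surveys, a change in the index need not track a change in N at all.
# All abundances and detection probabilities are invented for illustration.

surveys = [
    # (year, true abundance N, detection probability p)
    (2012, 400, 0.05),   # animals hard to see
    (2013, 400, 0.10),   # same N, but animals twice as detectable
    (2014, 200, 0.20),   # N halved, detectability doubled again
]

previous = None
for year, true_n, p in surveys:
    count = p * true_n                     # the index value, c = pN
    if previous is None:
        print(f"{year}: index = {count:.0f}   (true N = {true_n})")
    else:
        print(f"{year}: index = {count:.0f}   "
              f"({count / previous:.1f}-fold change in index, true N = {true_n})")
    previous = count
```

Here the index doubles from 20 to 40 while the true population is unchanged, and then stays at 40 while the true population halves.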

Anderson, D.R. (2001) The need to get the basics right in wildlife field studies. Wildlife Society Bulletin, 29, 1294-1297.

Anderson, D.R. (2003) Index values rarely constitute reliable information. Wildlife Society Bulletin, 31, 288-291.

Engeman, R.M. (2003) More on the need to get the basics right: population indices. Wildlife Society Bulletin, 31, 286-287.

McKelvey, K.S. & Pearson, D.E. (2001) Population estimation with sparse data: the role of estimators versus indices revisited. Canadian Journal of Zoology, 79, 1754-1765.

O’Connor, R.J. (2000) Why ecology lags behind biology. The Scientist, 14, 35.

Southwell, C., Paxton, C.G.M. & Borchers, D.L. (2008) Detectability of penguins in aerial surveys over the pack-ice off Antarctica. Wildlife Research, 35, 349-357.

Williams, B.K., Nichols, J.D. & Conroy, M.J. (2002) Analysis and Management of Animal Populations. Academic Press, New York.

On Political Ecology

When I give a general lecture now, I typically have to inform the audience that I am talking about scientific ecology, not political ecology. What is the difference? Scientific ecology is classical, boring science: stating hypotheses, doing experiments or making observations to gather the data, testing the idea, and accepting or rejecting it, as outlined clearly in many papers (Platt 1964; Wolff and Krebs 2008).

Scientific ecology is clearly out-of-date, and no longer ‘cool’ when compared to the new political ecology.

Political ecology is a curious mix of traditional ecology and advocacy for protecting biodiversity. It is aimed at convincing society in general, and politicians in particular, to protect the Earth’s biodiversity. This is a noble cause, and my complaint is only that when we use scientific ecology in pursuit of a political agenda we should be scientifically rigorous. Yet much of biodiversity science is a mix of belief and evidence, with unsuitable evidence used in support of what is a noble belief. If we believed that the end justifies the means, we would be happy with this. But I am not.

One example will illustrate my frustration with political ecology. Dirzo et al. (2014), in a recent Science paper, give an illustration of the effects of removing large animals from an ecosystem. In their Figure 4 (page 404), a set of four graphs purports to show experimentally what happens when you remove large wildlife species in Kenya, based on the Kenya Long-term Exclosure Experiment (Young et al. 1997). But this experiment is hopelessly flawed for that purpose, being carried out on a set of plots of 4 ha, a postage stamp of habitat relative to large mammal movements and ecosystem processes. That this particular experiment was not properly designed for the questions it is now being used to address is not a problem if this is political ecology rather than scientific ecology. The overall goal of the Dirzo et al. (2014) paper is admirable, but it is achieved by quoting a whole series of questionable extrapolations given in other papers. The counter-argument in conservation biology has always been that we do not have time to do proper research and must act now. The consequence is the elevation of expert opinion in conservation science to the realm of truth without going through the proper scientific process.

We are left with this prediction from Dirzo et al. (2014):

“Cumulatively, systematic defaunation clearly threatens to fundamentally alter basic ecological functions and is contributing to push us toward global-scale “tipping points” from which we may not be able to return… If unchecked, Anthropocene defaunation will become not only a characteristic of the planet’s sixth mass extinction, but also a driver of fundamental global transformations in ecosystem functioning.”

I fear that statements like this are more akin to a religion of conservation fundamentalism than to science, even as we proclaim ourselves scientists.

Dirzo, R., Young, H.S., Galetti, M., Ceballos, G., Isaac, N.J.B. & Collen, B. (2014) Defaunation in the Anthropocene. Science, 345, 401-406.

Platt, J.R. (1964) Strong inference. Science, 146, 347-353.

Wolff, J.O. & Krebs, C.J. (2008) Hypothesis testing and the scientific method revisited. Acta Zoologica Sinica, 54, 383-386.

Young, T.P., Okello, B.D., Kinyua, D. & Palmer, T.M. (1997) KLEE: A long‐term multi‐species herbivore exclusion experiment in Laikipia, Kenya. African Journal of Range & Forage Science, 14, 94-102.

On Subsidies to Ecological Systems

After reading the important paper by Killengreen et al. (2011) it dawned on me that I had not thought enough about subsidies to ecosystems in the modern era. If we put the idea of subsidies together with the idea that at least many terrestrial systems are strongly influenced top down by predation, some pieces of a few puzzles seem to come together for me. For anyone working in northern Canada, one puzzle has always been how small predators like weasels survive over the winter when lemmings or voles are in very low abundance. Certainly these are highly efficient predators, but there is a limit to being a good predator when you are small and your prey is under the snow at densities that might be only a few individuals per square kilometre. But if you are a weasel and happen to find a caribou or muskox carcass, you might well be in heaven for the winter.

There are many examples of predator subsidies provided by humans. Dingoes in the Australian outback use garbage dumps operated by mining companies (Newsome et al. 2014). Feral cats in the outback travel long distances to human habitations for food when drought has reduced prey abundance (Molsher et al. 1999). Snow geese have increased greatly in abundance on winter food provided by agricultural crops in the southern USA (Alisauskas et al. 2011). There must be many more examples in the literature, even without looking at the data on rats in city dumps.

But what does all this mean, given that in some sense we knew these facts long ago? First and foremost, I think it means we are studying a world that did not exist in the past, so that the ‘balance of nature’ has changed in ways we do not yet understand. If predators are subsidized over the winter, for example, prey populations may on average be more heavily exploited by the additional predators that have not starved. Or, to take another view of the matter, the extra predators surviving might increase intraguild predation, producing a variety of indirect effects we can only guess at now.

There is an extensive literature on the effects of nutrient subsidies to aquatic ecosystems, going back to phosphorus in detergents (Schindler 1977) and acid rain (Likens et al. 1996). The difference in perspective now is that while most of the effects of nutrient subsidies to lakes and forests are bottom-up, many of the more recently recognized subsidies act top-down by affecting predator survival. Subsidies can also be negative, as in the reduction of top predators by human persecution (Ripple et al. 2014).

Perhaps what concerns me most about this is that we will never be able to stop doing ecology and testing theories about how populations and communities work if the world keeps changing under our feet. The laws of physics and chemistry may not change over time, but the generalizations of ecology may change faster than we can imagine because of human perturbations.

Alisauskas, R.T., Rockwell, R.F., Dufour, K.W., Cooch, E.G., Zimmerman, G., Drake, K.L., Leafloor, J.O., Moser, T.J. & Reed, E.T. (2011) Harvest, survival, and abundance of midcontinent Lesser Snow Geese relative to population reduction efforts. Wildlife Monographs, 179, 1-42.

Killengreen, S.T., Lecomte, N., Ehrich, D., Schott, T., Yoccoz, N.G. & Ims, R.A. (2011) The importance of marine vs. human-induced subsidies in the maintenance of an expanding mesocarnivore in the arctic tundra. Journal of Animal Ecology, 80, 1049-1060.

Likens, G.E., Driscoll, C.T. & Buso, D.C. (1996) Long-term effects of acid rain: response and recovery of a forest ecosystem. Science, 272, 244-245.

Molsher, R., Newsome, A. & Dickman, C. (1999) Feeding ecology and population dynamics of the feral cat (Felis catus) in relation to the availability of prey in central-eastern New South Wales. Wildlife Research, 26, 593-607.

Newsome, T.M., Ballard, G.-A., Fleming, P.J.S., van de Ven, R., Story, G.L. & Dickman, C.R. (2014) Human-resource subsidies alter the dietary preferences of a mammalian top predator. Oecologia, 175, 139-150.

Ripple, W.J., Estes, J.A., Beschta, R.L., Wilmers, C.C., Ritchie, E.G., Hebblewhite, M., Berger, J., Elmhagen, B., Letnic, M., Nelson, M.P., Schmitz, O.J., Smith, D.W., Wallach, A.D. & Wirsing, A.J. (2014) Status and ecological effects of the world’s largest carnivores. Science, 343, 1241484.

Schindler, D.W. (1977) Evolution of phosphorus limitation in lakes. Science, 195, 260-262.

Citation Analysis Gone Crazy

Perhaps we should stop and look at the evils of citation analysis in science. Citation analysis began some 15 or 20 years ago with the useful thought that it might be nice to know whether one’s scientific papers were being read and used by others working in the same area. But it has now morphed into a Godzilla with the potential to run our lives. I think the current situation rests on three principles:

  1. Your scientific ability can be measured by the number of citations you receive. This is patent nonsense.
  2. The importance of your research is determined by which journals accept your papers. More nonsense.
  3. Your long-term contribution to ecological science can be measured precisely by your h-index or some variant (defined in the sketch below).
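For readers who have not met it, the h-index is simple to state: the largest number h such that h of your papers each have at least h citations. A minimal sketch (with invented citation counts) shows the calculation, and why a single integer hides most of what the numbers could tell you:

```python
# The h-index in one function: the largest h such that at least h papers
# have at least h citations each. The citation counts below are invented.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical careers with very different citation profiles:
steady = [12, 11, 10, 9, 9, 8, 8, 7, 6, 5]        # many moderately cited papers
one_hit = [500, 10, 10, 10, 9, 9, 8, 3, 2, 1]     # one landmark paper, little else
print(h_index(steady), h_index(one_hit))          # both are 7
```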

These principles appeal greatly to the administrators of science and to many of the people who dish out the money for scientific research. You can justify your decisions with numbers: an excellent job of making the research enterprise quantitative. The contrary view, which I hope is held by many scientists, rests on three different principles:

  1. Your scientific ability is difficult to measure and can only be approximately evaluated by another scientist working in your field. Science is a human enterprise not unlike music.
  2. The importance of your research is impossible to determine in the short term of a few years, and in a subject like ecology probably will not be recognized for decades after it is published.
  3. Your long-term contribution to ecological science will have little to do with how many citations you accumulate.

It will take a good historian to evaluate these alternative views of our science.

This whole issue would not matter except that it is eroding science hiring and science funding. The latest I have heard is that Norwegian universities are now given a large amount of money by the government if their staff publish a paper in SCIENCE or NATURE, and a very small amount of money if they publish the same results in the CANADIAN JOURNAL OF ZOOLOGY or – God forbid – the CANADIAN FIELD NATURALIST (or equivalent ‘lower class’ journals). I am not sure how many other universities will adopt this kind of reward-based publication scoring. All of this is done, I think, because we do not wish to involve human judgment in decision making. I suppose you could argue that this is a grand experiment like climate change (with no controls): use these scores for 30 years and then see whether they worked better than the old system based on human judgment. How does one evaluate such experiments?

NSERC (the Natural Sciences and Engineering Research Council) in Canada has been trending in that direction over the last several years. In the good old days scientists read research proposals and made judgments about the problem, the approach, and the likelihood of success of a research program. They took time to discuss at least some of the issues. But we are moving now to quantitative scores that replace human judgment, which I believe to be a very large mistake.

I view ecological research and practice much as I think medical research and medical practice operate. We do not know how well particular studies and experiments will work, any more than a surgeon knows exactly whether a particular technique or treatment will work or whether a particular young doctor will become a good surgeon, and we gain by experience in a mostly non-quantitative way. Meanwhile we should encourage young scientists to try new ideas and studies, and give them opportunities based on judgment rather than on counts of papers or citations. Currently we want to rank everyone and every university like sporting teams and declare a winner. This is a destructive paradigm for science. It works for tennis but not for ecology.

Large Mammal Conservation

The conservation problem is largely focused on large things, birds and mammals, with a few pretty things like butterflies thrown in. What concerns me is the current distortion in the knowledge base available for large animal conservation. I will talk mostly about mammals, but large birds are equally a problem.

The difficulty is this. It is nearly impossible to study large mammals because they are scarce on the ground, so census methods must be spatially extensive and thus very expensive. One needs a big budget to do this properly, and this effectively rules out university scientists unless they can collaborate with government biologists who have large budgets or private consortia who need the large mammals so they can shoot them. But even with a large budget, a large mammal ecologist cannot be very productive as measured in papers per year of research effort. So the universities in general have shied away from hiring young scientists who might be described as large mammal ecologists. This produces positive feedback in the job market so that few young scientists see this as a viable career.

All of this would change if governments were hiring large mammal ecologists. But they are not, with few exceptions. Governments, at least in Canada and Australia, have been shedding ecologists of all varieties while professing how much they are doing for the conservation of threatened species. The advantage of this approach for governments is that they shed high-cost biologists and cover their tracks by hiring public relations personnel, who have no field costs, and perhaps a few biologists who concentrate on small creatures and local problems. So we reach a stalemate when it comes to large mammal conservation. Why do we need polar bear scientists when all they do is make trouble? We can escape such trouble easily: count the polar bears or the caribou only every 5 years or so, and there is consequently much less information for scientists to put their fingers on. (Imagine if we counted the stock market once every 5 years.) The consequence is that in many areas we have large-scale, long-term problems with few scientists and only small-scale funding to find out what is happening in the field. For polar bears this is partly alleviated by private funding from people who care, while the government shirks its duties to future generations.

For caribou in Canada the situation is worse because the problem is spread over more than half the country, so the funding and person-power needed for conservation are much greater, and this is further compounded by the immediate conflict between caribou and industrial development in oil, gas, and forestry. When dollars conflict with conservation needs, it is best not to bet on conservation winning. What good has a polar bear or a caribou ever done for you?

The potential consequence of all this is that we slowly lose populations of these large iconic species. If the loss is slow enough, no one seems to notice except a few concerned conservation biologists, who do not own the newspapers and TV stations. And conservation ecologists grow pessimistic that we can save these large species, which require much habitat and freedom from disturbance. There seem to be two solutions. The first is to build a big fence and keep the animals in a very large zoo (Packer et al. 2013). This will work for some species like caribou, as Kruger Park in South Africa illustrates so well with African large mammals (though some disagree: Creel et al. 2013). But the fence solution will not work for polar bears, and our best response for their conservation may be to cross our fingers and hope, all the while trying to slow the losses as best we can. The second solution is to decide that these large mammal conservation problems are not scientific but sociological, and that progress can best be made by doing good sociological research to change human attitudes about the value of biodiversity. If that is the solution, we need not worry that there are no biologists available to investigate the conservation issues of large mammals.

I think the bottom line is that it takes a spirited soul to aim for a career in large mammal conservation research. We can only hope that some do, and that the conservation future for large mammals in Canada grows brighter.

Creel, S., et al. (2013) Conserving large populations of lions – the argument for fences has holes. Ecology Letters, 16, 1413-e3.

Packer, C., et al. (2013) Conserving large carnivores: dollars and fence. Ecology Letters, 16, 635-641.

Pauly, D. (1995) Anecdotes and the shifting baseline syndrome of fisheries. Trends in Ecology & Evolution, 10, 430.

The Secretary’s Dilemma

Back in the good old days, when Departments of Biology had secretaries who typed formal letters, one problem always stumped me. Let us say I have a letter of reference that must be typed on departmental stationery and on average might take a good secretary about 20 minutes to type. Now if I took that in today, it could be done in 20 minutes and given back to me to mail, say, within the hour. But in every case I can remember, the turnaround time for a letter was about one week. The puzzle was that it would take the same amount of time to type the letter 7 days from now as it would today, so why the delay? If it was a backlog, there must have been a permanent backlog, or the return time would have been variable, not constant.

No secretaries exist in modern universities and we all type our own letters on the computer, so why is this puzzle relevant? I suggest that the same dilemma exists for referee reviews of manuscripts submitted to scientific journals. To be specific, I sit now waiting for reviews and a decision on a paper submitted 4 months ago. This is not a record, I presume, but I had another paper for which the review took 6 months. Now go back to the Secretary’s Dilemma. If you are asked to review a paper, you could do it in, say, 3-4 hours today when it arrives, or put it aside for 4 months. Whatever you decide, the actual review will take you the same amount of time, whether now or later. So we need a set of hypotheses to explain this anomalous situation.

First of all, we note that some journals like SCIENCE or PNAS will reject your paper within one day, an extreme example of the-journal-is-overrun hypothesis. If they decide to review it, I would guess you will hear something within a week or two. There are some journals that promise a decision within a short time, 2 to 4 weeks for example; these journals threaten their reviewers if they do not act promptly. But in some cases it still takes a long time to get a decision letter, no matter how fast the reviewers respond, and this suggests another hypothesis, the-editor-is-overrun. Finally, many journals promise nothing about timing, and this might be explained by the our-reviewers-are-overrun hypothesis. That problem in turn can be a side effect of the last hypothesis I can identify, I-am-too-important-to-review-papers, so that reviews fall on a small subset of ecologists rather than being spread more evenly. One can be sympathetic to all of these situations, since it is my observation that everyone is overrun all the time in the modern university. And everyone must publish many papers to gain a position, with the many associated issues discussed by Statzner and Resh (2010).

There are some possible solutions. One is to blackball reviewers who take excessive time to return reviews; I imagine many editors do this already. Another relief valve might be to get rid of paper journals and make everything electronic, which should reduce the cost of journals and allow expanded volumes. I get the impression that many journals have page limits set by their cost structure, so that one receives a note accompanying the review sheet stating that the journal must reject 85% of papers and only Nobel-Prize-quality papers can be accepted. And to rub it in further, some journals make you pay to publish: you do all the work, get the paper ready, and then they want money to publish it. You can see why some people start their own journal (not a solution for the faint-hearted).

And finally I cannot pass on this subject without a comment about civilized behaviour on the part of reviewers. Ad hominem attacks, sarcastic remarks, and blanket condemnations have no place in any review. Journal editors should put such reviews in the garbage can. There are a few simple guidelines for reviewers, and they are summarized in a new paper by Al Glen (2014). Please read it, memorize it, and act on it when you are a reviewer.

Glen, A.S. (2014) A new ‘golden rule’ for peer review? Bulletin of the Ecological Society of America, 95, 431-434.

Statzner, B. & Resh, V.H. (2010) Negative changes in the scientific publication process in ecology: potential causes and consequences. Freshwater Biology, 55, 2639-2653.