It has happened again. I have just been to a seminar on genetic models – something about adaptation of species at the edges of their ranges. Yes, this is an interesting topic, relevant to interpreting species’ responses to changing environments. But it ended with the speaker saying something like, “It would be a lot of work to test this in the field.” My hour would have been far better spent if the talk had ended with “Although it would be difficult to do, this model makes the following predictions that could be tested in the field,” or “The following results would reject the hypothesis upon which this model is based.”
Now, it is likely that some found these theoretical machinations interesting and satisfying in some mathematical way. But I feel it is irresponsible not even to consider how a model could be tested, or the possibility (a likely one at that) that it doesn’t apply to nature and tells us nothing helpful about, for example, what is going to happen to willow or birch shrubs at the edges of their ranges in the warming Arctic.
Recommendation – no paper on a model should be published or presented unless it makes specific, testable predictions and states how those predictions could be tested.
I think there are at least two types of models: predictive models and confirmatory models. The ones you talk about are supposed to be predictive models but are clearly failing at that. Confirmatory models, however, are an important class that lets us know that what we think we know is actually consistent with the facts. They don’t carry any great ontological meaning, but they do tell us that our beliefs are internally consistent (if possibly wrong).
Even “confirmatory” models should have testable predictions