Peeters, Edwin T.H.M., et al. 2009. Assessing ecological quality of shallow lakes: Does knowledge of transparency suffice? Basic and Applied Ecology 10(1): 89–96. doi:10.1016/j.baae.2007.12.009
The authors aim to identify the simplest way to assess “ecological quality” of aquatic ecosystems, focusing here on shallow European lakes. They are motivated by other approaches to the European Water Framework Directive (WFD), which “requires that all aquatic ecosystems in their member states should reach ‘good’ ecological quality by 2015.” Other approaches employ a slew of variables to assign each lake to one of five quality classes (Bad—High), as required by the WFD, using biological, physical, and chemical indicators. The authors hope to identify a shortcut to quality assignment, one not requiring so many variables. They seek this shortcut by performing a multinomial logistic regression, attempting to isolate the variable that best explains variation in quality. As it turns out, Secchi depth (essentially the maximum depth at which a standardized submerged disc remains visible, and thus a measure of water clarity) corresponds best to ecological quality.
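The regression step can be sketched as follows. This is an illustrative toy only: the synthetic data, class counts, and all parameter values are hypothetical and assume nothing about the authors' actual dataset; it merely shows what it means to fit a multinomial logistic regression with expert quality class as the dependent variable and Secchi depth as the lone predictor.

```python
# Hypothetical sketch of the paper's method: multinomial logistic regression
# predicting expert-assigned quality class from Secchi depth alone.
# All data here are synthetic; nothing is taken from the authors' lakes.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic lakes in four quality classes (0 = poor ... 3 = high); clearer
# water (greater Secchi depth, in metres) is made to track higher quality.
n_per_class = 50
classes = np.repeat(np.arange(4), n_per_class)
secchi = rng.normal(loc=0.5 + 1.0 * classes, scale=0.4, size=classes.size)
X = np.column_stack([np.ones_like(secchi), secchi])  # intercept + predictor

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fit by plain gradient descent on the multinomial log-likelihood.
W = np.zeros((2, 4))          # one (intercept, slope) column per class
Y = np.eye(4)[classes]        # one-hot targets
for _ in range(2000):
    P = softmax(X @ W)
    W -= 0.01 * (X.T @ (P - Y)) / len(classes)

pred = softmax(X @ W).argmax(axis=1)
accuracy = (pred == classes).mean()
print(f"training accuracy with Secchi depth alone: {accuracy:.2f}")
```

The point of the sketch is structural: the expert judgements sit on the left-hand side of the model, so the fit can only tell us how well Secchi depth reproduces those judgements, never how good the judgements themselves are.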
In performing this analysis, the authors face a kind of Third Man Problem: if you want to discern which empirical properties are best associated with variance in ecological quality, you need some further measure of ecological quality. Recognizing this, the authors’ strategy is to use experts’ evaluations of ecological quality. They make this the dependent variable in their regression.
But this raises the question of what standard experts use. Wherein lies their expertise? If it were based on some particular empirical measure like Secci depth, the study would have been redundant. If it were based on the more complicated integration of the slew of variables (and available corresponding data) lamented as the motivation for the study, then that motivation would seem to be undercut. For example:
We explore empirical relationships between quality classes assigned to lakes by known experts using field observations and a rich set of biological, physical and chemical data
The contrast is between the rich data set and the field observations of, it would seem, quality. Expert evaluation appears to be based on something other than the data set. What?
The authors appear to recognize the potential for subjectivity or circularity:
Although five quality classes are defined (bad, poor, moderate, good and high), the means of assigning ecosystems to these categories are still open to debate. The only definitions in the Directive that are not circular in nature are that high quality has minimal human influence and that good quality is only slightly different from high quality.
Although expert judgement plays an important role in the ecological assessment of aquatic systems, there has been little formal justification as to whether expert knowledge has any relationship with measured environmental and/or biological variables.
But then they respond to this awareness by writing as though the study were inverted, as though expertise were being assessed as a measure:
The objectives of our study are to analyze whether quality judged by experts coincides with differences in abiotic and biotic circumstances and to find a simple way of predicting quality of lakes.
Of course, expertise is being used as the dependent variable, so the study does not assess expertise.
A comment in the Discussion briefly develops this problem, to the point of sounding almost defeatist about its implications for the aims of the study:
Obviously, we cannot say how ‘good’ the judgement of quality by our model really is because it is difficult to evaluate how appropriate the expert judgement of the quality of the lakes is.
But bravely marching on, the paper asserts in conclusion a coincidence between ecological quality as judged by experts and Secchi depth. Beyond this sentence, to the degree that it recognizes there is a problem of assessing quality at all, the paper treats it not as a problem of determining quality (a dilemma between circularity and subjectivity of quality assessments) but as an error problem:
Although approximately 45% of the lakes are misclassified when taking four quality classes into account and 20% in case of two classes (meeting the WFD standard or not), this will at least partly be related to some level of inconsistency or ‘error’ in expert judgement (Boesten, 2000; Van Steen, 1992).
The main problem runs much deeper than an error problem: a legitimate quality standard demands more than consistency among evaluators, though that might be a start. And what is in question is what it means to “misclassify” in the first place.
Yet, the account of how experts chose lakes for inclusion in the study, according to quality, points explicitly towards subjectivity:
In one project meeting prior to the sampling it was decided that each participating country should select at least six lakes covering the range from high to poor quality as defined by their own ideas. The experts assessed the ecological quality of the lakes based on their own previous experiences taking into account all kinds of deterioration.
“As defined by their own ideas,” indeed! And of course, the problems experts face in defining quality appear already in defining “deterioration,” which is its determinant, too.
The authors offer a few references on quality expertise which might be fruitful. But we are not far here from the problem Hume considers in trying to identify a standard of artistic excellence by identifying aesthetic experts, in “Of the Standard of Taste.” Nor from the problem Aristotle raises in Nicomachean Ethics concerning how one might identify an expert or recognize expertise without already being an expert oneself. The skulking danger, of course, is that nobody is.