
Thursday, January 18, 2024

What (Do) The Data Say?

Environmental & Science Education, STEM, Nature of Science, History of Science, Data

Ed Hessler

Nature News drew attention to an as-yet-unpublished paper by 246 ecologists who "analyzed the same data sets and got widely divergent results." It made me think of the frequently used phrase, "let's see what the data say."

In this case none of the answers is wrong: according to Hannah Fraser, "the spread of results reflects factors such as participants' training and how they set sample sizes." The paper is about a basic idea, reproducibility, one included in teaching beginning students how science works. Fraser is one of the lead authors.

Anil Oza wrote a news comment about this finding from which I'm going to quote more liberally than I like (I want you to read the news report), because I'm less confident than I have been in the durability of the link to the news item, having had links denied after a short period of time with the note that the item is available only to subscribers.

--demonstrates how much results in the field can vary, not because of differences in the environment, but because of scientists’ analytical choices.

--this kind of research was pioneered in psychology and the social sciences when results couldn't be reproduced in another laboratory.

--could lead to improvements in the reproducibility of results.

--the method, a many-analyst study, is one in which researchers gave scientist-participants one of two data sets and an accompanying research question: either "To what extent is the growth of nestling blue tits (Cyanistes caeruleus) influenced by competition with siblings?" or "How does grass cover influence Eucalyptus spp. seedling recruitment?"

--of course, the researchers had to simulate the peer-review process, so the authors found another group of scientists to review the participants' results. "The peer reviewers gave poor ratings to the most extreme results in the Eucalyptus analysis but not in the blue tit one. Even after the authors excluded the analyses rated poorly by peer reviewers, the collective results still showed vast variation, says Elliot Gould, an ecological modeller at the University of Melbourne and a co-author of the study."

--some suggestions for improving the process: asking a paper's authors to lay out the analytical decisions they made, along with the potential caveats of those choices, for readers to examine, and the use of powerful "robustness tests," which are common in economics and require researchers to analyse their data in several ways and assess the amount of variation in the results. (A rough sketch of that idea follows below.)
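To make the robustness-test idea concrete, here is a minimal sketch in Python. It is not from the paper; the data are simulated and the variable names and numbers are hypothetical. The point is only the pattern: run the same analysis under several defensible specifications, then look at how far the estimate of interest moves.

# Illustrative robustness test: same question, several analytical choices.
# Data are simulated; variables and effect sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical example: does grass cover affect seedling recruitment?
grass_cover = rng.uniform(0, 1, n)
rainfall = rng.normal(0, 1, n)                      # a possible confounder
recruitment = 2.0 - 1.5 * grass_cover + 0.8 * rainfall + rng.normal(0, 1, n)

def ols_slope(y, X):
    """Return the coefficient on the first column of X (after an intercept)."""
    design = np.column_stack([np.ones(len(y)), X])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coefs[1]

keep = recruitment < np.quantile(recruitment, 0.95)  # drop extreme outcomes

# Several defensible specifications for the "same" question.
specifications = {
    "bivariate": ols_slope(recruitment, grass_cover[:, None]),
    "adjust for rainfall": ols_slope(
        recruitment, np.column_stack([grass_cover, rainfall])),
    "drop top 5% recruitment": ols_slope(
        recruitment[keep], grass_cover[keep][:, None]),
    "log-transform outcome": ols_slope(
        np.log(recruitment - recruitment.min() + 1), grass_cover[:, None]),
}

for name, slope in specifications.items():
    print(f"{name:25s} estimated effect of grass cover: {slope:6.3f}")

estimates = np.array(list(specifications.values()))
print(f"Spread across specifications: {estimates.max() - estimates.min():.3f}")

If the estimate barely changes across specifications, the result is robust to those analytical choices; a large spread is exactly the analyst-driven variation the many-analyst study documents.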

Oza ends with an important observation about these two observational disciplines, ecology and evolutionary biology. She writes that "ecology faces a special problem when it comes to analytical variation because of a complication baked into their discipline." Nicole Nelson, an ethnographer at the University of Wisconsin-Madison, was given the last word: "The foundations of this field are observational. It's about sitting back and watching what the natural world throws at you — which is a lot of variation." (Take a look at the paper for more on this complication.)

Here is Oza's reporting and here is a link to the paper from EcoEvoRxiv, a preprint server. 

At least take a look at the paper, especially the authors, their colleges, affiliations, etc. I like this paper because it demonstrates that scientists want to provide analyses that have some basis, and also because of their willingness to invest their time in improving research methods in ecology and evolution. Science welcomes such close examination and what it reveals.
