Nature retraction

Started by jimbogumbo, January 25, 2021, 06:13:12 AM


jimbogumbo


polly_mer

"The main issue was that I had used the same data for selection and comparison, a circularity that crops up again and again." 

In the verification and validation (V&V) of computer models, the area where I work, this is a known, huge problem that is frequently ignored by people who blindly apply the techniques without knowing the V&V aspects of modeling.
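To make that circularity concrete, here is a minimal sketch (entirely synthetic data; the group sizes, feature counts, and selection rule are made up for illustration). Two groups are drawn from the same distribution, so there is no real effect anywhere, yet selecting the "top" features and then comparing the groups on the same data manufactures a sizable effect, while a held-out comparison shows essentially nothing.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_features = 40, 500

    # Two groups drawn from the SAME distribution: no real effect exists anywhere.
    group_a = rng.normal(size=(n_subjects, n_features))
    group_b = rng.normal(size=(n_subjects, n_features))

    def top_feature_effect(sel_a, sel_b, cmp_a, cmp_b, k=10):
        # Pick the k features with the largest group difference in the selection
        # data, then measure the group difference of those same features in the
        # comparison data (sign-aligned with the direction found at selection).
        diff_sel = sel_a.mean(axis=0) - sel_b.mean(axis=0)
        top = np.argsort(np.abs(diff_sel))[-k:]
        diff_cmp = cmp_a[:, top].mean(axis=0) - cmp_b[:, top].mean(axis=0)
        return float(np.mean(np.sign(diff_sel[top]) * diff_cmp))

    # Circular: selection and comparison use the identical data -> a large spurious "effect".
    print("circular:", top_feature_effect(group_a, group_b, group_a, group_b))

    # Held out: select on one half, compare on the other half -> effect near zero.
    h = n_subjects // 2
    print("held out:", top_feature_effect(group_a[:h], group_b[:h], group_a[h:], group_b[h:]))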

I agree this is a great article.  I'm still sad that this is seen as a one-off instead of prompting changes in how the field constructs computer models so that they incorporate significant verification, validation, and uncertainty quantification (VVUQ) activities.

jimbogumbo

I'm hopeful this might not be a one-off. Problems that are well known in mathematical modeling and statistics could become similarly well known in other sciences as the techniques become more widespread. Fields such as biostatistics, for example, are pretty mainstream now and, IMO, are good exemplars of better computer modeling.

I can dream, right?

Durchlässigkeitsbeiwert

There was a similar retraction of a Nature paper in my area of interest last year.
There is a growing gap [field-specific?] between people doing case-study-like data analysis and modelling (not Nature material) and those doing modelling with global implications (Nature material). In that specific case, it resulted in a Nature paper with errors in the validation dataset and model assumptions that were obvious to most practitioners.
While superficially different from the situation described in the article above, I think it is caused by the same underlying problems:
- some subfields are growing into previously uncharted areas without enough awareness of possible limitations
- there are very strong incentives for getting a paper in a prestigious journal. So, many people are trying too hard to squeeze catchy results from inherently limited projects

ab_grp

I appreciate the author's perspective and attitude toward the discovery and retraction, but I am a little surprised that this approach was used in this day and age, let alone that it got through peer review.  This "circularity" is certainly a well-known issue to avoid when doing statistical modeling.  But at least it is somewhat gratifying to see a scientist open to an investigation of their work, and to learning from it and sharing what they've learned.  That's not always an easy thing to embrace, in my opinion.  It helps that the investigation was collaborative rather than adversarial.  Maybe that is one of the big lessons to learn here.

polly_mer

Quote from: jimbogumbo on January 25, 2021, 06:52:41 AM
I'm hopeful this might not be a one-off. Problems that are well known in mathematical modeling and statistics could become similarly well known in other sciences as the techniques become more widespread. Fields such as biostatistics, for example, are pretty mainstream now and, IMO, are good exemplars of better computer modeling.

I can dream, right?

Yep.

I'm currently heavily involved with VVUQ activities in materials engineering, to the point that I am helping write international standards for the area.  It's just amazing to discover how many things were well known 20+ years ago and yet are not in the relevant undergrad/grad curriculum, even at places that do a lot of materials simulation.

I remember one not-at-all-academic meeting where someone was indignant that we were still talking about how to ensure that students/postdocs in the pipeline knew what we need them to know to do good material modeling and simulation for engineering applications.  After all, that guy and a couple of his friends went around to 10 top institutions about 5 years ago and gave presentations.  Thus, everyone who matters should already know.

I am amused by the discussions of Covid-19 modeling that hit the mass media and how different those discussions are from what's in the popular science media.  Then I sigh heavily as I have the necessary discussions with my colleagues who do know the VVUQ aspects, and they sigh heavily too as their models are put into the decision-making mix for the politicians alongside "models" that are hardly better than projections of regression fits to the last few weeks of data.
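For illustration of the kind of "model" being contrasted here, a minimal sketch (entirely synthetic case counts and arbitrary window lengths, not anyone's actual model): fit a log-linear regression to the last few weeks of data and extrapolate. The projection swings with the arbitrary choice of fitting window, because nothing in it encodes transmission dynamics, immunity, or interventions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Fake daily case counts: growth that is already slowing, plus noise.
    days = np.arange(60)
    true_cases = 1000.0 / (1.0 + np.exp(-(days - 35) / 6.0))   # logistic-style curve
    observed = rng.poisson(true_cases + 5.0)

    def naive_projection(window_days, horizon=14):
        # Fit log(cases) ~ day over the last `window_days` days and extrapolate.
        recent = days[-window_days:]
        coeffs = np.polyfit(recent, np.log(observed[-window_days:] + 1.0), deg=1)
        future = days[-1] + np.arange(1, horizon + 1)
        return np.exp(np.polyval(coeffs, future)) - 1.0

    # The 14-day-ahead number depends heavily on which fitting window was chosen.
    for window in (14, 21, 28):
        print(f"{window}-day window -> about {naive_projection(window)[-1]:.0f} cases on day {days[-1] + 14}")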

Hibush

Quote from: Durchlässigkeitsbeiwert on January 25, 2021, 08:27:31 AM
I think it is caused by the same underlying problems:
- some subfields are growing into previously uncharted areas without enough awareness of possible limitations
- there are very strong incentives for getting a paper in a prestigious journal. So, many people are trying too hard to squeeze catchy results from inherently limited projects

These two problems interact strongly. If you ignore the limitations, you are far more likely to reach a conclusion with Nature-worthy pizzazz.

That phenomenon also means that hot new fields, where people more frequently ignore the limitations, produce a lot more (false) pizzazzy conclusions that get submitted to Nature.

Then editors and reviewers in that field get so excited about how clickbaity the resulting title would be that they overlook the familiar limitations.

And so it goes to press.

And to press again for the retraction. (Sometimes)