Prof studying honesty fabricates findings

Started by Langue_doc, June 26, 2023, 07:11:46 AM


marshwiggle

Quote from: Puget on June 29, 2023, 06:44:46 AM
Quote from: marshwiggle on June 29, 2023, 05:30:57 AMI wouldn't be surprised if some of the pushback is from the "activist" types, whose research has an agenda before it starts, so that "wherever the data may lead" is not the overarching goal. That type of "research" is going to be very easy to challenge, because all kinds of things like sampling bias, lack of control groups, etc. are common when the result is decided upon before the data are collected. And so, if people are prone to this, if they're going to get called on it they want it to be as low-key and privately as possible.


Of course *you* wouldn't be surprised because that would conform nicely to *your* preconceived ideas. But nope, this has nothing to do with ideology-- did you not note that this is a biz school prof publishing silly things that corporations nonetheless seem to love? Again, I don't agree with the pushback, but it is all about due process and tone, nothing to do with the actual research. 

The point is that when "research" is done to appeal to an audience, it's easily compromised. It doesn't matter what the ideology of the audience is, the fact that they "seem to love" it incentivizes bad practice, up to and including outright fraud. When research has results that aren't going to say exactly what any particular camp would like to hear, there's much less reason for it to have been fudged.
It takes so little to be above average.

Wahoo Redux

Quote from: marshwiggle on June 29, 2023, 07:59:46 AM
Quote from: Puget on June 29, 2023, 06:44:46 AM
Quote from: marshwiggle on June 29, 2023, 05:30:57 AMI wouldn't be surprised if some of the pushback is from the "activist" types, whose research has an agenda before it starts, so that "wherever the data may lead" is not the overarching goal. That type of "research" is going to be very easy to challenge, because all kinds of things like sampling bias, lack of control groups, etc. are common when the result is decided upon before the data are collected. And so, if people are prone to this, if they're going to get called on it they want it to be as low-key and privately as possible.


Of course *you* wouldn't be surprised because that would conform nicely to *your* preconceived ideas. But nope, this has nothing to do with ideology-- did you not note that this is a biz school prof publishing silly things that corporations nonetheless seem to love? Again, I don't agree with the pushback, but it is all about due process and tone, nothing to do with the actual research. 

The point is that when "research" is done to appeal to an audience, it's easily compromised. It doesn't matter what the ideology of the audience is, the fact that they "seem to love" it incentivizes bad practice, up to and including outright fraud. When research has results that aren't going to say exactly what any particular camp would like to hear, there's much less reason for it to have been fudged.


I know you believe these things, Marshbuddy, because you seem to fear the "activists" out there.  And you've swallowed some propaganda about "ideologies" somewhere along the line, and it's sticking to your ribs.  I suspect a better explanation is simple careerism on the part of the academic involved, perhaps even self-delusion, that was exposed.  Very typical human behavior.
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

apl68

I don't think marshwiggle is wrong on the point that research of this sort sometimes confirms prejudices, which leads some people to run with it, and then the research turns out to be hard to reproduce.  The notorious marshmallow experiment, where children were offered a choice between eating a marshmallow right now or getting two if they just waited a bit, comes to mind.  It was a test of impulse control that supposedly predicted, at an early age, who was going to succeed in school and who was going to drop out, get involved in drugs, end up in prison, etc.

Or to name another example, I recall one of Malcolm Gladwell's books citing an experiment testing reactions of male college students to being "accidentally" bumped in the hallway.  Supposedly most students were pretty laid-back about it and didn't take offense.  Except for students from the South, who were usually ready to start a fight over a seemingly innocent hallway collision.  Which goes to show that Southerners, being from a violent "honor culture" where guys are socialized to be quick to start fights, are just like that. 

I remember calling bull on that when I saw it, since I've lived in the South my whole life and just don't see that.  You might find the occasional guy who reacts badly to being bumped into, sure, but it'll probably be somebody that everybody thought was a creep to start with, and taking such an accident in an ungraceful manner is not generally socially acceptable here, any more than it is anywhere else.  But it confirms so many stereotypes!  Only the other day I read an excerpt from a recent book about how violent and vicious the South is--and it cited this one small experiment, which I suspect would prove hard to replicate.
For our light affliction, which is only for a moment, works for us a far greater and eternal weight of glory.  We look not at the things we can see, but at those we can't.  For the things we can see are temporary, but those we can't see are eternal.

ab_grp

I don't think this guy's research has been discussed here before (Nicolas Gueguen): https://retractionwatch.com/2022/12/02/paper-about-sexual-intent-of-women-wearing-red-retracted-seven-years-after-sleuths-raised-concerns/  The Data Thugs did a pretty thorough review of his research, summarized on Nick Brown's blog (https://steamtraen.blogspot.com/2017/12/a-review-of-research-of-dr-nicolas.html) and discussed at length in the 52-page document linked there.  The pushback against this kind of effort bugs me, partly because this has been going on for quite a while now (flawed research, flawed for various reasons, uncovered and held up to scrutiny, yet researchers continue to try to get away with the most obvious fraud), and partly because I see researchers I respect pushing back, and I just cannot fathom it.  I think there can be a legitimate need to change the incentive structure so that just getting things published is not as much of a career driver, but in my opinion the researchers who apparently think they can just get away with it should also be held publicly accountable.  Unfortunately, there are already plenty of examples of researchers who thought the gamble was worth it.  Nick Brown and James Heathers have discovered a bunch of them and brought their actions to light.

Regarding clinical trials, Elisabeth Bik has done work in that area.  I think she is mainly known for spotting image duplication in publications but has received public backlash for her science integrity efforts: https://scienceintegritydigest.com/about/

marshwiggle

Quote from: Wahoo Redux on June 29, 2023, 08:15:24 AM
Quote from: marshwiggle on June 29, 2023, 07:59:46 AMThe point is that when "research" is done to appeal to an audience, it's easily compromised. It doesn't matter what the ideology of the audience is, the fact that they "seem to love" it incentivizes bad practice, up to and including outright fraud. When research has results that aren't going to say exactly what any particular camp would like to hear, there's much less reason for it to have been fudged.


I know you believe these things, Marshbuddy, because you seem to fear the "activists" out there.  And you've swallowed some propaganda about "ideologies" somewhere along the lines and it is sticking to your ribs.  I suspect a better explanation is simple careerism on the part of the academic involved, perhaps even self-delusion, that was exposed.  Very typical human behavior.

Even simple careerism is going to be subject to the zeitgeist: certain kinds of results will more easily attract an audience, so people who are prone to fabricating results will do so in a predictable way.

Sun_Worshiper

Quote from: marshwiggle on June 29, 2023, 09:59:14 AM
Quote from: Wahoo Redux on June 29, 2023, 08:15:24 AM
Quote from: marshwiggle on June 29, 2023, 07:59:46 AMThe point is that when "research" is done to appeal to an audience, it's easily compromised. It doesn't matter what the ideology of the audience is, the fact that they "seem to love" it incentivizes bad practice, up to and including outright fraud. When research has results that aren't going to say exactly what any particular camp would like to hear, there's much less reason for it to have been fudged.


I know you believe these things, Marshbuddy, because you seem to fear the "activists" out there.  And you've swallowed some propaganda about "ideologies" somewhere along the lines and it is sticking to your ribs.  I suspect a better explanation is simple careerism on the part of the academic involved, perhaps even self-delusion, that was exposed.  Very typical human behavior.

Even simple careerism is going to be subject to the zeitgeist: certain kinds of results will more easily attract an audience, so people who are prone to fabricating results will do so in a predictable way.


I agree that people want to see their beliefs vindicated and have a hard time accepting when the evidence does not support them, but in this case the results of her studies don't advance any particular political worldview.

And the fraudster's behavior looks like pure careerism.

Diogenes

Quote from: Puget on June 28, 2023, 10:00:20 AMOf course, in many cases there are also severe harms from applying false results to real-world policies or treatments, but in this case the research is honestly just silly and pretty meaningless in the first place (e.g., arguing against your own viewpoints makes you rate cleaning products higher because you supposedly feel "contaminated"). I'm more worried about the fraud we aren't catching in more important areas, like clinical trials.   

There were some real-world outcomes. The original 2012 Ariely et al. paper she was involved in, about honesty and insurance paperwork, was heavily discussed in the famous Nudge book. Co-author Cass Sunstein went on to a position in the Obama Administration partly on the strength of that book.

I do think these retractions and data checking should be very public. But how Datacolada is stretching it out into 4 blog posts over the course of weeks to maximize their own SEO and exposure feels smarmy.

Puget

Quote from: Diogenes on June 29, 2023, 11:38:41 AMThere were some real world outcomes. The original 2012 Ariely et al paper she was involved in about honesty and insurance paperwork was heavily discussed in the famous Nudge book. Co-author Cass Sunstein went on to have a position in the Obama Administration over that book.
Yes, that's true; I'm just much more worried about clinical trials and the like than about whether honesty pledges are placed at the top or bottom of forms.

Quote from: Diogenes on June 29, 2023, 11:38:41 AMI do think these retractions and data checking should be very public. But how Datacolada is stretching it out into 4 blog posts over the course of weeks to maximize their own SEO and exposure feels smarmy.

Eh, I'm fine with them getting publicity-- anything that draws more attention, increases the social penalties for fraud, and potentially gets more people interested in fraud detection is fine with me.

Quote from: Sun_Worshiper on June 29, 2023, 11:33:22 AMI agree that people want to see their beliefs vindicated and have a hard time accepting when the evidence does not support them, but in this case the results of her studies don't advance any particular political worldview.

And the fraudster's behavior looks like pure careerism.

I agree with the first part, but "beliefs" in the sciences usually means pet theory, not political ideology (yes, yes, marshwiggle we know you disagree). This often doesn't involve intentional fraud, but rather p-hacking and other questionable research practices, and is one of the reasons that preregistration and open data and code are important. 

Hard to know motivation for sure of course, but I'd guess $$$ + status + maybe the thrill of getting away with breaking the rules (ffs, she wrote a whole book about breaking the rules, maybe that should have been a clue!).

Quote from: ab_grp on June 29, 2023, 09:36:29 AMRegarding clinical trials, Elisabeth Bik has done work in that area.  I think she is mainly known for spotting image duplication in publications but has received public backlash for her science integrity efforts: https://scienceintegritydigest.com/about/

I think her focus has been mostly on pre-clinical bio research with image analysis (if you're interested in such things, she often posts on twitter with "prizes" for the first person to find the duplicated parts of the images). I don't do this type of research, but it is my understanding that most bio journals now use automated image doctoring detectors for figures.
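For the curious, the simplest form of duplicated-region detection can be mechanized. This is a toy sketch of my own (not Bik's method, and not what any journal's detector actually does; real tools must handle rotation, rescaling, and compression artifacts) that flags byte-identical patches in a synthetic image:

```python
import numpy as np

def find_duplicate_patches(img, size=8):
    """Hash every non-overlapping size x size patch; return coordinate
    pairs whose pixel content is byte-for-byte identical."""
    seen = {}
    dupes = []
    h, w = img.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            key = img[y:y + size, x:x + size].tobytes()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes

# Synthetic "blot" image with one region pasted over another
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]   # simulate a duplicated band

print(find_duplicate_patches(img))    # -> [((8, 8), (40, 40))]
```

Exact byte matching only catches the sloppiest copy-paste jobs, which is roughly the point: the cases sleuths catch by eye are often exactly that sloppy.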

Re: clinical trials, I saw a recent analysis that of trials registered on clinicaltrials.gov, only about half are ever published (which distorts the record if null results are file-drawered), and only about half of those adhered to their registered primary outcome (i.e., half instead report some other outcome that presumably gave a more favorable result). This is without presuming any outright fraud, just bad research practices. But one can presume, given the *much* higher incentive to have a positive result in these trials (continued funding from drug companies, etc.), that fraud must be happening at at least the same rate as in other research areas, if not a higher one. We have few good tools and no system for detecting such fraud.
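Those two (rounded) halves compound. As trivial back-of-the-envelope arithmetic, using the approximate figures above:

```python
published = 0.5   # approx. share of registered trials ever published
adherent = 0.5    # approx. share of published trials keeping the registered primary outcome

faithful = published * adherent
print(f"{faithful:.0%} of registered trials publish their registered primary outcome")
# -> 25% of registered trials publish their registered primary outcome
```

In other words, only about a quarter of registered trials end up reporting the outcome they registered.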
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

Hibush

Quote from: Puget on June 29, 2023, 01:27:21 PMRe: clinical trials, I saw a recent analysis that of trials registered on clinicaltrials.gov, only about half are ever published (which distorts the record if null results are file-drawered), and only about half of those adhered to their registered primary outcome (i.e., half instead report some other outcome that presumably gave a more favorable result). This is without presuming any outright fraud, just bad research practices. But one can presume, given the *much* higher incentive to have a positive result in these trials (continued funding from drug companies, etc.), that fraud must be happening at at least the same rate as in other research areas, if not a higher one. We have few good tools and no system for detecting such fraud.


I find this really scary. People who register their trials presumably are those most committed to an unbiased outcome. Yet three quarters fail to follow through on what is required to avoid bias.

I hope it's not Reviewer #2 saying that you can't draw the inference from the data that was put in the registration.

Puget

Quote from: Hibush on June 29, 2023, 06:13:56 PM
Quote from: Puget on June 29, 2023, 01:27:21 PMRe: clinical trials, I saw a recent analysis that of trials registered on clinicaltrials.gov, only about half are ever published (which distorts the record if null results are file-drawered), and only about half of those adhered to their registered primary outcome (i.e., half instead report some other outcome that presumably gave a more favorable result). This is without presuming any outright fraud, just bad research practices. But one can presume, given the *much* higher incentive to have a positive result in these trials (continued funding from drug companies, etc.), that fraud must be happening at at least the same rate as in other research areas, if not a higher one. We have few good tools and no system for detecting such fraud.


I find this really scary. People who register their trials presumably are those most committed to an unbiased outcome. Yet three quarters fail to follow through on what is required to avoid bias.

I hope it's not Reviewer #2 saying that you can't draw the inference from the data that was put in the registration.

It is scary, but the premise that registrants are a self-selected, especially conscientious group isn't true: all clinical trials regulated by the FDA (so all drug and medical device trials in humans) and/or funded by NIH (so most psychotherapy trials as well) have to be registered there. That's actually what makes it so useful for this type of meta-science research, versus other research areas where preregistration is voluntary (for now; I think many journals will make it a submission requirement within a few years, and NIH and NSF are likely to follow).

MarathonRunner

QuoteRe: clinical trials, I saw a recent analysis that of trials registered on clinicaltrials.gov, only about half are ever published (which distorts the record if null results are file-drawered), and only about half of those adhered to their registered primary outcome (i.e., half instead report some other outcome that presumably gave a more favorable result). This is without presuming any outright fraud, just bad research practices. But one can presume, given the *much* higher incentive to have a positive result in these trials (continued funding from drug companies, etc.), that fraud must be happening at at least the same rate as in other research areas, if not a higher one. We have few good tools and no system for detecting such fraud.

Well, it would help if journals, especially higher-ranked ones, were willing to publish null or negative results. Sadly, it is really difficult in a lot of clinical fields to publish such results. It would also help to consider prior plausibility when designing some clinical trials (for example, there's no known mechanism by which homeopathy can work, since homeopathic remedies are diluted so much that often there's not a single molecule of the "active ingredient" left in them, so doing yet another RCT on homeopathy is just a waste of time and money).
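To put numbers on the dilution point (simple arithmetic; the 30C potency and the one-mole starting amount are just illustrative choices):

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_left(moles, c_dilutions):
    """Expected molecules of active ingredient remaining after a
    homeopathic 'C' serial dilution (each C step dilutes 1:100)."""
    return moles * AVOGADRO / 100 ** c_dilutions

# Even starting from a full mole of active ingredient, a common 30C
# remedy implies a 10^60-fold dilution:
print(molecules_left(1, 30))  # ~6.0e-37, i.e. essentially zero molecules
```

Past roughly 12C the expected count drops below one molecule, which is why prior plausibility rules out a mechanism before any RCT is run.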

Puget

For those interested in the forensic details, here is the 4-part series. The hubris of *openly posting on OSF* data that have been so *badly* faked is really something else! Someone who was just a tad more careful and forensically aware could have gotten away with it, but the fakery is just so sloppy here!

https://datacolada.org/109
https://datacolada.org/110
https://datacolada.org/111
https://datacolada.org/112

Quote from: MarathonRunner on June 30, 2023, 10:05:41 AMWell, it would help if journals, especially higher rated ones, would be willing to publish null or negative results. Sadly, it is really difficult in a lot of clinical fields to publish such results.

This is definitely a problem, and not just for clinical trials. It is getting better in psychology; sorry to hear that is not the case in other fields. I think biomedical fields are way overdue for their crisis-and-reform moment.

 Registered reports (where the paper is conditionally accepted based on the preregistration, then published as long as the plan was adhered to regardless of results) are a good mechanism that is arguably how all clinical trials should be published.

Hibush

Quote from: Puget on June 30, 2023, 12:23:35 PM
Quote from: MarathonRunner on June 30, 2023, 10:05:41 AMWell, it would help if journals, especially higher rated ones, would be willing to publish null or negative results. Sadly, it is really difficult in a lot of clinical fields to publish such results.

This is definitely a problem, and not just for clinical trials. It is getting better in psychology, sorry to hear that is not the case in other fields. I think biomedical fields are way over-due for their crisis-and-reform moment.

 Registered reports (where the paper is conditionally accepted based on the preregistration, then published as long as the plan was adhered to regardless of results) are a good mechanism that is arguably how all clinical trials should be published.

I suggested to my society journal that they do just this. You submit the trial plan, which is peer reviewed. If the experiment is deemed interesting and the treatments and analysis robust, the trial is registered by the journal and the authors must publish the results with the journal.

With field research, the result could be that the site was washed away by floods and there was no money to reestablish it. So you publish that. People will know that the hypotheses were not tested.

More commonly, it could be that there was no statistically significant treatment effect, but the variance was so high that even important treatment effects would have been undetectable. So you publish that. So we know that the treatment effect is less than some large number, and we have a good citeable estimate of the variance so the experiment can be done with more sensitivity next time. That would be a very valuable contribution.
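That citeable variance estimate is directly usable: the next team can compute the smallest true effect their design could reliably detect. A minimal sketch using standard two-group power arithmetic (the SD of 12 and 20 plots per treatment are made-up illustrative numbers):

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_effect(sigma, n_per_group, alpha=0.05, power=0.80):
    """Smallest true mean difference a two-group comparison can detect
    with the given power, using a residual SD from a prior experiment."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sigma * sqrt(2 / n_per_group)

# e.g. published residual SD = 12 units, 20 plots per treatment:
print(round(min_detectable_effect(12, 20), 1))  # -> 10.6
```

So a "null" result plus a variance estimate tells readers the effect, if any, is probably smaller than about 10.6 units here, and tells the next team how much larger n must be to do better.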

Pubs like that are not as important as ones reporting successful experiments. But they are easy to publish, so the cost is minimal.

MarathonRunner

Quote from: Puget on June 30, 2023, 12:23:35 PMRegistered reports (where the paper is conditionally accepted based on the preregistration, then published as long as the plan was adhered to regardless of results) are a good mechanism that is arguably how all clinical trials should be published.

Agreed! It would be great if clinical trials could be published on that basis.

I had a related problem with my dissertation research. I used a large, secondary data source, and I could only conduct analyses that were approved in advance, in my data access application. I often had reviewers asking for additional analyses that weren't part of the initial plan. I can't tell you how many times I had to reply to reviewers "The 'data source' only allows me to conduct the analyses that were in my data access application. Any additional analyses would require a new data access application, would cost $x, and the authors wouldn't receive the new data for at least another year." Only had pushback once, which resulted in the paper's rejection. But with edits it was accepted at another journal. Also had to answer questions about why I didn't do "x" in my defence. Because I couldn't, because it wasn't in my data access application.

Puget

Retraction notices for two of the papers in question:
https://journals.sagepub.com/doi/full/10.1177/0956797615575277?journalCode=pssa
https://journals.sagepub.com/doi/10.1177/09567976231187595

A key line from both:
QuoteCounsel for the first author informed the journal that whereas Dr. Gino viewed the retraction as necessary, she disagreed with references to original data, stating that "there is no original data available" pertaining to the research reported in the article.

Um, is that supposed to be exculpatory here? "How dare you accuse me of changing values in the original data– there is no original data!".