Reviewing for Journals with Article Processing Charges

Started by Bookworm, July 27, 2020, 11:49:13 AM



Kron3007

Quote from: mleok on December 02, 2020, 12:35:05 PM
I have published exactly one paper in an MDPI journal, because it was a special issue in information geometry that a collaborator of mine wanted the paper to appear in, and he paid the APC, which I certainly would not have. The production editorial staff who handled the LaTeX submission were absolutely incompetent, incapable of following basic instructions or maintaining version control, resulting in a never-ending email exchange. All they had to do was take my source file, formatted using their style file, and flip a setting on the template, and they still managed to mess it all up. I will certainly never again publish a paper in an MDPI journal, or review a paper for any of these journals.

Even if it's not a scam, it's clear where their priorities lie, and it's not on the intellectual quality of the journals.

So, I just published a paper with an MDPI journal despite my previous hesitations. I did so because it was a special issue, put together by a reasonably solid scholar, that closely aligns with my work.

I was quite amazed at the speed of review, but would say that the feedback was typical of what I would expect from most journals. So, I can't speak for all MDPI journals, and I hear they vary quite a bit, but my experience with this one was good and I would consider publishing there again.

I also quite like the fact that they give discounts based on reviewing activity. The expectation that we should review for free and pay to publish work that we already paid to do (as is normal in my field) is silly, and it is only right that journals provide some sort of incentive to review.

On a related note, I have also seen a similar paper published in a Frontiers special issue that was pretty much rubbish.  MDPI is not the only publisher that puts out crap.  I think the motives of most of them are pretty clear and it is not really about quality...

stemer

I would prefer that journals like this offer a free-of-charge "closed access" option and reserve the fee-based open-access option for well-funded researchers with grant money. Having said that, does this review-for-discount model ever allow the submission costs to drop to $0?

Kron3007

Yeah, the price for OA is high, but for an exclusively OA publisher to create a subscription-based option would be a pretty major thing. The other way around is not really that hard.

I don't know if the cost ever reaches $0 with these, but I think it could. I was also asked to be an editor for a different journal, which I declined, but editors get fees waived for one paper per year. Again, it seems reasonable to me; otherwise you are editing for free as well.

youllneverwalkalone

Quote from: stemer on January 07, 2021, 03:18:10 PM
I would prefer that journals like this offer a free-of-charge "closed access" option and reserve the fee-based open-access option for well-funded researchers with grant money. Having said that, does this review-for-discount model ever allow the submission costs to drop to $0?

Traditional publishers (like Elsevier) already do that. Their APCs are most of the time higher than those of MDPI, Frontiers, etc.

youllneverwalkalone

Quote from: research_prof on July 27, 2020, 04:14:35 PM
OP, just read through what Wikipedia says about MDPI. And then think about the fact that academia is all about ethics and reputation at the end of the day.

I have a strong dislike for MDPI due to how spammy they are, the crazy deadlines they impose, and the fact that all communication goes through obviously clueless secretaries. That said, they are clearly not "predatory", and some of their journals in my field are considered to be in good standing.

If you are a scientist, you should have the tools to assess the quality of publishers and journals in your field. Definitely don't make up your mind based on what you read on Wikipedia. In my experience, WP is very biased against OA publishers. This is partly because, more often than not, the only available public sources on individual OA journals or publishers are from Jeff Beall, and partly because there is a cabal of editors with strong anti-OA views that keeps a tight grip on those pages.


Hibush

Quote from: Kron3007 on January 07, 2021, 04:01:40 PM
Yeah, the price for OA is high, but for an exclusively OA publisher to create a subscription-based option would be a pretty major thing. The other way around is not really that hard.

"Pretty major" is an understatement. Nearly all individuals and most libraries have stopped subscribing to journals. Only the giant publishing houses have the market power to maintain subscriptions, but their Big Deals are falling fast. The subscription-based model is dead.

For people doing funded research, the Gold OA model works just fine; it is just part of the research expense. The publishers will need to come up with a different model for societal segments or subject areas where the work is not funded. Most have a reduced rate for officially designated developing countries and offer discounts on an ad hoc basis. The journals whose authors do low-budget scholarship need to find a new business model pronto or they will shrink to irrelevance.

Kron3007

Quote from: youllneverwalkalone on January 08, 2021, 05:09:10 AM

I have a strong dislike for MDPI due to how spammy they are, the crazy deadlines they impose, and the fact that all communication goes through obviously clueless secretaries. That said, they are clearly not "predatory", and some of their journals in my field are considered to be in good standing.

If you are a scientist, you should have the tools to assess the quality of publishers and journals in your field. Definitely don't make up your mind based on what you read on Wikipedia. In my experience, WP is very biased against OA publishers. This is partly because, more often than not, the only available public sources on individual OA journals or publishers are from Jeff Beall, and partly because there is a cabal of editors with strong anti-OA views that keeps a tight grip on those pages.

I agree they are spammy, but they are not alone. Frontiers was just as bad, but seems to have toned it down.

Regarding the crazy deadlines, I am of mixed mind. I have a couple of papers that are about 4-6 months into the review process at other journals because they don't have short deadlines or don't enforce them. I don't know about anyone else, but when I review papers I often procrastinate and do the review near the deadline. The length of time I am given to do a review will not affect the amount of time I spend on it or the quality of my review; it will just tell me how long I can procrastinate. I know this is awful and I try to get back sooner, but I don't think I am alone, and when there are three reviewers at least one will wait until the end. So, I understand not liking the short turnaround, but really, if you don't have the time in the next ten days you just decline the request.

As an author, I greatly appreciate the quick turnaround. However, I have started posting most of my work as pre-prints before publishing, so the delay of review etc. is less important now. As an example, though, I just had a paper published after about 5 months of back and forth with reviewers (I had to withdraw from one journal because of one counterproductive reviewer and move to another). During this period, the pre-print has been downloaded thousands of times and I have reviewed papers that cite it. The publication delay from traditional publishers has a significant negative impact on science, especially in quickly moving fields.

 

youllneverwalkalone

Quote from: Kron3007 on January 08, 2021, 07:00:58 AM

I agree they are spammy, but they are not alone. Frontiers was just as bad, but seems to have toned it down.

Regarding the crazy deadlines, I am of mixed mind. I have a couple of papers that are about 4-6 months into the review process at other journals because they don't have short deadlines or don't enforce them. I don't know about anyone else, but when I review papers I often procrastinate and do the review near the deadline. The length of time I am given to do a review will not affect the amount of time I spend on it or the quality of my review; it will just tell me how long I can procrastinate. I know this is awful and I try to get back sooner, but I don't think I am alone, and when there are three reviewers at least one will wait until the end. So, I understand not liking the short turnaround, but really, if you don't have the time in the next ten days you just decline the request.

As an author, I greatly appreciate the quick turnaround. However, I have started posting most of my work as pre-prints before publishing, so the delay of review etc. is less important now. As an example, though, I just had a paper published after about 5 months of back and forth with reviewers (I had to withdraw from one journal because of one counterproductive reviewer and move to another). During this period, the pre-print has been downloaded thousands of times and I have reviewed papers that cite it. The publication delay from traditional publishers has a significant negative impact on science, especially in quickly moving fields.

I don't disagree with anything you say. I have my share of horror stories when it comes to waiting for reviewers, so I definitely appreciate a quick turnaround as much as the next guy. My problem with MDPI is that their focus on speed is, in my view, detrimental to quality. How good do you expect my review to be if you give me a super short deadline to complete it? Unlike you, I don't wait until the last minute; I do reviews whenever I have time, so the length of time I am given does matter. Thinking as an author here, too: I got reviews from them that required major revisions, but we were still only allowed 7-10 days to resubmit the paper; otherwise it has to be submitted as a new paper. And they start spamming like crazy one minute after said deadline, even during the weekend, to ask for the "status" of the revision and to remind you that you have to submit ASAP.

By the way, their submission-to-publication metrics are skewed by their rule of allowing only one round of revision: if the reviewers are still unhappy after that, you have to resubmit the paper as a new manuscript, which of course resets the clock to zero. The "new" paper then gets published quite fast, since by that point many of the issues will already have been dealt with.

Look, I am not against MDPI (I have published 4-5 papers with them and will do so again in the future), but my experience with them has been checkered. Frontiers is much better from this point of view: they are WAY less spammy, and you communicate with actual scientists instead of dealing with those clueless editorial assistants all the time.

Sun_Worshiper

MDPI may have a few respectable journals, but let's be honest, their business model is to pump out as much as possible without much quality control.

Kron3007

Quote from: youllneverwalkalone on January 08, 2021, 07:56:06 AM

I don't disagree with anything you say. I have my share of horror stories when it comes to waiting for reviewers, so I definitely appreciate a quick turnaround as much as the next guy. My problem with MDPI is that their focus on speed is, in my view, detrimental to quality. How good do you expect my review to be if you give me a super short deadline to complete it? Unlike you, I don't wait until the last minute; I do reviews whenever I have time, so the length of time I am given does matter. Thinking as an author here, too: I got reviews from them that required major revisions, but we were still only allowed 7-10 days to resubmit the paper; otherwise it has to be submitted as a new paper. And they start spamming like crazy one minute after said deadline, even during the weekend, to ask for the "status" of the revision and to remind you that you have to submit ASAP.

By the way, their submission-to-publication metrics are skewed by their rule of allowing only one round of revision: if the reviewers are still unhappy after that, you have to resubmit the paper as a new manuscript, which of course resets the clock to zero. The "new" paper then gets published quite fast, since by that point many of the issues will already have been dealt with.

Look, I am not against MDPI (I have published 4-5 papers with them and will do so again in the future), but my experience with them has been checkered. Frontiers is much better from this point of view: they are WAY less spammy, and you communicate with actual scientists instead of dealing with those clueless editorial assistants all the time.

Frontiers was initially spamming me all the bloody time, but as I said they have toned it down now. I think/hope that MDPI will tone it down as well, but I appreciate that this is how they (and Frontiers) get guest editors and establish themselves during the early phase.

I do really like Frontiers' interactive review system, as both an author and a reviewer. It is much more efficient than the traditional approach.


Kron3007

Quote from: Sun_Worshiper on January 08, 2021, 08:04:45 AM
MDPI may have a few respectable journals, but let's be honest, their business model is to pump out as much as possible without much quality control.

This is exactly what everyone said about PLoS. When other publishers saw the success of this model, they all launched their own versions (i.e., Nature Communications, Scientific Reports, etc.), where the goal is cranking out papers and making money. This is one of several problems with leaving scientific publishing to the private sector. At the end of the day, most for-profit publishers are more interested in profit than they are in quality. Even Nature and Science care more about breadth of interest and impact factors than they really do about the quality of the research.

I would prefer that publishing were not run by for-profit companies in the first place. In Canada, the National Research Council (NRC) publishes a number of journals. They are all good quality (not exceptionally high IF, but good journals) and generally don't charge page fees unless you want colour prints (irrelevant in the digital age, IMO) or open access (where they charge 1500 CAD, about half of most OA fees). I really like publishing with them and would do so more often, but it is generally not great (at least in my field) to publish in only a couple of journals.


Hibush

Quote from: youllneverwalkalone on January 08, 2021, 07:56:06 AM

I don't disagree with anything you say. I have my share of horror stories when it comes to waiting for reviewers, so I definitely appreciate a quick turnaround as much as the next guy. My problem with MDPI is that their focus on speed is, in my view, detrimental to quality. How good do you expect my review to be if you give me a super short deadline to complete it? Unlike you, I don't wait until the last minute; I do reviews whenever I have time, so the length of time I am given does matter. Thinking as an author here, too: I got reviews from them that required major revisions, but we were still only allowed 7-10 days to resubmit the paper; otherwise it has to be submitted as a new paper. And they start spamming like crazy one minute after said deadline, even during the weekend, to ask for the "status" of the revision and to remind you that you have to submit ASAP.

By the way, their submission-to-publication metrics are skewed by their rule of allowing only one round of revision: if the reviewers are still unhappy after that, you have to resubmit the paper as a new manuscript, which of course resets the clock to zero. The "new" paper then gets published quite fast, since by that point many of the issues will already have been dealt with.


These comments bring up an adaptation we will have to make as authors and reviewers. Fast-moving fields legitimately need a quite swift time to publication. Many journals are attentive to that need, and authors respond by publishing important advances there.

The challenge is how to get a good and rapid review, realizing that hardly anyone has time to review right away. The simplest approach is to send requests widely and hope a few hit, so we may be stuck with that. It seems that the way to manage that as a prospective reviewer is to identify when you might be able to review. At that point, and only at that point, look at the requests for review that have arrived recently and accept the one that seems most interesting. If that becomes the new social norm, the publishers will know that it is ok if 95% of the requests for review are ignored. Reviewers will know that they are not expected to accept unless they are ready to review. The system could be quite efficient.

A side effect of the time efficiency is that the reviewers would be drawn more from outside the paper's immediate field. They might miss nuanced problems that only insiders would recognize. On the other hand, you would not have conspiring or feuding laboratories determining what gets published based on relationships. There should be a way to make sure someone with close knowledge takes a look, especially if the results are going to make news.

mamselle

Just waving a tiny finger out of the ocean that is humanities scholarship to say that all these tight deadlines and swift review-to-publication standards don't make so much never-mind to my 13th c. documents, which are pretty much going to stay quietly in their libraries and archives and not do much in six weeks.

The trouble is when the science-y protocols for publishing start leaking into humanities publishing, where (as I have probably mentioned a few times before) no one gets the kind of money, or faces the kind of time constraints, that the sciences have to pay subventions or pressure editors.

I'd like to have as much publication pressure on my work as the physics guys I worked for a while back did; they turned an R&R around in a week so they wouldn't be scooped by that lab on the other coast that they knew was hot on their tails.

And some of the ways of working that I learned from them were helpful when I tried applying them to my own stuff: I started looking for ways to collaborate, sought out more "case-study-like" topics that would stand up to conversational exploration, and tried developing more tightly formulated bibliographies from the online sources the science folks were starting to use (since I had to do all their PDF entries, I knew how to do my own).

But the scale of financing and the level of collegial exchange in the humanities, while more informed by these things over the past couple of decades, remain tiny.

Maybe it's because we really aren't intellectuals, after all...(interthreadual allusion)

M.
Forsake the foolish, and live; and go in the way of understanding.

Reprove not a scorner, lest they hate thee: rebuke the wise, and they will love thee.

Give instruction to the wise, and they will be yet wiser: teach the just, and they will increase in learning.

Kron3007

Quote from: Hibush on January 08, 2021, 09:48:45 AM


These comments bring up an adaptation we will have to make as authors and reviewers. Fast-moving fields legitimately need a quite swift time to publication. Many journals are attentive to that need, and authors respond by publishing important advances there.

The challenge is how to get a good and rapid review, realizing that hardly anyone has time to review right away. The simplest approach is to send requests widely and hope a few hit, so we may be stuck with that. It seems that the way to manage that as a prospective reviewer is to identify when you might be able to review. At that point, and only at that point, look at the requests for review that have arrived recently and accept the one that seems most interesting. If that becomes the new social norm, the publishers will know that it is ok if 95% of the requests for review are ignored. Reviewers will know that they are not expected to accept unless they are ready to review. The system could be quite efficient.

A side effect of the time efficiency is that the reviewers would be drawn more from outside the paper's immediate field. They might miss nuanced problems that only insiders would recognize. On the other hand, you would not have conspiring or feuding laboratories determining what gets published based on relationships. There should be a way to make sure someone with close knowledge takes a look, especially if the results are going to make news.

Yeah, I do think the pre-print approach addresses a lot of the issues around timeliness, but it pains me to have papers held in limbo for months on end even after they have been accepted with revisions.

I am also a little bitter about the whole peer-review enterprise and the publishing industry in general. Some reviewers are very helpful and bring up great points and suggestions that make for a better paper, but others are just counterproductive and either miss key points or focus on minor issues that are not all that relevant. I find editors are not usually very good at making final decisions when there are mixed reviews or disagreements with reviewers. Perhaps I am just venting, though; I am still waiting on one paper months after sending in the revised version... good thing we posted the pre-print.