As a volunteer in the peer review system, I handle (as an editor and editor-in-chief) and review (as a reviewer) at least 10 papers per month. While I try to perform a thorough review of each paper following these four steps, the consequence is that I spend more time helping others improve and publish their papers than working on my own. As the editorial workload keeps growing, I have been trying to figure out a more efficient way of performing editorial services. One phenomenon I have observed is that most papers (about 80% or more) get rejected in the end. Most of my editorial services are for top journals such as TSG and IJF, whose rejection rates are even higher than 80%. In fact, many papers share similar reasons for rejection. One way to save time is to standardize the rejection language, so that I don't have to type the same reasons each time. To save time for other editors and reviewers in the same peer review system, I am more than happy to share the following rejection language.
1. Out of scope
2. Poor presentation and readability
3. Literature review
4. Data
5. Comparative study
6. Recycled submission
1. Out of scope
The topic is about [BRIEFLY DESCRIBE THE TOPIC], which is not of interest to [JOURNAL NAME]. I would suggest that the authors submit the paper to [OTHER JOURNAL NAMES].

For instance, if a paper on solar radiation forecasting is submitted to TSG, I would say it is out of scope. The paper should be sent elsewhere, such as to IJF or Weather and Forecasting.
2. Poor presentation and readability
There are many typos and grammatical errors that significantly affect the readability, which makes it hard to understand and judge the quality of the work.

3. Literature review
The authors did not discuss any recent developments in the field. The literature review section does not demonstrate an understanding of the state of the art. I would suggest that the authors look into the work done by [NAME A FEW ACTIVE RESEARCH GROUPS, OR NAME A FEW RECENT PAPERS].

For forecasting and data analysis of electricity demand, refer to currently active researchers such as Hong (UNC Charlotte), Hyndman and Fan (Monash University), Goude et al. (EDF R&D), and Singleton et al. (University of Reading).
For electricity price forecasting, refer to Weron (Wroclaw University of Technology).
For wind power forecasting, I would direct the authors to Madsen and Pinson (Technical University of Denmark).
4. Data
Since the paper does not present anything validated through field implementation or currently used by utilities, I would suggest the authors include at least one case study using popular public data, such as [NAME A FEW DATA SOURCES].

The preferred data sources are GEFCom2012 and GEFCom2014. I am also OK with the authors using data from well-known Independent System Operators that publish comprehensive data, such as ISO New England.
5. Comparative study
The authors did not compare the proposed method with any state-of-the-art method published in the recent literature, popular benchmarks or naive benchmarks.
The proposed idea is very similar to [NAME A FEW PAPERS], but the authors did not make any direct comparison.

For load forecasting, I would expect them to at least use the Vanilla benchmark from GEFCom2012.
For wind power forecasting, the most popular benchmark is the persistence method.
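The persistence benchmark is trivial to implement, which is exactly why there is little excuse for omitting it: the forecast at every horizon simply repeats the last observed value. A minimal sketch (the function name and sample data below are illustrative, not from any benchmark package):

```python
import numpy as np

def persistence_forecast(observed, horizon=1):
    """Persistence (naive) benchmark: the forecast for every future step
    repeats the most recent observed value."""
    return np.full(horizon, observed[-1])

# Example: normalized wind power observations (made-up values)
wind_power = np.array([0.42, 0.47, 0.51, 0.49])
print(persistence_forecast(wind_power, horizon=3))  # → [0.49 0.49 0.49]
```

Any proposed wind power forecasting method should at least beat this baseline before claiming an improvement.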
6. Recycled submission
This manuscript was previously rejected by [JOURNAL NAME]. This version does not appear to be any different from the rejected version.
A very similar manuscript is currently under review at [JOURNAL NAME]. Self-plagiarism is not allowed.

It is understandable that authors may not want to make major changes, such as adding case studies with new data sets. They may then decide to submit the paper to a second-class journal. Sometimes I see papers rejected by TSG or TPWRS eventually appearing in other journals without a single change, typos and errors included. That is really a waste of reviewers' time.
Hopefully these standard comments will help editors and reviewers save time. I'm sure authors can also use them to sanity-check their papers before submission.