Despite criticism, the peer review system is still widely used by most scholarly journals. Having been through the process as an author, reviewer, and editor, I would like to share my four steps for reviewing a paper, particularly in the area of energy forecasting.
1. Shall I review it?
I don't review every paper assigned to me. Under the following circumstances, I will turn down a review request after browsing the abstract or the entire paper:
A. I don't have enough expertise to review the paper AND I have no immediate need to learn the subject.
B. I don't have time to review the paper.
It takes me on average 4 to 8 hours to review a journal manuscript, depending upon its quality. Usually the review is due in a month. If I don't have 4 to 8 hours of spare time over the next month, I have to decline the review request.
C. Conflict of interest.
For instance, the paper may be from a close friend of mine. If I don't feel comfortable accepting the paper due to some irreparable issues in it, I will suggest that the editor find someone else to review it.
2. Contribution
Assuming I accept the review request, I will read the paper a second time, mainly focusing on its contribution:
A. Is there anything new?
I like to see a paper solving a new problem, proposing a new methodology for an old problem, or discovering new insights from a new dataset. Many papers in this field just create some "new approach" by arbitrarily putting together several old techniques. I try my best to stop these papers from being published. A typical example would be "short-term load forecasting with ANN + GA + PSO + ARIMA". I also look at the references to see whether the authors cite the important papers from the last few years. If the authors are totally unaware of the state of the art, I will recommend rejection. If the authors are just republishing their previous work with minimal additions, I will recommend rejection as well.
B. Is the contribution significant?
I think about whether the contribution is useful to the industry, and how useful it is. I like to read a paper describing the approach and findings of a real-world forecasting implementation at a utility. Unfortunately, among the thousands of energy forecasting papers published every year, few report findings from real-world projects. Sometimes academics list a utility friend as a co-author just for show, even though the model was never used at the utility.
C. Is the work challenging?
Most of the time, if the research work is novel and significant, it is also challenging. If I feel the work is only marginally novel and marginally significant, I will look into how challenging it was to accomplish the task. If the work is not challenging, I will recommend rejection.
3. Technical soundness
Assuming my impression is still good after reading the paper twice, I will read it a third time to look into its technical soundness. Most technical problems I have seen in journal publications and manuscripts fall into the following categories:
A. Using a clean dataset without saying how the outliers were detected and the data cleansed.
It may be more difficult to detect outliers and cleanse the data than to develop a forecast. Good results on a clean dataset do not say much about the usefulness of a forecasting methodology.
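To illustrate what I mean by "saying how the outliers were detected and cleansed," here is a minimal sketch of the kind of rule a paper could state explicitly. The rolling-window length, the threshold, and the hourly load series are hypothetical choices for illustration, not a recommendation:

```python
import pandas as pd

def flag_outliers(load: pd.Series, window: int = 24 * 7, z: float = 4.0) -> pd.Series:
    """Flag hourly load values that deviate strongly from a rolling mean.

    The window length and threshold z are illustrative; a real study
    should state and justify its own choices.
    """
    rolling_mean = load.rolling(window, center=True, min_periods=1).mean()
    rolling_std = load.rolling(window, center=True, min_periods=1).std()
    return (load - rolling_mean).abs() > z * rolling_std

# Usage sketch: drop or impute the flagged hours before fitting any model.
# cleaned = load[~flag_outliers(load)]
```

Whatever rule the authors actually use, the paper should document it and report how many observations were affected.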
B. Not conducting a rigorous out-of-sample test.
A lot of papers report extremely low errors as a result of peeking at future information. A common mistake shows up in ANN-based forecasting approaches: the authors split the data into three pieces, training, validation, and test, and if a model does not forecast the "test" data well, they change the model structure until a new structure performs well on all three pieces. At that point the test data has effectively been used for model selection, so the reported errors are no longer out-of-sample.
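To make the distinction concrete, here is a minimal sketch of a hold-out evaluation in which the test period is touched exactly once, after model selection is frozen. The candidate models are assumed to be scikit-learn-style estimators with fit/predict methods; the data splits and the error metric are illustrative assumptions, not a prescription:

```python
import numpy as np

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def select_and_evaluate(candidates, X_train, y_train, X_val, y_val, X_test, y_test):
    """Fit each candidate on the training period, pick the one with the
    lowest validation error, then report its error on the test period.

    The test set is used exactly once, after the choice is frozen.
    Revisiting it to tweak the model structure turns the reported
    out-of-sample error into an in-sample number in disguise.
    """
    best_model, best_val = None, float("inf")
    for model in candidates:
        model.fit(X_train, y_train)
        val_error = mape(y_val, model.predict(X_val))
        if val_error < best_val:
            best_model, best_val = model, val_error
    return best_model, mape(y_test, best_model.predict(X_test))
```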
C. Not comparing with existing approaches.
Unless the authors are working on a totally new problem, I would like to understand how the proposed approach is better than the existing one(s). If the problem is new, I would like to see the authors compare at least two approaches and discuss which one is better and why.
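Even when no established method exists, a naive benchmark gives the reader a point of reference. Here is a minimal sketch comparing a proposed forecast against a seasonal naive benchmark on hourly load; the 24-hour seasonality and the variable names are assumptions for illustration:

```python
import numpy as np

def seasonal_naive(history: np.ndarray, horizon: int, season: int = 24) -> np.ndarray:
    """Repeat the last observed seasonal cycle (e.g., the previous day
    for hourly load) as the forecast for the next `horizon` steps."""
    last_cycle = history[-season:]
    repeats = int(np.ceil(horizon / season))
    return np.tile(last_cycle, repeats)[:horizon]

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Hypothetical usage: report both numbers so readers can judge the gain.
# benchmark = seasonal_naive(train_load, horizon=len(test_load))
# print("proposed MAPE:", mape(test_load, proposed_forecast))
# print("naive MAPE:", mape(test_load, benchmark))
```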
D. Not using public data for comparison.
Unless the paper is reporting a real-world implementation project, or public data is not applicable to the study, I expect the authors to use some public data in their experiments. Otherwise, it is hard for other people to reproduce the proposed approach.
4. Writing the review report
Assuming there are no major technical issues, I will start writing the review report. In addition to constructive comments to help the authors improve the paper, I will also comment on clarity and logical flow, and point out typos if there are any. This usually involves reading the paper a fourth time. The better the manuscript, the more times I read it. Very often I start writing the review report as soon as I identify some strong reasons to recommend a rejection.