Tuesday, February 9, 2016

Improving Short Term Load Forecast Accuracy via Combining Sister Forecasts

Over the past three years, I have had 14 papers published or accepted by reputable journals. These papers came from about 40 original and revision submissions, roughly two revisions per accepted paper. Among the forty-some decision letters I received in those three years, four were rejections. Three of the four rejections were for this paper. I'm summarizing the journey of this paper in this post to shed some light on the peer review process for other authors like myself living in this broken system.

The original work came from the thesis of one of my MS students. I first submitted a two-page letter to PES Letters on February 4, 2015, and got a rejection after four months, mostly due to lack of details. The working paper is HERE. Although the reviews were not very helpful, I think the rejection was fair.

After the PESL submission, I worked with Rafal Weron and two students to expand the two-page letter into a full paper. I then submitted it to IEEE Transactions on Power Systems (TPWRS), and got a rejection after about 10 weeks. We received comments from three reviewers:

  • Reviewer #1 wrote 6 sentences. One sentence summarized the work presented in our paper. One sentence was "The submitted manuscript is relatively well written and the references list is good." Three sentences repeated the same claim: "This manuscript does not present any novelty." The remaining sentence was "Load forecast is easy. Price forecast is very difficult!". I had no idea why the reviewer made such an irrelevant comment...
  • Reviewer #2 confirmed the value of our paper and asked a list of clarifying questions. Most of the comments were quite helpful.
  • Reviewer #3 wrote two pages of comments, or 1106 words, trying his/her best to kill our paper. What amazed me was that after reading the comments multiple times, I still wasn't sure whether they were written for our paper. The reviewer also used the term "garbage writings" to criticize our paper, which reminded me of one of the 10 reviewers who rejected my papers 5 years ago. That reviewer also liked to use the terms "garbage writings" and "garbage tables" in the review report.

Being speechless about the decision letter, we revised the paper a bit and resubmitted it to Applied Energy. Applied Energy was not my choice, because the journal has an awful reputation for manipulating its impact factor through self-citation, and I did not want to associate my papers with a fishy journal. Since Rafal took the role of corresponding author this time, I respected his decision. The paper was rejected within a week by the editor-in-chief as "out of scope". Recalling that I had just reviewed some load forecasting papers, and even probabilistic load forecasting papers, for the journal, I have to say that "out of scope" is equivalent to "not citing enough papers from Applied Energy". It looks like the journal is still playing the same game to keep up its impact factor.

After getting three rejections, Rafal decided to send the paper to Energy. The first decision came back in three months asking for a major revision. The comments were from three reviewers. One reviewer confirmed the contribution and pointed out a typo. Another reviewer suggested including a machine learning method for forecast combination. The third reviewer copied exactly the same comments as TPWRS Reviewer #3! While those comments were barely relevant to our TPWRS submission, they were even less relevant to our Energy version.

In our revision, we rebutted the comments from that familiar reviewer, and revised the paper according to the comments from the other two reviewers. The second decision letter came back in about six weeks asking for another major revision. This time the other two reviewers were pleased. That familiar reviewer made some more nonsensical comments, and said "I will give authors one more time to revise their paper..."

We again rebutted all of the comments from this last reviewer, and submitted the revision on December 28, 2015. The acceptance letter came back on New Year's Eve. I guess that nonsensical reviewer was finally kicked out by the editor.

Citation

Jakub Nowotarski, Bidong Liu, Rafał Weron, Tao Hong, Improving short term load forecast accuracy via combining sister forecasts, Energy, Volume 98, 1 March 2016, Pages 40-49.

You can access this paper for free until March 24, 2016 through THIS LINK.

Improving Short Term Load Forecast Accuracy via Combining Sister Forecasts

Jakub Nowotarski, Bidong Liu, Rafał Weron, Tao Hong

Abstract

Although combining forecasts is well known to be an effective approach to improving forecast accuracy, the literature and case studies on combining electric load forecasts are relatively limited. In this paper, we investigate the performance of combining so-called sister load forecasts, i.e. predictions generated from a family of models which share a similar model structure but are built based on different variable selection processes. We consider 11 combination algorithms (three variants of arithmetic averaging, four regression-based methods, one performance-based method and three forecasting techniques used in the machine learning literature) and two selection schemes. Through a comprehensive analysis of two case studies developed from public data (Global Energy Forecasting Competition 2014 and ISO New England), we demonstrate that combining sister forecasts significantly outperforms the benchmark methods in terms of forecasting accuracy measured by Mean Absolute Percentage Error. With the power to improve the accuracy of individual forecasts and the advantage of easy generation, combining sister load forecasts has high academic and practical value for researchers and practitioners alike.
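
For readers unfamiliar with the idea, here is a minimal Python sketch of the basic mechanism, not the code or data used in the paper: it combines a few made-up sister forecasts by simple averaging, the most basic of the arithmetic averaging variants, and compares Mean Absolute Percentage Error before and after combination. The forecast values and the mape helper below are illustrative assumptions only.

    import numpy as np

    def mape(actual, forecast):
        """Mean Absolute Percentage Error, in percent."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Hypothetical sister forecasts: each row is one model's hourly load forecast (MW).
    # Sister models share a model structure but differ in variable selection;
    # these numbers are made up purely for illustration.
    sister_forecasts = np.array([
        [102.0, 110.0, 121.0, 118.0],
        [ 98.0, 108.0, 119.0, 122.0],
        [101.0, 112.0, 117.0, 120.0],
    ])
    actual_load = np.array([100.0, 109.0, 120.0, 119.0])

    # Simple (unweighted) average of the sister forecasts.
    combined = sister_forecasts.mean(axis=0)

    for i, f in enumerate(sister_forecasts, start=1):
        print(f"Sister model {i}: MAPE = {mape(actual_load, f):.2f}%")
    print(f"Simple average combination: MAPE = {mape(actual_load, combined):.2f}%")

In this toy example the simple average happens to beat each individual sister forecast; the paper evaluates much richer combination schemes (regression-based, performance-based and machine learning methods) on the GEFCom2014 and ISO New England data.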
