Monday, March 28, 2016

Relative Humidity for Load Forecasting Models

The ultimate driver of using big data for predictive modeling and forecasting, in my opinion, is customization. Such customization can take the form of special treatments for individual regions in a territory and individual hours of the day, as discussed in my recent IJF paper Electric Load Forecasting with Recency Effect: A Big Data Approach. In this paper, we take another big data approach to load forecasting by breaking down a composite variable, the Heat Index. We show that the NWS formula for the Heat Index was not really designed for load forecasting.
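For context, the Heat Index itself is a regression of temperature and relative humidity. Below is a quick Python sketch of the commonly published NWS form (the Rothfusz regression); the NWS applies additional low- and high-humidity adjustments that are omitted here, so take this as an approximation rather than the operational definition.

def nws_heat_index(temp_f, rh_pct):
    """Approximate NWS Heat Index (deg F) via the Rothfusz regression.

    Intended for roughly temp_f >= 80 F and rh_pct >= 40%; the additional
    NWS adjustments for very low and very high humidity are omitted here.
    """
    T, RH = temp_f, rh_pct
    return (-42.379
            + 2.04901523 * T + 10.14333127 * RH
            - 0.22475541 * T * RH
            - 0.00683783 * T * T - 0.05481717 * RH * RH
            + 0.00122874 * T * T * RH + 0.00085282 * T * RH * RH
            - 0.00000199 * T * T * RH * RH)

print(round(nws_heat_index(90, 70), 1))  # about 105.9 deg F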

The paper went through three rounds of review with five reviewers. Most of them provided helpful comments. One reviewer, however, raised a few interesting but naive points, and I didn't bother to please him or her by revising the paper accordingly. Nevertheless, I would love to list those comments here so that other authors can use my arguments when responding to similar ones.

1. "Twenty-five papers and previous works are cited in this paper, with exactly 54 references along the paper. On these 54 times, the authors are citing themselves at least 31 times" [...]"This behavior tends to give the reader a strange feeling and do not give a lot of credit to your work: it is hard to value it since you mostly compare yourself to.... yourself. You need to justify it properly before claiming these kind of affirmations."

Dear reviewer,
Unfortunately you are left with a "strange" feeling. I believe this is mostly because you are not very familiar with the recent academic literature and field practice in load forecasting. Maybe you should Google "load forecasting", "electric load forecasting" or "energy forecasting", and see how my work shows up among the top three entries on Google's first page. THIS POST discusses my way of citing references. The papers on my reference list are carefully picked based on relevance and quality. The quality is mainly determined by whether the work is being used by the industry or not. So far, my readers have been very pleased with the useful references I've listed in my papers. Oh, you might be pissed off by my not citing (enough of) your papers. For me to cite more of your papers, you should write more high-quality papers and show how your work is valued by the industry.

In my first submission there were 25 references, of which 9 were my own papers. We carefully considered the reviewer's comments and increased the number of references to 34 in the final submission, of which 12 were mine.

2. "All the models in the paper are derivatives of the Vanilla's Tao Benchmark. It has been shown that this model can be outperformed by a significant margin by state of the art models (see GEFCOM2012 results). Is it useful to use such models (and to finally improve it by max 9%) while GEFCOM2012 results show that some models can improve its forecast by almost 40%."

Dear reviewer,
Please take another look at Table I. That 5.21% was from the Vanilla model. The MAPE of model B4, without humidity, is already down to 3.79%, and our proposed model is at 3.62%. That is more than a 30% improvement over the Vanilla model, and we did not even add holiday effects to the models. On the other hand, please take a look at this paper, Weather Station Selection for Electric Load Forecasting. Are you wondering why that entire paper is based on the Vanilla model, and why I didn't even add the recency effect? It is because the proposed methodology can also be applied to more complicated models. To avoid a verbose presentation and distraction from the main theme of a paper, we can show the results on a benchmark model.
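Just to spell out the arithmetic behind that figure, here is the calculation using the numbers quoted above from Table I:

vanilla_mape = 5.21    # Vanilla benchmark MAPE (%), from Table I
proposed_mape = 3.62   # proposed model with RH variables (%)
improvement = (vanilla_mape - proposed_mape) / vanilla_mape
print(f"{improvement:.1%}")  # about 30.5%, i.e., more than 30% over the Vanilla model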

3. "It is well known that multicolinearity is very bad in MLR models and can lead to instability of parameters and false results. It would be nice to have an idea of estimated parameters for the different variables and models, and to exhibit significance results, tests, etc."

Dear reviewer,
Please read some papers in the load forecasting literature and see how often people use lagged variables (both load and temperature). Maybe my recent IJF paper on the recency effect will totally piss you off. Are you wondering why those papers use such highly correlated variables? It is because we have so many observations in load forecasting. BTW, please do not show those significance tests, such as p-values, in your papers. They are useless in load forecasting. Again, we have so many observations that the residuals are rarely normally distributed. Moreover, those p-values come from the in-sample fit, which tells us nothing about the predictive power of the models. Furthermore, with hundreds of parameters being estimated in a load forecasting model, how do you plan to show significance tests for all of those variables? You may want to read Scott Armstrong's paper on Illusions in Regression Analysis to re-examine your understanding of regression analysis.
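For readers who have not seen how those lagged variables are typically constructed, here is a minimal Python sketch; the lag choices are illustrative only and are not the specification from my recency effect paper.

import pandas as pd

def add_lagged_features(df, temp_lags=range(1, 25), load_lags=(24, 168)):
    """Append lagged temperature and load columns to an hourly DataFrame.

    Assumes df has 'temp' and 'load' columns and a datetime index; the
    default lags are for illustration, not a recommended specification.
    """
    out = df.copy()
    for k in temp_lags:
        out[f"temp_lag{k}"] = out["temp"].shift(k)
    for k in load_lags:
        out[f"load_lag{k}"] = out["load"].shift(k)
    return out.dropna()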

Citation

Jingrui Xie, Ying Chen, Tao Hong and Thomas D. Laing, "Relative humidity for load forecasting models", IEEE Transactions on Smart Grid, in press.

The working paper is available HERE.

Relative Humidity for Load Forecasting Models

Jingrui Xie, Ying Chen, Tao Hong and Thomas D. Laing

Abstract

Weather is a key driving factor of electricity demand. During the past five decades, temperature has been the most commonly used weather variable in load forecasting models. Although humidity has been discussed in the load forecasting literature, it has not been studied as formally as temperature. Humidity is usually embedded in the form of the Heat Index (HI) or the Temperature-Humidity Index (THI). In this paper, we investigate how Relative Humidity (RH) affects electricity demand. From a real-world case study at a utility in North Carolina, we find that RH plays a vital role in driving electricity demand during the warm months (June to September). We then propose a systematic approach to including RH variables in a regression analysis framework, resulting in the recommendation of a group of RH variables. In this case study, the proposed models with the recommended RH variables improve the forecast accuracy of Tao's Vanilla Benchmark Model and its three derivatives in one-day (24-hour) ahead, one-week ahead, one-month ahead and one-year ahead ex post forecasting settings, with the relative reduction in Mean Absolute Percentage Error (MAPE) ranging from 4% to 9%. They also outperform two HI-based models under the same settings. Moreover, an extended test case demonstrates the effectiveness of these RH variables in improving Artificial Neural Network models.
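For readers who want a concrete picture of the modeling framework, below is a rough Python sketch that fits a regression in the style of Tao's Vanilla Benchmark (trend, calendar class variables, and temperature polynomials interacted with month and hour) and then appends a few RH terms. The data are synthetic and the particular RH terms are illustrative only; the recommended group of RH variables in the paper comes from its own systematic selection procedure, which this sketch does not reproduce.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for hourly utility data (the paper uses a real NC utility).
idx = pd.date_range("2014-01-01", periods=24 * 365, freq="H")
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "temp": 60 + 20 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 5, len(idx)),
    "rh": np.clip(rng.normal(65, 15, len(idx)), 5, 100),
}, index=idx)
df["load"] = 1000 + 0.8 * (df["temp"] - 65) ** 2 + 2 * df["rh"] + rng.normal(0, 50, len(idx))
df["trend"] = np.arange(len(df))
df["month"], df["hour"], df["weekday"] = idx.month, idx.hour, idx.dayofweek

# Vanilla-Benchmark-style structure: trend, calendar class effects, and
# temperature polynomials interacted with month and hour.
vanilla = ("load ~ trend + C(weekday):C(hour) + C(month)"
           " + C(month):temp + C(month):I(temp**2) + C(month):I(temp**3)"
           " + C(hour):temp + C(hour):I(temp**2) + C(hour):I(temp**3)")

# Illustrative RH terms only -- not the exact variable group recommended in the paper.
with_rh = vanilla + " + rh + I(rh**2) + rh:temp"

fit = smf.ols(with_rh, data=df).fit()
print(fit.params.head())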
