Tuesday, December 4, 2018

Leaderboard for BFCom2018 Final Match!!!

The final match of the BigDEAL Forecasting Competition 2018 was on probabilistic daily peak hour forecasting, a very important problem in today's electricity market but new to the academic literature. Even without any monetary prize, all 16 finalists from 5 countries submitted their forecasts. (See the qualifying match leaderboard HERE.)

The figure below shows the leaderboard for the BFCom2018 Final Match. The green highlighted ones are in-class students. I also created a naive forecast, which is highlighted in red. 

BigDEAL Forecasting Competition 2018 Final Match Leaderboard

One of my students, Zehan Xu, who was auditing the class but was disqualified in the qualifying match, also worked on the final problem and submitted his forecast on time. I included his score on the leaderboard, but marked his ranking as "BR-6", which means bragging rights for ranking #6. His ranking does not affect the rankings of the other teams. 

Congratulations to all the BFCom2018 finalists for completing this competition! 

To get updates about the follow-up events, please follow me on Twitter and/or connect with me on LinkedIn.

Sunday, December 2, 2018

Temperature-based Models vs. Time Series Models

Last week, Spyros Makridakis asked me a question:
I have been reading your energy competition and I cannot find any clear statements about the superiority of explanatory/exogenous variables. Am I wrong? Is there a place where you state the difference in forecasting accuracy between time series and explanatory multivariate forecasting, as it relates to the short term as well as beyond the first two or three days (not to mention the long term) for which accurate temperature forecasts exist?
Today, Rob Hyndman asked me a similar question, which was routed originally from Spyros.

In fact, this has been quite a debated topic in load forecasting. The answer is not straightforward. This subject could make a good master's thesis or even a doctoral dissertation. I was going to write a paper about it, but always had something more important or urgent to work on. Recently my research team has done some preliminary work in this direction. While the paper is still under preparation, let me start the discussion with this blog post, as part of the blog series on error analysis in load forecasting.

The literature is not vacant in this area. Various empirical studies have suggested different things.

Some earlier attempts were made by James Taylor. James has written many load forecasting papers. His best known work is on exponential smoothing models.

James' TPWRS2012 paper claimed that
Although weather-based modeling is common, univariate models can be useful when the lead time of interest is less than one day.
Referring to Fig. 9 of the paper, which depicts the MAPE values by lead time, the paper stated that
The exponential smoothing methods outperform the weather-based method up to about 5 hours ahead, but beyond this the weather-based method was better. 
Based on this paper, can we conclude that exponential smoothing models are more accurate than the weather-based methods for very short term ex ante load forecasting?

No.

This is my interpretation of the paper:
A world-class expert in exponential smoothing carefully developed several exponential smoothing models. These models generated more accurate forecasts than a U.K. power company's forecasts. 
The "weather-based method" used in that paper was devised by the transmission company in Great Britain using regression models. The paper briefly mentioned how the "weather-based method" worked, but the information was not enough for me to judge how accurate these weather-based models are. I don't know if this U.K. transmission company is using state-of-the-art models.

Some evidence came from recent load forecasting competitions, such as Global Energy Forecasting Competitions, npower forecasting challenges, and BigDEAL Forecasting Competition 2018.

In short, time series models, such as exponential smoothing and ARIMA models, never showed up as a major component of a winning entry in these competitions. On the other hand, regression models with temperature variables are always among the winning models.

ARIMA did show up in a winning method in GEFCom2014, but only in a supporting role: my former student Jingrui Xie used four techniques (UCM, ESM, ANN, and ARIMA) to model the residuals of a regression model (see our IJF paper).

Based on these competition results, can we conclude that time series models are not as accurate as regression models?

No.

In GEFCom2012, we let the contestants predict a few missing periods in the history without restricting them to using only the data prior to each missing period. In my GEFCom2012 paper, I briefly mentioned that
This setup may mean that regression or some other data mining techniques have an advantage over some time series forecasting techniques such as ARIMA, which may be part of the reason why we did not receive any reports using the Box–Jenkins approach in the hierarchical load forecasting track.
In GEFCom2012, the npower forecasting challenges, and the qualifying match of BFCom2018, actual temperature values were provided for the forecast period. In other words, these competitions were on ex post forecasting. Again, the temperature-based models have an advantage, since perfect temperature information was given for the forecast period.

GEFCom2014 and GEFCom2017 were on ex ante probabilistic forecasting. The temperature-based models dominated the leaderboards. This is fair evidence favoring temperature-based models.

For benchmarking purposes, I included two seasonal naive models in my recency effect paper at the request of an anonymous reviewer. Both performed very poorly compared with the temperature-based models. I commented in the paper:
Seasonal naïve models are used commonly for benchmarking purposes in other industries, such as the retail and manufacturing industries. In load forecasting, the two applications in which seasonal naïve models are most useful are: (1) benchmarking the forecast accuracy for very unpredictable loads, such as household level loads; and (2) comparisons with univariate models. In most other applications, however, the seasonal naïve models and other similar naïve models are not very meaningful, due to the lack of accuracy. 
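For readers unfamiliar with the benchmark: a seasonal naive forecast simply repeats the load observed one season earlier. Here is a minimal Python sketch, assuming an hourly load series and a one-week season (both choices are mine for illustration):

import pandas as pd

def seasonal_naive(load: pd.Series, season_hours: int = 168) -> pd.Series:
    # Repeat the load observed one season earlier (168 hours = one week);
    # no temperature or calendar information is used.
    return load.shift(season_hours)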
Here is a quick summary based on the evidence so far:

  • For ex post point load forecasting, evidence favors temperature-based models.
  • For ex ante point load forecasting, there is no solid evidence favoring either approach. 
  • For ex ante probabilistic load forecasting, evidence favors temperature-based models.

I'm not a fan of comparing techniques. In my opinion, it is very difficult to make fair comparisons among techniques. If I were good at ANN but bad at regression, I could build far more accurate ANN models than regression models. Using exactly the same technique, two forecasters may build different models with distinct accuracy levels; my fuzzy regression paper offers such an example. In other words, the goodness of a model largely depends on the competency of the forecaster. The best way to compare techniques is through forecasting competitions. 

In practice, weather variables are a must-have in most load forecasting situations. I'll elaborate on this in another blog post. 

Tuesday, November 27, 2018

Winning Methods from BFCom2018 Qualifying Match

I invited the BFCom2018 finalists to share their methods used at the qualifying match. Here are the ones I've received so far.

#1. Geert Scholma

Team member: Geert Scholma

Software: Excel, R (dplyr, lubridate, ggplot2, plotly, tidyr, dygraphs, xts, nnls)

Core technique: Multiple Linear Regression.

The model includes the usual variables with some special recipe: 5 weekdays; federal holidays; strong bridge days (Monday before / Friday after a holiday); weak bridge days (others); 4th-degree polynomials of exponentially weighted moving average temperatures on 3 timescales (roughly 1 day, 1 week, and 1 month) with optimized decay factors; a 4th-degree polynomial time trend for long-term gradual changes, held constant after the last training date; and an 8th-degree polynomial of the day of the year for the yearly shape, with a weekend interaction.

Core methodology: No data cleaning. One weighted weather station, based on the non-negative linear regression coefficients of a second model step that combined the predictions of all the single-weather-station models from a first step.

Key reference: (Hong, Wang, & White, 2015).
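To illustrate the exponentially weighted moving average temperatures in this recipe, here is a minimal Python sketch (Geert worked in R; the decay factors below are placeholders, not his optimized values):

import numpy as np
import pandas as pd

def ewma_temperature(temp: pd.Series, decay: float) -> pd.Series:
    # Per-hour decay factor in (0, 1): values near 1 give long memory
    # (~1 month), smaller values short memory (~1 day).
    return temp.ewm(alpha=1 - decay, adjust=False).mean()

# Hypothetical hourly temperature series
hours = pd.date_range("2005-01-01", periods=24 * 365, freq="h")
temp = pd.Series(np.random.default_rng(0).normal(15, 8, len(hours)), index=hours)

features = pd.DataFrame({
    "T_day": ewma_temperature(temp, 0.90),    # roughly 1-day memory
    "T_week": ewma_temperature(temp, 0.985),  # roughly 1-week memory
    "T_month": ewma_temperature(temp, 0.997), # roughly 1-month memory
})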


#2. Redwood Coast Energy Authority

Team member: Allison Campbell, Redwood Coast Energy Authority and UNCC

Software: Python (SKLearn package LinearRegression, and the genetic algorithm package DEAP)

Core technique: Multiple Linear Regression.

I adapted the DEAP One Max Problem to optimize the selection of weather stations. The bulk of my model is built from Tao's vanilla benchmark, with the inclusion of lagged temperatures, a weighted moving average of the previous day's temperature, transformation of holidays to weekends/weekdays, and exponentially weighted least squares. Before the regression, I log-transformed the load. I also created 18 "sister" forecasts by redefining the number of months in a year from 6 to 24. This model was informed by Tao's doctoral thesis, Hong, Wang & White 2015 (weather station selection), Wang, Liu & Hong 2016 (recency effect), Nowotarski, Liu, Weron & Hong 2016 (combining sister forecasts), Xie & Hong 2018 (24 solar terms), and Arlot & Celisse 2010 (cross-validation for model selection).
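Two ingredients of this entry, the log transform and exponentially weighted least squares, can be sketched with scikit-learn's sample weights. The feature matrix and the decay rate below are placeholders, not Allison's actual setup:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))            # placeholder calendar/temperature features
y = np.exp(rng.normal(7.0, 0.2, 1000))    # placeholder positive hourly loads

lam = 0.999                               # assumed per-observation decay rate
weights = lam ** np.arange(len(y))[::-1]  # recent observations weigh more

model = LinearRegression()
model.fit(X, np.log(y), sample_weight=weights)  # fit on log-transformed load
forecast = np.exp(model.predict(X[-24:]))       # back-transform to load scale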


#5. Masoud_BigDEAL

Team member: Masoud Sobhani, UNCC

Software: SAS (proc GLM)

Core technique: Multiple Linear Regression

I work with Dr. Hong in the BigDEAL lab and I am the TA of the "Energy Analytics" course this semester. For the first few assignments of this class, we gave the same dataset to the students so that they could improve the accuracy of their forecasts as they learned different forecasting skills. As in previous classes, Dr. Hong asked me to prepare a benchmark forecast for the class. I built a model during the first lecture and we kept it as the benchmark for all assignments. Later, Dr. Hong decided to run a competition using the same dataset for the qualifying match. My initial benchmark model was still on the leaderboard and fortunately qualified for the next round.

In this model, I did not do any data cleansing; I used the raw data for forecasting. The core technique was the Vanilla Benchmark Model with recency (Wang, Liu, & Hong, 2016) and holiday effects (Hong, 2010). This model uses third-order polynomials of temperature, calendar variables, and interactions between them. I removed the Trend variable and used 14 lagged temperatures. For the weather station selection, I employed the exact method proposed in (Hong, Wang, & White, 2015).
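As a rough Python approximation of this model (Masoud used SAS proc GLM; the synthetic data and the exact interaction terms below are my assumptions, following the published vanilla-benchmark structure):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the competition data (hourly load and temperature)
idx = pd.date_range("2005-01-01", periods=24 * 400, freq="h")
rng = np.random.default_rng(2)
df = pd.DataFrame(index=idx)
df["T"] = 15 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 3, len(idx))
df["load"] = 1000 + 2 * (df["T"] - 18) ** 2 + rng.normal(0, 50, len(idx))
df["month"], df["weekday"], df["hour"] = idx.month, idx.dayofweek, idx.hour

# Calendar effects plus third-order temperature polynomials interacted
# with month and hour; the Trend term is removed, per Masoud's note.
formula = ("load ~ C(weekday)*C(hour) "
           "+ C(month)*(T + I(T**2) + I(T**3)) "
           "+ C(hour)*(T + I(T**2) + I(T**3))")

# Recency effect: 14 hourly lagged temperatures
for lag in range(1, 15):
    df[f"T_lag{lag}"] = df["T"].shift(lag)
    formula += f" + T_lag{lag}"

fit = smf.ols(formula, data=df.dropna()).fit()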


#7. SaurabhSangamwar_BigDEAL

Team Member: Saurabh Sangamwar, UNCC

Software: SAS (proc GLM)

Core technique: Multiple Linear Regression

Methodology:
  • Weather station selection using the approach proposed in (Hong, Wang, & White, 2015).
  • Used 24 solar terms to classify the data, as proposed in (Xie & Hong, 2018).
  • Added the recency effect to Tao's Vanilla Benchmark model, as proposed in (Wang, Liu, & Hong, 2016).
  • Used the holiday effect (treating a holiday as Sunday and the day after a holiday as Monday), the weekend effect, a trend variable (an increasing serial number), and the daily maximum and minimum temperatures with their interactions with month, solar term, and hour. When forecasting with solar terms, solar months 4 and 5 were grouped together.
  • Used two years (2006 and 2007) as the training period and forecasted the 2008 load.
  • Used 3-fold cross-validation and stepwise variable selection to choose the number of lagged effects.
  • The best number of lags differed from year to year, and solar terms sometimes worked better than Gregorian calendar months as the class variable (and sometimes vice versa). Therefore, point forecasts were generated with 11, 12, 13, and 14 lagged effects for both the solar-term and Gregorian calendars, and the average of the resulting eight forecasts was submitted (see the sketch after this list).
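Here is a sketch of that final combination step. The fit_and_forecast stub is hypothetical; in Saurabh's workflow it would refit the recency model in SAS with the given lag count and calendar grouping:

import numpy as np

def fit_and_forecast(n_lags: int, calendar: str) -> np.ndarray:
    # Hypothetical stand-in: refit the recency model with `n_lags`
    # temperature lags and the given calendar grouping, and return
    # the 8784 hourly forecasts for 2008.
    rng = np.random.default_rng(n_lags * 2 + (calendar == "gregorian"))
    return 1000 + rng.normal(0, 10, 8784)

# Eight point forecasts: lags 11-14 crossed with the two calendars;
# the simple average is submitted.
forecasts = [fit_and_forecast(l, c)
             for l in (11, 12, 13, 14)
             for c in ("solar_terms", "gregorian")]
final_forecast = np.mean(forecasts, axis=0)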

#10. YikeLi_BigDEAL

Team member: Yike Li, Accenture and UNCC 

Software: SAS (proc GLM)

Core techniques: Multiple Linear Regression

Core methodology:
  • Weather station selection: a modified version of (Hong, Wang, & White, 2015) that evaluates all possible combinations of the top selected weather stations, selecting the virtual station based on 3-fold cross-validation.
  • Recency effect: performed a 2-dimensional forward stepwise analysis, assuming that the MAPE values of the d-h combinations on the validation period (d = 0-6, h = 0-24) form a convex surface. Starting from d = 0, gradually add h terms to Tao's vanilla model until adding more temperature lags no longer improves the MAPE; then keep the selected h and gradually add d terms until adding more past daily averages no longer improves the MAPE (a sketch of this search follows below). 
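A schematic of that two-stage greedy search, where build_mape is a hypothetical callback that fits the augmented vanilla model and returns the validation MAPE:

def forward_stepwise_recency(build_mape, max_h=24, max_d=6):
    # Stage 1: grow hourly temperature lags while validation MAPE improves.
    best_h, best_mape = 0, build_mape(0, 0)
    for h in range(1, max_h + 1):
        mape = build_mape(h, 0)
        if mape >= best_mape:
            break
        best_h, best_mape = h, mape
    # Stage 2: keep best_h fixed and grow daily-average terms the same way.
    best_d = 0
    for d in range(1, max_d + 1):
        mape = build_mape(best_h, d)
        if mape >= best_mape:
            break
        best_d, best_mape = d, mape
    return best_h, best_d, best_mape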

#13. 4C

Team members:
  • Ilias Dimoulkas, KTH Royal Institute of Technology, Stockholm, Sweden
  • Peyman Mazidi, Loyola Andalucia University, Seville, Spain
  • Lars Herre, KTH Royal Institute of Technology, Stockholm, Sweden
  • Nicholas-Gregory Baltas, Loyola Andalucia University, Seville, Spain
Software: Matlab / Matlab Neural Network Toolbox

Technique: Feed-forward Neural Networks

Methodology:
  • Data cleansing. Missing values at the spring daylight saving hours were filled with the average of the previous and the following hours. Double values at the fall daylight saving hours were replaced by their average value. No other data cleansing or outlier detection was done.
  • Weather station selection. The technique described in (Hong, Wang, & White, 2015) was used with the difference that neural networks were used to make the forecasts instead of multiple linear regression. 
  • Feature selection. Forward sequential feature selection was used. The initial pool of variables consisted of time variables (year, month, hour, etc.), temperature related variables (temperature, power, lags, simple moving average) and cross effects between the temperature and the time variables. The pool contained 172 variables in total. The evaluation was also based on neural networks forecasts. The final feature set consisted of 31 variables.
  • Forecast. Ten neural networks were trained on the whole data set (years 2005-2007). The forecast for year 2008 was the mean of the ten networks' forecasts (see the sketch below).
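The ensemble step can be sketched with scikit-learn (the team used Matlab's Neural Network Toolbox; the network architecture and the placeholder data below are my assumptions):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X_train = rng.normal(size=(5000, 31))    # 31 selected features (per the team)
y_train = rng.normal(1000, 100, 5000)    # placeholder hourly load
X_2008 = rng.normal(size=(8784, 31))     # leap-year forecast period

# Train 10 networks with different random initializations and
# average their predictions, mirroring the team's ensemble.
forecasts = []
for seed in range(10):
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=seed)
    net.fit(X_train, y_train)
    forecasts.append(net.predict(X_2008))
forecast_2008 = np.mean(forecasts, axis=0)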

#13. AdG

Team member: Andrés M. Alonso, Universidad Carlos III de Madrid, Spain.

Software: Matlab (Statistics and Machine Learning toolbox)

Technique: support vector regression

In this project, I used SVM regressions to predict hourly loads using explanatory variables such as temperatures, day of the week, month, federal holidays, and a linear trend. As in Hong et al. (2015), I selected weather stations using the 2007 loads as a trial period, keeping the five stations with the best MAPE. In the final model, the five temperature series were used directly instead of an aggregate measure. The local, or focused, approach consists of selecting days in the training sample whose temperature behavior is similar to that of the day to be predicted, so that the regression is trained only on similar days. That is, for 2007 (2008), I performed 365 (366) SVM regressions, each trained on a different sample. For 2007, the focused approach outperformed the global approach that uses all the data in the training set. 
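A minimal Python sketch of the focused approach (Andrés worked in Matlab; the neighborhood size k and the Euclidean similarity measure are my assumptions):

import numpy as np
from sklearn.svm import SVR

def similar_day_svr(X_train, y_train, train_temps, day_temps, x_day, k=50):
    # Select the k training days whose 24-hour temperature profiles are
    # closest to the target day's, then fit an SVR on those days only.
    dist = np.linalg.norm(train_temps - day_temps, axis=1)
    nearest = np.argsort(dist)[:k]
    rows = np.concatenate([np.arange(d * 24, (d + 1) * 24) for d in nearest])
    model = SVR().fit(X_train[rows], y_train[rows])
    return model.predict(x_day)  # 24 hourly predictions for the target day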


References used by the finalists:
  • Hong, T. (2010). "Short Term Electric Load Forecasting," Ph.D. dissertation, Graduate Program of Operations Research and Dept. of Electrical and Computer Engineering, North Carolina State University.
  • Wang, P., Liu, B. and Hong, T. (2016). "Electric load forecasting with recency effect: a big data approach," International Journal of Forecasting, vol. 32, no. 3, pp. 585-597.
  • Hong, T., Wang, P. and White, L. (2015). "Weather station selection for electric load forecasting," International Journal of Forecasting, vol. 31, no. 2, pp. 286-295.
  • Tashman, L. J. (2000). "Out-of-sample tests of forecasting accuracy: an analysis and review," International Journal of Forecasting, vol. 16, no. 4, pp. 437-450.
  • Arlot, S. and Celisse, A. (2010). "A survey of cross-validation procedures for model selection," Statistics Surveys, vol. 4, pp. 40-79.
  • Xie, J. and Hong, T. (2018). "Load forecasting using 24 solar terms," Journal of Modern Power Systems and Clean Energy, vol. 6, no. 2, pp. 208-214.
  • Nowotarski, J., Liu, B., Weron, R. and Hong, T. (2016). "Improving short term load forecast accuracy via combining sister forecasts," Energy, vol. 98, pp. 40-49.

BTW, I also created a new label "winning methods" so that readers of this blog can easily find the winning methods from previous competitions. 

Tuesday, November 6, 2018

Leaderboard for BFCom2018 Qualifying Match!!!

The forecast submission due date for the Qualifying Match of the BigDEAL Forecasting Competition 2018 was Nov 4, 2018. Out of the 81 teams that registered for the competition, 39 successfully submitted their forecasts by the due date. Ten teams will advance to the final match, together with my Energy Analytics class of 2018, which includes five master's and PhD students plus the teaching assistant Masoud Sobhani.

Two methods are used to calculate the MAPE of the forecasts. The first is the direct calculation of the Mean Absolute Percentage Error (MAPE) based on the raw forecast submitted by each team, as originally announced. The other is based on the bias-adjusted forecast, which is calculated by dividing the hourly load forecast by the coincident monthly energy of the forecast and then multiplying it by the actual monthly energy of that month. For each measure, the MAPE of the last-ranked in-class student is used as the qualifying bar. A team outperforming either bar advances to the final match.
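To make the second measure concrete, here is a minimal sketch of the bias adjustment, assuming hourly, datetime-indexed forecast and actual series:

import pandas as pd

def bias_adjust(forecast: pd.Series, actual: pd.Series) -> pd.Series:
    # Scale the hourly forecast so that its energy in each month matches
    # the actual energy of that month; MAPE is then computed as usual.
    month = forecast.index.to_period("M")
    forecast_energy = forecast.groupby(month).transform("sum")
    actual_energy = actual.groupby(month).transform("sum")
    return forecast / forecast_energy * actual_energy

def mape(actual: pd.Series, forecast: pd.Series) -> float:
    return float((abs(actual - forecast) / actual).mean() * 100)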

The figure below shows the leaderboard for BFCom2018 Qualifying Match. The green highlighted ones are in-class students, while the qualifying bar for each measure is in bold. The teams above the red line are the finalists. The "Ranking (BOTH)" column lists the rankings based on the sum of two rankings from both measures.

BigDEAL Forecasting Competition 2018 - Qualifying Match Leaderboard

Congratulations to the BFCom2018 finalists! A tougher problem is waiting for them in the final match :)

P.S. I will organize a series of follow-up events for the winners to present their methodologies. For more information about this qualifying match, please keep an eye on the FAQ page.

Saturday, October 27, 2018

Seven Lessons Learned from Two Plagiarism Cases

Recently, I was presented with two plagiarism cases in one month, which triggered this blog post.

Case #1

Research group A used an idea proposed by research group B a few years ago to publish two papers in a flagship journal. 

In the first paper, A applied B's method to an application where B had applied the same method, but A did not cite B's original paper at all. In the second paper, A applied B's method to a different application. A cited the original paper in which B proposed the method, but did not clearly state that the method was first proposed by B. Instead, A only cited B's comments on the previous work, which had motivated B's original idea. In other words, a reader of A's second paper would conclude that the method was originally proposed by A.

Since the two papers were published almost at the same time, it is clear that A was aware of B's work but did not give proper credit to the original paper. The improper citation misled the editors and reviewers. 

I have no intention of defending the editors and reviewers who handled those two papers in the peer review process. Their irresponsibility was part of the problem! In fact, both would have been good papers had the authors properly cited the literature. At least the second one deserved to be published by that flagship journal.

My recommendation to A was to retract both papers and to apologize to B.

Case #2

Last year, I worked on a proposal with a few collaborators, including the Lead PI (A), PI (B), myself, and a few others. In our proposal, we used B's idea, which he published in a paper for a different application. The proposal was rejected by the funding agency. A decided to complete the research and publish a paper with us, so she sent the proposal to her student (C) to continue the work.

A few weeks ago, A sent the manuscript to us coauthors and told me that C had completed the research and the results looked very promising. I quickly glanced through the manuscript and found the list of references very disorganized, so I asked A to work with C to redo the literature review. In addition, I found the results a bit fishy: the proposed method dominated its counterparts by a landslide. I thought it was too good to be true (see THIS POST about my smoke tests). I asked A to revise the paper and check the results.

After some investigation, A told me that C manipulated the computational results to make the benchmark models look bad. She has asked C to present the full picture.

Last week, I received the revised manuscript. I briefly read through it, but still didn't like how the references were cited. In addition, some sentences read familiar. I asked A to work with C to further improve the reference list, and to verify that the manuscript did not copy sentences from other papers.

Yesterday, A told me that she had found the method proposed in this draft identical to the method in B's paper. She was pissed off, because C had told her the idea was original. Moreover, the manuscript never mentioned that the same method had been used in a different application. Again, the improper citation misled A. In frustration, A wanted to kill the manuscript.

My recommendation to A was to properly cite B's original paper and submit the manuscript to a first-tier journal.

Lessons learned

  1. Plagiarism is defined as "the practice of taking someone else's work or ideas and passing them off as one's own." 
  2. Always give proper credit to the prior research by citing the papers in the right places!
  3. Every co-author should understand and be able to defend every piece of the paper. 
  4. Every reviewer and editor in the peer review process should carefully review the assigned paper.
  5. Do damage control ASAP. Don't wait!
  6. Don't over-react to plagiarism. 
  7. Take preventive actions to avoid plagiarism in the future. 
In the next blog post, I will further explain what "novelty" really means in the academic literature.

Tuesday, October 23, 2018

Shreyashi Shukla - Determined to Excel

Today (October 23, 2018), Shreyashi Shukla defended her MS thesis Daily Load Forecasting Using Hourly Temperatures.

Shreyashi Shukla's MS thesis defense
From left to right: Dr. Tao Hong, Shreyashi Shukla, Dr. Simon Hsiang, Dr. Churlzu Lim

Shreyashi received her B.Tech. with Honors in Production Engineering & Management from National Institute of Technology, Jamshedpur, in 2006. Before moving to the U.S. with her family, she had a 10-year progressive career in the energy sector in India. She joined our MSEM program in Fall 2017.

Every year I give a department seminar to share with the students about the research projects at BigDEAL. The purpose of these seminars is two-fold. On one hand, these seminars can broaden the students' view about systems engineering and engineering management. On the other hand, I would like to attract the most self-motivated and talented students from the program.

While most students were scared away after seeing how productive the BigDEAL students are, Shreyashi was one of the fearless students who contacted me after the seminar. During our first conversation in October 2017, I explained to her my expectations and told her about the BigDEAL entrance tests. She took the challenge, passed the tests, and officially joined BigDEAL in January 2018 to conduct her MS thesis research under my supervision.

The research problems BigDEAL students work on are never easy. In addition to tackling the research challenge, Shreyashi had family duties too. Every day she spends the morning on campus working on her coursework and research, and the rest of the day with her little daughter at home. She always comes to the lab on time, leaves on time, and works very efficiently. Nine months later, her research turned into a solid MS thesis, which made her the third "mom" student to complete MS thesis research at BigDEAL (after Jingrui Xie and Ying Chen). She is also my first Indian student. Next semester, Shreyashi will continue working with me towards her PhD degree.

Congratulations, Shreyashi!

Monday, October 22, 2018

FAQ for BFCom2018 Qualifying Match

After the two-week registration period, we officially kicked off the BigDEAL Forecasting Competition 2018 with 81 teams formed by 142 data scientists across 26 countries. This morning, I sent out the data and instructions to the contestants. If you are a registered contestant but have not yet received the data and instructions, please contact me directly.

BFCom2018 attracted 142 data scientists from 26 countries.

This blog post lists the frequently asked questions for BFCom2018. I'll be updating this post as the questions come along, so please stay tuned. 

Q: Which error measure are you going to use to rank the teams?
A: MAPE, mean absolute percentage error.

Q: Why are there 23 hours on Mar 9, 2008 and 25 hours on Nov 2, 2008?
A: Those days observed daylight saving time shifts; similar shifts appear in the historical years. See THIS BLOG POST for more information. In the original submission template, the hours on Nov 2, 2008 ran from 1 to 25. A new submission template was sent to the contestants on Oct 24, 2018, with the 2nd hour of Nov 2, 2008 repeated twice to match the 2008 temperature data.

Q: There are 28 weather stations, but only one load series. Which weather stations shall I use?
A: That's part of the challenge. Read this weather station selection paper for more information. 

Q: I'm new to load forecasting. Where shall I get started?
A: This qualifying problem is very similar to the load forecasting track of GEFCom2012. Reading the papers from those winning teams should help.

Q: We are going to use multiple methods. Can we submit multiple forecasts?
A: No. You should only submit one forecast for grading. If you have multiple forecasts, you may consider combining them. This paper may give you some idea about forecast combination.

Q: The local economy information, which was not given in the data, may have some significant effects to the forecasting period. Would you provide the local economy information? (For details, see Geert Scholma's comment under the original BFCom2018 announcement.)
A: No. We will add an error measure that calculates MAPE on the bias-adjusted load forecast. We will adjust the hourly forecast based on the coincident monthly energy, so that your forecast energy for each month equals the actual monthly energy. Beating the last-ranked in-class student on either measure secures a ticket to the final match.

Q: I did not pass the qualifying match bar, but I'm very interested in learning from the winners about their methodologies. Would you summarize their methods?
A: I will organize a series of webinars for the finalists to talk about their methods, though the webinars will not be recorded. I will also invite the finalists to summarize their winning methods in posts on this blog.

Q: I'm a PhD student just starting my research in energy forecasting. I've learned a lot from this competition. Will you organize this again?
A: Yes. This is not the first BigDEAL Forecasting Competition, and it will not be the last either. You can follow me on Twitter, subscribe to this blog, and/or connect with me on LinkedIn to get updates about events like this.

(To be continued...)

Tuesday, October 16, 2018

Robust Regression Models for Load Forecasting

One of my doctoral majors was operations research, for which I took many courses in graduate school to build my knowledge of optimization. The topic of my dissertation was load forecasting. Only two chapters were related to optimization: one on Artificial Neural Networks, and the other on Fuzzy Regression (or Possibilistic Linear Regression).

In fact, the fuzzy regression chapter was the only one that seriously required some optimization skills; it was published as an FODM paper three years after my graduation. To build a fuzzy regression model, I had to formulate the parameter estimation process as a linear program and solve it in CPLEX. At that time, Gurobi was not even able to provide a feasible solution for my fuzzy regression model with 200+ parameters.

After that, I continued my career in forecasting. I knew my optimization background was helpful for forecasting, but I didn't really expect to apply many optimization skills to it.

About a year ago, we performed a benchmark study to show that four representative load forecasting models would fail miserably with bad input data. That study was published as an IJF paper early this year. At the end of that IJF paper, we mentioned a future research direction of designing more robust load forecasting models.

In this paper, we propose three robust regression models for load forecasting. While all of them are more robust than the models compared in the IJF paper, the L1 regression model outperforms the others. In fact, L1 regression is not really new to load forecasting. It has been used for forecast combination, where some people call it Least Absolute Deviation (LAD) regression. Its "general" form, quantile regression, is heavily used in probabilistic load forecasting.
What's new about the L1 regression model in this paper?
We built an L1 regression model with hundreds of parameters. In fact, it shares the same variable combination as the Vanilla model used in the Global Energy Forecasting Competitions. Building such a model is nontrivial. We didn't find an off-the-shelf package to do what we needed, so we formulated it as a linear program and solved it using MATLAB's linprog.
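For readers curious about the formulation, here is a minimal sketch of L1 (least absolute deviation) regression as a linear program, using scipy's linprog in place of MATLAB's; this is an illustration, not our actual implementation:

import numpy as np
from scipy.optimize import linprog

def l1_regression(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Minimize sum(u) subject to -u <= y - X @ b <= u, with variables
    # [b (p coefficients), u (n nonnegative residual bounds)].
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)],     #  X b - u <=  y
                     [-X, -np.eye(n)]])   # -X b - u <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]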
Among hundreds of techniques that are applicable to load forecasting, how did I find L1 regression?
The idea didn't come from nowhere. When I was working on my doctoral dissertation at FANGroup (Fuzzy And Neural Group), a few other students were working on another project sponsored by the U.S. Army Research Office. They were investigating features and applications of the l1 norm. Although I thought about applying the l1 norm to load forecasting, I didn't find a good use case at the time.

Well, better late than never. The skills I acquired 10 years ago came in handy for this paper.

Citation

Jian Luo, Tao Hong, and Shu-Cherng Fang, "Robust regression models for load forecasting," IEEE Transactions on Smart Grid, in press.


Robust Regression Models for Load Forecasting

Jian Luo, Tao Hong, and Shu-Cherng Fang

Abstract

Electric load forecasting has been extensively studied during the past century. While many models and their variants have been proposed and tested in the load forecasting literature, most of the existing case studies have been conducted using the data collected under normal operating conditions. A recent case study shows that four representative load forecasting models easily fail under data integrity attacks. To address this challenge, we propose three robust load forecasting models including two variants of the iteratively re-weighted least squares regression models and an L1 regression model. Numerical experiments indicate the dominating performance of the three proposed robust regression models, especially L1 regression, compared to other representative load forecasting models. 

Monday, October 8, 2018

BigDEAL Forecasting Competition 2018

[Update Oct 22, 2018] The registration is closed. 142 data scientists from 26 countries have formed 81 teams to join BFCom2018. See the news article from UNCC College of Engineering. An FAQ page is set up to address questions for the qualifying match.
=======================

This semester I'm teaching Energy Analytics for the fifth time. The course has earned its reputation on the UNC Charlotte campus, and even around the utility industry, for its toughness, high withdrawal rate, and challenging nature. Here are some comments from the students in 2015 and 2017. Nowadays, not many students even dare to register for the course. 

After the first midterm exam last week, I have five students left in the class. These five "survivors" (out of more than a dozen students at the beginning of the semester) have completed two assignments and one exam. I am impressed by their submissions every time. I must confess that this is by far the most academically strong class I've ever had for this course, even stronger than the group that won several award plaques in GEFCom2014.

Previously, I sent students of this course to competitions such as GEFCom2014 and the npower forecasting challenges, where they could solve some conventional energy forecasting problems while competing with others around the globe. 

This year, thanks to the outstanding performance of these students, I spent a lot of time trying to figure out a challenge for them. Finally, I decided to give them a new load forecasting problem to solve. 

I'll keep the problem secret for now, but I can tell that a practical solution to this problem can save power companies a lot of money. To those who are interested in writing academic papers, a winning solution to this problem should greatly increase the likelihood of having the manuscript accepted by the top venues for energy forecasting papers, such as International Journal of Forecasting (IJF) and IEEE Transactions on Smart Grid (TSG). 

The competition is by invitation only. The ones who are interested in joining this competition should first pass the qualifying match. I will use the first homework problem of Energy Analytics for the qualifying match. A contestant has to beat the last-ranked student of my class to receive the invitation to BFCom2018. If nobody beats any of my students, I'll just run the competition with the in-class students. 

For the qualifying match, I'll provide three years of hourly load and temperature, plus hourly temperature for the fourth year. The contestants should submit the ex post load forecast for the fourth year. The temperature data comes from 28 weather stations. To excel in the qualifying match, the contestants may want to read two of my IJF papers, on weather station selection and the recency effect.

Important Dates

Oct 8, 2018 - Registration open. 
Oct 21, 2018 - Registration close. 
Oct 22, 2018 - Qualifying match data release.
Nov 4, 2018 - Qualifying match submission due. 
Nov 5, 2018 - Leaderboard published; BFCom2018 invitation sent. 
Dec 3, 2018 - BFCom2018 winners announced. 

Note: There is no monetary prize for this competition. The leaderboard will be published on this blog. I will consider providing research assistantships to the top three contestants if they are interested in joining my lab as PhD students.

If you are interested, please register HERE. See you in the game!

Monday, July 9, 2018

From Club Convergence of Per Capita Industrial Pollutant Emissions to Industrial Transfer Effects: An Empirical Study Across 285 Cities in China

China has grown to the world's second largest economy by nominal GDP. Many factors contribute to such rapid growth, such as globalization and hard-working Chinese people. Nevertheless, we can't ignore the pollution resulting from industrialization. Dr. Chang Liu brought this research problem to me when she visited BigDEAL last year. We spent a year investigating the relationship between industrial transfer effects and per capita industrial pollutant emissions across 285 cities in China. We identified four convergence clubs for SO2 emissions and three convergence clubs for soot emissions. We also concluded that industrial transfer effects can lead to multiple steady-state equilibria. This presents some evidence to support region-specific environmental policies and execution strategies. 

This is the first paper I have sent to Energy Policy. The original version was submitted on Feb 5, 2018. Within five months, the paper was published after three revisions. The entire publication process was quite pleasant.

Citation
Chang Liu, Tao Hong, Huaifeng Liu, and Lili Wang, "From club convergence of per capita industrial pollutant emissions to industrial transfer effects: an empirical study across 285 cities in China," Energy Policy, vol.121, pp 300-313, October 2018. (ScienceDirect)

From Club Convergence of Per Capita Industrial Pollutant Emissions to Industrial Transfer Effects: An Empirical Study Across 285 Cities in China

Chang Liu, Tao Hong, Huaifeng Liu, and Lili Wang

Abstract

The process of industrialization has led to an increase in air pollutant emissions in China. At the regional level, industrial restructuring and industrial transfer from eastern China to western China have caused a significant difference in pollutant emissions among various cities. This paper analyzes per capita industrial pollutant emissions across 285 prefecture-level cities from 2003 to 2015, aiming to reveal how industrial transfer affects the formation of convergence clubs. Whether industrial pollutant emissions across heterogeneous cities converge to a unique steady-state equilibrium is first identified based on the concept of club convergence. Logit regression analysis is then applied to assess the effects of industrial transfer on the observed clubs. The log t-test highlights four convergence clubs for industrial SO2 emissions and three clubs for industrial soot emissions. The regression analysis results reveal that the effects of industrial transfer can lead to multiple steady-state equilibria, suggesting region-specific environmental policies and execution strategies. In addition, accelerating the development of clean energy technologies in emission-intense regions should be further emphasized.