Monday, March 20, 2017

GEFCom2014 Load Forecasting Data

The load forecasting track of GEFCom2014 was about probabilistic load forecasting. We asked the contestants to provide one-month-ahead hourly probabilistic forecasts on a rolling basis for 15 rounds. In the first round, we provided 69 months of hourly load data and 117 months of hourly temperature data. Incremental load and temperature data were provided in each subsequent round.

Where to download the data?

The complete data was published as the appendix of our GEFCom2014 paper. If you don't have access to ScienceDirect, you can download it from my Dropbox link HERE. Regardless of where you get the data, you should cite this paper to acknowledge the source:

  • Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol. 32, no. 3, pp. 896-913, July-September 2016.


What's in the package?

Unzip the file and you will see the folder "GEFCom2014 Data", which includes five zip files. The data for the probabilistic load forecasting track of GEFCom2014 is in the file "GEFCom2014-L_V2.zip". Unzip it and you will see the folder "load", which includes an "Instructions.txt" file and 15 subfolders. Each folder named "Task n" contains two files, Ln-train.csv and Ln-benchmark.csv. The train file, together with the train files released in previous rounds, can be used to generate forecasts. The benchmark file contains the forecast generated by the benchmark method.

How to use the data?

Perhaps the most straightforward way of using this dataset is to replicate the competition setup and compare results directly with the top entries. Because the data published through GEFCom2014 covers a fairly long period (seven years of matching load and temperature data in total), it can also be used to test methods and models for short-term load forecasting.
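For readers who want to script this, here is a minimal sketch of stacking the 15 rolling-origin training files into one history, assuming the folder layout described below ("load" folder with "Task n" subfolders); the column names inside the CSVs are not shown here and may need adjusting.

import os
import pandas as pd

frames = []
for n in range(1, 16):                      # Task 1 ... Task 15
    path = os.path.join("load", f"Task {n}", f"L{n}-train.csv")
    frames.append(pd.read_csv(path))

# One continuous history for model development
history = pd.concat(frames, ignore_index=True)
print(history.shape)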

GEFCom2014-E data

After GEFCom2014, I organized an in-class probabilistic load forecasting competition in Fall 2015 that was open to external participants. Its setup was very similar to that of GEFCom2014, so I denoted the data for this in-class load forecasting competition as GEFCom2014-E, where E stands for "extended". In total, the dataset covers 11 years of hourly temperature and 9 years of hourly load. A top team, Florian Ziel, was invited to contribute a paper to IJF (see HERE). Readers may replicate the same competition setup and compare results with Ziel's.

Caution

Note that the data I used for GEFCom2014-E was created using ISO New England data. If you want to validate a method using two independent sources, you should not use GEFCom2014-E together with ISO New England data.

Back to Datasets for Energy Forecasting.

Monday, March 6, 2017

Leaderboard for GEFCom2017 Qualifying Match!!!

The six rounds of the GEFCom2017 qualifying match ended last week. I'm sure the contestants are anxiously waiting for the leaderboard. Here is a brief report; I'll update this post as ISO New England releases its recent load data.

Out of 177 registered teams, 73 have submitted entries to the defined track, and 26 to the open track. After six rounds, 53 teams completed the defined track with at least 4 submissions, while 20 completed the open track. 

Reports and code are due March 10, 2017. Please send them to hong.bigdeal@gmail.com, following the same protocol as the forecast submissions, and use THIS GUIDE to prepare the report.

Jingrui Xie created two benchmarks:
  • Vanilla Benchmark, which has been used to calculate the scores of the teams in each round. See Q7 of THIS FAQ for more information.
  • Rain Benchmark, which will be used to select the teams that advance to the final match.
(As an organizer of GEFCom2017, Jingrui Xie is not eligible for the prize.)

Round 1 summary 
  • Defined Track: 15 teams beat the Vanilla Benchmark; 5 teams beat the Rain Benchmark.
  • Open Track: 6 teams beat the Vanilla Benchmark; 3 teams beat the Rain Benchmark.
Round 2 summary 
  • Defined Track: 14 teams beat the Vanilla Benchmark; 8 teams beat the Rain Benchmark.
  • Open Track: 4 teams beat the Vanilla Benchmark; 2 teams beat the Rain Benchmark.
Round 3 summary 
  • Defined Track: 15 teams beat the Vanilla Benchmark; 9 teams beat the Rain Benchmark.
  • Open Track: 5 teams beat the Vanilla Benchmark; 2 teams beat the Rain Benchmark.
The spreadsheet with detailed scores can be accessed HERE. The higher the score, the higher the rank.

Stay tuned :)

Tuesday, February 14, 2017

Call For Papers: Forecasting in Modern Power Systems | Journal of Modern Power Systems and Clean Energy

Journal of Modern Power Systems and Clean Energy

Special Section on Forecasting in Modern Power Systems 

Power systems have been evolving over the past century. The grid is getting more and more sophisticated due to modern technologies and business requirements, such as the implementation of smart grid technologies, the deployment of ultra-high voltage transmission systems, and the integration of ultra-high levels of renewable resources. All of these factors are challenging today's energy forecasting practice. This special section of the Journal of Modern Power Systems and Clean Energy is aimed at answering the following question: how can we better forecast supply, demand and prices to accommodate the changes in modern power systems?

The topics of interest include, but are not limited to:

  • Probabilistic energy forecasting
  • Forecasting in multiple energy systems
  • High dimensional wind and solar power forecasting
  • Load forecasting with temporal and/or geographic hierarchies
  • Combination methods for energy forecasting

Submission Guidelines

Submit manuscripts via http://www.editorialmanager.com/mpce, or link via
http://www.springer.com/40565
http://www.mpce.info

The article templates can be downloaded from http://www.mpce.info.

Important Dates

Paper Submission Deadline: June 30, 2017
Acceptance Notification: December 31, 2017
Date of Publication: March 2018

Guest Editorial Board

Guest Editors-in-Chief
Wei-Jen Lee, University of Texas at Arlington, USA
E-mail: wlee@uta.edu
Tao Hong, University of North Carolina at Charlotte, USA
E-mail: hongtao01@gmail.com

Guest Editors
Jing Huang, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
Duehee Lee, Arizona State University, USA
Franklin Quilumba, National Polytechnic School, Ecuador
Jingrui Xie, SAS Institute, USA
Ning Zhang, Tsinghua University, China
Florian Ziel, University of Duisburg-Essen, Germany

Editor-In-Chief and Deputy

Professor Yusheng Xue (State Grid Electric Power Research Institute, Nanjing, China)
Professor Kit Po Wong (The University of Western Australia)


CONTACT INFORMATION
For more information, please do not hesitate to contact
Ms. Ying ZHENG
Tel: 86 25 8109 3060 Fax: 86 25 8109 3040
E-mail: zhengying1@sgepri.sgcc.com.cn; mpce@alljournals.cn

About Journal of Modern Power Systems and Clean Energy (MPCE)

MPCE, sponsored by the State Grid Electric Power Research Institute (SGEPRI), is a golden open access, peer-reviewed, bimonthly journal published in English. It has been published by SGEPRI Press and Springer-Verlag GmbH Berlin Heidelberg since June 2013, and is indexed in SCIE, Scopus, Google Scholar, CSAD, DOAJ, CSA, OCLC, SCImago, ProQuest, etc. It is the first international power engineering journal originating in mainland China. MPCE publishes original papers, short letters and review articles in the field of modern power systems, with a focus on smart grid technology and renewable energy integration. MPCE is dedicated to presenting top-level academic achievements in the fields of modern power systems and clean energy by international researchers and engineers, and endeavors to serve as a bridge between Chinese and global researchers in the power industry.

Monday, February 6, 2017

Mark Your 2017 Calendar: Tao's Recommended Conferences for Energy Forecasters

I didn't realize this post was overdue until I hit the road for my first trip of 2017. Here is my 2017 list of recommended conferences for energy forecasters:

1. International Symposium on Energy Analytics (ISEA2017, Cairns, Australia, June 22-23, 2017)

Even if you miss all the other events on this list, you can still find the year rewarding by attending ISEA2017, the first-ever gathering of worldwide energy forecasters. Our generous sponsors, the International Institute of Forecasters (Super Sponsor), Tangent Works (Gigawatt Sponsor) and the State Grid Electric Power Research Institute (Kilowatt Sponsor), have helped bring the registration fees down. There are many reasons to join the party. You will meet the winners of GEFCom2017. You will hear presentations from world-class energy forecasting researchers and practitioners. You will network with energy forecasting colleagues from more than a dozen countries. And of course, you will enjoy two World Heritage sites side by side.

2. Tao's courses

The next two SAS courses on load forecasting have been scheduled in Charlotte, March 27-29.


In addition, I'm going to teach these three courses through EUCI:


Watch the training page of Hong Analytics for the latest updates on all training courses.

3. Conferences from other professional organizations

I will attend the following three, as always:


I look forward to seeing you at these fantastic events!

Sunday, January 1, 2017

Energy Forecasting @2016

Happy New Year! As is the tradition of this blog, it's time to look at the statistics of Energy Forecasting in 2016.

Where are the readers?

They are from 147 countries and SARs.


They are from 2660 cities.


Comparing with Energy Forecasting @2015.


All-time top 10 most viewed posts (from 4478 views to 2731 views):
Top 10 most-viewed classic posts (from 3914 views to 1525 views):
Thank you very much for your support! Happy Forecasting in 2017!

Wednesday, December 21, 2016

2016 Greetings from IEEE Working Group on Energy Forecasting

Another Christmas is coming in a few days. It's time to look back at 2016 and see what the IEEE Working Group on Energy Forecasting has done:

Next year will be even more exciting:
  • We will hold the International Symposium on Energy Analytics (ISEA2017), the first-ever gathering of worldwide energy forecasters, in Cairns, Australia, the only place on earth with two World Heritage sites side by side: the Great Barrier Reef and the Daintree Rainforest.
  • We will conclude GEFCom2017 at ISEA2017 with the winner presentations and prizes. 
  • A PESGM2017 panel session on multiple energy systems is being organized by Ning Zhang and myself. 
  • I will be editing a special issue of the Power & Energy Magazine on big data analytics. The papers are by invitation only. If you have a good idea and would like to present it to thousands of PES members through this special issue, please let me know. 
  • We didn't have the bandwidth for JREF this year. We will try to conduct the JREF survey next year. 

Happy Holidays and Happy Forecasting!

Tuesday, December 20, 2016

Winning Methods from npower Forecasting Challenge 2016

RWE npower released the final leaderboard for its 2016 forecasting challenge. I took a screenshot of the top teams. Interestingly, the international teams (colored in red) took all of the top 6 places. Unfortunately, some of those top-notch UK load forecasters did not join the competition. I'm hoping they will show up at the game to defend the country's legacy :)

RWE npower Forecasting Challenge 2016 Final Leaderboard (top 12 places)


In each of the previous two npower competitions, I asked my BigDEAL students to join as a team. In both competitions, they ranked at the top, beating all UK teams (see the blog posts HERE and HERE). We also published our winning methods for electricity demand forecasting and gas demand forecasting.

This year, instead of forming a BigDEAL team, I sent the students in my Energy Analytics class to the competition. The outcome is again very pleasing: the UNCC students took two of the top three places, and four of the top six. What makes me, a professor, very happy is that the research findings have been fully integrated into the teaching materials and smoothly transferred to the students in class. (See my research-consulting-teaching circle HERE.)

OK, enough bragging...

I asked the top teams to share their methodologies with the audience of my blog, as we did for BFCom2016s. Here they are:

1st Place: Geert Scholma

My forecast this time consisted of the following elements:
- linear regression models separated per 30-minute period, with 78 variables each
- fourth-degree yearly shapes per weekday as a base shape
- an intercept, 6 weekdays, and 22 holiday, bridge-day and school-holiday variables
- daylight savings and a linear time trend, each separated for weekdays and weekends
- a shift at September 2014 and a night variable
- conversion of temperature to wind chill
- third-degree wind-chill polynomials for cooling and heating with different impacts
- three moving averages with different periods for temperature effects occurring at different timescales
- different radiation variables depending on time of day, with up to 6 hourly and moving-average radiation variables interacted with a second-degree polynomial of the day of year for peak hours
- 1 hourly and 1 moving-average rainfall variable
- manual exclusion of outliers and filling of any weather gaps
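Two of these ingredients translate directly into code. Below is a rough Python sketch (the winner's own tooling is not stated) of converting temperature to a wind-chill variable and building temperature moving averages over different window lengths. The wind-chill formula is the common North American one, which may not be the exact variant used here, and the window choices are illustrative.

import pandas as pd

def wind_chill(temp_c: pd.Series, wind_kmh: pd.Series) -> pd.Series:
    # Standard North American wind-chill index (temp in deg C, wind in km/h)
    v16 = wind_kmh.pow(0.16)
    return 13.12 + 0.6215 * temp_c - 11.37 * v16 + 0.3965 * temp_c * v16

def temperature_moving_averages(temp: pd.Series) -> pd.DataFrame:
    # Moving averages over a few window lengths (hours), capturing
    # temperature effects at different timescales
    return pd.DataFrame({
        f"temp_ma_{w}h": temp.rolling(window=w, min_periods=1).mean()
        for w in (24, 72, 168)              # illustrative window choices
    })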

2nd Place: Devan Patel

Model: A multiple linear regression approach was used during the npower forecasting competition. The base model was Tao's Vanilla Benchmark model. The major change was to the dependent variable, energy consumption: a Box-Cox transformation was applied, chosen based on the training data distribution. Polynomials of humidity and wind speed were also added to the base model. During testing, these changes improved the accuracy of the vanilla benchmark by around 1.5% in terms of MAPE.
Data: Two different approaches were used to train the model. During winter (Rounds 1 and 3), the model was trained on the whole year's data. During summer (Round 2), only the summer months' data was used. Scatter plots across different months helped in understanding the distribution of energy consumption.
Exploratory data analysis: Missing hours were filled with the previous day's values. Scatter plots of temperature, humidity and wind speed were used to identify their relationships with energy consumption.
Error metric: MAPE was used as the base error metric to evaluate forecast accuracy during model validation.
Software: RStudio was the main software for model building, validation and forecasting. MS Excel was used to prepare the data files for use in RStudio.
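The Box-Cox step is easy to reproduce. Here is a minimal Python sketch using SciPy (the winner worked in R); function and variable names are illustrative, not from the winning code.

import numpy as np
from scipy import stats, special

def fit_transform_load(load: np.ndarray):
    # Box-Cox transform the (strictly positive) load series;
    # returns the transformed series and the fitted lambda
    transformed, lam = stats.boxcox(load)
    return transformed, lam

def invert_forecast(forecast_transformed: np.ndarray, lam: float) -> np.ndarray:
    # Map model output back to the original load scale
    return special.inv_boxcox(forecast_transformed, lam)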

3rd Place: Masoud Sobhani

For the first round, the model was Tao's Vanilla model with recency effects (adding extra lagged temperatures to the original model). The model uses MLR, and the predictors are calendar variables, temperature, lagged temperatures, and cross effects between them. The model was implemented in SAS. For the second round, I tried to improve the Vanilla model by adding predictors beyond temperature. Humidity was added using the method introduced in Xie and Hong 2016, giving an improved model with temperature and relative humidity as weather-related predictors. Since we didn't know the location of the utility, I tuned the new model to select the specification with the best results. For the third round, the previous round's model was improved by adding some lagged values of relative humidity. In each round, model selection was done by cross validation.
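For readers unfamiliar with the vanilla model, here is a rough Python sketch of its published form (calendar variables, a third-order temperature polynomial, and their cross effects) with a simple recency effect added as lagged temperatures. The winner implemented this in SAS; the lag choices below are illustrative.

import pandas as pd
import statsmodels.formula.api as smf

def fit_vanilla_recency(df: pd.DataFrame, lags=(1, 2, 3)):
    # df needs columns: load, temp, trend, month, weekday, hour
    d = df.copy()
    lag_terms = []
    for l in lags:                        # recency effect: lagged temperatures
        d[f"temp_lag{l}"] = d["temp"].shift(l)
        lag_terms.append(f"temp_lag{l}")
    formula = (
        "load ~ trend + C(weekday):C(hour)"
        " + C(month)*(temp + I(temp**2) + I(temp**3))"
        " + C(hour)*(temp + I(temp**2) + I(temp**3))"
        + "".join(f" + {t}" for t in lag_terms)
    )
    return smf.ols(formula, data=d.dropna()).fit()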

Monday, November 28, 2016

7 Reasons to Send Your Best Papers to IJF

Last week, I was surfing the Web of Science to gather some papers to read during the holidays. Yes, some poor professors like myself work 24x7, including holidays. Suddenly I found that FIVE of my papers are listed by the Essential Science Indicators (ESI) as Highly Cited Papers. (Check them out HERE!) What a good surprise for Thanksgiving :)

What's even more surprising is that all five of these papers were published by the International Journal of Forecasting! As an editorial board member of two very prestigious and highly ranked journals, IEEE Transactions on Smart Grid (TSG) and the International Journal of Forecasting (IJF), I send my best papers to these two journals every year, with an even split. So far, I've had six papers in TSG (not counting two editorials) and six in IJF. How come only my IJF papers were recognized by ESI?

Curiosity ate most of my Thanksgiving. I did some research to answer this question, which eventually led to this blog post. In short,
you should send your best energy forecasting papers to IJF first!
Here is why:
  1. No page limit. IJF does not charge authors for extra pages. You can take as many pages as you need to elaborate on your idea. The longest IJF paper I've read is Rafal Weron's 52-page review paper on price forecasting. My IJF review on probabilistic load forecasting is 25 pages long. Both reviews are now ESI Highly Cited Papers. 
  2. Short review time. A manuscript first reaches the EIC, then an editor, and then an Associate Editor. It may be rejected by any of these three people. In other words, if it is a clear rejection, the decision comes back rather quickly. If the manuscript is assigned to reviewers, the first decision typically comes back within three to four months. 
  3. Very professional comments. I have seen many IJF review reports so far, as an author, reviewer and editor. Most of them are very professional, and ultimately these review comments help the authors improve their work. I haven't seen any nonsense reviewers in the IJF peer-review system, which is quite remarkable! I guess the editors do their job well by filtering out the nonsense reviews before passing the comments to the authors. 
  4. High-quality copy-editing service free of charge. Once the manuscript is accepted, it will be forwarded to a professional copy editor to polish the English for free, so you don't need to spend too much time on wordsmithing. You don't need to worry about formatting either, because another copy editor handles that before the publisher sends you the proof. 
  5. Biennial awards. Every other year, IJF awards a prize for the best paper published in a two-year period. The prize is $1000 plus an engraved plaque. Details of the most recent one can be found HERE. Making some money and getting recognized for your paper, isn't that nice? 
  6. Publicity. Six years ago when I was pursuing my PhD, I was frustrated about the many useless papers in the literature. I brought my frustration to David Dickey. He made a comment that shocked me for a while. Instead of encouraging me to publish, he said that he had lost interest in publishing papers, because "the excellent papers are often buried by so many bad ones". Having been a professor for about three years, I have to agree with him. I believe in the era of "publish or perish", we have to "publish and publicize" to make our papers highly cited. Publishing your energy forecasting papers with IJF means that you get the opportunity of leveraging various channels, such as Hyndsight, Energy Forecasting, and the social media accounts of Elsevier and those renowned IJF editors. 
  7. "Business and economics" category in ESI. This is probably the most important distinction between IEEE Transactions and IJF. Many IEEE Transactions papers (including the ones in TSG) are grouped into engineering, while IJF papers are in the category of business and economics. The business and economics papers get much fewer citations on average than the engineering ones, which makes the ESI thresholds of business and economics lower than those of engineering. For instance, my TSG2014 paper is not an ESI paper, but it would have been if it were published by IJF. 
Unfortunately, IJF's acceptance rate is very low. To increase the chance of having your paper accepted, you should understand how reviewers evaluate a manuscript.

Look forward to your next submission!

Saturday, November 19, 2016

FAQ for GEFCom2017 Qualifying Match

I have received many questions from GEFCom2017 contestants. Many thanks to those who raised the questions. This is a list of frequently asked questions. I will update it periodically if I get additional ones.

Q1. I can't open the link to the competition data. How to get access to the data?

A1. If you cannot access the data via the provided link directly, you may need a VPN service. There are many free VPN services available. Use Google to find one, or post the question on the LinkedIn forum to see if your peer contestants can help.

Q2. Can the competition organizer re-post the data somewhere else?

A2. No. We are not going to re-post the data during the competition, because ISO New England updates the data periodically.

Q3. Are we forecasting the same forecasting period in both Round 2 and Round 3? And another same forecasting period in both Round 4 and Round 5?

A3. For GEFCom2017-D, ISO New England updates the data every month, typically in the first half of the month. In Round 2, you will be using data as of Nov 30, 2016. In Round 3, the December 2016 data should be available as well. For GEFCom2017-O, the data is updated in real time. We would like to see whether half a month of additional information yields any improvement. This setup also gives the contestants some flexibility: if a team is busy with other commitments during the competition, it may submit the same forecast for both Round 2 and Round 3.

Q4. Can the same team join both tracks?

A4. Yes. A team may even submit the same forecasts to both tracks. Nevertheless, we are expecting higher accuracy in the forecasts of GEFCom2017-O than those of GEFCom2017-D.

Q5. Can one person join two or more teams?

A5. No.

Q6. I'm with a vendor. I don't know if my company wants to put its name as the team name. Can I join the competition personally? If I win, can I add my company as my affiliation and/or change the team name to my company's name?

A6. You can join the competition with or without linking your team to your company. However, you need to make that decision before registration. Once you are in the game, we cannot change your affiliation or team name.

Q7. Which benchmark method will be used?

A7. The benchmark method forecasts each zone individually. We use the vanilla model as the underlying model and simulate the temperature by shifting 11 years of temperature data (2005-2015) up to 4 days forward and backward, yielding 99 scenarios from which 9 quantiles are extracted. See THIS PAPER for more details.
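Here is a rough Python sketch of that scenario step: 11 years times 9 shifts (-4 to +4 days) gives 99 temperature scenarios, each run through the fitted model. `model.predict` stands in for whatever vanilla-model implementation you use, and the edge gaps created by shifting are crudely filled for the sake of the sketch.

import numpy as np
import pandas as pd

def probabilistic_forecast(model, temps_by_year: dict, calendar: pd.DataFrame):
    # temps_by_year: year -> hourly temperature Series aligned to the
    # forecast period; calendar: DataFrame of calendar predictors
    sims = []
    for temps in temps_by_year.values():          # 11 historical years
        for shift_days in range(-4, 5):           # 9 shifts: -4 ... +4 days
            # crude edge handling; pad with a longer series in practice
            scenario = temps.shift(shift_days * 24).ffill().bfill()
            sims.append(model.predict(calendar.assign(temp=scenario.values)))
    sims = np.column_stack(sims)                  # hours x 99 scenarios
    q = np.percentile(sims, np.arange(10, 100, 10), axis=1)
    return pd.DataFrame(q.T, columns=[f"Q{p}" for p in range(10, 100, 10)])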

Q8. In GEFCom2017-D, are we required to process daylight savings time in a specific way?

A8. No. You can treat Daylight Saving Time any way you like. THIS POST elaborates my approach, which you don't have to follow.

Q9. In GEFCom2017-D, are we allowed to assume knowledge of federal holidays before 2011? Can we give special treatment to the days before and after the holidays?

A9. Yes, and yes. The opm.gov website only publishes federal holidays from 2011 onward, but you can infer the federal holidays before 2011. You can model the days before and after holidays as you like. I had a holiday effect section in my dissertation, which you don't have to follow. Keep in mind that you should not assume any knowledge of local events or local holidays, such as NBA final games or Saint Patrick's Day.
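As an aside, one way to infer pre-2011 federal holidays without an external data source is to generate them from the statutory rules; for example, pandas ships such a calendar. The date range below is illustrative.

from pandas.tseries.holiday import USFederalHolidayCalendar

# Generate federal holidays from the rules rather than a published list
cal = USFederalHolidayCalendar()
holidays = cal.holidays(start="2003-01-01", end="2010-12-31")
print(holidays[:5])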

Q10. The sum of the 8 zones is slightly different from the total demand published by ISO New England. Which number will you use to evaluate the total demand?

A10. Column D of the "ISO NE CA" worksheet.

Q11. For GEFCom2017-D, are you going to provide weather forecasts that every team should use?

A11. No. It is an ex ante hierarchical probabilistic load forecasting problem. We do not provide weather forecasts. The contestants in the GEFCom2017-D track should not use any weather forecasts from other data sources. Nevertheless, the contestants may generate their own weather forecast if they want to. The weather forecasting methodology should be in the final report if they take this route.

Q12. No wind, solar or price forecasting in GEFCom2017? It's a pity!

A12. GEFCom2017 is a load forecasting competition. Unfortunately, we were not able to identify good datasets for wind, solar or price forecasting tracks that would match the challenge level of this load forecasting problem. Nevertheless, in GEFCom2017-O, you may leverage other data sources to predict wind, solar and prices, which may help your load forecasts.

Q13. I'm a professor. Any advice if I want to leverage this competition in class?

A13. It would be nice to leverage the competition in your course; I did so two years ago with GEFCom2014. There will again be an institute prize in GEFCom2017. To aim for it, I recommend signing up as many teams as possible to maximize the likelihood of winning. What I did two years ago was have each student form a single-person team and tie the competition ranking to their grades. In any case, if you are going to join the competition, have your students look into the data ASAP: the first-round submission is due on 12/15/2016.

Q14. Any reference materials we should read before we dive into the competition problem?

A14. For probabilistic load forecasting, you should at least read the recent IJF review paper on probabilistic load forecasting and the relevant references. You can find my recent papers on probabilistic load forecasting HERE. The papers from the winning entries of GEFCom2014 are HERE. For hierarchical forecasting, check out Hyndman and Athanasopoulos' BOOK and their PAPER.

Saturday, October 29, 2016

Instructions for GEFCom2017 Qualifying Match

The GEFCom2017 Qualifying Match is meant to attract and educate a large number of contestants with diverse backgrounds, and to prepare them for the final match. It includes two tracks: a defined-data track (GEFCom2017-D) and an open-data track (GEFCom2017-O). In both tracks, the contestants are asked to forecast the same thing: the zonal and total loads of ISO New England. The only difference between the two tracks is the input data.

Data 

The input data a participating team may use in GEFCom2017-D should not go beyond the following:
  1. Columns A, B, D, M and N in the worksheets of the "YYYY SMD Hourly Data" files, where YYYY represents the year. These data files can be downloaded from the ISO New England website via the zonal information page of the energy, load and demand reports. Contestants outside the United States may need a VPN to access the data. 
  2. US federal holidays as published by the US Office of Personnel Management.
The contestants are assumed to have general knowledge of Daylight Saving Time and to be able to infer the day of the week and the month of the year from a date.

There is no limitation for the input data in GEFCom2017-O.

Forecasts

The forecasts should be in the form of 9 quantiles, following the exact format provided in the template file. The quantiles are the 10th, 20th, ..., 90th percentiles. Forecasts should be generated for 10 zones: the 8 ISO New England zones, Massachusetts (the sum of the three zones under Massachusetts), and the total (the sum of the first 8 zones).

Timeline

GEFCom2017 Qualifying Match includes six rounds.

Round 1 due date: Dec 15, 2016; forecast period: Jan 1-31, 2017.
Round 2 due date: Dec 31, 2016; forecast period: Feb 1-28, 2017.
Round 3 due date: Jan 15, 2017; forecast period: Feb 1-28, 2017.
Round 4 due date: Jan 31, 2017; forecast period: Mar 1-31, 2017.
Round 5 due date: Feb 14, 2017; forecast period: Mar 1-31, 2017.
Round 6 due date: Feb 28, 2017; forecast period: Apr 1-30, 2017.
Report and code due date: Mar 10, 2017.

The deadline for each round is 11:59pm EST of the corresponding due date.

Submission

The submissions will be through email. Within two weeks of registration, the team leader should receive a confirmation email with the track name and team name in the email subject line. If the team registered both tracks, the team leader should receive two separate emails, one for each track.

The team leader should submit the forecast on behalf of the team by replying to the confirmation email.

The submission must be received before the deadline (based on the receipt time of the email system) to be counted in the leaderboard.

Template

The submissions should strictly follow the requirements below:
  1. The file format should be *.xls;
  2. The file name should be "TrackInitialRoundNumber-TeamName". For instance, team "An Awesome Win" in round 3 of the defined-data track should name the file "D3-An Awesome Win".
  3. The file should include 10 worksheets, named CT, ME, NEMASSBOST, NH, RI, SEMASS, VT, WCMASS, MASS and TOTAL. Please arrange the worksheets in the order listed above. 
  4. In each worksheet, the first two columns should be date and hour, respectively, in chronological order.
  5. The 3rd to the 11th columns should be Q10, Q20, ..., Q90. 
The template is HERE. The contestants should replace the date column to reflect the forecast period in each round.
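If you build the workbook programmatically, here is a minimal Python sketch of the required layout; `forecasts` is assumed to map zone names to DataFrames already in the required shape, and writing legacy .xls with pandas requires an engine such as xlwt.

import pandas as pd

ZONES = ["CT", "ME", "NEMASSBOST", "NH", "RI", "SEMASS",
         "VT", "WCMASS", "MASS", "TOTAL"]

def write_submission(forecasts: dict, team: str, round_no: int):
    fname = f"D{round_no}-{team}.xls"           # e.g. "D3-An Awesome Win.xls"
    with pd.ExcelWriter(fname) as writer:
        for zone in ZONES:                      # worksheets in the stated order
            cols = ["date", "hour"] + [f"Q{q}" for q in range(10, 100, 10)]
            forecasts[zone][cols].to_excel(writer, sheet_name=zone, index=False)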

Evaluation

In round i, for the forecast submitted by team j for zone k, the average pinball loss over the 9 quantiles will be used as the quantile score of the probabilistic forecast, denoted Sijk. A benchmark method will be used to forecast each of the 10 zones; we denote the benchmark's quantile score in round i for zone k as Bik.

In round i, we will calculate the relative improvement (1 - Sijk/Bik) for each zone. The average improvement over all zones is team j's rating for the round, denoted Rij. The rank of team j in round i is RANKij.

The weighted average of the rankings from all 6 rounds will be used to rank the teams on the qualifying match leaderboard. The first 5 rounds are weighted equally, while the weight of the 6th round is doubled.

A team completing four or more rounds is eligible for the prizes. The ratings for missing rounds will be imputed before calculating the weighted average of the ratings.
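For teams that want to track their own standing, here is a minimal Python sketch of the scoring just described; the array shapes and dictionary layouts are illustrative.

import numpy as np

def pinball_score(actual: np.ndarray, quantile_forecasts: np.ndarray) -> float:
    # actual: (hours,); quantile_forecasts: (hours, 9) for Q10 ... Q90
    taus = np.arange(0.1, 1.0, 0.1)
    diff = actual[:, None] - quantile_forecasts
    # pinball loss: tau * diff when under-forecasting, (1 - tau) * |diff| otherwise
    loss = np.where(diff >= 0, taus * diff, (taus - 1.0) * diff)
    return loss.mean()                            # average over quantiles and hours

def round_rating(team_scores: dict, benchmark_scores: dict) -> float:
    # Average relative improvement (1 - S/B) over the benchmark across all zones
    return float(np.mean([1.0 - team_scores[z] / benchmark_scores[z]
                          for z in benchmark_scores]))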

Prizes

Institute Prize (up to 3 universities): $1000
1st place in each track: $2000
2nd place in each track: $1000
3rd place in each track: $500
1st place in each round of each track: $200

For more information about GEFCom2017, please visit www.gefcom.org.