A while ago, I wrote a post about the Belkin Energy Disaggregation Competition, which concluded a couple of weeks ago. The competition drew entries from 165 participating teams, each of which made up to 169 submissions. The top 3 participants shared prizes from a combined pot of $24,000, while a separate data visualisation competition carried a prize of $1,000.
It seemed like most entrants were regular Kagglers, with little participation from NIALM companies or academics within the NIALM field. Although I understand that many companies are likely unwilling to participate for fear of divulging trade secrets, I wonder whether greater participation from the existing NIALM academic community could have been achieved by hosting the competition at a relevant conference or workshop. I would love to see the winners of the competition invited to give a talk about their approaches!
The data used in the competition consisted of a public (training) data set and a private (test) data set, collected from 4 households. The training data included both household aggregate data and individual appliance sub-metered data. However, only the aggregate test data was released, while the sub-metered data was kept private for the evaluation of each submission. As a result, although a cross-validation evaluation technique was used, participants were crucially not required to generalise to new households, since sub-metered training data was available from each test household.
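To make that distinction concrete, below is a minimal Python sketch contrasting the two evaluation regimes. The household names, data layout, and function names are all invented for illustration; this is not the competition's actual evaluation code.

```python
# Hypothetical sketch contrasting two evaluation regimes (all names invented).
households = ["H1", "H2", "H3", "H4"]

# Within-household split, as used in the competition: training and test
# windows come from the *same* household, so sub-metered training data
# exists for every household that appears in the test set.
def within_household_folds(data):
    for h in households:
        train = data[h]["early_weeks"]  # aggregate + sub-metered
        test = data[h]["later_weeks"]   # aggregate only at test time
        yield train, test

# Leave-one-household-out: the harder regime the competition avoided,
# where the test household contributes no training data at all.
def leave_one_household_out_folds(data):
    for h in households:
        train = {k: v for k, v in data.items() if k != h}
        test = data[h]
        yield train, test
```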
With each submission, the public leaderboard was updated to show each user's best performance over half of the private test data, while their undisclosed performance over the other half was used to calculate the final standings. Interestingly, the winner of the competition according to the final standings was ranked only 6th on the public leaderboard. This suggests that many participants may have been overfitting their algorithms to the half of the test data for which performance was disclosed, while the competition winner had not optimised their approach in this way.
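As a toy illustration of how such a public/private leaderboard split works (all names and numbers below are invented assumptions, not Belkin's actual setup):

```python
import random

# Toy public/private leaderboard split: every submission is scored on the
# whole test set, but only the score on the "public" half is revealed
# while the competition is running.
test_examples = list(range(1000))
random.seed(0)
random.shuffle(test_examples)
public_half, private_half = test_examples[:500], test_examples[500:]

def leaderboard_scores(submission, score_fn):
    public_score = score_fn(submission, public_half)    # shown on the leaderboard
    private_score = score_fn(submission, private_half)  # hidden until the end
    return public_score, private_score

# Repeatedly tuning a model to maximise public_score risks overfitting to
# the public half, because the final standings use private_score alone.
```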
An interesting forum thread seems to show that most successful participants used an approach based only on low-frequency data, despite the fact that high-frequency data was also provided. This seems to contradict most academic research, which generally shows that high-frequency approaches outperform low-frequency methods. One reason could be that, although high-frequency approaches perform well in laboratory test environments, their features do not generalise well over time, so algorithm training quickly becomes outdated. Alternatively, processing the high-frequency features may simply have been too time consuming, meaning that, given the competition deadline, better performance could be achieved by concentrating on the low-frequency data.
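For readers unfamiliar with low-frequency methods, here is a minimal sketch of one common technique: detecting step changes ("edges") in the aggregate real-power signal and matching them to known appliance power draws. The function names, threshold, and appliance signatures are illustrative assumptions, not the method any particular participant used.

```python
import numpy as np

def detect_edges(power, threshold=50.0):
    """Return (index, delta) pairs where power steps by more than threshold watts."""
    deltas = np.diff(power)
    idx = np.where(np.abs(deltas) > threshold)[0]
    return list(zip(idx, deltas[idx]))

def match_appliance(delta, signatures):
    """Assign an edge to the appliance whose rated power is closest to |delta|."""
    return min(signatures, key=lambda name: abs(signatures[name] - abs(delta)))

# Example: a 2000 W kettle switching on and off in a toy 1 Hz aggregate trace.
aggregate = np.array([100.0, 102.0, 99.0, 2101.0, 2098.0, 101.0])
signatures = {"kettle": 2000.0, "fridge": 120.0}
for t, delta in detect_edges(aggregate):
    print(t, match_appliance(delta, signatures), delta)
```

Real low-frequency approaches are of course far more sophisticated, handling multi-state appliances and overlapping events, but the sketch shows why cheap features from low-rate power data alone can take you a long way.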
Overall, I think the competition was very successful in generating interest in energy disaggregation from a new community, and I hope that any follow-up competitions adopt a similar format. Furthermore, I think that hosting a prize-giving and presentation forum at a relevant conference or workshop would inspire greater participation from academics already working in the field of NIALM.