Friday, 19 August 2016

Announcing EPRI EU NILM 2016

We’re pleased to announce that the European Workshop on Non-intrusive Load Monitoring will be held on 17-18 October 2016 in London, UK. This year’s workshop is a collaborative effort between the organisers of EU NILM 2014 and EU NILM 2015 and the Electric Power Research Institute (EPRI). The aim of the European NILM workshop series is to bring together the European researchers working on energy disaggregation in both industry and academia. EPRI’s aim is to facilitate a collaborative dialogue between industry stakeholders (manufacturers, utilities and researchers) to create opportunities for technology maturation and market adoption of NILM devices in the European market. See www.nilm.eu/nilm-workshop-2016 for full details.

Important dates


  • Registration deadline: 17th September 2016
  • Presentation abstract submission: 17th September 2016
  • Workshop dates: 17-18 October 2016

Call for presentations


We invite attendees to submit presentation abstracts via this Google Form by 17th September 2016. We build a balanced agenda from a combination of invited speakers, submitted presentations, lightning talks and a poster/demo session. We strongly encourage new and relevant submissions in the field and also welcome submissions from companies with challenges, results or data which they’d like to share with the community. Possible topics include but are not limited to algorithms, evaluation of NILM algorithms, datasets and applications. Since the workshop will not feature published proceedings, a previous or future appearance at other venues will not be an issue.

Call for sponsors


We have a number of sponsorship options available for the workshop. Options include sponsoring a lunch or the evening reception, exposure on the website and slides, as well as space for a small information stall at the event. Please contact us via the form at the bottom of www.nilm.eu/nilm-workshop-2016 for further information.


We look forward to welcoming you in London!

Friday, 29 July 2016

PNNL to use data-driven protocol for NILM vendor evaluation

PNNL recently held its fourth NILM Protocol Development Advisory Group conference call, in which it was decided that a data-driven protocol would be used to evaluate the accuracy of NILM products. To give some background to this decision, the choice was between the following two options:

Data-driven - use (potentially existing) data collected from real homes to evaluate the accuracy of NILM products

Lab test - collect new data from an artificial lab home, in which the schedule of appliances is programmed rather than operated by humans

In my opinion, this is definitely the right decision given the diversity of loads and schedules of use in real homes. Although it is theoretically possible to simulate this diversity in an artificial lab home, in practice I would still be concerned that a reality gap might exist between data collected from lab homes and from real homes. However, monitoring real homes is more difficult than monitoring lab homes given the inherent intrusion into people's homes, and a careful approach to data collection will clearly be required to ensure the integrity and usefulness of the resulting data set.

A summary of the meeting is available via the advisory group's Conduit community.

Monday, 4 July 2016

NILM Wiki

The much discussed NILM wiki has finally emerged thanks to a great effort from Jack Kelly and the folks at Green Running. Among other topics, the wiki features pages covering NILM data sets and companies.

If you have published your own data set or if you work for a NILM company, please check that it's listed on the wiki. Also, if you have any other information you think would enrich the wiki, please feel free to add new content!

Thursday, 9 June 2016

Please help design a NILM competition!

Cross posted from Jack Kelly's blog:

Has disaggregation accuracy improved since the 1980s? Which algorithms are most accurate for a given use-case? Which (if any) use-cases are well served by NILM already?

It's pretty much impossible to answer any of these questions with confidence (unless you only consider the tiny number of algorithms for which you have access to executable code). We can't directly compare published results across papers because, when testing the disaggregation accuracy of NILM algorithms, each paper uses different datasets, different metrics, different pre-processing, etc.

This means that we can't measure progress over time. Nor can we decide which NILM algorithms are most promising and which might be dead-ends.
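To make the metric problem concrete, here's a minimal sketch (the algorithm names and numbers are invented for illustration) showing how two common metric families, per-sample error and total-energy error, can rank the same pair of algorithms in opposite orders:

```python
# Toy ground truth and two hypothetical algorithm outputs for one
# appliance's power trace (watts, one value per minute).
truth = [0, 0, 100, 100, 100, 0]
est_a = [0, 0, 80, 80, 80, 0]     # right timing, underestimates power
est_b = [100, 0, 0, 100, 100, 0]  # right total energy, wrong timing

def mean_absolute_error(est, truth):
    """Average per-sample error in watts."""
    return sum(abs(e - t) for e, t in zip(est, truth)) / len(truth)

def total_energy_error(est, truth):
    """Relative error in total energy assigned to the appliance."""
    return abs(sum(est) - sum(truth)) / sum(truth)

print(mean_absolute_error(est_a, truth))  # 10.0  -> A looks better
print(mean_absolute_error(est_b, truth))  # ~33.3
print(total_energy_error(est_a, truth))   # 0.2
print(total_energy_error(est_b, truth))   # 0.0   -> B looks better
```

Two papers each reporting only one of these numbers, on different data, cannot be meaningfully compared; that is exactly the gap a shared competition benchmark would close.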

These are bad problems. Let's work towards fixing them.

Some other machine learning communities have had great success running yearly competitions. For example, the ImageNet "Large Scale Visual Recognition Challenge" has been running yearly since 2010. Some regard this competition as having played a crucial role in the recent dramatic increase in the accuracy of image classification algorithms.

The idea of running a NILM competition has been rumbling around for several years. But designing and implementing a NILM competition is hard. The community uses sample rates ranging from monthly to MHz. No single metric is informative for all use-cases. Collecting ground truth data (the power demand of individual appliances) is expensive and time-consuming.

Maybe we can pull this off. The first step is to decide on a design which will work for everyone.

To give us something concrete to debate, we'll outline one way this could work. This is not meant to be definitive! Think of this as the DNA for a clumsy, inefficient animal 500 million years ago. Together, we need to evolve this design into an elegant, efficient beast, well adapted to its environment.

Please shoot holes in this proposal! What won't work for you? What's impractical? What's unfair? What opens the competition up to cheating? How can we make the competition more attractive to researchers? How can we make the competition more informative for the community? How can we simplify the process?

The draft proposal is available on Google Docs. I've linked to a Google Doc rather than copying and pasting the proposal into this post so that we can update the proposal as the discussion develops. Please add your comments to the mailing list discussion, or to the Google Doc (please sign your comments with your name unless you deliberately want to remain anonymous), or, if you'd rather keep your comments private, email Jack directly.

Thanks (in no particular order), Jack, Mario, Oli, Stephen, Grant, Marco, Peter

Thursday, 2 June 2016

PNNL NILM vendor survey

Below is a message to NILM vendors I'm sharing on behalf of PNNL:

The Pacific Northwest National Laboratory (PNNL) continues its work to develop Non-Intrusive Load Monitoring (NILM) test protocols. Activities to date have focused on the development of a technical working group (NILM vendors, users/potential users, and other stakeholders), research and decisions on candidate performance metrics, and the development of performance protocols. For this last activity we are seeking input from the larger NILM vendor community.

Below is a short feedback form to assist in directing the protocol development. We would appreciate your responses as soon as possible; please send them by email to Joseph.Petersen@pnnl.gov. We appreciate your time and participation.

NILM Performance Metrics Project:  Status and Feedback Request


Feedback Goal:  To better understand preferences and constraints of proposed metric implementation approaches.  Please consider the two approaches to evaluating NILM performance listed below, and then provide your feedback to the following questions.

Approach 1: Data Driven – NILM devices use a diverse set of previously collected interval data to test device performance.

Approach 2:  Laboratory Testing – NILM devices are connected to actual appliances and/or load simulation systems to test performance.

  1. Is your NILM platform/product capable of accepting 1-second to 1-minute interval data as inputs for disaggregation?
  2. At what sampling interval is your NILM platform/product designed to take measurements or data inputs, e.g., 1 minute, 5 minute, hourly, other?
  3. What specific inputs are necessary for your NILM platform or product, e.g., interval power data, energy data, voltage, current, reactive power, other?
  4. What appliances or end-uses does your NILM product target?
  5. What are the target use cases for your NILM product?
  6. Other comments or questions you’d like to share regarding the development of the Data Driven or Laboratory Testing protocols?
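The first two questions both turn on sampling intervals. As a rough illustration (not part of PNNL's actual protocol), converting a higher-rate trace to the interval a product expects is a simple block-averaging step:

```python
def downsample(samples, factor):
    """Average consecutive blocks of `factor` samples; a partial tail block is dropped."""
    n_blocks = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n_blocks)]

# Three minutes of (made-up) 1-second power readings in watts...
one_second = [100.0] * 60 + [500.0] * 60 + [0.0] * 60
# ...reduced to 1-minute mean power for a product that takes 1-minute input.
one_minute = downsample(one_second, 60)
print(one_minute)  # [100.0, 500.0, 0.0]
```

Note that averaging in this direction is straightforward, whereas a product designed for 1-second input cannot recover detail from 1-minute data, which is why a data-driven protocol needs to collect at the finest interval any vendor requires.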

Saturday, 21 May 2016

NILM 2016 presentation videos and slides now available

Stephen Makonin has just uploaded the last of the videos to the NILM 2016 YouTube playlist, meaning that you can now easily watch individual talks from the conference. In particular, I'd recommend George Hart's keynote talk, Life after NILM, covering what it was like to do energy disaggregation in the '80s, his lifelong passion for mathematics, his more recent shift towards art and sculpture, and finally his educational workshops.

The full set of slides and papers relating to each talk are also available via the NILM 2016 program if you'd like to catch up on everything else from the conference too. Happy watching!


Wednesday, 18 May 2016

5 things I learned from NILM 2016


The 3rd International Workshop on Non-Intrusive Load Monitoring was held on the 14-15 May 2016 at the beautiful mountaintop campus of Simon Fraser University, Vancouver, Canada. I thoroughly enjoyed the event, and just wanted to summarise a few of my thoughts having had the flight home to digest the weekend's activities.

1. NILM is not dead


Despite the jokes thrown around at the European workshop last year, the NILM community is as alive as it has ever been. The Vancouver workshop was the first 2-day energy disaggregation conference, the best attended, and in my opinion the most stimulating in terms of presentations and discussions. That said, NILM is still far from a solved problem. Michael Baker from SBW Consulting presented a paper evaluating the performance of three NILM vendors, which concluded that "further development of disaggregation algorithms is needed before they are sufficiently accurate to provide customers with accurate estimate of how much they spend on most end uses."

2. Different customers want energy breakdowns for different reasons


Most academic researchers (myself included) see NILM as a tool to encourage energy efficiency, but utilities also see it as a tool for customer engagement. A perfect monthly energy breakdown might serve both purposes, but the appropriate disaggregation methodology can be quite different. For energy efficiency, the aim might be to identify the rare cases where large energy savings are possible, while for customer engagement it might be preferable for the disaggregation to be more conservative when identifying such edge cases, since a false positive is likely to cost far more in reputation than a correct identification gains.

3. The assumption that disaggregation leads to significant energy efficiency has never been concretely demonstrated


Jack Kelly gave an excellent presentation of his paper which summarised the literature regarding whether disaggregated feedback leads to energy savings. One of the most shocking messages was that "the four studies which directly compared aggregate feedback against disaggregated feedback found that aggregate feedback is at least as effective as disaggregated feedback" (see full paper for details). Jack was very careful to clarify that this does not mean that disaggregated data is useless, but rather that the community desperately needs a large, well-controlled, long-duration, randomised, international study to confidently quantify energy reductions as a result of disaggregated data.

4. An academic energy disaggregation competition is badly needed


The panel discussion following the two algorithm sessions brought a lively debate around what a NILM competition might look like. Phrases like "bring it on!" were thrown around, though it also became clear that defining a scenario (e.g. sample rate, scale) which encourages broad participation is a real challenge. Furthermore, such a competition would need a substantial investment of time from an impartial organiser, as well as an expensive process of data collection.

5. NILM researchers love puzzles


George Hart gave an excellent keynote talk describing what it was like to perform energy disaggregation research in the 1980s. He then went on to talk to a transfixed audience about his more recent interests in mathematical sculpture and puzzle solving. However, nothing prepared me for the silence which fell over dinner when he handed out a series of puzzles (mostly physical blocks to be separated or assembled), as every researcher forgot the topic of the conference and indulged in some more traditional problem solving.