Wednesday 1 June 2011

Calculating the optimal appliance state transitions for an observed data set - continued

This post extends the previous post by applying the same technique to a multi-state appliance. The classification accuracy was tested using the power demand of a washing machine; the power demand over two wash cycles is shown below.
Each cycle consists of two similar states drawing approximately 1600 W, connected by an intermediate state drawing approximately 50 W. The intermediate state always connects two on states, and therefore the model should never classify transitions between the intermediate state and off states. This can be represented by the finite state machine below.
The HMM should capture such appliance behaviour. It was therefore set up such that the hidden variable has three possible states: 1 (off), 2 (intermediate) and 3 (on). The observed variable represents a power sample drawn from a mixture of three Gaussians (one per state), and the Gaussian responsible for generating each sample is indicated by the corresponding latent variable.

The model parameters pi (initial state probabilities), A (transition probabilities) and phi (emission densities) were set up as follows:

  • pi = [1 0 0] - The appliance will always be off at the start of the sequence
  • A = [0.9  0    0.1
         0    0.8  0.2
         0.1  0.1  0.8] - The appliance is most likely to stay in the same state, while transitions between off and intermediate are not possible

  • phi = {mean: [2.5 56 1600], cov: [10 10 10]} - The mean and variance (in watts) of the Gaussian emission density for each state
The transition matrix, A, assigns probability 0 to impossible state transitions. It also assigns probabilities close to 1 to self-transitions, since the appliance is most likely to remain in its current state between consecutive samples.
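The post itself doesn't include code, but as a minimal sketch (in Python/NumPy, with variable names of my own choosing) the parameters above could be written down like this:

```python
import numpy as np

# Initial state distribution: the appliance always starts off.
pi = np.array([1.0, 0.0, 0.0])

# Transition matrix (rows: from off/int/on, columns: to off/int/on).
# The zeros encode the impossible off <-> intermediate transitions.
A = np.array([[0.9, 0.0, 0.1],
              [0.0, 0.8, 0.2],
              [0.1, 0.1, 0.8]])

# Gaussian emission parameters per state (watts).
means = np.array([2.5, 56.0, 1600.0])
variances = np.array([10.0, 10.0, 10.0])
```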

After creating the HMM and setting the model parameters, the evidence can be entered into the inference engine. In this case, the evidence is the power demand shown by the graph at the top of this post. Once the evidence is entered, the Viterbi algorithm can be run to determine the optimal sequence of states responsible for generating the sequence of observations. The graph below shows a plot of the state sequence, where 1 = off, 2 = int, 3 = on.
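The post relies on an inference engine's built-in Viterbi routine; purely as an illustration, a standard Viterbi decoder for the Gaussian-emission HMM defined above could look like the sketch below. It reuses the pi, A, means and variances arrays from the previous sketch, and the observation values in the usage example are made up.

```python
import numpy as np

def viterbi(obs, pi, A, means, variances):
    """Most probable state sequence for a 1-D sequence of power readings."""
    K, T = len(pi), len(obs)
    eps = 1e-12                                   # avoid log(0) for impossible transitions
    log_pi, log_A = np.log(pi + eps), np.log(A + eps)

    # Per-sample Gaussian emission log-likelihoods, shape (T, K).
    log_b = (-0.5 * np.log(2 * np.pi * variances)
             - 0.5 * (obs[:, None] - means) ** 2 / variances)

    delta = np.zeros((T, K))                      # best log-probability ending in each state
    psi = np.zeros((T, K), dtype=int)             # back-pointers
    delta[0] = log_pi + log_b[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # rows: previous state, cols: current state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_b[t]

    # Backtrack to recover the optimal state sequence.
    states = np.empty(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states + 1                             # 1 = off, 2 = int, 3 = on

# Hypothetical power readings (W) spanning off -> on -> int -> on -> off.
obs = np.array([2.0, 3.0, 1610.0, 1595.0, 54.0, 57.0, 1603.0, 1598.0, 2.0])
print(viterbi(obs, pi, A, means, variances))      # -> [1 1 3 3 2 2 3 3 1]
```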
Such a probabilistic approach to state classification has advantages over dividing the power demand using static boundaries. First, the approach is robust to noise, in that variations around the mean demand of the intermediate and on states do not detrimentally affect their classification. Second, samples captured mid-transition are effectively ignored when the corresponding state transition has zero probability. For instance, one or two samples are captured while the appliance switches between the off and on states; instead of being classified as the intermediate state, these are treated as noise and correctly classified as the off state.

This approach allows information about the most probable transitions to be built into the model, exploiting the sequential nature of the observations. This is an extension to simply selecting, for each sample, the appliance state most likely to have generated that observation.

I intend to investigate how this approach can be extended to multiple appliances in the near future.

3 comments:

  1. Keep posting. You are doing a great job.

  2. Thanks for the encouragement and the reminder! I've just written a post about the current model I'm using, and will follow this up with a plan for future soon. I'm also in the process of finalising my 9-month report which I'll post here in case you're interested.

    I've just noticed you have a few blogs floating around. Is this the one you use for research thoughts http://www.marioberges.com/blog/ ?

  3. Thanks!
    It's this one: http://www.marioberges.com/blog/category/research/

