Experiment 2: Hidden Markov Model
Experiment materials · Experiment source code

Experimental Requirement

In this assignment, we consider an indoor positioning system based on hidden Markov models (HMMs). The aim is to locate a mobile sensor that receives signals from beacons placed in a building or a room. The strength of a message fluctuates not only with the distance between the transmitter and the receiver but also because of reflection and scattering inside buildings. One way to overcome this problem is to collect the signal strengths for each position and construct a probability distribution as our observation model, from which we can estimate the position of the sensor.

Configuration

I coded up the algorithm and ran it in VS Code (Visual Studio Code) with Python 3.9.6. Additionally, I store my code on Gitee in case of need.


Implementation

Preparation

  1. Install Jupyter and enable the widgets extension to get interactive controls:

$ pip install jupyter
$ jupyter nbextension enable --py widgetsnbextension
$ jupyter notebook

  2. Install the dependency packages numpy and matplotlib:

$ pip install numpy
$ pip install matplotlib

Conditions

  1. The total number of states is n_states = width * length, and a probability distribution over the location of the target sensor is a vector of length n_states.
  2. self.trans_probs stores the transition probabilities P(S_{t+1} | S_t) in a 2-dimensional NumPy array, which can be read off directly given the indexes.
  3. n_beacons is the length of the observation vector that the sensor receives at each time step.
  4. The RSSI values received from the beacons are stored in an array o.
  5. P(O(b) = o[b] | S = s), the probability of observing o[b] in a particular state s, is given by self.obs_probs[b, o[b], s].
  6. Provided that the RSSI values of different messages are conditionally independent, we can decompose the probability of observing the vector o into a product of individual likelihood functions:

P(O = o | S = s) = ∏_b P(O(b) = o[b] | S = s)
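The conditional-independence decomposition above can be sketched directly in NumPy. The array layout mirrors the assignment's self.obs_probs; the sizes and numbers below are made up purely for illustration:

```python
import numpy as np

# Hypothetical toy setup: 2 beacons, 3 possible RSSI values, 4 states.
# obs_probs[b, v, s] = P(O(b) = v | S = s), as in the assignment's self.obs_probs.
rng = np.random.default_rng(0)
obs_probs = rng.random((2, 3, 4))
obs_probs /= obs_probs.sum(axis=1, keepdims=True)  # normalize over RSSI values

# One observation vector: beacon 0 read RSSI level 1, beacon 1 read RSSI level 2.
o = np.array([1, 2])

# P(O = o | S = s) = prod over beacons b of P(O(b) = o[b] | S = s),
# computed for every state s at once via fancy indexing.
likelihood = np.prod(obs_probs[np.arange(2), o, :], axis=0)
print(likelihood.shape)  # one likelihood value per state
```

The fancy index `obs_probs[np.arange(2), o, :]` picks out the row P(O(b) = o[b] | S = ·) for each beacon b, and the product over axis 0 performs the decomposition.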

Step1: Predict

To predict the next state, we compute P(S_{t+1}|O_{1:t}) from probs (which holds P(S_{t}|O_{1:t})) and the transition probabilities P(S_{t+1}|S_{t}). The matrix-vector product is computed with NumPy's matmul.

The code is as follows

def predict(self, probs):
    predicted_probs = np.empty(self.n_states)
    # My Code Below
    predicted_probs = np.matmul(self.trans_probs, probs)
    return predicted_probs
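As a quick sanity check of the prediction step (with toy numbers that are not from the assignment's data), multiplying a column-stochastic transition matrix by a normalized belief vector yields another normalized distribution:

```python
import numpy as np

# Hypothetical 3-state example. trans_probs[i, j] = P(S_{t+1}=i | S_t=j),
# so each column sums to 1 and predict() is a matrix-vector product.
trans_probs = np.array([[0.8, 0.1, 0.0],
                        [0.2, 0.7, 0.3],
                        [0.0, 0.2, 0.7]])
probs = np.array([0.5, 0.3, 0.2])          # P(S_t | O_{1:t})
predicted = np.matmul(trans_probs, probs)  # P(S_{t+1} | O_{1:t})
print(predicted, predicted.sum())          # still sums to 1
```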

Step2: Update

This method updates the probability using the observations to compute probs*P(O_{t}|S_{t}). What its result represents depends on the calling method. If it is called from the method monitor, then probs represents P(S_{t}|O_{1:t-1}) and the resulting vector will be proportional to P(S_{t}|O_{1:t}). Similarly, if it is called from the method backwards, probs represents P(O_{t+1:T}|S_{t}) and it will return a vector that is proportional to P(O_{t:T}|S_{t}).

The code is as follows

def update(self, probs, o):
    updated_probs = np.empty(self.n_states)
    # My Code Below
    # 1. initialize the updated probabilities
    for i in range(self.n_states):
        updated_probs[i] = 1
    # 2. multiply in the per-beacon likelihoods, then the prior probs
    for i in range(self.n_states):
        for j in range(self.n_beacons):
            updated_probs[i] = updated_probs[i] * self.obs_probs[j, o[j], i]
        updated_probs[i] = updated_probs[i] * probs[i]
    # 3. return the (unnormalized) result
    return updated_probs
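The double loop above can also be written without explicit loops using NumPy fancy indexing. This sketch (with made-up toy data) checks that both forms agree:

```python
import numpy as np

# Toy dimensions, purely for illustration.
n_beacons, n_rssi, n_states = 2, 3, 4
rng = np.random.default_rng(1)
obs_probs = rng.random((n_beacons, n_rssi, n_states))
probs = rng.random(n_states)
o = np.array([0, 2])

# Loop version, as in update() above.
updated_loop = np.ones(n_states)
for i in range(n_states):
    for j in range(n_beacons):
        updated_loop[i] *= obs_probs[j, o[j], i]
    updated_loop[i] *= probs[i]

# Vectorized version: select P(O(b)=o[b] | S) for every beacon at once,
# multiply down the beacon axis, then multiply by the prior probs.
updated_vec = probs * np.prod(obs_probs[np.arange(n_beacons), o, :], axis=0)
print(np.allclose(updated_loop, updated_vec))
```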

Step3: Monitor

This method returns the monitoring probabilities for T time steps using the given sequence of observations by computing P(S_{t}|O_{1:t}) for each t.

The code is as follows

def monitor(self, T, observations):
    monitoring_probs = np.empty((T, self.n_states))
    # My Code Below
    monitoring_probs[0] = self.init_probs
    for i in range(1, T):
        monitoring_probs[i] = self.predict(monitoring_probs[i - 1])
        monitoring_probs[i] = self.update(monitoring_probs[i], observations[i, :])
        monitoring_probs[i] = monitoring_probs[i] / np.sum(monitoring_probs[i])
    return monitoring_probs
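Putting the three steps together, here is a minimal self-contained forward pass on a made-up 2-state, 1-beacon HMM. The class mirrors the structure of the assignment's methods, but all numbers are illustrative, not from the experiment's data:

```python
import numpy as np

class HMM:
    def __init__(self, init_probs, trans_probs, obs_probs):
        self.init_probs = init_probs    # P(S_1)
        self.trans_probs = trans_probs  # trans_probs[i, j] = P(S_{t+1}=i | S_t=j)
        self.obs_probs = obs_probs      # obs_probs[b, v, s] = P(O(b)=v | S=s)
        self.n_states = len(init_probs)
        self.n_beacons = obs_probs.shape[0]

    def predict(self, probs):
        return np.matmul(self.trans_probs, probs)

    def update(self, probs, o):
        lik = np.prod(self.obs_probs[np.arange(self.n_beacons), o, :], axis=0)
        return probs * lik

    def monitor(self, T, observations):
        monitoring_probs = np.empty((T, self.n_states))
        monitoring_probs[0] = self.init_probs
        for t in range(1, T):
            p = self.predict(monitoring_probs[t - 1])
            p = self.update(p, observations[t])
            monitoring_probs[t] = p / p.sum()
        return monitoring_probs

# Toy model: 2 states, 1 beacon with 2 RSSI levels.
hmm = HMM(init_probs=np.array([0.5, 0.5]),
          trans_probs=np.array([[0.9, 0.2], [0.1, 0.8]]),
          obs_probs=np.array([[[0.7, 0.1], [0.3, 0.9]]]))
obs = np.array([[0], [0], [1], [1]])
m = hmm.monitor(4, obs)
print(m)  # each row is a distribution over the 2 states
```

Each row of the returned array is a normalized belief P(S_t | O_{1:t}), so the belief mass shifts toward state 0 while RSSI level 0 is observed and back toward state 1 afterwards.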

Result Analysis

Test the code using hmm_test.py:

$ python -m unittest hmm_test.py

The result is as follows


Conclusion

In this experiment, I learnt how to design and implement hidden Markov models. Hidden Markov models are statistical models used to describe a Markov process with hidden, unobserved parameters. The difficulty is to determine these hidden parameters of the process from the observable ones; the parameters are then used for further analysis.

I ran the program a few times to see how the running time changes with environment and configuration.



Additionally, when coding up the algorithm, I needed to search for plenty of information on the Internet to make sure the code was correct. Looks like I've got a lot to learn. Plus, hidden Markov models are exceedingly useful for forward prediction and backward inference based on observations and probabilities.