What are the methods for determining probabilities in hidden Markov models?

I'm starting to study hidden Markov models, and there are many examples on the Wikipedia page as well as on GitHub, but in most of them the probabilities are already given (70% chance of rain, 30% chance of a state change, etc.). The spell-checking and sentence examples seem to work by studying books and then ranking word probabilities.

So, does the Markov model itself include a method for determining those probabilities, or is it assumed that some other model pre-computes them?

Sorry if this question is off base. I understand that a hidden Markov model selects probable sequences, but where the probabilities come from is still a gray area for me (because they are usually just provided). Examples or any other information would be great.


For those who are not familiar with Markov models, here is an example (from Wikipedia): http://en.wikipedia.org/wiki/Viterbi_algorithm and http://en.wikipedia.org/wiki/Hidden_Markov_model

#!/usr/bin/env python
states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')
start_probability = {'Rainy': 0.6, 'Sunny': 0.4}
transition_probability = {
    'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
    'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}
emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}

# Application code.
# Helps visualize the steps of Viterbi.
def print_dptable(V):
    print("    " + " ".join("%7d" % i for i in range(len(V))))
    for y in V[0]:
        print("%.5s: " % y +
              " ".join("%.7s" % ("%f" % V[t][y]) for t in range(len(V))))

def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{}]
    path = {}
    # Initialize base cases (t == 0)
    for y in states:
        V[0][y] = start_p[y] * emit_p[y][obs[0]]
        path[y] = [y]
    # Run Viterbi for t > 0
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}
        for y in states:
            (prob, state) = max((V[t - 1][y0] * trans_p[y0][y] * emit_p[y][obs[t]], y0)
                                for y0 in states)
            V[t][y] = prob
            newpath[y] = path[state] + [y]
        # Don't need to remember the old paths
        path = newpath
    print_dptable(V)
    (prob, state) = max((V[len(obs) - 1][y], y) for y in states)
    return (prob, path[state])

# Start trigger.
def example():
    return viterbi(observations, states,
                   start_probability,
                   transition_probability,
                   emission_probability)

print(example())
1 answer

You are looking for an expectation-maximization (EM) algorithm to estimate the unknown parameters from sets of observed sequences. Probably the most commonly used one is the Baum-Welch algorithm, which is built on the forward-backward algorithm.

For reference, here is a set of slides I have used before to review HMMs. It has a nice overview of forward-backward, Viterbi, and Baum-Welch.
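Before reaching for Baum-Welch, note that when you *do* have labeled data (as in the "studying books" examples from the question), the parameters can be estimated by simple maximum-likelihood counting: count how often each state starts a sequence, follows another state, and produces each observation, then normalize. Here is a minimal sketch of that idea, using a toy corpus of (weather, activity) pairs invented for illustration; Baum-Welch is what you need in the unsupervised case, where the state labels are hidden.

```python
from collections import defaultdict

def estimate_hmm(labeled_sequences):
    """Maximum-likelihood HMM parameters from sequences of
    (state, observation) pairs, estimated by counting."""
    start_counts = defaultdict(int)
    trans_counts = defaultdict(lambda: defaultdict(int))
    emit_counts = defaultdict(lambda: defaultdict(int))
    for seq in labeled_sequences:
        start_counts[seq[0][0]] += 1               # which state begins the sequence
        for state, obs in seq:
            emit_counts[state][obs] += 1           # state -> observation counts
        for (prev, _), (cur, _) in zip(seq, seq[1:]):
            trans_counts[prev][cur] += 1           # state -> next-state counts

    def normalize(counts):
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    start_p = normalize(start_counts)
    trans_p = {s: normalize(c) for s, c in trans_counts.items()}
    emit_p = {s: normalize(c) for s, c in emit_counts.items()}
    return start_p, trans_p, emit_p

# Toy labeled corpus: two "days" of (weather, activity) pairs.
corpus = [
    [('Rainy', 'clean'), ('Rainy', 'shop'), ('Sunny', 'walk')],
    [('Sunny', 'walk'), ('Sunny', 'shop'), ('Rainy', 'clean')],
]
start_p, trans_p, emit_p = estimate_hmm(corpus)
print(start_p)            # {'Rainy': 0.5, 'Sunny': 0.5}
print(trans_p['Rainy'])   # {'Rainy': 0.5, 'Sunny': 0.5}
```

The resulting dictionaries have exactly the shape the Viterbi code in the question expects (start_probability, transition_probability, emission_probability), so you can feed them straight into it. In practice you would also add smoothing so that unseen transitions or emissions do not get probability zero.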


Source: https://habr.com/ru/post/900311/
