I am working through a book called Bayesian Analysis with Python. The book focuses mainly on the PyMC3 package, but it is a bit vague on the theory behind it, so I'm pretty confused.
Let's say I have this data:
data = np.array([51.06, 55.12, 53.73, 50.24, 52.05, 56.40, 48.45, 52.34, 55.65, 51.49, 51.86, 63.43, 53.00, 56.09, 51.93, 52.31, 52.33, 57.48, 57.44, 55.14, 53.93, 54.62, 56.09, 68.58, 51.36, 55.47, 50.73, 51.94, 54.95, 50.39, 52.91, 51.5, 52.68, 47.72, 49.73, 51.82, 54.99, 52.84, 53.19, 54.52, 51.46, 53.73, 51.61, 49.81, 52.42, 54.3, 53.84, 53.16])
And I look at a model like this:
![enter image description here](https://fooobar.com//img/23c4f0fc4912cc718295d0b3b5aeaa36.png)
Using a Metropolis sampler, how can I write a model that estimates mu and sigma?
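I believe the PyMC3 version would look roughly like this (my own sketch; the prior constants 50 and 1 are my guesses, matching the pseudocode below), but I want to understand what the sampler is actually doing:

    import pymc3 as pm

    with pm.Model() as model:
        # priors on the unknown mean and standard deviation
        mu = pm.Normal('mu', mu=50, sd=1)
        sigma = pm.HalfNormal('sigma', sd=1)
        # likelihood of the observed data
        y = pm.Normal('y', mu=mu, sd=sigma, observed=data)
        trace = pm.sample(1000, step=pm.Metropolis())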
Here is my hunch at the pseudocode, based on what I have read:
import numpy as np
from scipy import stats

# hyperparameters of the priors
M, S = 50, 1   # mean and standard deviation of the Normal prior on mu
G = 1          # scale of the HalfNormal prior on sigma

mu = stats.norm(loc=M, scale=S)    # prior on mu
sigma = stats.halfnorm(scale=G)    # prior on sigma
target = stats.norm                # likelihood for the data

steps = 1000
mu_samples = [50]
sigma_samples = [1]

for i in range(steps):
    # draw candidate values (currently straight from the priors)
    mu_i, sigma_i = mu.rvs(), sigma.rvs()
    "..."  # presumably the data has to enter here somehow
    a = "some" / "ratio"   # the acceptance ratio I don't know how to compute
    acceptance_bar = np.random.random()
    if a > acceptance_bar:
        mu_samples.append(mu_i)
        sigma_samples.append(sigma_i)
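My current guess is that the acceptance ratio should compare the posterior density at the proposed values with the posterior density at the current values, something like this sketch in log space (the proposal scales 0.5 and 0.25 are arbitrary guesses of mine):

    def log_posterior(mu_val, sigma_val, data):
        # unnormalized log posterior = log prior + log likelihood
        if sigma_val <= 0:
            return -np.inf
        log_prior = (stats.norm(loc=M, scale=S).logpdf(mu_val)
                     + stats.halfnorm(scale=G).logpdf(sigma_val))
        log_like = stats.norm(loc=mu_val, scale=sigma_val).logpdf(data).sum()
        return log_prior + log_like

    mu_samples, sigma_samples = [50.0], [1.0]
    for i in range(steps):
        # propose around the current values instead of drawing from the priors
        mu_i = np.random.normal(mu_samples[-1], 0.5)
        sigma_i = np.random.normal(sigma_samples[-1], 0.25)
        # acceptance ratio: posterior at the proposal vs. the current point
        log_a = (log_posterior(mu_i, sigma_i, data)
                 - log_posterior(mu_samples[-1], sigma_samples[-1], data))
        if np.log(np.random.random()) < log_a:
            mu_samples.append(mu_i)
            sigma_samples.append(sigma_i)
        else:
            # keep the current values when the proposal is rejected
            mu_samples.append(mu_samples[-1])
            sigma_samples.append(sigma_samples[-1])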
What am I missing?