I'm starting to study Stan.
Can someone explain when and how to use syntax like ...?
target +=
instead of just:
y ~ normal(mu, sigma)
For example, the Stan manual contains the following example.
model {
  real ps[K];  // temp for log component densities
  sigma ~ cauchy(0, 2.5);
  mu ~ normal(0, 10);
  for (n in 1:N) {
    for (k in 1:K) {
      ps[k] = log(theta[k]) + normal_lpdf(y[n] | mu[k], sigma[k]);
    }
    target += log_sum_exp(ps);
  }
}
I think the target += line increments the target value, which I take to be the log of the posterior density.
But the posterior density of which parameters?
When is it initialized, and when is it updated?
After Stan finishes (and converges), how do I access its value, and what do I use it for?
Other examples:
data {
  int<lower=0> J;          // number of schools
  real y[J];               // estimated treatment effects
  real<lower=0> sigma[J];  // se of effect estimates
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] eta;
}
transformed parameters {
  vector[J] theta;
  theta = mu + tau * eta;
}
model {
  target += normal_lpdf(eta | 0, 1);
  target += normal_lpdf(y | theta, sigma);
}
The example above uses target += twice, not just once.
Here is another example.
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma_sq;
  vector<lower=-0.5, upper=0.5>[N] y_err;
}
transformed parameters {
  real<lower=0> sigma;
  vector[N] z;
  sigma = sqrt(sigma_sq);
  z = y + y_err;
}
model {
  target += -2 * log(sigma);
  z ~ normal(mu, sigma);
}
This last example even mixes both methods.
To make this even more confusing, I have read that
y ~ normal(0, 1);
has the same effect as
increment_log_prob(normal_log(y, 0, 1));
(which I understand is the older syntax for target +=). Can someone explain why?
Can someone provide a simple example written in the two different ways, once with target += and once with the usual sampling-statement style y ~ ..., please?
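To show the kind of side-by-side comparison I'm hoping for, here is my own attempt at a minimal model written both ways. I'm not sure the two versions are really equivalent (that is part of my question), and my understanding is that they may differ only by a constant normalizing term that does not affect sampling:

```stan
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
// Version 1: sampling-statement syntax
model {
  y ~ normal(mu, sigma);
}
// Version 2: what I believe is the equivalent target += syntax
// (used in place of the model block above, not in addition to it)
// model {
//   target += normal_lpdf(y | mu, sigma);
// }
```

If someone could confirm whether these two model blocks give the same posterior, that would already answer much of my question.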