Stan: using the target += syntax

I'm starting to study Stan.

Can someone explain when and how to use syntax like ...?

target += 

instead of just:

 y ~ normal(mu, sigma) 

For example, in the Stan manual you can find the following example.

 model {
   real ps[K];  // temp for log component densities
   sigma ~ cauchy(0, 2.5);
   mu ~ normal(0, 10);
   for (n in 1:N) {
     for (k in 1:K) {
       ps[k] = log(theta[k]) + normal_lpdf(y[n] | mu[k], sigma[k]);
     }
     target += log_sum_exp(ps);
   }
 }

I think the target += line increments the target value, which I understand to be the log of the posterior density.
But the posterior density of which parameters?

When is it updated and initialized?

After Stan finishes (and converges), how do I access its value, and how do I use it?

Other examples:

 data {
   int<lower=0> J;          // number of schools
   real y[J];               // estimated treatment effects
   real<lower=0> sigma[J];  // se of effect estimates
 }
 parameters {
   real mu;
   real<lower=0> tau;
   vector[J] eta;
 }
 transformed parameters {
   vector[J] theta;
   theta = mu + tau * eta;
 }
 model {
   target += normal_lpdf(eta | 0, 1);
   target += normal_lpdf(y | theta, sigma);
 }

The example above uses target += twice, not just once.

another example.

 data {
   int<lower=0> N;
   vector[N] y;
 }
 parameters {
   real mu;
   real<lower=0> sigma_sq;
   vector<lower=-0.5, upper=0.5>[N] y_err;
 }
 transformed parameters {
   real<lower=0> sigma;
   vector[N] z;
   sigma = sqrt(sigma_sq);
   z = y + y_err;
 }
 model {
   target += -2 * log(sigma);
   z ~ normal(mu, sigma);
 }

This last example even mixes both methods.

To make this even more confusing, I read that

 y ~ normal(0,1); 

has the same effect as

 increment_log_prob(normal_log(y,0,1)); 

Can someone explain why please?

Can someone provide a simple example written in the two different ways, with target += and with the usual sampling statement y ~, please?

1 answer

The syntax

 target += u; 

adds u to the target log density.

The target density is the density from which the sampler samples, and it needs to be equal to the joint density of all the parameters given the data, up to a constant (this is usually achieved through Bayes's rule, by coding the target as the joint density of the parameters and the modeled data, up to a constant of proportionality). You can access it as lp__ in the posterior, but be careful: it also contains the Jacobians arising from the constraints, and it drops constant terms in sampling statements, so you do not want to use it for model comparison.
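
As a minimal sketch of the "up to a constant" point (this toy program is my own illustration, not from the original answer): adding any constant to target leaves the posterior draws unchanged but shifts the reported lp__, which is one reason lp__ is not directly comparable across differently coded models.

 data {
   int<lower=0> N;
   vector[N] y;
 }
 parameters {
   real mu;
   real<lower=0> sigma;
 }
 model {
   y ~ normal(mu, sigma);
   // Adding a constant shifts lp__ by 12.3 but has no effect on the
   // draws for mu and sigma; the target only needs to equal the log
   // joint density up to an additive constant.
   target += 12.3;
 }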

In terms of sampling, writing

 target += normal_lpdf(y | mu, sigma); 

has the same effect as

 y ~ normal(mu, sigma); 

The _lpdf signals that it is the log probability density function for the normal, which is implicit in the sampling notation. The sampling notation is just shorthand for the target += syntax and, in addition, drops constant terms in the log density.
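
To answer the question at the end of the post, here is one small example written both ways; the data and priors are invented for illustration, and since a Stan program can contain only one model block, you would use one version or the other. Both define the same posterior.

 data {
   int<lower=0> N;
   vector[N] y;
 }
 parameters {
   real mu;
   real<lower=0> sigma;
 }
 // Way 1: sampling statements (constant terms are dropped from the target)
 model {
   mu ~ normal(0, 10);
   sigma ~ cauchy(0, 2.5);
   y ~ normal(mu, sigma);
 }

 // Way 2: explicit increments of the target log density (constants are kept)
 model {
   target += normal_lpdf(mu | 0, 10);
   target += cauchy_lpdf(sigma | 0, 2.5);
   target += normal_lpdf(y | mu, sigma);
 }

Both versions produce the same posterior draws; only the reported lp__ differs, by the constant terms the sampling statements drop.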

This is explained in the statements section of the language reference (the second part of the manual) and is used in several examples throughout the programmer's guide (the first part of the manual).



