This is a simple multiplication: each sample's loss is scaled by that sample's weight. With samples i = 1 to n, a weight vector w of length n, and L_i denoting the loss for sample i, the weighted loss is:

weighted loss = (1/n) * Σ_{i=1}^{n} w_i * L_i
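For concreteness, here is a minimal NumPy sketch of that weighted average (the loss and weight values are invented purely for illustration):

```python
import numpy as np

# Hypothetical per-sample losses and weights for a batch of n = 4
L = np.array([0.5, 1.2, 0.3, 2.0])  # L_i: loss for each sample
w = np.array([1.0, 2.0, 1.0, 0.5])  # w_i: weight for each sample

# Plain weighted loss: (1/n) * sum_i w_i * L_i
weighted_loss = np.mean(w * L)
print(weighted_loss)  # 1.05
```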

In Keras, in particular, the product of each sample's loss with its weight is additionally divided by the fraction of weights that are not equal to 0. Let p be that fraction of non-zero weights; the batch loss is then

batch loss = (1/(n*p)) * Σ_{i=1}^{n} w_i * L_i

which is the same as averaging the weighted losses over only the samples with non-zero weight, so zero-weight samples do not dilute the batch loss.

Here is the corresponding code snippet from the Keras repository:
```python
score_array = loss_fn(y_true, y_pred)
if weights is not None:
    score_array *= weights
    score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))
return K.mean(score_array)
```
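To see what the division by the non-zero fraction does, the snippet can be reproduced in plain NumPy (the loss and weight values are invented; only the formula mirrors the code above):

```python
import numpy as np

L = np.array([0.5, 1.2, 0.3, 2.0])  # per-sample losses
w = np.array([1.0, 2.0, 0.0, 0.0])  # two samples masked out with weight 0

score = L * w          # score_array *= weights
p = np.mean(w != 0)    # fraction of non-zero weights (here 0.5)
batch_loss = np.mean(score) / p

# Identical to averaging over only the non-zero-weight samples:
print(batch_loss)                           # 1.45
print(np.sum(L * w) / np.count_nonzero(w))  # 1.45
```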
class_weight is used in the same way as sample_weight; it is simply provided as a convenience to specify one weight per class rather than per sample.
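As a usage sketch with the modern tf.keras API (the model, data, and weight values here are all placeholders, not from the answer above):

```python
import numpy as np
from tensorflow import keras

# Toy binary-classification data and model, purely for illustration
x_train = np.random.rand(100, 8).astype("float32")
y_train = np.random.randint(0, 2, size=100)

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# class_weight: one weight per class, applied to every sample of that class
model.fit(x_train, y_train, class_weight={0: 1.0, 1: 5.0}, epochs=1)

# sample_weight: an explicit weight per sample; this reproduces the
# class_weight mapping above
sample_weight = np.where(y_train == 1, 5.0, 1.0)
model.fit(x_train, y_train, sample_weight=sample_weight, epochs=1)
```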
Sample weights currently apply only to losses, not to metrics.