A fine-grained update modifies a single record in a database, whereas coarse-grained operations are the functional operators Spark uses, such as map, reduce, flatMap, and join. Spark's model exploits this: because it only needs to store your DAG of operations (which is tiny compared to the data being processed), it can recompute any lost result as long as the original input data still exists. With fine-grained updates you cannot do this, because logging the updates can cost as much as storing the data itself: if you update each of a billion records individually, you must record every one of those updates in order to replay them, whereas with coarse-grained operations you can store a single function that updates a billion records. The trade-off, of course, is that the coarse-grained model is not as flexible as the fine-grained one.
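A minimal Scala sketch of what that looks like in practice (assuming a local SparkContext and a hypothetical input file `events.txt`). Spark records only the three coarse-grained operators below as the lineage, not the per-record changes they imply:

    import org.apache.spark.{SparkConf, SparkContext}

    object LineageSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("lineage").setMaster("local[*]"))

        val lines  = sc.textFile("events.txt")      // base RDD: points at the input data
        val words  = lines.flatMap(_.split("\\s+")) // coarse-grained: one function, all records
        val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

        // The stored DAG is just these operator descriptions. If a partition
        // of `counts` is lost, Spark re-runs flatMap/map/reduceByKey on the
        // corresponding input partitions instead of replaying per-record updates.
        println(counts.toDebugString)

        sc.stop()
      }
    }

Calling `toDebugString` prints the lineage chain, which is exactly the "one function that updates a billion records" being saved instead of the updates themselves.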