Link to Tathagata Das's answer to the same question:
https://www.mail-archive.com/user@spark.apache.org/msg43512.html
The following is the text:
Both mapWithState() and updateStateByKey() use the default HashPartitioner and hash the key of the key-value DStream on which the stateful operation is applied. The new data and the state are partitioned with the same partitioner, so new data for the same keys (coming from the input DStream) is shuffled and colocated with the already-partitioned state RDDs. As a result, the new data is brought to the corresponding old state on the same machine, and the state mapping/updating function is then applied there.
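For concreteness, here is a minimal sketch (not part of Das's answer) of a stateful word count with mapWithState(); the colocation described above happens inside the state RDDs that mapWithState() maintains. The socket source, host/port, and checkpoint path are assumptions for illustration only:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

object StatefulWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StatefulWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("/tmp/spark-state") // state RDDs require checkpointing

    // Hypothetical source; any key-value DStream behaves the same way.
    val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))
    val pairs = words.map(word => (word, 1))

    // Merges a key's new value with its old state. Because new data and
    // state share the same HashPartitioner, this runs on the machine that
    // already holds the key's state, with no extra shuffle of the state.
    val updateState = (word: String, one: Option[Int], state: State[Int]) => {
      val newCount = one.getOrElse(0) + state.getOption.getOrElse(0)
      state.update(newCount)
      (word, newCount)
    }

    val stateDStream = pairs.mapWithState(StateSpec.function(updateState))
    stateDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

If the default HashPartitioner is not what you want, StateSpec.partitioner(...) lets you supply your own, and the same colocation logic applies.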