I guess the documentation is not 100% consistent. It works correctly with an auto-incrementing primary key. If you can guarantee that the UUID of any new record will be greater than the key of every existing record in the table, it will also work correctly; otherwise you risk skipping records that are added after batch processing has started.
Internally, at each step it remembers the identifier of the last record in the batch (`last_id`) and in the next step selects 1000 records with an identifier greater than `last_id`. Therefore, if the application creates a new record whose identifier is less than `last_id` while processing is in progress, that record will never be picked up.
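To make that concrete, here is a minimal Python/SQLite sketch of the same keyset-pagination loop. The `records` table, `payload` column, and `handle` callback are made-up names for illustration; this is not the library's actual code, just the idea it describes.

```python
import sqlite3

def process_in_batches(conn: sqlite3.Connection, handle, batch_size=1000):
    """Walk the hypothetical `records` table in batches, keyed on `id`."""
    last_id = None
    while True:
        if last_id is None:
            rows = conn.execute(
                "SELECT id, payload FROM records ORDER BY id LIMIT ?",
                (batch_size,),
            ).fetchall()
        else:
            # Only rows with id > last_id are selected, so a record inserted
            # later with an id < last_id is silently skipped.
            rows = conn.execute(
                "SELECT id, payload FROM records WHERE id > ? ORDER BY id LIMIT ?",
                (last_id, batch_size),
            ).fetchall()
        if not rows:
            break
        for row in rows:
            handle(row)
        last_id = rows[-1][0]  # remember the last id seen in this batch
```

This is also why the approach is safe with a monotonically increasing key: any record created mid-run gets an id greater than every `last_id` the loop will reach, so it still falls into a later batch.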