SHORT ANSWER
Is the above assumption correct or am I mistaken?
In short: yes, you are not mistaken. Read through my long explanation with an example to see why; I hope you will appreciate it.
In addition, I created a second batch step to capture FAILED RECORDS ONLY. But with the current design I am not able to capture the failed records.
You are probably forgetting to set max-failed-records="-1" (no limit) on the batch job. The default value is 0, so the first time a record fails the job stops and does not execute the following steps.
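In other words, the only change needed is on the job declaration (the job name below is just a placeholder):

<!-- default: max-failed-records is 0, the job stops after the first failed record -->
<batch:job name="batch_myJob">
    ...
</batch:job>

<!-- no limit: records keep flowing to the following steps even when some of them fail -->
<batch:job name="batch_myJob" max-failed-records="-1">
    ...
</batch:job>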
Is the approach I am using good, or could there be a better design?
I think it makes sense if performance is important to you and you cannot afford the overhead of performing these operations sequentially. If instead you can afford to slow things down a bit, it may make sense to split this work into 5 different steps; you will lose parallelism, but you gain better control over failed records, especially if you use a batch commit.
HOW THIS WORKS IN PRACTICE
I think the best way to explain how this works is with an example.
Consider the following case: you have a batch job with max-failed-records="-1" (no limit).
<batch:job name="batch_testBatch" max-failed-records="-1">
Into this job we feed a collection of 6 records:
<batch:input>
    <set-payload value="#[['record1','record2','record3','record4','record5','record6']]" doc:name="Set Payload"/>
</batch:input>
The processing consists of three steps: the first step simply logs each record, while the second step also logs the record but throws an exception on record3 to simulate a failure.
<batch:step name="Batch_Step"> <logger message="-- processing #[payload] in step 1 --" level="INFO" doc:name="Logger"/> </batch:step> <batch:step name="Batch_Step2"> <logger message="-- processing #[payload] in step 2 --" level="INFO" doc:name="Logger"/> <scripting:transformer doc:name="Groovy"> <scripting:script engine="Groovy"><![CDATA[ if(payload=="record3"){ throw new java.lang.Exception(); } payload; ]]> </scripting:script> </scripting:transformer> </batch:step>
The third step instead contains only a batch commit with a commit size of 2:
<batch:step name="Batch_Step3">
    <batch:commit size="2" doc:name="Batch Commit">
        <logger message="-- committing #[payload] --" level="INFO" doc:name="Logger"/>
    </batch:commit>
</batch:step>
Now follow along with the execution of this batch job:

At the beginning, all 6 records are processed in the first step, and the console output will look like this:
-- processing record1 in step 1 --
-- processing record2 in step 1 --
-- processing record3 in step 1 --
-- processing record4 in step 1 --
-- processing record5 in step 1 --
-- processing record6 in step 1 --
Step Batch_Step finished processing all records for instance d8660590-ca74-11e5-ab57-6cd020524153 of job batch_testBatch
Step 2 is where it gets more interesting: record3 will fail because we explicitly throw an exception, but despite this the step continues processing the other records. This is what the log looks like:
-- processing record1 in step 2 --
-- processing record2 in step 2 --
-- processing record3 in step 2 --
com.mulesoft.module.batch.DefaultBatchStep: Found exception processing record on step ...
Stacktrace ....
-- processing record4 in step 2 --
-- processing record5 in step 2 --
-- processing record6 in step 2 --
Step Batch_Step2 finished processing all records for instance d8660590-ca74-11e5-ab57-6cd020524153 of job batch_testBatch
At this point, despite the failed record in this step, batch processing continues, because the max-failed-records parameter is set to -1 (no limit) rather than the default value of 0.
All the successful records are then passed to step 3; this happens because, by default, the accept-policy attribute of a step is set to NO_FAILURES. (The other possible values are ALL and ONLY_FAILURES.)
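As a side note, this accept-policy is also the key to your second question: a step declared with accept-policy="ONLY_FAILURES" receives only the records that failed in the previous steps. A minimal sketch (the step name is just an example):

<batch:step name="Only_Failures_Step" accept-policy="ONLY_FAILURES">
    <logger message="-- #[payload] failed in a previous step --" level="WARN" doc:name="Logger"/>
</batch:step>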
Now step 3, which contains the batch commit with a size of 2, will commit the records two at a time:
-- committing [record1, record2] --
-- committing [record4, record5] --
Step Batch_Step3 finished processing all records for instance d8660590-ca74-11e5-ab57-6cd020524153 of job batch_testBatch
-- committing [record6] --
As you can see, this confirms that record3, being in a failed state, was not passed to the next step and therefore was not committed.
Starting from this example, I think you can imagine and test more complex scenarios. For example, after the commit you could add another step that processes only failed records in order to notify an administrator with an error message. Beyond that, you can always use external storage to keep more detailed information about your records, as you can read in my answer to this other question.
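As a rough sketch of that idea (the step name is made up, the logger is just a stand-in for whatever notification connector you prefer, and the on-complete expressions assume the standard Mule 3 BatchJobResult payload, so double-check the property names in your version):

<batch:step name="Notify_Admin_Step" accept-policy="ONLY_FAILURES">
    <!-- replace the logger with an email or other notification mechanism -->
    <logger message="Record #[payload] failed, notifying the administrator" level="ERROR" doc:name="Logger"/>
</batch:step>
<batch:on-complete>
    <!-- the on-complete payload is a BatchJobResult with counters such as failedRecords -->
    <logger message="Job finished: #[payload.failedRecords] failed out of #[payload.totalRecords] records" level="INFO" doc:name="Logger"/>
</batch:on-complete>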
Hope this helps