I think it depends on how much the jobs depend on each other. Based on your example, I would suggest that:
- Stage 1 does not depend on any other stage, because it runs first
- Stage 2 does not depend on any other stage, since stage 2 must still run even if stage 1 fails immediately
- Stage 3 depends on the results of stage 1 and stage 2
Thus, the corresponding pipeline could look like this:
```groovy
pipeline {
    stages {
        stage('Independent tasks') {
            parallel {
                stage('stage 1') {
                    steps {
                        sh 'exit 1' // failure
                    }
                }
                stage('stage 2') {
                    steps {
                        echo 'Runs even though stage 1 fails'
                        sh 'exit 0' // success
                    }
                }
            }
            post { // 'stage 3'
                failure {
                    echo "... at least one failed"
                }
                success {
                    echo "Success!"
                }
            }
        }
        stage('stage 4') {
            steps {
                echo 'Runs only if all previous stages succeed'
            }
        }
    }
}
```
Stage 1 and stage 2 always run; stage 3 reacts to their combined success or failure.
Additional thought: this concept only works "at the end" of your pipeline. If you need it somewhere in the middle, and the build should continue afterwards, you can move stages 1-3 into a separate job and trigger it with the `build` step:
```groovy
pipeline {
    stages {
        stage('Start own job for stages 1, 2, 3') {
            steps {
                build job: 'stageOneTwoThree', propagate: false, wait: true
            }
        }
        stage('stage 4') {
            steps {
                echo 'Runs always, because "propagate: false"'
            }
        }
    }
}
```
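Since `propagate: false` swallows the downstream job's failure, you may still want to know how it ended. With `wait: true`, the `build` step returns a run object whose `result` you can inspect; a minimal sketch (the job name `stageOneTwoThree` is taken from the example above, the variable name is arbitrary):

```groovy
stage('Start own job for stages 1, 2, 3') {
    steps {
        script {
            // build returns a RunWrapper when wait: true
            def downstream = build job: 'stageOneTwoThree', propagate: false, wait: true
            // result is e.g. 'SUCCESS', 'FAILURE', 'UNSTABLE'
            echo "Downstream finished with: ${downstream.result}"
        }
    }
}
```

This lets later stages branch on the downstream result without letting its failure abort the current build.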