Hi,
I have a theoretical question:
Let's say I have a very complex process that contains 60 nodes, and it cannot reach the end because of the 50-node limit.
Let's say the first 30 nodes can be executed in parallel with the remaining 30 nodes. Will the 50-node limit still apply?
(using an AND gateway for the two paths)
Thank you Mike. I just wanted to be sure, because some developers think they can just break a process into synchronous sub-processes to get around this problem ;-)
On the other hand, it *can* solve this problem to break chunks of processing off into asynchronous subprocesses. I've very rarely seen instances where all 50 nodes worth of processing must be executed synchronously, at least with more careful analysis of what's actually happening in the process flow.
Yes, I understood that using asynchronous subprocesses could help in such a context.
The guardrails are pretty clear: 50 chained nodes is your limit, after which the synchronous experience your user was expecting will come to an end and the next thing they'll see is either:
In general, any process model that reaches this size probably needs to be reviewed. You can achieve a lot with a combination of Expressions, SAIL interfaces, asynchronous patterns, and even "merging" nodes (e.g. by fetching data in the 'Input' tab and then post-processing it in the 'Output' tab before passing the result to a process variable), which gives you many opportunities to reduce the size to fewer than 50 nodes.
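To make the original question concrete, here is a small conceptual sketch in Python, since process models aren't code. It models the 60-node process as a directed graph and counts nodes along the longest single path. This is NOT Appian's actual chaining engine: the assumption that the limit is evaluated along one chained path (rather than over the total node count) is mine and should be checked against the product documentation, and the `longest_chain` helper and node names are hypothetical.

```python
# Conceptual sketch only: models a process as a DAG and counts the longest
# chained path of nodes. The counting semantics are an illustrative
# assumption, not documented Appian behavior.
from collections import defaultdict

def longest_chain(edges, start):
    """Number of nodes on the longest path from `start` in a DAG."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    memo = {}
    def depth(node):
        if node not in memo:
            memo[node] = 1 + max((depth(n) for n in graph[node]), default=0)
        return memo[node]
    return depth(start)

# A 60-node process: an AND gateway splits into two parallel
# branches of 30 nodes each, then joins before the end node.
edges = [("start", "split")]
for branch in ("a", "b"):
    prev = "split"
    for i in range(30):
        node = f"{branch}{i}"
        edges.append((prev, node))
        prev = node
    edges.append((prev, "join"))
edges.append(("join", "end"))

total_nodes = len({n for e in edges for n in e})  # 60 tasks + 4 control nodes
print(total_nodes)                    # total size of the model
print(longest_chain(edges, "start"))  # nodes on the longest single chain
```

Under this (assumed) per-chain counting, each parallel branch contributes a chain of about 34 nodes, comfortably under 50, even though the model as a whole has more than 60 nodes; if the engine instead counted every executed node, the same model would exceed the limit.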
Thank you Stewart for this further information.