Node execution limit

I need to execute a node repeatedly, but I get the error "The number of tasks per node would exceed the limit of 1000". Could someone help me with it?

Here are the config details:


  • There is a limit to the number of times a node can be executed; by default it is set to 1000. When using the MNI option (as you are here), Appian examines the run-time size of the array and, as in your case, if the array contains > 1000 instances it will not even attempt to execute the node. You have two choices here:
    Option 1. You can write your own loop in the process: keep count of each iteration by incrementing an integer pv!, and use that value as an index to a) address the correct entry in the array and b) know when to break out of the loop. Appian will STILL STOP THE EXECUTION after 1000 instances UNLESS you check the 'Delete previously completed/cancelled instances' option (near the bottom of the same screen that you have included a screenshot of).
    Option 2. You "chunk" the array into groups containing < 1000 items each. That is, create a parent process that takes the original array, breaks it into smaller arrays of, say, 500 items, and then passes each of those to your existing model.

    Either way, be aware that if the array is huge you may drive up memory consumption if you process the data in parallel. If you choose to serialise the processing (as in Option 1), the processing will take longer because it works on one item at a time. Neither option is "right" or "wrong" per se; they both have implications, and you need to make the optimal choice based upon those implications.
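    A minimal sketch of Option 2's chunking step as an Appian expression, assuming a pv!sourceArray and a chunk size of 500 (both names/values are illustrative, not taken from the original model):

    ```
    /* Script task output: split pv!sourceArray (assumed name) into      */
    /* sub-arrays of at most 500 items; the resulting list of chunks     */
    /* can then be fed to the sub-process node, one chunk per instance.  */
    a!forEach(
      items: enumerate(ceiling(length(pv!sourceArray) / 500)),
      expression: index(
        pv!sourceArray,
        /* 1-based indices for this chunk: 1..500, then 501..1000, ... */
        fv!item * 500 + enumerate(
          min(500, length(pv!sourceArray) - fv!item * 500)
        ) + 1,
        {}
      )
    )
    ```

    The min() guard keeps the last chunk from indexing past the end of the array when the total count is not a multiple of 500.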
  • I suspect the number of records in PV 'sameAccounts' is greater than 1000. Appian recommends a maximum of 1000 instances per node, and that is the default value. If you want to change this number, please check this link: docs.appian.com/.../Post-Install_Configurations.html.
  • True, this limit can be changed but I'd recommend that you design to work within this constraint. It's been set for a reason and increasing it might just mean you'll have to increase it yet again in the future, whereas designing to meet this constraint will be a permanent solution.
  • Certified Lead Developer
    My best advice to you is to discover the joy that is a!forEach and avoid using MNI at all costs. I reduced the running time of an MNI over 300 records from several minutes to 45 milliseconds by using a!forEach. Do whatever you can to have a script task run once and process all of your records in a looping function, rather than looping a script task.

    This is not to say that MNI isn't useful. You can do great things like split your records into 1000-item chunks that are each run through an a!forEach loop in parallel using MNI; that sounds like a good use case. If you have a very complicated series of operations, you may need to run an entire process model on each item: OK, MNI the subprocess node. But if you can avoid it, you should. Whatever you can move into a!forEach will greatly improve your performance. If you're even reaching the 1000 limit, that's a sign of a serious flaw in the design.
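    To make the advice above concrete, here is a hedged sketch of a single script task that processes every record in one expression (pv!records and the "status" field are invented for the example, not from the original process):

    ```
    /* One script task run: update every row of pv!records at once, */
    /* instead of executing the node once per row via MNI.          */
    a!forEach(
      items: pv!records,
      expression: a!update(
        data: fv!item,
        index: "status",
        value: "PROCESSED"
      )
    )
    ```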
  • Certified Lead Developer

    As per my understanding, it's worth having such limitations OOTB. Otherwise, imagine a variable holding 10,000 or more items with MNI applied over it: this would slow down the whole environment and impact the performance of other applications and their objects as well.

    Hence I believe it's better to configure a manual batch that executes MNI for only 100 items at a time, then loop the process flow back to a timer that holds the process for a couple of minutes (the exact pause can be tuned based on the number of applications/objects and the performance of the server), and then process the MNI for the next set of 100 items.

    Also, try defining a constant of type Number (Integer) and marking it as environment specific, so that you can set the MNI batch limit dynamically (this can vary from one environment to another, because one environment's system configuration can differ from another's).

    In this way, you will be able to complete your job without impacting the server performance.

    Hope this will help.
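    A rough sketch of the batch-selection expression described above, assuming a pv!allItems array, a 0-based pv!batchIndex counter, and a constant cons!MNI_BATCH_SIZE (all names are hypothetical):

    ```
    /* Pick the next batch of cons!MNI_BATCH_SIZE items for the MNI  */
    /* node; a timer then pauses the flow before pv!batchIndex is    */
    /* incremented and the loop repeats for the next batch.          */
    index(
      pv!allItems,
      pv!batchIndex * cons!MNI_BATCH_SIZE + enumerate(
        min(
          cons!MNI_BATCH_SIZE,
          length(pv!allItems) - pv!batchIndex * cons!MNI_BATCH_SIZE
        )
      ) + 1,
      {}
    )
    ```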

  • Certified Lead Developer
    MNI with batching is a possible solution; however, you wind up with MNI squared. While none of your other users might notice, the user waiting for 10,000 records to be processed will think the system is broken, waiting through 100 sets of 100 loops. You'll also have to configure something to alert the user minutes later when the processing is finally done.

    Worse still, far be it from anyone to have more than 1,000,000 records, but you'll have to do MNI cubed if anyone ever does. At 900,000 records you're going to have to implement it.

    Or just don't. Query all the records you want (several million if need be) in 1 script task and process them all in a single a!forEach in 1 script task, at several thousand times the pace.

    Again, I got a process that looped through 300 records using MNI from 3 or 4 minutes down to 40 milliseconds by switching to one a!forEach loop. If it's at all possible, switch to a!forEach.
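    As an illustration of the single-script-task pattern above (the entity constant, batch size, and "processed" field are assumptions, not from the original process):

    ```
    /* Query the rows in one go and tag them all in a single */
    /* a!forEach, rather than MNI-looping a node per record. */
    a!forEach(
      items: a!queryEntity(
        entity: cons!MY_ENTITY,   /* hypothetical data store entity */
        query: a!query(
          pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 5000)
        )
      ).data,
      expression: a!update(data: fv!item, index: "processed", value: true)
    )
    ```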
  • Certified Lead Developer
    In these cases I would recommend performing these operations asynchronously, by configuring this node inside a sub-process, because this could be a heavy operation and the user cannot wait that long.

    Also, it depends on what comes after the completion of this activity: a task assignment, or does the process simply terminate after processing the loop?

    If the next set of steps involves task assignment, then yes, the user can't wait that long and may suspect that something went wrong behind the scenes; but if the process terminates after this MNI, then I think either way should be fine (including performing the job asynchronously).

    Also, as per the recommended approach in terms of performance, yes, you should opt for a!forEach() over MNI wherever applicable.

    But just because you are using a!forEach() doesn't mean you should process the complete loop at once (if the data set is very large), because this may also slow down your server.

    I believe one of the most important factors for looping performance is the operation you are trying to perform: is it a simple data manipulation, or does it interact with the DB?

    But of course, a!forEach() performs better than MNI, wherever applicable.
  • Certified Lead Developer
    in reply to aloks0189
    You bring up a very good point. MNI is good for breaking an a!forEach up into manageable chunks. If you try to break up an a!forEach inside another a!forEach, the parent will keep running until all of its children are done. The execution engine may be stuck on that, without any option to do other things until the job is done, and you can cripple your engine.

    So MNI would be very good because each instance can actually reach a finished state, allowing a short job through before the full set of data goes through a!forEach. Anything longer than 10 or 20 seconds is probably going to start impacting other users unless you can break it up with MNI.

    You'll have to benchmark to see which is the most reasonable for your case.
  • Certified Associate Developer
    in reply to Stewart Burchell

    Hi Stewart, could you please elaborate more on option 1?

    Where do I write the for loop? As an expression in the MNI config window, or as a script task node?

    Thank you,