Node execution limit

I need to execute a node consistently, but I get the error "The number of tasks per node would exceed the limit of 1000". Could someone help me with it?

Here are the config details:


  • True, this limit can be changed but I'd recommend that you design to work within this constraint. It's been set for a reason and increasing it might just mean you'll have to increase it yet again in the future, whereas designing to meet this constraint will be a permanent solution.
  • I suspect the number of records in the PV 'sameAccounts' is greater than 1000. Appian recommends a maximum of 1000 instances per node, so that is the default value. If you want to change this number, please check this link: docs.appian.com/.../Post-Install_Configurations.html.
  • There is a limit to the number of times a node can be executed; by default this is set to 1000. When using the MNI option (as you are here), Appian examines the run-time size of the array and, as in your case, if the array contains more than 1000 instances it will not even attempt to execute the node. You have two choices here:
    Option 1. You can write your own loop in the process, keeping track of the count of each loop by incrementing an integer pv! and using that value as an index to a) address the correct entry in the array and b) know when to break out of the loop. Appian will STILL STOP THE EXECUTION after 1000 instances UNLESS you check the 'Delete previously completed/cancelled instances' option (near the bottom of the same screen that you included a screenshot of). See the first sketch after this reply.
    Option 2. You "chunk" the array into groups of fewer than 1000 items each. That is, create a parent process that takes the original array, breaks it into smaller arrays of, say, 500 items, and then passes each of those to your existing model. See the second sketch after this reply.

    Either way, you need to be aware that if the array is huge you may drive up memory consumption by processing the data in parallel, while if you choose to serialise the processing (as in Option 1) it will take longer because it works on one item at a time. Neither option is "right" or "wrong" per se; both have implications, and you need to make the optimal choice based upon those implications.
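
    For illustration, here is a minimal sketch of the expressions Option 1 would typically involve. pv!index is a hypothetical Integer pv initialised to 1; pv!sameAccounts is the array from the original post.

    ```
    /* Node input: pick the current item out of the array (Appian indices are 1-based) */
    index(pv!sameAccounts, pv!index, null)

    /* Node output (or a following script task): advance the counter */
    pv!index + 1

    /* XOR gateway condition: loop back to the node while this is true, otherwise exit */
    pv!index <= length(pv!sameAccounts)
    ```

    The gateway simply routes the flow back to the node while items remain; as noted above, the 'Delete previously completed/cancelled instances' setting still needs to be checked for the loop to run past 1000 iterations.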
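
    And here is a sketch of one way to do the chunking for Option 2 in an expression rule. The rule name and its inputs (ri!items, ri!batchSize) are assumptions, and each batch is wrapped in a!map() because Appian flattens nested lists.

    ```
    /* Hypothetical rule rule!SplitIntoBatches(items, batchSize): returns a list of maps,
       each holding one batch of at most ri!batchSize consecutive items */
    a!forEach(
      items: enumerate(ceiling(length(ri!items) / ri!batchSize)),
      expression: a!map(
        batch: index(
          ri!items,
          /* 1-based indices for this batch, e.g. with a batch size of 500: 1..500, 501..1000, ... */
          (fv!item * ri!batchSize) + 1 + enumerate(
            min(ri!batchSize, length(ri!items) - (fv!item * ri!batchSize))
          ),
          {}
        )
      )
    )
    ```

    The parent process can then run your existing model once per entry of this outer list (which will contain far fewer than 1000 elements), passing each entry's batch field as the sub-process input.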