Execution Limit: 50 Nodes

Hi! Regarding the 50-node limit for activity chaining: does it count only active nodes, or all nodes on the entire canvas of a process model?


  • The limit refers to the number of nodes (including all node types and nodes within chained, synchronous subprocesses) between two chained-together User Input Tasks.

    Basically, if you design a process model with Task A followed by Task B and chain the flow between them, the user will step from A to B upon submission of A. If you add more than 50 nodes in between them (even something as benign as completely empty script task nodes), chaining would be expected to break, and Task B would land in the assigned-task queue of its assignee(s). The user submitting Task A would then experience the same thing they would any other time chaining breaks or a process ends: they'd be returned to wherever they originally accessed the process or task from (their task list, a related action, etc.).

  • Our problem, basically, is that we have a process model with 35 nodes that update a database. These nodes run in parallel with activity chaining, but one way or another the flow breaks and it doesn't return all the data correctly.

  • That design looks problematic even without the activity chaining issue...

    1. Why aren't you making use of the Write to Multiple DSE node?
    2. Why do you have to execute all of these in parallel instead of in some sort of sequence?
    3. Why do you need to have activity chaining try to go through that?
  • Hi, in response to your questions:

    1. We can't use the multiple-write node because each node calls a different table in Oracle.

    2. They are in parallel (maybe the image doesn't show it correctly).

    3. We need activity chaining because we call these processes through a web API (with a startProcess function in an API object). This web API is called from Postman, and when we don't use activity chaining, Postman doesn't return all the values of the process variables when the process has ended; it's like the flow breaks.
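
    For context, the web API does roughly the following - a simplified sketch, where the constant and parameter names are placeholders rather than our real ones:

        a!startProcess(
          processModel: cons!MY_PROCESS_MODEL,    /* placeholder constant */
          processParameters: {
            input: http!request.body              /* placeholder parameter */
          },
          onSuccess: a!httpResponse(
            statusCode: 200,
            headers: {
              a!httpHeader(name: "Content-Type", value: "application/json")
            },
            /* as far as we can tell, fv!processInfo.pv only contains
               whatever was written before the flow went asynchronous */
            body: a!toJson(fv!processInfo.pv)
          ),
          onError: a!httpResponse(statusCode: 500, body: "process failed to start")
        )

    That matches the behaviour we see: whenever chaining breaks, the values written after that point never make it into the response.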

  • The Write to Multiple Data Store Entities node supports multiple tables, as pointed out in the reply above. In fact, it's better than writing to the tables individually because the whole write is transactional - it all works or none of it does, which protects the integrity of your data.
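
    In expression form (usable from a saveInto or a web API), the call looks roughly like this - a minimal sketch, assuming made-up entity constants and locals, and assuming I have the parameter names right:

        a!writeToMultipleDataStoreEntities(
          valuesToStore: {
            /* each a!entityData pairs one entity (i.e. one Oracle table)
               with the rows to write - all in a single transaction */
            a!entityData(entity: cons!ORDERS_ENTITY, data: local!orders),
            a!entityData(entity: cons!CUSTOMERS_ENTITY, data: local!customers)
          },
          onSuccess: a!save(local!storedValues, fv!storedValues),
          onError: a!save(local!errorMessage, "write failed")
        )

    In a process model you'd use the equivalent smart service node and map one input per entity, which would replace all 35 individual write nodes with one.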

  • "each node calls a different table in Oracle"

    That should still be doable with the Write to Multiple DSE node (which is NOT the same as the regular Write to DSE node with "multiple" enabled, which seems to be what you're thinking of).

    "They are in parallel"

    I understand - but I was asking why. This sort of setup should almost never be necessary.

    "when we don't use activity chaining, Postman doesn't return all the values"

    I vaguely see what you mean here - but you might need to design around this, such as having Appian send an API call back to the source system after processing has been done.  However, if you're lucky you might be able to get around this by implementing the Write to Multiple DSE node as I mentioned before.

  • "I understand - but I was asking why. This sort of setup should almost never be necessary."

    Because we need to save execution time.

    About the nodes: they are query nodes, not write-to-entity nodes; that's why we needed to put them in parallel.

  • "About the nodes: they are query nodes, not write-to-entity nodes"

    My fault, I didn't recognize the query DB node icon; are you querying data with these or writing data? Your original text said they're "updating a database", which is why I mistakenly thought they were WTDS nodes. If they're just querying, I'd suggest using a!queryEntity instead if possible.
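
    If they are pure reads, each query node can usually be replaced by an expression like the following (a minimal sketch - the entity constant, field name, and filter value are invented for illustration):

        a!queryEntity(
          entity: cons!MY_ENTITY,
          query: a!query(
            filter: a!queryFilter(field: "status", operator: "=", value: "ACTIVE"),
            pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 100)
          )
        ).data

    Because a!queryEntity is an expression function, you can call it from an interface, a web API, or a single script task, instead of dedicating a process node to every table.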

  • If the tables are related you can do the heavy-lifting in the database itself by either:

    • creating a VIEW that covers all of the tables, so you'd end up with a single query from the process model OR
    • calling a Stored Procedure that makes all of the calls and returns the data in one hit

    In essence you need to try to minimise the number of calls made from Appian to the database.
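
    If your environment supports it, the stored procedure route can even skip the process node entirely (a sketch - availability depends on your Appian version, and the data source constant, procedure name, and input below are placeholders):

        a!localVariables(
          /* one round trip to Oracle instead of 35 separate query nodes */
          local!result: a!executeStoredProcedureForQuery(
            dataSource: cons!ORACLE_DATA_SOURCE,
            procedureName: "PKG_REPORTS.GET_ALL_DATA",
            inputs: {
              a!storedProcedureInput(name: "p_case_id", value: 123)
            }
          ),
          local!result
        )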

