Can we use MNI configuration with the Start Process smart service? The advantage would be that each process can start on a different (or the same) engine based on load, so memory consumption should stay manageable.
We have a design where we need to start more than 1,000 process instances.
Will it allow iterating more than 1,000 times, at least in the case of the Start Process smart service, since the memory issue would be handled by spreading instances across different engines?
Like others have said, we need to understand more about your root use case in order to be able to answer this question. FWIW, this is a great example of an XY problem - you're asking about how to solve problem Y (running more than 1000 instances), but actually this is only one possible solution for your base problem X. By understanding your base problem, we can provide alternate solutions that hopefully wouldn't run up against product limitations.
Thanks for the quick reply on this case.
Here is the use case, since you're interested:
An external system triggers an Appian web API, which in turn triggers the process. The process performs some ETL work and then needs to update another external system with the enriched data.
When the process is triggered, the request payload contains an array of values. Each value must go through the ETL operation, and the process then needs to update the other system (B). Note: the integration from the process accepts one request per value, so if the array has 100 items, the process needs to make 100 integration calls.
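The per-value call pattern is what pushes you toward the 1,000-instance limit. One common way to reason about it (not Appian code, just an illustrative Python sketch; the `chunk` helper and the batch size of 1,000 are assumptions for illustration) is to split the payload array into batches so no single loop exceeds the cap:

```python
def chunk(values, size=1000):
    """Split a payload array into batches of at most `size` items."""
    return [values[i:i + size] for i in range(0, len(values), size)]

# A 2,500-item payload becomes three batches: 1000, 1000, 500
batches = chunk(list(range(2500)))
print([len(b) for b in batches])
```

Each batch could then be handed to its own process, keeping every individual MNI loop under the limit.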
And so I tried something like this:
I used a kind of recursive function (Java) to get around the limit on starting processes.
But this also has a serious effect on performance:
because we are starting the process from the web API, the web socket stays open until the loop has run 1,001 times.
Any feedback on this approach, or drawbacks I should watch out for?
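The connection-held-open problem comes from doing all the work synchronously inside the request handler. A Python sketch of the alternative (the handler, queue, and worker here are hypothetical, not Appian APIs) is to acknowledge the request immediately and let the heavy loop run in the background:

```python
import queue
import threading

work_q = queue.Queue()
results = []

def worker():
    """Background consumer: processes items after the request has returned."""
    while True:
        item = work_q.get()
        if item is None:       # sentinel: stop the worker
            break
        results.append(item * 2)   # stand-in for the per-value ETL step
        work_q.task_done()

t = threading.Thread(target=worker)
t.start()

def handle_request(values):
    """Enqueue the payload and return at once, instead of looping
    while the caller's connection stays open."""
    for v in values:
        work_q.put(v)
    return {"status": 202, "accepted": len(values)}

resp = handle_request(list(range(10)))
work_q.join()          # wait for background processing (for the demo only)
work_q.put(None)
t.join()
```

The request returns as soon as the items are queued; the 1,001-iteration loop never blocks the caller.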
Is there any way that the service would receive multiple rows to update instead of receiving a single row at a time? Or similarly, could the service that makes the initial request make multiple requests?
Also what kind of volume of data are you looking at? Is it something just over 1000, or 10k or 100k?
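The batching suggestion above can be sketched quickly in Python (the payload shapes are invented for illustration; whether the downstream service accepts a multi-row body is exactly the question being asked):

```python
import json

rows = [{"id": i, "value": f"row-{i}"} for i in range(5)]

# One-row-per-call pattern: N separate request bodies, N integration calls
single_payloads = [json.dumps(r) for r in rows]

# Bulk pattern: one request body carrying all rows, one integration call
bulk_payload = json.dumps({"rows": rows})

print(f"{len(single_payloads)} calls vs 1 bulk call")
```

If the target service can accept the bulk shape, the 100-calls-for-100-values problem (and the MNI limit with it) largely disappears.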
To be honest with you, doing heavy ETL work is not necessarily best suited for Appian. You can often do it in Appian, but other tools may be better set up to handle this.
In addition, there are some hacky workarounds that can run more than 1000 instances of a node (as mentioned by others in this thread), but personally I would be very hesitant to use them. Sure, if you use a workaround to run 2000 nodes across multiple processes, it's probably fine. But what happens if you scale more than you expected and suddenly you need to run 200k processes from a single root process? It's easy to get into scenarios where things don't scale, and it's hard to fix them.
I hope I'm not misunderstood: the moral of my story is that I KILLED my environment trying something like this. And it wasn't even as complex as ETL. I was just trying to create empty folders. I had extremely simple PMs, almost as simple as they CAN be made, and I ran out of RAM almost immediately.
Now, my whole goal was to make my environment suffer under the millions of folders I had created, to show the business that too many folders could be directly linked to poor performance. It suffered and died, but not before I got at least some data, and we got the buy-in we needed to prioritize the effort to clean out our folders.
The moral of my story is: DON'T DO anything like what I did. I have first-hand experience that this is exactly how you break Appian.
Really helpful stuff. Thank you so much.