Hi all,
We have a process that runs every night and makes calls to a slow Java plugin. I am wondering how we can improve the performance.
I thought of placing the Java plugin call in a subprocess and calling it with the Start Process smart service to make sure that there is load balancing in place. However, the Start Process smart service is asynchronous, and we need a list of the calls that returned an error.
I think load balancing could improve the performance. Is there a way to work around the fact that it is not possible to return parameters from the subprocess? Maybe we can write the success/error results into a table? Or maybe there is a better way to store the response messages?
Is there any way to make sure that there is load balancing across the Appian engines without using the Start Process smart service?
Changing the plugin is not an option at this moment but it could be explored later on.
Thanks a lot
So you need to call a slow Java plugin sequentially, multiple times. I do not see any way to make this go faster if parallel calls are not an option.
What is this plugin doing and why is it slow? What does slow mean? Does it perform extensive computations which just take time? Or is it badly implemented and wastes the time?
Hi Stefan. As far as I know, the plugin is refreshing data from one database to another for a large number of clients (~2,000), one by one (so one call per client). I suggested replacing it with an ETL tool, but the expectation here is to tweak the process model, if possible, to make it run a bit faster. A long-term solution will also be planned, but not for now.
OK. Assuming that you need to call one after the other and Appian is not the bottleneck, then there is not much you can do.
What performance numbers do you have? If it spends 90% of the time waiting for the database, then optimizing the remaining 10% will not help that much.
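To get those numbers, a minimal sketch of timing each per-client call (the real plugin call is not available here, so a 50 ms sleep stands in for it; `refreshClient` and `timedRefreshMs` are hypothetical names for illustration):

```java
public class CallTimer {
    // Stand-in for the plugin's per-client refresh; the 50 ms sleep is an
    // assumption simulating database work, not the real plugin behavior.
    static void refreshClient(int clientId) {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Times one per-client call; logging these values for all ~2,000 clients
    // shows whether the hours are spent waiting on the database or elsewhere.
    static long timedRefreshMs(int clientId) {
        long t0 = System.nanoTime();
        refreshClient(clientId);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) {
        for (int id = 1; id <= 3; id++) {
            System.out.println("client " + id + " took " + timedRefreshMs(id) + " ms");
        }
    }
}
```

Summing these per-call durations against the total wall-clock time of the nightly run would show whether the database calls really dominate.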
What's the impact of this slow call? Being slow on its own may not be a huge problem if it runs overnight and there is little usage on the system at that time. Is it impacting users? Or is it taking so long that it completes during business hours?
That being said, there are other ways to get information from asynchronous sub-processes. For instance, each request could write to a database table whether it succeeded or failed. Then, the original process could wait a while and then query the table to check which ones failed. Also, you could utilize something like process-to-process messaging to send a message back to the original process if your sub-process failed.
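A minimal sketch of that table-based approach, using an in-memory map as a stand-in for the database table (in Appian each sub-process would instead write a status row, e.g. via a Write to Data Store Entity node, and the parent would query it; `refreshClient` and the even-client failure rule are assumptions purely for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RefreshStatusDemo {
    // Stand-in for the status table: clientId -> outcome of that client's call.
    static final ConcurrentHashMap<Integer, String> status = new ConcurrentHashMap<>();

    // Hypothetical per-client refresh: each call records its own success or
    // failure. Here, even-numbered clients are made to fail for demonstration.
    static void refreshClient(int clientId) {
        if (clientId % 2 == 0) {
            status.put(clientId, "FAILED: simulated timeout");
        } else {
            status.put(clientId, "OK");
        }
    }

    // The parent process "queries the table" after all calls have finished
    // and collects the clients whose calls returned an error.
    static List<Integer> failedClients() {
        List<Integer> failed = new ArrayList<>();
        for (Map.Entry<Integer, String> e : status.entrySet()) {
            if (e.getValue().startsWith("FAILED")) {
                failed.add(e.getKey());
            }
        }
        Collections.sort(failed);
        return failed;
    }

    public static void main(String[] args) {
        for (int id = 1; id <= 6; id++) {
            refreshClient(id);
        }
        System.out.println("Failed clients: " + failedClients()); // [2, 4, 6]
    }
}
```

The same shape works with a real table: each sub-process inserts one row keyed by client ID, and the parent runs a query filtered on the failure status once the batch is done.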
The performance is very poor: it takes several hours to complete, and the calls are executed sequentially.
Hi Peter, using a table with results or process-to-process messages is what I have in mind. Thanks for sharing your thoughts here.