Hi All,

I need your advice on the two approaches below for handling a scenario.

Scenario: We have something like a Header/Detail table pair. Each header record can have around 100-200 detail records. The user inputs a Header ID; we need to fetch all of the detail records, run each one through some processing, and then update the database.

Approach 1: Use MNI (Multiple Node Instances), one instance per detail record, so that all detail records are processed simultaneously.

Approach 2: In a single sub-process, fetch all detail records and run each record through the business logic. Once every detail record has been processed, the CDT holding all of them is used for the detail DB update (a single update, but containing all records).

Which of these two approaches is better for this scenario with respect to performance and platform stability? FYI, this is not a batch process but an interactive one, where the header is keyed in by the user.
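
For concreteness, fetching the detail records for the keyed-in header would look something like the sketch below on the SAIL/expression side; the entity constant and the headerId field are hypothetical names for illustration only:

    /* Hedged sketch: entity constant and field name are hypothetical. */
    a!queryEntity(
      entity: cons!DETAIL_ENTITY,
      query: a!query(
        filter: a!queryFilter(field: "headerId", operator: "=", value: ri!headerId),
        pagingInfo: a!pagingInfo(startIndex: 1, batchSize: -1) /* fetch all 100-200 details */
      )
    )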

OriginalPostID-191410


  • george - It seems like all of the work is being done in the DB, so I'm not sure why the data needs to be brought into Appian at all. Can you write a smart service that offloads the work to the DB?
  • Thought that @jasonn769 had a good idea here and started looking at the Execute Stored Procedure plugin; however, its docs state NOT to use it for data manipulation, only for read-only purposes. From a performance perspective, a stored proc would definitely do this in the most efficient manner, but we would need to understand whether the data gets out of sync, since Appian is not aware of changes made in the DB. I don't have the answer, just wanted to highlight this...

    forum.appian.com/.../summary
  • @nicholasw It's not that the plugin 'Execute Stored Procedure' at https://forum.appian.com/suite/tempo/records/type/components/item/i8BCLGOdlMUpdGVqT-RV7oRg74uEGJO5C8ZhL3Yukyv6zRTk30LjvusuVlHbS9BJg/view/summary is for read-only purposes only. The plugin offers two pieces of functionality: one as a smart service and the other as a function. The smart service can read data and modify it as well, whereas the function offers a read-only capability that is especially useful on SAIL interfaces, where we can quickly query stored procedures (a sketch follows at the end of this post).

    @jasonn Is a smart service needed just to offload the work? I am worried that if we write a smart service for such a small piece of functionality, we might end up with many smart services over time, and then comes the maintenance and upgrade burden: if the processing changes, the smart service has to change accordingly. From my perspective, either a good combination of database objects such as procedures or views with an intelligently and efficiently designed Appian process, or just a stored procedure that completely takes care of the updates plus a simple Appian process, should do the work. Those are just my thoughts. Further, I believe we should opt for DB only, DB + Appian, or Appian only after seeing what is viable and efficient in each.
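
    To illustrate the function form mentioned above, here is a minimal sketch of a read-only call from a SAIL interface. A caveat: the parameter names below are my assumption about the plugin's signature rather than something I have verified, and GET_DETAILS_BY_HEADER plus the data source name are hypothetical; check the plugin page linked above for the real signature.

        /* Hedged sketch only: parameter names are ASSUMED, not confirmed
           against the plugin docs; procedure and data source names are
           hypothetical stand-ins. */
        load(
          local!details: executestoredprocedure(
            dataSourceName: "jdbc/BusinessDS",       /* hypothetical JNDI name */
            procedureName: "GET_DETAILS_BY_HEADER",  /* hypothetical procedure */
            inputs: { headerId: ri!headerId }
          ),
          /* ... render or process local!details in the interface ... */
          local!details
        )

    Because the function form is read-only, the actual detail updates would still go through the smart service version of the plugin (or through the process model).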
  • Many thanks for all of your inputs. After taking them into consideration, we have decided to use the Execute Stored Procedure plugin to perform all of the DB updates. That gives us some relief. It was a good learning experience for me, with lots of ideas being shared.
  • Certified Lead Developer
    I have to agree with the Stored Procedure route. Some things to consider:

    Approach 1: You're either going to perform 200 database operations one after the other or 200 database operations at once. The first takes ages; the second may drag down system performance overall. Either way you're consuming a significant number of connections from your connection pool. If you actually manage to run all 200 in parallel, you've consumed half of the absolute maximum cap of open connections; if two other users attempt the same thing, they won't be able to get connections. Your entire application could wind up starved of connections and unable to acquire new ones, making all writes and all queries fail application-wide; it would be effectively down. (I've experienced this.)

    Not only that, but you have 200 processes waiting to archive when you're done, and even after they complete, they take up processing and RAM on your execution engines while they idle.

    Approach 2: Now you need to do batch processing on your records, which you can do with a!forEach, which is several orders of magnitude faster than MNI (a process I switched from MNI to a!forEach went from 2-3 minutes to 40-50 milliseconds for 300 records; a minimal sketch follows at the end of this post). However, if you try to do the database update for all records in one go, it may take so long that it routinely times out and fails. And it will be difficult to troubleshoot with unreadably vast CDTs.

    Now you don't have 200 processes; you have one process roughly 200 times the size. It takes up less processing, but possibly even more RAM, idling on your execution engines after you're done.

    So it's either an army of 200 processes or a single process the size of an army of 200.

    Or use the database to manage the database, sparing your execution engines, connections, and RAM, getting it done faster and more securely, at the cost of being a little more difficult to maintain.
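
    To make the a!forEach point above concrete, here is a minimal sketch of the expression-side batch processing, with the caveat that the Detail CDT and its amount/status fields are hypothetical stand-ins for the real business logic:

        /* Transform the whole detail array in one expression evaluation
           instead of 200 MNI instances. CDT and field names are hypothetical. */
        a!forEach(
          items: pv!details,
          expression: a!update(
            data: fv!item,
            index: "status",
            value: if(fv!item.amount > 0, "PROCESSED", "SKIPPED")
          )
        )

    The resulting array can then be written back with a single Write to Data Store Entity node, or in a few batches of 50 or so to keep each transaction short, instead of one write per detail record.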