Hi All,

I need your advice on the two approaches below for handling a scenario.

Scenario: We have something like Header/Detail tables. Each header record can contain around 100-200 detail records. The user inputs the Header ID; we need to get all the detail records, process each one of them, and do a DB update.
Approach 1: Use MNI, one instance per detail record, so all detail records are processed simultaneously.
Approach 2: In a single sub-process, get all detail records and run each record through the business logic. Once all detail records have been processed, the CDT that holds them is used for the detail DB update (a single update, but containing all records).
Which of the two approaches above is best for handling this scenario with respect to performance and stability of the platform? FYI, this is not a batch process but an interactive process where the header is keyed in by the user.

OriginalPostID-191410

    Certified Lead Developer
    I have to agree with the Stored Procedure route. Some things to consider:

    Approach 1: You're either going to perform 200 database operations one after the other or 200 database operations at once. One takes ages and the other may drag down system performance overall. Either way you're using a significant number of connections from your connection pool. If you actually manage to run all 200 simultaneously, you've consumed half of the absolute maximum cap of open connections; if 2 other users attempt to do the same thing, they won't be able to. Your entire application could wind up deadlocked and unable to get any connections, making all writes and all queries fail application-wide; it would be effectively down. (I've experienced this.)

    Not only that, but you have 200 processes waiting to archive when you're done, and even after they're complete they sit idle, taking up processing and RAM on your execution engines.

    Approach 2: Now you need to do batch processing on your records, which you can do with a!forEach, which is actually several orders of magnitude faster than MNI (a process I switched from MNI to a!forEach went from 2-3 minutes to 40 or 50 milliseconds for 300 records). However, when you try to do the database update for all records at once, it can take so long that it routinely times out and fails. And it will be difficult to troubleshoot with the unreadably vast CDTs.
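
    As a rough illustration only (the field names detailStatus and amount are invented here, and this assumes a!update() is available in your Appian version), the a!forEach transformation could look something like this, producing one updated array that a single Write to Data Store Entity node then persists:

    a!forEach(
      items: pv!detailRecords,
      /* Apply the business logic to each detail record and return the updated copy */
      expression: a!update(
        data: fv!item,
        index: "detailStatus",
        value: if(fv!item.amount > 1000, "REVIEW", "APPROVED")
      )
    )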

    Now you don't have 200 processes; you have 1 process approximately 200 times the size, and it takes up less processing but possibly even more RAM idling on your execution engines after you're done.

    So it's either an army of 200 processes, or one process the size of an army of 200 of them.

    Or use the database to manage the database, sparing your execution engines, connections, and RAM, and get it done faster and more securely, at the cost of being a little more difficult to maintain.
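
    For comparison, here's a rough sketch of the stored-procedure route (table and column names are invented for illustration; your schema and business logic will differ). All of a header's detail rows are processed in one set-based statement inside the database, and the process model only makes a single call to it, e.g. via an Execute Stored Procedure smart service:

    -- Hypothetical procedure: process every detail row for one header in one set-based update.
    CREATE PROCEDURE process_header_details(IN p_header_id INT)
    BEGIN
        UPDATE detail_record
        SET    status       = 'PROCESSED',
               processed_on = CURRENT_TIMESTAMP
        WHERE  header_id = p_header_id;
    END;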