Hi All, I need your advice on the approaches below for handling a scenario
georgej
over 8 years ago
Hi All,
I need your advice on the approaches below for handling a scenario.
Scenario: We have something like Header/Detail tables. Each header record can contain around 100-200 detail records. The user inputs the Header ID; we need to fetch all the detail records, process each one of them, and do a DB update.
Approach 1: Can we go for MNI (Multiple Node Instances), one instance per detail record, processing all detail records simultaneously?
Approach 2: In a single sub-process, get all the detail records and subject each record to some business logic. Once all detail records have been processed, the CDT which holds all these detail records is used for the detail DB update (a single update, but containing all records).
Which of the above two approaches is better for handling this scenario with respect to performance and platform stability? FYI, this is not a batch process but an interactive process where the header is keyed in by the user.
OriginalPostID-191410
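[Editor's note: for readers skimming the thread, here is a minimal sketch of the two approaches being compared. It is Python, not Appian code, and fetch_details and apply_business_logic are hypothetical stubs standing in for the actual queries and smart services.]

# A minimal sketch of the two approaches; the stubs are hypothetical.

def fetch_details(header_id):
    # Stub: stands in for a query against the detail table.
    return [{"id": i, "header_id": header_id, "status": "NEW"} for i in range(200)]

def apply_business_logic(detail):
    # Stub: stands in for the per-record business processing.
    return {**detail, "status": "PROCESSED"}

def approach_1_mni(header_id):
    """One flow per detail record; each instance does its own DB write,
    so 100-200 separate updates hit the database."""
    for detail in fetch_details(header_id):
        processed = apply_business_logic(detail)
        print(f"UPDATE detail SET status = 'PROCESSED' WHERE id = {processed['id']}")

def approach_2_single_update(header_id):
    """Process every record inside one sub-process, then write once."""
    processed = [apply_business_logic(d) for d in fetch_details(header_id)]
    print(f"single bulk update carrying {len(processed)} rows")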
sikhivahans
over 8 years ago
@georgej I would like to comment and add a few tweaks to the approaches you specified:
Approach 1: If by 'simultaneously processing' you mean launching all 100-200 processes at the same time (one process per detail record, so 200 processes for 200 detail records in your example), then I would suggest refraining from doing so. Since you say each detail record must be updated at the end, there is a real danger of triggering 200 database update operations at once, which could exhaust the connection pool. You also say each record needs processing; we can't judge the complexity of that processing from this statement alone, but I would suggest measuring it by checking the node execution times and the space occupied. Bear in mind that complex processing triggered 200 times in parallel (i.e. once per detail record) can drag performance down drastically, and this is visible to the end user, who will experience severe slowness until those processes complete. If the processing of each record is quite simple and the CDT (table) holds little data, Appian should handle it quite comfortably. But any increase in the complexity of the per-record processing, or in the amount of data each detail record holds, causes severe performance bottlenecks and eventually forces significant design changes. Sometimes the time required to redesign can equal (or even exceed) the time originally spent building the use case.
I can suggest a few tweaks to this approach:
1. If the process is designed to run each instance one by one, it won't cause any issue. But make sure the total time taken doesn't affect your requirements (obviously 200 one-by-one instances take considerable time). See the sketch after this list.
2. If possible, try to initiate the processes by making use of messaging. You can find some notes on this under 'Uneven Process Instance Distribution' at https://forum.appian.com/suite/help/16.1/Appian_Health_Check.html.
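[Editor's note: a rough Python sketch of the 'one by one' idea from tweak 1. process_one_detail and the 120-second budget are hypothetical, not from the thread.]

import time

def process_one_detail(detail):
    # Stub: stands in for one sub-process instance handling one record,
    # including its own DB update.
    return {**detail, "status": "PROCESSED"}

def run_one_by_one(details, budget_seconds=120.0):
    """Run the instances strictly sequentially, so only one DB connection
    is in use at any moment; fail loudly if the interactive time budget
    is blown."""
    start = time.monotonic()
    for detail in details:
        process_one_detail(detail)
        if time.monotonic() - start > budget_seconds:
            raise TimeoutError("one-by-one run no longer fits an interactive flow")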
Approach 2: I would suggest not processing all the records in one go.
I can suggest one tweak to this approach:
1. Make sure batching is implemented. That is, say you need to process X records: always verify that Appian can handle them before committing to a design. If it can, fine; if not, split X into two halves and try again, continuing until Appian handles the load comfortably. For instance, suppose you have written a complex rule that operates on any number of detail records, you have put that expression rule in a script task, and you feed it 200 detail records. If the script task takes 10 minutes because of the amount of data the expression rule is handling, Appian does not cope well with that kind of long-running processing. Instead, fix your batch size at, say, 100, where each batch takes just 1 minute (please note that I am just applying general logic). So the only tweak I would suggest for your Approach 2 is batching, as sketched below. And needless to say, the batches should also run one by one, not simultaneously.
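[Editor's note: a minimal Python sketch of the batching tweak, including the 'split X into two halves' tuning heuristic. process_batch, the fake per-record cost, and the 60-second budget are hypothetical stand-ins, not from the thread.]

import time

def process_batch(batch):
    # Stub: stands in for the script task running the expression rule
    # over one batch; cost here is faked as proportional to batch size.
    time.sleep(0.001 * len(batch))
    return [{**d, "status": "PROCESSED"} for d in batch]

def find_comfortable_batch_size(details, budget_seconds=60.0):
    """Halve the candidate batch size until a single batch finishes
    within the budget, mirroring the 'split into two halves' advice."""
    size = len(details)
    while size > 1:
        start = time.monotonic()
        process_batch(details[:size])
        if time.monotonic() - start <= budget_seconds:
            break
        size //= 2
    return size

def run_in_batches(details, size):
    """Run the batches one by one (never simultaneously), with one
    bulk DB update per batch rather than one per record."""
    for i in range(0, len(details), size):
        processed = process_batch(details[i:i + size])
        print(f"bulk update of {len(processed)} rows")

# Example: 200 detail records, processed in tuned batches.
records = [{"id": i, "status": "NEW"} for i in range(200)]
run_in_batches(records, find_comfortable_batch_size(records))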