Hi Everyone,
We have a requirement where the application writes 500 rows in parallel via the Start Process smart service (in a loop).
Now, for every write operation a record sync runs, which means the sync runs 500 times in the backend. This seems to be creating a connection pool issue, even though the connection pool size is 200.
Can someone suggest the best solution in this case?
Thanks in Advance!!
Could the 500 rows (or any subset of them) be written in one instance, versus looped?
Any reason why you need to do 500 writes vs just 1 write (and 1 sync with multiple ids)?
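For illustration, a minimal sketch in SAIL (the entity constant and variable names here are hypothetical, not from your setup): passing the whole array to a single Write to Data Store Entity call means the record sync fires once for the batch rather than once per row.

    /* Minimal sketch: one write, one sync. Names are hypothetical. */
    a!writeToDataStoreEntity(
      dataStoreEntity: cons!MY_UPLOAD_ENTITY,  /* hypothetical entity constant */
      valueToStore: local!allRows,             /* the full array of parsed rows */
      onSuccess: {},
      onError: {}
    )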
500 instances are running in parallel, where each instance writes one row to the DB.
Yes, that's what you said originally. And I'm asking if there's any way to restructure this, so that all 500 of those writes could be done in a single instance somehow. Because it sounds as if you're referring to 500 instances that are launched and write 100% simultaneously, which leads me to conclude that it might be possible, instead of launching 500 instances, to launch 1 instance with 500 rows (or 5 instances with 100 rows each, etc...).
It is an upload process where each row in an Excel file is processed separately in its own instance. So whenever an instance writes a row to the DB, a sync happens.
Gopu Hema said: each row in an Excel file is processed separately in its own instance
Is there a particular reason why each row requires a separate instance to be launched for it?
It is an upload process where each row in an Excel file gets processed. Unfortunately, we don't have a way to insert all those rows at once, because processing all the rows together takes too much time.
Gopu Hema said: we don't have a way to insert all those rows at once
Can you explain a bit more about the structure of this setup that requires each row to get its own... process instance, I assume? If that's correct, I don't understand why the process can't be passed multiple rows (or even all rows), rather than requiring each row to be processed in its own instance.
Each row of the Excel file needs to be parsed, and based on its value a few other attributes have to be updated. Processing the rows sequentially was taking a huge amount of time; by load balancing and processing in parallel, we were able to reduce the end-to-end process time by a factor of 10.
But the data is being written to an entity that has a synced record defined on it, so every write triggers a sync.
I'd suggest you could still potentially batch the processing into larger batches than 1 row each - maybe 10 rows per process instance? This would reduce the number of sync calls considerably, and probably not increase your processing time very much overall.
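For example, a minimal sketch (local variable names are hypothetical): chunk the parsed rows into batches of 10 in the expression feeding your Start Process loop, so each sub-process receives 10 rows instead of 1.

    /* Minimal sketch: split local!allRows into batches of 10, one per sub-process. */
    a!forEach(
      items: enumerate(ceiling(count(local!allRows) / 10)),   /* 0 .. batchCount-1 */
      expression: todatasubset(
        local!allRows,
        a!pagingInfo(startIndex: fv!item * 10 + 1, batchSize: 10)
      ).data
    )

Each item of the result is an array of up to 10 rows; todatasubset handles the final partial batch for you. Each sub-process then does one write (and triggers one sync) for its whole batch.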
An additional thing to consider would be to implement a timer at the end of each process that pauses it a few seconds. I've set up something before where each instance gets a random number of seconds between 1 and 30, and pauses that long, which gives the back-end engines enough time to process the different instances without accidentally trying to process every single one in one big lump, and thus getting hung up on itself.
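A minimal sketch of that random-delay expression (assuming your timer node accepts an expression; adjust for whatever time unit your node is configured in):

    /* Random whole number from 1 to 30: rand() returns a decimal in [0, 1). */
    floor(rand() * 30) + 1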