Connection Pool Issues

Certified Senior Developer

Hi Everyone,

 We have a requirement where the application writes 500 rows in parallel via the Start Process smart service (in a loop).

 

Now, for every write operation, a record sync runs in the background, which means the sync runs 500 times. This seems to be creating a connection pool issue, even though the connection pool size is 200.

 

Can someone suggest the best solution in this case?

Thanks in Advance!!


  • 0
    Certified Associate Developer
    in reply to Mike Schmitt

    Each row of the Excel file needs to be parsed, and based on its value a few other attributes have to be updated. When we process the rows sequentially it takes a huge amount of time; with load balancing and parallel processing we were able to reduce the end-to-end processing time by a factor of 10.

    But the data is being written to an entity that has a synced record defined on it, so each write triggers a sync.

  • 0
    Certified Lead Developer
    in reply to piyusha6151

    I'd suggest you could still potentially batch the processing into larger batches than 1 row each - maybe 10 rows per process instance?  This would reduce the number of sync calls considerably, and probably not increase your processing time very much overall.
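    To illustrate the batching idea, here is a minimal sketch (in Python, purely as pseudocode for the process design - the actual implementation would be an Appian expression feeding the Start Process loop). The `chunk` helper and the batch size of 10 are illustrative assumptions: splitting 500 rows into batches of 10 means 50 process instances and 50 sync triggers instead of 500.

    ```python
    def chunk(rows, batch_size=10):
        """Split a list of rows into consecutive batches of batch_size."""
        return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

    rows = list(range(500))        # stand-in for the 500 parsed Excel rows
    batches = chunk(rows, 10)
    # Launch one process instance per batch instead of one per row:
    # 50 launches (and 50 syncs) rather than 500.
    print(len(batches))            # -> 50
    ```

    The trade-off is latency per instance versus total sync pressure: each instance does 10x the work, but the database sees a tenth of the connection churn.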

    An additional thing to consider would be to implement a timer at the end of each process that pauses it a few seconds.  I've set up something before where each instance gets a random number of seconds between 1 and 30, and pauses that long, which gives the back-end engines enough time to process the different instances without accidentally trying to process every single one in one big lump, and thus getting hung up on itself.
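    The random-pause idea can be sketched like this (again Python standing in for what would be a timer node in the process model; `staggered_write`, `write_fn`, and the injectable `sleep` parameter are hypothetical names for illustration):

    ```python
    import random
    import time

    def staggered_write(write_fn, min_delay_s=1, max_delay_s=30, sleep=time.sleep):
        """Pause a random 1-30 seconds before writing, so parallel process
        instances hit the database (and trigger their record syncs) in a
        staggered fashion rather than all within a few milliseconds."""
        delay = random.randint(min_delay_s, max_delay_s)
        sleep(delay)   # in Appian, this is the timer node before the Write
        write_fn()
        return delay
    ```

    With 500 instances spread uniformly across a 30-second window, only a handful of syncs land in any given second, which gives connections time to open and close between calls.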

  • 0
    Certified Associate Developer
    in reply to Mike Schmitt

    But how will a timer help if the processes are running in parallel? The sync will still run with each write operation anyway.

  • 0
    Certified Lead Developer
    in reply to piyusha6151

    You add the delay timer prior to the Sync call (or, if you're just talking about the syncing that happens automatically after a Write, then before the Write node) in each process instance. That means that instead of 500 instances all hitting the write and sync within a few milliseconds of each other, they'll execute in a staggered fashion across the entire set, allowing time for connections to open and close between the different sync calls.