We're currently building a survey app that takes an Excel file as input, parses the data using the Excel-to-CDT plugin, stores the data in two data stores, and kicks off a subprocess for reporting. Here are the scenarios/issues we're encountering:
1. Each file contains 1000+ rows, and when we try to upload the file we get the following error: "The number of tasks per node for "Current data" would exceed the limit of 1000." Does this mean we can't upload more than 1000 records?
2. We then tried uploading another Excel file with only 600+ rows. It does upload, but it takes more than 30 minutes (for the "add new" feature), and when we update those same 600+ records using the same approach, it takes about an hour. If this is the case, how can we make it faster? This won't meet our requirement. Any suggestions?
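Outside of any Appian-specific fix, the general remedy for both symptoms is the same: don't create one task, subprocess, or database write per spreadsheet row. Group the parsed rows into fixed-size batches so the number of downstream work items stays well under any per-node limit and the writes happen in bulk. A minimal illustrative sketch (plain Python, hypothetical data, not Appian/SAIL code):

```python
# Sketch: batch 1000+ parsed rows into fixed-size chunks so that each
# chunk becomes ONE write/subprocess instead of one work item per row.

def chunk(rows, size):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

rows = [{"id": n} for n in range(1050)]   # stand-in for 1000+ parsed rows
batches = list(chunk(rows, 200))          # e.g. 200 rows per batched write
# 1050 rows -> 6 downstream work items instead of 1050 individual tasks
```

With batching, the 1000-tasks-per-node limit is never approached, and bulk inserts/updates are typically orders of magnitude faster than row-by-row processing.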

TIA....

OriginalPostID-63749


  • @Jessica, are you kicking off 1000+ subprocess nodes? I would definitely heed Siva's advice. The limits are put in place for a reason, and if you are hitting any of them, it is usually a very strong indicator that the design needs to be revamped. If you already save the records in the DB, why not just use a query rule with a paging grid to retrieve the data for reporting purposes?
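The paging-based retrieval suggested above can be sketched in plain Python (hypothetical function, not an actual Appian query rule; the 1-based `start_index` mirrors Appian's PagingInfo convention):

```python
# Sketch: fetch only the page of records the report currently displays,
# instead of loading or processing every stored record at once.

def get_page(records, start_index, batch_size):
    """Return one page of results plus the total count, mimicking a
    paging-aware query (start_index is 1-based)."""
    page = records[start_index - 1 : start_index - 1 + batch_size]
    return {"data": page, "totalCount": len(records)}

all_records = list(range(600))                     # stand-in for 600+ DB rows
page2 = get_page(all_records, start_index=26, batch_size=25)
# The grid asks for rows 26-50; the database does the heavy lifting.
```

The key design point is that the report never touches more than one page of data at a time, so its cost is independent of how many rows were uploaded.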
Reply