QueryTimeoutException

Certified Lead Developer

Hi all,

We have a process model that fails at the Write to Data Store Entity node with an org.hibernate.QueryTimeoutException because the object to save is massive.

I believe the solution is to change the design, but can we change the timeout settings for the time being? It is happening in production and the customer wants a quick workaround.

Many thanks!

  • If you're on Appian Cloud, this cannot be adjusted. However, on-premise it may be possible.

    BUT... I highly recommend against adjusting system settings to compensate for bad design. Is the issue that you are attempting to save too many rows in a single transaction? If so, I recommend batching the writes at a more manageable level.

    If you provide more details about the use case, we can likely give you some better options.
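    As a rough sketch of the batching idea (the names and batch size below are hypothetical, and in a process model you would typically achieve this with Multiple Node Instances or a looping Write to Data Store Entity rather than Java), the shape of it is: split the full set of records into fixed-size chunks and write each chunk in its own call, so no single write is large enough to hit the query timeout.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    public class BatchWriter {

        // Hypothetical batch size; tune it so each write finishes well under the timeout.
        private static final int BATCH_SIZE = 500;

        // Splits the full list of records into chunks of at most batchSize items.
        static <T> List<List<T>> partition(List<T> records, int batchSize) {
            List<List<T>> batches = new ArrayList<>();
            for (int start = 0; start < records.size(); start += batchSize) {
                int end = Math.min(start + batchSize, records.size());
                batches.add(new ArrayList<>(records.subList(start, end)));
            }
            return batches;
        }

        // Writes each chunk separately, e.g. one Write to Data Store Entity per batch.
        static <T> void writeInBatches(List<T> records, Consumer<List<T>> saveBatch) {
            for (List<T> batch : partition(records, BATCH_SIZE)) {
                saveBatch.accept(batch);
            }
        }
    }
    ```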
  • Certified Lead Developer
    in reply to Colton Beck
    Thanks Colton. I am new to this customer and can't provide much info, but the object used is so big that I can't even copy 10% of it here. I imagine it will update multiple tables and rows. There is no way to change the transaction timeout in production for this, so I will speak to the customer about a redesign.
  • Is it a normal Write to Data Store node or a Write to Multiple Data Stores node? If the latter, you could split out the writes into separate calls (although this may not solve the underlying issue).

    As mentioned above, batching the writes is also a reasonable approach, but would require looping over a WTDS, which isn't ideal. If the process truly requires a massive amount of writing, it may be worth looking into a design that leverages a stored procedure to make the necessary data updates, as this would be much more performant, although you would make sacrifices with respect to maintainability and transparency.
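    Purely as an illustration of the stored-procedure idea (the procedure name, parameter, and connection details below are made up, and from Appian you would normally invoke it through a stored-procedure smart service or plug-in rather than raw JDBC), the database-side call boils down to something like this:

    ```java
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class BulkUpdateCaller {

        // Hypothetical connection details for illustration only.
        private static final String JDBC_URL = "jdbc:mysql://dbhost:3306/appian_data";

        // Delegates the heavy set-based updates to a stored procedure in the
        // database instead of pushing a massive object through a single
        // Write to Data Store Entity call.
        public static void runBulkUpdate(int caseId) throws SQLException {
            try (Connection conn = DriverManager.getConnection(JDBC_URL, "user", "password");
                 CallableStatement stmt = conn.prepareCall("{call sp_bulk_update_case(?)}")) {
                stmt.setInt(1, caseId);
                stmt.execute();
            }
        }
    }
    ```

    The trade-off mentioned above still applies: the logic moves out of Appian and into the database, which is faster for bulk updates but harder to maintain and trace.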