API‑triggered Process Model using Write to Data Store (MySQL) takes 90+ seconds for ~25 inserts into a single table

Hello Community,

I am facing a performance issue with an API‑triggered Process Model that writes data to a MySQL database using the Write to Data Store Entity smart service.

Scenario

  • An external system invokes an Appian Web API.
  • The API starts a Process Model.
  • The Process Model creates a case by inserting ~25+ rows into a single database table.
  • Inserts are done using Write to Data Store Entity (CDT‑based).
  • The API response is sent only after the Process Model completes.

Issue

  • Total API response time is 90+ seconds.
  • The process completes successfully and returns "Successful", but the latency is too high for synchronous API calls.
  • Since all records are inserted into one table, this delay seems unexpected.

Current Design

  • API → Process Model
  • Multiple Write to Data Store Entity executions (likely one per record / loop)
  • Inserts are mostly sequential
  • Backend database: MySQL

Questions

  1. Is it expected for Write to Data Store Entity to take this long when inserting ~25 rows into a single table?
  2. Does Appian open a separate transaction / commit for each Write to Data Store execution?
  3. Would performance improve by:
    • Passing all 25 records at once in a single Write to Data Store node?
    • Using a database stored procedure for bulk inserts?
  4. In API scenarios, is it recommended to:
    • Immediately respond to the API
    • And continue DB inserts asynchronously in the background?


Reply
  • 90s for 25 rows usually means the loop + repeated reads/updates are the problem, not the fact that it is one table.

    Write to Data Store Entity does not batch across calls, so if you call it once per row Appian pays the node-execution and commit overhead each time. I would test three things first:

    1. Pass the full CDT array into one Write to Data Store Entity node instead of looping.
    2. Time the insert path separately from the reads/updates, since you mentioned the same table is being retrieved, inserted, and updated multiple times in the same process model.
    3. If the caller does not need the DB work finished before the response, start the process with a!startProcess() (which does not wait for the process to complete), return the API response immediately, and let the writes finish asynchronously in the background.
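As a rough sketch of (1) and (3) in a Web API expression — the entity constant, process model constant, and CDT type name here are illustrative, not from your app:

```
a!localVariables(
  /* Assumed: request body is a JSON array matching your case CDT */
  local!records: a!fromJson(http!request.body),

  /* Option (1): one write, one transaction, whole array */
  a!writeToDataStoreEntity(
    dataStoreEntity: cons!CASE_ENTITY,      /* illustrative constant */
    valueToStore: local!records,            /* full array in a single call */
    onSuccess: a!httpResponse(statusCode: 200, body: "Created"),
    onError: a!httpResponse(statusCode: 500, body: "Write failed")
  )

  /* Option (3), as an alternative: hand off to the process and
     respond immediately instead of waiting for the inserts:

  a!startProcess(
    processModel: cons!CASE_PROCESS_MODEL,  /* illustrative constant */
    processParameters: { records: local!records },
    onSuccess: a!httpResponse(statusCode: 202, body: "Accepted"),
    onError: a!httpResponse(statusCode: 500, body: "Could not start process")
  )
  */
)
```

With the a!startProcess() variant the caller gets a 202 in well under a second and the process does the DB work on its own time, which is usually the right trade for synchronous API latency complaints.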

    If the one-array write is still slow after that, I would move the insert-heavy part to the Execute Stored Procedure smart service. That is usually the next step when network round trips or per-node transaction overhead are the bottleneck.
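For the stored-procedure route, a minimal sketch of what the MySQL side could look like — this assumes MySQL 8.0+ (for JSON_TABLE), and the procedure, table, and column names are placeholders to adapt to the real schema:

```
-- Illustrative only: accept the whole batch as one JSON array parameter
-- and insert it in a single statement / single transaction.
DELIMITER //
CREATE PROCEDURE insert_case_rows(IN p_rows JSON)
BEGIN
  INSERT INTO case_detail (case_id, field_name, field_value)
  SELECT jt.case_id, jt.field_name, jt.field_value
  FROM JSON_TABLE(
         p_rows, '$[*]'
         COLUMNS (
           case_id     INT          PATH '$.caseId',
           field_name  VARCHAR(100) PATH '$.fieldName',
           field_value VARCHAR(255) PATH '$.fieldValue'
         )
       ) AS jt;
END //
DELIMITER ;
```

The process model then makes one Execute Stored Procedure call with the serialized batch instead of 25 separate write nodes, so there is exactly one round trip and one commit regardless of row count.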
