Performance of a read-only grid using a!recordData for the 'data' parameter

Certified Senior Developer

Hi Community,

I am using a!recorddata for my readonly grid. record is API backed which has 3.5 million rows of meta data (just 4 columns). I am using pagesize as 50. I have provided filters in UI for users to narrow down their search. Since currently I am only using sample size data , Should I expect any performance issues once 3.5 million data is synced into records?

I didn't find any recipes for a!recordData with pageSize, so I would also like to understand how the read-only grid interprets and handles pagination just from pageSize, without any limit on the total number of records.
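
For reference, this is roughly how the grid is wired up (the record type and field names below are placeholders for illustration):

    a!gridField(
      label: "S3 Object Metadata",
      /* hypothetical record type standing in for my API-backed record */
      data: a!recordData(recordType: recordType!S3ObjectMetadata),
      columns: {
        a!gridColumn(
          label: "Key",
          value: fv!row[recordType!S3ObjectMetadata.fields.objectKey]
        ),
        a!gridColumn(
          label: "Size (bytes)",
          value: fv!row[recordType!S3ObjectMetadata.fields.sizeBytes]
        )
      },
      /* the grid requests only this many rows per page */
      pageSize: 50
    )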


  • 0
    Certified Lead Developer

    AFAIK, yes, expect performance issues. Syncing 3.5M rows will be slow and may time out during the initial and daily syncs. While pageSize: 50 gives you automatic pagination, it won't solve the core problem: Appian still queries across all 3.5M synced rows in its database, so even paginated requests get slower.
    To pass performance testing, use real-time API mode (no sync), where your API returns only 50 rows per request.
    If you must sync, performance will likely fail without optimizations such as source filters or the "Keep data available at high volumes" option.
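
    If you do keep the sync, at least push the users' UI filters into a!recordData so every page request is narrowed in the database. A rough sketch, with hypothetical record type and field names:

        a!recordData(
          recordType: recordType!S3ObjectMetadata,
          filters: a!queryFilter(
            field: recordType!S3ObjectMetadata.fields.objectKey,
            operator: "includes",
            /* local!searchText would be bound to a search box in the UI */
            value: local!searchText,
            /* skip the filter entirely when the user hasn't typed anything */
            applyWhen: not(isnull(local!searchText))
          )
        )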

  • +1
    Certified Senior Developer
    in reply to Shubham Aware

    What if I don't configure 'Scheduled full syncs'? It would sync only once, initially, when deployed and triggered, and cache all 3.5 million rows. The read-only grid would then load only 50 items per page, as per the pageSize parameter, on the initial load of the interface, right? I believed pageSize would do the magic when a!recordData is used with a read-only grid: when the user clicks to the next page, the next batch of 50 is queried and loaded into the interface, which should not be affected by the number of rows sitting behind it in the record unless I apply filters to narrow things down. My mental model is sketched below.
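
    That is, I assume each page click is roughly equivalent to a batched query like the following (record type name is again a placeholder):

        a!queryRecordType(
          recordType: recordType!S3ObjectMetadata,
          fields: {recordType!S3ObjectMetadata.fields.objectKey},
          /* page 2 of the grid: skip the first 50 rows, fetch the next 50 */
          pagingInfo: a!pagingInfo(startIndex: 51, batchSize: 50)
        )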

  • +1
    Certified Lead Developer
    in reply to Anvesh Shetty

    Yes. After the initial sync caches all 3.5M rows, a!recordData() with pageSize: 50 will efficiently query only 50 rows per page click. This works because Appian issues proper database queries against its synced store. The risks are the initial sync timing out and stale data if the source ever changes. But for static reference data this approach is valid: sync once, disable scheduled syncs, and pagination will perform well.

    Just ensure the initial deployment sync completes successfully.

  • 0
    Certified Senior Developer
    in reply to Shubham Aware

    Thanks, that answered my question.

    But now, coming to the risks you mentioned:

    1. Stale data without updates: I am OK with this since the source data won't change.

    2. Initial sync timeout: I am using an integration that connects to an AWS Lambda (stateless, with a 15-minute maximum execution time) to bring in metadata from an S3 bucket. I only have 4 columns per row, so my record should get synced within 15 minutes if I go with the stateless approach: a batch size of 1,000, which means Appian would have to trigger 3,500 batches within 15 minutes to make this happen. Do you foresee any problem here?

  • +1
    Certified Lead Developer
    in reply to Anvesh Shetty

    Your approach might fail. Record sync has a 4-hour timeout limit, and 3,500 sequential API calls (one per batch) can exceed it: even at just 4 seconds per call, 3,500 × 4 s ≈ 14,000 s, which is nearly 4 hours before accounting for retries or network latency.

    https://docs.appian.com/suite/help/25.2/Records_Monitoring_Details.html

  • 0
    Certified Senior Developer
    in reply to Shubham Aware

    Once it has timed out, will it hold the data synced up to the timeout? And if I try to sync again manually, will it start from the first batch?

  • 0
    Certified Lead Developer
    in reply to Anvesh Shetty

    When a sync times out, the record data becomes unavailable, and a manual retry starts again from batch 1.

    Since you said the source data won't change, I would dump the data into a database and then create an entity-backed record with sync.
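
    Once the data is in a table, the sync reads straight from the database instead of making thousands of API calls, and you can also sanity-check the table directly. A minimal sketch, assuming a hypothetical constant pointing at the data store entity:

        a!queryEntity(
          /* cons!S3_METADATA_ENTITY is a hypothetical constant for the data store entity */
          entity: cons!S3_METADATA_ENTITY,
          query: a!query(
            /* same 50-rows-per-page pattern as the record-backed grid */
            pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 50)
          )
        )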
