Hi Community,
I am using a!recordData for my read-only grid. The record type is API-backed and has 3.5 million rows of metadata (just 4 columns). I am using a pageSize of 50, and I have provided user filters in the UI so users can narrow down their search. Since I am currently working with only sample data, should I expect any performance issues once all 3.5 million rows are synced into the record?
I didn't find any recipes for a!recordData with pageSize, so I would also like to understand how the read-only grid interprets and handles pagination using only pageSize, with no limit on the number of records.
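For reference, the grid is wired up roughly like this (the record type and field names below are placeholders, not my actual objects):

a!gridField(
  data: a!recordData(
    /* placeholder record type; the real one is API-backed with ~3.5M rows */
    recordType: recordType!S3ObjectMetadata
  ),
  columns: {
    a!gridColumn(
      label: "Object Key",
      value: fv!row[recordType!S3ObjectMetadata.fields.objectKey]
    ),
    a!gridColumn(
      label: "Size (bytes)",
      value: fv!row[recordType!S3ObjectMetadata.fields.sizeBytes]
    )
  },
  pageSize: 50
)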
AFAIK, yes, expect performance issues. Syncing 3.5M rows will be slow and may time out during the initial and daily syncs. While pageSize: 50 gives you automatic pagination, it won't solve the core problem: Appian queries across all 3.5M synced rows in its internal store, so even paginated requests get slower. To pass performance testing, use a real-time API mode (no sync) where your API returns only 50 rows per request. If you must sync, performance will likely fail without optimizations such as sync filters or the "Keep data available at high volumes" option.
What if I don't configure 'Scheduled full syncs'? Then it will sync only once, initially, when deployed and triggered, and cache all 3.5 million rows. The read-only grid will then only load 50 items per page as per the pageSize parameter on the initial load of the interface, right? I believed pageSize would do the magic when a!recordData is used for a read-only grid: when the user clicks to the next page, the subsequent batch of 50 is queried and loaded into the interface, which should not be affected by the number of rows sitting behind in the record unless I use filters to narrow things down.
Yes. After the initial sync caches all 3.5M rows, a!recordData() with pageSize: 50 will efficiently query only 50 rows per page click. This works because Appian runs proper database queries against the synced data. The risks are the initial sync timing out and stale data if the source ever changes. But for static reference data this approach is valid: sync once, disable scheduled syncs, and pagination will perform well.
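To illustrate, each page click amounts to a bounded query against the synced data, conceptually something like this (placeholder record type and field names):

a!queryRecordType(
  recordType: recordType!S3ObjectMetadata,
  fields: {
    recordType!S3ObjectMetadata.fields.objectKey,
    recordType!S3ObjectMetadata.fields.sizeBytes
  },
  /* page 2 of the grid when pageSize is 50 */
  pagingInfo: a!pagingInfo(startIndex: 51, batchSize: 50)
)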
Just ensure the initial deployment sync completes successfully.
Thanks Shubham Aware, that answered my question.
But now, coming to the risks you mentioned:
1. Stale data without updates: I am OK with this since the source data won't change.
2. Initial sync timeout: I am using an integration that connects to an AWS Lambda (stateless, 15-minute maximum execution) to pull the metadata of an S3 bucket. I only have 4 columns per row, so my record should get synced within 15 minutes if I go with the stateless approach: a batch size of 1,000, which would eventually trigger about 3,500 batches from Appian within those 15 minutes. Do you foresee any problem here?
Anvesh Shetty said: "A batch of 1,000, which would eventually trigger about 3,500 batches from Appian within those 15 minutes. Do you foresee any problem here?"
Your approach might fail. Record sync has a 4-hour timeout limit, and 3,500 sequential API calls (one per batch) can exceed this: even if each call takes just 4 seconds, that's nearly 4 hours before accounting for retries or network latency. https://docs.appian.com/suite/help/25.2/Records_Monitoring_Details.html
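As a rough back-of-envelope (the per-call times are illustrative, not measured):

3,500 batches x 4 s per call = 14,000 s ≈ 3.9 hours  (already brushing the 4-hour ceiling)
3,500 batches x 5 s per call = 17,500 s ≈ 4.9 hours  (over the limit before any retries)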
Shubham Aware, once it has timed out, will it keep the data that was synced up to the timeout? And if I try to sync again manually, will it start from the first batch?
When the sync times out, the record data becomes unavailable, and a manual retry starts again from batch 1. Since you said the source data won't change, I would dump the data into a database and then create an entity-backed (synced) record from it instead.
I can't take the metadata outside AWS; there is a client restriction. I have to rely on an API-backed record as per the proposed architecture for now. Is there any other way to tackle this?
You can also use this approach: use a service-backed record configured for real-time, paginated API access instead of enabling full sync. Your external API must support pagination (e.g., limit, offset, or page parameters). In Appian, create an integration that takes a!pagingInfo as input and returns only the required page of data, and use this integration as the data source for your record type. In the UI, display the data in a read-only grid tied to the record type, with paging, sorting, and filtering passed through to your API. This on-demand approach avoids full data syncs and scales to millions of rows. https://docs.appian.com/suite/help/25.2/Service-Backed_Record_Tutorial.html
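A minimal sketch of the paging translation, assuming a hypothetical rule name and a zero-based limit/offset API (the actual integration object would attach these values as query parameters):

/* Hypothetical expression rule: rule!ASB_buildPagingParams(pagingInfo) */
/* Maps Appian's 1-based startIndex and batchSize onto the API's        */
/* zero-based offset/limit query parameters.                            */
a!map(
  offset: ri!pagingInfo.startIndex - 1,
  limit: ri!pagingInfo.batchSize
)

On the interface side, the read-only grid stays the same as before; only the record type's source changes from synced to real-time.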