Hi All,
I have a requirement: we need to store documents in S3, and at times read a document (.xlsx) back and present it in a grid format.
Sometimes the document contains up to 1 million rows. Is it possible to fetch the data in batches while reading the document from S3, or is there an alternative approach that avoids a performance impact?
Is this about the S3 API connectivity, or about how to display a multi-million row Excel file to a user?
I have S3 API connectivity.
1. Users are able to upload files into S3 from Appian --> is there any limitation on uploading documents to S3?
2. While reading the document, is it possible to fetch it batch-wise?
1. Do you use the plugin or connected system or basic integrations? Did you check the typical API and Integrations file size limits?
2. An Excel file is not a database, and S3 is just file storage. There is no way to read it in batches.
1 -> Depends on the plugin or connected system.
2 -> You cannot fetch an Excel file in batches from S3. You must download the complete file, then parse it in batches.
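To illustrate the "download the complete file, then parse in batches" approach, here is a minimal Python sketch. The `batched` helper and the sample data are hypothetical; in a real setup the rows would come from a streaming Excel reader such as openpyxl in read-only mode (which avoids loading all 1 million rows into memory at once) after the file has been fully downloaded from S3.

```python
from itertools import islice

def batched(rows, size):
    """Yield successive lists of at most `size` rows from any row iterator."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# In practice `rows` would be a lazy iterator from a streaming reader, e.g.
# (assuming openpyxl is available and the file was already downloaded):
#   wb = openpyxl.load_workbook("report.xlsx", read_only=True)
#   rows = wb.active.iter_rows(values_only=True)
# A plain list stands in here to keep the sketch self-contained.
rows = [(i, f"name-{i}") for i in range(10)]
for batch in batched(rows, 4):
    print(len(batch))  # 4, 4, 2
```

Each batch can then be written to a proper data store (or paged into the grid) one chunk at a time, so memory use stays bounded by the batch size rather than the file size.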