Pagination - Memory threshold issue.

We have a query as below
rule!WFT_getRequestDetailsBySearchParams(
  ri!process,
  ri!subprocess,
  ri!action,
  ri!requestor,
  ri!company,
  ri!reference,
  ri!sapRefNo,
  if(isnull(ri!fromDate), todatetime(date(2010, 10, 10)), ri!fromDate),
  if(isnull(ri!toDate), todatetime(today()), ri!toDate + intervalds(23, 59, 59)),
  ri!monetaryValue,
  ri!priority,
  ri!wftRequestID,
  ri!salesOffice,
  ri!requestStatus,
  ri!pageInfo
)

In the above query, how does "ri!pageInfo" work? I could not find any parameter like a batch size set for pageInfo. Please help me understand how it works in this query.
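
For reference, this is what I think should be happening: the batch size is not set inside the rule at all, it lives in the PagingInfo value that the caller passes in as ri!pageInfo, and the rule presumably just forwards it to its query. A rough sketch of my understanding (the entity constant, sort field, and filter below are placeholders, since I cannot see the rule body):

/* The value passed in as ri!pageInfo - this is where the batch size is set */
a!pagingInfo(
  startIndex: 1,                                             /* first row of the page */
  batchSize: 20,                                             /* at most 20 rows per call */
  sort: a!sortInfo(field: "createdOn", ascending: false())   /* placeholder sort field */
)

/* Presumed body of rule!WFT_getRequestDetailsBySearchParams */
a!queryEntity(
  entity: cons!WFT_REQUEST_ENTITY,                           /* placeholder entity constant */
  query: a!query(
    logicalExpression: a!queryLogicalExpression(
      operator: "AND",
      filters: {
        a!queryFilter(field: "requestStatus", operator: "=", value: ri!requestStatus)
        /* ...filters for the other search inputs... */
      }
    ),
    pagingInfo: ri!pageInfo
  )
)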

How does pageInfo decrease the load on Appian? I have referred to the link below, but did not understand much. Please help me with this.
forum.appian.com/.../System_Functions.html
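
What I am trying to confirm is whether the memory difference really comes down to the batch size in the paging info, i.e. that a value like this pulls every matching row into memory in a single call:

a!pagingInfo(startIndex: 1, batchSize: -1)    /* -1 = return all matching rows at once */

while a value like this only ever fetches one small page per call, and moving to the next page simply re-runs the query with a new start index:

a!pagingInfo(startIndex: 21, batchSize: 20)   /* rows 21-40 only */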

OriginalPostID-254399

  • @chandrasekharg Querying the entire data in the process is the only way I am aware of. Unless I am missing something, there isn't any other way to query in batches on the form; you have to pull the entire data into process variables and batch the data in the process variable.

    Is it good to have a huge amount of data in the process variables? What if many instances are running in this case?
    Definitely not. Storing a huge amount of data in process variables is not a great idea, and it has adverse impacts when many instances built on a similar pattern are running in the system. We could consider alternatives such as the following:

    Option 1:
    a. Have a dropdown that shows paging info (such as 1-10, 11-20, etc.) and auto-submit the form via JavaScript when the dropdown value changes (a sketch follows this list).
    b. Use an editable grid to get rid of the built-in pagination and sorting controls, since the custom pagination logic takes over.
    c. After form submission, query the data in the process based on the paging info selected in the dropdown.

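    A rough sketch of the dropdown from step (a), assuming a SAIL form (the JavaScript auto-submit from legacy forms is not shown, and ri!selectedStartIndex is a placeholder input):

    a!dropdownField(
      label: "Records",
      choiceLabels: { "1 - 10", "11 - 20", "21 - 30" },
      choiceValues: { 1, 11, 21 },                /* each choice is the start index of a 10-row page */
      value: if(isnull(ri!selectedStartIndex), 1, ri!selectedStartIndex),
      saveInto: ri!selectedStartIndex
    )

    /* After submission, the process queries only the selected page */
    a!pagingInfo(startIndex: ri!selectedStartIndex, batchSize: 10)
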
    Option 2:
    a. Leave the paging grid as-is and show a radio button or dropdown with options such as "Get first 100 records", "Get first 200 records", and so on (see the sketch after this list).
    b. Again, after form submission, query the data based on how you design the batching.

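    A minimal sketch of step (a), with ri!recordLimit as a placeholder input that the process later turns into paging info for the query:

    a!radioButtonField(
      label: "How many records?",
      choiceLabels: { "Get first 100 records", "Get first 200 records" },
      choiceValues: { 100, 200 },
      value: ri!recordLimit,
      saveInto: ri!recordLimit
    )

    /* In the process, after submission */
    a!pagingInfo(startIndex: 1, batchSize: ri!recordLimit)
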
    If we do query the entire dataset and store it in the process variable, it should be done by querying in the most efficient way possible, and the instance should be archived immediately once the task is complete. That way we at least control the few things that are in our hands. It is still better than pulling the entire data set at once with no such mitigation, which hampers performance and frustrates the end user by increasing the loading time.
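
    And if the full dataset does end up in a process variable, batching it on the form is usually done with todatasubset(), so the grid only ever renders one page of that variable; a minimal sketch, with ri!allRequests standing in for the process variable:

    with(
      local!pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 10),
      local!page: todatasubset(ri!allRequests, local!pagingInfo),   /* slices one page out of the in-memory array */
      local!page.data                                               /* bind this and local!pagingInfo to a paging grid */
    )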