Hi,

I am using the "Execute Process Report" smart service for one of my requirements.

The requirement is: I need to fetch a subset of records, say 100 at a time, from a portal report that has about 10,000 records and persist them to a database, repeating the process until all the records are fetched and moved to the database.

As per the configuration of the aforementioned smart service, the "Max Rows" field lets us set the batch size to 100, but the problem I see is that this smart service has no provision to fetch the next 100 rows once the first 100 rows are processed.

When I tested this in a loop, I observed that on every execution the smart service returns only the first 100 rows of the portal report and never advances to the next set.

Could you please let me know whether the behavior of this smart service is limited to fetching only the first 100 rows when the Max Rows field is set to 100, and whether it cannot fetch further rows in a loop?

(Note: As an alternative, I tried an approach that uses queryProcessAnalytics(), but I was unable to capture its response into any predefined data type in Appian; when I tried to capture the response into the data type "PortalReportDataSubset", I got all blank values, or rather an empty dictionary. I believe this could be because "PortalReportDataSubset" comes from a third-party plugin and is not an Appian-defined data type.)

OriginalPostID-164161

  Discussion posts and replies are publicly visible
  • If you want to continue using Execute Process Report, you may be able to use a filter to fetch subsequent sets. You can filter on the data's unique ID: the first run filters for all IDs > 0, then you take the biggest ID in the returned data and filter the next run for all IDs > that biggest returned ID. But this is more of a workaround; using queryProcessAnalytics() would be the better option, so that you can use paging info.
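    The ID-filter idea above can be sketched in plain Python (not Appian syntax); fetch_report_rows() is a hypothetical stand-in for the report call, simulated here so the batching loop itself is visible:

    ```python
    # Hypothetical sketch of ID-based batching: each run fetches rows
    # whose ID is greater than the largest ID seen so far.

    def fetch_report_rows(all_rows, min_id_exclusive, max_rows):
        """Stand-in for the report call: up to max_rows rows with id > min_id_exclusive."""
        matching = [r for r in all_rows if r["id"] > min_id_exclusive]
        return matching[:max_rows]

    def fetch_all_in_batches(all_rows, batch_size=100):
        batches = []
        last_id = 0  # first run: all IDs > 0
        while True:
            batch = fetch_report_rows(all_rows, last_id, batch_size)
            if not batch:
                break  # nothing left to fetch
            batches.append(batch)
            last_id = max(r["id"] for r in batch)  # next run: IDs > last_id
        return batches
    ```

    The loop terminates because each batch strictly raises the ID cutoff; this is the same keyset-style pagination pattern used against databases.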
  • @sapnas Hi, let's start with the queryProcessAnalytics() function. You said "I was unable to capture the response from queryProcessAnalytics() to any predefined data type in Appian". Correct, the results don't map to any predefined data type in Appian (they could arguably be mapped to PortalReportDataSubset, but I didn't see an accurate mapping); however, I believe this shouldn't stop you from using it. You could tackle this problem in two ways:
    - One way is to create a purpose-built CDT (or perhaps reuse an existing one) and assign the results of queryProcessAnalytics() to a variable of that CDT type.
    - The other way is to convert the resulting analytics data into an array of label-value pairs ({type!LabelValue(label: "Column1", value: ...), type!LabelValue(label: "Column2", value: ...), and so on}). This can be done in a single expression. In the process model, call the expression and store the results into process variables using the label-value pairs. (This is similar to mapping the outputs of the Execute Process Report smart service to process variables.)
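    The label-value conversion in the second option can be illustrated in plain Python (not Appian syntax): each report column name is paired with the corresponding cell value, so a generic consumer can store the values without a predefined data type:

    ```python
    # Illustrative sketch of the label-value-pair idea: one
    # {label, value} entry per report column for a given row.

    def to_label_value_pairs(column_names, row):
        """Pair each column name with the corresponding cell value."""
        return [{"label": name, "value": value}
                for name, value in zip(column_names, row)]
    ```

    For example, to_label_value_pairs(["Column1", "Column2"], ["a", 1]) yields one pair per column, mirroring what the type!LabelValue array would hold.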

    With regards to the Execute Process Report smart service - you might be correct. To the best of my knowledge, there isn't an out-of-the-box batching technique for this smart service. But what you could do is extract all the rows from the report in a single shot (configure 10000 as the value of the smart service's "Max Rows" input; that many records will be returned irrespective of the paging configuration in the portal report object), store the results in a variable, and call it original_data. By applying todatasubset() to original_data, e.g. todatasubset(original_data, topagingInfo(1, 100)), you can paginate through the results and store each page in another variable called current_data. Over sequential iterations, current_data would be: todatasubset(original_data, topagingInfo(1, 100)), todatasubset(original_data, topagingInfo(101, 100)), todatasubset(original_data, topagingInfo(201, 100)), and so on. You will also need to maintain a counter variable that lets the process retrieve the next batch dynamically via the startIndex of topagingInfo(). That way you always work on current_data, which is overwritten with the next set of 100 records on each iteration.
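    The paging arithmetic above can be sketched in plain Python (not Appian syntax); to_data_subset() mimics todatasubset() with a 1-based startIndex, and the counter variable advances by the batch size each iteration:

    ```python
    # Sketch of the todatasubset()/topagingInfo() batching loop:
    # startIndex is 1-based and advances by batch_size per iteration.

    def to_data_subset(data, start_index, batch_size):
        """Mimic todatasubset(data, topagingInfo(start_index, batch_size))."""
        return data[start_index - 1 : start_index - 1 + batch_size]

    def iterate_batches(original_data, batch_size=100):
        start_index = 1  # Appian paging is 1-based
        while start_index <= len(original_data):
            current_data = to_data_subset(original_data, start_index, batch_size)
            yield start_index, current_data  # current_data overwritten each pass
            start_index += batch_size        # the counter variable
    ```

    With 250 rows and a batch size of 100, the startIndex values are 1, 101, 201 and the final batch holds the remaining 50 rows, matching the sequence described above.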

    The activities discussed above could also be done using getportalreportdatasubset(), but it's better to leave that aside for now, as it is deprecated.

    What I have shared so far is from my experience; let's see if other practitioners can add valuable suggestions. Please let me know if you have any follow-up questions. I hope the above explanation gives you an idea.
  • Thanks lakshmim. I shall try out the approach you suggested.
  • Thank You Sikhivahans for an elaborate solution. I shall try both the approaches.