I have a non-synced REST API that I am using to fetch data via an integration. This endpoint supports paging. In both the record data source and the integration, when I put an a!pagingInfo(x, y) in the pagingInfo test inputs, the data increments through pages correctly according to the values I give for x and y.
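For context, the first two parameters of a!pagingInfo() are the 1-based start index and the batch size, so a!pagingInfo(1, 50) requests records 1 through 50 and a!pagingInfo(51, 50) requests records 51 through 100.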
But in the data model test listing and on the record list URL, the first 50 records appear (as expected) with the correct record total (more than 1,000). When I click the next button, though, the next page is blank and says "No items available". If I click the previous button, the first 50 records display again. The Paging Info dropdown on the data model page is mapped to the correct rule input on the record data source.
The Appian version is 23.2.
Here is my code for the a!dataSubset() in the record data source:
a!dataSubset(
  startIndex: ri!pagingInfo.startIndex,
  batchSize: ri!pagingInfo.batchSize,
  totalCount: local!integrationResponse.result.body.total_items,
  data: a!forEach(
    items: local!integrationResponse.result.body.items,
    expression: cast(
      'type!{urn:com:appian:types:YYY}YYY_MyRecordType',
      fv!item
    )
  ),
  identifiers: a!forEach(
    items: local!integrationResponse.result.body.items,
    expression: fv!item.id
  )
)
I cannot seem to find anything wrong. What am I missing?
UPDATE:
The problem was that I needed to convert the start index into a page number. Much appreciation to @Stefan Helzle for the tremendous effort to help.
I do not see any obvious problems. Can you share more of your code? I am especially interested in that local!integrationResponse. Starting with the integration object itself, you need to validate each and every step, making sure that the correct values go in and come out.
Two tips:
- You can cast a list of items without a!forEach(): just cast to a!listType(<YOUR_RECORD_TYPE>).
- You can extract all values from a list of items without a!forEach() by using dot notation, e.g. local!integrationResponse.result.body.items.id.
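A minimal sketch of the dataSubset with both tips applied, assuming the same type and response shape as above:

a!dataSubset(
  startIndex: ri!pagingInfo.startIndex,
  batchSize: ri!pagingInfo.batchSize,
  totalCount: local!integrationResponse.result.body.total_items,
  /* Cast the whole list in one call instead of casting per item */
  data: cast(
    a!listType('type!{urn:com:appian:types:YYY}YYY_MyRecordType'),
    local!integrationResponse.result.body.items
  ),
  /* Dot notation extracts the id field from every item in the list */
  identifiers: local!integrationResponse.result.body.items.id
)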
I think I have eliminated caching as the possible problem, because when I set the number of records for the record listing to 1,000, it lists all of those records.
I created the interface, and it pulls however many records are specified in pageSize, but every subsequent page says there are no records. If I put 10, it pulls 10 records from the REST service for the first page; if I put 300, it pulls 300. In both cases, all of the following pages give blank results.
{
  a!gridField(
    label: "Read-only Grid",
    labelPosition: "ABOVE",
    data: 'recordType!{4d44effc-c462-4c70-b1aa-87905a73c395}YYY Record',
    pageSize: 300,
    columns: {
      a!gridColumn(
        label: "YYY Column",
        value: fv!row['recordType!{4d44effc-c462-4c70-b1aa-87905a73c395}YYY Record.fields.{some_column}some_column']
      )
    },
    validations: {}
  )
}
Can you try to enable HTTP logging in the integration, do a few calls and check that the data is as expected?
I enabled logging for the integration. If I go to the system logs, where do I find the logs related to these requests?
As per the documentation, in integration_req_resp_activity.log.
I see that now. Unfortunately, I don't have high enough access to browse that directory.
OK. That makes things complicated.
There is a plugin that allows you to write messages to the tomcat-stdout log file, but you need admin permissions to install plugins.
To make sure there is no caching, please modify the code in the source expression like this:
local!integrationResponse: a!refreshVariable(
  value: rule!YYY_searchRecords(
    pagingInfo: ri!pagingInfo,
    searchText: ri!searchText
  ),
  refreshAlways: true
),
I am a bit confused about where exactly I am supposed to put that.
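Presumably it replaces the existing declaration of local!integrationResponse at the top of the record data source expression, roughly like this (a sketch, assuming the rule name and inputs from the snippet above):

a!localVariables(
  /* Wrapping the integration call in a!refreshVariable() with
     refreshAlways: true forces a fresh call on every evaluation,
     ruling out a cached response */
  local!integrationResponse: a!refreshVariable(
    value: rule!YYY_searchRecords(
      pagingInfo: ri!pagingInfo,
      searchText: ri!searchText
    ),
    refreshAlways: true
  ),
  a!dataSubset(
    startIndex: ri!pagingInfo.startIndex,
    batchSize: ri!pagingInfo.batchSize,
    totalCount: local!integrationResponse.result.body.total_items,
    data: cast(
      a!listType('type!{urn:com:appian:types:YYY}YYY_MyRecordType'),
      local!integrationResponse.result.body.items
    ),
    identifiers: local!integrationResponse.result.body.items.id
  )
)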
I did an additional test as a troubleshooting measure:
I configured the record data source to put the paging info properties into one of the columns of the record, to visually validate that the record data source was receiving correct paging info values. They are valid, so that eliminates paging info as an issue.
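For anyone trying the same trick, one way to do it (assuming a spare text field on the record type, here a hypothetical debug_info) is to stamp the paging values onto each item before casting:

data: a!forEach(
  items: local!integrationResponse.result.body.items,
  expression: cast(
    'type!{urn:com:appian:types:YYY}YYY_MyRecordType',
    /* a!update() writes the paging values into a spare field so they
       show up as a grid column (debug_info is a hypothetical field) */
    a!update(
      fv!item,
      "debug_info",
      ri!pagingInfo.startIndex & " / " & ri!pagingInfo.batchSize
    )
  )
)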
Problem solved. After finding someone who has access to the logs, I quickly found the problem: the paging info has a start index giving the 1-based offset into the total number of records, while the endpoint's offset parameter is actually a page number, so I needed to convert the start index to a page number based on the batch size.
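For the record, the conversion amounts to one line; a sketch, assuming both the startIndex and the endpoint's page number are 1-based:

/* startIndex 1 -> page 1, startIndex 51 -> page 2, etc. (batchSize 50) */
local!pageNumber: ceiling(ri!pagingInfo.startIndex / ri!pagingInfo.batchSize),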
I am glad you got this resolved. I have been in similar situations before, and validating each and every step in that chain is the route to success.