Facing Issue in Record Sync - Unable to sync due to error in data source

Certified Senior Developer

Hi All,

I am facing an issue with record sync while writing a dataset to the database table using Write Records.
This works fine if we sync the record manually: data is inserted and we get the primary key back in the process. The record is a simple record with no relationships defined. Going through the community posts, this issue occurs when we have BIGINT-type columns defined in the table.
So I need to work out how to solve this issue without changing the primary key column from int8/BIGINT to INT, as we need to store values greater than the int4 range in the database. Is there any way to make this sync work?
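
For reference, the primary key column is defined along these lines (the table and column names below are only illustrative, and PostgreSQL-style syntax is assumed since the column is int8):

    CREATE TABLE workflow_data (            -- illustrative name for the shared table
        id        BIGSERIAL PRIMARY KEY,    -- int8/BIGINT key, auto-generated
        app_name  VARCHAR(255),             -- which application wrote the row
        status    VARCHAR(50)
    );
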
Kindly guide.
Attaching the screenshot.

Thanks



  • Certified Lead Developer

    Appian does not support integers with more than 4 bytes as of now. Making your fields larger will not help.

    Maybe you can elaborate on your use case and we can support you in finding a solution.

  • Appian Employee
    in reply to Stefan Helzle

    Do you truly need it to be a BIGINT? That would mean you expect to store more than 2 billion rows in the table. Like Stefan, I'm curious what your use case is.

    If you truly need to use values that large, I would recommend using a text field instead of a BIGINT, since you could easily store much larger numbers in a text field.

  • Certified Senior Developer
    in reply to Peter Lewis

    Hi Peter,

    We are dealing with an external database where workflow information from multiple applications is stored in one single common table. The data volume is expected to go beyond the int4 range, which is why the primary key is configured as int8 in the database.

  • Certified Lead Developer
    in reply to Shikha

    One of the primary features of records is to make data from disparate systems actionable in Appian. If a simple datatype like int8 cannot be used for updates through records, it defeats the higher purpose of records. We can work around it by creating an API to write to this table, or by invoking a stored procedure which writes to the table (a sketch follows below), but these may not be the most optimized solutions, since they do not leverage the synced record functionality.
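
    As an illustration of the stored procedure route (all names here are hypothetical, and PostgreSQL-style syntax is assumed given the int8/BIGSERIAL types discussed in this thread), a process could call something like this instead of Write Records and still get the generated int8 key back:

        -- Inserts one row and returns the generated BIGINT primary key
        CREATE OR REPLACE FUNCTION insert_workflow_record(
            p_app_name text,
            p_status   text
        ) RETURNS bigint AS $$
            INSERT INTO workflow_data (app_name, status)   -- hypothetical table/columns
            VALUES (p_app_name, p_status)
            RETURNING id;                                   -- id is the int8 primary key
        $$ LANGUAGE sql;

        -- Example call: SELECT insert_workflow_record('AppA', 'NEW');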

  • Certified Lead Developer
    in reply to Ketan Patel

    While I have been begging Appian for years to support larger integers, I still want to ask whether you really plan to store more than 2 billion rows in that table.

    And maybe you want to share the bigger picture with us.

  • Certified Senior Developer
    in reply to Stefan Helzle

    Hi

    Yes, we will have more than 2 billion rows in that table, as data is being inserted from other systems as well, and the int4 limit will definitely be exhausted.

    We raised a support case with Appian and they suggested this workaround:

    Unfortunately it is currently not supported to use any other type than regular INT for the primary key with synced records. We do have a workaround you may try however - you can update the type of the primary key field in the CDT .xsd to be type text:

        <xsd:element name="<PRIMARY KEY>" nillable="true" type="xsd:string">
          <xsd:annotation>
            <xsd:appinfo source="appian.jpa">@Id @GeneratedValue @Column(name="<PRIMARY KEY>", columnDefinition="INT UNSIGNED")</xsd:appinfo>
          </xsd:annotation>
        </xsd:element>

    All database operations should continue to work as long as the <PRIMARY KEY> field value is a numeric string such as "2147483648", not 2147483648. Please note that you may need to modify some of your interfaces if they handle this primary key value as an integer specifically. If the workaround above is not viable, then you will need to either use regular INT or use an unsynced record for this table.

    We followed the suggested approach and came up with the following observations; below is our reply:


    We tried the suggested workaround but it is not working as expected; we wanted to validate whether we have done it correctly.

    Synced record: We wanted to leverage a synced record, but I believe the suggested workaround will not work for synced records, because a synced entity record is configured directly on a database table and there is no CDT involved in this scenario. This causes record sync issues, as the primary key is int8 in the database.

    Unsynced record: We followed the suggested workaround and created an unsynced record type. Note: INT UNSIGNED throws a data store validation issue, so we used the BIGSERIAL datatype in the XSD (in the database the primary key is int8, as mentioned before). But we have encountered issues for the unsynced record as well: on clicking any particular record item we get a memory timeout issue (query reached timeout). Please note we have only around 30k rows in that table, and the issue occurs for a smaller number of rows as well, checked after applying source filters. As I said, we wanted to use a synced record only because with unsynced records we cannot use Write Records or record relationships.

    Attaching the screenshots of a) the XSD configuration, b) the data store issue with INT UNSIGNED, and c) the memory timeout issue for unsynced records with the suggested workaround.

    Please help us with the below two points:
    1) Please suggest a workaround for synced records, as we need to use them for our implementation.
    2) Please help in debugging the issue for the unsynced record with the suggested workaround.

    Any suggestions/ideas to make this work?


    Thanks

  • Certified Lead Developer
    in reply to Shikha

    I think this is pretty simple. You cannot sync 2 billion rows into a synced record.

    Regarding "For un synced record on click of any particular record item, we get memory timeout issue. (Query reached timeout)", what do you try to do here? You need to make sure that the query returns within the defined period. Do you try to load all 30k rows into memory? If yes, why?

  • Certified Senior Developer
    in reply to Stefan Helzle

    We made an unsynced (non-synced) record type for that table, using the CDT with the above suggested changes. Even for a smaller number of records, on clicking a particular record we get the "query reached timeout" issue.
    The 30k records are not all loaded, as we apply source filters in the record.

    Tested with a!queryRecordType(), and we do get data back for the above unsynced record as well.

    Quoting the suggested workaround again: "We do have a workaround you may try however - you can update the type of the primary key field in the CDT .xsd to be type text:"

        <xsd:element name="<PRIMARY KEY>" nillable="true" type="xsd:string">
          <xsd:annotation>
            <xsd:appinfo source="appian.jpa">@Id @GeneratedValue @Column(name="<PRIMARY KEY>", columnDefinition="INT UNSIGNED")</xsd:appinfo>
          </xsd:annotation>
        </xsd:element>

    The unsynced record has its CDT configured with the above change.

  • Certified Lead Developer
    in reply to Stefan Helzle

    Great point. 2 billion rows cannot be synced, so there is no point in using synced records. We are left with two options now:

    1) Do not use records at all and use a!queryEntity() and Write to Data Store Entity to perform DB operations. In this option we can keep the DB column as BIGINT mapped to the CDT type "String".

    2) Use synced records and get the DB column data type changed to INT, ensuring the value of this field never exceeds 2147483647. Adopt a data move and reset kind of cleanup process so the value stays within range; a rough sketch of such a cleanup follows below.
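
    To make option 2 concrete, a periodic archive-and-reset job along these lines could keep the key within the INT range (table and sequence names are hypothetical, PostgreSQL-style syntax is assumed, and the sequence should only be restarted once the live table has actually been emptied):

        -- Move rows that are no longer active into an archive table
        INSERT INTO workflow_data_archive
        SELECT * FROM workflow_data
        WHERE status = 'COMPLETED';

        DELETE FROM workflow_data
        WHERE status = 'COMPLETED';

        -- Once the live table is empty, restart the key sequence so new keys
        -- stay well below the INT maximum of 2147483647
        -- (workflow_data_id_seq is the assumed default serial sequence name)
        ALTER SEQUENCE workflow_data_id_seq RESTART WITH 1;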