We are building a case management application that supports many ad-hoc relationships among different record types. We are using data sync, but anticipate eventually exceeding the 4M row limit. What strategies should we consider for this? The documentation suggests adding source filters, which makes sense, but then not all of the data will be available in the record type. Do we move all reads to a!queryEntity but allow all writes to continue using a!writeRecords? If so, we'd want to make sure all our record queries are in expression rules so they can be updated to use a!queryEntity.
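To illustrate the wrapper idea, here is a rough sketch. The names (recordType!CaseRelationship, cons!CM_CASE_RELATIONSHIP_DSE, ri!caseId) are placeholders for this example only, not anything defined in our app. Callers keep invoking the same expression rule, and only its body changes from a record-type query to a!queryEntity:

/* Placeholder wrapper rule, e.g. rule!CM_getCaseRelationships(caseId).      */
/* Callers never change; only this body switches to reading the source DSE. */
a!queryEntity(
  entity: cons!CM_CASE_RELATIONSHIP_DSE,  /* constant pointing to the data store entity */
  query: a!query(
    filter: a!queryFilter(
      field: "sourceCaseId",
      operator: "=",
      value: ri!caseId
    ),
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 100)
  ),
  fetchTotalCount: false
).data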
Difficult. Have you considered implementing some kind of data archival, where you move data into a separate long-term table structure?
Peter Lewis, this might be a use case you are interested in.
Data archival will eventually occur, but we need to store some data for 25 years or longer.
Hi Wen, I think it really depends on how you plan to use the data in the future. For instance, what kind of information will you need about the older data once you approach the row limit? We've seen some customers who have multiple record types - one for "active" cases that are being worked on and another record type for "archived" data. This could work because all of your process models could go against the active cases, while your separate archived data could just be used for reporting.
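To make the active/archived idea concrete, here's a rough sketch of the hand-off. The record type and input names (recordType!CM_Case, recordType!CM_ArchivedCase, ri!case) are placeholders, in practice this would more likely run from a process model, and it assumes chaining a!deleteRecords inside onSuccess fits your error-handling needs:

/* Copy the closed case into the reporting-only "archived" record type,   */
/* and only remove it from the active record type once the copy succeeds. */
a!writeRecords(
  records: recordType!CM_ArchivedCase(
    recordType!CM_ArchivedCase.fields.caseId: ri!case[recordType!CM_Case.fields.id],
    recordType!CM_ArchivedCase.fields.title: ri!case[recordType!CM_Case.fields.title],
    recordType!CM_ArchivedCase.fields.archivedOn: now()
  ),
  onSuccess: {
    a!deleteRecords(records: ri!case)
  }
)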
We're also actively working to expand the amount of data that we can sync and provide features that also make it easier to determine which data is relevant at high data volumes, so we'll keep you posted!
Most likely, the centralized relationship table will exceed 4M rows, so we would like to still be able to query these 4M+ records. Is it appropriate to continue writing with a!writeRecords while reads are converted to a!queryEntity?
Yeah, a!writeRecords() would continue to work as long as your record type is available (i.e., the sync has not failed). That should work as an approach, but I'd still recommend limiting a!queryEntity() to only those scenarios where you truly need the archived data.
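For illustration, one way to keep a!queryEntity() confined to the archived-data path is to branch inside the wrapper rule. The names here (ri!includeArchived, recordType!CaseRelationship, cons!CM_CASE_RELATIONSHIP_DSE) are placeholders again, and note the two branches return different value types (record maps vs. CDT/dictionary rows), so callers may need a common mapping:

/* Read current data from the synced record type by default; only fall back */
/* to the full source table when the caller explicitly asks for older data. */
if(
  ri!includeArchived,
  a!queryEntity(
    entity: cons!CM_CASE_RELATIONSHIP_DSE,
    query: a!query(
      filter: a!queryFilter(field: "sourceCaseId", operator: "=", value: ri!caseId),
      pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 100)
    )
  ).data,
  a!queryRecordType(
    recordType: recordType!CaseRelationship,
    fields: {
      recordType!CaseRelationship.fields.sourceCaseId,
      recordType!CaseRelationship.fields.targetRecordId
    },
    filters: a!queryFilter(
      field: recordType!CaseRelationship.fields.sourceCaseId,
      operator: "=",
      value: ri!caseId
    ),
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 100)
  ).data
)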