Following normal database best practice, we typically store id fields in our main data tables and link those id fields to reference data tables when we want to display the reference data values (e.g. status values such as 'Received', 'In Progress', 'Pended', 'Abandoned', 'Completed', etc.). With the new record and relationship functionality it is easy to display the textual status values rather than the id values on the record list and record summary.

However, if we need to export the data, including the textual values, to a CSV file for ingestion by an MI tool, we can't currently export from a record to a CSV file programmatically (our export for MI would typically be a scheduled process running once a day, outside of standard business hours). We could create a database view to pull the data together and structure it for the MI extract, but the documentation for the Export Data Store Entity to CSV smart service states: 'Because database views do not have a primary key, or the ability to have indexes, Appian recommends exporting data only from tables or materialized views in order to avoid query timeouts with large data sets.'
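For concreteness, this is the sort of structure I mean (table and column names are made up):

-- Main data table stores the id; the reference table holds the display value
CREATE TABLE ref_status (
  status_id    INT         NOT NULL PRIMARY KEY,
  status_label VARCHAR(50) NOT NULL   -- 'Received', 'In Progress', ...
);

CREATE TABLE case_main (
  case_id    INT NOT NULL PRIMARY KEY,
  reference  VARCHAR(50),
  status_id  INT NOT NULL,
  created_on DATE,
  CONSTRAINT fk_case_status FOREIGN KEY (status_id) REFERENCES ref_status (status_id)
);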
I am currently considering creating an MI table and using stored procedures to populate that table from the main data tables and reference data tables (or perhaps from a view) so that I can then use the OOTB Export Data Store Entity to CSV smart service.
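As a rough sketch of what that might look like (assuming a MariaDB/MySQL back end and reusing the made-up names above):

-- Flattened MI table with an explicit primary key, so the OOTB
-- Export Data Store Entity to CSV smart service can page through it
CREATE TABLE mi_case_extract (
  case_id      INT NOT NULL PRIMARY KEY,
  reference    VARCHAR(50),
  status_label VARCHAR(50),
  created_on   DATE
);

DELIMITER //
CREATE PROCEDURE refresh_mi_case_extract()
BEGIN
  -- Full rebuild on each scheduled run; the data volumes here are modest
  TRUNCATE TABLE mi_case_extract;
  INSERT INTO mi_case_extract (case_id, reference, status_label, created_on)
  SELECT c.case_id, c.reference, s.status_label, c.created_on
  FROM case_main c
  JOIN ref_status s ON s.status_id = c.status_id;  -- swap the id for its textual value
END //
DELIMITER ;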
Does that sound like a sensible solution? Are there better alternatives that I have missed?
Hi Gavin, you can create a database view and define a primary key as well, using a combination of two or more fields by concatenating them. By the way, how much data do you have in your table? The solution you proposed also makes sense.
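For example, something like this (reusing the made-up names above). Where no single column in the view is unique, concatenating two or more columns gives you a synthetic key you can map as the primary key in the CDT:

CREATE VIEW mi_case_v AS
SELECT CONCAT(c.case_id, '-', c.status_id) AS view_key,  -- synthetic key built from two fields
       c.case_id,
       c.reference,
       s.status_label,
       c.created_on
FROM case_main c
JOIN ref_status s ON s.status_id = c.status_id;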
And is that approach supported by Appian (their documentation suggests not)?
For the application I am looking at, the expected peak will be around 120,000 rows (and around 45 columns).
I'd suggest you can take this approach as long as you carefully construct your view to be 1:1 with a specific base table. That is, you'd make sure every join from the base table matches exactly one row in the joined table (as opposed to one-to-many or many-to-many joins, where the base table's row count would end up being multiplied).
When you can guarantee unique rows in the view, such that the original table's primary key can be treated as the primary key of the view (and set up that way in the CDT as well), then I believe you will run into very few issues exporting even a fairly large data set using the DSE to Excel / CSV node(s), since they allegedly use internal paging to break up their queries.
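To make that concrete, here's a sketch of the kind of view I mean (again with the made-up names from above). Every join runs from the base table to a reference table on a foreign key, so each case_main row appears exactly once and case_id remains valid as the primary key of both the view and the CDT:

CREATE VIEW mi_extract_v AS
SELECT c.case_id,          -- base table PK, safe to reuse as the view/CDT PK
       c.reference,
       s.status_label,     -- textual value in place of the id
       c.created_on
FROM case_main c
JOIN ref_status s ON s.status_id = c.status_id;  -- many cases -> one status row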
In any case, if your large data set is already in place, this would be fairly easy to set up and try once, so you can decide whether it's acceptable for longer-term (nightly, etc.) use.
Thanks Mike, I'll give that a try.