I am using a process model to export data from the database to a spreadsheet. To be clear, I am not using the export in the record type. When I select the fields using a!querySelection with a!queryColumn, the node immediately pulls the data and exports it to a new spreadsheet, with no way to add code to process it. Suppose I have a CDT with, say, twenty fields, one of which contains an employee ID. How can I (is it even possible to) use an expression rule that takes that ID as a parameter and outputs the actual employee name? There doesn't seem to be a path forward on this. I've looked through the process model, all the different query types, etc., but found no way to pull the data and process it before outputting it to the spreadsheet.
Just to make sure this is clear: I already have the expression rule to process the data; I just have nowhere to run it, and if there is a way to do it, I have not found it.
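For context, the kind of lookup rule I mean looks something like this (a sketch only - the entity constant and field names here are hypothetical, not my actual schema):

```
/* Hypothetical rule!getEmployeeName(employeeId) */
a!localVariables(
  local!result: a!queryEntity(
    entity: cons!EMPLOYEE_ENTITY,  /* assumed data store entity constant */
    query: a!query(
      filter: a!queryFilter(field: "id", operator: "=", value: ri!employeeId),
      pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 1)
    )
  ),
  /* Return the matching employee's name, or null if no match */
  if(
    length(local!result.data) > 0,
    local!result.data[1].name,
    null
  )
)
```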
Any help will be greatly appreciated.
Thanks.
khabran said: "export data from the database to a spreadsheet"
Which node are you using to try this? Any answer we provide here would depend on this.
Currently the most common / supported method would be the "Export DSE to Excel" node, and in that case the answer to your post's subject is "no" - the only way to do what you're asking is to get the data the way you want it (and sorted the way you want it) in a DB table or view and export it directly.
Another option is the "Export CDT to Excel" node in the Excel Tools plug-in, which, despite technically being deprecated, still provides capabilities that haven't been sufficiently replaced by anything else (OOTB or plug-in based). With this node, you'd be able to iterate over a list-type CDT, do whatever processing you want in an expression rule, save the output back into a CDT, and then send that to the Excel template.
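As a sketch of that per-row processing step (field names and the lookup rule are assumptions for illustration), the enrichment could be a single a!forEach over the CDT list:

```
/* Sketch: enrich each row of a list-type CDT before export.
   "employeeName", "employeeId", and rule!getEmployeeName are assumed names. */
a!forEach(
  items: pv!rows,
  expression: a!update(
    data: fv!item,
    index: "employeeName",
    value: rule!getEmployeeName(fv!item.employeeId)
  )
)
```

The result is saved back into a process variable and handed to the export node.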
Thank you both for your responses. I will try to answer both of you:
Here are my nodes. Unfortunately, the data that shows in the interface table needs to be different from what's in the spreadsheet.
- Only a handful of the columns should be viewable on the interface table.
- The spreadsheet will contain columns/values restricted to certain groups.
Additionally, I have over a hundred columns that need to be in the spreadsheet, and I believe only fifty are supported. Also, I cannot use deprecated components, as that is not allowed in my environment.
Stewart's answer of a Staging Table will probably be the best for you. Create a new table that you plan to keep empty, containing the columns you want the export to have. In your process, query from your original table, doing whatever filtering and transformation you need to do at runtime. Insert those rows into the staging table. Dump that table to Excel using the Export DSE to Excel smart service. Then delete those rows from the staging table. Rinse, repeat.
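A sketch of the script-task expression that would build the staging rows (the entity constant, CDT type, and field names below are all assumptions for illustration):

```
/* Sketch: query the source, transform rows, and output a list of
   staging-CDT rows for a Write to Data Store Entity node. */
a!localVariables(
  local!source: a!queryEntity(
    entity: cons!SOURCE_ENTITY,  /* assumed constant */
    query: a!query(
      selection: a!querySelection(columns: {
        a!queryColumn(field: "employeeId"),
        a!queryColumn(field: "startDate")
        /* ...remaining columns... */
      }),
      pagingInfo: a!pagingInfo(startIndex: 1, batchSize: -1)
    )
  ).data,
  /* Resolve IDs to names while mapping into the staging CDT */
  a!forEach(
    items: local!source,
    expression: type!StagingRow(
      employeeName: rule!getEmployeeName(fv!item.employeeId),
      startDate: fv!item.startDate
    )
  )
)
```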
Thank you.
One more question.
What if that table has over a hundred columns? I believe there is a restriction of fifty columns. Does this make any difference? How exactly is this staging table set up? Will it ever be visible? If you could help clarify, I'd greatly appreciate it. Thanks.
You can run the Export DSE to Excel node twice (or three times). After the first time, pass in the first generated document as "document to update", pass in your second set of columns in "selection", and a new Starting Cell (~50 columns to the right of column "A" or wherever you first started).
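As a sketch, the second call's key inputs might look like this (values are illustrative; note that column 51 in Excel letters is "AY", so that's roughly where a second batch of ~50 columns would start):

```
/* Sketch of the second Export DSE to Excel call's node inputs */
documentToUpdate: pv!generatedDoc,  /* saved from the first node's output */
startingCell: "AY1",                /* column 51, row 1 */
selection: a!querySelection(columns: {
  a!queryColumn(field: "column51"),  /* assumed field names */
  a!queryColumn(field: "column52")
  /* ...columns 51-100... */
})
```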
Great idea! Thanks!!
I apologize, but again, I have some more questions:
You'll run the same queries on the same DSE, but you'll pass a completely separate set of columns in each call. AFAIK there's no such limit to the number of columns in a table / DSE (though having over 100 columns sounds like a maintenance nightmare, to be honest).
For your final question, I'm not quite sure what you mean. When populating your staging table, you'd query all your information from the original source, sort it and do any post-processing you need there - then all at once you'd write the entire processed list back to the staging table, already in the order you want it to be exported to excel.
One more question: how does one pass the partially completed document to the next node? The folder I'll be using contains many files. Additionally, the spreadsheet will be named "File Name - " & now(). I'm not sure what to do in this instance, or what a sound solution would be so that no collisions occur.
I think I see what you are saying; I keep forgetting about the staging table. I'm not quite sure how that works, but I will look into it. Thank you for the help.
Again you basically create a dummy table containing only the columns you need (they could just all be text, for example), populate it on demand and then clear it back out once your document is generated.
khabran said: "what would be a sound solution so that no collisions exist"
Sorry, I missed this particular follow-up question until now. There should be zero need to worry about collisions: you just pass the document ID of the generated document (which you save into a PV in the first node) into the subsequent node(s), as I mentioned earlier.
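For the name itself, one option (a sketch, not a requirement - uniqueness comes from the document ID, not the name) is to format the timestamp explicitly so it's file-name safe:

```
/* Sketch: a raw now() rendered as text can contain characters
   that are awkward in file names, so format it explicitly */
"File Name - " & text(now(), "yyyy-mm-dd hh.mm.ss")
```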
It seems as if I can't export more than fifty columns via a record type or process model, so I have no choice now but to implement the previously described solution.