Can an Expression rule be used with a query entity to process the data before output?

I am using a process model to export data from the database to a spreadsheet (not the export action on the record type, just to be clear). When I select the fields using a!querySelection -> a!queryColumn, the node immediately pulls the data and exports it to a new spreadsheet, with no way to add code to process it first. Say I have a CDT with, for example, twenty fields, one of which is an employee ID. How can I (or is it even possible to) use an expression rule that takes that ID as a parameter and outputs the actual employee name? There doesn't seem to be a path forward on this. I've looked through the process model, all the different query types, etc., but found no way to pull the data, process it, and then output it to the spreadsheet.

Just to make sure this is clear: I already have the expression rule to process the data; I just have nowhere to apply it, and if there is a way to do it, I have not found it.
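
Roughly, the selection I'm handing to the export node looks like the sketch below (the entity and field names are placeholders, not my real ones):

a!querySelection(
  columns: {
    a!queryColumn(field: "employeeId"),  /* exports the raw ID, not the name */
    a!queryColumn(field: "department"),
    a!queryColumn(field: "startDate")
    /* ...seventeen more columns... */
  }
)

The node queries and writes the file in one step, so there is no point in between where I can call my rule.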

Any help will be greatly appreciated. 

Thanks.

  • Certified Lead Developer
    "export data from the database to a spreadsheet"

    Which node are you using to try this?  Any answer we provide here would depend on this.

    Currently the most common / supported method would be the "Export DSE to Excel" node, and in that case the answer to your post's subject is "no" - the only way to do what you're asking is to get the data the way you want it (and sorted the way you want it) in a DB table or view and export it directly.

    Another option is the "Export CDT to Excel" node in the Excel Tools plug-in, which, despite technically being deprecated, still provides capabilities that haven't been sufficiently replaced by anything else (OOTB or plug-in based).  With this node, you'd be able to iterate over a list-type CDT, do any sort of processing you want in an expression rule, save the output back into a CDT, and then send that to the Excel template.
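
    As a rough sketch (the CDT, rule, and field names here are made up - substitute your own), the processing step might look like:

    a!forEach(
      items: pv!sourceRows,  /* the list-type CDT you queried */
      expression: 'type!{urn:com:example:types}ExportRow'(
        employeeName: rule!GetEmployeeName(fv!item.employeeId),
        department: fv!item.department,
        startDate: fv!item.startDate
        /* ...copy the remaining fields straight across... */
      )
    )

    The result is a new, already-processed list that you'd hand to the Export CDT to Excel node.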

  • Another option would be to work backwards from the desired output content: create a table that maps to that output content, and have your process load that table, converting IDs into the required values (e.g. employeeId to employeeName), then dump the data from your new table straight to Excel. You could consider this new table a "staging" table: loaded on demand, extracted to Excel, and then flushed clear as the last step in the process.

  • Thank you both for your responses. I will try to answer both of you:

    Here are my nodes. Unfortunately, the data that shows in the interface table needs to be different from what's in the spreadsheet:

    - Only a handful of the columns should be viewable on the interface table.

    - The spreadsheet will contain columns/values restricted to certain groups.

    Additionally, I have over a hundred columns that need to be in the spreadsheet, and I believe only fifty are supported. Also, I cannot use deprecated functions, as that is not allowed for me.

  • Certified Lead Developer
    in reply to khabran

    Stewart's suggestion of a staging table will probably be the best for you.  Create a new table that you plan to keep empty, containing the columns you want the export to have.  In your process, query from your original table, doing whatever filtering and interpolation you need to do at runtime.  Insert those rows into the staging table.  Dump that table to Excel using the Export DSE to Excel smart service.  Then delete those rows from the staging table.  Rinse, repeat.
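
    For the "query and transform" step, a sketch along these lines (every name here is a placeholder) could run in a script task ahead of the Write to Data Store Entity node:

    a!forEach(
      items: a!queryEntity(
        entity: cons!SOURCE_ENTITY,
        query: a!query(
          pagingInfo: a!pagingInfo(
            startIndex: 1,
            batchSize: -1,
            sort: a!sortInfo(field: "createdOn", ascending: false)
          )
        )
      ).data,
      expression: 'type!{urn:com:example:types}StagingRow'(
        employeeName: rule!GetEmployeeName(fv!item.employeeId),
        createdOn: fv!item.createdOn
        /* ...whatever other columns the export needs... */
      )
    )

    Because the query is already sorted, the staging rows are built in the order you want them to land in the spreadsheet.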

  • One more question.

    What if that table has over a hundred columns? I believe there is a restriction of fifty columns. Does that make any difference? How exactly is this staging table set up? Will it ever be visible? If you could help clarify, I would greatly appreciate it. Thanks.

  • Certified Lead Developer
    in reply to khabran

    You can run the Export DSE to Excel node twice (or three times).  After the first run, pass the first generated document in as "document to update", pass your second set of columns in "selection", and set a new Starting Cell (~50 columns to the right of column "A", or wherever you first started).
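
    If it helps, here's a throwaway sketch of the starting-cell math, assuming 50 columns per pass starting at "A1" (it only handles columns up to ZZ, which is plenty here); it returns "A1", "AY1", "CW1":

    a!forEach(
      items: enumerate(3),  /* one entry per export pass */
      expression: concat(
        if(
          (fv!item * 50 + 1) <= 26,
          /* single-letter columns A-Z */
          char(64 + fv!item * 50 + 1),
          /* two-letter columns AA-ZZ */
          concat(
            char(64 + floor((fv!item * 50) / 26)),
            char(65 + mod(fv!item * 50, 26))
          )
        ),
        "1"
      )
    )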

  • I apologize, but again, I have some more questions:

    1. Will I need to break up the CDT into several to do this?
    2. Will the querySelection -> queryColumn objects need to be separated out as well?
    3. Where in this process am I able to process some of the fields, such as converting the IDs to names, and also sort in descending order by date? It seems like it would be an issue if I added the columns at separate intervals.

  • Certified Lead Developer
    in reply to khabran

    You'll run the same queries on the same DSE, but you'll pass a completely separate set of columns in each call.  AFAIK there's no such limit to the number of columns in a table / DSE (though having over 100 columns sounds like a maintenance nightmare, to be honest).

    For your final question, I'm not quite sure what you mean.  When populating your staging table, you'd query all your information from the original source, sort it and do any post-processing you need there - then all at once you'd write the entire processed list back to the staging table, already in the order you want it to be exported to excel.
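
    To make the "separate set of columns per call" part concrete, here's a sketch (with a generated stand-in for your real column names) of how the second pass's selection could be sliced out of one master list:

    a!localVariables(
      /* stand-in for your 120 real column names */
      local!allColumns: a!forEach(items: enumerate(120) + 1, expression: "column_" & fv!item),
      /* pass 2 of 3: columns 51-100, paired with a Starting Cell of "AY1" */
      a!querySelection(
        columns: a!forEach(
          items: local!allColumns[enumerate(50) + 51],
          expression: a!queryColumn(field: fv!item)
        )
      )
    )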