"Import CSV to database v3" plugin not working correctly

My requirement is to import a CSV document into the database. The CSV file's header field names contain underscores (e.g. "first_name", "last_name"), but the database table's column names do not. As a result, the data is not being written to the database successfully. Is there a way around this?
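For illustration, the mismatch looks like this (the table and column names below are hypothetical):

    -- Database table: column names without underscores
    CREATE TABLE customer (
        firstname VARCHAR(255),
        lastname  VARCHAR(255)
    );

    -- CSV header row: field names with underscores, so the plugin
    -- cannot match them to the columns above:
    -- first_name,last_name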

  • Hi, technically you can't use this plugin if the names don't match. I remember building a couple of solutions for a different requirement where users could upload a file with the columns in any order and with names that didn't match the DB column names.

    1. Let users map the known column names to the user-provided file headers in a UI, and use the Parse CSV to CDT plugin to write the data to the table. (This might solve your requirement if the file has a small number of rows. Note that you will have to do some mapping based on the columns the user selected, and these process instances will consume more memory if the file is huge and uploaded frequently.)

    2. Similar to the above approach (let users map the columns to the file headers in a UI), but do the rest of the work on the DB side with T-SQL; a sketch of the stored procedures follows after this list. (This approach was for a requirement with 20k+ rows and up to 20 columns per file, and very frequent uploads, 10k+ on an average day; Appian was just the orchestration tool.)
    a. Parse just the file header and call a stored procedure to create a table with a unique name at run time from the parsed column names.
    b. Use the Import CSV to Database plugin to dump the file data into the table just created.
    c. Use another stored procedure to move the data and drop the table after verification.
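
    A minimal T-SQL sketch of steps a and c, assuming SQL Server. The procedure names, the column list passed in, and the target table dbo.customer with its firstname/lastname columns are hypothetical placeholders, not part of the plugin:

        -- Step a: create a uniquely named staging table at run time from the
        -- parsed CSV header. The process model builds @columnList from the
        -- header row, e.g. 'first_name VARCHAR(255), last_name VARCHAR(255)'.
        CREATE PROCEDURE dbo.usp_CreateStagingTable
            @tableName  SYSNAME,        -- unique run-time name, e.g. 'stg_upload_001'
            @columnList NVARCHAR(MAX)   -- column definitions parsed from the header
        AS
        BEGIN
            DECLARE @sql NVARCHAR(MAX) =
                N'CREATE TABLE dbo.' + QUOTENAME(@tableName) + N' (' + @columnList + N');';
            EXEC sp_executesql @sql;
        END
        GO

        -- Step c: after the plugin has loaded the file into the staging table
        -- (step b) and the data has been verified, move the rows into the real
        -- table and drop the staging table.
        CREATE PROCEDURE dbo.usp_MoveAndCleanUp
            @tableName SYSNAME
        AS
        BEGIN
            DECLARE @sql NVARCHAR(MAX) =
                N'INSERT INTO dbo.customer (firstname, lastname)
                  SELECT first_name, last_name FROM dbo.' + QUOTENAME(@tableName) + N';
                  DROP TABLE dbo.' + QUOTENAME(@tableName) + N';';
            EXEC sp_executesql @sql;
        END
        GO

    This keeps the underscore headers confined to the throwaway staging table, so the Import CSV to Database plugin always sees matching names, and the rename happens once, in the column mapping of the INSERT ... SELECT.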