Hello Appian Community,
I've been given a somewhat complex JSON format to ingest into Appian and stage into the DB. The data is intended for reference/comparison/reporting. I've tested the usual readtextfromfile/fromJson/jsonPath functions to grab each attribute, but it feels very tedious and clunky to write a JSON path expression for every attribute I need. Could I get any advice on a more efficient way to parse this data and get it into the DB?
Data Context:
One JSON file will contain thousands of "FullProduct" entities, so I will be dealing with files of at most 25MB. This structure is the final format, so there is no altering the original file.
JSON Sample:
{
  "FULLPRODUCTFILE": [
    {
      "FullProduct": {
        "ID": "*dfg333",
        "Details": {
          "StatusType": "LIVE",
          "Status": "ACTIVE"
        },
        "Classification": [
          {
            "ID": "aaa111",
            "Classification": {
              "Class": {
                "ClassificationID": "ASDAFF",
                "ClassName": "Article 8",
                "ClassDescription": "Description",
                "ClassValue": "ARTICLE8"
              }
            },
            "ClassificationPurpose": "Purpose2"
          },
          {
            "ID": "aaa222",
            "Classification": {
              "Class": {
                "ClassificationID": "EETEWTT",
                "ClassName": "Article 2",
                "ClassDescription": "Description",
                "ClassValue": "ARTICLE2"
              }
            },
            "ClassificationPurpose": "Purpose"
          }
        ],
        "Meetings": [
          {
            "ID": "dfgd22f",
            "MeetingNumber": 2333,
            "MeetingDate": "2024-12-18T12:00:00.000",
            "MeetingDepartment": "DEPARTMENT 1",
            "MeetingComment": "test"
          }
        ],
        "Case": {
          "CaseDocument": "Document URL",
          "CaseDocumentDate": "2024-12-18T12:20:57.000"
        },
        "AllMarks": [
          {
            "ID": "hfhgsgfh",
            "MarksType": "Primary",
            "Marks": {
              "MarkDetails": {
                "MarkName": "TEST_NAME",
                "MarkType": "TYPE",
                "MarkDocument": { "MarkDocName": "TEST1" },
                "MarkNotes": { "MarkComment": "Test COMMENT" }
              }
            }
          }
        ],
        "Rules": {
          "RuleYear": "2024",
          "RuleSet": 2
        }
      }
    }
  ]
}
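One pattern that avoids writing a jsonPath expression per attribute is to call a!fromJson() once on the file text and then walk the resulting Dictionary with a!forEach() and dot notation. A minimal sketch against the sample above, assuming local!jsonText already holds the file contents (e.g. from readtextfromfile) and flattening the Classification children into one row per entry; the local variable names and output field names are illustrative, not anything Appian prescribes:

```
a!localVariables(
  /* Parse the whole file once into a Dictionary */
  local!json: a!fromJson(local!jsonText),
  /* One row per Classification entry, carrying the parent product ID */
  a!forEach(
    items: local!json.FULLPRODUCTFILE,
    expression: a!localVariables(
      local!product: fv!item.FullProduct,
      a!forEach(
        items: local!product.Classification,
        expression: a!map(
          productId: local!product.ID,
          classificationId: fv!item.ID,
          className: fv!item.Classification.Class.ClassName,
          classValue: fv!item.Classification.Class.ClassValue,
          purpose: fv!item.ClassificationPurpose
        )
      )
    )
  )
)
```

The same shape of loop works for Meetings and AllMarks, each producing rows for its own child table keyed by the parent product ID. Dot-notation indexing into the Dictionary returned by a!fromJson has worked for me, but verify it against your structure; fall back to index()/property() with explicit keys and defaults if any node can be absent.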
Do you need to do this frequently?
Where are these files coming from, and how do you plan to upload them into Appian?
When this data is coming from another system, why not directly connect to it?
This file is received via SFTP and dropped into an Appian folder to be processed daily. The external system the JSON originates from does not have a usable API at the moment, so a file-based subscription is our only route.
The daily files we receive will be closer to 5MB-10MB about 90% of the time (incremental files). We do still need to support larger files up to about 25MB (full-load files) for database refreshes.
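Given the 25MB full-load case, it is probably safer to split the flattened rows into fixed-size batches and write each batch in a looping process node around a Write to Data Store Entity (or Write Records) step, rather than attempting one huge write. A sketch of the batching expression, assuming local!rows holds the flattened list; the batch size and variable names are illustrative, and note that index() with a list of positions and an empty default is relied on here to truncate the final, partial batch — confirm that behavior in your environment:

```
a!localVariables(
  local!batchSize: 1000,
  local!batchCount: ceiling(count(local!rows) / local!batchSize),
  /* One sublist per batch: positions past the end of local!rows
     fall back to the empty default in the last batch */
  a!forEach(
    items: enumerate(local!batchCount),
    expression: index(
      local!rows,
      enumerate(local!batchSize) + (fv!item * local!batchSize) + 1,
      {}
    )
  )
)
```

Each resulting sublist can then be passed to one iteration of the looping write node, which keeps individual transactions small for both the 5MB-10MB daily files and the 25MB refreshes.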