Overview
The Appian Log Reader Plug-in provides functions for reading and parsing Appian log files, including readCsvLogPaging, tailCsvLogPaging, and csvToTextArray.
Would it be possible to create a new version of the readCsvLog functions that preserves the quotes around text fields as they appear in the original CSV file? In particular, the "Error Message" column is liable to contain commas, and since each log row is returned as plain text, we're forced to use split() on commas in order to build a dictionary of data. But the function also strips the quote marks from strings that contain commas, so we have no reliable way to tell which commas are delimiters and which are part of a value without making big guesses. I bring this up now because yet another corner case has cropped up in the heuristics I use to read the row data, causing extra headaches.
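For context, this is standard CSV quoting behavior: a field containing a delimiter is wrapped in quotes, and splitting on commas only works if those quotes survive. A minimal Python sketch (the row values here are made up for illustration):

```python
import csv
import io

# Hypothetical log row: the "Error Message" cell contains a comma,
# so the raw CSV file wraps that cell in quotes.
raw = '2023-01-05 12:00:00,ERROR,"Timeout occurred, retrying",worker-1'

# Naive split on commas breaks the quoted cell apart: 5 pieces, not 4.
naive = raw.split(",")
print(len(naive))  # 5

# A real CSV parser honors the quoting and returns the 4 logical cells,
# with the comma preserved inside the error message.
cells = next(csv.reader(io.StringIO(raw)))
print(cells)
# ['2023-01-05 12:00:00', 'ERROR', 'Timeout occurred, retrying', 'worker-1']
```

The point being: as long as the quotes are passed through intact, any compliant CSV parser can recover the cells; once they're stripped, no amount of splitting logic can.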
Honestly, I'm not sure why this function doesn't return CDT or at least JSON data - that would make it far less of a headache to use.
As a concrete example, here's a row in the CSV file itself showing the error message wrapped in quotes:
Whereas here's the same row, straight out of the readCsvLogPaging() function (notice the quotes have been stripped by the plug-in):
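Since the actual rows aren't reproduced here, the following sketch (with made-up values) illustrates why the stripped form is unrecoverable: once the quotes are gone, even a spec-compliant parser can't distinguish the embedded comma from a delimiter.

```python
import csv
import io

# Hypothetical row as the plug-in returns it: the quotes around the
# error message have already been stripped.
stripped = '2023-01-05 12:00:00,ERROR,Timeout occurred, retrying,worker-1'

# Even a correct CSV parser now sees 5 cells; the information that
# "Timeout occurred, retrying" was a single cell is simply gone.
cells = next(csv.reader(io.StringIO(stripped)))
print(len(cells))  # 5
```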
CC: Mark Talbot, Sam Zacks, April Schuppel
Hey Mike, Somnath Dey is the owner of the plug-in; I'll let him know about this. I did add a function called csvToTextArray. My expectation is that this function should correctly parse any CSV line returned by the smart service. I submitted the plug-in update tonight.
At this point, this plug-in seems pretty much abandoned. Any additional feedback here would be appreciated.
Michael Chirlin - are there any updates as to who's in charge of maintaining this plug-in? Is it being actively maintained, and is there any chance the issues and inconsistencies I've enumerated will be addressed in the foreseeable future?
Checking back in Mark Talbot / Somnath Dey: I can confirm that "csvToTextArray" fails to correctly parse CSV rows returned by the original "readCsvLogPaging" function, which (as I noted quite a while ago) incorrectly strips the quote escaping from CSV text containing a comma, causing one cell's worth of data to be treated as two cells.
As I noted elsewhere, that seems to have been fixed in the "tailCsvLogPaging" function (passing a row returned from that function through "csvToTextArray" yields the expected number of fields, even when a cell includes a comma). However, there still seem to be unresolved issues even with that one: for one, the "headers" value *only* ever returns blank, and with no "start index" parameter (as I noted a year or so ago) I can't implement my use case of building a grid that users can page through, starting of course with most-recent-first.
Any chance of some changes to harden the behavior and get these functions acting consistently? I have an Admin tool that queries error messages from the log, but I have to bend over backwards to parse the rows carefully enough that it doesn't blow up on me, and it seems like every other week I need to add yet another heuristic to handle a comma appearing in a data row in some new way. That's not really scalable.