Appian Log Reader Plug-in
Plug-in Notes:
Hello,
I'm trying to use this plug-in to get values from records_usage.csv as part of our GDPR solution. I can read the file with readcsvlog, but filtering on Timestamp doesn't seem to work. When I run this:
readcsvlog( csvPath: "/audit/records_usage.csv", timestampColumnName:"Timestamp", timestampStart: a!subtractDateTime(startDateTime: today(), days:100), timestampEnd: today() )
the function returns 0 rows.
Other filters work, for example:
readcsvlog( csvPath: "/audit/records_usage.csv", filterColumName:"Timestamp", filterOperator:"startsWith", filterValue:"1 Feb 2023")
But my goal is to retrieve a period (a date range), and this kind of filter doesn't support "between".
Regards
Jean-Alain
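For reference, the "between"-style timestamp filter being requested amounts to the following Python sketch. The file layout, column values, and date format here are made up for illustration; this is not the plug-in's implementation or data.

```python
import csv
import io
from datetime import datetime

# Hypothetical sample standing in for a records-usage CSV log.
SAMPLE = """Timestamp,User,Record Type
1 Feb 2023 09:15,alice,Customer
15 Mar 2023 11:30,bob,Order
1 Jan 2020 08:00,carol,Customer
"""

def filter_between(csv_text, column, start, end, fmt="%d %b %Y %H:%M"):
    """Return rows whose timestamp column falls within [start, end]."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if start <= datetime.strptime(row[column], fmt) <= end]

# Keep only rows from calendar year 2023.
rows = filter_between(SAMPLE, "Timestamp",
                      datetime(2023, 1, 1), datetime(2023, 12, 31))
```

Only the two 2023 rows survive the range filter; the 2020 row is excluded.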
Good day. For one of our clients' requirements, we are trying to extract the data admin logs from the .log files using this plug-in. We are able to extract the logs from the main file "rdbms-audit.log.yyyy-MM-dd", but we are not able to extract from the rollover files of the main file, such as "rdbms-audit.log.yyyy-MM-dd.1", "rdbms-audit.log.yyyy-MM-dd.2", etc.
We would appreciate your support/guidance on this challenge. Thank you.
At this point, this plug-in seems pretty much abandoned. Any additional feedback here would be appreciated.
I am having the same issue and have not found a resolution as of yet. Has anyone had any success implementing this on HA?
yes.
Hi, is this plug-in compatible with version 22.3?
Michael Chirlin - are there any updates on who's in charge of maintaining this plug-in? Is it being actively maintained, and is there any chance that the issues or inconsistencies I've enumerated will be addressed in the foreseeable future?
Checking back in, Mark Talbot / Somnath Dey: I can confirm that "csvToTextArray" fails to correctly parse CSV rows returned by the original "readCsvLogPaging" function, which (as I noted quite a while ago) incorrectly strips quote escaping from CSV text containing a comma, causing one cell's worth of data to be treated as two cells whenever it contains a comma.
As I noted elsewhere, that seems to have been fixed in the "tailCsvLogPaging" function (passing a row returned by that function through "csvToTextArray" yields the expected number of fields, even when a cell includes a comma). However, there still seem to be unresolved issues with that one: for one, the "headers" value seems to *only* return blank, and additionally, with no "start index" (as I noted a year or so ago), I'm unable to implement my use case of building a grid that lets users page through the log, most-recent-first.
Any chance of some changes to harden the behavior and get these functions acting consistently? I have an Admin tool that queries error messages from the log, but I have to bend over backwards to parse the rows carefully enough that it doesn't blow up on me, and it seems like every other week I need to install yet another heuristic to sanitize a comma appearing in a data row in a new way. It's not really scalable.
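The comma failure described above (a quoted cell containing a comma being split into two cells once quote escaping is stripped) is exactly what RFC 4180-style quoting exists to prevent. A minimal Python illustration with made-up data, not the plug-in's actual code:

```python
import csv
import io

# A CSV row whose middle cell contains a comma, so it is quoted per RFC 4180.
line = 'ERROR,"Something failed, badly",2023-02-01\n'

# Naive comma-splitting (roughly what happens once quote escaping has been
# stripped) yields four cells instead of the expected three.
naive = line.strip().split(",")

# A quote-aware CSV parser honors the quoting and returns three cells,
# with the embedded comma preserved inside the middle cell.
parsed = next(csv.reader(io.StringIO(line)))
```

This is why stripping quote escaping before handing rows to a tokenizer can never be patched reliably with per-case heuristics: the quoting is the only thing distinguishing a delimiter from data.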
This has (apparently) been addressed in a more recent reply. In my environment, though, I'm unable to test or confirm one way or the other.