Log Reader

Overview

Plug-in Notes:

  • Allows only System Administrators or members of the Designers group to access log data.  
  • Does not load files outside the logs directory or files that do not end in .log.* or .csv.*.
  • In multi-application-server environments, logs are shown from only a single application server.
  • Custom headers can be specified as an optional parameter, intended for .csv files that don't contain a header row (a minimal sketch follows this list).
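
For example, a sketch of supplying custom headers for a header-less .csv (the parameter name "headers" and the column names here are illustrative assumptions, not confirmed by the plug-in documentation):

    readcsvlog(
        csvPath: "/audit/records_usage.csv",
        /* "headers" is a hypothetical parameter name; check the plug-in
           documentation for the exact name and expected format. */
        headers: {"Timestamp", "User", "Record Type"}
    )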

Key Features & Functionality

The Appian Log Reader Plug-in contains functionality to:

  • Read files in the Appian /log folder (.csv or .log) and return them as structured data (a minimal sketch follows this list)
  • Aggregate data from a .csv file
  • Serve as the source for a service-backed record for viewing logs
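
A minimal sketch of the first item, reading a .csv file from the logs directory as structured data (readcsvlog and csvPath are taken from the usage examples quoted in the comments below; the path is only an example):

    readcsvlog(
        /* Paths are resolved inside the Appian logs directory; files
           outside it are not loaded (see Plug-in Notes above). */
        csvPath: "/audit/records_usage.csv"
    )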
Comments

  • Hello,

    We have configured our Appian environment in HA, and we have seen in some comments that accessing the logs has to be done by putting the server prefix in the log path.

    Is there any way to extract all the log data from the different servers in HA?

    We need to be able to read directly from the logs, without specifying any server in the path, so that we always have up-to-date information from all servers.

    Thank you very much.

  • Hello,

    I'm trying to use this plug-in to get values from records_usage.csv in the context of our GDPR solution. I can read the file with readcsvlog, but filtering on Timestamp doesn't seem to work. When I run this:

    readcsvlog(
        csvPath: "/audit/records_usage.csv",
        timestampColumnName: "Timestamp",
        timestampStart: a!subtractDateTime(startDateTime: today(), days: 100),
        timestampEnd: today()
    )

    The function returns 0 rows.

    Other filters work, for example:

    readcsvlog(
        csvPath: "/audit/records_usage.csv",
        filterColumName: "Timestamp",
        filterOperator: "startsWith",
        filterValue: "1 Feb 2023"
    )

    But my goal is to retrieve a period, and this kind of filter doesn't support "Between" (a possible workaround is sketched below).

    Regards

    Jean-Alain
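
    One possible workaround, sketched on the assumption that the timestamp parameters expect full DateTime values rather than the Date returned by today() (datetime() and now() are standard Appian functions; whether this resolves the zero-row result is unverified):

    readcsvlog(
        csvPath: "/audit/records_usage.csv",
        timestampColumnName: "Timestamp",
        /* Hypothetical fix: pass explicit DateTime bounds instead of a Date.
           Unverified against the plug-in's actual behavior. */
        timestampStart: datetime(2023, 2, 1, 0, 0, 0),
        timestampEnd: now()
    )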

  • Good day. For one of our client's requirements, we are trying to extract the data admin logs from the .log files using this plug-in. We are able to extract the logs from the main file "rdbms-audit.log.yyyy-MM-dd"; however, we are not able to extract from the rotated extensions of the main file, such as "rdbms-audit.log.yyyy-MM-dd.1", "rdbms-audit.log.yyyy-MM-dd.2", etc.

    We would appreciate your support/guidance on the above challenge. Thank you.

  • At this point, this plug-in seems pretty much abandoned.  Any additional feedback here would be appreciated.

  • I am having the same issue and have not found a resolution yet. Has anyone had any success implementing this on HA?

  • Hi, is this plug-in compatible with version 22.3?

  • Are there any updates as to who's in charge of maintenance on this plug-in? Is it being actively maintained, and is there any chance of the issues or inconsistencies I've enumerated being addressed in the foreseeable future?

  • Checking back in, I can confirm that "csvToTextArray" fails to correctly parse CSV rows as returned by the original "readCsvLogPaging" function, which (as I previously noted quite a while ago) incorrectly strips quote escaping from CSV text containing a comma, basically causing one cell's worth of data to be treated as two cells (a sketch at the end of this comment illustrates this).

    As I noted somewhere, that seems to have been fixed in the "tailCsvLogPaging" function (i.e., passing a returned row from that function through "csvToTextArray" returns the expected number of fields, even when a row cell includes a comma). However, there still seem to be unresolved issues even with that function: for one, the "headers" value seems to *only* return blank, and additionally, with no "start index" (as I noted a year or so ago) it leaves me unable to implement my use case of creating a grid and allowing users to page through it (starting, of course, with most-recent-first).

    Any chance of some changes to harden the behavior a bit and get these functions acting consistently? I have an Admin tool that queries error messages from the log, but I have to bend over backwards to parse the rows carefully enough that it doesn't blow up on me, and it seems like every other week I need to add yet another heuristic to sanitize a comma appearing in a data row in a new way. It's not really scalable.
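
    To make the quote-escaping issue above concrete, a minimal sketch, assuming csvToTextArray accepts a single line of CSV text (the sample row is invented for illustration):

    /* A correctly quoted CSV row: three fields, the third containing a comma. */
    csvToTextArray("2023-02-01 10:15:00,ERROR,""Timeout, retrying""")
    /* Expected: 3 fields, with the comma kept inside the third field.
       Per the report above, rows coming from readCsvLogPaging have their
       quote escaping stripped first, so the same data parses as 4 fields. */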