Log Reader

Overview

Plug-in Notes:

  • Allows only System Administrators or members of the Designers group to access log data.  
  • Does not load files outside the logs directory or files that do not end in .log.* or .csv.*.
  • In multi-application-server environments, logs are shown from only a single application server.
  • Custom headers can be specified as an optional parameter; this is intended for .csv files that do not contain a header row (see the example below).
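
    For example, a minimal sketch of supplying custom headers when reading a headerless .csv file. The parameter names follow the readcsvlogpagingwithheaders() snippet shown in the comments below; the file path and header names are illustrative assumptions, not part of the plug-in documentation:

    readcsvlogpagingwithheaders(
      csvPath: "custom_audit.csv",  /* assumed file in the logs directory */
      startIndex: 1,
      batchSize: 100,
      headers: {
        /* names applied to the columns because the file has no header row */
        "loggedInTime",
        "loggedInUser",
        "ipAddress"
      },
      filterColumName: null,
      filterOperator: null,
      filterValue: null,
      timestampColumnName: null,
      timestampStart: null,
      timestampEnd: null
    )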

Key Features & Functionality

The Appian Log Reader Plug-in contains functionality to:

  • Read files in the Appian /log folder (.csv or .log) and return them as structured data
  • Aggregate data from a .csv file
  • Serve as the source for a service-backed record for viewing logs
  • The Log Reader application provided with the download demonstrates a service-backed record for viewing logs, as well as reports on specific log files. Administrators can view reports on system health over time, including design best practices, system load, and database performance. The application also contains a process that checks details from system.csv and alerts administrators if memory or CPU utilization exceeds a threshold.
  • Tail a log file with tailcsv and taillog. Tail is optimized for reading the last few lines of a large log file and performs much better than the other log functions for that use case, but it is still an expensive operation, so avoid tailing an entire file from the end to the beginning. Use the batch size and timestamp filters to limit the number of lines read by the tailcsv and taillog functions (see the sketch after this list).
  • Parse a line of text in CSV format and return it as a text array
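
    As a rough illustration of the tail functions, here is a minimal sketch of limiting a tail read with the batch size and a timestamp filter. The tailcsv parameter names and the column name are assumptions modeled on the paging functions, not confirmed signatures, so check the plug-in documentation before relying on them:

    tailcsv(
      csvPath: "login-audit.csv",        /* file in the logs directory */
      batchSize: 50,                     /* read at most the last 50 lines */
      timestampColumnName: "Timestamp",  /* assumed parameter and column names */
      timestampStart: now() - 1,         /* only lines from roughly the last day */
      timestampEnd: now()
    )
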
  • For a 3-node system, can I specify the server from which it reads?

  • Thanks Mike. An additional column was the issue; there was an extra attribute after the upgrade.

  • Download the log file in question and look back through the historical data (assuming it's one that spans multiple days) and check whether the number of columns has increased since the 20.4 update.  As I mentioned in some older posts here, the function fails when the number of columns is inconsistent.  As mentioned, you can try the new "tail" functions, which should read the most recent row(s) from the log file and would bypass this issue.

    If the log file you're trying to read is the type that's rolled over on a daily basis, on the other hand, I'd still start out by downloading one of the current files, but this time maybe just check that its columns are still the ones you're trying to reference in your query.  I'm not clear which log file you're looking at since I don't know what's in your cons!SC_AUDIT_FILEPATHS constant.

  • Hi All,

    After upgrading our on-premise environment to version 20.4, readcsvlogpagingwithheaders() is not working as expected.

    Whenever we pass the headers attribute along with the filter options, the output is null. Below is the code snippet:

    readcsvlogpagingwithheaders(
      csvPath: cons!SC_AUDIT_FILEPATHS[1] & if(
        local!date = today(),
        null,
        "." & datetext(local!date, "yyyy-MM-dd")
      ),
      startIndex: 1,
      batchSize: -1,
      headers: {
        "loggedInTime",
        "loggedInUser",
        "attempt",
        "ipAddress",
        "source",
        "agent"
      },
      filterColumName: "source",
      filterOperator: "=",
      filterValue: "Portal",
      timestampColumnName: null,
      timestampStart: null,
      timestampEnd: null
    )

    Any suggestions to resolve this issue?

    Thanks.

  • As an aside, it's majorly frustrating and confusing that Community fails to stack the in-thread replies in any comprehensible order here.

  • That sounds good - for clarification, does your new function assume the CSV text row will be "quote escaped" like it is in the original CSV file, or with quotes stripped, as returned by the current "readCsvLog" functions?

    Also, do you mean the function has been added to this plug-in, at least once the update is published?

  • I was figuring a start index would work the same way it does normally, except of course that it would imply positions from the end of the file instead of from the start of the file.

    My use case is that I want to create a paging grid of process errors, showing the most recent ones first and otherwise pageable like normal.  Without a relative start index in the function, I have no direct way of doing this other than increasing my batch size by X increments and then artificially transforming the resulting query to trim to my desired number.  That seems like a bit of an unnecessary pain when the function could just include the ability to pass a "start [from the end] index".
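
    Roughly, the workaround I mean looks like the sketch below; the tailcsv parameter names, the file name, and the assumption that rows come back newest-first are guesses on my part rather than confirmed behavior:

    a!localVariables(
      local!pageSize: 25,
      local!pageNumber: 3,
      /* read enough rows from the end to cover every page up to the requested one */
      local!rows: tailcsv(
        csvPath: "example.csv",                        /* illustrative file name */
        batchSize: local!pageSize * local!pageNumber   /* grows with the page number */
      ),
      /* keep only the requested page; assumes local!rows is ordered newest-first */
      index(
        local!rows,
        enumerate(local!pageSize) + (local!pageNumber - 1) * local!pageSize + 1,
        null
      )
    )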

  • Hi All,

    I just noticed that whenever I read a login-audit file, the first line is always skipped while fetching the data. Is there any way to overcome this?

  • Thanks for the heads up on this one, I'll investigate.

  • The lack of a start index was an intentional design decision; the tail functions should always start from the end and work their way backwards.