Log Reader

Overview

Plug-in Notes:

  • Allows only System Administrators or members of the Designers group to access log data.  
  • Does not load files outside the logs directory or files that do not end in .log.* or .csv.*.
  • In multi-application-server environments, logs are shown from only a single application server.
  • Custom headers can be specified as an optional parameter; this is intended for .csv files that do not contain a header row (see the sketch after this list).
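
As a rough illustration of what the custom-headers option is for, here is a minimal Python sketch of reading a headerless .csv with caller-supplied field names. The data and field names are invented, and the plug-in itself is exposed as Appian functions rather than Python; this only shows the general idea.

```python
import csv
import io

# An invented, headerless CSV row of the kind the optional
# custom-headers parameter is meant for.
raw = "2024-01-01 00:00:00,42,OK\n"

# Supplying field names up front plays the same role as passing custom
# headers to the plug-in: each value comes back keyed by name instead
# of by position.
reader = csv.DictReader(io.StringIO(raw),
                        fieldnames=["timestamp", "count", "status"])
for row in reader:
    print(row["timestamp"], row["count"], row["status"])
```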

Key Features & Functionality

The Appian Log Reader Plug-in contains functionality to:

  • Read files in the Appian /log folder (.csv or .log) and return them as structured data
  • Aggregate data from a .csv file
  • Serve as the source for a service-backed record for viewing logs
  • Power the Log Reader application provided with the download, which demonstrates a service-backed record for viewing logs as well as reports on specific log files. Administrators can view reports on system health over time, including design best practices, system load, and database performance. The application also contains a process that checks details from system.csv and alerts administrators if memory or CPU utilization exceeds a threshold.
  • Tail a log file with tailcsv and taillog. Tail is optimized for reading the last few lines of a large log file; reading an entire file from end to beginning this way is expensive and not advisable. For just the last few lines of a very large file, however, tail performs much better than the other log functions. Use the batch size and timestamp filters to limit the number of lines the tailcsv and taillog functions read (see the sketch after this list).
  • Parse a line of text in CSV format and return it as a text array
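
To make the cost model behind tail concrete, here is a minimal Python sketch of the general technique: seek to the end of the file and read fixed-size chunks backwards until enough lines have been collected. This illustrates why tailing the last few lines is cheap while walking a whole file backwards is not; it is not the plug-in's actual implementation.

```python
import os

def tail_lines(path: str, n: int, chunk_size: int = 8192) -> list[str]:
    """Return roughly the last n lines of a file without scanning it
    from the start - the property that makes tail cheap for recent
    entries but a poor fit for reading an entire large log."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        data = b""
        # Step backwards in fixed-size chunks until enough newlines
        # have been seen (or the start of the file is reached).
        while pos > 0 and data.count(b"\n") <= n:
            step = min(chunk_size, pos)
            pos -= step
            f.seek(pos)
            data = f.read(step) + data
    return data.decode("utf-8", errors="replace").splitlines()[-n:]

# Each chunk read is O(chunk_size), so fetching the last few lines of a
# multi-gigabyte log touches only its tail; reading the whole file this
# way would still touch every byte, which is the expensive case above.
```
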
Anonymous
  • Would it be possible to create a new version of the readCsvLog functions that doesn't strip out the quotes present in the original CSV file? In particular, the "Error Message" column is liable to contain text with commas, and since each log row is returned as plain text, we're forced to use the split() function on commas to build a dictionary of data. But because the function also removes the quote marks around strings that contain commas, we have no way to verify we're splitting in the right places without making big guesses. I bring this up now because yet another corner case has cropped up in the heuristics I was using to read the row data, causing extra headaches (a sketch of the problem appears below).

    Honestly, I'm not sure why this function doesn't return a CDT or at least JSON data - that would make it far less of a headache to use.

    As a concrete example, here's a row in the CSV file itself showing the error message wrapped in quotes:

    Whereas here's the same row, straight out of the readCsvLogPaging() function (notice the quotes have been stripped by the plug-in):

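To make the quoting problem above concrete, here is a minimal Python sketch with an invented row (the plug-in returns rows as plain text; Python's csv module here just stands in for any quote-aware parser):

```python
import csv
import io

# An invented row in the shape described above: the "Error Message"
# field is quoted in the file because it contains commas.
line = '2024-01-01 00:00:00,rule!foo,"Expression error, at line 3, token X"\n'

# Naive splitting on commas shreds the quoted field - the corner-case
# guessing game described in the comment above.
print(line.rstrip("\n").split(","))
# ['2024-01-01 00:00:00', 'rule!foo', '"Expression error', ' at line 3', ' token X"']

# A quote-aware parser keeps the field intact, which is why preserving
# the original quotes in the returned text (or returning structured
# data) would remove the guesswork.
print(next(csv.reader(io.StringIO(line))))
# ['2024-01-01 00:00:00', 'rule!foo', 'Expression error, at line 3, token X']
```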

  • Edit: I just tried the newer "tailCsvLogPaging()" function, and I notice it *does* include the quotes around the text column from the same row in question. Unfortunately, the "tail..." function versions don't seem to have a way of setting a start index for paging, so I'm not sure how to incorporate them into the tool I previously created for paging through the Design_errors log.

  • As an aside, it's majorly frustrating and confusing that Community fails to stack the in-thread replies in any comprehensible order here.
