AI Analytics and Monitoring

Overview

Monitoring model performance over time: By versioning models and tracking execution statistics like accuracy, the application provides visibility into how well models are performing and where improvements may be needed. This helps ensure models remain effective as data and use cases evolve.
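
To make the idea concrete, here is a minimal sketch of what per-version execution tracking could look like. The ModelVersion record and its record_execution and accuracy members are illustrative assumptions, not the application's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    """Illustrative per-version record of execution statistics."""
    model_name: str
    version: str
    correct: int = 0
    total: int = 0

    def record_execution(self, was_correct: bool) -> None:
        # Each execution outcome updates the running counts for this version.
        self.total += 1
        if was_correct:
            self.correct += 1

    @property
    def accuracy(self) -> float:
        # Running accuracy; 0.0 until the first execution is recorded.
        return self.correct / self.total if self.total else 0.0

# Comparing versions side by side shows whether a new version is an improvement.
v1 = ModelVersion("invoice-classifier", "1.0")
v2 = ModelVersion("invoice-classifier", "2.0")
for outcome in (True, True, False):
    v1.record_execution(outcome)
for outcome in (True, True, True, False):
    v2.record_execution(outcome)
print(f"v1: {v1.accuracy:.2f}  v2: {v2.accuracy:.2f}")  # v1: 0.67  v2: 0.75
```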

Gathering user feedback: Features like the prompt playground and feedback collection let users test models while surfacing insights into how output is received and where errors occur most often. This feedback loop helps drive better model development.

Standardizing model deployment: The application aims to standardize how models are configured, versioned, tested, and deployed across environments. This improves reproducibility and strengthens governance over how AI/ML solutions are managed within the business.

Key Features & Functionality

Prompt Playground - Allows users to quickly test AI responses without fully developing a model. Prompts can be exported and imported.
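
One plausible shape for prompt export/import is a simple JSON file, as in the sketch below; the field names and file layout are assumptions for illustration, not the application's documented format.

```python
import json

# Hypothetical prompt record; field names are assumptions for illustration.
prompt = {
    "name": "summarize-contract",
    "system": "You are a contracts analyst.",
    "user_template": "Summarize the key obligations in: {document_text}",
}

# Export: serialize the prompt to a portable JSON file.
with open("prompt_export.json", "w") as f:
    json.dump(prompt, f, indent=2)

# Import: load the prompt back, e.g. in another environment.
with open("prompt_export.json") as f:
    imported = json.load(f)
assert imported == prompt
```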

Classification Monitoring - Supports generative and ML-based classification models. Models are versioned and classification accuracy is monitored. Reconciliation is supported.
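
Classification accuracy here can be understood as the fraction of predictions that match the reconciled (human-verified) labels. A minimal sketch with illustrative data:

```python
def classification_accuracy(predicted: list[str], reconciled: list[str]) -> float:
    """Fraction of predictions matching the reconciled (human-verified) labels."""
    if len(predicted) != len(reconciled):
        raise ValueError("prediction and label lists must be the same length")
    if not reconciled:
        return 0.0
    matches = sum(p == r for p, r in zip(predicted, reconciled))
    return matches / len(reconciled)

# Illustrative data: one of four documents was misclassified.
predicted = ["invoice", "receipt", "invoice", "contract"]
reconciled = ["invoice", "invoice", "invoice", "contract"]
print(f"accuracy: {classification_accuracy(predicted, reconciled):.2f}")  # 0.75
```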

Extraction Development and Monitoring - Supports structured and unstructured extraction models. Fields can be configured manually or using an extraction tool. Models are versioned and extraction accuracy at the field level is monitored. Reconciliation is supported.
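
Field-level monitoring means accuracy is computed per extracted field rather than per document, so a model can score well on one field and poorly on another. A minimal sketch, assuming extracted and reconciled values arrive as parallel lists of field dictionaries:

```python
from collections import defaultdict

def field_level_accuracy(extracted: list[dict], reconciled: list[dict]) -> dict[str, float]:
    """Per-field accuracy: fraction of documents where the extracted value
    matches the reconciled value for that field. Illustrative only."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for got, expected in zip(extracted, reconciled):
        for field_name, expected_value in expected.items():
            total[field_name] += 1
            if got.get(field_name) == expected_value:
                correct[field_name] += 1
    return {f: correct[f] / total[f] for f in total}

extracted = [{"invoice_number": "INV-001", "amount": "100.00"},
             {"invoice_number": "INV-002", "amount": "250.00"}]
reconciled = [{"invoice_number": "INV-001", "amount": "100.00"},
              {"invoice_number": "INV-002", "amount": "205.00"}]
print(field_level_accuracy(extracted, reconciled))
# {'invoice_number': 1.0, 'amount': 0.5}
```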

Reconciliation - Both framework-provided and custom reconciliation forms are supported.
