AI Knowledge Assistant

Overview

The AI Knowledge Assistant is an innovative UI component tailored to revolutionize user interactions within the Appian platform. By harnessing the power of large language models, this tool offers an immersive chatbot experience, allowing users not only to converse with state-of-the-art Generative AI models, but also to source answers from their Appian Knowledge Center documents. This means that, beyond traditional data retrieval, users can now semantically search and engage with their private data. Designed with adaptability in mind, the AI Knowledge Assistant seamlessly integrates with various organizational themes, ensuring a cohesive brand experience.

Key Features & Functionality

  1. Intuitive Chat Experience: With the integration of large language models, users can have natural, flowing conversations with the system, making data retrieval feel less like querying and more like conversing.
  2. Semantic Document Search: Go beyond keyword searches. With the Appian AI Chatbot, users can chat with documents, extracting nuanced insights from the Appian Knowledge Center. This semantic search capability ensures that the information retrieved is contextually relevant and precise.
  3. Privacy-Centric Interactions: Recognizing the importance of data privacy, the Chatbot is designed to allow users to interact with their private data in a secure environment. This ensures that sensitive information remains protected while still being accessible.
  4. Customizable Themes: Every organization is unique, and the Chatbot celebrates this individuality. It comes with a range of customizable themes that can be tailored to resonate with your company's brand aesthetics.
  5. Vector Database Integration: At the core of its functionality, the Chatbot utilizes a vector database. This advanced technology allows for efficient and accurate document interactions, transforming the way users engage with written content (an illustrative configuration sketch follows this list).
  6. Document Security: The AI Knowledge Assistant enforces Appian document security, preventing users from accessing documents they do not have permission to view or edit.
  7. Response Streaming: The AI Knowledge Assistant delivers responses seamlessly by streaming information, providing users with a chatbot-like experience. This intuitive interface incorporates a convenient stop button, allowing users to halt the response stream at any point during the interaction. This feature enhances user control, enabling them to manage the flow of information according to their preferences and needs.
  8. Embedded PDF Viewer: Allows users to view referenced source documents directly within the component.
  9. Markdown Response: Provides styled responses rendered in markdown format with clear structure and visual appeal, featuring titles, bulleted lists, code blocks, and more.
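
For orientation, the sketch below shows roughly how such a component might be wired into a SAIL interface. Only the conversationSaveInto parameter and the system prompt are mentioned elsewhere on this page; every other function and parameter name here (aiKnowledgeAssistantField, llmConnectedSystem, vectorDbConnectedSystem, sourceFolder, accentColor, and the cons! constants) is a placeholder assumption, so consult the plug-in documentation for the actual signature.

```
a!localVariables(
  /* Running transcript; the component appends "user" and "assistant" messages
     here through its conversationSaveInto parameter. */
  local!conversation: {},
  /* HYPOTHETICAL component call: the real function and parameter names may differ. */
  aiKnowledgeAssistantField(
    /* Connected systems for the LLM and the vector database (assumed parameters). */
    llmConnectedSystem: cons!AI_KB_OPENAI_CS,
    vectorDbConnectedSystem: cons!AI_KB_VECTOR_DB_CS,
    /* Knowledge Center folder whose documents are searched semantically (assumed). */
    sourceFolder: cons!AI_KB_KNOWLEDGE_CENTER_FOLDER,
    /* Optional system prompt; see the markdown-tag note further down this page. */
    systemPrompt: "Answer questions using only the supplied documents.",
    /* Theme color so the chat matches the organization's branding (assumed). */
    accentColor: "#1c2833",
    conversation: local!conversation,
    conversationSaveInto: local!conversation
  )
)
```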

Notes:

  • The latest update includes token usage alongside response metadata within the "assistant" messages (a sketch of reading that usage follows this note). Please ensure you've updated to the most recent version of the plug-in.
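
For illustration only: assuming each "assistant" message in the saved conversation is a map carrying a usage entry, the totals could be summed as in the sketch below. The role, usage, promptTokens, completionTokens, and totalTokens keys are assumptions about the message shape, not documented field names.

```
a!localVariables(
  /* Stand-in for the value captured by conversationSaveInto: local!conversation.
     The "usage" shape below is an assumption for illustration only. */
  local!conversation: {
    a!map(role: "user", content: "What is our refund policy?"),
    a!map(
      role: "assistant",
      content: "Refunds are issued within 30 days...",
      usage: a!map(promptTokens: 812, completionTokens: 96, totalTokens: 908)
    )
  },
  /* Sum total tokens across the "assistant" messages. */
  local!totalTokens: sum(
    a!forEach(
      items: local!conversation,
      expression: if(
        index(fv!item, "role", "") = "assistant",
        tointeger(index(index(fv!item, "usage", a!map()), "totalTokens", 0)),
        0
      )
    )
  ),
  local!totalTokens
)
```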

  • Regarding "Included token usage info for OpenAI & Azure OpenAI request/response in the conversation saveInto"

    When viewing the value of conversationSaveInto: local!conversation, it does not show any token usage information.

  • Please update to the latest Vector Database v3.2.3. This should resolve this issue as well as your previous "Processing Documents" bug. Hope this helps!

  • Thank you for updating.

    However, after installing the update, we cannot get rid of the following message that blocks usage.

    There seems to be no change to the function name, or anything else, that we can make to resolve this error.

    Please review.

  • v3.1.4 Release Notes
    1. Security patch updated.
    2. Fixed minor bugs in the document uploading functionality.
    3. Handled the database lock error occurring in HA environments.
    4. Included token usage info for OpenAI & Azure OpenAI request/response in the conversation saveInto.

    IMPORTANT NOTE: If you are using this plug-in in production, open a support case and ask to increase the Heap Max for the app server by 1 GB. This will improve query performance and allow the plug-in to handle a larger number of concurrent users.

  • Hi,

    We have implemented this feature successfully in production. We increased the Heap Max for the app server by 1 GB beforehand.

    The feature works; however, it continues to show 'Processing Documents' in a user chat bubble together with a loading spinner. Even while that remains visible, the feature is usable.

    Is there anything we can do about this?
    Thank you.
  • The AI Knowledge Assistant does not work with the "Appian" mobile app on an iPhone. It just spins and does not allow you to type anything.

    It does work from a browser; it just does not work from the mobile app.

  • Hi andersonc6744,

    It looks like GPT is wrapping responses in a generic ```markdown``` fence instead of using specific markdown tags. We'll look into modifying the prompt we send behind the scenes, but in the meantime you can add "Make sure to never use the ```markdown``` tag. Always use specific markdown tags." to the system prompt, which should curb this behavior (a sketch of that workaround follows this note).
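
Sketch of that workaround, assuming the component exposes a system prompt parameter; the local variable below would be passed to whatever that parameter is actually called.

````
a!localVariables(
  local!basePrompt: "Answer questions using only the supplied documents.",
  /* Append the suggested instruction so the model stops wrapping whole answers
     in a generic ```markdown``` fence. */
  local!systemPrompt: local!basePrompt
    & " Make sure to never use the ```markdown``` tag. Always use specific markdown tags.",
  /* Pass local!systemPrompt to the component's system prompt parameter
     (parameter name not documented on this page). */
  local!systemPrompt
)
````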

  • I am using v3.1.3 with Vector DB v3.2.2 and Appian 24.2

    The chat response from AI does not display properly.

    When the problem occurs, you can see that the background color of the response does not match the chat background. The text is displayed on a solid white background, like a white box without borders. In addition, the text does not wrap properly; instead, a horizontal scroll bar is displayed under the white box.

  • Hi hansvt,

    Happy to help. Feel free to reach out to techpartners@appian.com with the component SAIL setup and the vector database configuration. I can attempt to debug from there.