Kafka Tools

Overview

KafkaTools provides the following smart services for publishing messages to and consuming messages from topics on Kafka servers.

Smart Services

  • Publish To Kafka
  • Consume From Kafka

To process the messages consumed from Kafka, it is recommended to use the Transaction Manager application. The models designed to process messages are configured and assigned through Transaction Manager job types. See the Transaction Manager documentation.
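The smart services abstract the Kafka client entirely; purely as an illustration of what a publish involves at the Kafka level, here is a minimal sketch using the standard Kafka Java client. The broker address, topic, key, and value below are placeholder assumptions, not plug-in configuration.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PublishSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-host:9092");              // placeholder broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // Roughly what a "Publish To Kafka" step amounts to: send a keyed message to a topic
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("my-topic", "my-key", "example message body"));
            }
        }
    }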

Key Features & Functionality

Please refer to the README for additional details.

Anonymous
  • Hi Sylvian/Team,

    As you know, the Kafka plug-in provides a Smart Service that allows a workflow to send a Kafka message to a designated topic. This Smart Service is configured as one of the workflow steps based on the business requirement. Since it is provided by Appian, its internal functioning is abstracted from consumers. In its current form, the Smart Service does not support retrieving certificates from AWS Secrets Manager at runtime; instead it expects a .jks file to be provided at runtime, pre-stored in the Appian internal document store.

    Issue description: We currently export the application and push it to Bitbucket as part of our CI/CD application deployment. We reviewed our code with the client's security team, and they found a .jks file in the content of the application code; as per our industry best practices, we should not store any .jks files in the Bitbucket code.

    1: Is there any alternative solution that avoids using a .jks file?

    2: Are there any other methods available to ensure that the .jks files are stored in the Appian Secured Document Center but not in the Bitbucket repository?

    Appreciate your response.
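    For background (the plug-in's internals are not public, so this is only context, not its actual code): a Kafka client secured with SSL is normally configured with the standard keystore/truststore properties sketched below, which is why a .jks file has to be resolvable at runtime; per the description above, the plug-in resolves it from a document stored in Appian. The paths and passwords here are placeholders.

        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        // The keystore and truststore are .jks files that must exist when the client starts.
        props.put("ssl.keystore.location", "/path/to/client.keystore.jks");     // placeholder path
        props.put("ssl.keystore.password", "changeit");                         // placeholder secret
        props.put("ssl.key.password", "changeit");                              // placeholder secret
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks"); // placeholder path
        props.put("ssl.truststore.password", "changeit");                       // placeholder secret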

  • Hi, 

    For the "Data Source Name", it is indeed the name of your business data source (the same name as the data source in the Admin Console).

    For the "Transaction Table Name", if you use a custom table instead of the tm_job_transaction, your table must have the same structure.

    Oracle example:

    ID (Number)
    JOB_TYPE_ID (Number) - controlled by a sequence for auto-increment
    CONTEXT_JSON (VARCHAR2)
    SCHEDULED_DATE (Timestamp)
    STATUS_ID (Number)
    TOPIC (VARCHAR2)
    PARTITION (Number)
    OFFSET (Number)
    KEY (VARCHAR2)

    And yes, we successfully retrieve Kafka messages with this plug-in.

    Regards

  • Hi Miguel Galán,

    Much appreciated for the help! Yep, after I updated it to "jdbc/Appian", at least it gives me an error message.

    Right now it is saying "Failed to construct kafka consumer".

    Did you successfully subscribe to Kafka via this plug-in?

    Thank you so much!

    Best Regards
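    (For reference, "Failed to construct kafka consumer" is the generic KafkaException the Kafka client raises when the consumer configuration is incomplete or invalid. As a point of comparison only, a plain Java consumer needs at least the properties sketched below before its constructor succeeds; the broker address, group id, and topic are placeholder assumptions, not this plug-in's inputs.)

        import java.time.Duration;
        import java.util.Collections;
        import java.util.Properties;
        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.KafkaConsumer;
        import org.apache.kafka.common.serialization.StringDeserializer;

        public class ConsumeSketch {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "kafka-host:9092");              // placeholder broker
                props.put("group.id", "appian-consumer");                       // placeholder group id
                props.put("key.deserializer", StringDeserializer.class.getName());
                props.put("value.deserializer", StringDeserializer.class.getName());

                // The constructor is where "Failed to construct kafka consumer" is thrown
                // when one of the properties above is missing or malformed.
                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder topic
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                        System.out.printf("%s-%d@%d: %s%n",
                                record.topic(), record.partition(), record.offset(), record.value());
                    }
                }
            }
        }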

  • Hi gavins5922,

    For "Data Source Name":

    You have to enter the JDBC data source name of your database as text, not the table name. Example: "jdbc/Appian" (if you have no other data source configured, this is the default data source)

    For "Transaction Table Name":

    I haven't tried this; I use the "tm_job_transaction" table. But does the table you are setting up have the same columns as tm_job_transaction? I guess that if it has the same structure, it should work...

    Hope this helps you

    Best regards

  • How do I set up "Data Source Name" and "Transaction Table Name"?

    For "Data Source Name":

            -  I have created a dummy table along with a data type, data store, and record type. Since the input type is text, I have tried putting in the table name, the data store name, and even the record type; none of them work.

    For "Transaction Table Name":

            -  Since it is linked to "tm_job_transaction" by default, I tried leaving it as-is and also tried using the table I created; neither of them works.

    When I run the process model, it does report success; however, nothing is received, no message arrives, and no table gets updated.

  • Our organisation uses Confluent Kafka with events/messages sent as JSON with a schema (Avro).
    Will Kafka Tools support Avro serialization and deserialization in the near future?

  • The solution my client adopted for this is to develop an external microservice as an intermediate layer between Appian and Confluent. The microservice reads the message in Avro format, transforms it to a String, and publishes it to a Kafka topic in Confluent in string format. So reading from Appian works correctly, because we end up reading a string type. I hope this information helps you :)
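    (Not having the actual microservice code, the outline below is only a rough sketch of that approach. It assumes Confluent's schema registry and KafkaAvroDeserializer; the broker address, registry URL, and topic names are placeholders.)

        import java.time.Duration;
        import java.util.Collections;
        import java.util.Properties;
        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.KafkaConsumer;
        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerRecord;

        public class AvroToStringBridge {
            public static void main(String[] args) {
                Properties consumerProps = new Properties();
                consumerProps.put("bootstrap.servers", "confluent-host:9092");            // placeholder
                consumerProps.put("group.id", "avro-bridge");                             // placeholder
                consumerProps.put("schema.registry.url", "http://schema-registry:8081");  // placeholder
                consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
                consumerProps.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");

                Properties producerProps = new Properties();
                producerProps.put("bootstrap.servers", "confluent-host:9092");            // placeholder
                producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
                producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

                try (KafkaConsumer<String, Object> consumer = new KafkaConsumer<>(consumerProps);
                     KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
                    consumer.subscribe(Collections.singletonList("orders-avro"));          // placeholder source topic
                    while (true) {
                        for (ConsumerRecord<String, Object> record : consumer.poll(Duration.ofSeconds(1))) {
                            // GenericRecord.toString() renders the Avro record as a JSON string,
                            // which Appian can then consume as plain text from the string topic.
                            String json = record.value().toString();
                            producer.send(new ProducerRecord<>("orders-string", record.key(), json)); // placeholder target topic
                        }
                    }
                }
            }
        }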

  • My client and I are facing exactly the same problem. Besides byte-array deserialization (with ByteArrayDeserializer), we also need to deserialize the Avro to be able to store a readable string in the TM database.

    Have you got any input, or have you come any further towards a solution to the problem?
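    (One possible approach, offered only as an assumption rather than something the plug-in does for you: run the raw bytes obtained via ByteArrayDeserializer through Confluent's KafkaAvroDeserializer yourself and store the resulting JSON string. The registry URL below is a placeholder.)

        import java.util.Collections;
        import java.util.Map;
        import io.confluent.kafka.serializers.KafkaAvroDeserializer;

        public class AvroBytesToString {
            // Turns a raw Confluent-Avro payload (as obtained with ByteArrayDeserializer)
            // into a readable JSON string suitable for storing in the transaction table.
            public static String toReadableString(byte[] payload, String topic) {
                Map<String, String> config =
                        Collections.singletonMap("schema.registry.url", "http://schema-registry:8081"); // placeholder
                try (KafkaAvroDeserializer deserializer = new KafkaAvroDeserializer()) {
                    deserializer.configure(config, false);             // false = configure as value deserializer
                    Object record = deserializer.deserialize(topic, payload);
                    return record == null ? null : record.toString();  // GenericRecord.toString() renders JSON
                }
            }
        }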

  • v1.4.1 Release Notes
    • Adds support for consuming from multiple topics

  • v1.4.0 Release Notes
    • Adds support for setting the Key and Partition when Publishing to Kafka