Pros and Cons of Process-Driven vs DB-Driven Tasks

Certified Senior Developer

Can someone please explain the major pros and cons of process-driven and DB-driven tasks, and in which use cases we should consider the DB-driven versus the process-driven approach?


  • Certified Lead Developer

    My general go-to approach is database-driven tasks. The reason is simple: Appian is continuously investing in the Data Fabric, but hasn't invested in foundational features for a!queryProcessAnalytics() in quite a while.

    At enterprise scale (whether that's high volume over a short period or low volume accumulated over a long time):

    • Organizations usually have database-related skillsets and technologies to analyze data, and it is often important to fold aggregate process data into those reports to get any meaningful productivity metrics.
    • Appian's process audit trails are not reportable in any automated way, but Record Events are. Custom audit updates are also straightforward to set up and include alongside Record Events.
    • A user taking action is almost always in the context of a record - i.e. the task is almost always tightly coupled to the data. So as a system design, it's not a stretch to put task metadata into the database with links to the data modified by the task. For what it's worth, this can be incredibly useful when working out how to action the results from Process Insights.
    • Compared to database-backed grids, process reports for task lists are not great for organizations that expect requirements to change over the lifetime of a project. The indirectly hard-coded structure of process reports (e.g. "c4") introduces a lot of complexity when updating task list aggregations for new requirements - particularly for common questions like "what's due in the next N days" or "sort these tasks by a custom priority" (see the query sketch after this list). In other words, I'm claiming that process reports are among Appian's anti-low-code elements these days.
    • With database-backed tasks, automations (RPA, externally-triggered REST APIs, scheduled jobs) can easily be integrated into the same process or record data model without the need to cancel existing tasks or deal with process-related state data (a minimal web API sketch also follows below). This decouples automations from user-based interactions, which is a huge time and stress saver when modifying things down the road.
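
    To make the task-list flexibility point concrete, here is a minimal sketch of what a database-backed task grid query could look like. The 'Task' record type and its fields (status, dueDate, priority) are hypothetical placeholders - substitute whatever your task data model actually defines:

        /* Open tasks due within the next 7 days, sorted by a custom priority */
        a!queryRecordType(
          recordType: 'recordType!Task',
          filters: a!queryLogicalExpression(
            operator: "AND",
            filters: {
              a!queryFilter(field: 'recordType!Task.fields.status', operator: "=", value: "OPEN"),
              a!queryFilter(field: 'recordType!Task.fields.dueDate', operator: "<=", value: today() + 7)
            }
          ),
          pagingInfo: a!pagingInfo(
            startIndex: 1,
            batchSize: 50,
            sort: a!sortInfo(field: 'recordType!Task.fields.priority', ascending: true)
          )
        )

    Because the task rows live in the database, a new filter or a custom sort is a one-line change to this query, rather than a rework of a process report's c0/c1/... column structure.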
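
    And a rough sketch of the automation side: an externally-triggered web API can complete the same database-backed task without touching any process instance. Again, the 'Task' record type, its fields, and the taskId query parameter are assumptions for illustration only:

        /* Web API expression: an external system (e.g. an RPA bot) marks a task row complete */
        a!writeRecords(
          records: {
            'recordType!Task'(
              'recordType!Task.fields.id': tointeger(http!request.queryParameters.taskId),
              'recordType!Task.fields.status': "COMPLETE",
              'recordType!Task.fields.completedBy': "RPA_BOT"
            )
          },
          onSuccess: a!httpResponse(statusCode: 200, body: a!toJson({result: "task completed"})),
          onError: a!httpResponse(statusCode: 500, body: a!toJson({result: "update failed"}))
        )

    No user task needs to be cancelled and no process state needs to be reconciled - the user-facing task grid simply stops showing the row once its status changes.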

    The most annoying (IMO) things that database-driven tasks lack are:

    • Concurrent task acceptance / locking - logic needs to exist to see if someone else is performing the same action on the same task (using a!queryProcessAnalytics(), a primary record identifier, and a task type); see the sketch after this list. It can be hard to get this right, but it's also one of those "easy when you know how" things.
    • For task escalations / SLAs, additional logic needs to be defined in the data model and task management processes. Usually this is as simple as a "dueDate" timestamp field. If automations need to happen based upon late tasks, those are usually scheduled jobs that handle late tasks in batches - i.e. not on a task-by-task basis. This bullet point isn't a drawback to me, but it is something to be aware of.
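
    As a rough illustration of the claim check described above, here is one way it can be sketched. The process report constant, the column stubs ("c2", "c3"), and the rule inputs are hypothetical - the real column indices depend entirely on how your report is configured:

        a!localVariables(
          /* Active process instances already working this record + task type.    */
          /* cons!TASK_LOCK_REPORT points at a process report over the relevant   */
          /* models; "c2" = primary record id, "c3" = task type in this sketch.   */
          local!inFlight: a!queryProcessAnalytics(
            report: cons!TASK_LOCK_REPORT,
            query: a!query(
              logicalExpression: a!queryLogicalExpression(
                operator: "AND",
                filters: {
                  a!queryFilter(field: "c2", operator: "=", value: ri!recordId),
                  a!queryFilter(field: "c3", operator: "=", value: ri!taskType)
                }
              ),
              pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 1)
            )
          ),
          /* true = someone else has already accepted this task, so block the action */
          local!inFlight.totalCount > 0
        )

    The same check can equally be run against the task table itself (e.g. an assignee column), depending on where you choose to record the claim.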