Process Upgrade Guidance

Certified Associate Developer

I would like to know the most effective strategy for handling old production processes that were designed around extensive, long-running process instances. In addition, we face numerous production (PRO) issues that require quick fixes from the support team. At the same time, enhancements are being developed to extend or improve the functionality of these processes. Problems arise when an issue requires modifying objects that are also being changed in the enhancements and are not yet stable. Another challenge is managing active instances that run old versions of the flow: when we try to release a new version of the process, conflicts can arise from incompatibility between the old instance version and the new process version. Appian suggests a method in its documentation, but I find it has several limitations. I am interested in how other projects handle these situations so I can learn from the best practices applied.

docs.appian.com/.../Process_Upgrade.html


  •
    Certified Lead Developer

    The best practice of avoiding long-running processes is now more than 10 years old. The scenarios you describe can be pretty complicated to handle. If the support effort is exceeding your limits, I suggest setting up a small project to change the process implementation and fix the active instances.

    In my projects I follow the approach described here with great success: https://appian.rocks/2023/03/20/design-long-running-tasks/

  •
    Certified Associate Developer
    in reply to Stefan Helzle

    What you suggest doesn't seem to solve my problem. Imagine that my team and I are in the middle of developing an enhancement to process 1 in the development environment. At the same time, the support team receives an urgent issue in production that requires modifications to process 1. We cannot deploy process 1 and its dependent objects to production because they include in-progress changes that could negatively impact the current operation of that environment.

  •
    Certified Associate Developer

    Hi Raul,

    Dependency problems caused by concurrent changes cannot be solved 100%, but they can be mitigated to a large extent:

    1.1 Avoid writing large expressions directly in script tasks or other unattended nodes in the process model; move them into expression rules instead. Whenever an issue arises, you can then deploy only the rule that caused the problem, not the process model (see the sketch after this list).

    1.2 The same applies to interfaces: break the interface down as much as you can into sub-interfaces and sub-rules. This avoids deployment conflicts.

    1.3 Creating a copy of the new interface (_V1, _V2, etc.) and using it in the process model will not affect existing instances of the process model; active instances will keep using the old interfaces, and the new one will be picked up by new process instances.

    1.4 In the worst case you may have to deploy the process model, which is unavoidable, but the practice of breaking code down into expression rules as much as you can will eliminate most of these conflicts during deployments.

    1.5 Also consider a redesign: keeping process instances active for long durations is not good practice; try redesigning using the newer records features and related actions.
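
    As a rough illustration of points 1.1 and 1.4, the output expression on a script task can be reduced to a single rule call so that a hotfix only requires deploying the rule, never the process model. This is only a sketch; the rule and variable names (ABC_calculatePriority, pv!requestType, pv!amount) are made up for the example.

      /* Output expression on the script task: kept as thin as possible */
      rule!ABC_calculatePriority(
        requestType: pv!requestType,
        amount: pv!amount
      )

      /* Body of the ABC_calculatePriority expression rule: all of the real
         logic lives here, so a fix can be released by deploying a new
         version of this rule only */
      if(
        ri!amount > 10000,
        "HIGH",
        if(ri!requestType = "ESCALATION", "MEDIUM", "LOW")
      )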

  •
    Certified Lead Developer
    in reply to Raúl Gómez Moya

    I hot-fix urgent PROD problems on PRE-PROD, which has the same code version as PROD, then deploy that fix to PROD and add a user story to finally apply it on DEV as well.

    Issues with active instances are handled case by case directly on PROD, depending on the issue.

    Issues that are not urgent are simply added to the normal development cycle and are developed and deployed as soon as possible.

    If this still does not fit your situation, then please help me to better understand it.

  • For resolving production issues while there are ongoing dev/test efforts for other enhancements, as Stefan notes, a Pre-Prod or "Hotfix" environment that is synced with production is very helpful. You can dev/test in that environment, push to prod, then [manually] make the same changes in the dev environment.

    Another method, if your timeline allows it, is to halt testing on the current upgrades, backfill your production application to TEST, fix and deploy from there, then re-deploy your Dev changes to Test and resume testing (after also adding the fix in Dev manually).

    For enhancements to processes with long-running instances, I always maintain a "version" data point in the CDT. I store this in a process variable (which saves to the CDT on launch). Any time I make changes such as adding data points to a CDT and its interfaces, I increment the version by 1; interfaces and other logic (sub-processes, expression rules, etc.) can then reference the version to decide whether to use the newer logic, e.g. showWhen: ri!CDT.version > 3.
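
    A minimal sketch of that version check inside an interface component, assuming a hypothetical CDT input called ri!order with a version field (the names are illustrative only, not from the post):

      /* New field added in version 4: hidden for instances started on older versions */
      a!textField(
        label: "Discount Code",
        value: ri!order.discountCode,
        saveInto: ri!order.discountCode,
        showWhen: ri!order.version > 3
      )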

    Another methodology that works well for me is completely avoiding passing CDTs to sub processes. In all of my current processes, the parent persists its data to the DB and passes only the unique ID to the sub process; the sub process queries back based on that ID to populate its own CDT and works from there. The sub process then knows which 'version' of the parent it was started from, and you can design the logic accordingly.
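
    As a rough sketch of the query the sub process might run when it starts, assuming a hypothetical data store entity constant cons!ABC_ORDER_ENTITY and an "id" primary key passed in as pv!orderId (both placeholders, not from the post):

      /* Load the parent's persisted row by ID into the sub process's own CDT */
      a!queryEntity(
        entity: cons!ABC_ORDER_ENTITY,
        query: a!query(
          filter: a!queryFilter(field: "id", operator: "=", value: pv!orderId),
          pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 1)
        )
      ).data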