I would like to know the most effective strategy for handling old production processes that were designed around extensive, long-running process models. In addition, we face numerous PROD issues that require quick fixes from the support team. Simultaneously, enhancements are underway to extend the functionality of these same processes. Problems arise when an urgent fix requires modifying objects that are also being changed as part of the enhancements and are not yet stable. Another challenge is managing active instances running old versions of the flow: when deploying a new version of the process, conflicts can arise from incompatibility between old instances and the new process definition. Appian suggests a method in their documentation, but I find it has several limitations. I am interested in how other projects handle these situations, so I can learn from the best practices applied.
docs.appian.com/.../Process_Upgrade.html
The best practice of avoiding long-running processes is now more than 10 years old. The scenarios you describe can be quite complicated to handle. If the support effort exceeds your limits, I suggest setting up a small project to change the process implementation and fix the active instances.
In my projects I follow the approach described here with great success: https://appian.rocks/2023/03/20/design-long-running-tasks/
What you suggest doesn't seem to solve my problem. Imagine that I am working on the evolution of process 1 in the development environment and my team is in the middle of this development. Simultaneously, the support team receives an urgent issue in the production environment that requires modifications to process 1. It is not possible to deploy process 1 and its dependent objects to production because they include changes that could negatively impact the current operation in that environment.
Hi Raul,
Dependency problems caused by parallel changes cannot be solved 100%, but they can be mitigated to a large extent:
1.1 Avoid writing large expressions directly in script tasks or other unattended nodes of the process model; move them into expression rules. Whenever an issue arises, you can then deploy only the rule that caused the problem, not the whole process model.
1.2 Do the same with interfaces: break them down into sub-interfaces and sub-rules as much as you can. This avoids deployment conflicts.
1.3 Creating a copy of an interface (_V1, _V2, etc.) and wiring the new copy into the process model will not affect existing instances: active instances keep using the old interface, while new instances pick up the new one.
1.4 In the worst case you may have to deploy the process model itself, which is unavoidable, but the practice of breaking code down into expression rules as much as you can will eliminate most such conflicts during deployments.
1.5 Also consider a redesign: keeping process instances active for long durations is not the right approach. Try redesigning with the newer records features and related actions.
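To illustrate point 1.1, here is a minimal SAIL sketch of pulling script-task logic out into a standalone expression rule (all names here are hypothetical, not from the original posts). The script task's output expression is reduced to a single rule call, so a fix to the calculation touches only the rule object and never the process model:

```sail
/* Script task output expression in the process model (stays stable):
 *   pv!total <- rule!MYAPP_calculateOrderTotal(order: pv!order)
 *
 * Body of the expression rule MYAPP_calculateOrderTotal, which can be
 * deployed on its own when the logic needs an urgent fix:
 */
a!localVariables(
  /* Line total per item: quantity times unit price */
  local!lineTotals: a!forEach(
    items: ri!order.items,
    expression: fv!item.quantity * fv!item.unitPrice
  ),
  /* Order total is the sum of all line totals */
  sum(local!lineTotals)
)
```

The same decomposition applies to unattended nodes generally: the thinner the expressions embedded in the process model, the less often the model itself has to be redeployed.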
I hot-fix urgent PROD problems on PRE-PROD which has the same code version as PROD. Then deploy that to PROD and add a user story to finally fix it on DEV.
Issues with active instances are somehow managed on PROD, depending on the issue.
Issues that are not urgent are just added to the normal development cycle, to be developed & deployed ASAP.
If this still does not fit your situation, then please help me to better understand it.
For resolving production issues when there are current dev/test efforts for other enhancements, as Stefan notes a Pre-Prod or "Hotfix" environment that is synched with production is very helpful. You can dev/test in that environment, push to prod, then [manually] make the same changes in the dev environment.
Another method you might be able to use, if possible, is to halt testing on current upgrades, backfill your production application to TEST, fix/deploy from there, then re-deploy your Dev changes to Test and resume testing (after also adding the 'fix' in Dev manually).
For enhancements to processes with long-running instances, I always maintain a "version" data point in the CDT. I store it in a process variable (persisted to the CDT on launch). Any time I make changes such as adding data points to the CDT and interfaces, I increment the version by 1; interfaces and other logic (subprocesses, expression rules, etc.) can then check the version to decide whether to use the newer logic, e.g. showWhen: ri!CDT.version > 3.
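A minimal sketch of that version-gating pattern in an interface, assuming a hypothetical order CDT with a version field (names are illustrative, not from the original post):

```sail
/* Hypothetical interface fragment: a component introduced in version 4
 * of the CDT is only rendered for instances started on or after that
 * version. Older active instances (version <= 3) never see it, so they
 * keep working against the data shape they were launched with.
 */
a!textField(
  label: "Delivery Instructions (added in v4)",
  value: ri!order.deliveryInstructions,
  saveInto: ri!order.deliveryInstructions,
  showWhen: ri!order.version > 3
)
```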
Another methodology that works well for me is completely avoiding passing CDTs to subprocesses. In all of my current processes, the parent persists its data to the DB and passes only the unique ID to the subprocess; the subprocess queries back by ID to populate its own CDT and works from there. The sub then knows which 'version' of the parent it was started on, and you can design the logic accordingly.
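The pass-the-ID-only approach above can be sketched as a small expression rule the subprocess calls on start. The entity constant and field names are assumptions for illustration:

```sail
/* Hypothetical rule MYAPP_getOrderById, called by the subprocess with the
 * single ID handed over by the parent. The subprocess rebuilds its own CDT
 * from the database instead of receiving a full CDT as a parameter, so the
 * record it works on always reflects the current (versioned) data shape.
 */
a!queryEntity(
  entity: cons!MYAPP_ORDER_ENTITY,
  query: a!query(
    filter: a!queryFilter(
      field: "id",
      operator: "=",
      value: ri!orderId
    ),
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 1)
  )
).data
```

Decoupling parent and child this way also means a redeployed subprocess never has to deserialize a stale CDT shape from an old parent instance.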
After several days of consulting with various experts in the field, I have summarized the following recommendations for properly managing old productive processes that face problems and need updates in an environment where versions may be incompatible for various reasons:
Use of Certification or Release Candidate Environments: It is crucial to maintain a certification environment that faithfully simulates the production environment. This allows testing changes and solutions before their definitive implementation, helping to identify and solve problems without compromising the production environment. Here is a guide on what the ideal environment structure should look like: Recommended Environments.
Version and Change Management: Implementing rigorous control over versions and changes is essential. This includes techniques such as maintaining a version field on custom data types (CDTs), which makes it possible to differentiate between old and new versions of a process. Although I personally find that this can complicate the code, it allows adjusting the process logic according to the specific version of the data.
Modularization and Encapsulation: Dividing the process into smaller, manageable components, such as subinterfaces, expression rules, and subprocesses, facilitates the implementation of changes without impacting other parts of the process. This strategy also helps prevent conflicts during implementations by allowing updates to specific parts of the process.
Specific Solutions for Urgent Problems: For urgent production problems, it is practical to first resolve these complications in an environment that shares the same code version as production (Release Candidate) and then implement the solution in the production environment. Subsequently, a task should be added to resolve the same issue in the development and pre-production environment as soon as possible.
Redesign Using New Features: New features of the platform should be leveraged to redesign processes that may be causing problems due to their duration or obsolete structure. Using records and related actions instead of long processes can significantly improve performance and manageability.
Coordination and Communication Among Teams: It is vital to ensure that there is good communication and coordination among development, support, and operations teams to ensure that changes are understood and managed appropriately by all involved.
I also found these links that explain how to do this.
community.appian.com/.../backward-compatible-design-planning-for-subsequent-deployments
community.appian.com/.../addressing-a-production-application-defect
Please feel free to add your opinions so that we can come to the most accurate conclusions together.
Best regards.
In this blog post, I describe my go-to solution for long running processes.
https://appian.rocks/2023/03/20/design-long-running-tasks/