We’re experiencing recurring high memory usage in our Appian Cloud DEV environment that is leading to emergency restarts. This typically occurs outside of normal development hours, when there is little to no active user activity.
As part of troubleshooting, we have:
- Cleared all processes that had stopped due to exceptions
- Removed older versions of process models and interfaces
- Reviewed for obvious runaway process instances
- Reviewed guidance in KB-2011
Despite this cleanup, the memory spikes continue.
Has anyone encountered similar behavior in a DEV environment with low user activity? If so:
- Were scheduled processes, integrations, or unattended background jobs the root cause?
- Are there specific logs or metrics in Appian Cloud that are most helpful for identifying memory drivers?
- Does exporting and removing older, inactive applications from DEV meaningfully reduce memory consumption, or is memory primarily driven by actively executing objects?
Any guidance on deeper diagnostics or best practices for isolating memory usage in Appian Cloud would be greatly appreciated.
I suggest opening a support case and discussing this with Appian.
High memory spikes in Appian Cloud often stem from unarchived processes, excess object versions, and unattended scheduled jobs running outside dev hours. Check the Health Check report and monitor the process metrics to identify culprits. Export inactive apps to cut versions, archive old processes, review schedules, then contact Support for diagnostics.
https://community.appian.com/support/w/kb/1574/kb-2011-how-to-address-high-memory-usage-in-appian-cloud-environments
https://docs.appian.com/suite/help/26.2/understanding-the-health-check-report.html
We have had this exact same problem in our sandbox environment. We did everything they suggested: deleted all processes and cleaned up everything, yet roughly 80% of the memory is still in use, presumably by Appian itself, since nothing else is running. We have asked to increase the memory from 16 GB to 32 GB and are waiting for a response.
We encountered this issue and were told it could have something to do with non-Unicode characters in extra-long text fields in Record Types. Still waiting to hear back from support for the fix.
That is interesting. When you do hear about a fix, if you wouldn't mind sharing, I would really appreciate it.
Beyond storage and archive cleanup, one area worth investigating closely is code-level performance, particularly how process models and expressions are structured. A few patterns that frequently contribute to memory issues:
1. Nested Loops. Using multiple a!forEach() calls, especially nested ones, or chaining loops over large datasets without pagination, can quietly accumulate memory.
2. Scheduled Processes Running Off-Hours. Check the Process Monitor for any recurring scheduled processes and review their logic for inefficient queries or large in-memory data handling.
3. Integrations and Connected System Calls. Unattended integration calls that return large payloads and store them in process variables (especially Map) can hold memory for the lifetime of a process instance. If those instances aren't completing or cleaning up properly, memory accumulates.
4. Record Volume. Querying without appropriate pagingInfo limits can pull unexpectedly large result sets. These tend to be missed in DEV.
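To illustrate point 4, a bounded paging pattern keeps each query's result set small instead of loading everything into memory. A minimal SAIL sketch, assuming a hypothetical entity constant and sort field:

```sail
/* Sketch only: cons!MY_ENTITY and the "id" field are placeholders.
   The key point is a bounded batchSize instead of batchSize: -1,
   which would pull the entire result set into memory. */
a!queryEntity(
  entity: cons!MY_ENTITY,
  query: a!query(
    pagingInfo: a!pagingInfo(
      startIndex: 1,
      batchSize: 100,
      sort: a!sortInfo(field: "id", ascending: true)
    )
  )
)
```

The same discipline applies inside a!forEach(): loop over one paged batch at a time rather than over an unbounded query result.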