<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://community.appian.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/"><channel><title>Support</title><link>https://community.appian.com/support/</link><description /><dc:language>en-US</dc:language><generator>Telligent Community 12</generator><item><title>Wiki Page: KB-1447 Appian Cloud Vulnerability Testing</title><link>https://community.appian.com/support/w/kb/762/kb-1447-appian-cloud-vulnerability-testing</link><pubDate>Tue, 14 Apr 2026 15:29:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:99d81c09-22a3-4960-bd27-956b147956c1</guid><dc:creator>Kaushal Patel</dc:creator><description>Purpose Cloud customers can perform security-related activities against their Appian environments, such as penetration testing and vulnerability scanning, as well as software composition analysis scans on installers, containers, and plugin jars. This article outlines assessment rules and accepted formats for submitting vulnerabilities to Appian. Appian Cloud Assessment Rules All planned security testing by customers must be submitted to Appian Technical Support via a support case at least 3 US business days (Mon-Fri, 9:00 AM to 6:00 PM EST/EDT) prior to testing. The following details must be provided in the support case to prevent Appian or its hosting service providers from adding the test source IP addresses to a block list: Contact information Start time of test (including timezone) Test duration Expected peak bandwidth in Gigabits per second (Gbps) Source IP addresses generating the test traffic Only perform assessments against the Appian Cloud Sites or FQDNs for which you have explicit approval. Make a good-faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service. 
Social engineering (e.g. phishing, vishing, smishing) is prohibited. Denial of service attacks are prohibited. Appian recommends performing assessments against a test or development site whenever possible, rather than a production site. Appian considers any information identified during a security test of an Appian Cloud site to be Confidential Information that is protected under Appian’s contractual agreements with its customers. This obligation to protect Appian’s Confidential Information must flow down to any third-party security consultants hired by Appian customers. Please indicate in the support case whether a third-party entity will be used for security testing and whether the third-party entity has executed a non-disclosure agreement appropriate for the purpose. Submitting Results The following applies to all submissions: Appian reviews security scan results only for recent hotfixes. Customers running older hotfix versions should upgrade to a recent hotfix and resubmit security scan results before the Appian team initiates review. All documentation (including results, summaries, and reproduction steps) must be submitted in English. Appian will not accept findings that are missing information within the provided templates. Submissions must be made via a support case. Appian Vulnerabilities This section is applicable to penetration testing or vulnerability scans against Appian installations. Fill out the Appian Vulnerability Submission Worksheet according to the instructions below: All submitted vulnerabilities must be validated by the assessor prior to submission. Appian does not accept unvalidated results or direct output from automated scanners without additional manual validation. Appian requires verifiable evidence such as screenshots, payloads, or any other associated proof-of-concept material, as well as manual reproduction steps, in order to properly validate any reported vulnerability findings. 
All scanning or testing documentation must be accompanied by: A summarized index of all issues found, with the severity level of each issue. Clear evidence produced by the assessor showing that the proposed vulnerability can be used to exploit the system, for example by: Allowing inappropriate access to the system or its data. Allowing inappropriate modification of the system or its data. Allowing inappropriate use of a component of the system or of the system as a whole. A description of the risk to the system. Guidance on how to reach the impacted endpoint(s). Clear steps on how to reproduce the issue. Appian Third-Party Component Vulnerabilities This section is applicable to Software Composition Analysis scans against Appian installers, containers, and plugin jars. Fill out the Appian third-party vulnerability submission worksheet according to the instructions below: If the vulnerability reporting source is vendor-specific (e.g. BlackDuck or X-Ray), the customer should provide as much explanatory detail as possible in the Description column in order for Appian to effectively validate the issue. What to Expect Next Appian will review the findings (assuming all submission requirements have been met) and either accept or reject each one. For rejected findings, Appian will provide an explanation as to why the reported vulnerability was rejected (false positive, configuration-level controls available to mitigate, etc.). For accepted findings, Appian will classify the severity of the finding as Low/Medium/High/Critical. Appian Support will provide analyses and impact assessments of the report and individual findings through the support case. 
Affected Versions This article applies to all versions of Appian Cloud. Last Reviewed: April 2026</description><category domain="https://community.appian.com/support/tags/Security">Security</category><category domain="https://community.appian.com/support/tags/Cloud">Cloud</category></item><item><title>Wiki Page: KB-1354 How to manage high disk usage in Appian Cloud environments</title><link>https://community.appian.com/support/w/kb/548/kb-1354-how-to-manage-high-disk-usage-in-appian-cloud-environments</link><pubDate>Fri, 03 Apr 2026 14:31:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:527de8ca-e841-453a-92f8-6e8a7bf6d46b</guid><dc:creator>pauline.delacruz</dc:creator><description>Purpose This article details root causes as well as corrective actions to be taken in situations where an Appian Cloud environment is experiencing high disk usage. In addition, some strategies to optimize and control disk usage in Appian environments are listed. In Appian Cloud environments specifically, the top disk consumers are usually those listed below. Note: Appian Support may open a Support Case to notify you if disk usage in your environment crosses 80%. A member of Appian Support will follow up on the Support Case to discuss the largest consumers of disk on your site and potential remediation steps. Instructions Depending on which component of Appian is causing high disk usage, the steps to remediate the issue will vary. Purchasing additional disk space Sufficient cleanup of actionable disk consumers may not always be possible. In this situation, a disk space increase may be necessary to bring disk usage to healthy levels. This is especially common for sites with 75 GB of storage. This amount of storage space is the minimum that can be provisioned for an Appian Cloud environment and is meant to be used only as a starting point for customers. 
Many Appian Cloud environments will experience a workload whose storage requirements exceed 75 GB of disk space, therefore requiring a disk space increase. Additional storage is hot-deployable, meaning a site restart is not required to add disk space. Note: Once additional disk space has been added to the server, it cannot be removed. Actionable Components Knowledge Centers &amp;amp; Documents Logs RDBMS Data Archived Processes &amp;amp; Process History Engine Transactions and KDBs RPA Executions RPA Logs Knowledge Centers &amp;amp; Documents Knowledge Centers are one of the most common causes of high disk usage in Appian Cloud environments. You are responsible for deleting any unnecessary documents in the appropriate knowledge centers. Appian Support is unable to delete documents for you on Cloud sites. Appian Support can provide a breakdown of the largest Knowledge Centers upon request. If you already have a Support Case for addressing high disk usage in your environment, you can request this breakdown in that Support Case. Otherwise, you can create a new Support Case to request this breakdown. In order to access and delete the documents in the specified knowledge centers, perform the following: Navigate to /suite/design/. Click on the Objects tab. Filter by the Folder object type. Select &amp;quot;Search UUID and ID&amp;quot; in the dropdown to the right of the search bar. Type the ID in the search bar, where the ID is one of the IDs in the breakdown provided. Click Search. KC 0 and KC 7 You may see Appian Support reference KC 0 or KC 7 in a disk breakdown. These are unique Knowledge Centers which are present in every Appian environment. KC 0 contains system images, import/export logs, and deployment packages. This folder cannot be accessed from the frontend. Deployment packages are automatically deleted after 30 days, and you can modify this retention period in the Appian Administration Console. All other files in this folder are stored indefinitely. 
Appian Support can manually remove files older than a certain number of days from this folder with your permission. To request this manual cleanup, please update your high disk usage Support Case (if one exists) or open a new Support Case. KC 7 is the Temporary Documents Knowledge Center. By default, files are kept in this Knowledge Center for 30 days. This retention period can be modified by Appian Support. To request a change to this retention period, please update your high disk usage Support Case (if one exists) or open a new Support Case. Logs Application logs will naturally accumulate in your environment. A high amount of disk usage by application logs is not necessarily a cause for immediate concern. However, if application logs are growing rapidly or if disk space is limited, this may require further attention. While many application logs roll over or are compressed after a certain number of days, there are no global settings for managing log files. Appian Support can compress application logs older than a certain number of days as necessary. To request cleanup of application logs on your site, please update your high disk usage Support Case (if one exists) or open a new Support Case. Compressed logs will still be available to download via /suite/logs as .gz files. If application logs have grown a significant amount in a short time, this can be indicative of a recurring issue in your Appian environment that is repeatedly printing errors to the logs. These errors should be addressed as soon as possible to avoid further increases in disk usage. If you have requested additional loggers to be enabled or existing loggers to be modified, it is important that you let Appian Support know as soon as these loggers can be disabled. Leaving them on can increase the size and disk utilization of the logs and is not recommended for long periods of time. Note that all loggers will be reset to default values upon a site restart. 
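Compressed application logs downloaded from /suite/logs can be inspected without decompressing them in place, using only standard utilities. A minimal sketch (the file name and log contents below are simulated, not an actual Appian log):

```shell
# A compressed application log can be searched directly; gzip -dc streams the
# decompressed text to stdout. File name and contents here are simulated.
printf 'ERROR repeated stack trace\nINFO startup complete\n' | tee tomcat-stdOut.log
gzip tomcat-stdOut.log
gzip -dc tomcat-stdOut.log.gz | grep -c ERROR   # prints 1 (one ERROR line)
```

Counting repeated ERROR lines this way is a quick check for the recurring-error pattern described above.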
RDBMS Data The amount of disk space that the Appian Cloud Database consumes is directly related to the amount of data being stored in the database. If MySQL data is using a high amount of disk space, consider reducing the amount of data stored in the cloud database. Alternatively, consider moving the data from the Appian Cloud server to an external database server. Note: If rows are deleted from the cloud database, disk space may not immediately be freed. This is because the rows are not immediately removed, just marked as &amp;quot;deleted&amp;quot; internally. If you delete data from your cloud database but do not see a drop in disk usage, please reach out to Appian Support via a Support Case. Appian Support can review your cloud database and reclaim disk space from deleted rows if necessary. RDBMS Logs In addition to the RDBMS data mentioned above, all connections made to your cloud database are audited, and all operations performed on the cloud database are stored in the RDBMS binary logs. By default, the binary logs are purged after four days, and the audit logs are removed after 30 days. However, on compliant sites, the audit logs are kept indefinitely for security purposes. If either of these types of logs is consuming a significant amount of disk space on your site, consider reducing the amount of activity on your cloud database. Alternatively, feel free to open a Support Case with Appian Support to discuss additional options. Archived Processes and Process History Processes which have been archived are moved out of memory and onto disk. Additionally, the process history for all processes is stored on disk. If archived processes or process history are the top consumers of disk space in your environment, there are a few site properties that can be configured by Appian Support to remediate the issue going forward. Auto-compress archived processes - The site can be configured to automatically compress archived processes after a certain number of days (e.g. 
automatically compress archives older than 7 days). A compressed archived process can still be unarchived normally. Auto-delete archived processes - The site can be configured to automatically delete process archives after a number of days (e.g. automatically delete archives older than 7 days). Once an archived process is deleted, it cannot be recovered or unarchived. Auto-delete process history - The site can be configured to automatically delete process history after a number of days (e.g. automatically delete process history older than 7 days). Once a process has its history deleted, that process can no longer be viewed from the frontend. Appian Support requires that you provide a number of days to set as the value for the properties above. Additionally, the properties for auto-deletion of archived processes and process history must be configured to the same number of days. Please be aware that Appian Support will need to schedule a maintenance window to deploy any changes to these properties. Note: Archived processes are automatically compressed after 7 days by default. Engine Transactions and KDBs The engines are processes that run in memory, but they frequently write data to disk. The most common culprits of high disk usage related to the engines are engine transactions and KDBs. Engine Transactions Engine transactions are copies of every write transaction that gets applied to the engines. They are used to rebuild the engine in case of an unexpected crash. Appian Support may refer to engine transactions as Kafka logs, since they are managed by Kafka. However, unlike normal log files, engine transactions cannot be manually deleted. Instead, they automatically roll over whenever the engines checkpoint. By default, the engines will try to checkpoint at least every 22 hours, and the latest three checkpoints are saved by default. If the engine transactions are taking up a high amount of disk space, try reviewing the processes running on your site. 
Typically, engine transactions scale with the amount of process activity on a site, but certain design implementations (extremely large process variables or many looping processes) can cause extreme growth of the engine transactions. Reach out to Appian Support via Support Case if you have any questions or suspect that engine transactions are responsible for a significant amount of disk usage on your site. Engine KDBs Whenever the engines checkpoint, they store a snapshot of themselves on disk. This snapshot is referred to as a KDB file. Just like engine transactions, only the past three KDB files are saved by default, and they roll over with each checkpoint. If the engine KDBs are taking up a high amount of disk space, this means that the engines themselves must be taking up a lot of RAM. Please review KB-2011 on addressing high engine memory usage and try to lower the memory used by the engines. Reducing the memory footprint of the engines will lead to a drop in the size of the KDB files for that engine after the next checkpoint. RPA Executions Robotic executions on your site will generate many artifacts, including screenshots, execution videos, and execution logs. If robotic executions are responsible for a significant amount of disk usage, please review the Automatic Process Clean-Up options for your robotic tasks. Consider modifying the clean-up properties associated with your robotic tasks to help reduce the amount of disk space taken by their respective artifacts. RPA Logs RPA logs will naturally accumulate on your site as you use RPA. If you are actively using RPA, please navigate to the RPA Console, select &amp;quot;Settings&amp;quot; on the left-hand sidebar, and select &amp;quot;Maintenance&amp;quot;. This will allow you to review, download, and delete RPA logs on your site. You may also reach out to Appian Support to request the compression or deletion of RPA logs. 
If you are not using RPA on your site and RPA logs are still taking a noticeable amount of disk space, please reach out to Appian Support. Affected Versions This article applies to all versions of Appian. Last Reviewed: April 2026</description><category domain="https://community.appian.com/support/tags/administration">administration</category><category domain="https://community.appian.com/support/tags/how_2D00_to">how-to</category><category domain="https://community.appian.com/support/tags/disk%2busage">disk usage</category><category domain="https://community.appian.com/support/tags/Cloud">Cloud</category></item><item><title>Wiki Page: KB-2377 Information about the TeamPCP / CanisterWorm Supply Chain compromise</title><link>https://community.appian.com/support/w/kb/3792/kb-2377-information-about-the-teampcp-canisterworm-supply-chain-compromise</link><pubDate>Thu, 02 Apr 2026 17:24:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:2b641df6-4046-4163-8fdb-477ef1c73152</guid><dc:creator>Kaushal Patel</dc:creator><description>In late February and March 2026, a widespread supply chain campaign orchestrated by a threat actor known as TeamPCP (associated with the &amp;quot;CanisterWorm&amp;quot; malware) compromised over 50 open-source libraries across multiple ecosystems, including PyPI, npm, Docker Hub, and GitHub Actions. While the campaign impacted dozens of libraries, notable targets included the litellm library on PyPI (versions 1.82.7 and 1.82.8) and Aqua Security&amp;#39;s vulnerability scanner, Trivy ( CVE-2026-33634 ). Appian has investigated this broader campaign and affected services, and determined that it is not impacted. No vulnerable versions of the affected libraries associated with the TeamPCP/CanisterWorm compromise are present in the Appian Cloud environment or any of Appian’s products. We will continue to monitor the situation and provide any updates as appropriate. 
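Customers who also consume these open-source ecosystems directly in their own tooling can sweep a dependency inventory for the compromised releases named above. A minimal sketch against a simulated pip-style lock file (the file name and contents are hypothetical):

```shell
# Flag the compromised litellm releases (1.82.7, 1.82.8) in a pip-style
# dependency listing. The inventory file below is simulated for illustration.
printf 'litellm==1.82.7\nrequests==2.32.0\n' | tee requirements-lock.txt
grep -E 'litellm==(1\.82\.7|1\.82\.8)' requirements-lock.txt   # prints litellm==1.82.7
```

The same pattern applies to any pinned-dependency manifest: match the exact compromised version strings rather than the package name alone.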
Additional Notes: The following CVE was released with additional information on the scope of the vulnerability: CVE-2026-33634 - (Aquasecurity Trivy Embedded Malicious Code Vulnerability) Supporting Documentation: https://www.endorlabs.com/learn/teampcp-isnt-done https://www.mend.io/blog/canisterworm-the-self-spreading-npm-attack-that-uses-a-decentralized-server-to-stay-alive/ Affected Versions This article applies to all supported versions of Appian. Last reviewed: April 2, 2026</description><category domain="https://community.appian.com/support/tags/Security">Security</category></item><item><title>Wiki Page: KB-2376 Information about the Axios Supply Chain Compromise</title><link>https://community.appian.com/support/w/kb/3791/kb-2376-information-about-the-axios-supply-chain-compromise</link><pubDate>Wed, 01 Apr 2026 20:38:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:33a29212-f81a-4bc5-a233-f241e0302302</guid><dc:creator>Kaushal Patel</dc:creator><description>On 31 March 2026, the Axios npm package, a JavaScript library that enables applications to make HTTP/S requests and is included as a dependency in millions of applications, was compromised. Between ~00:21 and ~03:30 UTC, malicious versions (axios@1.14.1 and axios@0.30.4) were published using a compromised maintainer account. Appian has investigated this vulnerability and affected services, and determined that it is not impacted, as no vulnerable versions of the packages are used in the Appian Cloud environment or any of Appian’s products. We will continue to monitor the situation and provide any updates as appropriate. Supporting Documentation: https://snyk.io/blog/axios-npm-package-compromised-supply-chain-attack-delivers-cross-platform/ https://www.mend.io/blog/poisoned-axios-npm-account-takeover-50-million-downloads-and-a-rat-that-vanishes-after-install/ Affected Versions This article applies to all supported versions of Appian. 
Last reviewed: April 1, 2026</description><category domain="https://community.appian.com/support/tags/Security">Security</category></item><item><title>Wiki Page: KB-1157 How to reset the analytics engines for self-managed installations of Appian</title><link>https://community.appian.com/support/w/kb/374/kb-1157-how-to-reset-the-analytics-engines-for-self-managed-installations-of-appian</link><pubDate>Wed, 01 Apr 2026 16:43:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:715fa449-6f10-469e-b6df-b3f710610ab4</guid><dc:creator>pauline.delacruz</dc:creator><description>Purpose This article outlines the process to reset the Analytics Engines in self-managed installations of Appian. NOTE: These steps should only be performed when advised to do so by Appian Technical Support. They should only be performed on the process analytics engine. These steps are not supported on any other engine. Instructions Note: For the most up-to-date steps, always refer to the official Appian documentation . The steps below are provided as a reference for common deployment types. Appian on Kubernetes (AoK) Resetting the analytics engines on AoK requires a number of extra steps. Shut down the webapp cluster by scaling its statefulset to 0. kubectl -n scale statefulset appian-webapp --replicas=0 Checkpoint the execution and process-design engines. kubectl -n exec -i --tty appian-service-manager- -0 -- ./serviceManagerScriptWrapper.sh services/bin/checkpoint.sh -s execution,process-design -w Scale down the execution engine statefulset to 0. kubectl -n scale statefulset appian-service-manager-execution --replicas=0 (If the site is HA) Scale down the analytics engine statefulset to 1. kubectl -n scale statefulset appian-service-manager-analytics --replicas=1 Scale down the process-design statefulset to 0. 
kubectl -n scale statefulset appian-service-manager-process-design --replicas=0 Stop the analytics engine with the OOTB script from within each analytics engine pod. kubectl -n exec -i --tty appian-service-manager-analytics-0 -- ./serviceManagerScriptWrapper.sh services/bin/stop.sh -s analytics Reset the analytics engine with the OOTB script from within each analytics engine pod. kubectl -n exec -i --tty appian-service-manager-analytics-0 -- ./serviceManagerScriptWrapper.sh services/bin/resetAnalytics.sh -s analytics Scale all statefulsets back up to their desired number. Classic Linux Stop all app servers. /tomcat/apache-tomcat/bin/stop-appserver.sh Stop the execution, analytics, and process-design engines. /services/bin/stop.sh -p -s analytics,execution,process-design Reset the analytics engine with the OOTB script. /services/bin/resetAnalytics.sh -p -s analytics Start the engines back up. /services/bin/start.sh -s analytics,execution,process-design Start the app server. /tomcat/apache-tomcat/bin/start-appserver.sh Affected Versions This article applies to all versions of self-managed installations of Appian. 
Last Reviewed: April 2026</description><category domain="https://community.appian.com/support/tags/self_2D00_managed">self-managed</category><category domain="https://community.appian.com/support/tags/engines">engines</category><category domain="https://community.appian.com/support/tags/infrastructure">infrastructure</category></item><item><title>Wiki Page: KB-2375 How to Submit Issues and Change Requests for Appian Automated Testing Plugins (FitNesse, Cucumber, Selenium)</title><link>https://community.appian.com/support/w/kb/3751/kb-2375-how-to-submit-issues-and-change-requests-for-appian-automated-testing-plugins-fitnesse-cucumber-selenium</link><pubDate>Mon, 30 Mar 2026 16:18:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:9f0e1645-b513-45ea-a753-e9f240521c1e</guid><dc:creator>Kaushal Patel</dc:creator><description>Purpose The purpose of this article is to specify the information required to report issues or ask questions about appian-selenium-api. Instructions Please create an issue item for appian-selenium-api through GitLab issues, and include the following: Impact or reason for the change Clarify what type of tests cannot be performed Steps on how to recreate the issue Any additional information, such as testing data or apps, that may help diagnose or resolve the issue To contribute and resolve an outstanding issue, view the project's contributing.md file. Note: Appian greatly appreciates all feedback on our product. However, please be aware that it is not our policy to provide information on when or how an enhancement request may be implemented in the product. Affected Versions This article applies to the latest version of Appian Selenium API. 
Last Reviewed: March 2026</description><category domain="https://community.appian.com/support/tags/automated%2bTesting">automated Testing</category><category domain="https://community.appian.com/support/tags/how_2D00_to">how-to</category><category domain="https://community.appian.com/support/tags/selenium">selenium</category></item><item><title>Wiki Page: KB-2374 Update to Appian Forum Email Notifications - March 2026</title><link>https://community.appian.com/support/w/kb/3788/kb-2374-update-to-appian-forum-email-notifications---march-2026</link><pubDate>Sun, 29 Mar 2026 23:08:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:327fcec1-02e0-4c3c-9287-bb1d14e554c1</guid><dc:creator>Maggie Deppe-Walker</dc:creator><description>Purpose Appian Forum email notifications are used to notify stakeholders of updates. This article outlines an upcoming change to these notifications. Upcoming changes As of March 27th, 2026, the sender address for Appian Forum notifications will be updated. Change Details Previous Sender: forum@appian.com New Sender: forum@forum.appian.com Required Action If you use automated inbox rules to manage these emails, please create an additional rule that includes the new address: forum@forum.appian.com. Note: You will continue to receive non-notification correspondence, such as case updates or health check analyses, from forum@appian.com. 
Last Reviewed: March 2026</description></item><item><title>Wiki Page: KB-1082 Issues related to search server indices</title><link>https://community.appian.com/support/w/kb/3550/kb-1082-issues-related-to-search-server-indices</link><pubDate>Wed, 25 Mar 2026 14:50:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:b0a0be4c-59dc-4d64-b5a6-745d013af73f</guid><dc:creator>pauline.delacruz</dc:creator><description>Symptoms Users may experience one of the following symptoms related to search server indices when using any search server-related functionality. Symptom 1 For Appian 7.10 and later, when running Rule Performance in the Admin Console , the report is not displayed and a Waiting indicator is shown. After waiting for a few minutes, the indicator disappears, and the following error is displayed in the application server log: ERROR com.appiancorp.common.logging.GWTRemoteLoggingService - Swallowed an error with no error code. ResponseClass: class com.appiancorp.gwt.tempo.client.designer.EvaluateUiResponse Note: The above error message is very generic and there are multiple possibilities for a root cause. In this case, the rule performance timing out is the root cause. Symptom 2 When running an impact analysis, users will see the following error modal in their browser: A server error was encountered while processing your request. Please try again. After seeing this error, the following error will be printed in the application server log: [[ACTIVE] ExecuteThread: &amp;#39;33&amp;#39; for queue: &amp;#39;weblogic.kernel.Default (self-tuning)&amp;#39;] ERROR com.appiancorp.gwt.ia.server.GetImpactAnalysisImpl - Error running impact analysis using search server. targetObjects=[TypedValue[it=80,v=10568]] java.lang.IllegalStateException: Data in the index is over 10 minutes behind the system of record. 
upToDateAsOfBySource: {k-content=Optional.of(2015-12-28 03:01:23.49), k-process-design=Optional.of(2015-12-28 03:01:33.17), k-personalization=Optional.of(2015-12-28 03:01:34.2), rdbms-primary=Optional.of(2015-12-28 03:01:34.21)} Symptom 3 The following error will be repeated in the application server log: [elasticsearch[Client 2636E3BE][transport_client_worker][T#5]{New I/O worker #5}] ERROR com.appian.dl.repo.es.LoggingBulkResponseActionListener - Bulk request failed item: [opType=index, index=xray-rule-execution, type=05cce14550a54e01b503027da635996c2, id=DT-1, status=SERVICE_UNAVAILABLE, message=UnavailableShardsException[[xray-rule-execution][0] Primary shard is not active or isn&amp;#39;t assigned is a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@3ec3bf65]] Symptom 4 During an attempt to upload a file, object, or application into Appian, the progress bar stalls at 100% and does not allow import or upload. Symptom 5 File uploads take an inordinate amount of time, sometimes longer than 15 minutes. The following is printed in the application server log: INFO [stdout] (elasticsearch[Client 7FE43B33][transport_client_worker][T#4]{New I/O worker #4}) [elasticsearch[Client 7FE43B33][transport_client_worker][T#4]{New I/O worker #4}] ERROR com.appian.dl.repo.es.LoggingBulkResponseActionListener - Bulk request failures occurred. Summary: [opType=index, index=designer-objects-ia, type=DT-10, status=SERVICE_UNAVAILABLE, count=1, firstMessage=UnavailableShardsException[[designer-objects-ia][0] Primary shard is not active or isn&amp;#39;t assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@515eabc9]]. 
The following is printed in the search-server.log : [WARN ][org.elasticsearch.indices.cluster] [Node localhost:9300] [designer-objects-ia][0] sending failed shard after recovery failure org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [designer-objects-ia][0] failed to recover shard at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:290) at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:112) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.elasticsearch.index.translog.TranslogCorruptedException: translog corruption while reading from stream at org.elasticsearch.index.translog.ChecksummedTranslogStream.read(ChecksummedTranslogStream.java:72) at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:260) ... 4 more Caused by: org.elasticsearch.ElasticsearchException: failed to read [DT-10][_a-0000dd1b-8f2d-8000-0315-010000010000_18993] at org.elasticsearch.index.translog.Translog$Index.readFrom(Translog.java:522) at org.elasticsearch.index.translog.ChecksummedTranslogStream.read(ChecksummedTranslogStream.java:68) ... 5 more Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: No version type match [48] at org.elasticsearch.index.VersionType.fromValue(VersionType.java:307) at org.elasticsearch.index.translog.Translog$Index.readFrom(Translog.java:519) ... 
6 more Symptom 6 The following error is printed in the application server log: [ServerService Thread Pool -- 65] ERROR com.appiancorp.ix.analysis.LoadingPagedIterator - Error getting content item [identifier=441937] com.appian.dl.repo.QueryException: Query failed [request=QueryRequest{from=Type -10 (id=-10), timeZone=null, query=Query[Selection[relationships.uuid (show)], criteria[((type = TypedValue[it=3,v=Application]) AND (relationships.uuid in TypedValue[it=103,v={SYSTEM_CONTENT_ICON_NEWS_EVENT_SHOPPING_CART_RED}]))], PagingInfo[startIndex=0, batchSize=-1, sort=[]], options=QueryOptions{dataLimitInBytes=100000000, cardinalityPrecisionThreshold=0, timeoutMs=-1}]}, ES search request={ ... Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [query], all shards failed Symptom 7 The following error is printed in the application server log: [ServerService Thread Pool -- 63] FATAL com.appiancorp.common.web.StartupHaltingServletContextListener - Halting JVM startup: An unexpected error occurred while trying to initialize and validate the Appian data source. com.appiancorp.suiteapi.common.exceptions.AppianException: An unexpected error occurred while trying to initialize and validate the Appian data source. (APNX-1-4179-004) ... Caused by: java.lang.IllegalStateException: [jdbc/AppianDS] Could not synchronize the search index with the database data. ... Caused by: java.lang.IllegalStateException: unexpected docvalues type NONE for field &amp;#39;id&amp;#39; (expected=SORTED). Use UninvertingReader or index with docvalues. Symptom 8 Users will notice that Appian Designer freezes when creating new objects or performing basic operations on existing objects. Symptom 9 The following error is printed in the application server log: ERROR com.appiancorp.tempo.abdera.TempoEntryAdapter - Error retrieving entries. 
java.lang.IllegalArgumentException: No enum constant com.appian.dl.repo.PersistenceMetadataImpl.Field.textField Symptom 10 After upgrading Appian, when navigating to Tempo (or any of its tabs), the page fails to load certain user interface components. The components that do not load include the navigation tabs, user avatar, left-side search bars and pickers, and all News posts. The following image is an example of what is seen in this situation: Symptom 11 The following WARN trace is seen in the application server log: WARN com.appiancorp.security.auth.activity.UserActivityFilter - Could not record user activity: secCtx=XXXXXXX, authDetails=AuthenticationDetails [Details] -- MasterNotDiscoveredException[ null ] The MasterNotDiscoveredException points to the following error in the search-server.log : [DEBUG][org.elasticsearch.action.admin.cluster.health] [Node SITE_FQDN:PORT#] no known master node, scheduling a retry [DEBUG][org.elasticsearch.action.admin.cluster.health] [Node SITE_FQDN:PORT#] timed out while retrying [cluster:monitor/health] after failure (timeout [Xs]) The search_server_cluster.csv log file located in /logs/data-metrics/ will show that the search server is or was in a down state (RED): DAY/MONTH/YEAR TIME GMT,appian-search-cluster,RED, false ,X,X,X,X,X,X,X Cause The search indices were corrupted due to one of the following reasons: The machine the search server is hosted on ran out of disk space. The search server did not shut down properly. The search server indices experienced an unknown corruption or are in an unhealthy state. Action DISCLAIMER: Non-production: Please note that these steps should only be run if instructed by Appian Support via a support case/call. Production: Please consult Appian Support before you run the steps below. We strongly advise that these commands only be run with a member of Appian Support present. Unblock the search indices Follow the steps in KB-1763 to first attempt unblocking the search indices. 
If this does not resolve the issue, continue with the steps below. Use the delete API to rebuild individual search server indices Starting from Appian 20.3, refreshing the indices by deleting the search server data directory should be treated as a last resort due to the introduction of document extraction. If the customer is only reporting issues in certain functions, attempt to rebuild the individual index with the delete API. If the environment is highly available (HA), the deletion only needs to happen on one of the nodes hosting search servers. Appian on Kubernetes (AoK) Scale down the webapp stateful sets to 0 replicas so that no pods are running. Exec into the Search Server pod kubectl -n exec -it -- bash Run the following to authenticate with the Search Server alias curl=&amp;#39;curl -u &amp;quot;appian:$APPIAN_USER_PASSWORD&amp;quot;&amp;#39; Run the following command to list the indices and find the name of the index in question curl -s localhost:9200/_cat/indices?v Issue with importing/exporting applications: curl -XDELETE localhost:9200/ix-activity- Issue with reviewing dependencies of objects: curl -XDELETE localhost:9200/designer-objects-ia- Issue with viewing news (Tempo posts): curl -XDELETE localhost:9200/news- Issue with viewing performance metrics of expression rules: curl -XDELETE localhost:9200/xray-rule-execution- After the command finishes, the following message will be returned: {&amp;quot;acknowledged&amp;quot;:true} Verify that the index in question is no longer displayed in the output of the following command: curl -s localhost:9200/_cat/indices?v To bring up the pods for the components, scale the webapp stateful sets back up to their previous number of replicas. Linux Stop all application servers if not yet stopped. 
Authenticate with the Search Server Appian 24.3 and later: AUTHHEADER=&amp;quot;Authorization: Basic $(awk &amp;#39;/^conf.search-server.user.appian.password=/ { match($0, /conf.search-server.user.appian.password=(.*)/, arr); print arr[1] }&amp;#39; /usr/local/appian/ae/search-server/conf/custom.properties | awk &amp;#39;{print &amp;quot;appian:&amp;quot;$1}&amp;#39; | xargs echo -n | base64 -w0)&amp;quot; alias curl=&amp;#39;curl --header &amp;quot;$AUTHHEADER&amp;quot;&amp;#39; Appian 20.4 to 24.2: APIKEY=$(awk &amp;#39;/^conf.data.search-server.restclient.apiKey=/ { match($0, /conf.data.search-server.restclient.apiKey=(.*)/, arr); print arr[1] }&amp;#39; /conf/custom.properties); AUTHHEADER=&amp;quot;Authorization: ApiKey $(echo -n $APIKEY | base64 -w0)&amp;quot;; alias curl=&amp;#39;curl --header &amp;quot;$AUTHHEADER&amp;quot; &amp;#39; Run the following command to list the indices and find the name of the index in question Appian 24.3 and later: curl -s localhost:9200/_cat/indices?v Appian 20.4 to 24.2: curl localhost:9200/_cat/indices?v Issue with importing/exporting applications: curl -XDELETE localhost:9200/ix-activity- Issue with reviewing dependencies of objects: curl -XDELETE localhost:9200/designer-objects-ia- Issue with viewing news (Tempo posts): curl -XDELETE localhost:9200/news- Issue with viewing performance metrics of expression rules: curl -XDELETE localhost:9200/xray-rule-execution- After the command finishes, the following message will be returned: {&amp;quot;acknowledged&amp;quot;:true} Verify that the index in question is no longer displayed in the output of the following command: Appian 24.3 and later: curl -s localhost:9200/_cat/indices?v Appian 20.4 to 24.2: curl localhost:9200/_cat/indices?v Start the application server(s). If the issue persists after running the delete API to rebuild search server indices, proceed with removing the search server data. 
Recreate the search indices Note: Keep in mind that running the below steps has the caveat of losing prediction data, if any, for document extraction: Appian on Kubernetes Scale down the webapp stateful sets to 0 replicas so that no pods are running. Edit the Search Server statefulset kubectl -n edit sts appian-search-server Add the following entry under the search-server container: command: [&amp;quot;sh&amp;quot;, &amp;quot;-c&amp;quot;, &amp;quot;sleep infinity&amp;quot;] Delete the search server pod(s) so that they restart as sleeping pods kubectl -n delete appian-search-server-0 Recreate the search indices using the steps below. All search indices, except those related to document extraction, can be rebuilt upon application server restart. Exec into the search server pod(s) kubectl -n exec -it -- bash Delete the contents of /search-server/data/ If HA, repeat inside the remaining search server pod(s) Edit the appian-search-server statefulset and remove the entry from step 3 under the search-server container kubectl -n edit sts appian-search-server Restart the Search Server pod(s) and monitor startup kubectl -n delete To bring up the pods for the components, scale up the webapp stateful sets to their previous number of replicas. Linux Stop all app servers and search servers according to the documentation. Recreate the search indices using the steps below. All search indices, except those related to document extraction, can be rebuilt upon application server restart: (Appian 18.3 and later) Delete the contents of the /search-server/data/ directory from every server. Do not delete the directory itself. (Appian 18.2 and earlier) Remove the /_admin/search-local/ directory. Repeat this for every node hosting an application server. Start the search server according to the documentation. In some instances, users may choose to use a different directory for the search indices. To confirm the location of the search indices, perform the following: Open custom.properties. 
Search for one of the following properties, depending on your version of Appian: Appian 7.11 and earlier: conf.data.primary.datasource.search.index Appian 16.1 and later: conf.data.APPIAN_DATA_SOURCE.search.index Affected Versions This article applies to all versions of Appian. Last Reviewed: March 2026</description><category domain="https://community.appian.com/support/tags/search%2bserver">search server</category><category domain="https://community.appian.com/support/tags/infrastructure">infrastructure</category></item><item><title>Wiki Page: KB-2162 Appian Forum maintenance schedule</title><link>https://community.appian.com/support/w/kb/2109/kb-2162-appian-forum-maintenance-schedule</link><pubDate>Tue, 24 Mar 2026 17:49:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:28c760d9-c90c-411f-9d2d-0fef6e14e91a</guid><dc:creator>pauline.delacruz</dc:creator><description>Purpose Appian Forum is a mission-critical site that contains functionality such as support case management, Health Check processing, account management, etc. The site undergoes regular monthly patching and will be unavailable on the following dates from 8-11 PM US Eastern Time. Note: Additional ad-hoc maintenance may be performed from time to time, in which case a notification will be posted on Appian Community in advance of the scheduled maintenance activity. 2026 January 10th February 7th March 7th March 28th April 25th May 23rd June 20th July 18th August 15th September 12th October 10th November 7th December 5th 2027 January 2nd January 30th February 27th March 20th Affected Versions This article applies to all versions of Appian. 
Last Reviewed: March 2026</description><category domain="https://community.appian.com/support/tags/administration">administration</category><category domain="https://community.appian.com/support/tags/maintenance">maintenance</category><category domain="https://community.appian.com/support/tags/forum">forum</category></item><item><title>Wiki Page: KB-2242 "PKIX path building failed" error seen when sending emails in a self-managed Kubernetes install</title><link>https://community.appian.com/support/w/kb/3060/kb-2242-pkix-path-building-failed-error-seen-when-sending-emails-in-a-self-managed-kubernetes-install</link><pubDate>Wed, 18 Mar 2026 19:10:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:b08e4e55-2164-4f54-9c37-b11e8af328b3</guid><dc:creator>Kaushal Patel</dc:creator><description>Symptoms Sending emails over HTTPS fails with the following error in the webapp pod log: jakarta.mail.MessagingException: Could not convert socket to TLS; ... javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Cause This is because the certificate being presented by the SMTP server is not trusted by the webapp pod for one of the following reasons: The certificate is self-signed. The certificate is signed by a Certificate Authority, but the server is not presenting the full certificate chain with all intermediate certs up to the CA root cert. Action The external certificate needs to be added to the default Java trust store. 
This can be done by following the instructions below: Extract the default Java trust store from the Appian webapp deployment: kubectl -n *namespace* cp -webapp-0:/usr/local/appian/ae/java/lib/security/cacerts ./cacerts Import the target server’s certificate and CA root certificate into the cacerts trust store: keytool -import -alias targetServerCert -file ./ .PEM -keystore ./cacerts -storepass changeit keytool -import -alias myRootCA -file ./ .pem -keystore ./cacerts -storepass changeit Confirm that the certificates were added to the cacerts trust store: keytool -list -keystore ./cacerts -storepass changeit Create a secret based on the above cacerts file: kubectl create secret generic cacerts-secret --from-file=keystore.jks=./cacerts -n Configure the Appian Custom Resource to mount the customized trust store by adding the following in the Appian site yaml, under .spec.webapp additionalVolumes: - name: keystore-secret secret: secretName: &amp;quot;cacerts-secret&amp;quot; items: - key: keystore.jks path: cacerts additionalVolumeMounts: - name: keystore-secret mountPath: /usr/local/appian/ae/java/lib/security/cacerts subPath: cacerts readOnly: true Start the Appian site. You will find your customized cacerts trust store at /usr/local/appian/ae/java/lib/security/cacerts alongside other original files in the ~/security directory Affected Versions This article applies to all versions of self-managed Appian on Kubernetes. 
Last Reviewed: July 2025</description><category domain="https://community.appian.com/support/tags/email">email</category><category domain="https://community.appian.com/support/tags/appianOnKubernetes">appianOnKubernetes</category><category domain="https://community.appian.com/support/tags/infrastructure">infrastructure</category></item><item><title>Wiki Page: KB-2372 Known unexpected behaviour of left() and leftb() functions</title><link>https://community.appian.com/support/w/kb/3676/kb-2372-known-unexpected-behaviour-of-left-and-leftb-functions</link><pubDate>Tue, 17 Mar 2026 15:42:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:ce93003d-a43b-423c-a043-2b5fb049173f</guid><dc:creator>pauline.delacruz</dc:creator><description>Purpose The purpose of this article is to demonstrate the current unexpected behavior of the functions left() , len() , lenb() and leftb() . Symptoms When using left() , leftb() , len() and lenb() , there is unexpected behavior in how these functions count certain characters, such as emojis. Cause The following results demonstrate that currently left() and leftb() do not provide the expected outputs based on their function documentation: len() counts an emoji as 1 character (expected) lenb() counts an emoji as 4 bytes (expected) left() counts an emoji as 2 characters (unexpected, expected 1) leftb() counts an emoji as 2 bytes (unexpected, expected 4) left() and leftb() always output the same result (2 in this case) As a result of this inconsistency, it is not possible to truncate a length of text to a specific number of bytes. Workaround Check the string’s byte length using lenb(ri!text) . If it&amp;#39;s within the limit, return the string. If it exceeds the limit, use regexfirstmatch() to trim one character at a time. Repeat step 2 until the byte length is within the limit. Note: This method fails if more than 529 characters need to be removed (due to recursion limits). 
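The character-versus-byte distinction underlying the results above can be reproduced outside Appian. As a minimal illustration (plain shell, independent of Appian's left()/leftb() functions), a single emoji occupies 4 bytes in UTF-8, which is why lenb() reporting 4 bytes is the expected behavior:

```shell
# U+1F600 (grinning face) written via its UTF-8 octal escapes
# (F0 9F 98 80), so this script stays ASCII-only.
EMOJI=$(printf '\360\237\230\200')
# wc -c counts bytes, not user-perceived characters; tr strips
# the padding some wc implementations add.
BYTES=$(printf '%s' "$EMOJI" | wc -c | tr -d ' ')
echo "bytes=$BYTES"
```

A 4-byte answer here, versus the 1-character answer from len(), is exactly the gap the workaround's lenb() check is bridging.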
Emoji-heavy messages are more likely to cause this issue. Affected Versions This article applies to all versions of Appian. Last Reviewed: March 2026</description><category domain="https://community.appian.com/support/tags/integration">integration</category></item><item><title>Wiki Page: KB-1187 "PKIX path building failed: error when attempting to make a call to an external server" error thrown when making web service calls over HTTPS or LDAPS</title><link>https://community.appian.com/support/w/kb/403/kb-1187-pkix-path-building-failed-error-when-attempting-to-make-a-call-to-an-external-server-error-thrown-when-making-web-service-calls-over-https-or-ldaps</link><pubDate>Tue, 17 Mar 2026 15:40:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:e9477483-a544-4428-9671-0f07863179cf</guid><dc:creator>pauline.delacruz</dc:creator><description>Symptoms Making a web service call to an external server fails and the following is seen in the application server log: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Cause The certificate presented by the external server is not trusted by Appian because it has not been imported to the trust store. Action Cloud If the connection to the external server originates from one of the services documented here , upload the certificate ( .pem ) to the Trusted Server Certificates section of the Admin Console. Otherwise: If the certificate is self-signed, obtain a new certificate from a publicly-trusted CA. If the certificate is already CA-signed, ensure that the external server is configured to present all intermediate certificates up to the CA root certificate. 
Self-managed 18.3 and later If the connection to the external server originates from one of the services documented here , upload the certificate (.pem) to the Trusted Server Certificates section of the Admin Console. Otherwise, refer to the steps below to import the certificate into the Java trust store. Note: In Appian 19.1 and later, Java comes bundled with Appian so /java should be used instead of JAVA_HOME . 18.2 and earlier Import the certificate into the default Java trust store: Linux $JAVA_HOME/bin/keytool -import -trustcacerts -file #PATH TO FILE# -alias ##ALIASNAME## -keystore $JAVA_HOME/jre/lib/security/cacerts Windows &amp;quot;%JAVA_HOME%\bin\keytool&amp;quot; -import -trustcacerts -file #PATH TO FILE# -alias ##ALIASNAME## -keystore &amp;quot;%JAVA_HOME%\jre\lib\security\cacerts&amp;quot; If importing multiple certificates, make sure that the alias is different for each command. The alias can be anything, usually the name this certificate was issued for. Verify that the import was successful: Linux $JAVA_HOME/bin/keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts | grep ##ALIASNAME## Windows &amp;quot;%JAVA_HOME%\bin\keytool&amp;quot; -list -keystore &amp;quot;%JAVA_HOME%\jre\lib\security\cacerts&amp;quot; | findstr ##ALIASNAME## The above command (without the | grep ##ALIASNAME## or | findstr ##ALIASNAME## ) can also be used to check what certificates are currently in the trust store. These are the default trusted certificates that come with a standard Appian installation. Restart the application server to deploy changes. Note: Certificates imported using the steps above are cleared on hotfixes and upgrades, after which they need to be re-imported to the trust store. Affected Versions This article applies to all versions of Appian. 
Last Reviewed: March 2026</description><category domain="https://community.appian.com/support/tags/administration">administration</category><category domain="https://community.appian.com/support/tags/process%2bmodels">process models</category><category domain="https://community.appian.com/support/tags/integration">integration</category><category domain="https://community.appian.com/support/tags/admin%2bconsole">admin console</category><category domain="https://community.appian.com/support/tags/web%2bservices">web services</category><category domain="https://community.appian.com/support/tags/Certificates">Certificates</category></item><item><title>Wiki Page: KB-2371 Information about the pac4j-jwt security vulnerability (CVE-2026-29000)</title><link>https://community.appian.com/support/w/kb/3782/kb-2371-information-about-the-pac4j-jwt-security-vulnerability-cve-2026-29000</link><pubDate>Mon, 09 Mar 2026 15:06:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:d5bb1928-4d0d-45d3-81d2-25a7f476c352</guid><dc:creator>pauline.delacruz</dc:creator><description>On 05 March 2026, a critical vulnerability was discovered related to the pac4j-jwt library that affects multiple versions of the security framework. Applications using affected versions of the JwtAuthenticator implementation may process maliciously crafted, encrypted JSON Web Tokens (JWE) in a way that allows an attacker to bypass authentication and gain unauthorized access to protected resources. Affected pac4j-jwt versions include 4.x (prior to 4.5.9), 5.x (prior to 5.7.9), and 6.x (prior to 6.3.3). Appian has investigated this vulnerability and its services, and determined that it is not impacted, as pac4j-jwt is not utilized within the Appian Cloud environment or any of Appian’s products. We will continue to monitor the situation and provide any updates as appropriate. 
Additional Notes: The following CVE was released with additional information on the scope of the vulnerability: CVE-2026-29000 - (pac4j-jwt JwtAuthenticator Authentication Bypass) Supporting Documentation: https://www.codeant.ai/security-research/pac4j-jwt-authentication-bypass-public-key https://nvd.nist.gov/vuln/detail/CVE-2026-29000 https://www.cve.org/CVERecord?id=CVE-2026-29000 https://www.pac4j.org/blog/security-advisory-pac4j-jwt-jwtauthenticator.html Affected Versions This article applies to all supported versions of Appian. Last reviewed: March 9, 2026</description><category domain="https://community.appian.com/support/tags/Security">Security</category></item><item><title>Wiki Page: KB-2370 Information about the Cisco Adaptive Security Appliance vulnerability (CVE-2026-20127 and CVE-2022-20775)</title><link>https://community.appian.com/support/w/kb/3779/kb-2370-information-about-the-cisco-adaptive-security-appliance-vulnerability-cve-2026-20127-and-cve-2022-20775</link><pubDate>Wed, 04 Mar 2026 20:17:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:5e3b4ae7-d770-4145-9e83-1df59060206b</guid><dc:creator>pauline.delacruz</dc:creator><description>On 28 September 2022, Cisco released a security advisory regarding a vulnerability within their software-defined wide-area-networking (SD-WAN) product causing potential privilege escalation. On 25 February 2026, Cisco updated their advisory, stating that they had witnessed attempted exploitation of the previous vulnerabilities, and on the same day, CISA released an Emergency Directive requiring all federal agencies and contractors to identify and mitigate the vulnerabilities identified in the advisory. Appian has investigated these vulnerabilities and services and determined that it is not impacted, as we do not use Cisco SD-WAN. We will continue to monitor the situation and provide any updates as appropriate. 
Additional Notes: The following CVEs were released with additional information on the scope of the vulnerability: CVE-2026-20127 - (Cisco Catalyst SD-WAN Controller Authentication Bypass Vulnerability) CVE-2022-20775 - (Cisco SD-WAN Software Privilege Escalation Vulnerability) Supporting Documentation https://www.cisa.gov/news-events/directives/ed-26-03-mitigate-vulnerabilities-cisco-sd-wan-systems https://www.cisco.com/c/en/us/support/docs/csa/cisco-sa-sd-wan-priv-E6e8tEdF.html Affected Versions This article applies to all supported versions of Appian. Last reviewed: March 3, 2026</description><category domain="https://community.appian.com/support/tags/Security">Security</category></item><item><title>Wiki Page: KB-2369 "APNX-1-4561-032. Please refresh the page and try again." error due to stale portal pages</title><link>https://community.appian.com/support/w/kb/3772/kb-2369-apnx-1-4561-032-please-refresh-the-page-and-try-again-error-due-to-stale-portal-pages</link><pubDate>Thu, 26 Feb 2026 22:51:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:4d4063c7-da8e-4a90-9a7b-2835ccf07a2e</guid><dc:creator>pauline.delacruz</dc:creator><description>Symptoms Users may experience the following symptoms when accessing a portal that was republished: The following error is observed in the portal_errors.csv file: APNX-1-4561-032. Please refresh the page and try again. Users receive an email with the above error and the following recommended action: To troubleshoot, try to reproduce the error in the portal, download the Portal Server Log from the portal object in the relevant environment, or check the interface object for this page. Users see the below error on the front-end when accessing a portal. Cause This error message typically occurs around portal republishing. The error message only happens when users access a stale portal page. Stale portal pages happen due to portal republishing. 
If a user has an open browser window to your Appian portal, and the portal is published while that page is active, this error occurs. Action To resolve the issue, a user with access to Appian Designer can manually unpublish the impacted portal and then publish it again. Affected Versions This article applies to all versions of Appian Cloud. Last Reviewed: February 2025</description><category domain="https://community.appian.com/support/tags/portals">portals</category><category domain="https://community.appian.com/support/tags/integration">integration</category></item><item><title>Wiki Page: KB-2253 2026 Holidays and On-Call Hours FAQ</title><link>https://community.appian.com/support/w/kb/3191/kb-2253-2026-holidays-and-on-call-hours-faq</link><pubDate>Wed, 25 Feb 2026 14:39:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:e9df1735-334d-4a5f-b569-7575d5937673</guid><dc:creator>Kaushal Patel</dc:creator><description>Table of Contents: Customer support during on-call hours and holidays On-call hours within each region Holidays within each region Blackout Periods Customer support during on-call hours and holidays Appian Professional or Signature, legacy Advanced, Enterprise, Premier and Premier Plus customers are entitled to 24x7 support for production issues (P1 and P2 cases only). This means that these customers will be supported over the weekends and bank holidays by on-call engineers. Below are the required steps Advanced or Enterprise, legacy Premier and Premier Plus customers must take to get support from on-call engineers: Raise a support case Contact support via phone On-call hours within each region Time zone Hours EST Fri 18:00 to Sun 18:00 GMT/BST Fri 23:00 to Sun 22:00 AEST Sat 09:00 to Mon 08:00 Holiday dates within each region Below is a list of all holidays in various regions for the remainder of 2025 and 2026. A number of these holidays now include blackout dates. 
See Blackout Periods further down this KB article for details of dates and restricted activities. Region Holiday Date USA Thanksgiving (2 days) Thursday 27th &amp;amp; Friday 28th November 2025 Christmas (2 days) Wednesday 24th &amp;amp; Thursday 25th December 2025 New Year’s Day Thursday 1st January 2026 Memorial Day Monday 25th May 2026 Independence Day Saturday 4th July 2026 Labor Day Monday 7th September 2026 Thanksgiving (2 days) Thursday 26th &amp;amp; Friday 27th November 2026 Christmas (2 days) Friday 25th &amp;amp; Saturday 26th December 2026 New Year’s Day Friday 1st January 2027 UK (EMEA) Christmas (2 Days) Thursday 25th and Friday 26th December 2025 New Year&amp;#39;s Day Thursday 1st January 2026 Good Friday Friday 3rd April 2026 Easter Monday Monday 6th April 2026 Early May UK Bank Holiday Monday 4th May 2026 Spring UK Bank Holiday Monday 25th May 2026 Summer UK Bank Holiday Monday 31st August 2026 Christmas (2 Days) Friday 25th &amp;amp; Saturday 26th December 2026 New Year&amp;#39;s Day Friday 1st January 2027 Sydney (APJ) Labour Day Monday 6th October 2025 Christmas (2 Days) Thursday 25th and Friday 26th December 2025 New Year&amp;#39;s Day Thursday 1st January 2026 Australia Day Monday 26th January 2026 Good Friday Friday 3rd April 2026 Easter Monday Monday 6th April 2026 Anzac Day Saturday 25th April 2026 King&amp;#39;s Birthday (NSW) Monday 8th June 2026 Labour Day Wednesday 7th October 2026 Christmas (2 Days) Friday 25th &amp;amp; Saturday 26th December 2026 New Year&amp;#39;s Day Friday 1st January 2027 Blackout Periods During Blackout Dates, only critical maintenance activities are performed. 
These dates apply globally, even if the holiday only takes place in a single region: Blackout Start Date* Blackout End Date** Wednesday 26th November 2025 Monday 1st December 2025 Friday 19th December 2025 Wednesday 7th January 2026 Thursday 2nd April 2026 Monday 6th April 2026 Friday 22nd May 2026 Monday 25th May 2026 Friday 3rd July 2026 Monday 6th July 2026 Friday 4th September 2026 Tuesday 8th September 2026 Wednesday 25th November 2026 Monday 30th November 2026 Friday 18th December 2026 Monday 4th January 2027 * Maintenance Windows or Upgrades cannot be scheduled after 4:00 PM EST / EDT on the Blackout Start Date ** Maintenance Windows or Upgrades can be scheduled after 10:00 PM EST / EDT on the Blackout End Date</description><category domain="https://community.appian.com/support/tags/administration">administration</category><category domain="https://community.appian.com/support/tags/FAQ">FAQ</category></item><item><title>Wiki Page: KB-2367 OAuth 2.0 Client Credentials Grant connected system shows errors despite successful connection</title><link>https://community.appian.com/support/w/kb/3761/kb-2367-oauth-2-0-client-credentials-grant-connected-system-shows-errors-despite-successful-connection</link><pubDate>Tue, 27 Jan 2026 05:41:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:a11e7c17-1d29-46e1-9a8c-1492e197dd0a</guid><dc:creator>Ryan Good</dc:creator><description>Symptoms When using OAuth 2.0 Client Credentials Grant as a Connected System Object&amp;#39;s authentication, the following errors are seen in tomcat-stdOut.log even though the connection is successful: ERROR com.appiancorp.connectedsystems.http.execution.AppianHttpRequestExecutor - ConnectorRuntimeException [title=Connection failed, Could not authenticate with the connected system or connect to the external system at the specified URL] ERROR com.appiancorp.connectedsystems.http.execution.AppianHttpRequestExecutorPipeline - Could not authenticate with the connected 
system (UUID: ) or connect to the external system at the specified URL ( ). Check that the credentials in the connected system are correct and test the connection. There is also a log entry about OAuth token retrieval: INFO com.appiancorp.connectedsystems.http.oauth.HttpOAuthTokenRetriever - Error while retrieving token: request_error attempting to pass Authentication in body Cause The OAuth 2.0 endpoint is incorrectly configured to expect Client Credentials in the request body. Appian follows the IETF RFC 6749 standard for OAuth 2.0 Client Credentials Grant. This standard specifies that including the client credentials in the request body is not recommended. Appian sends client credentials in the authentication request header (the method preferred by IETF RFC 6749). If the request with credentials in the header fails, Appian will try again with the credentials in the request body. Action Configure the OAuth 2.0 endpoint to expect Client Credentials from Appian in the request header. Affected Versions This article applies to all versions of Appian. Last Reviewed: January 2026
These steps are meant for “Server Based” Appian, not Appian on Kubernetes (AoK). Instructions Download the migration tool from Forum. Follow the instructions here. Ensure you have received the Appian Cloud credentials from your Appian Support contact. Each Appian installation will have a different set of credentials Add the following environment variables to each server of the Appian installation that is migrating to Appian Cloud. For step 3c, set the AWS_REGION environment variable to us-gov-east-1 if your Appian Cloud site resides in a GovCloud AWS region. Otherwise, set it to us-east-1. export AWS_ACCESS_KEY_ID= export AWS_SECRET_ACCESS_KEY= export AWS_REGION= Utilize the --dry-run command below to check if the AWS credentials are set up properly Run the migration tool on every server of the Appian installation that is migrating to Appian Cloud Refer to some example commands below: ./migrate cloud export basic command ./migrate cloud export --help show the various arguments accepted by the script. Not all commands will be covered in this Knowledge Base article, so it is recommended to run --help ./migrate cloud export --dry-run validate your AWS credentials, and see a list of folders that will be migrated One of the first log messages should say “AWS Identity found”, and the identity should include “appian-cloud-migration” ./migrate cloud export -i /home/appian run a cloud export where Appian is installed at /home/appian ./migrate cloud export --db-dir migrate database data to Appian Cloud. This data should have previously been exported from your database, and stored in its own directory somewhere on one of your servers Follow the prompts that the migration tool gives you. Enter ? to get more information about each question “Application server data” and “engine checkpoints data” only need to be migrated once. Ensure that it is run on a server that runs the application server or engines, respectively.
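The environment-variable setup and dry-run check in step 3 above can be sketched as follows. All values here are placeholders; use the credentials supplied by Appian Support for this installation, and note that the migration command itself is shown commented out so nothing runs against AWS:

```shell
# Placeholder credentials -- substitute the values provided by Appian Support
# for this specific installation (each installation has its own set).
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="exampleSecretKey"
export AWS_REGION="us-east-1"   # use us-gov-east-1 for GovCloud sites

# Validate the credentials and preview the folders that would be migrated,
# assuming Appian is installed at /home/appian:
# ./migrate cloud export --dry-run -i /home/appian
```

Repeat this setup on each server before running the real export, since the variables only apply to the shell session in which they are exported.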
Repeat steps 3 and 4 for each Appian installation that is migrating to Appian Cloud. Affected Versions This article applies to all versions of &amp;quot;server based&amp;quot; self-managed Appian. Last Reviewed: January 2026</description><category domain="https://community.appian.com/support/tags/administration">administration</category><category domain="https://community.appian.com/support/tags/Migration">Migration</category></item><item><title>Wiki Page: KB-2250 How to run the Appian Log Generator and Diagnostics scripts</title><link>https://community.appian.com/support/w/kb/3186/kb-2250-how-to-run-the-appian-log-generator-and-diagnostics-scripts</link><pubDate>Wed, 21 Jan 2026 16:42:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:7576ad48-da95-4d56-b3d6-5510d413e42f</guid><dc:creator>Kaushal Patel</dc:creator><description>Purpose Appian is made up of multiple components, all of which must remain healthy for the platform to function properly. Traditionally, checking the health of each component requires running separate built-in scripts, which can be time-consuming. Troubleshooting often involves collecting various log files based on the issue at hand - a process that typically requires manually identifying, zipping, and compressing multiple files and folders across different components. This article introduces the following tools that simplify and streamline this process: The Appian Log Generator scripts - Quickly create a tar file containing relevant logs from different Appian components, reducing the manual effort involved in log collection. The Appian Diagnostics script - Gathers essential health metrics from the Appian environment and the underlying server in a fast and efficient way. The Appian on Kubernetes Log Generator and Diagnostics script - Gathers logging, metrics, and configuration details from the Appian environment, designed specifically for Appian on Kubernetes.
Table of Contents: What are the Appian Log Generator scripts? What is the Appian Diagnostics script? Equivalent for Appian on Kubernetes Instructions (non-Kubernetes) Log Generator Diagnostics Instructions (Kubernetes) What are the Appian Log Generator scripts? Appian Support maintains two scripts for log generation for non-containerized Appian (running on Windows or Linux OS): appian_sm_log_generator.sh - This script retrieves: The five most recent service_manager*.log files. The logs/service-manager directory which has the kafka and zookeeper logs. The forty-five most recent db_* logs. appian_tomcat_ss_ads_log_generator.sh - This script retrieves: The five most recent tomcat-stdOut* logs. The search-server directory hosting the search-server logs. The data-server directory hosting the data-server logs. Note that the script only gathers the logs from the server that the script is run on. The script should be run individually on each server from which logs need to be fetched. What is the Appian Diagnostics script? The Appian Diagnostics script appian_health_diagnostics.sh combines the different diagnostic scripts so that the state of each component in the environment can be understood by running a single script. In addition to the health of the Appian components, the script also provides system-level details like the RAM, CPU and disk usage of the server used to host Appian. Note that this script does not interact with log files or retrieve any information that could be deemed sensitive in nature, apart from server hostnames. The script only leverages the existing out-of-the-box diagnostic scripts and prints the output to a new log file. The script only gathers the details of the Appian components which are hosted on the server that the script is run on. Equivalent for Appian on Kubernetes In addition to the above scripts, Appian Support also maintains one script for both log generation and diagnostics for Appian on Kubernetes.
aok_log_diagnostic_script.sh - This script retrieves: Pod logs for all core Appian components in the site&amp;#39;s namespace. Pod and node resource metrics and status outputs. The Appian custom resource definition (CRD). Instructions (non-Kubernetes) Log Generation Scripts The script to gather engine, service-manager, kafka, and zookeeper logs can be downloaded here. The script to gather tomcat, search-server, and Appian data-server logs can be downloaded here. Run the following steps to execute the script: Place the relevant script in the /logs directory of the server the logs need to be generated on. For high availability environments, the script should be placed in the /shared-logs/*server_name* folder of the server the logs need to be generated on. For example: /shared-logs/machine1.example.com/ Make the script executable. For example: chmod +x appian_sm_log_generator.sh chmod +x appian_tomcat_ss_ads_log_generator.sh Execute the script. For example: ./appian_sm_log_generator.sh ./appian_tomcat_ss_ads_log_generator.sh The script will generate a file of the format hostname_date*.tar.gz in the directory the script is executed in. Attach the newly generated tar.gz file to the Support Case. Diagnostic Script Run the following steps to execute the script: Download the script to the server which hosts Appian. Make the appian_health_diagnostics.sh script executable. chmod +x appian_health_diagnostics.sh Run the following command: appian_health_diagnostics.sh -d *APPIAN_HOME* -p *SERVICE_MANAGER_PASSWORD* e.g. ./appian_health_diagnostics.sh -d /usr/local/appian/ae -p password The script will generate the diagnostic file in the APPIAN_HOME/logs directory. Attach the newly created diagnostic file to the support case. Instructions (Appian on Kubernetes) The script to gather logs and metrics for Appian on Kubernetes can be downloaded here. Run the following steps to execute the script: Place the script in a location where it can reach the cluster.
The script works by executing a series of kubectl commands, so any CLI that can reach the cluster&amp;#39;s API Server should work. Make the script executable. For example: chmod +x aok_log_diagnostic_script.sh Execute the script, passing in a parameter for the namespace for your Appian site. For example: ./aok_log_diagnostic_script.sh *my-appian-site* Optionally, you can also pass in a parameter to specify how many days of logging to collect (default is logs from the last 2 days). For example, to collect logs from the past 3 days, you would run ./aok_log_diagnostic_script.sh *my-appian-site* 3 The script will generate a file called aok_log_diagnostic_bundle.tar in the directory the script is executed in. Attach the newly generated .tar file to the Support Case. Affected Versions This article applies to Appian 18.3 and later. Last Reviewed: January 2025</description><category domain="https://community.appian.com/support/tags/Tomcat">Tomcat</category><category domain="https://community.appian.com/support/tags/logging">logging</category><category domain="https://community.appian.com/support/tags/how_2D00_to">how-to</category><category domain="https://community.appian.com/support/tags/search%2bserver">search server</category><category domain="https://community.appian.com/support/tags/data_2D00_server">data-server</category><category domain="https://community.appian.com/support/tags/service%2bmanager">service manager</category></item><item><title>Wiki Page: KB-2233 Appian Self-Managed Vulnerability Testing</title><link>https://community.appian.com/support/w/kb/3085/kb-2233-appian-self-managed-vulnerability-testing</link><pubDate>Fri, 16 Jan 2026 19:01:00 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:5f7ad94d-fcba-4c5c-abe7-9f9222307796</guid><dc:creator>Kaushal Patel</dc:creator><description>Purpose Self-managed customers can perform security-related activities against their Appian installation such as penetration testing and vulnerability
scanning as well as software composition analysis scans on installers, containers and plugin jars. This article outlines accepted formats for submitting vulnerabilities to Appian. Submitting Results The following applies to all submissions: Appian reviews security scan results only for recent hotfixes. Customers running older hotfix versions should upgrade to a recent hotfix and resubmit security scan results before the Appian team initiates a review. All documentation (including results, summaries, and reproduction steps) must be submitted in English. Appian will not accept findings that are missing information within the provided templates. Submissions must be made via a support case. Appian Vulnerabilities This section is applicable to penetration testing or vulnerability scans against Appian installations. Fill out the Appian Vulnerability Submission Worksheet according to the instructions below: All submitted vulnerabilities must be validated by the assessor prior to submission. Appian does not accept unvalidated results or direct output from automated scanners without additional manual validation. Appian requires verifiable evidence such as screenshots, payloads, or any other associated proof-of-concept material as well as manual reproduction steps in order to properly validate any reported vulnerability findings. All scanning or testing documentation must be accompanied by: A summarized index of all issues found, with the severity level of each issue. Clear evidence, produced by the assessor, showing that the proposed vulnerability can be used to exploit the system, for example by: Allowing inappropriate access to the system or its data. Allowing inappropriate modification of the system or its data. Allowing inappropriate use of a component of the system, or of the system as a whole. A description of the risk to the system. Guidance on how to reach the impacted endpoint(s). Clear steps on how to reproduce the issue.
Appian Third-Party Component Vulnerabilities This section is applicable to Software Composition Analysis scans against Appian installers, containers and plugin jars. Fill out the Appian third-party vulnerability submission worksheet according to the instructions below: Version (major and hotfix) must be provided. Self-managed vs. leveraging Appian on Kubernetes must be specified. If the vulnerability reporting source is vendor-specific (e.g., BlackDuck or X-Ray), the customer should provide as much explanatory detail as possible in the Description column in order for Appian to effectively validate the issue. What to Expect Next Appian will review the findings (assuming all submission requirements have been met) and either accept or reject each one. For rejected findings, Appian will provide an explanation as to why the reported vulnerability was rejected (false positive, configuration-level controls available to mitigate, etc.). For accepted findings, Appian will classify the severity of the finding as Low/Medium/High/Critical. Appian Support will provide analyses and impact assessments of the report and individual findings through the support case. Affected Versions This article applies to all self-managed versions of Appian. Last Reviewed: May 2023</description><category domain="https://community.appian.com/support/tags/self_2D00_managed">self-managed</category><category domain="https://community.appian.com/support/tags/Security">Security</category></item></channel></rss>