KB-2356 How to migrate a machine-based installation's shared data volume to AoK in place

Purpose

If you have a machine-based Appian deployment that uses a shared filesystem (e.g. AWS Elastic File System, NFS), you can migrate certain files on the shared filesystem to your Appian on Kubernetes (AoK) site in place. Instead of using the migration tool to export, merge, and import the files on your shared filesystem, you can expose the shared filesystem as a Persistent Volume Claim (PVC) in your Kubernetes cluster and mount it to your AoK containers.
This guide assumes that you have a relatively standard HA or distributed deployment, where only the _admin and server directories, and optionally shared-logs, exist on your shared filesystem.

Instructions

Prerequisites

Before running the migration tool, please back up your site. Ensure your shared filesystem is backed up as well.

The following steps will change the directory structure of and write new data to your shared filesystem. If you need to abort your AoK migration, you should be able to revert to your machine-based deployment using a backup of your shared filesystem.

Step 1: Choose which files to migrate in place

The following directories are eligible for this process:

  • engine-checkpoints
  • content-documents
  • archived-processes
  • shared-logs

Step 2: Change paths on the shared filesystem

The haExistingClaim and healthCheckExistingClaim PVCs can bind to the same shared filesystem, but they should point to distinct subdirectories within that filesystem to prevent data collisions.

Reorganize the directory structure within your shared filesystem so the haExistingClaim PVC can point to one subdirectory of the shared filesystem and the healthCheckExistingClaim can point to a separate subdirectory.

Typical HA and distributed deployments have the following folder structure on the shared filesystem:

/
├── _admin
├── server
└── shared-logs

Create new folders in the top level of your shared filesystem:

/
├── shared-data
└── health-check (only if you have shared-logs)

If you have shared-logs but opt not to migrate them in place, you should still move them into the health-check directory. This ensures your haExistingClaim PVC does not contain shared-logs.

Reorganize the files to match this structure (a directory mapping table and an example shell sketch follow):

/shared-data
├── _admin
│   ├── accdocs1
│   ├── accdocs2
│   ├── accdocs3
│   ├── mini
│   ├── models
│   ├── plugins
│   ├── process_notes
│   └── shared
├── services
│   └── data
│       └── server
└── server
    ├── archived-process
    └── msg

/health-check (only if you have shared-logs)
└── shared-logs

The table below maps each original directory to its target. These are the directories' default paths; take any custom paths into account.

Original Directory           Target Directory
/_admin/accdocs1/            /shared-data/_admin/accdocs1/
/_admin/accdocs2/            /shared-data/_admin/accdocs2/
/_admin/accdocs3/            /shared-data/_admin/accdocs3/
/_admin/mini/                /shared-data/_admin/mini/
/_admin/models/              /shared-data/_admin/models/
/_admin/plugins/             /shared-data/_admin/plugins/
/_admin/process_notes/       /shared-data/_admin/process_notes/
/_admin/shared/              /shared-data/_admin/shared/
/server/msg/                 /shared-data/server/msg/
/server/archived-process/    /shared-data/server/archived-process/
/server/ *                   /shared-data/services/data/server/
/shared-logs/                /health-check/shared-logs/

* Excluding /server/msg/ and /server/archived-process/, which are mapped above.
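
For illustration, here is a minimal shell sketch of the whole reorganization. It assumes default paths and that the shared filesystem is mounted at /mnt/shared (a hypothetical mount point); adjust both for your environment, and run it only after backing up:

#!/bin/bash
set -euo pipefail

FS=/mnt/shared   # hypothetical mount point of the shared filesystem

# Create the new top-level directories.
mkdir -p "$FS/shared-data/services/data"

# Move /server/ wholesale, then pull msg and archived-process back out
# to /shared-data/server/ per the mapping table above.
mv "$FS/server" "$FS/shared-data/services/data/server"
mkdir -p "$FS/shared-data/server"
mv "$FS/shared-data/services/data/server/msg" "$FS/shared-data/server/msg"
mv "$FS/shared-data/services/data/server/archived-process" "$FS/shared-data/server/archived-process"

# Move document storage and other shared data.
mv "$FS/_admin" "$FS/shared-data/_admin"

# Only if you have shared-logs:
mkdir -p "$FS/health-check"
mv "$FS/shared-logs" "$FS/health-check/shared-logs"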

Step 3: Change ownership and permissions

The Appian Operator sets the fsGroup for all pods, which alleviates most permissions issues. However, fsGroup has no effect on some shared filesystems (e.g. NFS, EFS). If that is the case for yours, ensure that:

  • Any mount directories on the shared filesystem are mounted with permissions accessible by the appian user (UID 500)
    • If using EFS, you can use Access Points to enforce a user identity
  • Directories and files under /shared-data and /health-check are accessible by the appian user (UID 500)
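
If ownership needs to be fixed manually, a minimal sketch, assuming the shared filesystem is mounted at /mnt/shared (a hypothetical mount point) and that the appian user and group both map to ID 500:

# Run from a machine that mounts the shared filesystem with root access.
sudo chown -R 500:500 /mnt/shared/shared-data /mnt/shared/health-check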

Step 4: Create a Persistent Volume and Persistent Volume Claim for your shared filesystem

In your Kubernetes cluster, create two ReadWriteMany (RWX) Persistent Volumes from your shared filesystem: one to be claimed by haExistingClaim and the other by healthCheckExistingClaim.

haExistingClaim's PV should have a root path of /shared-data, and healthCheckExistingClaim's PV should have a root path of /health-check.

If you’re using AWS EFS and EFS Access Points, you can do so using the EFS CSI driver:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <PV name>
spec:
  capacity:
    storage: <storage capacity>
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <EFS identifier e.g. fs-03cada681ae7588c9>::<Access Point identifier with root path of /shared-data e.g. fsap-08ab03e134c34fe01>

# ONLY IF MIGRATING SHARED-LOGS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <PV name>
spec:
  capacity:
    storage: <storage capacity>
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <EFS identifier e.g. fs-03cada681ae7588c9>::<Access Point identifier with root path of /health-check e.g. fsap-0790259c651ae90c7>
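
The Access Point identifiers referenced in the volumeHandle fields can be created with the AWS CLI if they don't already exist. A minimal sketch, assuming the /shared-data and /health-check directories already exist on the filesystem (otherwise you must also supply CreationInfo) and using a placeholder file system ID:

# Enforce the appian user identity (UID/GID 500) on all access
# through this Access Point, rooted at /shared-data.
aws efs create-access-point \
  --file-system-id fs-03cada681ae7588c9 \
  --posix-user Uid=500,Gid=500 \
  --root-directory Path=/shared-data

# Repeat with --root-directory Path=/health-check if migrating shared-logs.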

Create a Persistent Volume Claim against each of the Persistent Volumes created above. For example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <PVC name>
spec:
  accessModes:
    - ReadWriteMany 
  resources:
    requests:
      storage: <storage capacity request>
  storageClassName: ""
  volumeName: <PV name>
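
After writing the manifests, apply them and confirm that each claim binds before proceeding. A quick sketch (file names are placeholders):

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml -n <site namespace>

# Each PVC should report a STATUS of Bound.
kubectl get pvc -n <site namespace>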

Step 5: Run the migration tool

Run the migration tool export command with the option to exclude the types of files you chose in Step 1. Then run the migration tool merge command. When you reach the import step of the migration tool, return to these instructions and complete the next step.

Step 6: Update the Appian CR

Before the import step of the migration tool, customize the Appian CR generated by the merge step.

Step 6a: Set haExistingClaim

The /shared-data directory of your shared filesystem will serve as your ReadWriteMany volume for Webapp and Service Manager’s haExistingClaim.

Under the webapp and serviceManager sections, uncomment and set the haExistingClaim field to the name of the shared-data PVC you created in Step 4.
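
For illustration, the relevant portion of the CR might look like this once set. The PVC name is a placeholder, and the exact nesting comes from the CR generated by the merge step:

spec:
  webapp:
    haExistingClaim: <shared-data PVC name>
  serviceManager:
    haExistingClaim: <shared-data PVC name>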

Step 6b: Set healthCheckExistingClaim (optional)

The /health-check directory of your shared filesystem will serve as your ReadWriteMany volume for Webapp's healthCheckExistingClaim.

Under the webapp section, uncomment and set the healthCheckExistingClaim field to the name of the health-check PVC you created in Step 4.
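
Similarly, a sketch with a placeholder PVC name:

spec:
  webapp:
    healthCheckExistingClaim: <health-check PVC name>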

Step 7: Run the migration tool import command and complete the migration

Here is an example of executing the import command on a Unix/Linux system for a distributed site:

./migrate import -z merge.zip -a appian.yaml -n <site namespace>

Once this process completes, the tool will start up your new Appian deployment. If you have not yet set up how to expose Appian, the front end will not be accessible. However, you should check that all of the Appian services came up, and verify that the number of pods matches your site's defined replica counts. At this point, you may want to visit the Tasks, Configuration, and Reference sections of the Appian on Kubernetes documentation for instructions on upkeep, upgrading, and additional configuration.
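
For example, a quick check with kubectl (the namespace is the one passed to the import command):

# All pods should eventually report Running and Ready, and the total
# should match your site's configured replica counts.
kubectl get pods -n <site namespace>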

Affected Versions

This article applies to all self-managed versions of Appian.

Last Reviewed: October 2025
