This document explains the strategy and compliance posture for Appian Private AI services operating within Canada. To provide our Canadian customers with access to the most advanced Generative AI models and ensure high availability, Appian leverages an underlying architecture that may include AWS’s Cross-Region Inference Service (CRIS).
This approach remains fully compliant with Canadian data protection requirements, including CCCS guidance and Treasury Board of Canada Secretariat (TBS) policy, as well as Appian's own SOC 2 and ISO compliance certifications.
The key principle is the distinction between data at rest and data in transit.
This model is aligned with guidance from Canadian regulatory bodies, including the Canadian Centre for Cyber Security (CCCS), which has assessed and approved this transient processing model for cloud workloads.
The demand for Generative AI is growing at an unprecedented rate, placing an enormous strain on the global supply of specialized GPU (Graphics Processing Unit) and Data Center capacity.
Using a cross-region inference model allows AWS to route requests from the Canadian region to a US region with available capacity. This is not a data-hosting strategy; it is a ‘compute and availability’ strategy to ensure Canadian customers are not at a competitive disadvantage.
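In Amazon Bedrock, cross-region inference is configured through an inference profile that names a set of candidate regions. The sketch below models that routing decision in plain Python to make the "compute and availability" idea concrete; the profile ID, region list, and `route_inference` helper are all illustrative, not AWS or Appian APIs.

```python
# Conceptual sketch (not AWS internals): a cross-region inference profile
# directs a request to whichever candidate region currently has capacity,
# while the request always originates from, and returns to, the Canadian
# region. Profile ID and region names are illustrative.
CANDIDATE_REGIONS = {
    "us.example-model-profile": ["us-east-1", "us-west-2"],
}

def route_inference(profile_id: str, capacity: dict) -> str:
    """Pick the first candidate US region reporting spare capacity."""
    for region in CANDIDATE_REGIONS[profile_id]:
        if capacity.get(region, 0) > 0:
            return region
    raise RuntimeError("no capacity available in any candidate region")

# A request routed while us-east-1 is saturated lands in us-west-2.
print(route_inference("us.example-model-profile",
                      {"us-east-1": 0, "us-west-2": 3}))
```

The point of the sketch is that routing is a per-request compute decision, not a relocation of stored data: the source region stays fixed while only the inference hop varies.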
Appian Private AI is a foundational element of our platform, built on the principle of "private by design." This privacy extends to how we handle all AI processing, which occurs within our robust, SOC 2-compliant security boundary.
Here is the step-by-step data flow for a cross-region AI request using the Generative AI Skill:

1. The request originates within the customer's Appian Cloud environment in the Canada region, where all data is stored at rest.
2. The prompt is encrypted with TLS and routed over the private AWS backbone to an AWS Bedrock endpoint in a US region with available capacity.
3. The model performs inference on the request; the prompt, input, and response are never logged or retained in the US processing region.
4. The encrypted response is returned to the Appian Cloud Canada region.
5. Audit metadata for the call is captured and retained exclusively in the Canadian source region.
This architecture is fully aligned with modern Canadian data governance and privacy principles.
The Key Distinction: Inference vs. Retention
Canadian data residency and sovereignty laws (such as those in British Columbia, Quebec, and for public sector contracts) are primarily concerned with the persistent retention of data at rest. The "data residency" requirement is to ensure that Canadian data is not retained in a foreign jurisdiction, where it could be subject to foreign laws and access requests.
The inference model does not violate this principle. The data resides in Canada. It merely takes a momentary, encrypted, and secure "trip" to a specialized processor for inference before returning home.
Regulatory Precedent (CCCS and TBS)
This is not a new or untested legal theory. The Government of Canada has already assessed and approved this model.
The entire AI process is enveloped within Appian's comprehensive, independently-audited security and compliance framework.
The following control areas summarize how Appian Private AI implements this framework:
Appian Cloud Compliance
The Appian Cloud platform (Canada) is independently audited and compliant with SOC 2 Type II, ISO 27001/27017/27018, and Canada Protected B. These certifications cover all services managed within the platform, including the handling and orchestration of AI requests.
Encryption at Rest
100% of customer data (records, documents, database) is encrypted at rest using AES-256 within the Appian Cloud Canada region.
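To make the "AES-256" figure concrete: it denotes a 256-bit (32-byte) symmetric key. The snippet below only illustrates the key size; actual key generation, storage, and rotation are handled by the Appian Cloud platform, not by application code.

```python
import secrets

# AES-256 uses a 256-bit symmetric key: generating one is drawing
# 32 cryptographically random bytes. Illustrative only -- Appian Cloud
# manages its encryption keys as part of the platform.
data_key = secrets.token_bytes(32)
print(len(data_key) * 8)  # -> 256
```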
Encryption in Transit
All data transmitted between Appian and the AWS Bedrock endpoints is encrypted using TLS. This includes all cross-region traffic, which travels over the private AWS backbone, isolating it from the public internet.
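As an illustration of the client-side posture such TLS connections imply, here is a generic Python sketch; it is not Appian's actual transport configuration, which is managed by the platform over the AWS backbone.

```python
import ssl

# A default-hardened client TLS context: certificate verification on,
# hostname checking on, and a floor of TLS 1.2. Sketch only -- Appian's
# Bedrock traffic is handled by the platform, not application code.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server certs are validated
print(ctx.check_hostname)                    # True: endpoint identity is checked
```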
Data Privacy (No Training)
A core tenet of Appian Private AI is that your data is your data. Customer data (prompts, inputs, and responses) is never used to train or improve any AI models. This is a contractual guarantee.
Auditing and Logging
All API logs (e.g., the fact that an AI skill was called at a certain time) are captured and retained exclusively in the Canadian source region. The content of the prompt/input/response is never logged or retained in the US processing region. This provides a complete, in-Canada audit trail for compliance.
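The metadata-only logging pattern described above can be sketched as follows. The field names are hypothetical; the structural point is that the audit record captures which skill ran, when, and from which region, while deliberately omitting the prompt and response bodies.

```python
import json
import time

def audit_ai_call(skill_name: str, prompt: str) -> str:
    """Build an audit record for an AI skill invocation.

    Sketch only -- field names are illustrative. The record carries
    metadata (which skill, when, source region) and never the prompt or
    response content, so the in-Canada audit trail contains no customer
    data from the call itself.
    """
    record = {
        "event": "ai_skill_invoked",
        "skill": skill_name,
        "timestamp": time.time(),
        "source_region": "ca-central-1",
        # No "prompt" or "response" field: content is never logged.
    }
    return json.dumps(record)

entry = audit_ai_call("summarize-case", "confidential patient details")
print("confidential" in entry)  # False: content never reaches the log
```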
To leverage the cross-region inference architecture described above, System Administrators must explicitly configure the AI settings in the Appian Administration Console. This configuration authorizes the platform to route inference requests to the US region for the selected models.
Prerequisites
Configuration Steps
Verification

Once saved, the model becomes immediately available for use in AI Skills, the AI Copilot, and other generative AI capabilities. You can verify the configuration by testing a prompt in an AI Skill design object; the response will now be generated via the designated US inference endpoint.