KB-2361 Addressing Canadian Data Residency and Compliance for Appian Private AI with Cross-Region Inference

Executive Summary

This document explains the strategy and compliance posture for Appian Private AI services operating within Canada. To provide our Canadian customers with access to the most advanced Generative AI models and ensure high availability, Appian leverages an underlying architecture that may include AWS’s Cross-Region Inference Service (CRIS).

This approach remains fully compliant with Canadian data protection guidance (CCCS and TBS) and Appian's own SOC 2 and ISO compliance commitments.

The key principle is the critical distinction between data at rest and data in transit.

  • Data at Rest: All customer data within the Appian platform—such as data in records, application artifacts, and business documents—is persistently retained and encrypted at rest within the Appian Cloud Canada (Montreal: ca-central-1) region.
  • Data in Transit: For specific Generative AI tasks, the data (the prompt and input data) is sent over a secure channel for inference only to a US region. Once the inference is completed, the response is sent back over a secure channel to the Canadian Region. No data is retained in the US region.

This model is aligned with guidance from Canadian regulatory bodies, including the Canadian Centre for Cyber Security (CCCS), which has assessed this transient processing model as acceptable for CCCS Medium Profile workloads.

The Need for Cross-Region Inference

The demand for Generative AI is growing at an unprecedented rate, placing an enormous strain on the global supply of specialized GPU (Graphics Processing Unit) and Data Center capacity.

  1. High Demand for Advanced Models: The most powerful Large Language Models (LLMs), such as the latest Claude models (Sonnet 4.5 / Haiku 4.5), require massive, cutting-edge GPU clusters to run.
  2. Infrastructure Scaling: This specialized hardware is not a simple commodity. Global demand far outpaces supply, and new capacity is brought online in centralized "hyperscale" data centers first, which are primarily in the US.
  3. The Challenge: Waiting for this highly-constrained GPU capacity to be physically deployed in every sovereign region (like Canada) would mean Canadian customers would face significant delays—months or even years—in accessing the latest AI models, or they would face severe performance bottlenecks on older, over-subscribed in-country models.

Using a cross-region inference model allows AWS to route requests from the Canadian region to a US region with available capacity. This is not a data-hosting strategy; it is a ‘compute and availability’ strategy to ensure Canadian customers are not at a competitive disadvantage.
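The routing decision can be pictured with a small, purely illustrative sketch. The real CRIS routing logic and its capacity signals are internal to AWS; the capacity map and selection rule below are hypothetical stand-ins.

```python
# Illustrative only: models how a cross-region router might pick a
# destination region with spare capacity. The actual AWS CRIS
# implementation is internal to AWS and not exposed to customers.

DESTINATION_REGIONS = ["us-east-1", "us-east-2", "us-west-2"]  # US inference profile regions

def route_request(capacity: dict) -> str:
    """Return the destination region with the most spare capacity.

    `capacity` maps region name -> available capacity units (hypothetical).
    """
    candidates = {r: capacity.get(r, 0) for r in DESTINATION_REGIONS}
    best = max(candidates, key=candidates.get)
    if candidates[best] == 0:
        raise RuntimeError("no capacity available in any destination region")
    return best

print(route_request({"us-east-1": 2, "us-east-2": 9, "us-west-2": 5}))  # -> us-east-2
```

The point of the sketch is only that the request is directed to whichever US region can serve it; no customer data influences or persists in the routing layer.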

How Appian Private AI Manages Data and Compliance

Appian Private AI is a foundational element of our platform, built on the principle of "private by design." This privacy extends to how we handle all AI processing, which occurs within our robust, SOC 2-compliant security boundary.

Here is the step-by-step data flow for a cross-region AI request using the Generative AI Skill:

  1. Data at Rest (Canada): A user in Appian (in the Canada region) triggers an AI skill. The data for the prompt and the input (e.g., text from a record) is read from Appian's Canadian-hosted database.
  2. Encryption in Transit (Appian): The data is sent over a secure, private TLS channel to the AWS Bedrock API endpoint.
  3. Cross-Region Routing (AWS): The AWS CRIS service, acting as a smart router, directs this secure request to a US region (e.g., us-east-1, us-east-2, us-west-2) based on current capacity. This entire transit occurs over the private AWS global network backbone, not the public internet.
  4. Data Inference (USA): The US-based service receives the encrypted request.
  • The data is decrypted and processed in memory solely for inference.
  • CRITICAL: At no point is the customer's prompt or data ever retained on disk in the US region. It is not logged, and it is never used to train or improve the underlying AI models.
  5. Encrypted Return: The model's response is generated, encrypted, and sent back over the private AWS global network to the Appian Cloud Canada region via the same secure channel used for the request.
  6. Data at Rest (Canada): The Appian platform receives the secure response and uses it in the process—where it may then be retained in the Appian record in Canada.
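The request path above can be sketched as the parameters an application in ca-central-1 would pass to the Bedrock Runtime Converse API. This is a minimal illustration: the `us.` prefix follows Bedrock's cross-region inference profile naming convention, and the specific model identifier shown is hypothetical and should be confirmed against the configured environment.

```python
# Sketch of a cross-region inference request. The "us." prefix on the
# model ID selects a US cross-region inference profile rather than a
# single in-region endpoint; the base model ID below is illustrative.

US_PROFILE_PREFIX = "us."

def build_converse_request(base_model_id: str, prompt: str) -> dict:
    """Build Bedrock Runtime Converse parameters for a US inference profile.

    The client itself would be created in ca-central-1; CRIS then routes
    the request to a US destination region over the AWS backbone.
    """
    return {
        "modelId": US_PROFILE_PREFIX + base_model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

params = build_converse_request(
    "anthropic.claude-sonnet-4-5-20250929-v1:0",  # hypothetical model ID
    "Summarize this case record.",
)
# With boto3 (not executed here):
#   client = boto3.client("bedrock-runtime", region_name="ca-central-1")
#   response = client.converse(**params)
print(params["modelId"])  # -> us.anthropic.claude-sonnet-4-5-20250929-v1:0
```

Note that nothing in the request pins a specific US region; the inference profile delegates that choice to CRIS at request time.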

Alignment with Canadian Data Governance

This architecture is fully aligned with modern Canadian data governance and privacy principles.

The Key Distinction: Inference vs. Retention

Canadian data residency and sovereignty laws (such as those in British Columbia, Quebec, and for public sector contracts) are primarily concerned with the persistent retention of data at rest. The "data residency" requirement is to ensure that Canadian data is not retained in a foreign jurisdiction, where it could be subject to foreign laws and access requests.

The inference model does not violate this principle. The data resides in Canada. It merely takes a momentary, encrypted, and secure "trip" to a specialized processor for inference before returning home.

Regulatory Precedent (CCCS and TBS)

This is not a new or untested legal theory. The Government of Canada has already assessed and approved this model.

  • Canadian Centre for Cyber Security (CCCS): The CCCS has assessed AWS Bedrock (the underlying service) as compliant for CCCS Medium Profile (formerly Protected B) workloads. This assessment was granted specifically with the understanding that it involved inference capabilities located in US AWS regions, based on the fact that the data is transient and processed for inference only.
  • Treasury Board of Canada Secretariat (TBS): The TBS has updated its cloud policy to move from a rigid "storage in Canada only" requirement to a more modern, risk-based approach. This permits the use of secure, cross-border services where Canadian data residency remains the primary delivery model but transient foreign processing is used to access capabilities not yet available in-country.

End-to-End Security and Appian Cloud Compliance

The entire AI process is enveloped within Appian's comprehensive, independently-audited security and compliance framework.

Each control area below is paired with its Appian Private AI implementation:

  • Appian Cloud Compliance: The Appian Cloud platform (Canada) is independently audited and compliant with SOC 2 Type II, ISO 27001/27017/27018, and Canada Protected B. These attestations cover all services managed within the platform, including the handling and orchestration of AI requests.
  • Encryption at Rest: 100% of customer data (records, documents, database) is encrypted at rest using AES-256 within the Appian Cloud Canada region.
  • Encryption in Transit: All data transmitted between Appian and the AWS Bedrock endpoints is encrypted using TLS. This includes all cross-region traffic, which travels over the private AWS backbone, isolating it from the public internet.
  • Data Privacy (No Training): A core tenet of Appian Private AI is that your data is your data. Customer data (prompts, inputs, and responses) is never used to train or improve any AI models. This is a contractual guarantee.
  • Auditing and Logging: All API logs (e.g., the fact that an AI skill was called at a certain time) are captured and retained exclusively in the Canadian source region. The content of the prompt/input/response is never logged or retained in the US processing region. This provides a complete, in-Canada audit trail for compliance.
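To illustrate the "metadata only" audit posture, the sketch below shows a hypothetical in-Canada audit record that captures the fact of the call but none of the prompt, input, or response content. All field names here are invented for illustration; they do not describe Appian's actual log schema.

```python
# Hypothetical audit record: the Canadian audit trail captures that an AI
# skill was invoked and when, but never the prompt/input/response content.

FORBIDDEN_FIELDS = {"prompt", "input_text", "response_text"}

audit_record = {
    "event": "ai_skill_invoked",
    "timestamp": "2025-01-15T14:02:11Z",
    "region": "ca-central-1",          # logs stay in the source region
    "destination_region": "us-east-2", # where inference was routed
    "skill": "Summarize Document",
}

def is_content_free(record: dict) -> bool:
    """True if the record holds metadata only, with no prompt/response content."""
    return FORBIDDEN_FIELDS.isdisjoint(record)

assert is_content_free(audit_record)
print("audit record is metadata-only")
```

The check captures the compliance-relevant property: an auditor can reconstruct who called what and when, entirely from records held in Canada, without any customer content ever leaving the at-rest boundary in log form.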

How to Enable This in Appian

To leverage the cross-region inference architecture described above, System Administrators must explicitly configure the AI settings in the Appian Administration Console. This configuration authorizes the platform to route inference requests to the US region for the selected models.

Prerequisites

  • Role: You must be a System Administrator to modify these settings.
  • Environment: Access to the Administration Console.

Configuration Steps

  1. Open the Appian Administration Console.
  2. Navigate to the AI Services page (listed in the left-hand menu).
  3. Select the Appian tab.
  4. Locate the desired model in the list (e.g., Claude Sonnet 4.5 or Claude Haiku 4.5).
  5. In the Inference Profile dropdown for that model, select the US inference profile.
    • The dropdown displays the list of possible destination regions where inference will take place.
    • By selecting a US profile, you are enabling the cross-region data transit described in the data flow above.
  6. Click Save Changes.

Verification

Once saved, the model becomes immediately available for use in AI Skills, the AI Copilot, and other generative AI capabilities. You can verify the configuration by testing a prompt in an AI Skill design object; the response will now be generated via the designated US inference endpoint.
