Hi everyone,
We’re building a large BPM platform in Appian for a bank. It handles multiple core processes such as Loan Origination and KYC, among other services. We’re also integrating with both a CRM and a Document Management System (DMS), so document handling is a key topic for us.
CRM sends us structured application data via Kafka events.
We use a central table called BPM_PROCESS_MASTER to track each case (with caseId, correlationId, processKey, etc.).
We built a shared table BPM_DOCUMENT_LOG to track document references, with fields like:
processId
entityType
entityRefId
documentType
AppianDocId (if it’s stored in Appian)
DMS_UUID (if it’s stored in our DMS)
This setup gives us flexibility:
We can store documents at any level (applicant, guarantor, collateral, etc.)
We can handle new document uploads mid-process (like if Credit asks CRM to re-send more docs)
It keeps our business data clean and avoids spreading document fields everywhere
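To make this concrete, here is a rough sketch of the two tables in MariaDB-style SQL. This is simplified and illustrative only (column names, types, and the foreign key wiring are indicative, not our exact DDL):

    CREATE TABLE BPM_PROCESS_MASTER (
        case_id         BIGINT AUTO_INCREMENT PRIMARY KEY,
        correlation_id  VARCHAR(64)  NOT NULL,
        process_key     VARCHAR(128) NOT NULL,
        created_at      TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

    CREATE TABLE BPM_DOCUMENT_LOG (
        document_log_id BIGINT AUTO_INCREMENT PRIMARY KEY,
        process_id      BIGINT       NOT NULL,   -- the owning case in BPM_PROCESS_MASTER
        entity_type     VARCHAR(50)  NOT NULL,   -- 'APPLICANT', 'COLLATERAL', 'APPLICATION', ...
        entity_ref_id   BIGINT       NOT NULL,   -- PK of the row in that entity's own table
        document_type   VARCHAR(100) NOT NULL,   -- 'ID_COPY', 'CBJ_CONSENT', ...
        appian_doc_id   BIGINT       NULL,       -- populated when the file is stored in Appian
        dms_uuid        VARCHAR(64)  NULL,       -- populated when the file is stored in the DMS
        created_at      TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
        CONSTRAINT fk_doc_log_process
            FOREIGN KEY (process_id) REFERENCES BPM_PROCESS_MASTER (case_id),
        INDEX ix_doc_log_entity (process_id, entity_type, entity_ref_id)
    );

Either appian_doc_id or dms_uuid is populated depending on where the file actually lives, which is why both stay nullable.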
Our vendor is pushing back. They want to store document references inside each business table (APPLICANT, COLLATERAL, etc.), and to create a dedicated relationship for each document inside those tables: for every document type there would be a foreign key column in the business entity pointing to the DOCUMENT_LOG. For example:
APPLICANT would have columns like cbjConsentDocId, idCopyDocId
COLLATERAL would have collateralAttachmentId, etc.
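Concretely, their approach amounts to something like this (column names taken from their examples, types illustrative):

    ALTER TABLE APPLICANT
        ADD COLUMN cbjConsentDocId BIGINT NULL,   -- FK to DOCUMENT_LOG
        ADD COLUMN idCopyDocId     BIGINT NULL;   -- FK to DOCUMENT_LOG

    ALTER TABLE COLLATERAL
        ADD COLUMN collateralAttachmentId BIGINT NULL;   -- FK to DOCUMENT_LOG

Every new document type then means a new column, i.e. a schema change, instead of just another row in BPM_DOCUMENT_LOG.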
This tightly couples documents to each business entity, and makes our model less flexible, especially considering:
New documents may be requested during the process
Document types differ by process (e.g., KYC vs Loan)
Some documents exist only in DMS (not Appian)
We believe a centralized and normalized DOCUMENT_LOG table is more scalable across processes and systems.
Their argument: Appian has a 3-level relationship depth limit, which makes saving deeply nested data and documents together challenging, especially in bulk operations.
Additionally, the vendor raised a concern around bulk inserts of related entities like applicants and collaterals: when inserting many records at once, Appian returns generated IDs but does not specify which ID corresponds to which entity in the payload. This makes it difficult to correctly link multiple uploaded documents to their respective entities.
Honestly, this sounds more like a workaround than a limitation. But we want to do it right — and follow real best practices.
Is Appian really limited in a way that prevents us from using a shared DOCUMENT_LOG table?
Is it better to keep document references in a centralized table, especially when we deal with:
Multiple processes
DMS integration (not just Appian documents)
Mid-process uploads
How do others handle this in large-scale or event-driven setups?
We’d love to hear how others approached this — or if we’re missing something important.
Thanks in advance!

CRM to Appian simplified sample JSON payload:

{
  "meta": {
    "process": "Retail.Murabaha.CarLoan.CreditReview",
    "eventType": "START_CREDIT_REVIEW",
    "source": "CRM",
    "destination": "BPM",
    "isSynchronously": false,
    "isAcknowledgment": true,
    "correlationId": "abc12345-0000-1111-2222-1234567890ab",
    "transactionId": "def67890-0000-2222-3333-0987654321ba",
    "submittedBy": 1001,
    "timestamp": "2025-07-13T12:00:00Z"
  },
  "data": {
    "application": {
      "applicationType": "1",
      "branchCode": "1000",
      "submissionDate": "2025-07-13T12:00:00Z",
      "requestedAmount": 15000.0,
      "currency": "USD",
      "tenorMonths": 48,
      "annualProfitRate": 6.0,
      "totalAmount": 18000.0,
      "installmentAmount": 375.0,
      "productInfo": {
        "carBrand": "GenericCar",
        "carType": "Sedan",
        "productionYear": 2020,
        "attachments": {
          "carLicenseAttachment": "uuid-car-license-001"
        }
      },
      "applicants": [
        {
          "cif": 12345,
          "personalInfo": {
            "fullName": "John Doe",
            "nationalId": "A123456789"
          },
          "attachments": {
            "idCopy": "uuid-id-copy-001"
          }
        },
        {
          "cif": 12346,
          "personalInfo": {
            "fullName": "Jane Smith",
            "nationalId": "B987654321"
          },
          "attachments": {
            "idCopy": "uuid-id-copy-002"
          }
        }
      ],
      "collaterals": [
        {
          "collateralType": "Vehicle",
          "collateralAmount": 10000.0,
          "attachments": {
            "collateralAttachment": "uuid-collateral-001"
          }
        }
      ]
    }
  }
}
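For illustration, the attachments in that payload end up as rows in BPM_DOCUMENT_LOG along these lines. The IDs are made up; entity_ref_id is whatever key the APPLICANT/COLLATERAL rows receive when they are written, and the car licence sits at application level, so we point it at the case itself:

    -- appian_doc_id stays NULL here because these files live only in the DMS
    INSERT INTO BPM_DOCUMENT_LOG (process_id, entity_type, entity_ref_id, document_type, dms_uuid)
    VALUES
        (9001, 'APPLICATION', 9001, 'CAR_LICENSE',           'uuid-car-license-001'),
        (9001, 'APPLICANT',   501,  'ID_COPY',               'uuid-id-copy-001'),
        (9001, 'APPLICANT',   502,  'ID_COPY',               'uuid-id-copy-002'),
        (9001, 'COLLATERAL',  301,  'COLLATERAL_ATTACHMENT', 'uuid-collateral-001');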
In my experience, trying to centralize things across multiple Appian applications is a mixed bag. I have seen systems where colliding requirements and expectations lead to serious issues.
The real challenge is to find the right level to generalize certain functionality. There is no wrong or right as it always depends on so many things.
ahmadb3492 said: We believe a centralized and normalized DOCUMENT_LOG table is more scalable across processes and systems. Their argument: Appian has a 3-level relationship depth limit, which makes saving deeply nested data and documents together challenging, especially in bulk operations.
I do not opt for centralized objects, and try to build a simple data model, even if that means it is only slightly normalized.
Also, I do not know of any hard limit in terms of relationship depth. But I have seen deeply nested models suffer in performance, when, and only when, queries become complex. This is no different from a slow view in a database.
Hi Stefan, really appreciate your insight.
You're absolutely right: blindly centralizing across applications can cause more harm than good if not well scoped. And your point about finding the right level of generalization is spot on.
In our case, we’re building a single unified Appian BPM platform for a bank, not multiple siloed apps. This BPM layer handles many core processes (e.g., Loan Origination, KYC, Complaints, etc.), and receives structured data from CRM via Kafka. Each process involves different entity types (e.g., applicants, collaterals, companies), and each of those can require documents at different stages — including mid-process uploads triggered by Credit or Risk teams.
So our design centralizes only the document-logging layer, via a normalized DOCUMENT_LOG table which stores:
process_id
entity_type (like APPLICANT or COLLATERAL)
entity_ref_id
document_type
Appian document ID and/or DMS UUID (we use an external DMS)
This avoids scattering document columns across business tables and gives us a clean, auditable document trail across processes.
We're not trying to build one centralized record or interface — just a decoupled way to log and track documents cleanly.
As for the vendor’s concern: they argued that Appian’s 3-level relationship limit prevents them from saving nested structures together with their documents. But that seems more a limitation of how they’re using a!writeRecords() than an actual Appian restriction. We’ve suggested saving the documents after saving the related entities (using the returned primary keys), which avoids the issue entirely.
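At the data level the two-step approach is nothing exotic. Illustrative SQL only (in Appian these are two a!writeRecords calls, with the generated keys taken from the first write's result rather than re-queried; applicant_id stands in for whatever the APPLICANT primary key is called):

    -- Step 1: save the business entities; the generated keys come back on the written records
    INSERT INTO APPLICANT (cif, full_name, national_id)
    VALUES (12345, 'John Doe', 'A123456789');

    -- Step 2: save the document references, now that the entity keys are known
    -- (shown here by matching on the stable business key cif)
    INSERT INTO BPM_DOCUMENT_LOG (process_id, entity_type, entity_ref_id, document_type, dms_uuid)
    SELECT 9001, 'APPLICANT', a.applicant_id, 'ID_COPY', 'uuid-id-copy-001'
    FROM APPLICANT a
    WHERE a.cif = 12345;

If the ordering of the IDs returned from a bulk write is ever in doubt, matching on a stable business key like cif (as in step 2) removes the correlation problem entirely.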
So to summarize:
We agree that overgeneralization is risky
But in this case, document tracking is the exact kind of concern worth centralizing
The "3-level depth limit" seems more about tooling convenience than a real architectural blocker
Thanks again for your thoughtful reply. Would love to hear if others have used similar patterns — especially in enterprise Appian+DMS scenarios.