If you have a database table whose data multiple applications need access to...
1. Do you create a single Record Type which multiple applications use/share? In my opinion this tightly couples your applications.
2. Do you create separate Record Types per application so they can evolve separately? In my opinion this allows each application to evolve its Record Type independently, and the only tight coupling is the shared database table structure.
What are your thoughts and recommendations on the two approaches?
What exactly does "need access to" mean?
Both approaches can be valid. My thoughts:
- tight vs. loose coupling
- duplication of data
- uniformity of the data structures
- sharing of record actions
In general, I think that shared data should not be duplicated. But then, the ownership of that data/record/actions/processes needs to sit outside of a project.
Thanks for your reply. By "need access to", I meant the ability to read certain parts of the data (not modify it). E.g. perhaps there are 20 fields/columns, but one application needs to read 5 fields, another 2 fields, and another all 20.

I would advocate that a single application owns the data and contains all the objects which facilitate the modification of that data.
However, if another application needs access to read parts of that same data, they could either:
1. Directly read the table(s) in that application's schema, or
2. Receive a copy of that data within their own application's schema (duplication).
If someone went with option 1, should they also share the original application's Record Types, or rather create their own?
If you're only interested in reading the data (i.e. not using Record Actions, just fetching data to be viewed), then I would advocate a domain-based model where a single domain/app "owns" the data and exposes services (Web APIs) for other applications to read it. This decouples the apps, the domain retains ownership for managing the i/o of the data, and the separate apps can evolve independently. You'll need a versioning approach to manage changes to the exposed services, but that's no more work than you would face if each app had its own Record Type.
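To make that concrete, here's a rough sketch in plain Java of what a consuming application (or any client outside Appian) might do against such a read-only service. The host, endpoint path, and "v1" versioning scheme are made up for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadOnlyClient {
    public static void main(String[] args) throws Exception {
        // The owning domain app exposes a versioned, read-only endpoint.
        // Host, path, and versioning scheme here are illustrative only.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.appiancloud.com/suite/webapi/v1/orders/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The consumer only ever reads; writes still go through the owning app.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```

Whether the caller is another Appian app or an external system, the contract is the same, which is part of the appeal of this model.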
If your organization is capable of centralizing ownership and maintenance, I recommend creating a single shared Record Type.
If you cannot guarantee that, I recommend letting every project create whatever it needs, based on that table.
Another option could be to publish that centralized record via a Web API. Then others can fetch data as needed. That adds overhead, and you need a good reason for going that way.
Thanks Stewart Burchell, this is also an approach I considered, since I come from a Java/microservices background, and then your Web APIs could also be consumed by applications outside of your Appian platform. However, if all your applications are developed ONLY on the Appian platform, would you still suggest exposing the services via Web APIs? Isn't there a large overhead? I would have assumed Appian had an internal "app integration" mechanism instead of direct data access.
I have done exactly that in the past, and I think the principles still apply. For example, I've created an Appian Portal as a separate application and had it exchange information with a back-end Appian application via Web APIs.
Awesome, thank you. I like what I'm hearing.

Other than Web APIs, do you know of any other ways to handle internal application integration, i.e. integration between different Appian applications?

E.g. for asynchronous integration, I have thought of using Message Events: a Master Data application creates a new record, its process model fires off a message event containing the data of that record, and other applications' process models listen for those message events, read them, and write the relevant data to their own application tables.
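In plain Java terms, I'm picturing something like this (just a sketch; RecordCreatedEvent and the listener are illustrative names, not Appian types):

```java
import java.time.Instant;

// Sketch of the payload-carrying event the Master Data app would publish.
// The type and its fields are hypothetical, not an Appian API.
record RecordCreatedEvent(long recordId, String name, String status, Instant createdAt) {}

// Each listening application's process model would, in effect, do this:
class ListeningApp {
    void onRecordCreated(RecordCreatedEvent event) {
        // Read only the fields this app cares about and persist them
        // into its own application tables.
        saveToLocalTable(event.recordId(), event.status());
    }

    void saveToLocalTable(long recordId, String status) {
        // ... INSERT/UPDATE into this app's own schema (omitted).
    }
}
```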
My applications are driven by a business process. And that process knows what should happen. A scenario where something creates new data outside of a business process flow, and then something else decides it needs to be notified, seems to come from a microservices approach where you want a messaging infrastructure for communication.
That's just my experience and YMMV.
Message Events - I believe you can operate in two different ways here. The first is as you describe - the event carries the payload. One possible problem is that, because you're processing asynchronously, the payload may be out of date by the time an application consumes it. If that's high risk/impact for you, then the second way is to treat the event as a "pointer" to the payload and have the consuming app fetch the data in real-time/synchronously when it consumes it. Another problem with the first pattern is the size of the payload; the second pattern can fetch the data in chunks, which makes it easier to handle.
For the second pattern the event may point back at the original application so you fetch the data via APIs. But it could also point at, say, a database table that acts as a "queue".
And speaking of queues, you can use classic Message Queues, which you can also design around the patterns already described: payload-carrying or pointer-carrying messages.
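As a rough Java sketch of the two message shapes described above (all type and field names are made up for illustration, not Appian APIs):

```java
import java.time.Instant;

// Pattern 1: the event carries the payload itself. Risk: the data may be
// stale by the time an asynchronous consumer processes it, and large
// payloads can be unwieldy.
record PayloadEvent(long recordId, String name, String status, Instant sentAt) {}

// Pattern 2: the event is only a pointer; the consumer fetches the current
// data synchronously (e.g. via the owning app's Web API) when it processes
// the event, optionally in chunks.
record PointerEvent(long recordId, Instant sentAt) {}

interface MasterDataReader {
    // Hypothetical read client wrapping the owning app's Web API.
    MasterRecord fetchById(long recordId);
}

record MasterRecord(long recordId, String name, String status) {}

class PointerEventConsumer {
    private final MasterDataReader reader;

    PointerEventConsumer(MasterDataReader reader) { this.reader = reader; }

    void onEvent(PointerEvent event) {
        // Fetch the data at consumption time, so it reflects the latest state.
        MasterRecord current = reader.fetchById(event.recordId());
        // ... write the relevant fields into this app's own tables (omitted).
    }
}
```

The trade-off: the first pattern is simpler and self-contained, while the second always sees current data at the cost of an extra synchronous fetch per event.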
Thank you Stewart Burchell for your input/insight, much appreciated.