Antipatterns: Solution Design Mistakes to Avoid in Appian

Low-code development platforms, such as Appian, make creating applications faster and easier compared to common high-code development languages. That said, low-code solution design mistakes (or ‘antipatterns’) can reduce coding efficiency and hinder solution scalability and performance. Appian defines an antipattern as a suboptimal solution design to a common problem. A recent survey of Appian’s Customer Success team identified 10 antipatterns most frequently observed in customer solutions (see graph below).

Activity Chains

Activity Chains

Applying Activity Chains Across Many Nodes In A Process Model

Activities in a process model instance are prioritized and processed on a first-in, first-out basis alongside all other process activities occurring on the platform. When activity chaining is used, the chained activity is given a higher priority in the processing queue. Chaining too many activities together therefore takes processing resources away from other activities on the platform for longer periods of time, and users or systems may experience degraded performance.

Appian’s Recommendations:

  1. Only chain when not chaining would negatively impact user experience. For example, between two user input tasks completed by the same person, or between a user input task and an important write to the database that would change the context of the page they will be returned to.
  2. Shorten long process chains by combining transactions into the smallest number of nodes or by moving transactions that can be done asynchronously outside of the chain.
  3. Avoid chaining activities across sub processes or time consuming system nodes like integrations.

Example:


In this model, nearly all steps in the process are activity chained. Chaining between the Start Node and the “Review Request” user input task provides no benefit to the user and simply ties up precious process resources. A different user will be completing the second user input task, so there is no difference in user experience between chaining and not chaining, and therefore this flow should not be chained.

The top flow, however, may need to be chained. If the outcome of the “Write Maintenance Record” node will impact what is seen when the user returns to the page they launched the action from (such as the status of the request), then you may consider chaining this path. Otherwise, this should not be chained either. 

Batch Size

Defaulting Paging Batch Size to -1 or Max Size

To access data, a designer may leverage a!queryRecordType or a!queryEntity to query against records or the database, respectively. Both functions have a required parameter, “pagingInfo”, in which the designer sets the startIndex and batchSize. For dynamic data sets, there may be a temptation to set the batch size to the maximum allowable value for record queries, or to -1 for entity queries. There are times where this is a warranted and justified design decision, but if the correct design thinking has not been applied, it can bring too much data into memory, causing slow performance and a poor user experience. Worse, in the unfortunate event that a query’s filters have not been correctly applied, it could return the full dataset, which can cause extreme slowness and, in some cases, even outages.

Appian’s Recommendations:

  1. As an emergency brake, set an upper bound that is well above the threshold that would seem appropriate for the returned dataset, but not so high that returning that amount of data could cause serious performance issues.
  2. Do not use the max batch size of -1 unless there is a very good reason to do so, and typically only in situations where processing is happening asynchronously (i.e. a user is not involved) or during off-hours.

Example:

In this use case, a query returns service requests based on id or request type. If you pass in an id, only one item should be returned, since id is the primary key. If you pass in a request type, many rows can be returned. If you forget to pass in either variable, and ignoreFiltersWithEmptyValues is set to true, the query will return the whole dataset. Given the size of this data, returning the full dataset produces this error, which a user would see on the front end as a pink box error.

In this case, you’d likely want to either cap the batchSize at the maximum allowable value, or pass in paging info if this query could be used to populate a grid, which should have a batch size based on the number of rows being displayed to the user.
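These recommendations can be sketched in expression code. The record type, field names, and rule inputs below are hypothetical, and the cap value is an illustrative assumption:

```
/* Hypothetical record type and rule inputs, for illustration only */
a!queryRecordType(
  recordType: recordType!ServiceRequest,
  filters: a!queryLogicalExpression(
    operator: "AND",
    filters: {
      a!queryFilter(
        field: recordType!ServiceRequest.fields.id,
        operator: "=",
        value: ri!id
      ),
      a!queryFilter(
        field: recordType!ServiceRequest.fields.requestType,
        operator: "=",
        value: ri!requestType
      )
    },
    /* If both inputs are null, all filters are ignored and the
       whole dataset is eligible to return - hence the cap below */
    ignoreFiltersWithEmptyValues: true
  ),
  pagingInfo: a!defaultValue(
    ri!pagingInfo,
    /* Emergency brake: well above the expected result size,
       but far below the platform maximum */
    a!pagingInfo(startIndex: 1, batchSize: 500)
  )
)
```

When the query backs a grid, the caller supplies ri!pagingInfo sized to the rows on screen; otherwise the capped default applies.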

Inflexible Processes

Long-Lived Process Instances

To be very clear, it is a common business need to implement business workflows that take a quarter, a year, or even multiple years to complete. This only becomes an issue when a single process model is used to span that entire business process.

For example, developers sometimes attempt to mirror application processes and business processes: if a business process takes 6 months to complete, the application’s process also lasts 6 months. Long-lived processes consume more memory and are inflexible when it comes to changing application needs. They can also hinder future maintenance or updates to the process.

Appian’s Recommendations:

  1. Break long processes down into smaller, shorter processes for increased flexibility. Natural break points usually align with stopping points where human actors need to take prolonged action outside of the application (e.g., training, completion of a project, etc.).
  2. Take a records-first approach by centering workflows around data and records, instead of focusing on BPM workflows that mimic real life.

Example:

Consider an example of a long-running process. Suppose each individual user input task can take up to 2 weeks to complete, and there is also a 3-week wait timer in the middle of the second flow that adds additional time. In total, this process could take up to 11 weeks to complete, meaning it will be consuming memory on the platform for at least 11 weeks, and potentially longer depending on the archiving settings configured on the process model.

Instead, consider breaking out each step of the workflow into a separate process model and calling them via related actions. This means that users will only activate the task when they are prepared to work on it, and the process instance will only live as long as it takes to complete the user input task. 

Complex Logic

Adding Complex Logic Directly into PM Nodes

Process models provide a lot of flexibility to configure entire complex workflows directly within the confines of the process modeler. That said, a process model node is not always the best place to keep your workflow’s important business logic. Adding important logic directly to inputs/outputs of nodes or into script tasks imposes limitations on maintenance and the ability to change in-flight processes.

Firstly, as a best practice, important business logic should be encapsulated in expressions so that it can be referenced, reused, and maintained more easily. Secondly, if for whatever reason there is a bug in the business logic that has been added directly into PM nodes, it is a much harder process to fix that logic in live processes than to simply update the expression. 

Appian’s Recommendations:

  1. Do not put complex business logic directly into process model nodes, and instead reference expressions that contain that logic.
  2. Only put simplistic logic into PM nodes’ input/outputs, such as basic setters and getters that allow the process to proceed as expected.

Example:

We have a multi-step process model. Step 1 is a user input task, step 2 is an XOR, and the XOR branches down multiple flows based on complex business logic configured directly into the gateway. There are 1000 live instances currently sitting on step 1, the user input task. A bug is found within the logic of the XOR. When a process launches, it launches based on the process model version that is currently published. Simply fixing the logic in the process model and re-publishing will not address the 1000 live instances that are about to hit the bug once the user input task is completed. If this logic had been encapsulated into an expression rule, simply updating the expression rule would allow all 1000 live instances to proceed appropriately.
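One way to apply this, sketched with hypothetical rule and variable names, is to keep each XOR gateway condition to a single expression rule reference so the branching logic lives in the rule, not the node:

```
/* Condition on one outgoing path of the XOR gateway.
   All branching logic lives in rule!XYZ_getApprovalPath,
   so fixing a bug there immediately corrects the behavior of
   all in-flight instances that have not yet hit the gateway. */
rule!XYZ_getApprovalPath(request: pv!request) = "ESCALATE"
```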



Querying in a Loop

Querying in a Loop

A common mistake among novice developers is to query for data inside a loop, such as a!forEach() or reduce(). This puts unnecessary strain on the application because it does not leverage what each component is best at: a database is very good at taking query parameters that match multiple rows and returning them all at once, so each round trip to the database carries a fixed cost that often exceeds the cost of the query itself, and issuing many small queries multiplies that round-trip cost. In addition, each time a query is executed, database connections, engine threads, CPU, and other resources are dedicated to that action and unavailable for other activities.

Appian’s Recommendations:

  1. Parameterize your query rules so that you can properly execute those queries without a loop.

Example: 

Here is the wrong way to implement a query.

…and here is the correct way.
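In expression terms (record type and rule input names are hypothetical), the difference looks like this:

```
/* Antipattern: one database round trip per loop iteration */
a!forEach(
  items: ri!requestIds,
  expression: a!queryRecordType(
    recordType: recordType!ServiceRequest,
    filters: {
      a!queryFilter(
        field: recordType!ServiceRequest.fields.id,
        operator: "=",
        value: fv!item
      )
    },
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 1)
  ).data
)

/* Better: a single parameterized query using an "in" filter */
a!queryRecordType(
  recordType: recordType!ServiceRequest,
  filters: {
    a!queryFilter(
      field: recordType!ServiceRequest.fields.id,
      operator: "in",
      value: ri!requestIds
    )
  },
  pagingInfo: a!pagingInfo(startIndex: 1, batchSize: length(ri!requestIds))
).data
```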

Returning All Fields

Returning All Fields on a Query When Not Needed

There are sometimes use cases where you are executing a query for the sole intent of grabbing a subset of the fields from that record or table. You may be inclined to return the full dataset and then index out the appropriate fields, but that means you are pulling more data than is actually useful into memory. To avoid wasteful memory consumption, leverage a selection parameter to set the fields that you need.

Appian’s Recommendations:

  1. When querying data, only return data that is absolutely necessary to what you are trying to accomplish in your application.

Example:

Here is the wrong way…

…and here is the right way.
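A sketch of the difference in expression code (record type and field names are hypothetical):

```
/* Antipattern: pull every field into memory, then index one out */
index(
  a!queryRecordType(
    recordType: recordType!ServiceRequest,
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 100)
  ).data,
  recordType!ServiceRequest.fields.status,
  {}
)

/* Better: use the fields parameter to return only what is needed */
a!queryRecordType(
  recordType: recordType!ServiceRequest,
  fields: {recordType!ServiceRequest.fields.status},
  pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 100)
).data
```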

Synchronous Processing

Using Synchronous Processing When it’s Actually Asynchronous

Synchronous processing means that operations must be completed in sequence, one at a time. Asynchronous processing means multiple operations can happen simultaneously, so one task can kick off while another is completing. This can be discussed through two lenses: 1) How can I properly leverage parallel processing when activities do not depend on one another to complete? 2) How do I ensure I do not make a user wait before moving on to another action when there is no reason to do so?

How you launch a process, chain the process, and how you leverage subprocesses requires design thinking to determine whether they should be synchronous or asynchronous. 

Appian’s Recommendations:

  1. Unless a user needs to wait for completion of a process, use asynchronous methods to start or complete a process.
  2. Only activity chain up until the point required to support user experience (i.e. between user input tasks or to ensure particular activities, such as writes, complete before going back to their prior page).
  3. Use a startProcess node instead of a subprocess if leveraging MNI and processing is asynchronous.
  4. Most headless actions should be considered inherently asynchronous, at least from the perspective of the user, and should leverage an asynchronous method for launching those processes.
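Recommendation 1 can be sketched as follows, with a hypothetical process model constant and rule inputs: a headless follow-up process is launched from a button's saveInto via a!startProcess, so the user never waits on it.

```
a!buttonWidget(
  label: "Submit",
  submit: true,
  saveInto: {
    /* Fire-and-forget: the notification process runs
       asynchronously while the user moves on */
    a!startProcess(
      processModel: cons!XYZ_SEND_NOTIFICATIONS_PM,
      processParameters: {requestId: ri!requestId},
      onSuccess: a!save(ri!launched, true),
      onError: a!save(ri!launchFailed, true)
    )
  }
)
```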

Example:

Here is an example of a process model that has both synchronous and asynchronous concepts. Only one node can execute at a time, except in the middle section, where multiple writes can occur in parallel. This means the writes happen asynchronously, which increases the speed at which the process completes and is possible because the writes are not dependent on each other.

Logic Overload

Process Models with Lots of Logic

Process models have a rich feature set, but that doesn’t mean every piece of dynamism or logic should exist within the process. One reason is that you do not want process models to become too large, as they will then require more memory to run. Another is maintenance and backward compatibility - as noted in antipattern #4, process model instances are based on the published version at the time of launch. If the logic of the process needs to change, in-flight processes will need to be addressed, rather than simply updating the core interfaces or rules the process references.

Appian’s Recommendations:

  1. Minimize the number of nodes in a process model.
  2. Encapsulate logic into rules and interfaces, rather than in the process model.

Example:

Here is a process model with logic that would be better served outside of the process model. In this use case, based on the user’s group they would see a different version of an approval interface. 

Rather than put that logic in a process model, it is better to do this directly within the interface itself and then dynamically show the appropriate interface content to the user. This will be more scalable and maintainable, and will reduce the need to branch workflows frequently throughout your process model, reducing memory footprint.
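A minimal sketch of that interface-side branching, assuming hypothetical group constants and section rules:

```
a!localVariables(
  /* Decide once, inside the interface, which variant to show */
  local!isManager: a!isUserMemberOfGroup(
    username: loggedInUser(),
    groups: cons!XYZ_MANAGERS_GROUP
  ),
  if(
    local!isManager,
    rule!XYZ_managerApprovalSection(request: ri!request),
    rule!XYZ_standardApprovalSection(request: ri!request)
  )
)
```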

Excessive Rule Inputs

Excessive Rule Inputs

Have you ever opened an expression rule or interface, only to find a never-ending list of inputs? If you have, it is a tell-tale sign that the object is trying to do too many things. Often, in trying to make a rule more generic, a designer adds parameters so that it can be used in many different scenarios. As the use cases for that rule proliferate, so too do the edge cases and the need for more parameters. Once the list of rule inputs starts requiring a scroll bar, it may be time to split the rule into multiple rules based on common patterns between use cases. Another path forward may be to leverage a helper input, either of its own CDT type or a map type, that contains many elements within it. For example, a single rule input called “supplementalData” could be a map containing an array of other data types and data.

Appian’s Recommendations:

  1. Do not force components to be reusable for the sake of reusability. Analyze whether the use case is truly different enough to justify splitting logic into multiple rules that require fewer parameters.
  2. Consider using “helper” rule inputs that contain multiple data elements instead of passing them all individually.
  3. If rule inputs can be combined into a single rule input in a meaningful way, consider doing so to decrease maintenance costs.

Example:

The below example doesn’t have too many rule inputs yet, but given some of the design decisions here that could happen fairly quickly.

Notice that interface components, in this case columns, are being shown/hidden based on individually set rule inputs. If new columns or interface components are added, then a new rule input would need to be added for every new component if this design was continued. This could quickly proliferate, and result in many unnecessary rule inputs.

Instead, I can pass in a single rule input that holds all the sections that should be dynamically shown to the user.
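For example (all names here are hypothetical), a single map-typed input can replace a growing list of per-column booleans:

```
/* Inside the grid interface: read display flags from one input,
   defaulting to true when a key is absent */
a!gridColumn(
  label: "Status",
  value: fv!row[recordType!ServiceRequest.fields.status],
  showWhen: index(ri!displayConfig, "showStatus", true)
)

/* Caller passes one consolidated input instead of many booleans */
rule!XYZ_requestGrid(
  displayConfig: a!map(showId: true, showStatus: false)
)
```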

Screen Size Considerations

Not Designing for All Applicable Screen Sizes

The interface object has a lot of power and flexibility to create complex, rich interfaces that provide an excellent user experience. There are many interface layouts, components, and configurations that can be designed, and as a result of this flexibility designers need to be mindful of how interfaces look in different aspect ratios and page sizes. Simple tweaks to interfaces can be the difference between a wonderful UX in both desktop and mobile, and a terrible one.

Appian’s Recommendations:

  1. Understand the current and future usages of the application, and common screen sizes for people within the user base.
  2. Leverage the tools in the interface designer to view interfaces in different desktop width, tablet, and mobile configurations.

Example:  

Be sure to click all options in the section highlighted with the arrow, and be particularly mindful of those that you know your customers will be using.