This methodology is intended to provide an entry point for projects of all sizes and types, while identifying opportunities to extend and customize the performance testing effort based on specific project goals and constraints.
Note: some activities are based on Performance Testing Guidance for Web Applications.
To simplify project estimation and scheduling, Appian has developed a performance testing task plan. You can use this plan as a starting point and extend it to address specific performance objectives.
Validate your performance test plan with the Performance Testing Initiation Best Practices Checklist.
JMeter Analysis Worksheet
Appian Log Analysis Worksheet
Multiple stakeholders are responsible for performance testing. Ownership of the respective responsibilities may differ based on the composition of your team; what’s important is that every area is owned by someone on the team.
We’ve provided a sample presentation to leverage as a Sprint 0 artifact once you've identified the stakeholder responsibilities and the non-functional requirements.
Business Stakeholder
Appian Performance Test Architect
IT Team
Performance Test Engineer
Appian Tech Lead
Planning and designing performance tests requires identifying and modeling usage scenarios, providing test data for those scenarios, and specifying the metrics to be collected during test execution. This process can be extremely complex and time-consuming. The following guidance provides one simple approach that can be extended based on project goals and constraints. Teams with extensive experience in performance test design may choose to develop their own approach.
Identify unique application user roles and estimate the number of concurrent users accessing the application on average and during the hour of peak usage. For example, throughout the day there will be about 20 Sales Reps accessing the system at any given time, but during the 10am-11am peak hour there will be 100. Identify how user load will grow over time in response to business changes.
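To translate user estimates like these into a workload target, a simple closed-workload calculation can help. The sketch below uses illustrative figures that are not tied to any particular project:

```python
# Illustrative load-model arithmetic; the figures are assumptions, not project data.
# For a steady, closed workload: iterations per hour is approximately
# concurrent_users * 3600 / scenario_duration_s, where the scenario duration
# includes both user actions and think time.

def iterations_per_hour(concurrent_users: int, scenario_duration_s: float) -> float:
    return concurrent_users * 3600.0 / scenario_duration_s

# Example: 100 Sales Reps at peak, each completing one scenario roughly every
# 6 minutes, implies about 1,000 scenario executions during the peak hour.
print(iterations_per_hour(concurrent_users=100, scenario_duration_s=360))  # 1000.0
```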
Identify potential usage scenarios and the associated user role using the acceptance criteria, existing production usage, business use case documentation, functional test designs, and ad-hoc Appian features (eg: post a message, search News). Consider scenarios that are initiated by users and scenarios that are initiated by the system (eg: scheduled processes).
Estimate how often each scenario will be executed in production:
Prioritize the scenarios using the following guidelines:
Performance tests simulate production workloads, and more accurate simulations produce more accurate results. Modeling an insufficient portion of the total workload (less than 80%) will significantly reduce the predictive value of performance testing, so select enough scenarios to cover at least 80% of total user activity.
Document the sequence of actions involved in each scenario or reuse existing test scenario documentation (eg: UAT scripts). For scenarios with multiple paths either select the most likely or most important path or create separate scenarios for each path. Between each user-initiated action, include a delay to simulate “think time” spent reading or entering data. Incorporate a random element by specifying the average delay and a standard deviation.
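A minimal sketch of randomized think time follows; the average and standard deviation shown are placeholders, and most load tools provide this behavior directly (JMeter, for example, ships with a Gaussian Random Timer):

```python
import random

# Hedged sketch: think time drawn from a normal distribution (average plus standard
# deviation), clamped to a minimum so a delay is never negative. The values below
# are placeholders, not recommendations.
def think_time(mean_s: float = 15.0, std_dev_s: float = 5.0, minimum_s: float = 1.0) -> float:
    return max(minimum_s, random.gauss(mean_s, std_dev_s))

print([round(think_time(), 1) for _ in range(5)])  # e.g. [12.4, 19.8, 9.7, 16.1, 14.3]
```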
Design a set of performance tests at different loads. For example:
Validate the test design by comparing the number of activities to be performed in the test against business expectations. Revise the scenario and user load estimates until the test design closely approximates the expected production load.
Nearly all test scenarios require user input (eg: login credentials, task form values, message text). Using the same inputs for every instance of a test scenario can cause unexpected performance problems (false positives) or cause other performance problems to go unnoticed (false negatives). Add variability to a scenario using the following procedure:
Record a sample set of valid values for each input
Related inputs (eg: username and password) should be recorded together
Example test data for creating a Case and collecting three data points: title, description, and priority (notice the various edge cases and the different users executing the action):
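One way to produce such a data set is to script it. The sketch below is hypothetical, with illustrative column names, test accounts, and values; the resulting CSV can feed a data-driven test, for example through JMeter's CSV Data Set Config:

```python
import csv
import random

# Hypothetical generator for the "create a Case" scenario inputs. Column names,
# test accounts, and values are illustrative; real samples should come from
# business-representative data.
users = ["sales.rep1", "sales.rep2", "case.manager1"]   # assumed test accounts
priorities = ["Low", "Medium", "High"]
titles = [
    "Billing discrepancy",
    "Very long title " + "x" * 200,      # edge case: near the field length limit
    "Título con acentos y ñ",            # edge case: non-ASCII characters
    "O'Brien & Sons <escalation>",       # edge case: quotes and markup characters
]

with open("case_inputs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "title", "description", "priority"])
    for i in range(100):
        writer.writerow([
            random.choice(users),
            random.choice(titles),
            f"Generated description #{i}",
            random.choice(priorities),
        ])
```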
Application performance often depends on how much data is already in the application. Unless the test scenarios are designed to add data during the test, prepopulated data will need to be loaded before each test run. Consider the following types of data that may impact the performance acceptance criteria:
Determine how prepopulated data will be generated or loaded. For example:
If possible, take a snapshot of the environment after test data has been pre-populated and use that to restore a clean state before future test runs.
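As one hedged example of generating prepopulated data, the sketch below writes bulk INSERT statements with placeholder table and column names; loading data through the application's own interfaces or import tools may be preferable in practice:

```python
import random
from datetime import date, timedelta

# Hypothetical sketch: write bulk INSERT statements to prepopulate case data before
# a test run. The table and column names are placeholders; loading through the
# application's own import mechanisms may be more appropriate.
start = date.today() - timedelta(days=365)
statements = []
for i in range(10_000):
    created = start + timedelta(days=random.randint(0, 364))
    priority = random.choice(["Low", "Medium", "High"])
    statements.append(
        f"INSERT INTO case_data (title, priority, created_on) "
        f"VALUES ('Historical case {i}', '{priority}', '{created.isoformat()}');"
    )

with open("prepopulate_cases.sql", "w", encoding="utf-8") as f:
    f.write("\n".join(statements))
```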
Metric data can come from a variety of sources (eg: client testing tools, server monitoring tools, Appian logs) and each source can often be measured and recorded in several ways. To ensure consistency across all tests it is important to document the specific definition and collection procedure for each metric. Metric documentation can include the following information:
Verify that the documented metrics will be able to determine whether the performance acceptance criteria are being met. Additional metrics for individual system components (eg: web service, query rule) can be useful when diagnosing performance problems.
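For illustration, a metric definition could be recorded in a structured form like the following; the field names and values are assumptions rather than a prescribed schema:

```python
# Illustrative metric definition record; the field names and values are assumptions,
# not a prescribed schema. Documenting each metric this way helps keep collection
# consistent from one test run to the next.
metric_definition = {
    "name": "Submit Case form response time",
    "definition": "Elapsed time from form submission to the last byte of the response",
    "source": "JMeter results file (elapsed column)",
    "unit": "milliseconds",
    "aggregation": "90th percentile per 5-minute interval",
    "acceptance_threshold_ms": 3000,  # placeholder value
    "collection_procedure": "Export the aggregate report after each run and record it in the analysis worksheet",
}
```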
The test environment should be set up as part of the overall configuration management plan, typically by IT (or the Appian Cloud team). Special consideration should be given to any monitoring tools that need to be installed or configured. Make sure that the system time is correct on all servers and client machines. If Appian logs are used as a source of metric collection, then make the appropriate changes to the log4j settings.
The application to be tested should be deployed to the environment using the same procedure to be used in production. Once the application is deployed, run several sample tests (smoke test) to confirm that the environment and application are configured properly and that the expected metric values are being collected.
Implementation of the test design is closely tied to the available tools (eg: JMeter). It typically involves recording test scripts based on actual user interaction and then modifying those scripts to add variable inputs and validation checks. Tool-specific techniques are not covered here.
Test development is an iterative build-verify cycle. Tests should be run frequently using a single user to confirm that they interact correctly with the application and that the expected metric values are being generated.
Recorded test scripts contain numerous references to system identifiers that cannot be reused (eg: task ids). Locate and replace system identifiers with variables. Extract the runtime values from earlier server responses (typically using regular expressions) and update the variables.
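The following is a conceptual sketch of that correlation step, with hypothetical URLs and a hypothetical response pattern; a real script would rely on the load tool's own extraction mechanism (such as a regular expression extractor):

```python
import re
import requests

# Conceptual sketch of correlation: extract a runtime identifier from one response and
# reuse it in the next request. The URLs, parameter names, and regular expression are
# hypothetical; a real script would use the load tool's own extractors.
session = requests.Session()

task_list = session.get("https://example.com/app/tasks")        # placeholder URL
match = re.search(r'"taskId"\s*:\s*"(\d+)"', task_list.text)    # placeholder pattern
if not match:
    raise AssertionError("No task id found; the previous step may have failed")
task_id = match.group(1)

# Reuse the extracted identifier instead of the hard-coded value that was recorded.
task_form = session.get(f"https://example.com/app/tasks/{task_id}/form")
task_form.raise_for_status()
```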
If the test design calls for variable data, locate the static references in the recorded requests (eg: form inputs) and replace them with variables. Populate the variables with dynamic values from a lookup table, random generator, or other data source.
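A minimal sketch of that substitution, assuming the lookup table is the CSV produced during test data design and using a placeholder endpoint and field names:

```python
import csv
import random
import requests

# Hedged sketch of replacing recorded static values with variable data drawn from a
# lookup table (the CSV generated earlier). The endpoint and field names are
# placeholders for illustration only.
with open("case_inputs.csv", newline="", encoding="utf-8") as f:
    inputs = list(csv.DictReader(f))

row = random.choice(inputs)
response = requests.post(
    "https://example.com/app/cases",        # placeholder endpoint
    json={
        "title": row["title"],
        "description": row["description"],
        "priority": row["priority"],
    },
)
assert response.ok, "Case creation did not return a success response"
```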
Build in assertions to verify that the expected response is returned. For example, verify the title of the next task in an activity chain. If it doesn't match, the previous submission may have failed validation or followed an unexpected process path. Additional examples include:
Additional considerations when implementing tests include:
Verify that the test environment, application, and monitoring tools are functioning properly by running a small set of tests (smoke test). Remove existing data (eg: processes, database records, documents, log files) to restore the environment to a clean base state. If the test design calls for prepopulated data, add it after old data has been removed.
Run the tests and monitor their execution. Watch for errors that may indicate a problem with the tests. For example:
If the tests are not going to provide valid results, it is better to halt them early.
Once halted, verify the stability of the test scenarios and infrastructure before performing application-side troubleshooting. If the test results are invalid because of a systemic problem, there may be little value in further analysis.
Record and store the raw test results and observations for later reference. Make sure to associate the results with a specific version of the test implementation.
Compare the performance test results against the performance acceptance criteria. If there are any failures, analyze the underlying metrics to determine the root cause. For example:
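As one illustrative check (assuming JMeter's default CSV results format and a placeholder response-time threshold), response-time percentiles can be computed per request label and compared against the acceptance criteria:

```python
import csv
from collections import defaultdict

# Illustrative analysis sketch: compute the 90th-percentile response time per request
# label from a JMeter CSV results file and compare it against a placeholder threshold.
# The file name, column names, and threshold are assumptions; adjust them to the
# actual results configuration and acceptance criteria.
THRESHOLD_MS = 3000
samples = defaultdict(list)

with open("results.jtl", newline="", encoding="utf-8") as f:
    for record in csv.DictReader(f):
        samples[record["label"]].append(int(record["elapsed"]))

for label, values in sorted(samples.items()):
    values.sort()
    p90 = values[max(0, int(len(values) * 0.9) - 1)]
    status = "PASS" if p90 <= THRESHOLD_MS else "FAIL"
    print(f"{label}: p90={p90} ms ({status})")
```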
Validate that the tangible outcomes of a test align with expected values by analyzing the number of processes started or completed, service calls executed, records created, etc. For example:
Develop recommendations to resolve performance concerns based on this investigation. For example:
Generate a stakeholder report that includes a summary of acceptance criteria results and recommendations for addressing any failures. Represent results visually and highlight targets or thresholds. Have the details of any root cause analysis available to support your recommendations. Address any impacts on the testing process or performance results due to limitations in the test environment, tools, or testing schedule.
Revise the test design to incorporate any knowledge gained from test execution and analysis. For example, if the stress conditions did not cause the application to reach capacity, try increasing the user load or adding more prepopulated data.
Retest the application after the recommendations have been implemented to confirm performance improvements. Include trend analysis in subsequent stakeholder reports.