User Acceptance Testing Overview

This article provides guidance to application delivery teams regarding the planning and execution of User Acceptance Testing, commonly known as UAT. Although UAT is typically carried out by customer resources, it is important that all team members understand the role of UAT in ensuring a successful implementation. This document should be used by Appian project leads to provide guidance to the customer — preferably during Sprint 0 — about the value and importance of UAT. This will help ensure that UAT is given sufficient attention during the execution of the project.

What is UAT

UAT is a distinct test activity from the testing of individual stories, which is carried out to verify that the acceptance criteria defined in a story have been met. UAT has a broader perspective of end-to-end business scenarios, which will normally be supported by the features provided in a group of related stories rather than in a single story.

For example, an application in the financial services domain may involve an enquiry from a customer, resulting in a proposal to provide a product, which on acceptance by the customer leads to the opening of an account. Numerous user stories would have been created in a product backlog for this scenario, but UAT is concerned with the overall process flow: does the application appropriately handle all expected (and unexpected) scenarios, does it allow the required outcome to be achieved, does it provide a good user experience for both the customer and application users?

UAT validates that an application is fit for purpose, and not just ‘fit for specification’. To be effective, UAT should be carried out by a representative cross-section of the actual users of the application, not by specialized test resources or by business analysts, to ensure that a real-world perspective is brought to the testing activity.

Why is UAT Necessary

With the above definition of UAT in mind, it can be seen that UAT brings a number of benefits:

  • UAT can uncover requirements gaps. Stories deliberately break down requirements into small units of work to support agility and velocity, but when a narrative thread is constructed from a sequence of stories it is quite possible to identify scenarios that have not been addressed.
    • For example a prospective customer applies for a loan, an offer is prepared and awaiting approval, but meanwhile the prospect makes contact and requests a higher loan amount. The application may not support the ability to change the offer before it has been approved.
  • Different stakeholders have different perspectives and different expectations. When end users test the features in an application it is likely they will provide different feedback from that provided by the team members performing story-based testing.
  • UAT testers become very familiar with the features of a new application, gaining skill and confidence using the application before go-live. This puts them in a good position to train other users prior to application rollout, following the widely accepted train-the-trainer model. 
  • UAT tests real-world scenarios with real-world data. Applications behave differently according to the data they consume, and the test data created to support story-based testing is often not representative of real-world data.
  • It is significantly easier to address defects (including misaligned requirements) while the project team is still fully engaged, prior to go-live, than it is after an application is in use. Governance of a live application typically imposes a much less agile change cycle than is possible during development, and changing an application often becomes more complex once live data exists.
  • The conduct of UAT helps identify what changes need to be made to operational procedures so that the new application can be introduced with minimal disruption to business as usual. UAT testers can thus provide a valuable input to change management activities.

When Should UAT Be Performed

In the old world of waterfall projects, UAT was usually seen as the last phase of the project before an application went live. This is clearly not aligned with Agile principles. If UAT uncovers new requirements (which is quite possible) only towards the end of the project, there is little room to re-prioritize the product backlog to take account of these requirements.

UAT should therefore be carried out on an ongoing basis, starting as soon as enough features have been developed (i.e. are ‘Done’, including completion of story-based testing) to support an end-to-end business scenario.

To prepare for this ongoing test effort, the Product Owner should appoint a regular team of end users that spend a part of their day job testing new features, both on an ad-hoc basis on completion of each sprint and as a more intensive activity when a discrete UAT phase is included in the project schedule. This same UAT team should participate in all sprint showcases so that they are aware of the new features being delivered and can formulate appropriate test scenarios.

Where Should UAT Be Performed

Typically, there is a testing environment shared by dedicated testers on the development team (story-based testing), and by end users conducting UAT. Since the recommendation is that both types of testing take place in parallel over the duration of a project, in a shared environment it is important to define the rules of engagement so that testers do not corrupt each other's tests. This requires ‘logical separation’ of test data, e.g. by agreeing on naming conventions for data that will be used by the two different teams, so that data that is prepared by one team to support a specific test case is not inadvertently consumed and rendered unusable by the other team.
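One lightweight way to enforce this logical separation is an automated check of the agreed naming convention. The sketch below assumes a hypothetical convention in which records created for story-based testing carry a `DEV-` prefix and UAT records carry a `UAT-` prefix; the prefixes and record structure are illustrative only, not part of any Appian API.

```python
# Sketch: flag test records that break a hypothetical team prefix convention.
# The "DEV-"/"UAT-" prefixes and the record shape are illustrative examples.

ALLOWED_PREFIXES = {"dev": "DEV-", "uat": "UAT-"}

def violations(records, team):
    """Return records whose reference lacks the prefix agreed for `team`."""
    prefix = ALLOWED_PREFIXES[team]
    return [r for r in records if not r["reference"].startswith(prefix)]

records = [
    {"reference": "UAT-CASE-001", "owner": "jsmith"},
    {"reference": "ACC-123", "owner": "jsmith"},  # no team prefix: flagged
]

print(violations(records, "uat"))  # -> [{'reference': 'ACC-123', 'owner': 'jsmith'}]
```

A check like this could run as a periodic report against shared reference data, so that each team can see (and correct) records that stray outside its agreed namespace.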

A separate UAT environment is worth considering, particularly for ongoing development programs where multiple applications will be developed over a period of years rather than months. This allows UAT testers to have full control over their test data, and also ensures that they will only be testing application features that have purposely been deployed to this environment, once all story-based testing is completed.

UAT Roles

  • Product Owner: Provides oversight of the UAT activity; communicates the product vision and minimum viable product (MVP) to the UAT team; reviews and prioritizes defects raised during UAT; and determines whether the outcome of UAT means that the application should proceed to go-live, or whether more features need to be added.
  • Test Manager: Defines and communicates to the UAT team the test process and tools that will be used; ensures that a test case suite is defined that provides full coverage of the application features; manages the test team; and tracks the progress of test execution and defect resolution.
  • Tester: Executes test cases, documents their outcomes, and provides feedback on application usability.
  • Test Case Writer: Develops test cases by consolidating acceptance criteria from multiple stories into coherent end-to-end process scenarios; prepares test data to support the design of each test case.

In a small-scale project these roles may be combined: for example, the Product Owner may also act as the Test Manager, and the business analyst who supports story grooming and the definition of acceptance criteria may also serve as the Test Case Writer. It is critical, however, that Testers come from outside the project team, to lend a different perspective, and that they are representative of all the personas that will use the system.

How to Prepare For and Execute UAT

  • Identify UAT testers well in advance of the test activity; ideally as part of project initiation activities during Sprint 0. Ensure they have been given permission to focus sufficient time to perform dedicated testing, e.g. a given number of hours every day for a week, and that arrangements have been made to provide cover for their day-to-day duties during this time. If business constraints make this early commitment of resources impossible, monitor the resourcing issue via the project's risk register.
  • Define the process and tools used to manage UAT. Where will test cases be defined and tracked? Where will defects be raised and tracked? Jira is a natural choice for defect tracking, particularly if it is already being used to manage the product backlog and development tasks. Jira can also be used to define and track the status of test cases, but may be seen as ‘too technical’ by end users. A spreadsheet can be an effective alternative.
  • Ensure UAT testers attend every sprint showcase, so that their understanding of the application grows in parallel with the development of the application.
  • Provide time in the test schedule for developers to remediate issues so that they can be retested. In a short-duration project, UAT may be scheduled over as little as 1 week. Testing should be scheduled for the first half of each day, leaving the second half available for remediation. In a longer duration project, where UAT may be scheduled over several weeks, consider dedicating the first 2-3 days of each week to test activities, with the remainder of the week devoted to remediation activities.
  • Prepare a security matrix that identifies the group membership required by each UAT tester, and use this to prepare user accounts in the test environment.
  • Provide a dedicated location for UAT. Ideally, UAT users are co-located with the project team, so that assistance can be provided without delay when needed, and so that testers are not distracted by their day-to-day responsibilities.
  • Prepare UAT scenarios towards the end of each sprint; some will be incomplete until additional features are developed in subsequent sprints, but there should be some simple scenarios that can be tested. One or more users who are subject matter experts should generate these test cases, and the Product Owner should review them to ensure they are consistent with the product vision.
  • Test cases should be recorded formally in a register so that they can be reviewed and tracked. Excel can be used for this purpose; it isn’t mandatory to use a special-purpose test management tool.
  • Supplement scenario-based testing — which will often require collaboration between testers, adopting different personas — with simple test challenges that can be used as warm-up exercises and which can be extended beyond the regular UAT test team. Set a list of tasks for users to accomplish using the features of the application, and ask a group of them to work through these tasks individually, all working at the same time and over a short period such as 30 minutes or 1 hour. This will reveal how intuitive the application is, and also reveal defects as novice users will tend to use the application in unpredictable ways.
  • Prepare test data. This must represent real-world data, but of course care must be taken not to allow this data to leak outside the boundaries of UAT testing, for example by using email addresses that represent real people outside of the project team. The application may provide data management screens that allow most if not all test data to be created manually by user input. However it may be more efficient to populate spreadsheets with data and request the assistance of the development team to load this data into the application database. Whilst this approach has the disadvantage of requiring technical support, both in defining the spreadsheet templates as well as importing the data, it has the benefit of providing reloadable data when tests need to be repeated.
  • Create a recording of each scenario as it is tested. This will assist in diagnosis of any defects that are found, and once a ‘clean’ run through of a scenario is achieved this will provide useful training material for other users.
  • Anything other than an obvious defect (such as an error message that blocks the completion of a test scenario) should be reviewed and prioritized for a future sprint by the Product Owner.
  • Track the progress of every test case. Ideally this should be visual and accessible to the UAT testers themselves, so that they can see progress and be motivated to work towards finalization of UAT. While the preference is to use the same tool that is used during story development (typically Jira) to track test progress and manage the issues that arise, more business-friendly alternatives such as spreadsheets should also be considered if this will increase user engagement.
  • Ensure feedback is provided to UAT testers acknowledging their input and demonstrating how feedback was (or will be) addressed by adding new features or adding items to the product backlog.
  • Conduct UAT retrospectives to learn lessons both about the effectiveness of the application development process and the effectiveness of the UAT process itself.  Were requirements well-defined and well-communicated, or did gaps emerge during UAT?
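The spreadsheet-to-database loading approach described above can be sketched as follows. This assumes test data exported from a spreadsheet as CSV and loaded by the development team into a relational table; the table name, columns, and use of an in-memory SQLite database are illustrative stand-ins for the real application schema. Note the `.invalid` email domain, which keeps test data from reaching real people.

```python
# Sketch: reload UAT test data from a spreadsheet export (CSV) into a
# database table, so repeated test runs can start from the same data set.
# Table/column names and SQLite are illustrative, not the real app schema.
import csv
import io
import sqlite3

CSV_DATA = """customer_name,email,loan_amount
Test Customer One,uat.customer1@example.invalid,25000
Test Customer Two,uat.customer2@example.invalid,40000
"""

def load_test_data(conn, csv_file):
    """Drop and recreate the table, then load every row from the CSV."""
    rows = list(csv.DictReader(csv_file))
    conn.execute("DROP TABLE IF EXISTS loan_applications")
    conn.execute(
        "CREATE TABLE loan_applications "
        "(customer_name TEXT, email TEXT, loan_amount INTEGER)"
    )
    conn.executemany(
        "INSERT INTO loan_applications VALUES "
        "(:customer_name, :email, :loan_amount)",
        rows,
    )
    return len(rows)

conn = sqlite3.connect(":memory:")
count = load_test_data(conn, io.StringIO(CSV_DATA))
print(f"loaded {count} rows")  # prints "loaded 2 rows"
```

Because the script drops and recreates the table on each run, the same spreadsheet can be reloaded whenever a test scenario needs to be repeated from a known starting state.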

How to Develop Test Cases

  • The Test Case Writer (TCW) should be considered part of the project team, and should be identified before Sprint 1 has completed.
  • The TCW should catalogue the features delivered in each sprint and start to slot these into end-to-end test scenarios.
  • In some cases sufficient features may have been delivered to complete an end-to-end scenario, in other cases test scenarios will evolve gradually as more features are delivered from each sprint.
  • The TCW should use both the documented requirements of each story as well as the delivered features to develop test scenarios, seeking assistance from the story-based tester if needed.
  • Each test case should have a defined scope, or business scenario, and consist of a number of test scripts that address the different pathways or outcomes that should be supported.
    • For example, a test case may address a customer applying for a credit card and receiving a decision in principle. Test scripts could include an online application, an application by telephone, an application in joint names, an application where proof of address and income has been requested, an application that has been accepted, and an application that has been rejected.
  • Each test script should clearly articulate the business scenario that is being tested, enumerate the test steps in the appropriate sequence, and define the expected outcomes.
  • Test scripts should also identify the test data that is to be used to support the test. If new data will be created by the tester during the execution of a test script, the script should provide clear instructions about what data should be created.
  • Test cases and scripts should be reviewed both by the Product Owner and by a representative of the development team to make sure they align with the product vision and do not include features that have been excluded from the MVP.
  • Test cases should seek to address the high level flow through a happy path and all anticipated exception scenarios. Test cases should not focus on data input and validation functionality at each step in a process, as this should have already been validated during story-based testing.
  • The TCW is creating instructions for the business users who will actually carry out UAT, so must ensure that test scripts provide sufficient detail to guide users through test execution.
  • As a minimum, it is suggested that the following data points be tracked for each test case:
    • Objective: a high level description of the purpose of the test case
    • In scope: list of the features that will be tested
    • Out of scope: list of any features that will explicitly not be tested (for avoidance of doubt)
    • Preconditions: what actions already need to have taken place, and what data needs to already exist before the test can be executed?
    • List of scenarios that will be tested (i.e. the individual test scripts that make up the test case). For each step in each scenario include:
      • Description of the action to be performed by the user
      • Expected result
      • Actual result
      • Status (pass/fail)
      • Comments (to elaborate on any failures, for example)
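A minimal spreadsheet-style register capturing the data points above could be modelled as follows. The field names mirror the list, and the pass/fail summary supports the visual progress tracking recommended earlier; all names and the sample scenario are illustrative.

```python
# Sketch: a minimal test case register holding the suggested data points,
# plus a simple pass/fail progress summary. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TestStep:
    description: str          # action to be performed by the tester
    expected_result: str
    actual_result: str = ""
    status: str = "not run"   # "pass" / "fail" / "not run"
    comments: str = ""

@dataclass
class TestCase:
    objective: str            # high-level purpose of the test case
    in_scope: list
    out_of_scope: list
    preconditions: list
    steps: list = field(default_factory=list)

    def progress(self):
        """Counts per status, for a simple visual progress report."""
        counts = {"pass": 0, "fail": 0, "not run": 0}
        for step in self.steps:
            counts[step.status] += 1
        return counts

case = TestCase(
    objective="Customer applies for a credit card and receives a decision",
    in_scope=["online application", "joint application"],
    out_of_scope=["telephone application"],
    preconditions=["UAT tester account exists", "product catalogue loaded"],
    steps=[
        TestStep("Submit online application",
                 "Application reference displayed", status="pass"),
        TestStep("Review decision in principle", "Decision shown to customer"),
    ],
)

print(case.progress())  # -> {'pass': 1, 'fail': 0, 'not run': 1}
```

The same structure translates directly into spreadsheet columns, so a team that prefers Excel over a dedicated tool can track exactly the same information.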

Common Risks

  • Risk: Users will try to test that the application supports current ways of working.
    • Mitigation: The Product Owner must communicate the product vision and explain what the target state looks like. Users identified for participation in UAT should be aligned with (or neutral to) the product vision. Don’t let UAT be dominated by change-resistant users.
  • Risk: UAT uncovers new requirements that put go-live at risk if the application is not considered production-ready.
    • Mitigation: Following an Agile development methodology should have minimized this risk, with a business-empowered Product Owner who has identified and prioritized the features required for the MVP. The Product Owner must remain engaged throughout UAT to ensure new requirements are given appropriate priority, and to negotiate with users to exclude certain features from the MVP.
  • Risk: UAT is considered an afterthought and not actively addressed until late in the project schedule.
    • Mitigation: Follow the guidance in this document!
  • Risk: The customer does not have sufficient resources to dedicate to UAT activity.
    • Mitigation: The project manager should track this risk and continue to encourage and educate the client on the benefits of performing UAT early and often.

Conclusion

User Acceptance Testing is an important part of any successful Appian project. Hopefully, you find that the recommendations in this guide provide a foundation from which to construct a UAT plan that works best for your project's context and circumstances. Regardless of how your UAT is designed and implemented, be sure to continually incorporate it throughout the project lifecycle to help minimize missed requirements, validate and reshape the product roadmap, and ultimately ensure that end users will derive value from using the application your team has worked so hard to build!