Best way to conduct code reviews?

In most software development projects, code is stored in Git, and code reviews can be easily conducted in a lightweight fashion using GitHub pull requests.


With Appian there is no easy or intuitive way to do this. The only way to review code (that we have found) is to review the object in its totality. For expressions and interfaces you can manually comb through the Version history, pick out the prior definition, and run it through a text diff tool locally, but this is tedious and doesn't lend itself to collaboration. For process models it isn't possible at all, which is another reason we try to keep our models as small as possible and do everything in SAIL.
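
For example, once two versions of an expression are saved locally as text files, even a short script turns them into a diff that can be pasted into a ticket or chat for others to comment on (a sketch only; the file names below are made up):

```python
# Minimal sketch: diff two versions of a SAIL expression saved from Version history.
# The file names are hypothetical; paste each version into its own text file first.
import difflib
from pathlib import Path

old = Path("getCustomerOrders_v3.sail").read_text().splitlines()
new = Path("getCustomerOrders_v4.sail").read_text().splitlines()

# A unified diff is compact and easy to share in a review ticket or chat thread.
for line in difflib.unified_diff(old, new, fromfile="v3", tofile="v4", lineterm=""):
    print(line)
```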

 

Has anyone built a tool for enabling efficient and collaborative code reviews, or is this just generally accepted as a limitation of the Appian product and an impediment to generating high quality code?


  • Generally we review our code story by story. We list the created objects on our scrum board so that the reviewer knows which items to look at. We require heavily commented code to increase the efficiency of the review. Additionally, for expression rules, you can require that your developers create test cases, which greatly speeds up the review and testing of the code.

    The ease of code review is definitely a limitation of Appian, but the way to go about it is to organize outside of Appian (see the sketch below for the kind of per-story tracking we keep).
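
    As a rough illustration of the organizing we do outside of Appian, even a tiny script (or a spreadsheet) that tracks the objects per story works; the story IDs, object names, and statuses below are made up:

    ```python
    # Sketch of a per-story review checklist kept outside Appian.
    # Story IDs, object names, and statuses are made up for illustration.
    review_checklist = {
        "STORY-142": [
            {"object": "CO_FM_createOrder",   "type": "Interface",       "reviewed": True},
            {"object": "CO_QE_getOpenOrders", "type": "Expression Rule", "reviewed": False},
            {"object": "CO_PM_Submit Order",  "type": "Process Model",   "reviewed": False},
        ],
    }

    for story, objects in review_checklist.items():
        pending = [o["object"] for o in objects if not o["reviewed"]]
        print(f"{story}: {len(objects) - len(pending)}/{len(objects)} objects reviewed")
        for name in pending:
            print(f"  still to review: {name}")
    ```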

  • Hi,

    As Jacob suggested, heavily commented code is one of the keys to an efficient review. Creating test cases for expression rules at the time they are written is also very good practice, and since Appian now provides Automated Testing for Expression Rules, those test cases are genuinely helpful when reviewing the expressions.
    For proper functional testing you can use FitNesse/UI testing, which is really helpful for verifying that the application behaves as intended.

  • Certified Lead Developer
    in reply to inder_arora
    No tool - all manual. Our developers create change logs. Those change logs then go through peer review, where all objects are validated to be in the patch app and the objects themselves are reviewed against checklists.
  • Certified Lead Developer
    As far as Process Models go, you can generate the documentation for the different versions of the PMs and then use a diff-checking tool to compare the versions and see what has changed in the PM.
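
    As a rough sketch, once both versions of the generated documentation have been saved as plain text, Python's difflib can turn them into a side-by-side HTML report that is easy to attach to a review; the file names below are hypothetical:

    ```python
    # Sketch: compare two versions of generated process model documentation.
    # Assumes both versions were saved as plain text; the file names are hypothetical.
    import difflib
    from pathlib import Path

    v12 = Path("SubmitOrder_PM_v12.txt").read_text().splitlines()
    v13 = Path("SubmitOrder_PM_v13.txt").read_text().splitlines()

    # A side-by-side HTML report is handy to attach to a review ticket.
    html = difflib.HtmlDiff(wrapcolumn=100).make_file(v12, v13, fromdesc="v12", todesc="v13")
    Path("SubmitOrder_PM_diff.html").write_text(html)
    ```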
  • Running the health check in the lower environments can also be a good way to automatically detect bad practices, provided you don't have a lot of junk objects in the environment.
  • I think code reviewers should check whether the process has been designed according to the best practices proposed by Appian. They should build a checklist of best practices for both process model design and interfaces, and verify that those practices have been implemented. For example, check the performance view for the UI (interfaces): evaluation should be as fast as possible, say under 100 ms, to avoid a bad end-user experience. For process models, check security, archival settings, and the alerts configured; and if a new field is added to the UI, you need to consider how existing instances will behave so they don't break.
    We did build a code review tool some time back that checks whether the best practices have been followed in the process and generates a report when you upload a package into it; a sketch of one such automated check follows below.
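
    As a small illustration of the kind of check such a tool can automate, the sketch below flags interfaces whose measured evaluation time is above the 100 ms target; the CSV of timings is a hypothetical export collected manually from the performance view, not something Appian produces on its own:

    ```python
    # Sketch: flag interfaces whose evaluation time exceeds a review threshold.
    # "interface_timings.csv" is a hypothetical file (object_name, evaluation_ms)
    # filled in by hand from the performance view; Appian does not emit it itself.
    import csv

    THRESHOLD_MS = 100  # the target ceiling mentioned above

    with open("interface_timings.csv", newline="") as f:
        for row in csv.DictReader(f):
            elapsed = float(row["evaluation_ms"])
            if elapsed > THRESHOLD_MS:
                print(f"REVIEW: {row['object_name']} took {elapsed:.0f} ms (> {THRESHOLD_MS} ms)")
    ```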
  • Appian Employee
    in reply to ashokv

    Per Ashok and Christine, there are two levels of reviews - code structure and best practices/design review.

    For the former, there is no easy way to do it - SAIL could be maintained in git and potentially diffed (a sketch of that approach is at the end of this reply). From there, things get progressively more challenging all the way up to process models, which at this point have to be reviewed visually. As Christine noted, change logs with peer review are the best approach in this situation.

    As for design reviews, we have best practices checklists for each stage of a project. Additionally, I always start my review by having teams run Appian Health Check, which covers many of the items on the best practices lists, and then focus on the flagged risks. It is incredibly useful - at some point teams include it in their delivery cycle and only escalate the items they are having a hard time resolving.
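
    To make the "SAIL in git" idea concrete, the sketch below unpacks an exported application package (a zip of XML object definitions) into a folder that git can track and diff. The package name is hypothetical and the internal layout varies by version and object type, so treat it as a starting point rather than a finished tool:

    ```python
    # Sketch: unpack an exported Appian package so its object XML can be tracked in git.
    # The package file name is hypothetical; exported packages are zip archives of XML,
    # but the internal layout varies by Appian version and object type.
    import zipfile
    from pathlib import Path

    PACKAGE = "MyApp_patch_2024-05-01.zip"
    OUT_DIR = Path("appian-objects")

    with zipfile.ZipFile(PACKAGE) as pkg:
        for name in pkg.namelist():
            if name.endswith(".xml"):
                target = OUT_DIR / name
                target.parent.mkdir(parents=True, exist_ok=True)
                target.write_bytes(pkg.read(name))

    # Commit OUT_DIR after each import/release and use plain `git diff` to review changes.
    print(f"Extracted object XML to {OUT_DIR}")
    ```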

  • I would first look at the size of your project to determine what method works best for you. The suggestions here work for most cases, but depending on the complexity of the application, the team size, and the number of artifacts, you might have to come up with a review process tailored to your team. It is highly manual. Reviewing based on a story is a good starting point, but having a vetted checklist that is sufficient to determine whether code has been reviewed properly gives you a consistent and reliable way to move forward. Not every object will require the same level of attention during a review, so you may have to make that determination yourself. Jacob's and Mike Cichy's suggestions are definitely two areas for you to explore. However, if you are looking for an automated way to do this, like a diff, you might hit some roadblocks.
  • There is no automated tool to do this; developers need to maintain the change log, and then the review has to be done object by object before deployment to the higher environments.
  • You can create a checklist of the points that need to be validated. The latest version of Appian supports automated testing of expression rules. Here are a few links that should help you review your code:
    docs.appian.com/.../ux_getting_started.html
    docs.appian.com/.../Database_Schema_Best_Practices.html
    docs.appian.com/.../Tempo_Best_Practices.html