Best way to conduct code reviews?

In most software development projects, code is stored in Git, and code reviews can be easily conducted in a lightweight fashion using GitHub pull requests.

With Appian there is no easy or intuitive way to do this. The only way to review code (that we have found) is to review the object in its entirety. For expressions and interfaces you can manually comb through the Version History, pick out the prior definition, and run it through a text diff tool locally, but this is tedious and makes collaboration difficult. For process models it is impossible, which is another reason we try to keep our models as small as possible and do everything in SAIL.
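For what it's worth, the manual "comb through Version History" step can at least be semi-automated once you've copied two versions of an expression out of the designer. A minimal sketch using Python's standard-library `difflib` (the rule name and SAIL snippets below are purely illustrative, not real objects from our application):

```python
import difflib

# Two versions of a SAIL expression, copied by hand from the object's
# Version History in Appian Designer. Illustrative content only.
old_version = """a!textField(
  label: "Name",
  value: ri!name
)""".splitlines()

new_version = """a!textField(
  label: "Full Name",
  value: ri!name,
  required: true
)""".splitlines()

# Produce a unified diff, the same format Git uses for pull requests.
diff_text = "\n".join(
    difflib.unified_diff(
        old_version,
        new_version,
        fromfile="getCustomerForm v1",  # hypothetical rule name
        tofile="getCustomerForm v2",
        lineterm="",
    )
)
print(diff_text)
```

This only helps for expression-backed objects where the definition is plain text; it does nothing for process models, which is the bigger gap.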

 

Has anyone built a tool for enabling efficient and collaborative code reviews, or is this just generally accepted as a limitation of the Appian product and an impediment to generating high quality code?


  • I would first look at the size of your project to determine what method works best for you. The suggestions here work for most cases, but depending on the complexity of the application, the team size, and the number of artifacts, you might have to come up with a review process tailored to your team. It's highly manual. Reviewing based on a story is a good starting point, but a vetted checklist that is sufficient to determine whether code has been reviewed properly is a consistent and reliable way to move forward. Not every object will require the same level of attention during a review, so you may have to make that determination yourself. Jacob's and Mike Cichy's suggestions are definitely two areas for you to explore. However, if you're looking for an automated way to do this, like a diff, you might hit some roadblocks.