A key focus of our development efforts here in Newsweaver is on code quality. Every line of code written or refactored must pass through various checks, both automated and manual, to ensure it meets our coding standards. We take a zero tolerance approach to issues flagged at any stage.
You may think this sets a very high standard, but in our experience adhering to it quickly becomes the norm once you get used to the pride and peace of mind that only come from not cutting corners.
We practice Test Driven Development here in Newsweaver. Some of us are more traditional/rigid on this than others (in terms of writing tests first). The end result, though, is that all code coverage thresholds must be met (over 90% coverage, both lines and branches). This figure does not tell you how effectively your code is tested; rather, it tells you which parts of your code are NOT tested. We use these metrics to ensure that we haven’t overlooked (or purposely skipped!!) testing any sections of code.
We do not strive for 100% coverage (though we sometimes achieve it! ;-)), due to the diminishing returns and the tendency to write meaningless tests just to reach that figure, but we find that aiming for the 90% mark means that any code of consequence is tested. It also guides us to write clean code, with small, clear methods and objects that make our code easily testable.
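As a sketch of how a gate like this can be enforced in a Scala build, here is what it might look like with the sbt-scoverage plugin (the keys are standard scoverage settings, but the exact thresholds and plugin are illustrative, not necessarily our setup):

```scala
// build.sbt — hypothetical coverage gate using sbt-scoverage.
// If either figure drops below 90%, the build fails.
coverageMinimumStmtTotal := 90
coverageMinimumBranchTotal := 90
coverageFailOnMinimum := true
```

With `coverageFailOnMinimum` set, a developer cannot accidentally merge under-tested code: the local build itself goes red.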
Once the logic is tested, an army of linters and bug finders descends upon the code! These automated tools crawl over it, highlighting areas they deem suspicious or not adhering to accepted style guidelines. With the aid of these tools a developer, particularly one new to a language, immediately learns what best practice is and can apply it. In my case, I was new to Scala and to functional programming in general. Having WartRemover (lovely name, I know) go over my code was a real eye-opener and helped me along the road to writing cleaner, more functional code. Being part of the build (and built into the IDE in most cases) also means that you fix issues as you write the code. Again, positive habits are reinforced.
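For a Scala codebase, a linter like WartRemover hooks into the build as a compiler plugin. A minimal sketch (the plugin version and choice of warts are illustrative):

```scala
// project/plugins.sbt
addSbtPlugin("org.wartremover" % "sbt-wartremover" % "3.1.6")

// build.sbt — promote the built-in "unsafe" warts
// (null, var, return, etc.) to compile errors.
wartremoverErrors ++= Warts.unsafe
```

Because the warts are reported as compiler errors, the feedback arrives at the same moment as any other compile failure.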
We find a zero tolerance approach very important here, as when issues build up they counterintuitively become easier to ignore. One piece of red on a screen jumps out a lot more than a sea of red. Sometimes (very rarely) issues do arise that we must make exceptions for. In that case we discuss them as a team and, once we are all happy, we add the exception to the linter’s config file so it is on the record and doesn’t show up in future analysis.
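With WartRemover, an agreed exception can be recorded right at the use site, so it sits in version control and is visible in review. A sketch (the method and the suppressed wart are hypothetical examples, not from our codebase):

```scala
// A team-approved exception, on the record in the source itself.
// WartRemover honours the standard @SuppressWarnings annotation
// with fully-qualified wart names.
@SuppressWarnings(Array("org.wartremover.warts.Var"))
def sumRetries(attempts: List[Int]): Int = {
  var total = 0 // a deliberate, documented use of a mutable var
  attempts.foreach(total += _)
  total
}
```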
Both code coverage and style checkers run as part of our builds, so developers get immediate feedback on their progress. Once the local build passes (it fails if the quality thresholds are not met), the developer is free to commit the code, and our builds on Bamboo kick off further analysis of the code using SonarQube.
Sonar Quality Gates
SonarQube is a continuous inspection platform for managing your code quality. It provides both coarse- and fine-grained views of important code quality metrics, allowing you to track them over time and to drill down into individual issues. There really is nowhere to hide with it. As a team, it is our clean code conscience.
As well as tracking code coverage and identifying style violations, it highlights smells such as code complexity, code duplication and circular dependencies that should be tackled.
Another extremely useful feature is quality gates. Similar to the quality thresholds in the build, you can set thresholds in Sonar. If the code fails these gates, Sonar automatically sends an email to the full team with the updated quality status of the project (new issues added, status gone from green to red). As I said, nowhere to hide :)
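Wiring a project into SonarQube only takes a small properties file; the quality gate itself (thresholds and notifications) is configured on the SonarQube server. A hypothetical example (project key and paths are illustrative):

```
# sonar-project.properties — hypothetical project wiring
sonar.projectKey=newsweaver:reporting-service
sonar.projectName=Reporting Service
sonar.sources=src/main/scala
sonar.tests=src/test/scala
```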
We also use SonarQube to record and share our commitment to high standards. Seeing your projects up on a dashboard with no style violations and great test coverage brings a real sense of achievement. I know it is essentially the same thing as having your build pass locally, but the visual and public aspect of it emphasises the importance of quality in the overall development process. Furthermore, it is something you can use to build trust when dealing with people in the company outside of development.
The above static analysis of the code is a solid foundation, but you can still get away with writing some terrible code. This is where manual peer code reviews, or pull requests (PRs), come in. We develop on feature branches, and before merging with the main branch a developer creates a PR and adds the rest of the team as reviewers. The developer adds a detailed description to the PR containing an overview of the work done as well as any areas of the code worthy of note.
The rest of the team then service these requests. Atlassian Stash’s PR process provides a variety of ways for reviewers to visualize the diffs between the new and old code. This allows us not only to get a feel for and learn from what the developer did, but also to critique the code as we see it, offering possible improvements (both functional and non-functional). The original developer can then take these suggestions on board and make follow-up commits. This fix -> feedback loop continues until the team approves the PR and the code can be merged.
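The branch -> PR -> merge flow itself is plain git underneath. A throwaway sketch (branch and file names are made up for illustration):

```shell
#!/bin/sh
set -e
# create a disposable repository to demonstrate the flow
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"

# work happens on a feature branch, never on the main branch...
git checkout -q -b feature/report-export
echo "export logic" > Export.scala
git add Export.scala
git commit -q -m "Add report export"

# ...and once the team approves the PR it is merged back.
# --no-ff keeps the reviewed unit visible as a merge commit.
git checkout -q -
git merge -q --no-ff feature/report-export -m "Merge PR: report export"
```

The `--no-ff` merge mirrors what Stash does on approval: the feature stays grouped as one reviewable unit in history.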
This process is a fantastic way of sharing knowledge in terms of new work being done and experience in terms of suggesting changes. It improves the end code in ways that an automated process never could.
The final section of the pipeline (one that not all code passes through) is the group code review. This is where the full team gets together in a room and goes over a piece of code together to discuss its merits (or lack thereof). We call these for a variety of reasons. Maybe someone has done something with a new piece of tech (our first Spark apps, for example). Other times a pull request might have gone through a number of iterations with many differing opinions, and people need to get together to work out the issues and agree on an approach going forward. These are very interactive and engaging meetings, and we tend to arrive at a consensus relatively quickly, freeing the team up to move on.
So that is an overview of how code quality fits into our development process. Rather than being seen as a chore, it is seen in a positive light within our team. It solidifies our collective thinking with regard to coding standards. It is fair, as the same thresholds are applied to everyone, from the student on internship to the most senior developer. Finally, it saves time, as issues are caught and fixed early, allowing people to move on quickly to their next task, safe in the knowledge that they are leaving good clean work behind them.