Are you buying Quality software?


In our previous article, Building Better Software, we showed how clean we keep our codebase. We strongly believe it's possible to run a successful business and have clean code at the same time. While it's relatively easy to show some nice metrics once, it's much harder to keep your code clean over time.

Even when we release major features like On-Premise load testing, we still follow the Boy Scout Rule: always leave the campground cleaner than you found it. Clean code is not only compatible with business needs, it's almost mandatory if you want to become competitive quickly in a given market.

Unlike most software companies, which don't really care about the quality of the software they ship, we care that you get the best bang for your buck in a load testing tool. Do you know any other software company that regularly publishes its code quality metrics? Probably not. We're trying to start a movement in favor of clean code.

What is Clean Code?

Clean code

Clean code focuses on making software that's easy to read, maintain and evolve. Good programmers understood early on that we write code for humans, not computers. Dijkstra, the famous computer scientist, said something very interesting in his 1972 Turing Award lecture:

"The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague."

Good programmers have understood how difficult it is to create programs and maintain them in the long run. Bad programmers are the ones who don't realize that their brain has a strictly limited computing capacity. Humble programmers, on the other hand, compensate for that limited capacity by reducing the load required to understand their code.

The candidate who is brave enough to say I don't know during a coding interview is the one you should hire. Clean code is the essence of humble programming: a set of practices for writing code that accomplishes a given task as simply as possible.

If you can't explain part of your code with a simple sentence, then it's too complicated and needs to be refactored.

Most companies fail to understand why their programmers should write clean code because it seems to bring no added value to the customer. This is wrong.

Fewer bugs

Software bugs

Clean code is highly testable and thoroughly tested. Bugs are part of any software, no matter how clean the codebase is. It's not possible to eradicate all bugs, even if you have an infinite budget to fix them.

Automated tests are the heart of clean code. Tested code is less prone to bugs and regressions than untested code. If a company ships a feature without testing it, what does that mean? To me, it just means the feature requirements didn't explicitly ask the developer to ship something that works.

Let me tell you a small story I read in one of Robert C. Martin's books. It's a conversation between Robert C. Martin and a project manager, Patrick, about performance and testability:

  • Robert: I suggest we replace the existing algorithm with a new one which is both tested and clean.
  • Patrick: But our existing implementation is faster than yours.
  • Robert: Sure, it's faster, but it doesn't work. If the requirement is to ship something that doesn't work, I can make it run instantly.

Of course, many companies manually test the code being shipped. Manual testing is a waste of time and effort: it involves humans, and humans are known to fail every now and then. Humans are especially bad at repetitive tasks. Machines excel at them: everything that can be automated must be automated.

As I said, we may still ship software with defects. But clean code inherently has fewer bugs per line of code than poorly maintained code.

Less time debugging, more time developing

Debugging

It's as simple as that: if you spend less time fixing bugs in your system, you can spend more time developing. Developing includes refactoring existing code to meet new needs. Every time a new feature is introduced, the system should be adapted so the feature fits perfectly.

One can argue that the time spent debugging is simply traded for time spent refactoring existing code. And it's true: instead of tracking bugs in a big ball of mud, engineers spend more time rewriting ill-fitting code to accommodate new features.

The time spent tracking a bug with the debugger is lost. Time spent refactoring and unit testing an understandable system, on the other hand, is an investment in the future: future bugs will be easier to reproduce. That's the biggest difference between badly designed systems and good ones: in the latter, bugs can be reproduced in no time.

Less time reproducing bugs

In well-designed software, a bug can be reproduced by writing a unit test which demonstrates that the code fails in the given situation. Once the offending code has been fixed, the unit tests are re-run. They not only verify that the fix works but also validate that all previous tests still pass.
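To make this concrete, here is a minimal sketch of the reproduce-then-fix workflow. The class, method and bug are all invented for the example (this is not OctoPerf's actual code): suppose a discount calculation once returned a negative total when the discount exceeded the price.

```java
// Hypothetical sketch: reproducing a bug as a test before fixing it.
class PriceCalculator {

    // Fixed implementation: the total is now clamped at zero.
    static long totalCents(long priceCents, long discountCents) {
        return Math.max(0, priceCents - discountCents);
    }
}

class PriceCalculatorTest {

    public static void main(String[] args) {
        // Regression test: this exact input used to return -500.
        check(PriceCalculator.totalCents(1000, 1500) == 0,
              "discount larger than price must yield 0");
        // The nominal case still works after the fix.
        check(PriceCalculator.totalCents(1000, 300) == 700,
              "plain discount is broken");
    }

    private static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }
}
```

The failing input is captured as a permanent test, so the bug can never silently come back.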

Badly designed systems are difficult to test because they weren't designed to be tested by automated tools. Every time a fix is made, manual or integration tests must be run. This is time-consuming, labor-intensive and error-prone.

Fewer regressions

Every unit test acts as a regression test: each time it runs, it verifies that the code behaves as expected. Every time a new bug is discovered, a test is added to the suite to validate that the fix works. With this kind of software, regressions are almost non-existent.

Happier Coding Team

Coding Team

Clean code not only affects development speed but also the coding team's sanity. Developers are happier working with clean code because it puts less load on their brain. Who wouldn't be pleased when someone else makes their life easier? The human factor of writing clean code is both the most disregarded by companies and the most important to their long-term success.

Most companies struggle to understand that the happiness of their developers depends less on the workplace than on the quality of their work. The top 5% of developers want to work for companies that care about the quality of what they build. You can't attract the best talent if you don't care about how your software is built, and you can't build awesome software if you can't attract the best developers to write it.

Sure, it's possible to make a lot of business with a poorly maintained codebase in many situations. But ask yourself just one question: Would you buy a car whose brakes can fail anytime? I guess you wouldn't.

At Octoperf we believe that writing quality software is the key to ship reliable software fast. We care about the quality of the code we write, even if the customers don't immediately perceive the benefits.

Sonarqube analysis

In the following sections, we're going to expose our internal Sonarqube metrics. Sonarqube is an open-source code quality server: it lets you analyze possible issues within your code from a Web UI, and it includes the well-known code quality checkers PMD, FindBugs and Checkstyle.

Quality Profile

Sonarqube Quality Profile

We have recently strengthened our Sonarqube Quality Profile. We have enabled more than 700 rules, and we will continue to enable more in the future. The more rules you enable, the stricter the coding becomes. In fact, constraints can actually make you more creative.

These quality rules enforce coding standards without killing creativity. Think of the car industry: cars are built under very strict safety rules, yet we still see amazing new cars being designed. When issues show that your code doesn't meet the expected quality requirements, it doesn't mean you did a bad job. See it as a way to improve your code.

We found that many of the issues raised by the quality rules are clear signs that something is wrong in the code, and they are almost always followed by a myriad of bugs. Let's dive into our backend codebase and explore the metrics it exposes.

Code Structure

Code Structure Metrics

Lines of code

Our backend codebase is approaching 32.5K lines of code, which is quite compact considering it's the entire application. We have 1100 classes, which means an average of about 30 lines of code per class. That may seem very low, but if you follow the Single Responsibility Principle, it's quite standard. We prefer many small classes, each doing a single thing, over a few big ones with many responsibilities.
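The Single Responsibility Principle mentioned above can be sketched as follows. The class and method names are invented for illustration: instead of one large ReportManager that both formats and stores report entries, each tiny class has exactly one reason to change.

```java
// Hypothetical illustration of the Single Responsibility Principle.
class ReportFormatter {

    // Sole responsibility: turn a metric name and value into a report line.
    String format(String metric, double value) {
        return metric + ": " + value;
    }
}

class ReportArchive {

    // Sole responsibility: keep the report entries that were produced.
    private final java.util.List<String> entries = new java.util.ArrayList<>();

    void store(String entry) {
        entries.add(entry);
    }

    int size() {
        return entries.size();
    }
}
```

Each class stays around 10-30 lines, which is how a 32.5K-line codebase ends up averaging about 30 lines per class.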

Statements

There are only 5.6K statements, which means that out of 32.5K lines of code, only about 5.6K lines effectively do something. The other lines are comments, curly braces or class and method declarations. The reason we can do so much with so little code is that we rely on open-source APIs whenever possible. The application's intelligence is distributed across many open-source libraries put together.

Code complexity

The most interesting metric is code complexity. It shows how hard the codebase is to understand and maintain. Developers spend roughly 80% of their time reading code; making the code simple to read by reducing complexity lowers the developer's mental load. A developer who spends less time reasoning about how the system works has more time for more useful tasks.

95% of our classes and methods have a cyclomatic complexity between 1 and 2, which is very low. In most companies, you'll see an average complexity between 5 and 8, and I've seen classes in some companies exceeding a score of 20! Past a certain complexity, the code simply becomes unmanageable. It happens when a class grows steadily without being refactored and broken into smaller pieces; it ends up a monster no one wants to touch anymore. This is known as the Lava Flow anti-pattern.

Comments

A look at the comments shows we obviously don't comment our code very much. The reason is that we consider the code itself to be the documentation. We'd rather rewrite the code to make it clearer than compensate with comments. Comments should only tell what the code can't. For example, This method may become slow due to blabla is a very good comment: it tells us something about the processing time of the method.

There is a debate about whether every public method should be commented. Just remember that comments are like code: they need to be maintained and kept in sync with the code they cover. We think that in most cases they duplicate information already in the code.

Coverage

Tests coverage

Code Coverage

We cover 100% of the shipped lines of code with unit tests. Once you get used to it, it doesn't require much extra effort. Every time we ran untested code in our testing environment, it failed miserably. We test everything because we have seen that bugs can be anywhere, not only in complex code paths.

When you write a simple function, testing it may seem unnecessary because it's so simple. That thinking is wrong: the simpler the code, the less attention you pay while writing it. Even covering 100% of your code with tests doesn't make it bug-free; it just gives you more confidence in the code. Time spent testing is always worthwhile in the long run: when a developer refactors the code or fixes a bug, the tests ensure the code still works as expected.
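Here is the kind of "too simple to test" helper the paragraph above warns about. The class is invented for the example: a one-line ceiling division is exactly where off-by-one bugs like to hide, so it gets tested anyway.

```java
// Hypothetical one-line helper that looks too simple to be worth testing.
class Paging {

    // Number of pages needed to show `items` entries, `pageSize` per page.
    // Integer ceiling division: (a + b - 1) / b.
    static int pageCount(int items, int pageSize) {
        return (items + pageSize - 1) / pageSize;
    }
}
```

Three trivial assertions (zero items, an exact fit, and one item over) take seconds to write and pin down the edge cases forever: pageCount(0, 10) is 0, pageCount(10, 10) is 1, and pageCount(11, 10) is 2.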

Unit tests

We have almost 3.6K tests to cover the 32.5K lines of code, which means we have almost 1 unit test per 10 lines of code. It only takes about 9 minutes on our build machine to run the entire test suite on our backend.

It means the entire backend codebase is run at least once and verified by thousands of tests in just under 10 minutes. This is a huge advantage over companies running manual or integration tests, which usually take days or weeks. Of course, unit tests can't check everything, so we complement them with manual testing during the development phase. Manual testing ensures that the individually tested components work together; by then, most of the bugs have already been eliminated during the unit testing phase.

Technical Debt

Technical Debt

Definition

Technical debt measures the work required to remediate potential issues and missing code coverage; it reflects the code's compliance with the quality profile. Temporary technical debt is usually not an issue and appears frequently during development: code typically needs to be reworked a few times before it meets the quality requirements, and carrying some debt during that phase is completely normal.

Technical debt is also often seen as a trade-off to ship software quicker. We think it's wrong to trade code quality for speed: code shipped with less quality is more likely to fail in production, and you're very likely to spend time debugging and fixing it there. As explained before, this is a waste of time.

Is it worth the effort?

Skipping the testing phase to save time is one of the greatest fallacies in the software industry. Many companies think it's better to ship something good than something perfect, as Facebook famously preached. Shipping code with little to no testing to save time is not good; it's very bad.

Writing code that meets high quality requirements doesn't come naturally; it's something you need to learn and get used to. But once you do, it's actually faster, because you spend your time on tasks with a high ROI.

When you repeatedly ship badly designed code, it stacks up pretty quickly and ends up slowing down the development team. The code becomes risky; no one wants to touch it, because every modified line leads to a regression. Some people really like working in a war zone, but most don't. If the best software engineers start leaving your company, you should start worrying about the quality of your code. Your customers won't be satisfied if you ship code that doesn't work. After all, if the requirement for a feature is not to work, you can ship it immediately, can't you?

Gold Plating

The opposite of this idea is gold plating. Software companies often think that spending time testing or writing beautiful code (which should really be called understandable code) is gold plating. It's not.

Gold plating is:

  • spending time adding features to a product before launching it to market, without feedback from potential customers,
  • writing code that tries to forecast future evolutions,
  • over-optimizing code that's already fast enough.

If you want to make trade-offs to ship your product faster:

  • reduce the number of features,
  • find APIs which can reduce the amount of lines of code you have to write,
  • reduce the complexity of new features.

Never make a trade-off on quality, you will end up with a technical debt you can't remediate.

Code Duplication

Code duplication

Code duplication is evil. If this metric goes above 0, you're in hell. It's probably the first thing to fix when starting with legacy code: every single fix made in duplicated code needs to be replicated in all the copies, which is insane to maintain. Who knows where all the duplicates are? "Oh, it seems a duplicate has already been fixed, but not the original code."

It's acceptable to have some duplication during the development phase, for testing purposes. But it should be remediated by the time the feature is finished.
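Remediating duplication usually means extracting the copy-pasted logic into a single method. The classes below are invented for the example: the same null/blank check was once pasted into every constructor, and after extraction a future fix happens in exactly one place.

```java
// Hypothetical de-duplication sketch: one shared validation helper
// replaces the same check copy-pasted across many constructors.
class Validation {

    static String requireNonBlank(String value, String field) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException(field + " must not be blank");
        }
        return value;
    }
}

class User {

    private final String name;
    private final String email;

    User(String name, String email) {
        // Both fields reuse the single, de-duplicated check.
        this.name = Validation.requireNonBlank(name, "name");
        this.email = Validation.requireNonBlank(email, "email");
    }

    String name() {
        return name;
    }
}
```

If the definition of "blank" ever changes (say, to handle non-breaking spaces), there is exactly one method to fix and one test to update.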

Conclusion

As you have seen, we write clean code because we think it's the best way to ship software fast. There are so many examples of companies that failed due to code bloat that you can't go wrong by doing things right. Even if most people think you are wrong, see it as a challenge and a way to improve yourself.
