Engineering @ Pluralsight: Clean Code for Speedy Delivery
At Pluralsight we’ve put a lot of thought and effort into developing the engineering practices we use every day. These practices grow out of principles that we’ve come to value, and we’ve documented both in our Engineering at Pluralsight document. On the left side of each page is a principle that we value, and on the right side is a list of items that we Do, Encourage, and Avoid. This post focuses on two of those principles: We continuously verify the correctness of our code, and We maintain a clean, secure, high-quality code base.
We continuously verify the correctness of our code
What We Do
To support the principle “We continuously verify the correctness of our code”, there is one practice that we ask all engineers at Pluralsight to adopt:
We maintain a suite of good unit tests for all production code. All production code should be tested. This doesn’t mean we get crazy about things like code coverage metrics, but it is our philosophy. We want everything covered by fast, reliable, automated tests so that we don’t have to rely on testing everything manually – a process that is fraught with human error. Instead, every time we commit and push our code, an automated build kicks off, compiles the code if necessary, and runs all our tests. Typically this all happens within a few seconds to a couple of minutes of pushing. Of course, ideally we’re also running these tests locally before we even push, for an even quicker feedback loop and to avoid breaking our build – which could impede deployment in the event we have to respond quickly and ship something.
I won’t go into lots of detail about how to do testing as our blog already has several posts that cover it. Here are a couple you could check out, but there are more so feel free to browse around our blog:
Different Types of Unit Tests
What kind of test?
What We Encourage
There is also another practice that we encourage to help us continually verify the correctness of our code:
We encourage (acceptance) test driven development. We’ll get to the (acceptance) part in a minute. First, you might be wondering, “What’s the difference between unit testing and test driven development?” Unit testing is simply writing small tests for some unit of code; the tests could be written before or after the code. Test-Driven Development (TDD), on the other hand, requires that you write your tests first, then your implementation, and then refactor if necessary. If you haven’t done it before, it can be a little mind-bending.
You might ask, “How can I possibly write a test when I don’t even know what the implementation looks like, yet?” It’s a great question and it is one concept that is at the heart of TDD. The answer is, you don’t have to know the implementation before you start writing tests. In fact, when doing TDD, we intentionally avoid trying to create the whole implementation at once. Instead, we start with the smallest piece of functionality and write a test for that. Then implement it. Then write the next test.
For example, say you want to write a coin-changer app and you’re working on the function that takes in an amount and returns the appropriate coins as an array. Your first test wouldn’t be, “Test making change for $25.99.” Instead, we’d start with the simplest possible piece of behavior – perhaps, “Test that the function returns an array.” Once that test is written and failing, you’d write the implementation, and all it would do is return an array. Then you might move on to “Test making change for $0.00,” and you just keep writing tests and implementing small pieces. One major advantage is that this leads to a simpler design: we’re less likely to over-architect a solution when we’re working iteratively like this. It’s also easy to ensure that every bit of logic is tested, because we follow the rule, “don’t implement anything you haven’t written a test for.”
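To make the rhythm concrete, here’s a minimal sketch of where those incremental steps might land. The function name `make_change` and the greedy coin-picking approach are assumptions for illustration, not code from our actual app; amounts are in cents to sidestep floating-point issues.

```python
def make_change(amount_in_cents):
    """Return the coins (in cents) that make up the given amount."""
    coins = []
    # Greedily take the largest denomination that still fits.
    for denomination in (25, 10, 5, 1):  # quarters, dimes, nickels, pennies
        while amount_in_cents >= denomination:
            coins.append(denomination)
            amount_in_cents -= denomination
    return coins

# The tests, written one at a time, each driving the next small
# piece of the implementation above:
def test_returns_an_array():
    assert isinstance(make_change(0), list)

def test_change_for_zero():
    assert make_change(0) == []

def test_change_for_one_cent():
    assert make_change(1) == [1]

def test_change_for_forty_one_cents():
    assert make_change(41) == [25, 10, 5, 1]
```

Note that the first test only cares that a list comes back; the loop over denominations only appears once a later test forces it into existence.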
What I just described are unit tests; they’re testing a small, isolated unit of code. Sometimes, however, you want to test a larger section of your code to make sure all the pieces are working well together. This is where acceptance testing comes in. Unit tests tend to isolate other parts of code with things like mocks. With acceptance testing, you’re typically testing all your code from just below the UI layer all the way down through the different layers.
When you combine acceptance testing with TDD you get what we call double-loop testing. So in the coin-changer example, say you’re creating an API that will be called from some UI. You’ll likely have an API layer, a business layer, and maybe a data layer if you’re tracking money coming in and change going out. With acceptance test driven development, you’d start by writing an acceptance test that calls the API and expects a certain result. That test would fail because the API endpoint doesn’t exist yet. Then you start writing unit tests for each of the pieces that need to be implemented: the API layer, the coin-changer logic, etc. Once all of those tests are written and passing, you should be done, and at that point you’d expect your acceptance test to pass.
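A rough sketch of that outer loop, using an in-process handler function to stand in for a real HTTP endpoint (the names `change_endpoint` and `make_change`, and the request/response shapes, are hypothetical):

```python
def make_change(amount_in_cents):
    """Business layer: the coin-changer logic driven out by unit tests."""
    coins = []
    for denomination in (25, 10, 5, 1):
        while amount_in_cents >= denomination:
            coins.append(denomination)
            amount_in_cents -= denomination
    return coins

def change_endpoint(request):
    """API layer: parses the request and delegates to the business layer."""
    amount = int(request["amount_in_cents"])
    return {"status": 200, "coins": make_change(amount)}

# Outer-loop acceptance test: written first, failing until every layer
# underneath it has been driven out by its own inner-loop unit tests.
def test_acceptance_change_for_41_cents():
    response = change_endpoint({"amount_in_cents": "41"})
    assert response["status"] == 200
    assert response["coins"] == [25, 10, 5, 1]
```

In a real service the acceptance test would make an actual HTTP call against a running instance, but the shape is the same: one failing outer test, many small inner red-green cycles, then a passing outer test.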
In this way, we are using TDD to ensure not just that all the individual pieces work, but also that they all work together. And then, of course, all these tests run automatically when we push to our code repository.
There’s lots more to TDD, but this isn’t a blog post on TDD so I’ll stop here. Here’s a quick primer from our blog on getting started with TDD:
Test Driven Development Fundamentals
What We Avoid
There is also one practice that we avoid at Pluralsight with regard to code quality:
We do not rely on a separate QA team for testing. At first blush, this might feel contrary to our goal of code quality. Why would we not want a QA team to help ensure our quality? It’s mostly a mindset thing, but it also helps with our flow of value to our customers. It really stems from our Software Craftsmanship roots. We want our engineers to feel responsible and passionate about quality, and part of that is knowing that it is up to them to ensure it. We don’t want to take a stab at it and then throw it over a wall for someone else to test for quality. Rather, we want to make sure we’re building quality in as we go. Unit testing, TDD, and knowing that it’s completely up to you to deliver quality create a powerful mindset around quality. To quote a local burger joint, we want “quality and a lot of it.”
This is also important to our lean development methodology. We don’t work in sprints. When a pair or mob of developers is finished with a small piece of work, they commit it and push it. All the tests run and then they check it out in our staging environment. When they are confident all is well, they ship it! No waiting for the sprint to end or for a planned maintenance window, we just ship it right then…in the middle of the day…with no downtime. If we had a QA team, not only would it reduce the responsibility on the engineers to build in the quality from the beginning, it would also have a tremendous impact on that ability to quickly deploy value to our customers.
We maintain a clean, secure, high-quality code base
What We Do
Here are a couple of practices we use to help us maintain a clean, secure, and high-quality code base:
We take the time to write quality code in order to maintain speed of delivery. Writing good, clean code is difficult. Writing code that reads how it is intended to work is difficult. Naming is difficult. Refactoring can also be difficult. But all of them are critical to long-term maintainability. Sometimes we’ll spend an unusually significant amount of time talking about a section of code because it is overly complex and we’re trying to figure out how to simplify it, or because it’s hard to make the code convey to a reader what it actually does. Breaking things down into simple, small, legible chunks takes time. Avoiding over-architected code takes effort. And it is all worth it. As Bob Martin said in his book Clean Code: “The ratio of time spent reading (code) versus writing is well over 10 to 1 … (therefore) making it easy to read makes it easier to write.” And so we take the time to do it right the first time. Ok, that’s not true, we do it wrong plenty of times, but it’s certainly our goal to spend the time to write quality code, which helps us maintain speed of delivery in the future.
We dedicate at least 20% of our time to technical debt reduction. Like I said, we try to do it right the first time, but we get it wrong plenty often. Architecture evolves. Customer needs evolve. Our thinking evolves and we gain more clarity about what we’re building as we build it. All of these things mean that our code needs to evolve along with it, and that requires constant, intentional refactoring. Technical debt, or the need to go back and clean up or change code after it’s written, happens for a wide variety of reasons – but it always happens, constantly. We’ve found that if we are not consistently and intentionally dedicating about 20% of our time to cleaning up technical debt, our codebases begin to decay, and that impacts our sustained speed of delivery. It’s hard to make that 20% happen, but it is something we’re committed to and talk about frequently, because we know that, in the end, it speeds up our ability to deliver. And you might wonder, “Isn’t it risky to be constantly refactoring and cleaning up your code? What if you break something?” Well, that’s where our comprehensive suite of tests comes into play. We’re more confident in our ability to refactor and clean up code because we know we have a suite of tests that will likely tell us if we break something! Not to mention, a messy codebase is much more risky and conducive to bugs than a clean codebase is.
What We Encourage
There’s one more practice that we encourage to help us maintain clean, secure, high-quality code:
We encourage regular refactoring. This is related to our tech-debt reduction efforts and is encouraged for all the same reasons. The main difference between this and tech-debt is that this is constant and ongoing in the moment instead of after the fact. We encourage engineers to constantly be refactoring as they go.
With our Test-Driven Development efforts, there are three phases: Red – write a test which fails (and turns red); Green – make it pass by implementing it; and Refactor – once your test is green, take a look at your code and your test and see if you can clean them up. This is a constant effort to always be reflecting on your code and considering if it would make sense to those coming along behind you. It’s an important part of maintaining a clean code base.
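Sticking with the hypothetical coin-changer, here’s what the Refactor step might look like in miniature: the first “green” version (kept as a comment) repeated the same loop for every denomination, and the cleanup collapses that duplication without changing behavior – which the still-green tests confirm.

```python
# First "green" version, written only to make the tests pass:
#
# def make_change(amount_in_cents):
#     coins = []
#     while amount_in_cents >= 25:
#         coins.append(25); amount_in_cents -= 25
#     while amount_in_cents >= 10:
#         coins.append(10); amount_in_cents -= 10
#     ...and so on for 5 and 1.

# Refactored version: same behavior, duplication folded into one loop.
def make_change(amount_in_cents):
    coins = []
    for denomination in (25, 10, 5, 1):
        while amount_in_cents >= denomination:
            coins.append(denomination)
            amount_in_cents -= denomination
    return coins

# The tests stay green across the refactor, which is what makes the
# cleanup safe to do in the moment.
assert make_change(99) == [25, 25, 25, 10, 10, 1, 1, 1, 1]
assert make_change(0) == []
```

The point isn’t this particular cleanup; it’s that the Refactor step happens immediately, while the tests from the Red and Green steps are there to catch any slip.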
Conclusion
All of this combines to help us create code and systems that are clean and easy to maintain. The tests help to reduce the number of bugs that slip by us into production. The clean code helps us better understand the code, so that we have greater confidence and create fewer bugs for our tests to find – or not find, if our tests aren’t perfect (they’re not). And all of it helps us to ship value to our customers faster.