
Test-Driven Development for Microservices

If you read Kent Beck’s “Test-Driven Development: By Example” you will learn that writing tests first enables you to make sense of your design as you go.

The book was written in the era of Service-Oriented Architecture, before microservices were widely adopted as an architectural pattern. A monolithic application, with classes and libraries as its areas of separated responsibility and, crucially, a single code base, is typically somewhat conceptually simpler than the usual multi-repository, API-separated microservices application. So does that mean that we can’t do TDD for microservices, or that it’s a bad idea? And what does doing TDD even mean for microservices?

It’s good to remember that Test Driven Development isn’t the goal. The purpose of TDD is to force us into good habits around design and maintainability. We want the benefits of TDD in microservices but does the architecture allow it?

Many of the microservices I have seen could benefit from taking a more TDD approach to design and development. Why? Because a lot of the time the testing is focused mainly on integration and the higher levels towards end-to-end testing. This is perhaps a natural result of microservices being loosely coupled, often having a UI, often using NoSQL data stores under the hood (which again enable quick prototyping and loose coupling), and having many interfaces as a matter of design, which invariably leads us to consider the microservices’ interfaces as the most important parts to test.

TDD tells us something else: start with a test, then write your code to satisfy that test. With traditional approaches to building microservices, the internals of these services can get overlooked.
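To make that concrete, here is a minimal test-first sketch of a piece of service-internal business logic. All names (`calculate_discount`, the discount rule itself) are hypothetical, invented for illustration; the point is that the tests state the rule first and the code exists only to satisfy them.

```python
def calculate_discount(order_total: float, loyalty_years: int) -> float:
    """Business rule written *after* the tests below demanded it:
    5% off per loyalty year, capped at 25%."""
    rate = min(loyalty_years * 0.05, 0.25)
    return round(order_total * (1 - rate), 2)


# The tests that drove the design: written first, failing, then satisfied.
def test_discount_is_capped_at_25_percent():
    assert calculate_discount(100.0, 10) == 75.0


def test_one_loyalty_year_gives_5_percent():
    assert calculate_discount(200.0, 1) == 190.0


test_discount_is_capped_at_25_percent()
test_one_loyalty_year_gives_5_percent()
```

Nothing here touches an API, a queue or a database, which is exactly the kind of internal logic that often goes untested when all attention is on the service boundary.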

For a moment let us consider the testing pyramid.

I’ve borrowed the image from The Practical Test Pyramid, which goes on to say:

Mike Cohn’s original test pyramid consists of three layers that your test suite should consist of (bottom to top):

  1. Unit Tests
  2. Service (or Integration) Tests
  3. User Interface (or end 2 end) Tests

Unfortunately the concept of the test pyramid falls a little short if you take a closer look. Your best bet is to remember two things from Cohn’s original test pyramid:

  1. Write tests with different granularity
  2. The more high-level you get the fewer tests you should have

I believe that these last two rules are excellent advice. I’ve seen many examples of microservices tests which focus too much on integration, on interfaces between classes, services and external services, and which therefore tend to neglect validating the data within the services to a greater or lesser extent. There are many attractive testing frameworks which promise to make testing easier for us. Fixtures in particular are a shortcut that lets us create prepopulated objects (usually using object reflection), saving us from manually and laboriously building suitable testing objects. However, fixtures can make us believe that testing is simple and straightforward and that a passing test is a sign of quality. TDD tends to take the opposite, more stoic, approach: it makes us think carefully about what test we want to write to prove our future functionality, and then design our code accordingly. In some ways, these two approaches could not be more dissimilar.
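The contrast can be sketched in a few lines. This is a toy example with invented names (`Customer`, `auto_fixture`): the reflection-based fixture produces a valid object and a green assertion, but the deliberate, TDD-style test is the one that actually states a rule we care about.

```python
from dataclasses import dataclass, fields


@dataclass
class Customer:
    name: str = ""
    email: str = ""
    loyalty_years: int = 0


def auto_fixture(cls):
    """Fixture-style shortcut: reflect over the fields and prepopulate
    each one with its declared default."""
    return cls(**{f.name: f.default for f in fields(cls)})


# Fixture approach: the object exists and the test passes,
# but it proves very little about behaviour.
blank = auto_fixture(Customer)
assert blank.email == ""

# TDD approach: build exactly the state the behaviour under test needs,
# and assert the business rule explicitly.
vip = Customer(name="Ada", loyalty_years=6)
assert vip.loyalty_years > 5
```

The fixture saves typing; the hand-built object documents intent. Both have a place, but only one drives design.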

So what about the other types of testing we typically see at an integration level?

Fakes and Mocks and Contract Testing

Fakes, mocks, stubs, doubles, contracts, spies and so on. These are discussed in many places, but here is a full Martin Fowler piece on the differences between Mocks and Stubs and some things to consider. Having read it, it seems that I’m probably more of a Classical TDDer than a Mockist TDDer: I prefer to use real objects rather than mocks when it comes to writing tests, though I’m not averse to using a mock alongside real objects.
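The two styles can be shown side by side with Python’s standard-library `unittest.mock`. The repository and `register_user` function are invented for illustration: the classical test asserts the resulting state through a real (if simple) collaborator, while the mockist test verifies the interaction instead.

```python
from unittest.mock import Mock


class InMemoryRepo:
    """A real, if simple, object -- the classical style favours these."""

    def __init__(self):
        self._items = {}

    def save(self, key, value):
        self._items[key] = value

    def get(self, key):
        return self._items.get(key)


def register_user(repo, name):
    repo.save(name, {"name": name, "active": True})
    return repo.get(name)


# Classical style: exercise the behaviour through a real collaborator
# and assert on the resulting state.
repo = InMemoryRepo()
assert register_user(repo, "ada")["active"] is True

# Mockist style: verify the *interaction* rather than the state.
mock_repo = Mock()
register_user(mock_repo, "ada")
mock_repo.save.assert_called_once_with("ada", {"name": "ada", "active": True})
```

The classical test survives a refactor of how the repository stores data; the mockist test pins the exact call, which can be useful at a service boundary but brittle inside one.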

Here is another piece, a Martin Fowler overview of Contract Testing, which also makes for interesting reading about how we can even define tests for our interfaces that live outside of our microservices. This might be an effective way to test our API Gateway and ensure that “all contracts are being honoured”.
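A hand-rolled sketch shows the core idea without any framework (no Pact or similar; the contract shape, field names and `satisfies_contract` helper are all illustrative assumptions). The consumer publishes the shape it relies on, and the provider checks its own responses against it in CI.

```python
# The consumer publishes the shape it depends on: field names
# mapped to the types it expects.
ORDER_CONTRACT = {"id": int, "status": str, "total": float}


def satisfies_contract(payload: dict, contract: dict) -> bool:
    """True if the payload has every field the consumer needs, with the
    expected type. Extra provider fields are allowed: the contract only
    pins down what the consumer actually reads."""
    return all(
        field in payload and isinstance(payload[field], expected)
        for field, expected in contract.items()
    )


# Run in the provider's CI against its (here stubbed) response:
provider_response = {"id": 42, "status": "shipped", "total": 9.99, "extra": 1}
assert satisfies_contract(provider_response, ORDER_CONTRACT)

# A response missing fields the consumer needs fails the contract:
assert not satisfies_contract({"id": 42}, ORDER_CONTRACT)
```

Real contract-testing tools add recording, versioning and broker infrastructure on top, but the principle is the same: the provider cannot break a consumer without a test going red.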

So Is Unit Testing Overrated or a Waste of Time?

Finally, while researching this article I came across this post, which echoes a lot of what has been said above and emphasises that testing is not a “one size fits all” solution for every circumstance. Sometimes we do need to focus on unit testing; sometimes we get more value and more flexibility focusing higher up the pyramid.

Above all it is vital that developers understand that Unit Testing is not development testing. Unit tests are not a technical check that things are working, but a fundamental sign-off that business logic at the lowest possible level is functionally correct.

I’ve slightly adapted the conclusions from the above article into ones that I feel more aligned with as regards taking a testing approach to a particular system:

  1. Think critically and challenge best practices in testing as a team.
  2. Don’t fully rely on the test pyramid but stay pragmatic.
  3. Aim for the highest level of integration while maintaining coverage, reasonable speed and cost.
  4. Avoid sacrificing software design for testability but also let your testing guide your design.
  5. Consider mocking only as a last resort (I’m a big fan of this last part).

What would you add, remove or edit on that list? How have you found the balance between complexity, design and quality when it comes to creating microservices? What’s the right balance of Unit, Integration and E2E testing? Do we get distracted by the APIs? And could that be itself the right approach?

Above all else, is it simply important to have a testing strategy and to stick with it consistently, rather than applying testing philosophy ad hoc to parts of our code?

I’d love to hear your thoughts.