
When Speed and Quality are the Same Thing

I have recently been working on a project providing support to increase the speed and quality of delivery in one specific development team. The architecture was .NET microservices. There was an established, PR-based change process with a single long pipeline to production.

Initially, I was there to provide advice and assist with ‘writing stories’ and the change process in one particular team. As it transpired, I ended up helping out with the code, the technical and domain architecture, and the delivery process across the whole program.

Assessing the Playing Field

I wrote out a list of areas I could see for potential improvement. Here’s an edited list of the findings:

1. Too many user stories were written as specifications, often by those at a BA/PO level.

2. Trust between BAs/POs and Devs was low and interactions were poor. There was a tendency to micromanage, which eroded trust further.

3. Releases were backing up and the release cadence was slow. Release branches needed heavy manual testing, and in between, PRs were held up for long periods.

4. Deployment involved a lot of manual work, data migrations and a complex environment, making it highly unpredictable. The team spent a lot of time deploying and manually testing.

5. Environments were often not aligned, and there was only ever “one good environment”.

6. Handover of developed work (Definition of Done) was often informal, with quality checks and testing only done after deployment to the Test environment and beyond.

7. For production problems (or other urgent issues), it was unclear who was in the lead, despite a documented escalation process.

8. The combined release process was slow and infrequent; deployments were difficult to arrange and error prone.

9. The distributed architecture was hard to troubleshoot and debug.

10. Component teams: features would often touch multiple components and therefore multiple teams, and components were often not isolated from each other’s models.

11. Poor unit testing around the models and a lack of understanding of OO principles (a minimal test sketch follows this list).

12. The front-end and back-end were deployed independently but closely coupled, i.e. if the releases weren’t closely aligned, things broke.

13. The back-end architecture was too closely coupled between services, indicating that the domain models were not well understood or correctly distributed.

14. Infrastructure seemed fixed, but what about exploring other options such as serverless, or teams being able to decide for themselves?
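
To make point 11 concrete, here is a minimal sketch of the kind of model-level unit test that was largely missing. The Invoice type and its rules are hypothetical, not taken from the actual codebase, and the tests assume xUnit:

```csharp
using System;
using Xunit;

// Hypothetical domain model, used purely for illustration.
public class Invoice
{
    public decimal Amount { get; }
    public bool IsPaid { get; private set; }

    public Invoice(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount), "Amount must be positive.");
        Amount = amount;
    }

    public void MarkPaid()
    {
        if (IsPaid)
            throw new InvalidOperationException("Invoice is already paid.");
        IsPaid = true;
    }
}

// Small, fast tests that pin the model's behaviour down
// long before anything reaches a Test environment.
public class InvoiceTests
{
    [Fact]
    public void Rejects_non_positive_amounts() =>
        Assert.Throws<ArgumentOutOfRangeException>(() => new Invoice(0m));

    [Fact]
    public void Cannot_be_paid_twice()
    {
        var invoice = new Invoice(100m);
        invoice.MarkPaid();
        Assert.Throws<InvalidOperationException>(() => invoice.MarkPaid());
    }
}
```

Tests like these run in milliseconds in the PR pipeline, which is exactly what reduces the reliance on manual testing later on.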

Getting Some Suggestions..

A way forward was determined along the following lines:

  • Updating the DoD (Definition of Done) and DoR (Definition of Ready) to enforce better requirements and to ensure that the dev team designs and delivers quality code that has been (automatically) tested by the team itself.
  • Ensuring and showing high unit test coverage and good health for all components. Initially, coverage for many of the services was below 50%.
  • Automated regression testing and limited automated integration testing (designing these tests to ensure they cover business use cases).
  • Common dashboarding for showing DORA-like metrics in addition to test coverage etc. (a sketch of the lead-time and frequency calculation follows this list).
  • Testing only works if environments are reliable. Think about making environments more useful and dynamic.
  • Questioning the CI/CD pipelines, given the ambition to release more often. What alternatives can we try?
  • How can we make our services more independent through architectural and refactoring opportunities?
  • Creating healthy competition between component teams for the fastest/cleanest service, etc.
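
As a sketch of what the DORA-like dashboard could be fed with, here is a minimal example of calculating lead time for changes and deployment frequency from deployment records. The Deployment record here is an assumption for illustration; in practice the data would come from the CI/CD system or a release log:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record of a single deployment: when the change was committed
// and when it reached production.
public record Deployment(DateTime CommittedAt, DateTime DeployedAt);

public static class DoraMetrics
{
    // Median lead time for changes: commit to running in production.
    public static TimeSpan MedianLeadTime(IReadOnlyList<Deployment> deployments)
    {
        var leadTimes = deployments
            .Select(d => d.DeployedAt - d.CommittedAt)
            .OrderBy(t => t)
            .ToList();
        return leadTimes[leadTimes.Count / 2];
    }

    // Deployment frequency: deployments per week over the observed period.
    public static double DeploymentsPerWeek(IReadOnlyList<Deployment> deployments)
    {
        var first = deployments.Min(d => d.DeployedAt);
        var last = deployments.Max(d => d.DeployedAt);
        var weeks = Math.Max((last - first).TotalDays / 7.0, 1.0);
        return deployments.Count / weeks;
    }
}
```

Even two numbers like these, plotted over time on a shared dashboard, make the effect of a slow release cadence visible to everyone.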

Leading to Some Actions..

Later on, we can see the benefits of some of these activities. While there is still a long way to go, the team is now:

  • Reporting on code coverage and improving it.
  • Discussing opportunities within the team to refactor and redesign to make testing simpler.
  • Investing in better end-to-end testing automation.
  • Taking the time to show and discuss the technical architecture, and contemplating more advanced techniques such as DDD (bounded contexts) or Residuality Theory (see adjacency matrices), although, practically speaking, just looking for dependencies between services to start with (a minimal sketch follows this list).
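
As a sketch of that “just look for dependencies” starting point, here is a minimal adjacency matrix built from a hand-maintained list of service-to-service calls. The service names and dependencies are hypothetical; the real map would be derived from the actual codebase or deployment configuration:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical services and call dependencies, for illustration only.
var services = new[] { "Orders", "Billing", "Catalogue", "Notifications" };
var calls = new List<(string Caller, string Callee)>
{
    ("Orders", "Billing"),
    ("Orders", "Catalogue"),
    ("Billing", "Notifications"),
};

// Build the adjacency matrix: matrix[i, j] = 1 if service i calls service j.
var index = new Dictionary<string, int>();
for (var i = 0; i < services.Length; i++) index[services[i]] = i;

var matrix = new int[services.Length, services.Length];
foreach (var (caller, callee) in calls)
    matrix[index[caller], index[callee]] = 1;

// Print it; rows or columns full of 1s point at the most tightly coupled services.
Console.Write(new string(' ', 14));
foreach (var s in services) Console.Write($"{s,-14}");
Console.WriteLine();
for (var i = 0; i < services.Length; i++)
{
    Console.Write($"{services[i],-14}");
    for (var j = 0; j < services.Length; j++) Console.Write($"{matrix[i, j],-14}");
    Console.WriteLine();
}
```

Even a small matrix like this makes it obvious which services change together and where the refactoring conversations should start.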

Above all the team is having healthy conversations about what it means to improve its codebase and design.

This underlines the importance of talking about quality, refactoring and being honest within the team about what we’re trying to achieve. Investment in the small things every day pays dividends. While it’s easy to fall back and do what we’ve always done, sometimes it takes a new perspective to move forward and invest.

Conclusions

This case study mainly explores the role of psychological safety in the workplace. When developers don’t feel valued enough to write stories and continuously have their work questioned, they won’t feel safe enough to give their best or care about the result. Why should they?

Technical challenges become compounded by organisational challenges. In this situation, it can be easy to think that you need to make sweeping changes to have an effect. However, with attention to detail around what daily tasks are being performed, it’s possible to change how a team thinks about itself. Create an environment where people feel empowered to voice their thoughts and act on them to improve their daily work. Improving quality and refactoring improves the everyday experience of everyone working with the software. Once you have an acceptable base level of quality, your need for manual testing decreases and your release cadence increases. Small steps, consistently applied every day, bring significant changes.

Get in touch

Are you struggling with quality and delivery issues? Have you tried different team configurations but don’t see any improvement in speed and agility?

I’m here to help. You can get in touch here.




