Speaker: Remigiusz Dudek
Our software becomes more complex with every iteration we spend on it. There are new systems we need to integrate with, or, if you follow a microservice approach, there are dozens of new services in your environment.
How on earth are you supposed to test such a system from an end-to-end perspective? So far it seemed we had only two options:
- Deploy the system in a prod-like configuration (with all the collaborators) and perform heavy manual testing
- Deploy the system in a prod-like configuration (as above) and automate all the end-to-end tests that are run as regression
The first option is really tempting, especially at the beginning of a project, when the investment is low and you can afford to test new features and run regression manually. The biggest issue with this approach is that over time the suite grows, and since manual execution hinders our agility, we start investing more in our regression testing team to retain our initial velocity. The last-resort action is to push regression outside of the iteration, but how agile can you really be with your legs tied by a week(s)-long regression? The purpose of all these actions is to delude ourselves that we're still Agile… but in fact we're walking on thin ice.
Option number two seems like a good direction. It requires more investment at the start, but it pays back over time… doesn't it? In theory it looks really nice, but we need to remember that there are a few things we have to take care of:
- automate the end-to-end tests themselves
- automate our deployment (or, to be more precise, have a way to automatically build the entire environment with all the collaborators)
- pin and manage the exact versions of all the services we collaborate with, to avoid a situation where our service is tested against one version of a collaborator but then released to production, where it will be working (or not) with a different version
Let's say we've met all the requirements listed above. Now let's take a small step back and look at a somewhat bigger picture. My experience shows that a regular service has a handful of collaborators (2-10), but these collaborators have collaborators of their own, which in turn have their own, and so on. Some (potentially all) of these services are developed independently, hence we have two approaches:
2a. Build multiple full-fledged environments, one per service. But that is an additional cost which is usually not taken into consideration at the start.
2b. Build one common environment where all services are deployed. But how do we ensure isolation between tests, and how do we handle the situation where I want to deploy a new version of my not-yet-released service and test it end-to-end? Should I reserve the whole environment? We can see how this approach can quickly become a bottleneck.
Mind one silent assumption I've made so far: that your service integrates only with other services developed by the company you work for. What if you integrate with 3rd-party software?
Consumer-driven contracts give us a third option:
- Establish contracts with all the collaborators and test your service in isolation, using the contracts whenever it comes to contacting an external system, regardless of whether it is developed within the company or outside of it (I must admit that the former case requires some more orchestration on the managerial level, but it is doable)
The idea is really simple… after all, you don't need a real collaborator to test your own service. The only thing you need to know is how the collaborator will respond to your request.
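To make that idea concrete, here is a minimal sketch (not a full contract-testing tool such as Pact or Spring Cloud Contract) of what "you only need to know how the collaborator will respond" means in practice: the contract is recorded as request-to-response data, and a stub stands in for the real collaborator during the consumer's tests. The `/greeting` endpoint and its payload are hypothetical, chosen purely for illustration.

```python
# A minimal consumer-driven-contract sketch using only the standard library.
# The contract records, per request path, the response the consumer relies on;
# the stub plays the collaborator's role and answers exactly per the contract.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The contract: the /greeting endpoint and its body are hypothetical examples.
CONTRACT = {
    "/greeting": {"status": 200, "body": {"message": "Hello, consumer!"}},
}

class StubCollaborator(BaseHTTPRequestHandler):
    """Stands in for the real collaborator; replies exactly per the contract."""

    def do_GET(self):
        interaction = CONTRACT.get(self.path)
        if interaction is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(interaction["body"]).encode()
        self.send_response(interaction["status"])
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet

def run_stub():
    """Start the stub collaborator on a free local port; return the server."""
    server = HTTPServer(("127.0.0.1", 0), StubCollaborator)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = run_stub()
    port = server.server_address[1]
    # The consumer talks to the stub exactly as it would to the real service.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/greeting") as resp:
        reply = json.loads(resp.read())
    server.shutdown()
    print(reply["message"])
```

The same `CONTRACT` data can later be replayed against the real collaborator (provider-side verification) to prove that the stub still tells the truth; that verification step is what real contract-testing tools automate.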
During our workshop we will create two REST services integrated with each other and show how to develop them separately, in a Behavior-Driven manner, using the Consumer-Driven Contract approach.
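As a taste of the consumer side of such an exercise, a contract test written in a given/when/then style might look like the sketch below. The `/greeting` endpoint, the `message` field, and the `fetch_greeting` client are hypothetical stand-ins; the workshop's actual services and tooling may differ.

```python
# Sketch of a consumer-side contract test in a given/when/then (BDD) style.
# The endpoint, field name, and client function are illustrative only.
import json
import unittest
from unittest import mock
import urllib.request

# Given: the response the contract says the collaborator will return.
CONTRACTED_RESPONSE = {"message": "Hello, consumer!"}

def fetch_greeting(base_url: str) -> str:
    """The consumer's client code under test."""
    with urllib.request.urlopen(f"{base_url}/greeting") as resp:
        return json.loads(resp.read())["message"]

class GreetingContractTest(unittest.TestCase):
    def test_consumer_understands_contracted_response(self):
        # Given a collaborator that answers exactly as the contract promises
        canned = mock.MagicMock()
        canned.read.return_value = json.dumps(CONTRACTED_RESPONSE).encode()
        canned.__enter__.return_value = canned
        with mock.patch("urllib.request.urlopen", return_value=canned):
            # When the consumer calls the collaborator
            message = fetch_greeting("http://collaborator")
            # Then it correctly interprets the contracted reply
            self.assertEqual(message, "Hello, consumer!")

if __name__ == "__main__":
    unittest.main()
```

Because the test depends only on the contracted response, the consumer team can run it in isolation, while the provider team independently verifies that its service still honours `CONTRACTED_RESPONSE`.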