As a software industry veteran I’ve s̶e̶e̶n̶ / e̶x̶p̶e̶r̶i̶e̶n̶c̶e̶d̶ / i̶n̶f̶l̶i̶c̶t̶e̶d̶ / been victimized by any number of inventive approaches to integrating and testing distributed systems, so the title of this post is a bit tongue-in-cheek. I’ve been sharing my experience building a Python implementation of the KillrVideo microservice tier. In the previous posts, I shared how I got started on this project, how I built gRPC service stubs, and how I advertised the endpoints in etcd. This time, I’d like to explain why I built this service scaffolding first, before implementing any business logic.
We already had an integration test (Yay!)
First of all, it’s relevant to note the unique nature of this particular implementation effort. KillrVideo is an existing application with a web application service tier, and supporting infrastructure. Implementations of the service layer already exist in Java, C#, and Node.js — the Python implementation is just the latest.
There already exists an integration test suite for KillrVideo. The killrvideo-integration-tests repository focuses on the microservices tier, measuring compliance with the functional specifications documented at killrvideo.github.io (although the tests were created before the specification).
The integration tests are implemented in Java and use the Cucumber framework.
Running the Integration Tests
Running the integration tests is pretty straightforward. You’ll want to clone the killrvideo-integration-tests repository so that you have a local copy. The complete instructions for running the tests are found in the README.markdown file, but it basically amounts to running the tests via a Maven target (mvn clean test).
There is an existing setup for running the integration tests in Docker, but as I was working on this effort I realized it is not currently working as expected. So that remains a TODO item: running the integration tests in Docker.
One thing I wish I’d discovered earlier
Since the whole test suite takes about a minute to run, I quickly developed the habit of running the full set. Of course, over many iterations this adds up. I eventually clued in to the fact that I could run individual tests by name; for example, the following command runs only the tests for the User Management Service:
mvn test -Dcucumber.options="--tags @user_scenarios"
This works because the tests for that service are in a feature file with the referenced tag. Testing selected features began to save me some wait time.
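To illustrate how the tag-based filtering works, here is a sketch of what a tagged Cucumber feature file looks like. This is a hypothetical example for illustration only, not the actual contents of the KillrVideo feature file; the scenario wording and steps are invented:

```gherkin
@user_scenarios
Feature: User Management
  Verifies that users can be created and retrieved.

  Scenario: Creating a new user
    Given no user exists with email "test@killrvideo.com"
    When I create a user with that email
    Then I can retrieve the user profile by email
```

Because the feature is annotated with @user_scenarios, passing that tag via -Dcucumber.options selects only the scenarios in this file.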
Making peace with Test-Driven Development
Now that I’ve introduced the integration tests, maybe I should take a step back and share a bit of personal history.
I was first introduced to the concepts of Test Driven Development and Behavior Driven Development several years ago on a project where the use of these methodologies was mandated by the customer. I remember sitting through a workshop in which the instructor presented an example beginning with zero code and proceeded to write a single test, then introduced only enough code to pass the test, then iterated. I’m sure many of you can relate.
I’m still not sure if this was done to prove a point, or if it actually represented the way this individual coded in practice. In any case the whole experience struck me as entirely wasteful, since the coder, by providing literally zero design up front, was knowingly signing up for a state of continual refactoring. I understood this as a reaction against “Big Design Up Front” and over-engineering, but it seemed to go too far in the opposite direction. My other criticism was that this approach yielded several multiples more test code than application code, which seemed like overkill. As I wasn’t in an active development role at the time, the debate soon faded from my radar.
When I found myself looking at an existing test suite and starting a new implementation, I recalled this years-old memory with a chuckle. I thought this would be the perfect opportunity to create my own approximation of TDD by running the tests, letting them fail, and implementing just enough service functionality to make one test after another pass.
Of course, the significant departure of my modified approach from classic TDD is that I’m doing this at the level of integration testing rather than unit testing. Still, there is something I genuinely like about using the principles of Test-Driven Development to drive development of a service with a well-defined contract.
Tests are failing, now what?
Once I had the tests running, none of them were passing, as expected. The next step was to begin implementing the service functionality. Here I was able to take advantage of the structure of the integration tests and leverage the thinking of others before me. The integration tests build in a logical order that roughly reflects a user’s experience of the application:
- The first test script tests the user management service in order to verify the ability to add users to the system — after all, there’s no point in the system without users.
- The second test script tests the ability to add videos to KillrVideo and then retrieve them.
- Subsequent tests cover the ability of users to rate and comment on videos, and gather statistics on this activity.
- The final tests cover the ability to search for videos and recommend videos.
So the order of my service implementations fell right out of this:
- User Management Service
- Video Catalog Service
- Comments Service
- Rating Service
- Statistics Service
- Search Service
- Suggested Videos Service
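The scaffolding-first approach means each service begins as a set of stub methods that fail every test, and business logic is added one call at a time until the corresponding tests pass. Here is a rough, stdlib-only Python sketch of that loop. The class and method names are hypothetical; the real KillrVideo services are gRPC servicers generated from the protobuf definitions, where unimplemented calls return the UNIMPLEMENTED status rather than raising NotImplementedError:

```python
# Sketch of the "stub first, implement just enough to pass" pattern.
# Names are hypothetical; real services are gRPC servicers.

class UserManagementService:
    """Scaffolding: every operation starts out unimplemented,
    so every integration test fails at first."""

    def create_user(self, user_id, email):
        raise NotImplementedError("CreateUser not yet implemented")

    def get_user_profile(self, user_id):
        raise NotImplementedError("GetUserProfile not yet implemented")


class InMemoryUserManagementService(UserManagementService):
    """First iteration: just enough logic to make the first
    user-management test pass (in-memory storage stands in for
    the eventual database-backed implementation)."""

    def __init__(self):
        self._users = {}

    def create_user(self, user_id, email):
        self._users[user_id] = {"user_id": user_id, "email": email}

    def get_user_profile(self, user_id):
        return self._users[user_id]
```

Each subsequent failing test then drives the replacement of another stub with a working implementation, in the service order listed above.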
Something else I should have done
One testing strategy I neglected until later in the development process was testing my services against the existing web application as they were being built. Because the logical order of service development matches the user’s experience of the application pretty faithfully, I think this would have been an effective complement to running the integration tests. In fact, there was an error not covered by the integration tests that I didn’t discover until running the web application; this bug led to missing preview images. This is why effective integration testing typically involves both automated and at least some manual testing.
And now, down to business (logic)
Now that I’ve described all of the coding and test setup for KillrVideo Python, in the next post we can finally talk about the “meat” — implementing the actual business logic of the KillrVideo Python services, including data persistence using DataStax Enterprise and the DataStax Python Driver.
This article is cross-posted from Jeff's personal blog on Medium.