Hootsuite is no stranger to tearing down monoliths. In fact, over the past year we’ve built fifteen microservices to go along with the deconstruction of our monolith. The extraction of each microservice came with its own complications, but they have all generally followed one development blueprint.
A monolith is what legacy code becomes after years of ongoing contributions from many people fossilize it. The code grows incredibly hard to refactor because it is always in use and always depended on. In this case, microservices are a tool for tearing the monolith apart to address the problem of unreliability; a microservice in general, however, is not limited in its purpose or functionality.
Scoping & Prioritization
The first major decision when tearing down your monolith is which part to break off first. Do you want to slice it horizontally or vertically, julienne it or dice it? At Hootsuite, before writing any code, all business objects and use cases were outlined to determine what might make a good microservice. Models that frequently interact with each other were carefully grouped together so that we would not end up with a tangled web of distributed services.
It is clear that business objects like a Team and a Member of a Team should be migrated together. Further, if each Team must belong to a single Organization, then it would make sense to also pull out the Organization object rather than leaving it inside the monolith.
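To make that grouping concrete, here is a minimal sketch (in Python for brevity, though Hootsuite's services are written in Scala; all model and field names are hypothetical) of why Organization has to come along with Team: migrating Team alone would leave a foreign key pointing back into the monolith.

```python
from dataclasses import dataclass

# Hypothetical models illustrating the grouping above. Because every
# Team holds a reference to an Organization, pulling out Team without
# Organization would leave a cross-service foreign key behind.

@dataclass
class Organization:
    id: int
    name: str

@dataclass
class Team:
    id: int
    org_id: int   # references Organization.id
    name: str

@dataclass
class Member:
    id: int
    team_id: int  # references Team.id
    name: str
```

If these three models stayed together in one service, both references remain local; split them and every read of a Team would need a remote lookup just to resolve its Organization.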
When outlining the possible microservices, you need to prioritize and decide what to work on first. A microservice should always result in reduced legacy code while relieving pain and enabling other projects. There should be a substantial positive net gain throughout the entire process, not just at the end. Hootsuite prioritized tasks based on the needs of upcoming features and services so that there were as few blockers as possible when they were needed. Also, complex logic with a wide scope is best discovered and dealt with earlier rather than later. A few indicators of complicated code include: association with prominent, consumer-facing features of the product, high cyclomatic complexity, and test coverage that is either lacking or so entangled that nearly every existing test looks related to the code in scope.
When planning, keep in mind that a key benefit of microservices is that they are self-contained, and specifications (e.g., language and frameworks) rarely matter in the short term as long as the services can communicate with each other. This gives you the opportunity to experiment without accumulating a large amount of technical debt. At Hootsuite, most microservices are written in Scala, with variations in internal architecture from service to service in search of the stack that is most fitting and useful. The possibilities stretch out against the skyline, and each new microservice is one more exciting journey!
Hootsuite + Scala + Play
Once the priorities have been set and it is clear which part of the monolith should be torn down, it is time to decide how to do it. Chipping away one small chunk and integrating it end to end at the beginning can save a lot of potential backtracking. For simpler services, starting from the outermost layers and implementing a simple piece of functionality across the breadth of the code base provides good exposure to the code while making quick progress. For more complex services and relationships, it is often more efficient to do pinpoint integrations. Both methods allow for early discovery of blockers and readjustment of scope without hindering velocity.
If the objective is to migrate a single database table, one small chunk might be a simple read-by-primary-key endpoint. Once that endpoint exists, every such call in the monolith is rerouted to use the service. However, if the objective is to migrate a set of database tables with relationships to each other, then a small chunk could be rewriting one controller to use the microservice completely. After this initial implementation, it becomes easier to start chipping away at the next layer on the deconstruction list.
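As an illustrative sketch of the rerouting step (Python for brevity; Hootsuite's services are in Scala, and every name, URL, and endpoint here is hypothetical), the monolith's direct database read is replaced with a call through a thin client for the new service:

```python
# Hypothetical sketch: rerouting a read-by-primary-key call from the
# monolith's data layer to the new microservice.

class TeamServiceClient:
    """Minimal client for a made-up read-by-primary-key endpoint."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_team(self, team_id: int) -> dict:
        # In production this would be a real HTTP call, e.g.:
        #   GET {base_url}/teams/{team_id}
        # Stubbed with a canned response so the sketch is self-contained.
        return {"id": team_id, "name": f"team-{team_id}"}


def find_team(team_id: int, client: TeamServiceClient) -> dict:
    # Previously: SELECT * FROM teams WHERE id = ?  (monolith database)
    # Now: delegate the read to the microservice.
    return client.get_team(team_id)


client = TeamServiceClient("http://team-service.internal")
print(find_team(42, client)["id"])  # -> 42
```

Keeping the client behind one function like `find_team` means the rest of the monolith never learns whether the data came from its own database or the service, which is what makes rerouting call sites one at a time practical.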
Regardless of the complexity of the microservice, feature flagging will be very useful. Hootsuite’s Dark Launch mechanism eases integration while allowing quick restoration of previous behaviour. Even with Dark Launch, real learning doesn’t happen until code is in production, especially when the viability of the new code is hard to determine. A good way to gain confidence is to compare outputs between the monolith and the microservice, log any differences, and return the monolith’s output whenever they diverge. This is particularly helpful when test coverage is uncertain, since it exercises all live use cases.
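The compare-and-log pattern can be sketched as follows. This is a hedged illustration in Python, not Hootsuite's actual Dark Launch code; the function names and flag are made up. The key property is that live behaviour never changes: the monolith's answer always wins while discrepancies are logged for investigation.

```python
import logging

logger = logging.getLogger("dark_launch")


def get_team(team_id, monolith_read, service_read, flag_enabled):
    """Call both implementations, log any difference, and keep
    returning the monolith's answer until the service is trusted."""
    old = monolith_read(team_id)
    if not flag_enabled:
        # Flag off: behave exactly as before, never touch the service.
        return old
    try:
        new = service_read(team_id)
    except Exception:
        # The service failing must not affect live traffic.
        logger.exception("service call failed for team %s", team_id)
        return old
    if new != old:
        # Surface the discrepancy, but return the known-good result.
        logger.warning("mismatch for team %s: old=%r new=%r", team_id, old, new)
        return old
    return new
```

Once the mismatch log stays quiet under production traffic, the flag can be flipped to return the service's output, and eventually the monolith path can be deleted.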
When tearing down the monolith, there will always be traps and complications. A recurring one is scope, which tends to explode. Setting priorities straight and having hard constraints are necessary for when things become much more complex than they initially seemed. It is important to have clear instructions for tasks, especially the first of their kind. That way, when the work becomes copy-paste code (not code that is literally copied and pasted, but code that is closely referenced for future implementations), it will not accumulate technical debt that could have been easily avoided. Proper documentation of any and all changes is also key to keeping the project cohesive and avoiding backtracking.
When migrating code, it’s important not to debug it at the same time. Non-critical fixes should wait until the behaviour of the service is a carbon copy of the monolith. That alone can be difficult, especially if another language is being used, and refactoring while fixing bugs can hide edge-case behaviour. When those behaviour bugs resurface during integration, it will be much harder to figure out what the service should be doing if there is no baseline to compare against.
Exercise scientific control to reduce headaches when debugging
There is no silver bullet when it comes to tearing down a monolith. Every shiny new microservice is built by deciphering questionable and unnecessarily complex legacy code, and it eventually becomes a champion pain reliever and enabler for the future of your product and organization. It all begins with deciding the scope and how the monolith will be split, along with prioritizing what will be most beneficial every step of the way. Experimentation is not only allowed but encouraged, as every service will be a little different from the last. When it comes to the actual teardown, it is important to choose a strategy suited to the complexity of the microservice. Even after all of this, there will still be traps, and lots of them. Learn from others, be open to change, and show true grit every step of the way. The process is what you make of it, but I believe there is no such thing as a useless experience.
About the Author
Tammy is a Co-op Software Developer on the Internal Platform team at Hootsuite. She works on building high performance and scalable microservices in Scala. Her secret double life is one of photography, gaming, and drawing. Follow her on Instagram.