How to Move from a Monolith to a Microservices Architecture

I hear you…

Building and changing anything in a monolithic system is not easy. As your company grows, you will feel this pain, and it will only get worse.

Now that you have made the mistake of building a monolith, let's start fixing it.

Note: This is a time-consuming process, and the benefits will only come with time.

According to Martin Fowler, the best way is the Strangler Fig pattern. In this pattern, you gradually strangle your monolith by building microservices around it and moving away from it, service by service.

Now, to decide which service to go for first, you need to prioritize which modules to convert into services.

Converting a module into a service is typically time consuming. You want to rank modules by the benefit you will receive. It is usually beneficial to extract modules that change frequently. Once you have converted a module into a service, you can develop and deploy it independently of the monolith, which will accelerate development.

It is also beneficial to extract modules that have resource requirements significantly different from those of the rest of the monolith. It is useful, for example, to turn a module that has an in-memory database into a service, which can then be deployed on hosts, whether bare metal servers, VMs, or cloud instances, with large amounts of memory.

Similarly, it can be worthwhile to extract modules that implement computationally expensive algorithms, since the service can then be deployed on hosts with lots of CPUs. By turning modules with particular resource requirements into services, you can make your application much easier and less expensive to scale. When figuring out which modules to extract, it is useful to look for existing coarse-grained boundaries (a.k.a. seams). They make it easier and cheaper to turn modules into services. An example of such a boundary is a module that only communicates with the rest of the application via asynchronous messages. It can be relatively cheap and easy to turn that module into a microservice.

Let’s begin the refactoring process

First: Start building all new services and functionality away from the monolith, as separate deployments. You will need a request router on top to handle incoming HTTP requests, similar to an API gateway.

The router sends requests corresponding to new functionality to the new service and routes legacy requests to the monolith.
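The routing rule above can be sketched in a few lines. This is a minimal illustration, not a production router; the path prefixes and upstream names are hypothetical examples:

```python
# Sketch of a strangler-fig request router: paths that have already been
# migrated go to the new service; everything else falls through to the
# legacy monolith. Prefixes and upstream names are assumed examples.

MIGRATED_PREFIXES = ["/billing", "/notifications"]

def route(path: str) -> str:
    """Return the upstream that should handle this request path."""
    for prefix in MIGRATED_PREFIXES:
        if path.startswith(prefix):
            return "new-service"
    return "monolith"
```

In practice this logic usually lives in an existing reverse proxy or API gateway (e.g., as path-based routing rules) rather than in hand-written code.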

The other component is the glue code (anti-corruption layer), which integrates the service with the monolith. A service rarely exists in isolation and often needs to access data owned by the monolith. The glue code, which resides in either the monolith, the service, or both, is responsible for the data integration. The service uses the glue code to read and write data owned by the monolith, and vice versa. The anti-corruption layer ensures that the monolith's legacy data model does not leak into the new service, and that each side's model stays internally consistent.
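The anti-corruption layer boils down to translation code at the boundary. Here is a minimal sketch; the record layouts and field names are invented for illustration:

```python
# Sketch of glue code / anti-corruption layer: the new service never sees
# the monolith's legacy data model directly. A translator converts legacy
# records into the service's own model. All field names are hypothetical.

def to_service_customer(legacy_row: dict) -> dict:
    """Translate a legacy monolith record into the service's model."""
    return {
        "customer_id": str(legacy_row["CUST_NO"]),
        "full_name": f"{legacy_row['FNAME']} {legacy_row['LNAME']}".strip(),
        "active": legacy_row["STATUS"] == "A",
    }
```

If the legacy schema later changes, only this translation layer needs updating; the service's own model and code stay untouched.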

Second: A strategy that shrinks the monolithic application is to split the presentation layer from the business logic and data access layers.

After the split, the presentation-logic application makes remote calls to the business-logic application.

It enables you to develop, deploy, and scale the two applications independently of one another.
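After the split, the presentation layer talks to the business layer over the network instead of in-process. A minimal sketch, with a hypothetical endpoint and the HTTP client injected so the example stays self-contained:

```python
# Sketch of the presentation/business split: the presentation app fetches
# data with a remote call rather than an in-process function call. The
# endpoint is an assumed example; http_get stands in for any HTTP client.

BUSINESS_API = "http://business-service/api"

def render_order_page(order_id: int, http_get) -> str:
    """Presentation logic: fetch order data remotely, then render it."""
    order = http_get(f"{BUSINESS_API}/orders/{order_id}")
    return f"Order {order['id']}: {order['status']}"
```

Because the only coupling is this remote call, the two applications can now be deployed and scaled on different schedules.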

You can easily combine the first strategy with the second, so that separate teams own and scale each layer independently, building microservices for each.

Third: Domain level separation; turn existing modules within the monolith into standalone microservices. Each time you extract a module and turn it into a service, the monolith shrinks.

As the monolith shrinks, you must write code to enable the monolith and the service to communicate through an API that uses an inter-process communication (IPC) mechanism.
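Whatever transport you choose for that IPC (HTTP, gRPC, a message queue), the monolith and the service need an agreed message contract. A tiny JSON-based sketch, with illustrative operation names:

```python
import json

# Sketch of a minimal IPC contract between the monolith and an extracted
# service: requests are JSON messages carried over whatever transport you
# pick. The operation name and payload fields are illustrative only.

def encode_request(operation: str, payload: dict) -> bytes:
    """Serialize a request message for the wire."""
    return json.dumps({"op": operation, "payload": payload}).encode()

def decode_request(raw: bytes) -> tuple:
    """Parse a request message back into (operation, payload)."""
    msg = json.loads(raw.decode())
    return msg["op"], msg["payload"]
```

Keeping the contract explicit and versioned matters here: once the service is deployed independently, both sides must tolerate the other being on an older version.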

Now you have yet another service that can be developed, deployed, and scaled independently of the monolith and any other services.

Another great strategy is to build monoliths that are easy to shrink later. Once the company starts to grow, it will be easier to break the system into microservices and create individual teams to own each of them.

There are a lot of things you have to build around microservices to make sure the system works in sync.

1. You will need a messaging service, such as Kafka, to transfer data between these microservices.
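The messaging pattern is worth seeing in miniature. The following is an in-memory stand-in for a broker like Kafka, purely to illustrate publish/subscribe; a real deployment would use a Kafka client library against an actual broker:

```python
from collections import defaultdict

# In-memory illustration of the broker pattern: services publish events to
# a topic and other services subscribe, so no service calls another
# directly. This is NOT Kafka, just a sketch of the interaction shape.

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callback to receive events on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every subscriber of the topic."""
        for handler in self._subscribers[topic]:
            handler(event)
```

The key property is the same as with a real broker: the publisher does not know who consumes the event, which keeps services decoupled.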

2. Build an API gateway: the API gateway is responsible for tasks such as load balancing, caching, access control, API metering, and monitoring.
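Two of those gateway responsibilities, access control and caching, can be sketched in a few lines. This is a toy illustration with stand-in token checking, not a real gateway:

```python
# Toy sketch of an API gateway doing access control and caching in front
# of a backend. Token validation is a stand-in; real gateways verify
# signed credentials (e.g., JWTs) and use TTL-based caches.

class Gateway:
    def __init__(self, backend, valid_tokens):
        self.backend = backend
        self.valid_tokens = set(valid_tokens)
        self.cache = {}

    def handle(self, token, path):
        if token not in self.valid_tokens:      # access control
            return 401, None
        if path not in self.cache:              # cache miss: hit backend
            self.cache[path] = self.backend(path)
        return 200, self.cache[path]
```

In practice you would use an off-the-shelf gateway (e.g., a managed API gateway or a reverse proxy) rather than writing this yourself.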

3. Service registry: a client makes a request to a service via a load balancer. The load balancer queries the service registry (for example ZooKeeper, which Kafka itself has used for coordination) and routes each request to an available service instance.
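The registry/load-balancer interaction can be sketched as follows. This is a minimal in-memory illustration with round-robin selection; real registries also handle health checks, leases, and deregistration:

```python
# Sketch of service discovery: instances register their addresses, and a
# load balancer queries the registry and round-robins across them.
# Addresses and service names are hypothetical.

class ServiceRegistry:
    def __init__(self):
        self._instances = {}

    def register(self, service, address):
        self._instances.setdefault(service, []).append(address)

    def lookup(self, service):
        return self._instances.get(service, [])

class LoadBalancer:
    def __init__(self, registry):
        self.registry = registry
        self._counters = {}

    def route(self, service):
        """Pick the next instance of a service, round-robin."""
        instances = self.registry.lookup(service)
        i = self._counters.get(service, 0)
        self._counters[service] = i + 1
        return instances[i % len(instances)]
```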

4. Go for serverless deployment of these microservices, or a service instance per container.

a. Serverless: To deploy a microservice, you package it as a ZIP file and upload it to AWS Lambda. You also supply metadata which, among other things, specifies the name of the function that is invoked to handle a request (a.k.a. an event). AWS Lambda automatically runs enough instances of your microservice to handle requests.
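The function named in that metadata has a simple, fixed shape in Python: Lambda calls it with the event and a context object. A minimal example (the event fields shown are hypothetical):

```python
# Shape of an AWS Lambda handler in Python: Lambda invokes the named
# function with the event payload and a context object. The "name" field
# in the event is an assumed example.

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

Because the handler is a plain function, it can be unit-tested locally by calling it with a sample event before uploading the ZIP.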

b. Service Instance per container: To use this pattern, you package your service as a container image. A container image is a filesystem image consisting of the applications and libraries required to run the service.
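A container image is typically described by a Dockerfile. A hypothetical example for a Python service (file names and base image are assumptions):

```dockerfile
# Hypothetical image for a Python microservice: the application code plus
# the libraries it needs, and the command that starts the service.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```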

You usually run multiple containers on each physical or virtual host. You might use a cluster manager such as Kubernetes or Marathon to manage your containers.

However, unlike VMs, containers are a lightweight technology. Container images are typically very fast to build.

There are a lot of other components, but I tried to cover a few important ones.

