For years now, most of us have heard the word “microservices” and how it makes an application’s components independent, and some of us have even worked with this architecture. But today’s article focuses on the shift from monolithic to microservices: how, in some cases, microservices do not behave as expected, and how to improve their state, scalability, and streams while making the architecture genuinely independent.
Let’s go back to the onset, where the hype around microservices architecture really started, and the reasons it emerged in the first place. In big monolithic applications, making changes required negotiations between different teams and a shared agreement just to take one step ahead. Modifications in monolithic apps made the process slower and more frustrating, which led to the idea: “What if we put these components into an isolated context, where different teams own different development contexts from beginning to end?” That’s where the concept of microservices started to emerge.
But what are monoliths, why are they slow, what are microservices, and how can they help?
Without further ado, let’s get started with the basics.
A monolithic architecture is a unified model for designing a software application; “monolithic” means composed all in one piece. Monolithic software is self-contained, with tightly coupled components or functions and a large codebase that can become burdensome to manage over time.
As the complexity of a monolithic app increases, the codebase becomes challenging to maintain, making it hard for new developers to change or modify the code as technical or business requirements evolve. And as those requirements keep growing more complex, implementing changes without compromising code quality or hurting application performance becomes nearly impossible.
Moreover, to update a feature in a monolithic application, developers have to compile the complete codebase and redeploy the entire application rather than just the changed part, making frequent deployments difficult, reducing agility, and increasing time to market.
Sometimes the resource requirements of different components in a monolithic application conflict, making it hard to provision resources and scale the application. Another problem that had us thinking about shifting to microservices is reliability: a bug or error in any part of the application can bring the whole application down instantly, leaving it unavailable to users.
As discussed before, microservices evolved to overcome the drawbacks of monolithic applications.
Software development using microservices is a modular approach to building large applications from small components (services). A microservices application is distributed across multiple isolated services, where each service runs as a separate process. Developers can change each service independently without worrying about unintentionally breaking other parts of the application.
Typically, microservices speed up the development, deployment, and maintenance of each part of an application independently. Microservices are often built using languages and frameworks like Java and Spring Boot, and they communicate with each other through APIs. Microservices applications provide fault isolation and tolerance, i.e., a bug or error in one service doesn’t take down the entire application. After debugging the component, it is deployed independently to its respective service instead of redeploying the complete application.
Microservices architecture offers a cleaner, independent, evolvable design that makes the application easily adaptable and scalable.
As we know, all the services need to work together and communicate with each other. What we did was start using REST APIs with JSON as the data-interchange format. It sounded straightforward, because who doesn’t know how to work with REST and send JSON? Additionally, JSON is supported by almost everything, and where it isn’t, one might say, “I’ll write it down within a weekend.” (Surely you will, but it can take anywhere from months to years.)
And this is the idea that developers around the world adopted. As we discovered after a while, this pattern in a lot of ways makes us fall back into the tight coupling we were trying to escape. All this has earned the resulting architecture a humorous nickname, the “distributed monolith”: you get the problems of microservices but somehow none of the benefits, debugging becomes even more complex, and you still have to negotiate every change.
Some of the other major challenges developers have faced while using microservices are:
When you don't take precautions, your business logic can leak all over the place, and clients end up knowing a great deal about your services' internal workings.
With increasing complexity in the architecture, making changes becomes riskier, requiring you to continuously test these services together if you aren’t careful.
The biggest challenge a developer can face, one that slows down development in many companies, is the need to convince every other team to integrate with them whenever they have something to add to the application.
Another annoying challenge is the need to serialize and deserialize JSON on every hop, and to open and close HTTP connections for almost everything, which increases latency.
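To make that last point concrete, here is a minimal sketch (the field names are made up for illustration) of what a request crossing several services pays on each hop: every service boundary costs one serialize and one deserialize.

```python
import json

def hop(payload):
    # Each service boundary costs a serialize + deserialize round trip.
    wire = json.dumps(payload)   # outgoing: object -> JSON text
    return json.loads(wire)      # incoming: JSON text -> object

message = {"order_id": "A-1", "items": list(range(100))}
for _ in range(5):               # a request that crosses five services
    message = hop(message)
print(message["order_id"])       # the data survives, but we paid 5x the cost
```

Multiply that by real payload sizes and HTTP connection setup, and the per-hop overhead becomes a measurable share of end-to-end latency.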
Now that we know the challenges developers might face when using microservices architecture for application development, it is important to know the patterns, tools, and technologies that can help us overcome these challenges.
Below, we have listed the patterns and tools that can make developers more efficient.
Most of you will be familiar with the idea of an API gateway: an API management tool that sits between a client and a collection of backend services. It acts as a reverse proxy that accepts all API calls, aggregates the required services, and returns the expected response.
Rate limiting, authentication, and statistics are all managed by the API gateway. In a microservices application, where a single call can require invoking dozens of services, the API gateway acts as the single entry point for every client. It also provides request routing, protocol translation, security, and microservice composition. An API gateway keeps internal workings hidden from external clients, unifies communication protocols, reduces the complexity clients see, separates external APIs from internal microservice APIs to ease design testing, and improves the efficiency of the application.
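The roles above can be sketched in a few lines. This is a toy, in-process model, not a real gateway product; the service functions and routes are invented for illustration, but it shows the three jobs at once: routing, rate limiting, and composing several internal services behind one client-facing call.

```python
import time

# Hypothetical backend services, each standing in for an independent microservice.
def user_service(user_id):
    return {"id": user_id, "name": "Alice"}

def order_service(user_id):
    return {"user_id": user_id, "orders": ["A-1", "A-2"]}

class ApiGateway:
    """Single entry point: routes requests, rate-limits clients,
    and composes responses from several services."""

    def __init__(self, limit_per_window, window_seconds=60):
        self.routes = {}
        self.limit = limit_per_window
        self.window = window_seconds
        self.calls = {}  # client -> list of recent request timestamps

    def register(self, path, handler):
        self.routes[path] = handler

    def handle(self, client, path, **params):
        now = time.monotonic()
        recent = [t for t in self.calls.get(client, []) if now - t < self.window]
        if len(recent) >= self.limit:                 # rate limiting
            return {"status": 429, "error": "rate limit exceeded"}
        self.calls[client] = recent + [now]
        handler = self.routes.get(path)               # request routing
        if handler is None:
            return {"status": 404, "error": "no such route"}
        return {"status": 200, "body": handler(**params)}

gateway = ApiGateway(limit_per_window=2)
gateway.register("/profile", lambda user_id: {
    # Composition: one client call fans out to two internal services.
    **user_service(user_id), **order_service(user_id)})
print(gateway.handle("client-1", "/profile", user_id=7))
```

The client sees one `/profile` endpoint and never learns that two services produced the answer, which is exactly the hiding of internal workings described above.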
A service mesh is a way to control how different services share data. It is a dedicated platform layer above the infrastructure that makes communication between services managed, secure, and observable. A service mesh addresses the cross-cutting concerns of running services, including networking, monitoring, and security, allowing developers to stop worrying about these challenges and focus on developing and managing the application for the client.
With a service mesh, microservices don’t communicate with each other directly; instead, all communication happens on top of the service mesh (or rather, its sidecar proxies). Some of the built-in support a service mesh offers: service discovery, routing, security, observability, container deployment, access control, resilience, and interservice communication protocols like gRPC, HTTP/1.x, and HTTP/2.
Moreover, a service mesh is language-agnostic, i.e., independent of any programming language. You can write your microservices using any technology and they will still work with the service mesh. Two popular open-source platforms for service mesh implementation are Istio and Linkerd.
A “sidecar proxy” is an application design pattern that separates features like monitoring, security, and inter-service communication from the main application to ease maintenance and tracking. It is attached to a parent application to add or extend its functionality, and is typically deployed alongside the service mesh control plane, containers, or microservices.
A sidecar proxy manages the flow of traffic between microservices, collects telemetry data (logs, metrics, and traces), and enforces policies. In short, a sidecar proxy minimizes code redundancy, reduces code complexity, and keeps the services of a microservices application loosely coupled.
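A rough sketch of the idea, with a hypothetical payment service: the proxy sits in front of the service, recording telemetry and enforcing an access policy, while the business logic stays completely free of that plumbing.

```python
import time

def payment_service(request):
    # The actual business logic: unaware of logging, metrics, or policy.
    return {"charged": request["amount"]}

class SidecarProxy:
    """Intercepts every call to its service: manages traffic,
    collects telemetry, and enforces policy on the service's behalf."""

    def __init__(self, service, allowed_callers):
        self.service = service
        self.allowed = set(allowed_callers)
        self.metrics = {"requests": 0, "denied": 0}
        self.logs = []

    def call(self, caller, request):
        self.metrics["requests"] += 1
        if caller not in self.allowed:           # policy enforcement
            self.metrics["denied"] += 1
            self.logs.append(f"denied call from {caller}")
            return {"error": "forbidden"}
        start = time.monotonic()
        response = self.service(request)         # forward to the real service
        self.logs.append(f"{caller} served in {time.monotonic() - start:.6f}s")
        return response

proxy = SidecarProxy(payment_service, allowed_callers={"checkout"})
print(proxy.call("checkout", {"amount": 42}))    # allowed
print(proxy.call("unknown", {"amount": 1}))      # blocked by policy
```

In a real mesh, the proxy is a separate process (e.g. Envoy in Istio) intercepting network traffic rather than a wrapper object, but the division of labor is the same.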
Kafka Streams is a client library for building microservices where the inputs and outputs are stored in Kafka clusters. Apache Kafka is a distributed event streaming platform that is highly scalable and fault-tolerant. Kafka Streams blends the simplicity of writing and deploying Java applications on the client side with the advantages of Kafka’s server-side cluster.
It is equally viable for small, medium, and large use cases. Kafka Streams integrates with Kafka’s security features and does not require a separate processing cluster. Microservices applications can effectively manage large data volumes using Kafka.
Kafka accepts data streams from different sources and allows real-time data analysis. Additionally, it can quickly scale up and down with minimum downtime.
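Real Kafka Streams is a Java library that needs a running broker, so here is only a broker-free sketch of the shape of a topology, with an invented orders stream: each record is treated like a keyed message on a topic, and the pipeline mirrors the filter, map, and aggregate-by-key stages a Kafka Streams application would chain together.

```python
from collections import defaultdict

# A stream of records, each like a (key, value) message on a Kafka topic.
orders = [
    {"user": "alice", "amount": 30},
    {"user": "bob", "amount": 0},
    {"user": "alice", "amount": 12},
]

def process(stream):
    """Mimics a Kafka Streams topology: filter -> map -> aggregate by key."""
    valid = (r for r in stream if r["amount"] > 0)     # like KStream.filter
    keyed = ((r["user"], r["amount"]) for r in valid)  # like KStream.map
    totals = defaultdict(int)
    for user, amount in keyed:                         # like groupByKey().reduce
        totals[user] += amount
    return dict(totals)

print(process(orders))  # {'alice': 42}
```

In the real library the aggregated totals would live in a fault-tolerant state store backed by a Kafka changelog topic, which is what makes the service recoverable after a crash.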
We had the service mesh to solve some of our internal communication problems. Another issue with microservices is the risk involved in changing point-to-point requests and responses: two services end up more aware of each other than they should be, and adding new things is much more difficult than anticipated.
In a request-response pattern, a service communicates with other services to, say, get validations or, in other cases, information about a client.
However, with an event-driven approach, microservices architecture becomes scalable, adaptable, dependable, and easy to maintain over time.
In an event-driven architecture, decoupled services communicate by triggering events. These events can be an update, a state change, or an item being placed in a shopping cart (in the case of an eCommerce web app).
Simply put, an event-driven system replaces the request-response pattern and makes services more autonomous - which is why we opt for microservices in the first place.
Each microservice can be either stateful or stateless. When it comes to microservices, many of us think of stateless services that communicate with each other over HTTP (REST APIs), which can create the problems mentioned above. So, to avoid that, here we discuss stateful microservices achieved through an event-driven (streaming) system.
In event-driven microservices, in addition to the request-response pattern, services publish messages that represent facts (events) and subscribe to topics/queues to receive responses/messages/events.
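The publish-subscribe idea can be shown with a minimal in-memory event bus (a stand-in for a real broker like Kafka; the topic and service names are invented): the publisher emits a fact and never learns who consumes it, which is what makes the services autonomous.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish-subscribe broker: publishers and subscribers
    only know topic names, never each other."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
shipments, emails = [], []

# Two independent services react to the same fact, without the order
# service knowing that either of them exists.
bus.subscribe("order.placed", lambda e: shipments.append(e["order_id"]))
bus.subscribe("order.placed", lambda e: emails.append(f"receipt for {e['order_id']}"))

bus.publish("order.placed", {"order_id": "A-100"})  # the order service emits a fact
print(shipments, emails)
```

Adding a third consumer (say, analytics) means one new `subscribe` call, with no change to the order service: compare that with the request-response world, where the order service would have to be taught about every new downstream call.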
Some of the patterns you should be familiar with before implementing event-driven architectures are Saga, Command and Query Responsibility Segregation (CQRS), Event Sourcing, and Publish-Subscribe.
One thing to remember: when the term “event-driven microservices” is used, it means stateful services that maintain their own databases. A stateful microservice maintains some form of state in order to function. Moreover, instead of storing this state internally, event-driven microservices should store it externally, in data stores like NoSQL or RDBMS databases.
Usually, a stateful microservice is used in systems that require real-time updates on data and event changes. You can use either Apache Kafka or Amazon Kinesis, distributed systems designed for streams that are fault-tolerant and horizontally scalable, and often described as event streaming platforms.
Some of the advantages that stateful streaming in microservices can offer are event flow tracking, performance control, and reliable event sourcing.
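One common way these pieces fit together is the Event Sourcing pattern mentioned above: the external store holds an append-only log of facts, and a service rebuilds its state by replaying them. A toy sketch, with an invented shopping-cart event shape:

```python
# The external store: an append-only event log (in real systems, a Kafka
# topic or a database table; the event shapes here are illustrative).
event_log = [
    {"type": "item_added", "item": "book", "qty": 2},
    {"type": "item_added", "item": "pen", "qty": 5},
    {"type": "item_removed", "item": "pen", "qty": 1},
]

def rebuild_cart(events):
    """Event sourcing: current state is derived by replaying every fact,
    so a restarted service instance can recover its state exactly."""
    cart = {}
    for e in events:
        if e["type"] == "item_added":
            cart[e["item"]] = cart.get(e["item"], 0) + e["qty"]
        elif e["type"] == "item_removed":
            cart[e["item"]] = cart.get(e["item"], 0) - e["qty"]
            if cart[e["item"]] <= 0:
                del cart[e["item"]]
    return cart

print(rebuild_cart(event_log))  # {'book': 2, 'pen': 4}
```

Because the log, not the service instance, is the source of truth, this is also where the event flow tracking and reliable event sourcing advantages come from: every state the cart ever had can be reconstructed from the log.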
Even with stateful streaming, we still have services communicating with one another, whether through request-response or events, and that still has some issues. Until now, we have just talked about events; we haven’t discussed what’s inside them.
The messages sent from one service to another are in JSON and carry a bunch of fields, like social or property IDs, plus lots of metadata about everything going on. But the problem we learned at the very beginning of the blog is that HTTP and JSON can be ridiculously slow. Although that cannot be solved instantly, the popular choice here is gRPC over HTTP/2, which can make things remarkably faster.
But another problem you might encounter is that, whether you are using JSON or gRPC, some changes are still hazardous. The messages used for communication have a schema with fields upon fields of data, and for any sort of testing or validation, a bunch of consumers depend on the exact data type under the same field name, consumers you might not even know about. If you change these carelessly, you will likely break the system.
The key is to have a good way to test schema compatibility; otherwise, the system can suffer serious damage.
Either way, whether you are using REST or gRPC, or even writing an event, you need contracts (APIs) describing what communication and messages look like, and you need to test and validate those APIs. If you have Kafka in an event-driven system as a large message queue, schema registries are used for this.
The idea here is that developers create events and register them in the schema registry, where they are automatically validated against all existing schemas. In case of incompatibility, the registry reports an error. But waiting until something hits production is frustrating, so a developer using a schema registry can use a Maven plugin to validate the schema, give it a definition, and check compatibility against the registry at build time.
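To show what "compatible" means concretely, here is a toy version of the backward-compatibility rule a registry applies (this is an illustrative simplification, not the Confluent Schema Registry API; the schema format is invented): a new version may add optional fields, but must not drop or retype fields that existing consumers read.

```python
def is_backward_compatible(old_schema, new_schema):
    """Toy backward-compatibility check: consumers of the new schema
    must still be able to read data written with the old one."""
    for field, spec in old_schema.items():
        if field not in new_schema:
            return False, f"field '{field}' was removed"
        if new_schema[field]["type"] != spec["type"]:
            return False, f"field '{field}' changed type"
    for field, spec in new_schema.items():
        if field not in old_schema and spec.get("required", False):
            return False, f"new required field '{field}' breaks old data"
    return True, "compatible"

v1 = {"order_id": {"type": "string"}, "amount": {"type": "int"}}
v2 = {"order_id": {"type": "string"}, "amount": {"type": "int"},
      "currency": {"type": "string", "required": False}}  # optional addition: fine
v3 = {"order_id": {"type": "string"}}                     # drops 'amount': breaks

print(is_backward_compatible(v1, v2))  # (True, 'compatible')
print(is_backward_compatible(v1, v3))
```

Running checks like this at build time, rather than discovering the broken consumer in production, is exactly the value of the registry-plus-plugin workflow described above.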
Deploying and monitoring services, and making sure they can be easily scaled, takes a lot of effort. That’s where the concept of serverless comes in. In serverless development, developers don’t need to manage servers to create and run applications. The servers are still there, but they are abstracted away from the development environment; all the provisioning, maintenance, and scaling are managed by the cloud provider. All developers need to do is package the code into containers for deployment, and the website or web app scales up and down automatically in response to demand. Some of the serverless providers in the market are AWS Lambda, IBM Cloud Functions, Microsoft Azure Functions, Parse, and Knative.
Using serverless microservices allows developers to write their function and hand it to the cloud provider, and the cloud provider makes sure the microservices web app scales immediately to handle every event that comes in.
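The "write only the function" idea looks roughly like this Lambda-style handler (the event shape is illustrative, not a real AWS payload): the developer owns the function body, and the platform owns everything around it, invoking the function once per incoming event.

```python
# A Lambda-style handler: the developer writes only this function; the
# cloud provider handles provisioning, scaling, and teardown.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally we can simulate what the platform does: invoke the function once
# per incoming event, with no server process of our own to manage.
events = [{"name": "alice"}, {}]
responses = [handler(e) for e in events]
print(responses)
```

Scaling, in this model, simply means the provider running more concurrent copies of `handler` as events arrive, which is why there is no capacity planning left for the developer to do.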
So that was all about microservices’ state, scalability, and streams, and with that, we would like to wrap up the blog. We hope you find the information shared here interesting and useful for your future projects.
Keep in mind that microservices aren’t best in every scenario; sometimes it is better to work with a monolithic architecture. But you can always ask experts which architecture best suits your idea, and they will help with the right set of tools and technologies, along with the budget to develop the overall project.
Source: Decipher Zone