A microservices architecture makes an application much easier to manage for the teams responsible for it. Event-driven architecture (EDA) adds robustness to the information system. Many of you may be considering migrating to an EDA, but not everyone will benefit from it. To distinguish desire from need, and to determine whether you are ready to move to an event-driven architecture, it is best to seek expert guidance. In this article, you will find the specific features of an EDA you need to know.

 

What is an event-driven architecture?

An event-driven architecture (EDA) is an architecture in which events are at the heart of its operation. Here, “event” should be understood as the result of a change of state in an application, usually generated by the application's users or its management team, or internally by the application itself.  


The distinguishing feature of this architecture is that it is based on an asynchronous communication system, which enables loose coupling, among other things. Messages (events) can thus be sent simultaneously without requiring an immediate response or immediate processing. This asynchronicity, reinforced by the “Fire and Forget” principle - whereby, regardless of its nature and volume, an event is sent and then forgotten - means that the sender is never blocked while the event is processed. By contrast, synchronous communication requires a direct and instantaneous response to every request.


Message queuing is also a crucial element in the operation of an event-driven architecture. This software component enables communication between microservices by placing each event in a queue. Queued events are then distributed across consumers to facilitate processing.


The producer creates an event and sends it. The event is placed directly in a queue and then distributed to each of the microservices subscribed to that queue (subscribing to a queue means the service is involved in processing that type of event). These subscribers are known as consumers. The broker (a mediating message agent) manages the queue and the distribution of messages, which enables the fluidity and asynchronicity characteristic of an EDA.
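The producer/broker/consumer flow described above can be sketched in a few lines of Python. This is an illustrative in-memory model, not a real broker: the names (`Broker`, `publish`, `subscribe`, `dispatch`) are hypothetical, and a production system would use software such as Kafka or RabbitMQ.

```python
from collections import defaultdict, deque

class Broker:
    """Illustrative in-memory broker: one FIFO queue per event type."""
    def __init__(self):
        self.queues = defaultdict(deque)        # event type -> queued events
        self.subscribers = defaultdict(list)    # event type -> consumer callbacks

    def subscribe(self, event_type, consumer):
        self.subscribers[event_type].append(consumer)

    def publish(self, event_type, payload):
        # "Fire and Forget": the producer enqueues the event and moves on;
        # it never waits for consumers to process it.
        self.queues[event_type].append(payload)

    def dispatch(self):
        # The broker drains each queue and hands every event to all subscribers.
        for event_type, queue in self.queues.items():
            while queue:
                event = queue.popleft()
                for consumer in self.subscribers[event_type]:
                    consumer(event)

received = []
broker = Broker()
broker.subscribe("order.created", lambda e: received.append(("billing", e)))
broker.subscribe("order.created", lambda e: received.append(("shipping", e)))
broker.publish("order.created", {"order_id": 42})   # producer returns immediately
broker.dispatch()
# Both subscribed microservices receive the same event, decoupled from the producer.
```

The key point the sketch illustrates is the loose coupling: the producer only knows the event type, never the consumers.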


Kafka and RabbitMQ are the best-known brokers in the field, the former because of its proven performance and robustness, the latter because of its ease of use.


For optimal use of an event-driven architecture, there is a set of design patterns that help ensure it works properly. The following are some of them:

 

  • CQRS (Command Query Responsibility Segregation): this design pattern separates read requests from write requests. The data is therefore stored in two different databases, both to optimise the application's performance and to log each action (all system events are logged) so that they can be replayed if necessary;
  • Event Sourcing: each new event is stored in the database, so the complete history of events can be compiled. For example, in a banking application, if a user makes a transfer, a new entry is added to the database; since this is a transfer, the entry is a debit operation (-€200). With Event Sourcing, the current balance is derived by replaying the full sequence of events, so no information is lost: the entire history leading to the current balance is logged, and every event can be replayed. This obviously raises a significant storage problem: the database used can quickly become saturated;
  • Event Notification: this corresponds to the “Fire and Forget” principle. The sender sends its message without expecting anything in return.
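The CQRS separation listed above can be sketched minimally as two stores: commands go to a write model that logs every action, and queries are served from a separate, query-optimised read model. All names here (`WriteModel`, `ReadModel`, `handle`, `query`) are illustrative, not a standard API.

```python
class ReadModel:
    """Query side: a denormalised view optimised for reads."""
    def __init__(self):
        self.view = {}

    def apply(self, command):
        self.view[command["id"]] = command["value"]

    def query(self, key):
        return self.view.get(key)

class WriteModel:
    """Command side: accepts writes and logs every action for replay."""
    def __init__(self, read_model):
        self.log = []               # every command is kept, so it can be replayed
        self.read_model = read_model

    def handle(self, command):
        self.log.append(command)
        self.read_model.apply(command)   # propagate the change to the read side

reads = ReadModel()
writes = WriteModel(reads)
writes.handle({"id": "user-1", "value": "alice"})
print(reads.query("user-1"))   # queries never touch the write store
```

In a real system the two sides would be separate databases (as the article notes), with the propagation step carried by events rather than a direct call.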

Event-driven architecture today meets the growing need for applications to exchange large volumes of data. Its complex implementation may require expert support for companies that wish to adopt it. 
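The banking example used above for Event Sourcing can be sketched as an append-only event log from which the balance is derived by replay. `EventStore` and `balance` are hypothetical names for illustration only.

```python
class EventStore:
    """Append-only log: every state change is kept, never updated or deleted."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

def balance(events):
    # The current state is not stored directly: it is derived by replaying
    # the entire sequence of events.
    return sum(e["amount"] for e in events)

store = EventStore()
store.append({"type": "deposit", "amount": 1000})
store.append({"type": "transfer", "amount": -200})   # the -€200 debit from the example

print(balance(store.events))   # 800: current balance reconstructed from history
```

Because the whole history is retained, events can be replayed later for analysis or debugging, which is precisely the storage cost the article warns about.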

 

How does the broker facilitate the operation of the event-driven architecture? 

The implementation of an event-driven architecture requires the use of a broker. For the company to benefit fully from this system, it is essential to:

  

  • master all the broker's functionalities;
  • ensure that the broker is an integral part of the implementation of the EDA and that it provides queue management to facilitate the transmission of events.

In practice, the broker receives the event from the producer and transmits it to an exchange. With software like RabbitMQ, an exchange can take several forms: 

  • Direct: the message is routed to the queues whose binding key exactly matches the message's routing key, for transmission to the consumer;
  • Topic: the message is matched against routing patterns so that it is sent to the queues that meet certain conditions;
  • Fanout: the message is broadcast to all queues bound to the exchange.
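The three routing modes above can be modelled in a short Python sketch. This is an in-memory illustration of the routing semantics only; real RabbitMQ code would use a client library such as pika, and the function names here are invented for the example. Topic patterns follow RabbitMQ's wildcard convention: `*` matches exactly one dot-separated word, `#` matches zero or more words.

```python
import re

def topic_matches(pattern, routing_key):
    """Translate a topic binding pattern into a regex and match the routing key."""
    parts = []
    for word in pattern.split("."):
        if word == "*":
            parts.append(r"[^.]+")   # exactly one word
        elif word == "#":
            parts.append(r".*")      # any number of words (simplified)
        else:
            parts.append(re.escape(word))
    return re.fullmatch(r"\.".join(parts), routing_key) is not None

def route(exchange_type, bindings, routing_key):
    """bindings: list of (binding_key, queue_name); returns queues that receive the message."""
    if exchange_type == "fanout":
        return [q for _, q in bindings]                       # broadcast to every queue
    if exchange_type == "direct":
        return [q for k, q in bindings if k == routing_key]   # exact key match
    if exchange_type == "topic":
        return [q for k, q in bindings if topic_matches(k, routing_key)]
    raise ValueError(f"unknown exchange type: {exchange_type}")

bindings = [("order.created", "billing"), ("order.*", "audit"), ("#", "archive")]
print(route("direct", bindings, "order.created"))  # ['billing']
print(route("topic", bindings, "order.created"))   # ['billing', 'audit', 'archive']
print(route("fanout", bindings, "order.created"))  # ['billing', 'audit', 'archive']
```

The same message thus reaches very different sets of queues depending only on the exchange type, which is why mastering these modes matters when designing the EDA.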

  

Depending on the chosen broker, you can set the message retention time (one hour, several years, indefinitely, etc.) using the broker's administration tool; the advantage is that you can replay events within the configured period.


 
The objective of a broker is to facilitate the exchange of events and avoid blockages between microservices. Rapid dissemination matters not only for microservices but also for very large volumes of data (millions of events). The processes enabled by brokers lead to a high level of performance and robustness.

 

What are the drawbacks of event-driven architecture?

Implementing an event-driven architecture is not trivial, and there are obstacles to its proper use that can make the model a poor fit for companies that choose it.


It is therefore essential to understand precisely which audiences the application is aimed at and the needs that arise from them. Without this, the complexity of moving to an EDA does not justify the migration. The infrastructure resources must then be evaluated: is the transition to an event-driven architecture technically feasible?


The broker is also a key element in the successful management of the EDA. Whether it is Kafka, RabbitMQ or another solution, the teams in charge will have to master this queue management system.


Next, the volume of stored data - already mentioned above - must be evaluated: cloud storage comes at a significant cost, and one that is difficult to avoid with an EDA, since saved events can take up a lot of space. However, as these events are likely to be replayed for analysis and debugging purposes, they need to be kept.


Finally, the decoupling introduced by the broker's queues adds complexity: although it improves and accelerates asynchronous communication, it makes debugging and maintenance more difficult.


These are all issues that the company will need to address when considering an event-driven architecture. 

 

What can an event-driven architecture do for your company?

Companies today increasingly need architectures such as EDA, as most businesses must manage more and more data in less and less time. An EDA meets this need, provided the obstacles listed above can be addressed.


The use of an EDA brings scalability to applications and improves business resilience. Indeed, businesses themselves can debug their application or even anticipate a deteriorating situation. They can quickly confine an error to the microservice in question and thus keep the availability and responsiveness of their application intact (only the affected microservice becomes unavailable). This resilience gives them real autonomy. 
  
Event-driven architecture also improves business robustness, as the company can handle many more simultaneous requests, and therefore larger volumes of information. With an event-driven architecture, response times are shorter and, thanks to message queuing, your company will not lose any data.


Are you ready to move to event-driven architecture? At Alter Solutions, you will find the advice and support necessary for the success of your digital transformation projects. Our consultancy firm offers audits to improve system robustness and resilience without ever losing sight of the necessary identification of technical factors, skills requirements and cost control. 


We are also PASSI certified, a guarantee of confidence in security audits. Awarded by the ANSSI (French National Agency for Information Systems Security), this qualification guarantees the quality of our support for clients in responding to cybersecurity challenges. Vulnerabilities are permanent threats to normal company operations. Our complementary expertise enables you to keep a balanced "complexity-cost-benefit" ratio.

You benefit from seamless support for optimal implementation and management.
