API, Data modelling, Events

Microservices

1.1. Event sourcing

https://www.confluent.io/blog/apache-kafka-for-service-architectures/

Events should have: a unique ID, a message version, a schema version, the name of the service that raised the event, and the event data.
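A minimal sketch of such an event envelope as a Java record; the type name, field names, and factory method are illustrative, not a prescribed schema:

import java.util.UUID;

public record OrderCreatedEvent(
        UUID eventId,         // unique ID of this event
        int messageVersion,   // version of the message envelope
        int schemaVersion,    // version of the payload schema
        String sourceService, // name of the service that raised the event
        String payloadJson    // the event data
) {
    // Convenience factory; the version numbers here are placeholders.
    public static OrderCreatedEvent of(String sourceService, String payloadJson) {
        return new OrderCreatedEvent(UUID.randomUUID(), 1, 1, sourceService, payloadJson);
    }
}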

The most common approach for handling breaking schema changes is to create two topics: orders-v1 and orders-v2, for messages with the old and new schemas respectively. Assuming Orders are mastered by the Orders Service, this gives us a couple of options:

The Orders service can dual-publish in both schemas at the same time.

We can add a process that down-converts from the orders-v2 topic to the orders-v1 topic. A simple KStreams job is typically used to do this kind of down-conversion (see the sketch after this list of options).

Services continue in this dual-topic mode until all services have fully migrated to the v2 topic, at which point the v1 topic can be archived or deleted as appropriate.
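A minimal sketch of such a down-converting KStreams job, assuming the order events are plain JSON strings and that the v2 schema added a field (here called discountCode) that v1 consumers do not understand; the topic names follow the text above:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class OrdersDownConverter {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-down-converter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Read every v2 event, strip the field v1 consumers do not know about,
        // and republish on the v1 topic for services that have not migrated yet.
        builder.stream("orders-v2", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(OrdersDownConverter::toV1Json)
               .to("orders-v1", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

    private static String toV1Json(String v2Json) {
        try {
            ObjectNode order = (ObjectNode) MAPPER.readTree(v2Json);
            order.remove("discountCode"); // v2-only field (assumed for the example)
            return order.toString();
        } catch (Exception e) {
            throw new RuntimeException("Unparseable order event: " + v2Json, e);
        }
    }
}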

Another alternative is to mix in a stateless protocol like HTTP, layered over a backbone of events, for example behind an API gateway. Alternatively, you can open two separate interfaces in each service: one for events and another for request-response. This is a great pattern, and probably the most commonly used one: events are created for all state changes, and HTTP is used for all request-response interactions. It makes a lot of sense because events are, after all, the dataset of your system (so Kafka works well), while queries are more like ephemeral chit-chat (so HTTP is perfect). The earlier posts in the linked Confluent series cover more event-driven design patterns.
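A minimal sketch of this dual-interface idea, with hypothetical names and topics: state changes are published as Kafka events, while queries are answered over plain HTTP from the service's local view of the state:

import com.sun.net.httpserver.HttpServer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class OrdersFacade {
    private final Map<String, String> orders = new ConcurrentHashMap<>(); // local state
    private final KafkaProducer<String, String> producer;

    // kafkaProps must contain bootstrap.servers and String key/value serializers.
    OrdersFacade(Properties kafkaProps) {
        this.producer = new KafkaProducer<>(kafkaProps);
    }

    // Event interface: every state change is recorded locally and published to Kafka.
    void createOrder(String orderJson) {
        String id = UUID.randomUUID().toString();
        orders.put(id, orderJson);
        producer.send(new ProducerRecord<>("orders-v2", id, orderJson));
    }

    // Request-response interface: queries are served over HTTP, no event is emitted.
    void startHttp() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders/", exchange -> {
            String id = exchange.getRequestURI().getPath().substring("/orders/".length());
            byte[] body = orders.getOrDefault(id, "{}").getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}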

1.2. Module services

Each module can have its own microservice in order to implement its business logic and stay decoupled from the others when updates are made. See the modules section for more details.

Each module’s business logic can also be accessed directly by other microservices in order to avoid unnecessary API calls.

Business logic can be called directly, or via the API when a service is disconnected from the database.
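A sketch of these two access paths, using a hypothetical BillingModule: the same interface is backed either by the in-process business logic (direct call, with direct database access) or by an HTTP client when the calling service is disconnected from the database:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

interface BillingModule {
    String invoiceFor(String orderId);
}

// Direct call: the calling service embeds the module and queries the database itself.
class LocalBillingModule implements BillingModule {
    @Override
    public String invoiceFor(String orderId) {
        // ... query the billing tables directly and build the invoice ...
        return "{\"orderId\":\"" + orderId + "\"}";
    }
}

// API call: used when the caller has no direct access to the database.
class RemoteBillingModule implements BillingModule {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl; // e.g. "http://billing-service:8080" (assumed)

    RemoteBillingModule(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public String invoiceFor(String orderId) {
        try {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create(baseUrl + "/invoices/" + orderId)).GET().build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException("Billing API call failed", e);
        }
    }
}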