
Events

Microservice Events

All database access will raise events, writes in particular. This has to be implemented (or overridden) in the base implementation of the ORM.

The cache system will listen for these events in order to invalidate the modified data and refresh it.
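As a minimal sketch, assuming the ORM publishes its change events to a Kafka topic (the topic name, key format, and EntityCache interface below are illustrative, not part of the existing codebase), a cache-invalidation listener could look like this:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Illustrative cache-invalidation listener: consumes entity-change events
// raised by the ORM layer and evicts the affected cache entries.
public class CacheInvalidationListener {

    // Hypothetical cache abstraction; replace with the real cache client.
    interface EntityCache {
        void evict(String entityType, String entityId);
    }

    public static void run(EntityCache cache) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "cache-invalidation");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("entity-changes")); // assumed topic name
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Assumed key format: "<entityType>:<entityId>".
                    String[] key = record.key().split(":", 2);
                    cache.evict(key[0], key[1]);
                }
            }
        }
    }
}
```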

For entities that are extensions of other models, the event should be raised for the parent entity.

All microservices can have access to the data models (entity models).

All microservices can request data through any data access implementation, but it is recommended to follow the access pattern best suited to each case.

For example, if a sensitive piece of information has to be accessed (read or written), it is better to request it from the microservice that owns it, or to use that service's public methods, rather than reading/writing it directly from the database.

1.1. Event sourcing

https://www.confluent.io/blog/apache-kafka-for-service-architectures/

Events should have: a unique ID, a message version, a schema version, the name of the service that raised the event, and the payload data.
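A possible shape for such an event envelope, sketched as a Java record; the field names and the extra timestamp are assumptions, not an agreed contract:

```java
import java.time.Instant;
import java.util.UUID;

// Illustrative event envelope carrying the fields listed above.
public record EventEnvelope<T>(
        UUID eventId,         // unique ID
        int messageVersion,   // version of the message format
        int schemaVersion,    // version of the payload schema
        String sourceService, // name of the service that raised the event
        Instant occurredAt,   // assumed extra field: when the event was raised
        T data                // the payload itself
) {}
```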

The most common approach for handling breaking schema changes is to create two topics: orders-v1 and orders-v2, for messages with the old and new schemas respectively. Assuming Orders are mastered by the Orders Service, this gives us a couple of options:

The Orders service can dual-publish in both schemas at the same time.

We can add a process that down-converts from the orders-v2 topic to the orders-v1 topic. A simple KStreams job is typically used for this kind of down-conversion (see the sketch after this list).

Services continue in this dual-topic mode until all services have fully migrated to the v2 topic, at which point the v1 topic can be archived or deleted as appropriate.
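As a sketch of the second option, a minimal Kafka Streams job that down-converts orders-v2 messages into the orders-v1 schema could look like this; the toV1 mapping and broker address are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

// Illustrative Kafka Streams job that down-converts orders-v2 messages
// into the orders-v1 schema for consumers that have not migrated yet.
public class OrdersDownConverter {

    // Hypothetical mapping from the v2 payload to the v1 payload,
    // e.g. drop fields added in v2, rename fields back to their v1 names.
    static String toV1(String v2Json) {
        return v2Json;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-down-converter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("orders-v2", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(OrdersDownConverter::toV1)
               .to("orders-v1", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```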

Another alternative is to mix in a stateless protocol like HTTP, layered over a backbone of events, for example behind a gateway. Alternatively, each service can open two separate interfaces: one for events and another for request/response. This is a great pattern, and probably the most commonly used one: events are created for all state changes, and HTTP is used for all request/response. It makes a lot of sense because events are, after all, the dataset of your system (so Kafka works well), while queries are more like ephemeral chit-chat (so HTTP is perfect).
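A minimal sketch of that split, assuming the JDK's built-in HttpServer for the request/response side and a Kafka producer for state changes (endpoint path, topic name, and response body are illustrative):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative service: state changes are published as events (Kafka),
// queries are answered over HTTP (request/response).
public class OrdersService {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            if ("POST".equals(exchange.getRequestMethod())) {
                // Command side: accept the state change and publish it as an event.
                String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
                producer.send(new ProducerRecord<>("orders-v2", body));
                exchange.sendResponseHeaders(202, -1); // accepted, processed asynchronously
            } else {
                // Query side: ephemeral request/response, served directly over HTTP.
                byte[] response = "[]".getBytes(StandardCharsets.UTF_8); // placeholder query result
                exchange.sendResponseHeaders(200, response.length);
                exchange.getResponseBody().write(response);
            }
            exchange.close();
        });
        server.start();
    }
}
```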