I would start by separating messaging infrastructure like RabbitMQ from streaming/storage/event handling like Kafka. These are two different things, made for two (or more) different purposes.
As for the event store, you need a place where you persist events. This store should be append-only and support fast reads of unstructured data by identity. One example of such persistence is EventStore.
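To make the two properties above concrete, here is a minimal in-memory sketch of such a store (the class and method names are my own illustration, not EventStore's API): the only write operation is an append, and reads fetch a whole stream by its identity in append order.

```python
from collections import defaultdict

class InMemoryEventStore:
    """Append-only store: events are grouped by stream id and can
    only be appended, never updated or deleted in place."""
    def __init__(self):
        self._streams = defaultdict(list)

    def append(self, stream_id, event):
        # The sole write path: add the event to the end of the stream.
        self._streams[stream_id].append(event)

    def read(self, stream_id):
        # Fast read of the whole stream by identity, in append order.
        return list(self._streams[stream_id])

store = InMemoryEventStore()
store.append("order-42", {"type": "OrderPlaced", "total": 100})
store.append("order-42", {"type": "OrderPaid"})
history = store.read("order-42")
```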
Event sourcing comes with CQRS, which means that you must project your changes (events) into another store that you can query. This is done by projecting events into that store, where each event is processed to alter the read-model state of a domain object. It is important to understand that using messaging infrastructure for projections is usually a bad idea, due to the nature of messaging and two-phase commits.
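A projection is essentially a left fold of the event stream into queryable state. A minimal sketch, assuming a hypothetical order domain with `OrderPlaced` and `OrderPaid` events:

```python
def project(state, event):
    """Apply one event to the read-model state (a plain dict here;
    in practice this would be a row or document in the query store)."""
    if event["type"] == "OrderPlaced":
        return {**state, "status": "placed", "total": event["total"]}
    if event["type"] == "OrderPaid":
        return {**state, "status": "paid"}
    return state  # unknown events leave the read model unchanged

events = [
    {"type": "OrderPlaced", "total": 100},
    {"type": "OrderPaid"},
]
read_model = {}
for e in events:
    read_model = project(read_model, e)
```

Because the projection is a pure function of the ordered event stream, the read model can be rebuilt from scratch at any time by replaying the stream, which is exactly what a lossy, unordered messaging channel cannot guarantee.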
If you look at how events are persisted, you can see that they are written to the event store as a single transaction. If you also need to publish events, that will be a separate transaction. Since you are using two different pieces of infrastructure, the whole thing can break.
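This is the classic dual-write problem. A sketch of the failure mode, with the store and the bus modelled as plain lists and a hypothetical `PublishError` standing in for a broker outage:

```python
class PublishError(Exception):
    """Stands in for any failure of the message broker."""

def save_and_publish(store, bus, event, bus_up=True):
    # Transaction 1: append to the event store; this commits.
    store.append(event)
    # Transaction 2: publish to the message bus. There is no shared
    # transaction spanning both systems, so if this fails the event
    # is persisted but never published: the two sides diverge.
    if not bus_up:
        raise PublishError("broker unavailable")
    bus.append(event)

store, bus = [], []
try:
    save_and_publish(store, bus, {"type": "OrderPlaced"}, bus_up=False)
except PublishError:
    pass
# The store now holds an event that the bus never saw.
```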
The problem with messaging as such is that messages are generally guaranteed to be delivered "at least once," and message ordering is usually not guaranteed. Also, when your message consumer fails and NACKs the message, it will be redelivered, but usually somewhat later, breaking the sequence again.
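At-least-once delivery forces the consumer to be idempotent. A common sketch of that, assuming each message carries a unique id (the deduplication-by-seen-ids approach, not any particular broker's feature):

```python
def consume(deliveries, seen_ids, handled):
    """Process messages exactly once despite redeliveries by
    remembering which ids have already been handled."""
    for msg in deliveries:
        if msg["id"] in seen_ids:
            continue  # duplicate redelivery after a NACK: skip it
        seen_ids.add(msg["id"])
        handled.append(msg)

seen, handled = set(), []
# Message 1 is NACKed and redelivered later, after message 2 has
# already arrived, so it shows up twice and out of order.
deliveries = [
    {"id": 1, "body": "a"},
    {"id": 2, "body": "b"},
    {"id": 1, "body": "a"},
]
consume(deliveries, seen, handled)
```

Note that deduplication only solves the duplicate problem; restoring the original order still requires sequence numbers or reading from an ordered log instead.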
The ordering and duplication concerns are not applicable to event streaming servers such as Kafka. In addition, EventStore guarantees ordered, once-only event delivery if you use a catch-up subscription.
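The idea behind a catch-up subscription can be sketched generically (this is the checkpointed-log pattern, not EventStore's actual client API): the consumer remembers its position in an ordered log and, on each run, reads everything after that position in order.

```python
def catch_up(log, checkpoint, apply):
    """Read every event after the stored checkpoint, in log order,
    and return the new checkpoint. Replaying from a durable position
    gives ordered, effectively-once processing."""
    for position in range(checkpoint, len(log)):
        apply(log[position])
    return len(log)

log = ["e1", "e2", "e3"]
applied = []
pos = catch_up(log, 0, applied.append)    # catches up on e1..e3
log.append("e4")                          # a new event arrives
pos = catch_up(log, pos, applied.append)  # resumes, sees only e4
```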
In my experience, messaging is used to send commands and to implement an event-driven architecture that connects independent services in a reactive fashion. Event stores, on the other hand, are used to persist events, and only the events that land there are then projected into the query store and also published on the message bus.