Using Protobuf classes directly vs. a separate domain / mapping layer in Java

I could not find any "best practice" on the Internet for using gRPC and protobuf in a project. I am implementing an event-based server-side application. The core defines aggregates, events, and domain services without external dependencies. The gRPC server calls the core services, passing in request objects that ultimately end up in the published events. Events are serialized with protobuf and published on the wire. We are currently facing the question of whether our events should be the generated protobuf classes directly, or whether we should keep the core and its events separate and implement a mapper / serializer layer that converts events between protobuf and the core model.

If there is another approach we haven't considered, please point us to it :)

Thanks for the help.

2 answers

Domain model objects and data transfer objects (protobuf messages) should be kept as separate as possible. To do this, it's best to convert domain model objects to Google Protobuf messages and vice versa. We created protobuf-converter to make this straightforward.
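A minimal sketch of what such a mapping layer looks like. `OrderPlacedProto` is a hand-written stand-in for the class `protoc` would generate from a hypothetical `OrderPlaced` message, so the example compiles without a `.proto` file; all the names here are illustrative, not from the question.

```java
import java.time.Instant;

public class EventMapperExample {

    // Core domain event: no protobuf dependency at all.
    record OrderPlaced(String orderId, long amountCents, Instant occurredAt) {}

    // Stand-in for the protoc-generated message (hypothetical).
    record OrderPlacedProto(String orderId, long amountCents, long occurredAtEpochMillis) {}

    // The mapper lives at the edge, next to the gRPC / serialization code,
    // so protobuf types never leak into the core.
    static OrderPlacedProto toProto(OrderPlaced event) {
        return new OrderPlacedProto(event.orderId(), event.amountCents(),
                event.occurredAt().toEpochMilli());
    }

    static OrderPlaced fromProto(OrderPlacedProto proto) {
        return new OrderPlaced(proto.orderId(), proto.amountCents(),
                Instant.ofEpochMilli(proto.occurredAtEpochMillis()));
    }

    public static void main(String[] args) {
        OrderPlaced event = new OrderPlaced("o-1", 4200, Instant.ofEpochMilli(1_000));
        OrderPlaced roundTripped = fromProto(toProto(event));
        System.out.println(event.equals(roundTripped)); // prints "true"
    }
}
```

The cost of this approach is one mapping function per event type; the benefit is that a wire-format change touches only the mapper, never the core.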


Protobufs are really good at wire consistency and backward compatibility, but not so good at being first-class Java objects. Adding custom methods to the generated classes is currently not possible. You can get most of the benefits by using protobufs at the stub level, wrapping them in one of your own event POJOs, and passing those around internally:

public final class Event {
    private final EventProto proto;

    public Event(EventProto proto) {
        this.proto = proto;
    }

    public void foo() {
        // do something with proto.
    }
}
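A runnable sketch of this wrapper idea. As above, `EventProto` is a hand-written stand-in for a protoc-generated class so the example is self-contained; with real generated messages the pattern is identical.

```java
public class EventWrapperExample {

    // Stand-in for the generated, immutable protobuf message (hypothetical).
    record EventProto(String type, long timestampMillis) {}

    // Domain-facing wrapper: carries the behavior the generated class cannot.
    public static final class Event {
        private final EventProto proto;

        public Event(EventProto proto) {
            this.proto = proto;
        }

        // Domain logic lives here instead of on the generated class.
        public boolean isOlderThan(long cutoffMillis) {
            return proto.timestampMillis() < cutoffMillis;
        }

        // The raw message stays reachable for serialization at the edge.
        public EventProto toProto() {
            return proto;
        }
    }

    public static void main(String[] args) {
        Event event = new Event(new EventProto("OrderPlaced", 1_000));
        System.out.println(event.isOlderThan(2_000)); // prints "true"
    }
}
```

Compared to a full mapping layer, the wrapper avoids field-by-field copying, at the price of the core module depending on the generated protobuf classes.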

Most projects do not change their .proto files often, and the changes that do happen are almost never incompatible (neither on the wire nor in the API). Needing to change a lot of code because of proto changes has never been a problem in my experience.


Source: https://habr.com/ru/post/1246395/
