WCF and DTO size

We have a business logic / data access layer that we expose at several different endpoints through a WCF service. We created a DTO to use as the data contract for the service. The service will be consumed through different endpoints by several different applications. Some applications need only a few fields from the DTO, while others may need almost all of them. For the ones that need only a few, we really do not want to send the whole object over the wire every time; we would like to strip it down to only what that application actually needs.

I have gone back and forth between creating application-specific sets of DTOs (overkill?) and using something like EmitDefaultValue=false on the members that are only needed by certain applications. I have also looked at using XmlSerializer rather than DataContractSerializer to get more control over serialization inside the service.
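
For illustration, here is a minimal sketch of the EmitDefaultValue=false approach. The CustomerDto / AddressDto types are hypothetical (not from the question); the point is that DataContractSerializer omits a member from the serialized message when it is left at its default value.

    using System.Runtime.Serialization;

    // Hypothetical DTO used only for illustration.
    [DataContract]
    public class CustomerDto
    {
        // Always serialized.
        [DataMember]
        public int Id { get; set; }

        // Omitted from the serialized message when left at its default (null),
        // so clients that never populate it pay no size cost for it.
        [DataMember(EmitDefaultValue = false)]
        public string Notes { get; set; }

        [DataMember(EmitDefaultValue = false)]
        public AddressDto ShippingAddress { get; set; }
    }

    [DataContract]
    public class AddressDto
    {
        [DataMember] public string Street { get; set; }
        [DataMember] public string City { get; set; }
    }

Note that this only shrinks the message when the service actually leaves those members at their defaults for the callers that do not need them; it does not hide the members from the contract itself.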

My question is: first, is it worth worrying about the size of the data we transfer at all? Second, assuming the answer is yes, or that we decide to address it even if it is no, what approach is recommended here, and why?

EDIT

Thanks for the answers so far. I was worried that we might be falling into premature optimization. I would like to leave the question open for now, though, in the hope of getting answers to the rest of it, both for my own education and in case someone else has this question and does have a good reason to optimize.

+4
3 answers

Should you be worried? Maybe. Performance / stress test your services and find out.

If you decide that you do care, a couple of options:

  • Create another service (or perhaps separate operations on the same service) that returns partially hydrated DataContracts. These new services and/or operations return the same DataContracts, just not fully populated.

  • Create lightweight versions of your DataContracts and return those. Basically the same as option 1, but with this approach you do not have to worry about consumers misusing the full DataContract (and potentially hitting null reference exceptions, etc.).

I prefer option 2, but if you have control over your clients, option 1 may work for you. (A sketch of option 2 follows below.)
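
A minimal sketch of option 2, using hypothetical OrderDto / OrderSummaryDto contracts and an IOrderService interface (names are illustrative, not from the original question):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Full contract for applications that need (almost) every field.
    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string CustomerName { get; set; }
        [DataMember] public string ShippingAddress { get; set; }
        [DataMember] public string BillingAddress { get; set; }
        [DataMember] public string Notes { get; set; }
    }

    // Lightweight contract for applications that only need a couple of fields.
    [DataContract]
    public class OrderSummaryDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string CustomerName { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        OrderDto GetOrder(int id);

        // The client can see from the contract exactly what it will get,
        // so nothing comes back silently null.
        [OperationContract]
        OrderSummaryDto GetOrderSummary(int id);
    }

The trade-off is an extra contract type to maintain for each shape you expose.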

+1

First of all, do we have to worry about the size of the data we transfer?

You did not say how many fields there are or how big they are, but in general: no. You already have the overhead of the envelope and of setting up the channel; a few more bytes will not matter much.

So unless we are talking about hundreds of doubles or something like that, I would wait, and only if a real problem shows up: experiment and measure.
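
If you want a number before deciding, one rough way to measure (assuming a DTO type such as the hypothetical OrderDto above) is to serialize an instance to a stream and look at its length. This gives only the body size, not the envelope and transport overhead mentioned above, so treat it as a lower bound.

    using System.IO;
    using System.Runtime.Serialization;

    public static class PayloadSize
    {
        // Serializes a DTO with DataContractSerializer and returns the body size
        // in bytes. Example: PayloadSize.MeasureBytes(someTypicalDto).
        public static long MeasureBytes<T>(T dto)
        {
            var serializer = new DataContractSerializer(typeof(T));
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, dto);
                return stream.Length;
            }
        }
    }

Running this on a typical instance tells you whether you are dealing with hundreds of bytes or hundreds of kilobytes, which is usually enough to settle the question.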

+2

It sounds like you may be entering "premature optimization" territory. I would avoid creating application-specific DataContracts for the object because of the maintenance work; that will cause problems in the long run. However, if your application has a real need to hide information from some client applications but not others, then having several DataContracts for the object is useful. @Henk is right: unless you are dealing with massive, deeply nested objects (in which case you have a different problem), do not "optimize" your design just to shrink the packets going over the network.

0

Source: https://habr.com/ru/post/1346773/
