WCF - Large Image Returns - Your Experience and Tips

We use a WCF service layer to return images from our repository. Some of the images are color and multi-page, and almost all are TIFFs. Performance is sluggish, which prompts a few questions.

1.) What has your experience been with returning images through WCF?
2.) Do you have any tips for returning large images?
3.) Are all messages serialized through SOAP correctly?
4.) Does WCF do a poor job of compressing large TIFF files?

Thanks everyone!

+4
5 answers

To echo the answers of ZombieSheep and Seba Gomez, you definitely want to look at streaming your data. That way you can easily integrate a GZipStream into the process. On the client side you reverse the compression and convert the stream back into the image you want.
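As a minimal sketch of that compress/decompress round trip (the class and method names here are made up for illustration; only GZipStream and the standard stream APIs are real):

```csharp
using System.IO;
using System.IO.Compression;

static class TiffCompression
{
    // Server side: compress the raw image bytes before sending them.
    public static byte[] Compress(byte[] imageBytes)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(imageBytes, 0, imageBytes.Length);
            }
            return output.ToArray();
        }
    }

    // Client side: reverse the compression to get the image back.
    public static byte[] Decompress(byte[] compressedBytes)
    {
        using (var input = new GZipStream(new MemoryStream(compressedBytes),
                                          CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            input.CopyTo(output);
            return output.ToArray();
        }
    }
}
```

Keep in mind that already-compressed TIFFs may not shrink much further with gzip.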

With streaming, only a limited set of types can be used as parameter types / return types, and you have to adjust your bindings accordingly.

Here is the MSDN page on enabling streaming, and here is the MSDN page that describes the restrictions on streamed contracts.
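For reference, enabling streaming is mostly a binding setting; a minimal basicHttpBinding configuration might look like this (the binding name and the quota value are placeholders you would tune yourself):

```xml
<basicHttpBinding>
  <binding name="streamedHttp"
           transferMode="Streamed"
           maxReceivedMessageSize="67108864" />
</basicHttpBinding>
```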

I assume you also control the code on the client side; this can be very difficult if you do not. I have only used streaming when I controlled both the server and the client.

Good luck.

+5

If your client is another .NET assembly, there are two approaches you can use to return large chunks of data: streaming or MTOM.

Streaming allows you to transfer a TIFF image as if it were an ordinary file stream on the local file system. See here for more details on the two options and their pros and cons.

Unfortunately, you still have to transfer a large block of data, and I see no way around this, given the points already raised.

+2

I just wanted to add that it is very important to make sure your data is actually transferred without buffering.

I read somewhere that even if you set transferMode to "Streamed", unless the operation works with a Stream itself, a Message, or an implementation of IXmlSerializable, the message is not actually streamed — it is buffered.
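In other words, the operation's signature has to be built around one of those types for streaming to kick in. A hypothetical contract illustrating the difference (names are invented for this sketch):

```csharp
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IImageService
{
    // Streams: the message body is a single Stream return value.
    [OperationContract]
    Stream GetImage(string imageId);

    // Buffers: a byte[] (or any composite body) is read fully into
    // memory even when the binding has transferMode="Streamed".
    [OperationContract]
    byte[] GetImageBuffered(string imageId);
}
```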

Be sure to keep this in mind.

+2

What bindings are you using? WCF will have some overhead, but if you use basic-http with MTOM you shed most of the base-64 encoding overhead. You still have headers, etc.
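Switching to MTOM is a one-attribute change on the binding; a sketch (the binding name is a placeholder):

```xml
<basicHttpBinding>
  <binding name="mtomHttp" messageEncoding="Mtom" />
</basicHttpBinding>
```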

Another option would be (wait for it...) not to use WCF here — perhaps just a handler (ashx, etc.) that returns the binary directly.
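A minimal sketch of such a handler, assuming a hypothetical ImageRepository helper that fetches the TIFF bytes from your store:

```csharp
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];

        // ImageRepository is a stand-in for your own data-access code.
        byte[] tiff = ImageRepository.GetTiff(id);

        context.Response.ContentType = "image/tiff";
        context.Response.BinaryWrite(tiff);
    }

    public bool IsReusable { get { return true; } }
}
```

This skips SOAP envelopes and serialization entirely, at the cost of losing the WCF contract.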

Re compression: WCF itself won't do much compression, but the transport can — especially through IIS with gzip enabled. However, images are notorious for being difficult to compress further.

+1

On a previous project I worked on, we had a similar problem. We had a C# web service that served media requests. The media ranged from files to images and was stored in a database in BLOB columns. Initially, the web method that handled retrieval requests read the BLOB and returned it to the caller in a single round trip to the server. The problem with this approach is that the client gets no feedback on the progress of the operation.

There is no problem in computer science that cannot be solved by another level of indirection.

We started by refactoring the single method into three.

Method1 sets up a conversation between the caller and the web service. This includes the request information (such as the media identifier) and an exchange of capabilities. The web service responds with a ticket identifier, which the caller uses in subsequent requests. This initial call also allocates resources.

Method2 is called repeatedly until all of the media has been received. Each call includes the current offset and the ticket identifier obtained from Method1; the return value updates the current position.

Method3 is called to complete the request once Method2 reports that the requested media has been fully read. It frees the allocated resources.
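The three-method conversation described above might be sketched as a service contract like this (all names and the chunk layout are assumptions for illustration, not the actual code from that project):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMediaTransferService
{
    // Method1: negotiate the transfer and allocate resources;
    // returns a ticket identifier for the later calls.
    [OperationContract]
    string BeginTransfer(string mediaId);

    // Method2: called repeatedly; returns the next chunk starting
    // at the given offset, up to maxBytes in length.
    [OperationContract]
    byte[] ReadChunk(string ticket, long offset, int maxBytes);

    // Method3: release server-side resources once reading is done.
    [OperationContract]
    void EndTransfer(string ticket);
}
```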

This approach is practical because you can report the progress of the operation to the user immediately. As a bonus, you can issue the Method2 requests from different threads, retrieving the media in pieces the way some BitTorrent clients do.


Depending on its size, the BLOB itself can be loaded from the database in one go or read in chunks as well. That lets you use a balanced mechanism with a watermark (a BLOB-size threshold) that decides whether to load it all at once or in chunks.


If performance is a problem, consider compressing the results with GZipStream, or read up on message encoders — in particular the binary encoder and the Message Transmission Optimization Mechanism (MTOM).

0

Source: https://habr.com/ru/post/1277652/

