What 130-second timeout is killing my WCF streaming service call?

I've recently been investigating a puzzling problem with WCF streaming in which a CommunicationException is thrown if the client waits more than 130 seconds between sends to the server.

Here is the complete exception:

System.ServiceModel.CommunicationException was unhandled by user code
  HResult=-2146233087
  Message=The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '23:59:59.9110000'.
  Source=mscorlib
  StackTrace:
    Server stack trace:
       at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.WebRequestOutputStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       at System.IO.BufferedStream.Write(Byte[] array, Int32 offset, Int32 count)
       at System.Xml.XmlStreamNodeWriter.FlushBuffer()
       at System.Xml.XmlStreamNodeWriter.GetBuffer(Int32 count, Int32& offset)
       at System.Xml.XmlUTF8NodeWriter.InternalWriteBase64Text(Byte[] buffer, Int32 offset, Int32 count)
       at System.Xml.XmlBaseWriter.WriteBase64(Byte[] buffer, Int32 offset, Int32 count)
       at System.Xml.XmlDictionaryWriter.WriteValue(IStreamProvider value)
       at System.ServiceModel.Dispatcher.StreamFormatter.Serialize(XmlDictionaryWriter writer, Object[] parameters, Object returnValue)
       at System.ServiceModel.Dispatcher.OperationFormatter.OperationFormatterMessage.OperationFormatterBodyWriter.OnWriteBodyContents(XmlDictionaryWriter writer)
       at System.ServiceModel.Channels.Message.OnWriteMessage(XmlDictionaryWriter writer)
       at System.ServiceModel.Channels.TextMessageEncoderFactory.TextMessageEncoder.WriteMessage(Message message, Stream stream)
       at System.ServiceModel.Channels.HttpOutput.WriteStreamedMessage(TimeSpan timeout)
       at System.ServiceModel.Channels.HttpOutput.Send(TimeSpan timeout)
       at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.SendRequest(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
       at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    Exception rethrown at [0]:
       at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       at WcfService.IStreamingService.SendStream(MyStreamUpRequest request)
       at Client.Program.<Main>b__0() in c:\Users\jpierson\Documents\Visual Studio 2012\Projects\WcfStreamingTest\Client\Program.cs:line 44
       at System.Threading.Tasks.Task.Execute()
  InnerException: System.IO.IOException
       HResult=-2146232800
       Message=Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
       Source=System
       StackTrace:
            at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
            at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
            at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)
            at System.ServiceModel.Channels.BytesReadPositionStream.Write(Byte[] buffer, Int32 offset, Int32 count)
            at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.WebRequestOutputStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       InnerException: System.Net.Sockets.SocketException
            HResult=-2147467259
            Message=An existing connection was forcibly closed by the remote host
            Source=System
            ErrorCode=10054
            NativeErrorCode=10054
            StackTrace:
                 at System.Net.Sockets.Socket.MultipleSend(BufferOffsetSize[] buffers, SocketFlags socketFlags)
                 at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
            InnerException:

It seems that the server is closing the connection prematurely due to inactivity on the connection. If instead I keep pulsing data to the server, even one byte at a time, then I never get this exception and I can continue to transmit data indefinitely. I've put together a very simple sample application to demonstrate this: it uses basicHttpBinding with transferMode set to Streamed, and on the client I insert an artificial 130-second delay into the user code that supplies the stream. This mimics something like a buffer-underrun condition, in which the stream provided in my service call from the client doesn't deliver data to the WCF infrastructure fast enough to satisfy some unidentified timeout value of roughly 130 seconds.
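
For reference, here is a minimal sketch of the kind of client-side stream I use to simulate the stall; the class name and the single-byte payload are just illustrative, the point is only that Read blocks for 130 seconds before handing the next byte to WCF:

  using System;
  using System.IO;
  using System.Threading;

  // Sketch only: a stream whose Read blocks for 130 seconds before giving WCF
  // the next byte, never returning 0, so the transfer would go on indefinitely.
  public class SlowStream : Stream
  {
      private long _position;

      public override int Read(byte[] buffer, int offset, int count)
      {
          // Simulate a buffer underrun: no data becomes available for 130 seconds.
          Thread.Sleep(TimeSpan.FromSeconds(130));
          buffer[offset] = (byte)'A'; // then deliver a single byte
          _position++;
          return 1;                   // WCF keeps calling Read until 0 is returned
      }

      public override bool CanRead { get { return true; } }
      public override bool CanSeek { get { return false; } }
      public override bool CanWrite { get { return false; } }
      public override long Length { get { throw new NotSupportedException(); } }
      public override long Position
      {
          get { return _position; }
          set { throw new NotSupportedException(); }
      }
      public override void Flush() { }
      public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
      public override void SetLength(long value) { throw new NotSupportedException(); }
      public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
  }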

Using the WCF service trace tools, I can find an HttpException with a message that says: "The client is disconnected because the underlying request has been completed. There is no longer an HttpContext available."

From the IIS Express trace log file, I see an entry that says: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"

I configured both the server and client timeouts to values well above 130 seconds just to rule them out. I've tried idleTimeout in IIS Express and a host of ASP.NET-related timeout values to track down where this is coming from, but no luck so far. The best lead I've found is a comment in the Firefox issue tracker from a developer describing a similar problem while working outside of WCF entirely. For that reason I suspect the problem is more likely related to IIS7 or possibly to Windows itself.

Custom binding in the server's Web.config:

 <binding name="myHttpBindingConfiguration" closeTimeout="02:00:00" openTimeout="02:00:00" receiveTimeout="02:00:00" sendTimeout="02:00:00"> <textMessageEncoding messageVersion="Soap11" /> <httpTransport maxBufferSize="65536" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" transferMode="Streamed" /> </binding> 

Client side configuration in code:

  var binding = new BasicHttpBinding();
  binding.MaxReceivedMessageSize = _maxReceivedMessageSize;
  binding.MaxBufferSize = 65536;
  binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
  binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
  binding.TransferMode = TransferMode.Streamed;
  binding.ReceiveTimeout = TimeSpan.FromDays(1);
  binding.OpenTimeout = TimeSpan.FromDays(1);
  binding.SendTimeout = TimeSpan.FromDays(1);
  binding.CloseTimeout = TimeSpan.FromDays(1);
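
For completeness, here is roughly how the client invokes the streaming operation with that binding. The contract and request shapes below are my guesses based only on the names in the stack trace (IStreamingService, SendStream, MyStreamUpRequest), and the endpoint address is illustrative; this is a sketch, not the actual service code:

  // Requires: using System.IO; using System.ServiceModel;

  // Assumed contract shapes, inferred from the stack trace -- sketch only.
  [ServiceContract]
  public interface IStreamingService
  {
      [OperationContract]
      void SendStream(MyStreamUpRequest request);
  }

  [MessageContract]
  public class MyStreamUpRequest
  {
      // A streamed operation needs a single Stream body member.
      [MessageBodyMember]
      public Stream Data { get; set; }
  }

  // e.g. inside Main, using the binding configured above and the SlowStream
  // sketch from earlier:
  var factory = new ChannelFactory<IStreamingService>(
      binding,
      new EndpointAddress("http://localhost:12345/StreamingService.svc"));
  var client = factory.CreateChannel();
  client.SendStream(new MyStreamUpRequest { Data = new SlowStream() });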

In response to wals' suggestion to see whether I get different results by self-hosting the service, I want to add that I tried this and got the same results as when hosting in IIS. What does that mean? I assume it means the problem lies either in WCF or in the underlying network infrastructure in Windows. I'm on 64-bit Windows 7, and we originally ran into this problem with various clients talking to the service running on Windows 2008 Server.

Update 2013-01-15

I found some new clues thanks to DarkWanderer, once I realized that WCF uses HTTP.sys underneath even in self-hosted scenarios on Windows 7. That got me looking into what can be configured for HTTP.sys and what kinds of problems people report with HTTP.sys that resemble what I'm experiencing. This led me to a log file at C:\Windows\System32\LogFiles\HTTPERR\httperr1.log, where HTTP.sys appears to log certain classes of HTTP problems. In this log I see the following type of entry every time I run my test:

 2013-01-15 17:17:12 127.0.0.1 59111 127.0.0.1 52733 HTTP/1.1 POST /StreamingService.svc - - Timer_EntityBody -

So now the question is what conditions can lead to the Timer_EntityBody error, and which settings in IIS7 or elsewhere determine when and whether it occurs.

From the official IIS website:

The connection expired before the request entity body arrived. When it is clear that a request has an entity body, the HTTP API turns on the Timer_EntityBody timer. Initially, the limit of this timer is set to the connectionTimeout value. Each time another data indication is received on this request, the HTTP API resets the timer to give the connection more time, as specified in the connectionTimeout attribute.

Changing the connectionTimeout attribute in applicationhost.config for IIS Express, as the link above suggests, does not seem to make any difference. Perhaps IIS Express ignores that configuration and uses an internal hard-coded value? Trying things on my own, I found that netsh has gained new http commands for showing and adding timeout values, so I came up with the following command to try, but unfortunately it did not affect this error either.

 netsh http add timeout timeouttype=idleconnectiontimeout value=300

+15 · iis-7 wcf streaming · Jan 10 '13
5 answers

It turns out that this problem is caused by the Connection Timeout value used by HTTP.sys, which can be set through IIS Manager under Advanced Settings for a particular site. By default it is configured to drop the connection when the headers and entity body have not been received within 120 seconds. If a pulse of body data arrives, the server resets the timer (Timer_EntityBody) back to the configured timeout value and waits for more data.

[Screenshot: Connection Time-out setting in IIS Manager]

This lines up with the documentation on Timer_EntityBody and connectionTimeout, but it was hard to pin down because IIS Express appears to ignore the connectionTimeout value specified in the limits element of applicationhost.config, regardless of what the documentation says. To figure this out I had to install full IIS on my development machine, host my site there, and change the setting described above.
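
For anyone who prefers configuration over the IIS Manager UI, in full IIS the same setting maps to the site's limits element. A sketch (the site name and value are illustrative; the default connectionTimeout is two minutes, and other child elements of the site are omitted):

  <!-- applicationhost.config (full IIS) -->
  <system.applicationHost>
    <sites>
      <site name="MyStreamingSite" id="1">
        <!-- Raise the HTTP.sys connection timeout above the 2-minute default -->
        <limits connectionTimeout="00:30:00" />
      </site>
    </sites>
  </system.applicationHost>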

Since we host the real service in IIS on Windows 2008, the above solution will work for me; however, the question remains how to change the connection timeout value correctly when self-hosting.

+15 · Jan 15 '13 at 23:45

Judging by the error:

The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '23:59:59.9110000'.

This seems to be a simple TCP timeout.

You can verify this by running the application self-hosted and then running this command in a console:

 netstat -no |find "xxxxx" 

where xxxxx is the PID of your server process. This command shows the connections your server has established; run it periodically to watch their state.

Try connecting with the client and see what happens. Most likely you will see "CLOSE_WAIT" or "TIME_WAIT" on your connection after about 100-120 seconds, which means it was dropped due to a timeout.

This can be fixed by adding the following to the configuration:

 <httpTransport maxBufferSize="65536" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" transferMode="Streamed" keepAliveEnabled="true" /> <!-- Here --> 

This parameter is explained here.

+2 · Jan 15 '13 at 9:50

Probably a long shot, but... check your IIS application pool settings, in particular the ones under Advanced Settings in the Process Model group.

+1 · Jan 10 '13

Try this; it solved the problem for me. The issue is that the underlying http.sys kernel component has its own timeout and will drop the connection.

 http://mmmreddy.wordpress.com/2013/07/11/wcf-use-of-http-transport-sharing-persistent-tcp-sessions/

 netsh http add timeout timeouttype=idleconnectiontimeout value=120
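
If you want to confirm the value took effect, HTTP.sys will report its configured timeouts (run from an elevated prompt):

 netsh http show timeout
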
0 · Aug 27 '13 at 15:48

Did you read http://support.microsoft.com/kb/946086?

I have seen similar stream interruptions in my ISAPI extensions. After turning off buffering in IIS 7 as described in that KB article, everything has been fine.

0 · Oct 10 '13 at 21:12


