I've recently been investigating a tricky problem with WCF streaming in which a CommunicationException is thrown if the client waits more than 130 seconds between sends to the server.
Here is the complete exception:
System.ServiceModel.CommunicationException was unhandled by user code
  HResult=-2146233087
  Message=The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '23:59:59.9110000'.
  Source=mscorlib
  StackTrace:
    Server stack trace:
       at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.WebRequestOutputStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       at System.IO.BufferedStream.Write(Byte[] array, Int32 offset, Int32 count)
       at System.Xml.XmlStreamNodeWriter.FlushBuffer()
       at System.Xml.XmlStreamNodeWriter.GetBuffer(Int32 count, Int32& offset)
       at System.Xml.XmlUTF8NodeWriter.InternalWriteBase64Text(Byte[] buffer, Int32 offset, Int32 count)
       at System.Xml.XmlBaseWriter.WriteBase64(Byte[] buffer, Int32 offset, Int32 count)
       at System.Xml.XmlDictionaryWriter.WriteValue(IStreamProvider value)
       at System.ServiceModel.Dispatcher.StreamFormatter.Serialize(XmlDictionaryWriter writer, Object[] parameters, Object returnValue)
       at System.ServiceModel.Dispatcher.OperationFormatter.OperationFormatterMessage.OperationFormatterBodyWriter.OnWriteBodyContents(XmlDictionaryWriter writer)
       at System.ServiceModel.Channels.Message.OnWriteMessage(XmlDictionaryWriter writer)
       at System.ServiceModel.Channels.TextMessageEncoderFactory.TextMessageEncoder.WriteMessage(Message message, Stream stream)
       at System.ServiceModel.Channels.HttpOutput.WriteStreamedMessage(TimeSpan timeout)
       at System.ServiceModel.Channels.HttpOutput.Send(TimeSpan timeout)
       at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.SendRequest(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
       at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    Exception rethrown at [0]:
       at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       at WcfService.IStreamingService.SendStream(MyStreamUpRequest request)
       at Client.Program.<Main>b__0() in c:\Users\jpierson\Documents\Visual Studio 2012\Projects\WcfStreamingTest\Client\Program.cs:line 44
       at System.Threading.Tasks.Task.Execute()
  InnerException: System.IO.IOException
    HResult=-2146232800
    Message=Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    Source=System
    StackTrace:
       at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
       at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
       at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)
       at System.ServiceModel.Channels.BytesReadPositionStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       at System.ServiceModel.Channels.HttpOutput.WebRequestHttpOutput.WebRequestOutputStream.Write(Byte[] buffer, Int32 offset, Int32 count)
    InnerException: System.Net.Sockets.SocketException
      HResult=-2147467259
      Message=An existing connection was forcibly closed by the remote host
      Source=System
      ErrorCode=10054
      NativeErrorCode=10054
      StackTrace:
         at System.Net.Sockets.Socket.MultipleSend(BufferOffsetSize[] buffers, SocketFlags socketFlags)
         at System.Net.Sockets.NetworkStream.MultipleWrite(BufferOffsetSize[] buffers)
      InnerException:
It seems that the server is closing the connection prematurely due to inactivity on the connection. If I instead keep some momentum going by sending data to the server, even a single byte at a time, then I never get this exception and I can continue transmitting data indefinitely. I've put together a very simple sample application that demonstrates this: it uses basicHttpBinding with the Streamed transferMode, and I insert an artificial 130-second delay on the client's user thread. This mimics something like a buffer underrun condition, where the stream provided in my service call from the client isn't delivering data to the WCF infrastructure fast enough to satisfy some unidentified timeout of apparently about 130 seconds.
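For reference, the artificial delay can be injected with a small Stream wrapper along these lines. This is a sketch, not the verbatim code from my test app; the DelayedStream name and its exact shape are illustrative:

```csharp
using System;
using System.IO;
using System.Threading;

// Wraps an inner stream and stalls once before the first read, simulating
// a slow producer feeding a streamed WCF upload. With a delay longer than
// roughly 130 seconds, the server aborts the connection.
public class DelayedStream : Stream
{
    private readonly Stream _inner;
    private readonly TimeSpan _delay;
    private bool _delayed;

    public DelayedStream(Stream inner, TimeSpan delay)
    {
        _inner = inner;
        _delay = delay;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (!_delayed)
        {
            Thread.Sleep(_delay); // e.g. TimeSpan.FromSeconds(130)
            _delayed = true;
        }
        return _inner.Read(buffer, offset, count);
    }

    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { _inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}
```

Passing a stream like this into the streamed service call reproduces the exception reliably once the delay crosses the ~130-second mark.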
Using the WCF service trace tools, I can find an HttpException with a message that reads: "The client is disconnected because the underlying request has been completed. There is no longer an HttpContext available."
From the IIS Express trace log file I see an entry that reads: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"
I've configured both the server and client timeouts to values well above 130 seconds just to rule them out. I've tried the idleTimeout in IIS Express and a host of ASP.NET-related timeout values to track down where this behavior is coming from, but still no luck. The best lead I've found so far is a comment in the Firefox issue tracker by a developer describing a similar problem he ran into outside of the WCF stack. For that reason I'm guessing the problem is more likely related to IIS7 or possibly to Windows Server itself.
Custom binding configuration in the server's Web.config:
<binding name="myHttpBindingConfiguration"
         closeTimeout="02:00:00"
         openTimeout="02:00:00"
         receiveTimeout="02:00:00"
         sendTimeout="02:00:00">
  <textMessageEncoding messageVersion="Soap11" />
  <httpTransport maxBufferSize="65536"
                 maxReceivedMessageSize="2147483647"
                 maxBufferPoolSize="2147483647"
                 transferMode="Streamed" />
</binding>
Client side configuration in code:
var binding = new BasicHttpBinding();
binding.MaxReceivedMessageSize = _maxReceivedMessageSize;
binding.MaxBufferSize = 65536;
binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
binding.TransferMode = TransferMode.Streamed;
binding.ReceiveTimeout = TimeSpan.FromDays(1);
binding.OpenTimeout = TimeSpan.FromDays(1);
binding.SendTimeout = TimeSpan.FromDays(1);
binding.CloseTimeout = TimeSpan.FromDays(1);
In response to wals' suggestion to see whether I get different results by self-hosting my service, I'd like to add that I did try this and found that I get the same results as when hosting in IIS. So what does that mean? I take it to mean the problem lies either in WCF or in the underlying networking infrastructure in Windows. I'm using Windows 7 64-bit, and we originally discovered this problem running various clients against the service side hosted on Windows 2008 Server.
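The self-hosted variant looked roughly like the following. This is a simplified sketch, not my exact test code: the real contract takes a MyStreamUpRequest (as in the stack trace above) rather than a bare Stream, and the service implementation here is a stand-in:

```csharp
using System;
using System.IO;
using System.ServiceModel;

// Minimal self-hosted equivalent of the IIS-hosted setup; the exception
// behavior was identical in both hosting modes.
[ServiceContract]
public interface IStreamingService
{
    [OperationContract]
    void SendStream(Stream data);
}

public class StreamingService : IStreamingService
{
    public void SendStream(Stream data)
    {
        var buffer = new byte[4096];
        while (data.Read(buffer, 0, buffer.Length) > 0)
        {
            // Consume the incoming stream.
        }
    }
}

class SelfHostProgram
{
    static void Main()
    {
        var binding = new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = int.MaxValue,
            ReceiveTimeout = TimeSpan.FromDays(1),
            SendTimeout = TimeSpan.FromDays(1)
        };

        using (var host = new ServiceHost(typeof(StreamingService),
                   new Uri("http://localhost:8080/")))
        {
            host.AddServiceEndpoint(typeof(IStreamingService), binding, "StreamingService");
            host.Open();
            Console.WriteLine("Self-hosted service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Since self-hosted WCF over HTTP goes through HTTP.sys rather than IIS, getting the same failure here is what pointed me toward HTTP.sys below.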
Update 2013-01-15
I found some new clues, thanks to DarkWanderer, once I realized that WCF uses HTTP.sys underneath in self-hosted scenarios on Windows 7. That got me looking into what can be configured for HTTP.sys and what problems people report with HTTP.sys that resemble what I'm experiencing. This led me to a log file located at C:\Windows\System32\LogFiles\HTTPERR\httperr1.log, which appears to log certain types of HTTP problems on behalf of HTTP.sys. In this log I see the following type of entry every time I run my test:
2013-01-15 17:17:12 127.0.0.1 59111 127.0.0.1 52733 HTTP/1.1 POST /StreamingService.svc - - Timer_EntityBody -
So now the task is to determine what conditions can lead to a Timer_EntityBody error, and which settings in IIS7 or elsewhere may be relevant to when and whether this error occurs.
From the official IIS site:
The connection expired before the request entity body arrived. When it is clear that a request has an entity body, the HTTP API turns on the Timer_EntityBody timer. Initially, the limit of this timer is set to the connectionTimeout value. Each time another data indication is received on this request, the HTTP API resets the timer to give the connection more minutes, as specified in the connectionTimeout attribute.
Attempting to change the connectionTimeout attribute, as the link above suggests, in applicationhost.config for IIS Express doesn't appear to make any difference. Perhaps IIS Express ignores that configuration and uses a hard-coded value internally? Poking around on my own, I found that new netsh http commands have been added to show and add timeout values, so I came up with the following command to try, but unfortunately it made no difference to this error either.
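For completeness, the applicationhost.config change I attempted looked something like the fragment below. The site name and timeout value are illustrative; note that per the IIS documentation the site-level connectionTimeout defaults to two minutes (00:02:00), which lines up suspiciously well with the ~130-second behavior, yet raising it had no observable effect under IIS Express:

```xml
<!-- applicationhost.config (IIS Express): per-site connection timeout.
     Site name and value are examples; this change made no difference
     in my tests. -->
<system.applicationHost>
  <sites>
    <site name="WcfStreamingTest" id="1">
      <limits connectionTimeout="00:05:00" />
      <!-- existing bindings/applications unchanged -->
    </site>
  </sites>
</system.applicationHost>
```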
netsh http add timeout timeouttype=IdleConnectionTimeout value=300