Why doesn't changing SendTimeout help for hosted WCF services?

In .NET 3.0, you would handle two different timeouts:

· Binding.SendTimeout

This timeout specifies how long the client waits for the transport to finish writing data before throwing an exception. It is a client-side setting. If a request is likely to take longer than the default (1 minute), you need to increase it.

· Binding.ReceiveTimeout

This timeout specifies how long the service can wait, from the start of receiving a request, until the message has been processed. It is a server-side setting. If you send a large message to the service and the service needs a long time to process it, you need to increase this setting.
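Both timeouts are set on the binding. A minimal configuration sketch (the binding name "myBinding" is illustrative; timeouts use the TimeSpan format, so "00:10:00" means 10 minutes):

```xml
<!-- sendTimeout matters on the client side; receiveTimeout on the service side. -->
<bindings>
  <basicHttpBinding>
    <binding name="myBinding"
             sendTimeout="00:10:00"
             receiveTimeout="00:10:00" />
  </basicHttpBinding>
</bindings>
```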

Ideally, these two timeouts should solve most timeout problems. However, when a WCF service is hosted in IIS/ASP.NET, another setting also controls the lifetime of the request:

· HttpRuntimeSection.ExecutionTimeout

The default value is 110 seconds. If a slow service operation runs past this limit, the request is aborted and an ASP.NET event log entry reports that the request has timed out. You can configure this setting through web.config as follows:

<configuration>
  <system.web>
    <httpRuntime executionTimeout="600"/>
  </system.web>
</configuration>

This sets the timeout to 600 seconds (10 minutes). From code, you can use the following API to achieve the same:

· HttpApplication.Server.ScriptTimeout
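For example, the timeout can be raised at the start of each request from Global.asax (a sketch, assuming the handler runs in an ASP.NET application; the value of 600 is just chosen to match the web.config example above):

```csharp
// Global.asax.cs -- raise the ASP.NET script timeout for slow requests.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // ScriptTimeout is expressed in seconds.
    this.Server.ScriptTimeout = 600;
}
```

Note that when the compilation element in web.config has debug="true", ASP.NET ignores executionTimeout/ScriptTimeout entirely, so the timeout only takes effect in non-debug deployments.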

If you use ASMX services, you would hit this exact problem too.

Fortunately, this was enhanced in .NET 3.0 SP1 so that it is taken care of internally: ScriptTimeout is set to Int32.MaxValue for WCF requests, giving WCF full control over the lifetime of its requests.