I am running the latest version of the WindowsAzure.Storage library, 6.1.1. This was previously a known issue, but it is believed to have been fixed in .NET 4.5.1, which is exactly what I have.
I am pushing roughly 100 million rows into an Azure Table Storage table. I focused on making the code fast and scalable; it saturates an Azure D12 VM running Windows Server 2012 R2 Datacenter, and I see 5,000-10,000 entities processed per second (reading from file, processing, uploading).
Update: this happens ONLY on the Azure VM. On my home machine it does not happen.
The process always dies after ~16,384 batches (about 320,000 records) with a classic port-exhaustion error: "Only one usage of each socket address (protocol/network address/port) is normally permitted."
I did the usual things: increased MaxUserPort (64434) and decreased TcpTimedWaitDelay (15 seconds). MaxUserPort seems to be ignored, given that the failure hits at the suspiciously round number of 16,384.
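For reference, the registry tweaks above amount to something like the following (a sketch for an elevated Windows command prompt; the key path and value names are the standard TCP/IP parameters, the values are the ones stated above, and a reboot is needed for them to take effect):

```shell
:: Raise the upper bound of the ephemeral (dynamic) port range.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 64434 /f

:: Shorten TIME_WAIT so closed sockets are recycled after 15 seconds.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 15 /f
```

Note that TcpTimedWaitDelay only helps sockets that actually reach TIME_WAIT; as described below, these sockets never get that far.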
Netstat shows that the ports are never closed in the first place. All of them stay in the ESTABLISHED state until the process itself exits, at which point they disappear.
The actual connection code comes down to:
var acx = CloudStorageAccount.Parse(conn);
var client = acx.CreateCloudTableClient();
var table = client.GetTableReference("Test");

var op = new TableBatchOperation();
foreach (var record in batch)
    op.InsertOrReplace(record);

try
{
    await table.ExecuteBatchAsync(op, opsConfig, null);
    Interlocked.Add(ref totalUploaded, batch.Count);
}
catch
{
    // error handling elided
}
The records are grouped into TableBatchOperations for upload, with several batches in flight at a time. Any help would be appreciated!