TableClient.RetryPolicy vs. TransientFaultHandling

My colleague and I were tasked with finding connection-retry logic for Azure Table Storage. After some searching, I found this cool Enterprise Library package that contains the Microsoft.Practices.TransientFaultHandling namespace.

Following a few code examples, I ended up creating an Incremental retry strategy and wrapping one of our storage calls with retryPolicy.ExecuteAction:

    /// <inheritdoc />
    public void SaveSetting(int userId, string bookId, string settingId, string itemId, JObject value)
    {
        // Define your retry strategy: retry 5 times, starting 1 second apart,
        // adding 2 seconds to the interval each retry.
        var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));

        // The policy itself was not shown in the original snippet; presumably it was
        // built from the strategy along these lines:
        var retryPolicy = new RetryPolicy<StorageTransientErrorDetectionStrategy>(retryStrategy);

        var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting(StorageConnectionStringName));

        try
        {
            retryPolicy.ExecuteAction(() =>
            {
                var tableClient = storageAccount.CreateCloudTableClient();
                var table = tableClient.GetTableReference(SettingsTableName);
                table.CreateIfNotExists();

                var entity = new Models.Azure.Setting
                {
                    PartitionKey = GetPartitionKey(userId, bookId),
                    RowKey = GetRowKey(settingId, itemId),
                    UserId = userId,
                    BookId = bookId.ToLowerInvariant(),
                    SettingId = settingId.ToLowerInvariant(),
                    ItemId = itemId.ToLowerInvariant(),
                    Value = value.ToString(Formatting.None)
                };

                table.Execute(TableOperation.InsertOrReplace(entity));
            });
        }
        catch (StorageException exception)
        {
            ExceptionHelpers.CheckForPropertyValueTooLargeMessage(exception);
            throw;
        }
    }

Feeling pretty good about it, I went to show my colleague, and he calmly pointed out that we could do the same thing without pulling in the Enterprise Library at all, because the CloudTableClient object already has a built-in retry policy property. His code looked like this:

    /// <inheritdoc />
    public void SaveSetting(int userId, string bookId, string settingId, string itemId, JObject value)
    {
        var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting(StorageConnectionStringName));
        var tableClient = storageAccount.CreateCloudTableClient();

        // Set the retry policy on the table client itself.
        tableClient.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 3);

        var table = tableClient.GetTableReference(SettingsTableName);
        table.CreateIfNotExists();

        var entity = new Models.Azure.Setting
        {
            PartitionKey = GetPartitionKey(userId, bookId),
            RowKey = GetRowKey(settingId, itemId),
            UserId = userId,
            BookId = bookId.ToLowerInvariant(),
            SettingId = settingId.ToLowerInvariant(),
            ItemId = itemId.ToLowerInvariant(),
            Value = value.ToString(Formatting.None)
        };

        try
        {
            table.Execute(TableOperation.InsertOrReplace(entity));
        }
        catch (StorageException exception)
        {
            ExceptionHelpers.CheckForPropertyValueTooLargeMessage(exception);
            throw;
        }
    }

My question is:

Is there any significant difference between the two approaches besides their implementation? Both of them seem to achieve the same goal, but are there any cases when it is better to use one over the other?

1 answer

Functionally speaking, both are the same - they both retry requests when transient errors occur. However, there are a few differences:

  • The retry policy in the storage client library only handles retries for storage operations, whereas the Transient Fault Handling Application Block handles not only storage operations but also SQL Azure, Service Bus, and Cache operations in case of transient errors. So if your project uses more than one of these services and you want a single approach to transient fault handling, you may want to use the Transient Fault Handling Application Block (see the SQL sketch after the interception example below).
  • One thing I liked about the Transient Fault Handling Application Block is that you can intercept the retry attempts, which you cannot do with the storage client's retry policy. For example, look at the code below:

    var retryManager = EnterpriseLibraryContainer.Current.GetInstance<RetryManager>();
    var retryPolicy = retryManager.GetRetryPolicy<StorageTransientErrorDetectionStrategy>(
        ConfigurationHelper.ReadFromServiceConfigFile(Constants.DefaultRetryStrategyForTableStorageOperationsKey));

    retryPolicy.Retrying += (sender, args) =>
    {
        // Log details of the retry.
        var message = string.Format(
            CultureInfo.InvariantCulture,
            TableOperationRetryTraceFormat,
            "TableStorageHelper::CreateTableIfNotExist",
            storageAccount.Credentials.AccountName,
            tableName,
            args.CurrentRetryCount,
            args.Delay);
        TraceHelper.TraceError(message, args.LastException);
    };

    try
    {
        var isTableCreated = retryPolicy.ExecuteAction(() =>
        {
            var table = storageAccount.CreateCloudTableClient().GetTableReference(tableName);
            return table.CreateIfNotExists(requestOptions, operationContext);
        });
        return isTableCreated;
    }
    catch (Exception)
    {
        throw;
    }

In the code example above, I can intercept the retry attempts and do something there if I want to. This is not possible with the storage client library's retry policy.
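
To illustrate the first point above, here is a minimal sketch (not part of the original answer) of reusing the same application block for a SQL Azure call. It assumes the SqlDatabaseTransientErrorDetectionStrategy that ships with the block's data package and a hypothetical connectionString variable:

    // Sketch: the same ExecuteAction pattern as the storage example, but with a
    // detection strategy that decides which SQL errors count as transient.
    var sqlRetryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(
        new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2)));

    sqlRetryPolicy.ExecuteAction(() =>
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT 1", connection))
            {
                command.ExecuteScalar();
            }
        }
    });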

Having said all that, it is generally recommended to go with the storage client library's retry policy for retrying storage operations, since it is an integral part of the package and will therefore be kept up to date with the latest changes to the library.
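
As a side note, the storage client's retry policy does not have to live on the client object; it can also be scoped to a single call via TableRequestOptions. A minimal sketch, assuming the same classic Microsoft.WindowsAzure.Storage SDK used in the snippets above:

    // Per-operation retry settings: these override the client-level RetryPolicy
    // for this call only. LinearRetry is the fixed-interval alternative to ExponentialRetry.
    var requestOptions = new TableRequestOptions
    {
        RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(1), 5)
    };

    table.Execute(TableOperation.InsertOrReplace(entity), requestOptions);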



