Redis Efficiency on AWS EC2 Micro Instance

I made a curious observation about a Redis instance deployed on my AWS EC2 Micro instance (test environment).

I measured the execution time of various operations that hit Redis. The average runtimes are shown below:

  • Jedis -> Redis connection: 63 milliseconds
  • Read of the top element of a list using lrange(<listname>, 0, 1): 44 milliseconds
  • Read of all elements of a set: 5 ms
  • Iteration over the entire set: 60 ms (set size approx. 130 elements)
  • Iteration over a subset of the set's elements: 5 ms (subset size 5)
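
For reference, a minimal sketch of how such timings could be collected with Jedis is shown below. It is an illustration, not the original measurement code: the host localhost mirrors the connection snippet further down, and the literal key "holderDate" stands in for the holderDate variable used later in the question.

  import redis.clients.jedis.Jedis;

  public class RedisTimings {
      public static void main(String[] args) {
          // Time the connection handshake (Jedis connects lazily, so PING forces it).
          long t0 = System.nanoTime();
          Jedis redis = new Jedis("localhost");
          redis.ping();
          long connectMs = (System.nanoTime() - t0) / 1_000_000;

          // Time reading the top element of the list; "holderDate" is an assumed
          // literal key standing in for the holderDate variable from the question.
          long t1 = System.nanoTime();
          String top = redis.lrange("holderDate", 0, 1).get(0);
          long lrangeMs = (System.nanoTime() - t1) / 1_000_000;

          System.out.println("connect: " + connectMs + " ms, lrange: " + lrangeMs + " ms, top = " + top);
          redis.disconnect();
      }
  }

Running this a few times in a row also makes the cost of the initial TCP handshake visible compared to subsequent commands issued on the same connection.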

Now the first 2 operations (connecting and extracting the top item in the list) bother me.

To connect, the code is shown below:

  Jedis redis = new Jedis("localhost");

And to retrieve the top item in the list:

  String currentDate = redis.lrange(holderDate, 0, 1).get(0);

Now from the Redis lrange Command:

Time complexity: O(S+N), where S is the start offset and N is the number of elements in the specified range.

Now from my code S will be 0 and N will be at most 2 (both LRANGE indexes are inclusive), so the range is tiny either way.

So my question is: what causes these runtimes for such trivial operations?

Are there any EC2 Micro instance characteristics that could adversely affect the performance of these operations?

Some key information about the Redis deployment:

  redis_version:2.4.10
  used_memory:2869280
  used_memory_human:2.74M
  used_memory_rss:4231168
  used_memory_peak:2869480
  used_memory_peak_human:2.74M
  mem_fragmentation_ratio:1.47
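
These values come from Redis' INFO command; the following is a small sketch (an illustration, under the same localhost assumption as above) of pulling the same fields programmatically via Jedis' info() method.

  import redis.clients.jedis.Jedis;

  public class RedisInfoDump {
      public static void main(String[] args) {
          Jedis redis = new Jedis("localhost");
          // INFO returns the same key:value lines quoted above (redis_version, used_memory, ...),
          // separated by CRLF; print only the fields shown in the question.
          for (String line : redis.info().split("\r\n")) {
              if (line.startsWith("redis_version")
                      || line.startsWith("used_memory")
                      || line.startsWith("mem_fragmentation_ratio")) {
                  System.out.println(line);
              }
          }
          redis.disconnect();
      }
  }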

Thanks in advance.

1 answer

Are there any EC2 Micro instance characteristics that adversely affect the performance of these operations?

The Amazon EC2 t1.micro instance type is somewhat special and heavily throttled by design, see Micro Instances:

Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for applications and websites that require additional compute cycles periodically. [emphasis mine]

The latter is true in principle, but the amount of throttling takes many users by surprise - while the exact algorithm is not documented, the documentation illustrates the general strategy and its effect quite well, which in practice seems to yield around ~97% so-called steal time once throttling kicks in; see the section When an Instance Uses Its Allotted Resources:

We expect that your application will consume only a certain amount of CPU resources in a period of time. If the application consumes more than your instance's allotted CPU resources, we temporarily limit the instance so that it operates at a low CPU level. If your instance continues to use all of its allotted resources, its performance will degrade. We will increase the time that we limit its CPU level, thus increasing the time before the instance is allowed to burst again. [emphasis mine]

This can certainly spoil benchmark results, as Didier Spezia rightly commented already. Please note that while other EC2 instance types can exhibit steal time as well (a common artifact of virtualization platforms where physical CPUs may be shared by different virtual machines), the respective patterns are much more regular there, so performance tests are possible in principle, although in general the following restrictions apply:

  • you will need to run tests across multiple instances, at least to account for different amounts of steal time caused by randomly busy neighboring virtual machines;
  • you should not run the benchmarking application on the same virtual machine as the one under test, as this obviously skews the results.
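
Since steal time is the symptom to watch for here, the sketch below (an illustration, not part of the original answer) samples the cumulative steal counter from the aggregate cpu line of /proc/stat on a Linux guest - the same figure that top and vmstat report as %st / st.

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Paths;

  public class StealTime {
      // Cumulative "steal" jiffies from the aggregate cpu line of /proc/stat:
      // cpu user nice system idle iowait irq softirq steal ...
      static long readSteal() throws IOException {
          String cpuLine = Files.readAllLines(Paths.get("/proc/stat")).get(0);
          String[] fields = cpuLine.trim().split("\\s+");
          return Long.parseLong(fields[8]);   // 8th value after "cpu" is steal
      }

      public static void main(String[] args) throws Exception {
          long before = readSteal();
          Thread.sleep(5000);                 // sample over a 5 second window
          System.out.println("steal jiffies in the last 5s: " + (readSteal() - before));
      }
  }

A sustained jump in this counter while the Redis commands are running would point at hypervisor throttling rather than at Redis itself.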

Source: https://habr.com/ru/post/1479943/

