|
Measures reported by RedisMemoryTest
Like CPU, the Redis server has to be sized with adequate memory resources to achieve optimal performance. Excessive memory usage can cause significant deterioration in server performance. This is why it is imperative that administrators track memory usage, detect whether actual usage has reached the server's maximum memory configuration, and configure suitable eviction policies to resolve memory contention. To achieve this, administrators can use the RedisMemoryTest test.
This test reports the memory usage of the Redis server and the maximum memory that the server is configured to use. Administrators are notified if the actual memory usage reaches the maximum memory configuration, thus prompting them to configure appropriate eviction policies. Additionally, the test sheds light on problem conditions such as memory fragmentation and memory swapping, so that administrators can fine-tune memory allocations based on the memory demands of the server operating system.
Outputs of the test: One set of results for the target Redis server
The measures made by this test are as follows:
| Measurement |
Description |
Measurement Unit |
Interpretation |
| maxmemory |
Indicates the value of the maxmemory configuration directive of the server. |
MB |
The maxmemory configuration directive is used to configure Redis to use a specified amount of memory for the data set.
Setting maxmemory to zero results in no memory limit. This is the default behavior for 64-bit systems, while 32-bit systems use an implicit memory limit of 3 GB. |
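For illustration, the directive can be set in redis.conf, or changed at runtime with CONFIG SET (for example, `CONFIG SET maxmemory 100mb`). The sketch below is a hypothetical excerpt; the 100mb limit and the policy chosen are assumptions for illustration, not recommendations:

```
# Illustrative values only -- size the limit for your own workload.
maxmemory 100mb
# Policy applied once the limit is reached (see the eviction policies
# described under the Memory_used measure).
maxmemory-policy allkeys-lru
```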
| Memory_used |
Indicates the amount of memory used by the server. |
MB |
If the value of this measure is equal to the value of the maxmemory measure, it indicates that the server has exhausted its memory limit. The exact behavior Redis follows when the maxmemory limit is reached is configured using the maxmemory-policy configuration directive.
The following policies are available:
- noeviction: returns errors when the memory limit is reached and the client tries to execute commands that could result in more memory being used (most write commands, with DEL and a few other exceptions).
- allkeys-lru: evicts keys by trying to remove the least recently used (LRU) keys first, in order to make space for the new data added.
- volatile-lru: evicts keys by trying to remove the least recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
- allkeys-random: evicts keys randomly in order to make space for the new data added.
- volatile-random: evicts keys randomly in order to make space for the new data added, but only evicts keys with an expire set.
- volatile-ttl: evicts keys with an expire set, trying to evict keys with a shorter time to live (TTL) first, in order to make space for the new data added.
The volatile-lru, volatile-random and volatile-ttl policies behave like noeviction if there are no keys to evict matching the prerequisites.
Picking the right eviction policy for your application's access pattern is important. However, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache hits and misses using the Redis INFO output in order to tune your setup.
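The tuning loop just described boils down to a simple hit-ratio calculation. A minimal sketch in Python, where the two counters mirror the keyspace_hits and keyspace_misses fields of the INFO stats section (the values themselves are made up for illustration):

```python
# Sketch: judging an eviction policy by its cache hit ratio.
# keyspace_hits / keyspace_misses mirror the fields reported by
# the Redis INFO command; the numbers here are illustrative only.
keyspace_hits = 9_200
keyspace_misses = 800

# Fraction of lookups served from the cache.
hit_ratio = keyspace_hits / (keyspace_hits + keyspace_misses)
print(f"{hit_ratio:.1%}")  # → 92.0%
```

A falling hit ratio after a policy change suggests the new policy is evicting keys the application still needs.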
In general as a rule of thumb:
- Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, when you expect a subset of elements to be accessed far more often than the rest. This is a good pick if you are unsure.
- Use allkeys-random if you have cyclic access where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements equally likely to be accessed).
- Use volatile-ttl if you want to be able to hint to Redis which keys are good candidates for expiration by using different TTL values when you create your cache objects.
The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance both for caching and for a set of persistent keys. However, it is usually a better idea to run two Redis instances to solve such a problem.
It is also worth noting that setting an expire on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since there is no need to set an expire for a key to be evicted under memory pressure.
|
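To make the allkeys-lru behavior above concrete, here is a minimal Python sketch of LRU eviction. Note this is an illustration only: real Redis does not keep an exact LRU list but approximates LRU by sampling a few keys per eviction (controlled by the maxmemory-samples directive), and its limit is in bytes rather than a key count.

```python
from collections import OrderedDict

class LRUCache:
    """Toy allkeys-lru-style cache; the key-count limit stands in
    for Redis's byte-based maxmemory limit."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order == recency order

    def get(self, key):
        if key not in self.data:
            return None                 # cache miss
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")              # "a" becomes most recently used
cache.set("c", 3)           # evicts "b", the least recently used key
print(sorted(cache.data))   # → ['a', 'c']
```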
| used_memory_lua |
Indicates the amount of memory used by the Lua engine. |
MB |
|
| used_memory_peak |
Indicates the high watermark of memory consumption by the Redis server. |
MB |
|
| used_memory_rss |
Indicates the amount of memory that Redis has allocated, as seen by the operating system. |
MB |
|
| Mem_frag_ratio |
Indicates the ratio of the memory used as seen by the operating system (used_memory_rss) to the amount of memory allocated by Redis (used_memory). |
Ratio |
A fragmentation ratio less than 1.0 means that Redis requires more memory than is available on the system and has resorted to using swap memory. A fragmentation ratio greater than 1.0 indicates that fragmentation is taking place and the Redis instance is consuming more physical memory than it has requested. A healthy Redis server has a memory fragmentation ratio slightly above 1.0. A ratio greater than 1.5 indicates that excessive fragmentation is taking place. In such instances, you should restart the Redis server to allow the operating system to recover the memory that has become unusable due to fragmentation. |
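The thresholds above can be sketched as a small Python calculation. The two input values mirror the used_memory and used_memory_rss fields of the INFO memory section; the byte counts are made up for illustration:

```python
# Sketch: deriving the fragmentation ratio from two INFO memory fields.
# The byte values below are illustrative only.
used_memory = 50_000_000       # bytes allocated by Redis
used_memory_rss = 80_000_000   # bytes as seen by the operating system

mem_fragmentation_ratio = used_memory_rss / used_memory

if mem_fragmentation_ratio < 1.0:
    status = "swapping: Redis needs more memory than the system can provide"
elif mem_fragmentation_ratio > 1.5:
    status = "excessive fragmentation: consider restarting the server"
else:
    status = "healthy"

print(round(mem_fragmentation_ratio, 2), status)
# → 1.6 excessive fragmentation: consider restarting the server
```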
| total_system_memory |
Indicates the total amount of memory the Redis host has. |
MB |
|
|