Optimizing Redis Usage for Caching [Translation]

Original article: http://sorentwo.com/2015/07/27/optimizing-redis-usage-for-caching.html

If, like me, you consider Redis a good fit for caching, the following four techniques will help you optimize Redis as your caching infrastructure. Even if you use a hosted solution, these optimizations still apply.

Use a Dedicated Cache Instance

Unlike Memcached, Redis is not multi-threaded; it runs as a single process with a single thread. Fast as Redis is, consider the workload a single shared instance must absorb as your platform's traffic grows: background jobs keep piling up, pub/sub channels relay thousands of messages, and the cache hit rate keeps climbing. Eventually requests start to back up, and your only options are to shed background work or add a set of load-balanced servers to get past the bottleneck.

Running multiple independent instances relieves that pressure: one instance for background jobs, one for pub/sub, and another dedicated to caching. Do not rely on splitting data across multiple Redis databases! Those databases all sit behind the same single process, so every caveat above still applies.

To summarize:

  • Use a dedicated Redis instance for each distinct workload;
  • Do not use databases (/0, /1, /2) to partition workloads.
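One way to lay this out, sketched here with illustrative ports and file names (these are assumptions, not taken from the original article), is to run one redis-server process per workload, each with its own configuration file:

```shell
# One process per workload; each conf file sets its own port,
# persistence, and memory policy.
redis-server /etc/redis/jobs.conf    # background jobs, e.g. port 6379
redis-server /etc/redis/pubsub.conf  # pub/sub fan-out, e.g. port 6380
redis-server /etc/redis/cache.conf   # dedicated cache, e.g. port 6381
```

Each process is then tuned independently, which is exactly what the following sections take advantage of.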

Relax the Persistence Policy

You should care about persistence and replication, two features only available in Redis. Even if your goal is only to build a cache, they help ensure that your data survives an upgrade or a restart.

—Antirez

Each Redis instance has its own configuration file and can be tuned to its use case. A caching server, for example, can be configured to use RDB persistence, periodically saving a single backup, instead of the AOF persistence log. By taking only periodic snapshots of the database, RDB maximizes performance at the expense of up-to-the-second consistency. For a hybrid Redis instance that may be storing business-critical background jobs, data consistency is paramount. With a cache it is alright to lose some data in the event of a disaster; after a reboot most of the cache will be warm and intact.

To summarize:

  • Do optimize cache persistence speed by favoring RDB over AOF.
  • Do set stop-writes-on-bgsave-error to no to prevent all writes from failing when snapshotting fails. This requires proper monitoring and alerting for failures, which you are doing anyhow, right?
  • Do not disable persistence entirely; it is valuable for warming the cache after an upgrade or restart.
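In redis.conf terms, a cache-oriented persistence setup might look like this (the snapshot interval is an illustrative choice, not a recommendation from the article):

```conf
save 900 1                       # RDB: snapshot if >= 1 key changed in 15 min
appendonly no                    # skip the AOF log on the cache instance
stop-writes-on-bgsave-error no   # keep accepting writes if a snapshot fails
```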

Manage Memory Effectively

Once you have a Redis instance dedicated to caching you can start to optimize memory management in ways that don't make sense for a hybrid database. When ephemeral and long-lived data are commingled, it is imperative that ephemeral keys carry a TTL so that Redis is free to clean up expired keys.

Redis can manage memory in a variety of ways. The policies range from never evicting keys (noeviction) to randomly evicting a key when memory is full (allkeys-random). Hybridized databases typically use the volatile-* policies, which require expiration values to be present; otherwise they behave identically to noeviction. Another policy works better for cache data: allkeys-lru. The allkeys-lru policy evicts the least recently used (LRU) keys first to make room for newly added data.

It is also worth noting that setting an expire on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since there is no need to set an expire for a key to be evicted under memory pressure.

—Redis Documentation

Redis uses an approximated LRU algorithm rather than an exact one, which means you can trade eviction accuracy for memory and speed by tuning the number of keys sampled on each eviction. Set maxmemory-samples to a low value, say around 5, for "good enough" eviction with a low memory footprint. Lastly, and most importantly, set a maxmemory limit to a comfortable amount of RAM. Without a limit Redis cannot function properly as an LRU cache and will start replying with errors once memory-consuming commands begin failing.

To summarize:

  • Do set a maxmemory limit.
  • Do use the allkeys-lru policy for dedicated cache instances; let Redis manage key eviction by itself.
  • Do not set expires on keys; each one adds memory overhead.
  • Do tune the precision of the LRU algorithm to favor speed over accuracy. Redis does not pick the single best candidate for eviction; it samples a small number of keys and evicts the one with the oldest access time.
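Pulling those points together, the memory settings for a dedicated cache instance could look like this in redis.conf (the 2gb limit is an illustrative value; size it to the RAM you can spare):

```conf
maxmemory 2gb                  # hard cap; required for LRU eviction to work
maxmemory-policy allkeys-lru   # evict approximately least-recently-used keys
maxmemory-samples 5            # keys sampled per eviction; higher = more accurate
```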

Use the Cache Correctly

Because of Redis data structures, the usual pattern used with memcached of destroying objects when the cache is invalidated, to recreate it from the DB later, is a primitive way of using Redis.

—Antirez

Only storing serialized HTML or JSON as strings, the standard way of caching for web applications, doesn’t fully utilize Redis as a cache. One of the great strengths of Redis over Memcached is the rich set of data structures available. Ordered lists, structured hashes, and sorted sets are particularly useful caching tools only available through Redis. Caching is more than stuffing everything into strings.

Let’s look at the Hash type for a specific example.

Small hashes are encoded in a very small space, so you should try representing your data using hashes every time it is possible. For instance if you have objects representing users in a web application, instead of using different keys for name, surname, email, password, use a single hash with all the required fields.

—Redis Documentation

Instead of storing objects as a serialized string you can store the object as fields and values available through a single key. Using a Hash saves web servers the work of fetching an entire serialized value, de-serializing it, updating it, re-serializing it, and finally writing it back to the cache. Eliminating that flow for every minor update pushes the work into Redis and out of your applications, where it is supposed to be.
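To make the difference concrete, here is a minimal sketch in Python. It uses a plain dict as a stand-in store so it runs without a Redis server; against real Redis the string approach maps to GET/SET and the hash approach to HGET/HSET (the `user:42` key and the field names are made up for illustration):

```python
import json

# Stand-in for the Redis keyspace so the sketch is self-contained.
store = {}

# String approach: every update round-trips the whole serialized object.
store["user:42"] = json.dumps({"name": "Ada", "email": "ada@example.com"})
user = json.loads(store["user:42"])   # fetch and deserialize everything (GET)
user["email"] = "ada@lovelace.dev"    # change a single field
store["user:42"] = json.dumps(user)   # reserialize and write it all back (SET)

# Hash approach: one key, one field per attribute; updating a single
# field touches only that field inside Redis.
store["user:42:hash"] = {"name": "Ada", "email": "ada@example.com"}
store["user:42:hash"]["email"] = "ada@lovelace.dev"   # ~ HSET user:42 email ...
print(store["user:42:hash"]["email"])                 # ~ HGET user:42 email
```

With the hash, the read-modify-write cycle collapses into one command executed inside Redis, which is the point the paragraph above makes.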

To summarize:

  • Do use the native Redis types wherever possible (list, set, zset, hash).
  • Do not use the string type for structured data; reach for a hash.

Happy optimizing. Go forth and cache!