Redis expiration handling
1) Lazy (passive) expiration: when a key is accessed again and is found to have passed its expiration time, it is deleted and the caller gets back nil.
2) Active expiration: after the Redis server starts, it registers a timer event for serverCron (initially scheduled at 1 ms, then rescheduled so that it runs server.hz times per second). The handler cycles through the databases; the rough strategy, driven by each db's expires dict, is:
-----a. Take up to 20 (ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP) random keys and delete any that have expired. If fewer than 5 (ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP/4) of the sampled keys were expired, stop working on this db and move on to the next one.
-----b. Repeat step a; every 16 iterations, check whether the cycle has already run too long (> 25 ms by default) and stop if so, otherwise keep going.

typedef struct redisDb {
    dict *dict;                 /* The keyspace for this DB */
    dict *expires;              /* Timeout of keys with a timeout set */
                                /* A key with an expiration set appears in
                                 * both dict and expires. */
    dict *blocking_keys;        /* Keys with clients waiting for data (BLPOP) */
    dict *ready_keys;           /* Blocked keys that received a PUSH */
    dict *watched_keys;         /* WATCHED keys for MULTI/EXEC CAS */
    int id;
    long long avg_ttl;          /* Average TTL, just for stats */
                                /* Approximate average TTL of this db's keys. */
} redisDb;

struct redisServer {
    /* General */
    char *configfile;           /* Absolute config file path, or NULL */
    int hz;                     /* serverCron() calls frequency in hertz */
    redisDb *db;
    dict *commands;             /* Command table */
    dict *orig_commands;        /* Command table before command renaming. */
    aeEventLoop *el;
    unsigned lruclock:22;       /* Clock incrementing every minute, for LRU */
                                /* Coarse clock used for LRU aging. */
    unsigned lruclock_padding:10;
    int shutdown_asap;          /* SHUTDOWN needed ASAP */
    ....
};

#define REDIS_LRU_CLOCK_MAX ((1<<21)-1) /* Max value of obj->lru */
#define REDIS_LRU_CLOCK_RESOLUTION 10 /* LRU clock resolution in seconds */
typedef struct redisObject {
    unsigned type:4;
    unsigned notused:2;     /* Not used */
    unsigned encoding:4;
    unsigned lru:22;        /* lru time (relative to server.lruclock) */
    int refcount;
    void *ptr;
} robj;

Each time a key is accessed, the lru field of its value is refreshed to the current server.lruclock; see db.c:lookupKey(). Both reads and writes go through lookupKey, so either kind of access updates it.

int expireIfNeeded(redisDb *db, robj *key) {
    mstime_t when = getExpire(db,key); /* the absolute expiration deadline;
                                        * -1 if no expiration is set. */
    ....                               /* if the key has not expired, return
                                        * here; the code below handles the
                                        * expired case. */
    /* Delete the key */
    server.stat_expiredkeys++;
    propagateExpire(db,key);           /* propagate the expiration to the
                                        * slaves and the AOF. */
    notifyKeyspaceEvent(REDIS_NOTIFY_EXPIRED, "expired",key,db->id);
                                       /* broadcast the expiration to the
                                        * keyspace-notification subscribers. */
    return dbDelete(db,key);           /* remove the key from both db->expires
                                        * and db->dict. */
}

robj *lookupKey(redisDb *db, robj *key) {
    dictEntry *de = dictFind(db->dict,key->ptr);
    if (de) {
        robj *val = dictGetVal(de);

        /* Update the access time for the ageing algorithm.
         * Don't do it if we have a saving child, as this will trigger
         * a copy on write madness. */
        if (server.rdb_child_pid == -1 && server.aof_child_pid == -1)
            val->lru = server.lruclock;
        return val;
    } else {
        return NULL;
    }
}
robj *lookupKeyRead(redisDb *db, robj *key) {
    robj *val;

    expireIfNeeded(db,key);  /* If the key has expired, evict it here, so the
                              * lookup below misses. */
    val = lookupKey(db,key);
    if (val == NULL)
        server.stat_keyspace_misses++;
    else
        server.stat_keyspace_hits++;
    return val;
}

In redis.c:initServer, a timer event is registered to run serverCron:
1) serverCron updates server.lruclock = (server.unixtime/REDIS_LRU_CLOCK_RESOLUTION) & REDIS_LRU_CLOCK_MAX; // server.unixtime comes from time(NULL).
That is, server.lruclock is the low 21 bits of (current unix time / 10).
2) serverCron runs databasesCron, which handles key expiration, incremental rehashing, and hash-table resizing. The expiration part is activeExpireCycle:

void activeExpireCycle(int type) {
    /* This function has some global state in order to continue the work
     * incrementally across calls. */
    static unsigned int current_db = 0; /* Last DB tested; the next call
                                         * resumes from the following db. */
    static int timelimit_exit = 0;      /* Time limit hit in previous call? */
    static long long last_fast_cycle = 0; /* When last fast cycle ran. */

    unsigned int j, iteration = 0;
    unsigned int dbs_per_call = REDIS_DBCRON_DBS_PER_CALL;
    long long start = ustime(), timelimit;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        /* Don't start a fast cycle if the previous cycle did not exit
         * for time limit. Also don't repeat a fast cycle for the same period
         * as the fast cycle total duration itself. */
        /* timelimit_exit is static: it is 1 when the previous cycle had to
         * stop because it exceeded its time budget, 0 otherwise. Slow cycles
         * are invoked from serverCron and fast cycles from beforeSleep(), so
         * the two kinds of cycle share this state. */
        if (!timelimit_exit) return;
        if (start < last_fast_cycle + ACTIVE_EXPIRE_CYCLE_FAST_DURATION*2) return;
        last_fast_cycle = start;
    }

    /* We usually should test REDIS_DBCRON_DBS_PER_CALL per iteration, with
     * two exceptions:
     *
     * 1) Don't test more DBs than we have.
     * 2) If last time we hit the time limit, we want to scan all DBs
     * in this iteration, as there is work to do in some DB and we don't want
     * expired keys to use memory for too much time. */
    if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;

    /* We can use at max ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC percentage of CPU time
     * per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function. */
    /* ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC is 25; with the default server.hz
     * of 10 this yields timelimit = 25000 microseconds, i.e. 25 ms
     * (compared against ustime() below). */
    timelimit = 1000000*ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC/server.hz/100;
    timelimit_exit = 0;
    if (timelimit <= 0) timelimit = 1;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST)
        timelimit = ACTIVE_EXPIRE_CYCLE_FAST_DURATION; /* 1000 microseconds. */

    for (j = 0; j < dbs_per_call; j++) {  /* visit dbs_per_call dbs per call */
        int expired;
        redisDb *db = server.db+(current_db % server.dbnum); /* round-robin
                                                              * over the dbs */

        /* Increment the DB now so we are sure if we run out of time
         * in the current DB we'll restart from the next. This allows to
         * distribute the time evenly across DBs. */
        current_db++;

        /* Continue to expire if at the end of the cycle more than 25%
         * of the keys were expired. */
        do {
            unsigned long num, slots;
            long long now, ttl_sum;
            int ttl_samples;

            /* If there is nothing to expire try next DB ASAP. */
            if ((num = dictSize(db->expires)) == 0) {
                db->avg_ttl = 0;
                break;
            }
            slots = dictSlots(db->expires);
            now = mstime();

            /* When there are less than 1% filled slots getting random
             * keys is expensive, so stop here waiting for better times...
             * The dictionary will be resized asap. */
            if (num && slots > DICT_HT_INITIAL_SIZE &&
                (num*100/slots < 1)) break;

            /* The main collection cycle. Sample random keys among keys
             * with an expire set, checking for expired ones. */
            expired = 0;
            ttl_sum = 0;
            ttl_samples = 0;

            if (num > ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP)
                num = ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP; /* 20 */

            while (num--) {
                dictEntry *de;
                long long ttl;

                /* Pick the first entry of a random bucket of db->expires. */
                if ((de = dictGetRandomKey(db->expires)) == NULL) break;
                ttl = dictGetSignedIntegerVal(de)-now;
                /* If the key has expired, delete it from the db and bump
                 * server.stat_expiredkeys. */
                if (activeExpireCycleTryExpire(db,de,now)) expired++;
                if (ttl < 0) ttl = 0;
                ttl_sum += ttl;
                ttl_samples++;
            }

            /* Update the average TTL stats for this database. */
            if (ttl_samples) {
                long long avg_ttl = ttl_sum/ttl_samples;

                if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
                /* Smooth the value averaging with the previous one. */
                db->avg_ttl = (db->avg_ttl+avg_ttl)/2;
            }

            /* We can't block forever here even if there are many keys to
             * expire. So after a given amount of milliseconds return to the
             * caller waiting for the other active expire cycle. */
            iteration++;
            if ((iteration & 0xf) == 0 && /* check once every 16 iterations. */
                (ustime()-start) > timelimit)
            {
                timelimit_exit = 1;
            }
            if (timelimit_exit) return;

            /* We don't repeat the cycle if there are less than 25% of keys
             * found expired in the current DB. */
        } while (expired > ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP/4);
    } /* end for (j = 0; j < dbs_per_call; j++) */
}

												
