Fast-Path and Slow-Path Memory Allocation

Page allocation is ultimately handled by the buddy system's page allocator. The kernel exposes many page-allocation functions, but they all end up calling one common interface: __alloc_pages_nodemask().

Common page allocation APIs

__alloc_pages_node   /* returns a struct page pointer */
    -> __alloc_pages -> __alloc_pages_nodemask

alloc_pages          /* returns a struct page pointer */
    -> alloc_pages_current -> __alloc_pages_nodemask

__get_free_pages     /* returns the pages' kernel virtual address */
    -> alloc_pages -> alloc_pages_current -> __alloc_pages_nodemask

All of them eventually call __alloc_pages_nodemask().
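For orientation, here is a minimal, hypothetical example of how two of these entry points are typically used from kernel code (the demo_page_alloc() wrapper is illustrative only; the allocation and free calls are the real APIs):

#include <linux/errno.h>
#include <linux/gfp.h>

/* Hypothetical example: allocate and free four contiguous pages (order = 2). */
static int demo_page_alloc(void)
{
    struct page *page;
    unsigned long addr;

    /* alloc_pages() returns a struct page pointer, or NULL on failure. */
    page = alloc_pages(GFP_KERNEL, 2);
    if (!page)
        return -ENOMEM;
    __free_pages(page, 2);

    /* __get_free_pages() returns the pages' kernel virtual address, or 0 on failure. */
    addr = __get_free_pages(GFP_KERNEL, 2);
    if (!addr)
        return -ENOMEM;
    free_pages(addr, 2);

    return 0;
}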

The heart of the buddy system

__alloc_pages_nodemask() is the heart of the buddy system:

struct page *
__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
                            nodemask_t *nodemask)
{
    struct page *page;
    unsigned int alloc_flags = ALLOC_WMARK_LOW;
    gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
    struct alloc_context ac = { };

    /*
     * There are several places where we assume that the order value is sane
     * so bail out early if the request is out of bound.
     */
    if (unlikely(order >= MAX_ORDER)) {
        /* fail: the requested order exceeds the maximum order */
        WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
        return NULL;
    }

    gfp_mask &= gfp_allowed_mask;
    alloc_mask = gfp_mask;
    if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
        return NULL;

    finalise_ac(gfp_mask, &ac);

    /*
     * Forbid the first pass from falling back to types that fragment
     * memory until all local zones are considered.
     */
    alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);

    /* First allocation attempt */
    page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
    if (likely(page))
        goto out;

    /*
     * Apply scoped allocation constraints. This is mainly about GFP_NOFS
     * resp. GFP_NOIO which has to be inherited for all allocation requests
     * from a particular context which has been marked by
     * memalloc_no{fs,io}_{save,restore}.
     */
    alloc_mask = current_gfp_context(gfp_mask);
    ac.spread_dirty_pages = false;

    /*
     * Restore the original nodemask if it was potentially replaced with
     * &cpuset_current_mems_allowed to optimize the fast-path attempt.
     */
    if (unlikely(ac.nodemask != nodemask))
        ac.nodemask = nodemask;

    page = __alloc_pages_slowpath(alloc_mask, order, &ac);

out:
    if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
        unlikely(__memcg_kmem_charge(page, gfp_mask, order) != 0)) {
        __free_pages(page, order);
        page = NULL;
    }

    trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);

    return page;
}
EXPORT_SYMBOL(__alloc_pages_nodemask);

From the source above we can see that the core of __alloc_pages_nodemask comes down to four things:

prepare_alloc_pages     // 1. prepare the allocation context and parameters
alloc_flags_nofragment  // 2. add allocation flags based on the zone and the gfp mask
get_page_from_freelist  // 3. fast-path allocation attempt
__alloc_pages_slowpath  // 4. slow-path allocation attempt

prepare_alloc_pages

static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
        int preferred_nid, nodemask_t *nodemask,
        struct alloc_context *ac, gfp_t *alloc_mask,
        unsigned int *alloc_flags)
{
    ac->high_zoneidx = gfp_zone(gfp_mask);
    ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
    ac->nodemask = nodemask;
    ac->migratetype = gfpflags_to_migratetype(gfp_mask);

    if (cpusets_enabled()) {
        *alloc_mask |= __GFP_HARDWALL;
        if (!ac->nodemask)
            ac->nodemask = &cpuset_current_mems_allowed;
        else
            *alloc_flags |= ALLOC_CPUSET;
    }

    fs_reclaim_acquire(gfp_mask);
    fs_reclaim_release(gfp_mask);

    might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);

    if (should_fail_alloc_page(gfp_mask, order))
        return false;

    if (IS_ENABLED(CONFIG_CMA) && ac->migratetype == MIGRATE_MOVABLE)
        *alloc_flags |= ALLOC_CMA;

    return true;
}

prepare_alloc_pages mainly does the following:

1. Fills in the alloc_context structure

2. Processes the gfp mask and stores the result in alloc_mask

3. Sets up the alloc_flags

After this preparation is done, finalise_ac() is executed to obtain the zone(s) that allocation can be served from.
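For reference, finalise_ac() in kernels of roughly this vintage looks approximately as follows (a sketch rather than an exact listing of any particular release): it decides whether dirty-page spreading applies and computes the preferred zoneref that the zonelist iteration starts from.

static inline void finalise_ac(gfp_t gfp_mask, struct alloc_context *ac)
{
    /* Dirty zone balancing is only done in the fast path. */
    ac->spread_dirty_pages = (gfp_mask & __GFP_WRITE);

    /*
     * The preferred zone is used for statistics but crucially it is
     * also used as the starting point for the zonelist iterator. It
     * may get reset for allocations that ignore memory policies.
     */
    ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                    ac->high_zoneidx, ac->nodemask);
}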

alloc_flags_nofragment

static inline unsigned int
alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
{
    unsigned int alloc_flags = 0;

    if (gfp_mask & __GFP_KSWAPD_RECLAIM)
        alloc_flags |= ALLOC_KSWAPD;

#ifdef CONFIG_ZONE_DMA32
    if (!zone)
        return alloc_flags;

    if (zone_idx(zone) != ZONE_NORMAL)
        return alloc_flags;

    /*
     * If ZONE_DMA32 exists, assume it is the one after ZONE_NORMAL and
     * the pointer is within zone->zone_pgdat->node_zones[]. Also assume
     * on UMA that if Normal is populated then so is DMA32.
     */
    BUILD_BUG_ON(ZONE_NORMAL - ZONE_DMA32 != 1);
    if (nr_online_nodes > 1 && !populated_zone(--zone))
        return alloc_flags;

    alloc_flags |= ALLOC_NOFRAGMENT;
#endif /* CONFIG_ZONE_DMA32 */
    return alloc_flags;
}

alloc_flags_nofragment first checks whether the gfp mask allows kswapd reclaim (__GFP_KSWAPD_RECLAIM); if so, it sets ALLOC_KSWAPD so that kswapd can be woken to reclaim in the background when memory runs low. When ZONE_DMA32 is configured it may additionally set ALLOC_NOFRAGMENT, which keeps the first allocation pass from falling back to page blocks of a different migrate type and fragmenting memory.

With this preparation done, the allocator first attempts the fast path (fastpath).

Fast-path allocation (fastpath)

If, while scanning a zone, the number of free pages in that zone is found to be above the watermark being checked against, the request can be satisfied directly from that zone; this is the fast path.

get_page_from_freelist

The main job of this function is to try to allocate pages from the free lists; it is the fast path of page allocation.

static struct page *
get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
                        const struct alloc_context *ac)
{
    struct zoneref *z;
    struct zone *zone;
    struct pglist_data *last_pgdat_dirty_limit = NULL;
    bool no_fallback;

retry:
    /*
     * Scan zonelist, looking for a zone with enough free.
     * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
     */
    no_fallback = alloc_flags & ALLOC_NOFRAGMENT;
    z = ac->preferred_zoneref;
    for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
                                ac->nodemask) {
        struct page *page;
        unsigned long mark;

        if (cpusets_enabled() &&
            (alloc_flags & ALLOC_CPUSET) &&
            !__cpuset_zone_allowed(zone, gfp_mask))
                continue;
        /*
         * When allocating a page cache page for writing, we
         * want to get it from a node that is within its dirty
         * limit, such that no single node holds more than its
         * proportional share of globally allowed dirty pages.
         * The dirty limits take into account the node's
         * lowmem reserves and high watermark so that kswapd
         * should be able to balance it without having to
         * write pages from its LRU list.
         *
         * XXX: For now, allow allocations to potentially
         * exceed the per-node dirty limit in the slowpath
         * (spread_dirty_pages unset) before going into reclaim,
         * which is important when on a NUMA setup the allowed
         * nodes are together not big enough to reach the
         * global limit.  The proper fix for these situations
         * will require awareness of nodes in the
         * dirty-throttling and the flusher threads.
         */
        if (ac->spread_dirty_pages) {
            if (last_pgdat_dirty_limit == zone->zone_pgdat)
                continue;

            if (!node_dirty_ok(zone->zone_pgdat)) {
                last_pgdat_dirty_limit = zone->zone_pgdat;
                continue;
            }
        }

        if (no_fallback && nr_online_nodes > 1 &&
            zone != ac->preferred_zoneref->zone) {
            int local_nid;

            /*
             * If moving to a remote node, retry but allow
             * fragmenting fallbacks. Locality is more important
             * than fragmentation avoidance.
             */
            local_nid = zone_to_nid(ac->preferred_zoneref->zone);
            if (zone_to_nid(zone) != local_nid) {
                alloc_flags &= ~ALLOC_NOFRAGMENT;
                goto retry;
            }
        }

        mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
        if (!zone_watermark_fast(zone, order, mark,
                       ac_classzone_idx(ac), alloc_flags)) {
            int ret;

#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
            /*
             * Watermark failed for this zone, but see if we can
             * grow this zone if it contains deferred pages.
             */
            if (static_branch_unlikely(&deferred_pages)) {
                if (_deferred_grow_zone(zone, order))
                    goto try_this_zone;
            }
#endif
            /* Checked here to keep the fast path fast */
            BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK);
            if (alloc_flags & ALLOC_NO_WATERMARKS)
                goto try_this_zone;

            if (node_reclaim_mode == 0 ||
                !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
                continue;

            ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
            switch (ret) {
            case NODE_RECLAIM_NOSCAN:
                /* did not scan */
                continue;
            case NODE_RECLAIM_FULL:
                /* scanned but unreclaimable */
                continue;
            default:
                /* did we reclaim enough */
                if (zone_watermark_ok(zone, order, mark,
                    ac_classzone_idx(ac), alloc_flags))
                    goto try_this_zone;

                continue;
            }
        }

try_this_zone:
        page = rmqueue(ac->preferred_zoneref->zone, zone, order,
                gfp_mask, alloc_flags, ac->migratetype);
        if (page) {
            prep_new_page(page, order, gfp_mask, alloc_flags);

            /*
             * If this is a high-order atomic allocation then check
             * if the pageblock should be reserved for the future
             */
            if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
                reserve_highatomic_pageblock(page, zone, order);

            return page;
        } else {
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
            /* Try again if zone has deferred pages */
            if (static_branch_unlikely(&deferred_pages)) {
                if (_deferred_grow_zone(zone, order))
                    goto try_this_zone;
            }
#endif
        }
    }

    /*
     * It's possible on a UMA machine to get through all zones that are
     * fragmented. If avoiding fragmentation, reset and try again.
     */
    if (no_fallback) {
        alloc_flags &= ~ALLOC_NOFRAGMENT;
        goto retry;
    }

    return NULL;
}

The function iterates over the zones in the zonelist, trying to find one from which pages can be taken. For each zone it first runs a few eligibility checks; if any fail, the zone is skipped with continue. wmark_pages() computes the zone's watermark according to whether min, low or high is requested; that watermark is then passed to zone_watermark_fast() to check whether the zone's free pages sit above the line (the check may be relaxed depending on how urgent the allocation is). Which of the high/low/min watermarks is used is decided by the ALLOC_WMARK_xx bits in alloc_flags; as seen in __alloc_pages_nodemask, the initial setting is the low watermark. If the watermark check fails, the code either attempts node_reclaim() or skips the zone, depending on node_reclaim_mode. Finally it reaches the core of the allocation and calls rmqueue() to take pages from the buddy system.
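Conceptually, the watermark comparison boils down to the following simplified sketch (my own illustration, not the kernel's zone_watermark_ok(), which additionally handles ALLOC_HIGH/ALLOC_HARDER/ALLOC_OOM adjustments and checks the per-order, per-migratetype free lists):

/*
 * Simplified sketch: a zone passes the watermark check if its free pages,
 * minus the lowmem reserve kept back for the requesting class zone, still
 * exceed the chosen watermark, and a free block of the requested order
 * actually exists.
 */
static bool zone_watermark_ok_sketch(unsigned long free_pages,
                     unsigned long watermark,
                     unsigned long lowmem_reserve,
                     bool order_block_available)
{
    if (free_pages <= watermark + lowmem_reserve)
        return false;

    return order_block_available;
}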

rmqueue

The kernel handles order-0 requests differently from higher-order requests. Today's processors easily have a dozen or more cores, while a system has only a handful of zones; when many cores hit the same zone at once, a great deal of time is wasted contending for the zone lock. Kernel developers observed that order-0 requests are by far the most frequent ones, and an order-0 allocation is only a single page, so they implemented a per-CPU "page pool" to satisfy order-0 allocations, which relieves much of the buddy system's contention on the zone lock.

If order == 0, rmqueue() calls rmqueue_pcplist():

static struct page *rmqueue_pcplist(...)
{
    /* Disable local interrupts and save the interrupt state
     * (memory may also be allocated from interrupt context). */
    local_irq_save(flags);

    /* Get the per_cpu_pages pointer of the target zone on the current CPU. */
    pcp = &this_cpu_ptr(zone->pageset)->pcp;

    /* Pick the page list of the requested migrate type out of per_cpu_pages. */
    list = &pcp->lists[migratetype];

    /* Take a page off that list. */
    page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);

    /* On success, update the zone's statistics. */
    if (page) {
        __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
        zone_statistics(preferred_zone, zone);
    }

    /* Restore interrupts. */
    local_irq_restore(flags);

    return page;
}

If order > 0, __rmqueue_smallest() loops over the free_list of each order from the requested order upward, until get_page_from_free_area() manages to take the smallest suitable pageblock (matching both order and migratetype) off a list.

__rmqueue_smallest() -> get_page_from_free_area()
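In kernels of roughly this era, __rmqueue_smallest() looks approximately like this (a reference sketch rather than an exact listing): it walks zone->free_area[] from the requested order upward, detaches the first suitable block, and lets expand() return the unused remainder to the lower-order free lists.

static __always_inline
struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
                        int migratetype)
{
    unsigned int current_order;
    struct free_area *area;
    struct page *page;

    /* Find a page of the appropriate size in the preferred list. */
    for (current_order = order; current_order < MAX_ORDER; ++current_order) {
        area = &(zone->free_area[current_order]);
        page = get_page_from_free_area(area, migratetype);
        if (!page)
            continue;
        /* Detach the block and split any excess back to lower orders. */
        del_page_from_free_area(page, area);
        expand(zone, page, order, current_order, area, migratetype);
        set_pcppage_migratetype(page, migratetype);
        return page;
    }

    return NULL;
}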

Slow-path allocation (slowpath)

The fast path (fastpath) checks each zone against its low watermark; if every zone's free memory is below low, the fast path fails and the slow path (slowpath) is entered, where reclaim has to be done.

static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
                        struct alloc_context *ac)
{
    bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
    const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
    struct page *page = NULL;
    unsigned int alloc_flags;
    unsigned long did_some_progress;
    enum compact_priority compact_priority;
    enum compact_result compact_result;
    int compaction_retries;
    int no_progress_loops;
    unsigned int cpuset_mems_cookie;
    int reserve_flags;

    /*
     * We also sanity check to catch abuse of atomic reserves being used by
     * callers that are not in atomic context.
     */
    if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
                (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
        gfp_mask &= ~__GFP_ATOMIC;

retry_cpuset:
    compaction_retries = 0;
    no_progress_loops = 0;
    compact_priority = DEF_COMPACT_PRIORITY;
    cpuset_mems_cookie = read_mems_allowed_begin();

    /*
     * The fast path uses conservative alloc_flags to succeed only until
     * kswapd needs to be woken up, and to avoid the cost of setting up
     * alloc_flags precisely. So we do that now.
     */
    alloc_flags = gfp_to_alloc_flags(gfp_mask);

    /*
     * We need to recalculate the starting point for the zonelist iterator
     * because we might have used different nodemask in the fast path, or
     * there was a cpuset modification and we are retrying - otherwise we
     * could end up iterating over non-eligible zones endlessly.
     */
    ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                    ac->high_zoneidx, ac->nodemask);
    if (!ac->preferred_zoneref->zone)
        goto nopage;

    if (alloc_flags & ALLOC_KSWAPD)
        wake_all_kswapds(order, gfp_mask, ac);

    /*
     * The adjusted alloc_flags might result in immediate success, so try
     * that first
     */
    page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
    if (page)
        goto got_pg;

    /*
     * For costly allocations, try direct compaction first, as it's likely
     * that we have enough base pages and don't need to reclaim. For non-
     * movable high-order allocations, do that as well, as compaction will
     * try prevent permanent fragmentation by migrating from blocks of the
     * same migratetype.
     * Don't try this for allocations that are allowed to ignore
     * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
     */
    if (can_direct_reclaim &&
            (costly_order ||
               (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
            && !gfp_pfmemalloc_allowed(gfp_mask)) {
        page = __alloc_pages_direct_compact(gfp_mask, order,
                        alloc_flags, ac,
                        INIT_COMPACT_PRIORITY,
                        &compact_result);
        if (page)
            goto got_pg;

        if (order >= pageblock_order && (gfp_mask & __GFP_IO) &&
            !(gfp_mask & __GFP_RETRY_MAYFAIL)) {
            /*
             * If allocating entire pageblock(s) and compaction
             * failed because all zones are below low watermarks
             * or is prohibited because it recently failed at this
             * order, fail immediately unless the allocator has
             * requested compaction and reclaim retry.
             *
             * Reclaim is
             *  - potentially very expensive because zones are far
             *    below their low watermarks or this is part of very
             *    bursty high order allocations,
             *  - not guaranteed to help because isolate_freepages()
             *    may not iterate over freed pages as part of its
             *    linear scan, and
             *  - unlikely to make entire pageblocks free on its
             *    own.
             */
            if (compact_result == COMPACT_SKIPPED ||
                compact_result == COMPACT_DEFERRED)
                goto nopage;
        }

        /*
         * Checks for costly allocations with __GFP_NORETRY, which
         * includes THP page fault allocations
         */
        if (costly_order && (gfp_mask & __GFP_NORETRY)) {
            /*
             * If compaction is deferred for high-order allocations,
             * it is because sync compaction recently failed. If
             * this is the case and the caller requested a THP
             * allocation, we do not want to heavily disrupt the
             * system, so we fail the allocation instead of entering
             * direct reclaim.
             */
            if (compact_result == COMPACT_DEFERRED)
                goto nopage;

            /*
             * Looks like reclaim/compaction is worth trying, but
             * sync compaction could be very expensive, so keep
             * using async compaction.
             */
            compact_priority = INIT_COMPACT_PRIORITY;
        }
    }

retry:
    /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
    if (alloc_flags & ALLOC_KSWAPD)
        wake_all_kswapds(order, gfp_mask, ac);

    reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
    if (reserve_flags)
        alloc_flags = reserve_flags;

    /*
     * Reset the nodemask and zonelist iterators if memory policies can be
     * ignored. These allocations are high priority and system rather than
     * user oriented.
     */
    if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
        ac->nodemask = NULL;
        ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                    ac->high_zoneidx, ac->nodemask);
    }

    /* Attempt with potentially adjusted zonelist and alloc_flags */
    page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
    if (page)
        goto got_pg;

    /* Caller is not willing to reclaim, we can't balance anything */
    if (!can_direct_reclaim)
        goto nopage;

    /* Avoid recursion of direct reclaim */
    if (current->flags & PF_MEMALLOC)
        goto nopage;

    /* Try direct reclaim and then allocating */
    page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
                            &did_some_progress);
    if (page)
        goto got_pg;

    /* Try direct compaction and then allocating */
    page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
                    compact_priority, &compact_result);
    if (page)
        goto got_pg;

    /* Do not loop if specifically requested */
    if (gfp_mask & __GFP_NORETRY)
        goto nopage;

    /*
     * Do not retry costly high order allocations unless they are
     * __GFP_RETRY_MAYFAIL
     */
    if (costly_order && !(gfp_mask & __GFP_RETRY_MAYFAIL))
        goto nopage;

    if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
                 did_some_progress > 0, &no_progress_loops))
        goto retry;

    /*
     * It doesn't make any sense to retry for the compaction if the order-0
     * reclaim is not able to make any progress because the current
     * implementation of the compaction depends on the sufficient amount
     * of free memory (see __compaction_suitable)
     */
    if (did_some_progress > 0 &&
            should_compact_retry(ac, order, alloc_flags,
                compact_result, &compact_priority,
                &compaction_retries))
        goto retry;

    /* Deal with possible cpuset update races before we start OOM killing */
    if (check_retry_cpuset(cpuset_mems_cookie, ac))
        goto retry_cpuset;

    /* Reclaim has failed us, start killing things */
    page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
    if (page)
        goto got_pg;

    /* Avoid allocations with no watermarks from looping endlessly */
    if (tsk_is_oom_victim(current) &&
        (alloc_flags == ALLOC_OOM ||
         (gfp_mask & __GFP_NOMEMALLOC)))
        goto nopage;

    /* Retry as long as the OOM killer is making progress */
    if (did_some_progress) {
        no_progress_loops = 0;
        goto retry;
    }

nopage:
    /* Deal with possible cpuset update races before we fail */
    if (check_retry_cpuset(cpuset_mems_cookie, ac))
        goto retry_cpuset;

    /*
     * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
     * we always retry
     */
    if (gfp_mask & __GFP_NOFAIL) {
        /*
         * All existing users of the __GFP_NOFAIL are blockable, so warn
         * of any new users that actually require GFP_NOWAIT
         */
        if (WARN_ON_ONCE(!can_direct_reclaim))
            goto fail;

        /*
         * PF_MEMALLOC request from this context is rather bizarre
         * because we cannot reclaim anything and only can loop waiting
         * for somebody to do a work for us
         */
        WARN_ON_ONCE(current->flags & PF_MEMALLOC);

        /*
         * non failing costly orders are a hard requirement which we
         * are not prepared for much so let's warn about these users
         * so that we can identify them and convert them to something
         * else.
         */
        WARN_ON_ONCE(order > PAGE_ALLOC_COSTLY_ORDER);

        /*
         * Help non-failing allocations by giving them access to memory
         * reserves but do not use ALLOC_NO_WATERMARKS because this
         * could deplete whole memory reserves which would just make
         * the situation worse
         */
        page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
        if (page)
            goto got_pg;

        cond_resched();
        goto retry;
    }
fail:
    warn_alloc(gfp_mask, ac->nodemask,
            "page allocation failure: order:%u", order);
got_pg:
    return page;
}

First, gfp_to_alloc_flags() adjusts the allocation flags according to gfp_mask, and first_zones_zonelist() recomputes the preferred zone: the fast path may have used a different nodemask, or a cpuset may have been modified while we are retrying, so preferred_zoneref must be recalculated to avoid endlessly iterating over ineligible zones. If alloc_flags contains ALLOC_KSWAPD, wake_all_kswapds() wakes the kswapd kernel threads. The first slow-path allocation attempt is then made with the adjusted flags, again via get_page_from_freelist(). If that fails and the request allows direct reclaim (can_direct_reclaim), is costly (or high-order and non-movable), and is not a pfmemalloc allocation, one round of direct compaction is attempted via __alloc_pages_direct_compact(), followed by another allocation attempt.
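For reference, gfp_to_alloc_flags() in kernels of roughly this era looks approximately like the sketch below; the key point is that the slow path drops from the fast path's ALLOC_WMARK_LOW to ALLOC_WMARK_MIN and may add ALLOC_HIGH/ALLOC_HARDER for atomic or realtime callers.

static inline unsigned int
gfp_to_alloc_flags(gfp_t gfp_mask)
{
    unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

    /* __GFP_HIGH is assumed to be the same as ALLOC_HIGH to save a branch. */
    BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH);
    alloc_flags |= (__force int) (gfp_mask & __GFP_HIGH);

    if (gfp_mask & __GFP_ATOMIC) {
        /*
         * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
         * if it can't schedule.
         */
        if (!(gfp_mask & __GFP_NOMEMALLOC))
            alloc_flags |= ALLOC_HARDER;
        /* Ignore cpuset mems for GFP_ATOMIC rather than fail. */
        alloc_flags &= ~ALLOC_CPUSET;
    } else if (unlikely(rt_task(current)) && !in_interrupt())
        alloc_flags |= ALLOC_HARDER;

    if (gfp_mask & __GFP_KSWAPD_RECLAIM)
        alloc_flags |= ALLOC_KSWAPD;

#ifdef CONFIG_CMA
    if (gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE)
        alloc_flags |= ALLOC_CMA;
#endif
    return alloc_flags;
}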

Inside the retry loop, kswapd is woken again (so it does not go back to sleep while we keep looping), the zonelist iterator is adjusted if memory policies may be ignored, and get_page_from_freelist() is tried once more. If that fails and direct reclaim is not allowed, control jumps to "nopage". Otherwise __alloc_pages_direct_reclaim() tries direct reclaim followed by another allocation attempt, and __alloc_pages_direct_compact() performs a second round of direct compaction followed by another attempt. should_reclaim_retry() decides whether reclaim is worth retrying, in which case control jumps back to "retry"; if gfp_mask contains __GFP_NORETRY, or the allocation is costly and __GFP_RETRY_MAYFAIL is not set, there is no retry and control goes to "nopage". should_compact_retry() decides whether compaction is worth retrying (again jumping to "retry"), and check_retry_cpuset() jumps back to the initial "retry_cpuset" if it detects a race with a concurrent cpuset update. If reclaim has failed, __alloc_pages_may_oom() may OOM-kill some processes to free memory; and if the current task is itself being killed by the OOM killer, control moves to "nopage".

Finally, at "nopage": if gfp_mask contains __GFP_NOFAIL, the allocation keeps retrying until a page is obtained, calling __alloc_pages_cpuset_fallback() with ALLOC_HARDER so that, when the allowed nodes are exhausted, it can fall back and ignore the cpuset restrictions; without that flag, the allocation has failed and NULL is returned.
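__alloc_pages_cpuset_fallback() itself is small; in kernels of this era it is approximately the following (reference sketch): one attempt that respects the cpuset, then, if the allowed nodes are depleted, one attempt without ALLOC_CPUSET.

static inline struct page *
__alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
                  unsigned int alloc_flags,
                  const struct alloc_context *ac)
{
    struct page *page;

    /* First try within the cpuset's allowed nodes. */
    page = get_page_from_freelist(gfp_mask, order,
                    alloc_flags|ALLOC_CPUSET, ac);
    /*
     * Fall back to ignoring the cpuset restriction if our nodes
     * are depleted.
     */
    if (!page)
        page = get_page_from_freelist(gfp_mask, order,
                        alloc_flags, ac);

    return page;
}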
