While a program runs, some memory is only resident briefly: once it has served its purpose it can be freed for later reuse. Other memory, once allocated, stays in use for a long time and is never released. Over a long run this mix can fragment physical memory. The example from *Professional Linux Kernel Architecture* illustrates the problem, as shown in the figure below:

The figure shows 60 pages. After long-term operation, with memory repeatedly allocated and freed, the pages still in use end up scattered across the zone, leaving about 25 pages free but dispersed in isolated spots. Because every free page is surrounded by allocated ones, the buddy allocator cannot merge them; so even though a substantial share of the pages is free, a request for a physically contiguous region of more than two pages will still fail. To address this, Mel Gorman's anti-fragmentation mechanism was merged in kernel 2.6.24 as a patch series: "Avoiding fragmentation with page clustering v27".

The core observation in Mel's patch set remains that some types of memory are more easily reclaimed than others. A page which is backed up on a filesystem somewhere can be readily discarded and reused, for example, while a page holding a process's task structure is pretty well nailed down. One stubborn page is all it takes to keep an entire large block of memory from being consolidated and reused as a physically-contiguous whole. But if all of the easily-reclaimable pages could be kept together, with the non-reclaimable pages grouped into a separate region of memory, it should be much easier to create larger blocks of free memory.

So Mel's patch divides each memory zone into three types of blocks: non-reclaimable, easily reclaimable, and movable. The "movable" type is a new feature in this patch set; it is used for pages which can be easily shifted elsewhere using the kernel's page migration mechanism.

An earlier post in this series, "linux内核那些事之ZONE", gave a rough overview of the anti-fragmentation mechanism. To solve the fragmentation problem described above, the scheme classifies physical pages by mobility into movable, reclaimable, and non-movable types. Non-movable pages are kept together as much as possible rather than sprinkled through the middle of memory; when fragmentation grows, movable pages can be migrated and compacted to recover larger contiguous runs, and reclaimable pages can be reclaimed outright. Together these measures make it possible to reassemble contiguous physical memory.

  • movable pages: pages whose contents may be moved elsewhere to compact memory; after a move the page tables must be updated so virtual addresses still resolve correctly. User-space allocations use this type by default. After system initialization essentially all memory starts out marked movable and is re-classified as programs actually run.
  • reclaimable pages: pages that cannot be moved directly but can be reclaimed to free up space; their contents can be restored afterwards by other means, typically from a backing file, so file-mapped pages commonly use this type. The kswapd kernel thread usually performs the reclaim.
  • non-movable pages: pages that can be neither moved nor reclaimed; kernel allocations generally use this type.

Related data structures

migrate type (page mobility attribute)

The kernel describes a page's mobility with enum migratetype; the currently supported types are (include/linux/mmzone.h):


enum migratetype {
        MIGRATE_UNMOVABLE,
        MIGRATE_MOVABLE,
        MIGRATE_RECLAIMABLE,
        MIGRATE_PCPTYPES,       /* the number of types on the pcp lists */
        MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
        /*
         * MIGRATE_CMA migration type is designed to mimic the way
         * ZONE_MOVABLE works.  Only movable pages can be allocated
         * from MIGRATE_CMA pageblocks and page allocator never
         * implicitly change migration type of MIGRATE_CMA pageblock.
         *
         * The way to use it is to change migratetype of a range of
         * pageblocks to MIGRATE_CMA which can be done by
         * __free_pageblock_cma() function.  What is important though
         * is that a range of pageblocks must be aligned to
         * MAX_ORDER_NR_PAGES should biggest page be bigger then
         * a single pageblock.
         */
        MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
        MIGRATE_ISOLATE,        /* can't allocate from here */
#endif
        MIGRATE_TYPES
};
  • MIGRATE_UNMOVABLE: pages that cannot be moved.
  • MIGRATE_MOVABLE: pages that can be moved.
  • MIGRATE_RECLAIMABLE: pages that can be reclaimed.
  • MIGRATE_PCPTYPES / MIGRATE_HIGHATOMIC: MIGRATE_PCPTYPES counts the types kept on the per-cpu-page (PCP) lists; MIGRATE_HIGHATOMIC reuses that value for pageblocks reserved for high-order atomic allocations.
  • MIGRATE_CMA: pages belonging to CMA (contiguous memory allocator) regions.
  • MIGRATE_ISOLATE: isolated pages that cannot be allocated from.
  • MIGRATE_TYPES: the number of supported types.

page migrate type management

The overall structure managing page migrate types is shown below:

Broken down by layer, it works as follows:

  • Tracking a migrate type for every individual page would itself consume a large amount of physical memory and defeat the optimization, so the kernel groups pageblock_nr_pages physical pages into one pageblock and manages the type per block: all pages inside a pageblock share the same migrate type, and changing the block's type changes it for every page in the block.
  • A pageblock contains pageblock_nr_pages pages, defined as follows:
#ifdef CONFIG_HUGETLB_PAGE
#ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
/* Huge page sizes are variable */
extern unsigned int pageblock_order;
#else /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */
/* Huge pages are a constant size */
#define pageblock_order     HUGETLB_PAGE_ORDER
#endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */
#else /* CONFIG_HUGETLB_PAGE */
/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
#define pageblock_order     (MAX_ORDER-1)
#endif /* CONFIG_HUGETLB_PAGE */

#define pageblock_nr_pages  (1UL << pageblock_order)
  • pageblock_nr_pages is 2^pageblock_order. pageblock_order depends on whether huge pages are enabled: with CONFIG_HUGETLB_PAGE it follows the huge page order, and without it a pageblock is of order MAX_ORDER-1, the largest order the buddy allocator manages. Aligning pageblocks with the maximum buddy order keeps the two schemes in step: a buddy block can be split and merged entirely within a single pageblock.
  • page block migrate type: the mobility attribute of a pageblock. As the figure above shows, besides the migratetype proper it carries an extra per-block flag, PB_migrate_skip:
/* Bit indices that affect a whole block of pages */
enum pageblock_bits {
        PB_migrate,
        PB_migrate_end = PB_migrate + PB_migratetype_bits - 1,
                        /* 3 bits required for migrate types */
        PB_migrate_skip,/* If set the block is skipped by compaction */

        /*
         * Assume the bits will always align on a word. If this assumption
         * changes then get/set pageblock needs updating.
         */
        NR_PAGEBLOCK_BITS
};
  • PB_migrate: the first bit of the migratetype field.
  • PB_migrate_end: the last bit of the migratetype field; the field width is defined as:
#define PB_migratetype_bits 3
  • The migratetype field occupies three bits, enough to encode values 0 through 7, which covers every migrate type; the kernel guards against overflow with the compile-time check (MIGRATE_TYPES > (1 << PB_migratetype_bits)).
  • PB_migrate_skip: when set, the block is skipped during memory compaction.
  • A pageblock's flags occupy NR_PAGEBLOCK_BITS bits in total.
  • The number of pageblocks a zone divides into is zonesize / pageblock_nr_pages, which the kernel computes as roundup(zonesize, pageblock_nr_pages) >> pageblock_order.
  • The total number of flag bits for a zone is: number of pageblocks * NR_PAGEBLOCK_BITS.
  • zone->pageblock_flags: the bitmap holding the pageblock flags of the whole zone; its size in unsigned longs is:

(number of pageblocks * NR_PAGEBLOCK_BITS) / BITS_PER_LONG, rounded up

usemap_size()

usemap_size() computes the size of a zone's pageblock_flags bitmap:

/*
 * Calculate the size of the zone->blockflags rounded to an unsigned long
 * Start by making sure zonesize is a multiple of pageblock_order by rounding
 * up. Then use 1 NR_PAGEBLOCK_BITS worth of bits per pageblock, finally
 * round what is now in bits to nearest long in bits, then return it in
 * bytes.
 */
static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned long zonesize)
{
        unsigned long usemapsize;

        zonesize += zone_start_pfn & (pageblock_nr_pages-1);
        usemapsize = roundup(zonesize, pageblock_nr_pages);
        usemapsize = usemapsize >> pageblock_order;
        usemapsize *= NR_PAGEBLOCK_BITS;
        usemapsize = roundup(usemapsize, 8 * sizeof(unsigned long));

        return usemapsize / 8;
}

page block migrate type init

During initialization the kernel does not actually distinguish movable, reclaimable and unmovable pages: everything is initially marked movable. This happens in memmap_init_zone:

start_kernel()
  -->setup_arch()
    -->x86_init.paging.pagetable_init
      -->native_pagetable_init()
        -->paging_init
          -->zone_sizes_init (x86)
            -->free_area_init (x86)
              -->free_area_init
                -->free_area_init_node
                  -->free_area_init_core
                    -->memmap_init
                      -->memmap_init_zone

Source of memmap_init_zone:


/*
 * Initially all pages are reserved - free ones are freed
 * up by memblock_free_all() once the early boot process is
 * done. Non-atomic initialization, single-pass.
 */
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
                unsigned long start_pfn, enum memmap_context context,
                struct vmem_altmap *altmap)
{
        unsigned long pfn, end_pfn = start_pfn + size;
        struct page *page;
        ... ...
        for (pfn = start_pfn; pfn < end_pfn; ) {
                ... ...
                if (!(pfn & (pageblock_nr_pages - 1))) {
                        set_pageblock_migratetype(page, MIGRATE_MOVABLE);
                        cond_resched();
                }
                pfn++;
        }
}

As the code shows, zone initialization groups every pageblock_nr_pages pages into a pageblock and sets all pages to MIGRATE_MOVABLE. Later, if an allocation asks for UNMOVABLE memory while the UNMOVABLE free lists are empty, the fallback mechanism kicks in and steals a pageblock from MOVABLE, changing its type to MIGRATE_UNMOVABLE.

struct free_area

The structures above alone cannot meet the allocator's needs. To speed up allocation and freeing, the 2.6 kernels also reworked the per-zone buddy bookkeeping by introducing struct free_area:

struct free_area {
        struct list_head        free_list[MIGRATE_TYPES];
        unsigned long           nr_free;
};
  • struct free_area keeps the free memory of one order bucketed by migrate type, so an allocation can go straight to the free list matching its migratetype.
  • nr_free: how many free blocks this order holds, summed over all migrate types.

This evolved into the buddy management structure used today:

  • Within a zone, the free memory of each order is attached to that order's free_area.
  • Each free_area manages all migrate types at its order, keeping them on separate lists to simplify allocation, freeing, and merging.

set/get migrate type

The kernel provides two commonly used wrappers to set and get the migrate type:

void set_pageblock_migratetype(struct page *page, int migratetype)

Sets the migrate attribute of the pageblock that contains the page:

  • struct page *page: the page to operate on.
  • int migratetype: the migrate type to set.
static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned long pfn)

Gets the corresponding migratetype:

  • struct page *page: the page to query.
  • unsigned long pfn: the page frame number of the page.

set_pageblock_migratetype

set_pageblock_migratetype sets the migrate type of a given page; in essence it sets the migrate type of the pageblock the page belongs to.

void set_pageblock_migratetype(struct page *page, int migratetype)
{
        if (unlikely(page_group_by_mobility_disabled &&
                     migratetype < MIGRATE_PCPTYPES))
                migratetype = MIGRATE_UNMOVABLE;

        set_pageblock_flags_group(page, (unsigned long)migratetype,
                                        PB_migrate, PB_migrate_end);
}

set_pageblock_flags_group is defined as:

#define set_pageblock_flags_group(page, flags, start_bitidx, end_bitidx) \
        set_pfnblock_flags_mask(page, flags, page_to_pfn(page),         \
                        end_bitidx,                                     \
                        (1 << (end_bitidx - start_bitidx + 1)) - 1)

set_pfnblock_flags_mask performs the actual update of the pageblock's migrate bits:

void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
                                        unsigned long pfn,
                                        unsigned long end_bitidx,
                                        unsigned long mask)

It sets the migrate attribute of the pageblock containing the page:

  • struct page *page: the page to operate on.
  • unsigned long flags: the migrate type to set.
  • unsigned long pfn: the page frame number of the page.
  • unsigned long end_bitidx: the last bit index of the migrate field.
  • unsigned long mask: the migrate-field bitmask.

set_pfnblock_flags_mask

set_pfnblock_flags_mask is implemented as follows:


/**
 * set_pfnblock_flags_mask - Set the requested group of flags for a pageblock_nr_pages block of pages
 * @page: The page within the block of interest
 * @flags: The flags to set
 * @pfn: The target page frame number
 * @end_bitidx: The last bit of interest
 * @mask: mask of bits that the caller is interested in
 */
void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
                                        unsigned long pfn,
                                        unsigned long end_bitidx,
                                        unsigned long mask)
{
        unsigned long *bitmap;
        unsigned long bitidx, word_bitidx;
        unsigned long old_word, word;

        BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
        BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));

        bitmap = get_pageblock_bitmap(page, pfn);
        bitidx = pfn_to_bitidx(page, pfn);
        word_bitidx = bitidx / BITS_PER_LONG;
        bitidx &= (BITS_PER_LONG-1);

        VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);

        bitidx += end_bitidx;
        mask <<= (BITS_PER_LONG - bitidx - 1);
        flags <<= (BITS_PER_LONG - bitidx - 1);

        word = READ_ONCE(bitmap[word_bitidx]);
        for (;;) {
                old_word = cmpxchg(&bitmap[word_bitidx], word, (word & ~mask) | flags);
                if (word == old_word)
                        break;
                word = old_word;
        }
}
  • The BUILD_BUG_ON checks on NR_PAGEBLOCK_BITS and MIGRATE_TYPES guard against the migratetype field overflowing its bits.
  • get_pageblock_bitmap(page, pfn): fetches the pageblock_flags bitmap of the zone the page belongs to.
  • bitidx = pfn_to_bitidx(page, pfn): computes the starting bit index of the page's pageblock within the bitmap.
  • word_bitidx = bitidx / BITS_PER_LONG: the index of the unsigned long word that holds this pageblock's bits.
  • bitidx &= (BITS_PER_LONG-1): the offset of the pageblock's bits within that word.
  • bitidx += end_bitidx: the position of the field's last bit.
  • mask <<= (BITS_PER_LONG - bitidx - 1): shifts the mask to the pageblock's position within the word.
  • flags <<= (BITS_PER_LONG - bitidx - 1): shifts the new flag bits to the same position.
  • word = READ_ONCE(bitmap[word_bitidx]): reads the word currently holding the pageblock's migrate type.
  • cmpxchg(&bitmap[word_bitidx], word, (word & ~mask) | flags): compare-and-exchange, retried until the update lands without racing with a concurrent writer.

get_pfnblock_migratetype

get_pfnblock_migratetype is implemented as follows:

static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
{
        return __get_pfnblock_flags_mask(page, pfn, PB_migrate_end, MIGRATETYPE_MASK);
}

It ultimately relies on __get_pfnblock_flags_mask.

__get_pfnblock_flags_mask

/**
 * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
 * @page: The page within the block of interest
 * @pfn: The target page frame number
 * @end_bitidx: The last bit of interest to retrieve
 * @mask: mask of bits that the caller is interested in
 *
 * Return: pageblock_bits flags
 */
static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
                                        unsigned long pfn,
                                        unsigned long end_bitidx,
                                        unsigned long mask)
{
        unsigned long *bitmap;
        unsigned long bitidx, word_bitidx;
        unsigned long word;

        bitmap = get_pageblock_bitmap(page, pfn);
        bitidx = pfn_to_bitidx(page, pfn);
        word_bitidx = bitidx / BITS_PER_LONG;
        bitidx &= (BITS_PER_LONG-1);

        word = bitmap[word_bitidx];
        bitidx += end_bitidx;
        return (word >> (BITS_PER_LONG - bitidx - 1)) & mask;
}
  • The index calculation mirrors set_pfnblock_flags_mask; the function shifts the word back down and masks out the pageblock's migrate type.

Fallback mechanism

The allocation flow of __rmqueue() (see "linux内核那些事之buddy(快速分配get_page_from_freelist())(3)"):

When __rmqueue fails to allocate from the requested migrate type in the current zone, the fallback mechanism is entered: the zone's other migrate types are searched for spare free memory. The core handler is __rmqueue_fallback().

__rmqueue_fallback

The rough flow of __rmqueue_fallback is as follows:

Walking through the code:


/*
 * Try finding a free buddy page on the fallback list and put it on the free
 * list of requested migratetype, possibly along with other pages from the same
 * block, depending on fragmentation avoidance heuristics. Returns true if
 * fallback was found so that __rmqueue_smallest() can grab it.
 *
 * The use of signed ints for order and current_order is a deliberate
 * deviation from the rest of this file, to make the for loop
 * condition simpler.
 */
static __always_inline bool
__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
                                                unsigned int alloc_flags)
{
        struct free_area *area;
        int current_order;
        int min_order = order;
        struct page *page;
        int fallback_mt;
        bool can_steal;

        /*
         * Do not steal pages from freelists belonging to other pageblocks
         * i.e. orders < pageblock_order. If there are no local zones free,
         * the zonelists will be reiterated without ALLOC_NOFRAGMENT.
         */
        if (alloc_flags & ALLOC_NOFRAGMENT)
                min_order = pageblock_order;

        /*
         * Find the largest available free page in the other list. This roughly
         * approximates finding the pageblock with the most free pages, which
         * would be too costly to do exactly.
         */
        for (current_order = MAX_ORDER - 1; current_order >= min_order;
                                --current_order) {
                area = &(zone->free_area[current_order]);
                fallback_mt = find_suitable_fallback(area, current_order,
                                start_migratetype, false, &can_steal);
                if (fallback_mt == -1)
                        continue;

                /*
                 * We cannot steal all free pages from the pageblock and the
                 * requested migratetype is movable. In that case it's better to
                 * steal and split the smallest available page instead of the
                 * largest available page, because even if the next movable
                 * allocation falls back into a different pageblock than this
                 * one, it won't cause permanent fragmentation.
                 */
                if (!can_steal && start_migratetype == MIGRATE_MOVABLE
                                        && current_order > order)
                        goto find_smallest;

                goto do_steal;
        }

        return false;

find_smallest:
        for (current_order = order; current_order < MAX_ORDER;
                                                        current_order++) {
                area = &(zone->free_area[current_order]);
                fallback_mt = find_suitable_fallback(area, current_order,
                                start_migratetype, false, &can_steal);
                if (fallback_mt != -1)
                        break;
        }

        /*
         * This should not happen - we already found a suitable fallback
         * when looking for the largest page.
         */
        VM_BUG_ON(current_order == MAX_ORDER);

do_steal:
        page = get_page_from_free_area(area, fallback_mt);

        steal_suitable_fallback(zone, page, alloc_flags, start_migratetype,
                                                                can_steal);

        trace_mm_page_alloc_extfrag(page, order, current_order,
                start_migratetype, fallback_mt);

        return true;
}
  • If alloc_flags has ALLOC_NOFRAGMENT set, stealing below pageblock_order is not allowed and min_order is raised to pageblock_order: stealing less than a whole pageblock would have to split a buddy block and create new fragmentation.
  • The loop scans orders from MAX_ORDER - 1 down to min_order. Why top-down? A pageblock is sized at the MAX_ORDER - 1 level, so the preferred move is to take over a whole MAX_ORDER - 1 block in one go.
  • find_suitable_fallback: picks a suitable fallback migrate type at the current order.
  • If no suitable fallback type is found at this order, the next lower order is tried, down to min_order.
  • If a fallback type is found, can_steal indicates whether whole-block stealing from it is allowed.
  • If stealing is allowed, the steal path runs: a block of current_order is taken out of the fallback type's list and the pageblock's migrate type is rewritten.
  • If whole-block stealing is not allowed (can_steal is false, the request is MOVABLE, and current_order > order), the code jumps to find_smallest and scans upward from order for the smallest suitable block instead, stealing and splitting that one, which limits lasting fragmentation.

find_suitable_fallback

find_suitable_fallback looks up a suitable fallback migrate type for the requested one.


/*
 * Check whether there is a suitable fallback freepage with requested order.
 * If only_stealable is true, this function returns fallback_mt only if
 * we can steal other freepages all together. This would help to reduce
 * fragmentation due to mixed migratetype pages in one pageblock.
 */
int find_suitable_fallback(struct free_area *area, unsigned int order,
                        int migratetype, bool only_stealable, bool *can_steal)
{
        int i;
        int fallback_mt;

        if (area->nr_free == 0)
                return -1;

        *can_steal = false;
        for (i = 0;; i++) {
                fallback_mt = fallbacks[migratetype][i];
                if (fallback_mt == MIGRATE_TYPES)
                        break;

                if (free_area_empty(area, fallback_mt))
                        continue;

                if (can_steal_fallback(order, migratetype))
                        *can_steal = true;

                if (!only_stealable)
                        return fallback_mt;

                if (*can_steal)
                        return fallback_mt;
        }

        return -1;
}
  • The core of the function is a lookup in the two-dimensional fallbacks array for a suitable migrate type.

The fallbacks array

The fallbacks array encodes the predefined fallback rules: when the free lists of a given migrate type are depleted, it lists which types to try and in what order:

/*
 * This array describes the order lists are fallen back to when
 * the free lists for the desirable migrate type are depleted
 */
static int fallbacks[MIGRATE_TYPES][4] = {
        [MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
        [MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
        [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
#ifdef CONFIG_CMA
        [MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
#endif
#ifdef CONFIG_MEMORY_ISOLATION
        [MIGRATE_ISOLATE]     = { MIGRATE_TYPES }, /* Never used */
#endif
};
  • MIGRATE_UNMOVABLE: falls back to MIGRATE_RECLAIMABLE, then MIGRATE_MOVABLE. When UNMOVABLE memory runs out, RECLAIMABLE is checked first: favouring reclaimable memory postpones real pressure, since it can always be reclaimed later. Only when RECLAIMABLE is also short does the allocation fall back to the MOVABLE region.
  • MIGRATE_MOVABLE: falls back to MIGRATE_RECLAIMABLE, then MIGRATE_UNMOVABLE, in that priority order.
  • MIGRATE_RECLAIMABLE: falls back to MIGRATE_UNMOVABLE, then MIGRATE_MOVABLE, in that priority order.

References

Group pages of related mobility together to reduce external fragmentation v28 [LWN.net]

Short topics in memory management [LWN.net]

Avoiding - and fixing - memory fragmentation [LWN.net]

