free_area_init

Continuing from 《linux内核那些事之ZONE》, this article analyzes how the kernel initializes physical memory. In its opening phase, zone_sizes_init() computes the size of each zone type and, based on the movable_node and kernelcore boot parameters together with the actual physical memory layout, works out how much movable memory each node contributes to ZONE_MOVABLE, recording each node's ZONE_MOVABLE range in the zone_movable_pfn array. Once all of this preparation is done, free_area_init() performs the per-node initialization:

void __init free_area_init(unsigned long *max_zone_pfn)
{
	... ...
	for_each_online_node(nid) {
		pg_data_t *pgdat = NODE_DATA(nid);

		free_area_init_node(nid);

		/* Any memory on that node */
		if (pgdat->node_present_pages)
			node_set_state(nid, N_MEMORY);
		check_for_memory(pgdat, nid);
	}
}

free_area_init_node() then initializes the physical-memory management structures of each individual node.

free_area_init_node()

free_area_init_node() initializes the pg_data_t and the node-related information:


static void __init free_area_init_node(int nid)
{
	pg_data_t *pgdat = NODE_DATA(nid);
	unsigned long start_pfn = 0;
	unsigned long end_pfn = 0;

	/* pg_data_t should be reset to zero when it's allocated */
	WARN_ON(pgdat->nr_zones || pgdat->kswapd_highest_zoneidx);

	get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);

	pgdat->node_id = nid;
	pgdat->node_start_pfn = start_pfn;
	pgdat->per_cpu_nodestats = NULL;

	pr_info("Initmem setup node %d [mem %#018Lx-%#018Lx]\n", nid,
		(u64)start_pfn << PAGE_SHIFT,
		end_pfn ? ((u64)end_pfn << PAGE_SHIFT) - 1 : 0);
	calculate_node_totalpages(pgdat, start_pfn, end_pfn);

	alloc_node_mem_map(pgdat);
	pgdat_set_deferred_range(pgdat);

	free_area_init_core(pgdat);
}

The main flow:

  • get_pfn_range_for_nid(nid, &start_pfn, &end_pfn): obtain the node's actual start and end PFNs from memblock (note: the range may still contain holes)
  • Initialize pgdat: set pgdat->node_id, pgdat->node_start_pfn, and pgdat->per_cpu_nodestats.
  • calculate_node_totalpages: count the pages actually present in the node and set up each zone
  • alloc_node_mem_map: allocate the node_mem_map (note: in the sparse memory model mem_map lives in mem_section, so this function is empty there)
  • If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, set first_deferred_pfn.
  • free_area_init_core: fill in the data structures of each zone
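
The first step, get_pfn_range_for_nid(), amounts to taking the minimum start PFN and maximum end PFN over the node's memblock regions. A minimal userspace sketch of that scan (the region array and function names here are made up for illustration, not kernel API):

```c
#include <stddef.h>

/* Hypothetical stand-in for a node's memblock regions. */
struct region {
	unsigned long start_pfn;
	unsigned long end_pfn;
};

/* Mimics get_pfn_range_for_nid(): min start / max end over all regions. */
static void pfn_range(const struct region *r, size_t n,
		      unsigned long *start_pfn, unsigned long *end_pfn)
{
	*start_pfn = (unsigned long)-1;
	*end_pfn = 0;
	for (size_t i = 0; i < n; i++) {
		if (r[i].start_pfn < *start_pfn)
			*start_pfn = r[i].start_pfn;
		if (r[i].end_pfn > *end_pfn)
			*end_pfn = r[i].end_pfn;
	}
	if (*start_pfn == (unsigned long)-1)
		*start_pfn = 0;	/* node has no memory */
}
```

Note that with two disjoint regions the resulting [start_pfn, end_pfn) range covers the hole between them, which is why spanned pages and present pages must be accounted separately later.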

calculate_node_totalpages()

It counts the memory pages actually present in the node:


static void __init calculate_node_totalpages(struct pglist_data *pgdat,
					     unsigned long node_start_pfn,
					     unsigned long node_end_pfn)
{
	unsigned long realtotalpages = 0, totalpages = 0;
	enum zone_type i;

	for (i = 0; i < MAX_NR_ZONES; i++) {
		struct zone *zone = pgdat->node_zones + i;
		unsigned long zone_start_pfn, zone_end_pfn;
		unsigned long spanned, absent;
		unsigned long size, real_size;

		spanned = zone_spanned_pages_in_node(pgdat->node_id, i,
						     node_start_pfn,
						     node_end_pfn,
						     &zone_start_pfn,
						     &zone_end_pfn);
		absent = zone_absent_pages_in_node(pgdat->node_id, i,
						   node_start_pfn,
						   node_end_pfn);

		size = spanned;
		real_size = size - absent;

		if (size)
			zone->zone_start_pfn = zone_start_pfn;
		else
			zone->zone_start_pfn = 0;
		zone->spanned_pages = size;
		zone->present_pages = real_size;

		totalpages += size;
		realtotalpages += real_size;
	}

	pgdat->node_spanned_pages = totalpages;
	pgdat->node_present_pages = realtotalpages;
	printk(KERN_DEBUG "On node %d totalpages: %lu\n", pgdat->node_id,
	       realtotalpages);
}

  • zone_spanned_pages_in_node() scans the node's memblock regions and derives the zone's start and end PFNs; if the regions contain holes, those holes are included in the span
  • zone_absent_pages_in_node: counts the hole pages between start_pfn and end_pfn
  • size = spanned: the number of spanned pages (holes included)
  • real_size = size - absent: the number of pages actually backed by physical memory
  • size and real_size are stored in zone->spanned_pages and zone->present_pages respectively
  • totalpages: all PFNs spanned by the node (holes included)
  • realtotalpages: the pages actually present in the node
  • totalpages and realtotalpages are stored in pgdat->node_spanned_pages and pgdat->node_present_pages respectively.
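
The per-zone bookkeeping above reduces to one subtraction per zone; a sketch of that accounting with the struct and helper names invented for illustration:

```c
/* Per-zone page accounting, mirroring calculate_node_totalpages():
 * 'spanned' counts every pfn in the zone range, 'absent' counts the
 * holes, 'present' is what is actually backed by RAM. */
struct zone_pages {
	unsigned long spanned;	/* -> zone->spanned_pages */
	unsigned long present;	/* -> zone->present_pages */
};

static struct zone_pages account_zone(unsigned long spanned,
				      unsigned long absent)
{
	struct zone_pages zp;

	zp.spanned = spanned;
	zp.present = spanned - absent;	/* real_size = size - absent */
	return zp;
}
```

Summing spanned and present over all zones then yields pgdat->node_spanned_pages and pgdat->node_present_pages.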

free_area_init_core

It initializes the data inside each zone in detail:

static void __init free_area_init_core(struct pglist_data *pgdat)
{
	enum zone_type j;
	int nid = pgdat->node_id;

	pgdat_init_internals(pgdat);
	pgdat->per_cpu_nodestats = &boot_nodestats;

	for (j = 0; j < MAX_NR_ZONES; j++) {
		struct zone *zone = pgdat->node_zones + j;
		unsigned long size, freesize, memmap_pages;
		unsigned long zone_start_pfn = zone->zone_start_pfn;

		size = zone->spanned_pages;
		freesize = zone->present_pages;

		/*
		 * Adjust freesize so that it accounts for how much memory
		 * is used by this zone for memmap. This affects the watermark
		 * and per-cpu initialisations
		 */
		memmap_pages = calc_memmap_size(size, freesize);
		if (!is_highmem_idx(j)) {
			if (freesize >= memmap_pages) {
				freesize -= memmap_pages;
				if (memmap_pages)
					printk(KERN_DEBUG
					       "  %s zone: %lu pages used for memmap\n",
					       zone_names[j], memmap_pages);
			} else
				pr_warn("  %s zone: %lu pages exceeds freesize %lu\n",
					zone_names[j], memmap_pages, freesize);
		}

		/* Account for reserved pages */
		if (j == 0 && freesize > dma_reserve) {
			freesize -= dma_reserve;
			printk(KERN_DEBUG "  %s zone: %lu pages reserved\n",
			       zone_names[0], dma_reserve);
		}

		if (!is_highmem_idx(j))
			nr_kernel_pages += freesize;
		/* Charge for highmem memmap if there are enough kernel pages */
		else if (nr_kernel_pages > memmap_pages * 2)
			nr_kernel_pages -= memmap_pages;
		nr_all_pages += freesize;

		/*
		 * Set an approximate value for lowmem here, it will be adjusted
		 * when the bootmem allocator frees pages into the buddy system.
		 * And all highmem pages will be managed by the buddy system.
		 */
		zone_init_internals(zone, j, nid, freesize);

		if (!size)
			continue;

		set_pageblock_order();
		setup_usemap(pgdat, zone, zone_start_pfn, size);
		init_currently_empty_zone(zone, zone_start_pfn, size);
		memmap_init(size, nid, j, zone_start_pfn);
	}
}

  • calc_memmap_size: computes the number of pages consumed by mem_map and prints it
  • dma_reserve: if DMA memory is reserved, it is subtracted from freesize and printed
  • zone_init_internals: initializes the zone's core fields
  • set_pageblock_order: sets pageblock_order when CONFIG_HUGETLB_PAGE_SIZE_VARIABLE is configured.
  • setup_usemap: for non-sparse memory models, allocates pageblock_flags here; under sparse it was already created during sparse initialization.
  • init_currently_empty_zone: initializes the free_area lists (the buddy allocator is not built up yet) and sets zone->initialized = 1.
  • memmap_init: once the zone itself is set up, finally initializes every page.
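
calc_memmap_size() essentially converts "one struct page per page frame" into a page count, rounded up to whole pages. A hedged sketch of that arithmetic (64 bytes per struct page and 4 KiB pages are typical values on 64-bit x86, not guarantees; the constants below are ours):

```c
#define SKETCH_PAGE_SIZE	4096UL
#define SKETCH_STRUCT_PAGE_SIZE	64UL	/* typical sizeof(struct page) */

/* Roughly what calc_memmap_size() computes: the total mem_map
 * footprint for 'spanned_pages' frames, rounded up to whole pages. */
static unsigned long memmap_pages(unsigned long spanned_pages)
{
	unsigned long bytes = spanned_pages * SKETCH_STRUCT_PAGE_SIZE;

	return (bytes + SKETCH_PAGE_SIZE - 1) / SKETCH_PAGE_SIZE;
}
```

With these assumed sizes the mem_map overhead is 64/4096, i.e. about 1.56% of the zone's spanned memory, which is what gets subtracted from freesize above.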

memmap_init

memmap_init() continues the initialization down to the page level:


void __meminit __weak memmap_init(unsigned long size, int nid,
				  unsigned long zone,
				  unsigned long range_start_pfn)
{
	unsigned long start_pfn, end_pfn;
	unsigned long range_end_pfn = range_start_pfn + size;
	int i;

	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);

		if (end_pfn > start_pfn) {
			size = end_pfn - start_pfn;
			memmap_init_zone(size, nid, zone, start_pfn,
					 MEMMAP_EARLY, NULL);
		}
	}
}

This function iterates over the node's memblock PFN ranges, clamps each one to the zone's range, and calls memmap_init_zone() on every resulting piece.
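
The clamp() calls bound each memblock range to the zone window [range_start_pfn, range_end_pfn); only a non-empty intersection is passed on. A userspace sketch of that intersection (function names are ours, not the kernel's):

```c
static unsigned long clamp_pfn(unsigned long pfn,
			       unsigned long lo, unsigned long hi)
{
	if (pfn < lo)
		return lo;
	if (pfn > hi)
		return hi;
	return pfn;
}

/* Number of pages of [start, end) that fall inside the zone window
 * [zlo, zhi) -- i.e. the 'size' handed to memmap_init_zone(). */
static unsigned long intersect_pages(unsigned long start, unsigned long end,
				     unsigned long zlo, unsigned long zhi)
{
	start = clamp_pfn(start, zlo, zhi);
	end = clamp_pfn(end, zlo, zhi);
	return end > start ? end - start : 0;
}
```

A memblock range entirely outside the zone clamps to an empty interval and is skipped by the `end_pfn > start_pfn` test.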

memmap_init_zone

Starting from start_pfn, initialize size pages:


/*
 * Initially all pages are reserved - free ones are freed
 * up by memblock_free_all() once the early boot process is
 * done. Non-atomic initialization, single-pass.
 */
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
		unsigned long start_pfn, enum memmap_context context,
		struct vmem_altmap *altmap)
{
	unsigned long pfn, end_pfn = start_pfn + size;
	struct page *page;

	if (highest_memmap_pfn < end_pfn - 1)
		highest_memmap_pfn = end_pfn - 1;

#ifdef CONFIG_ZONE_DEVICE
	/*
	 * Honor reservation requested by the driver for this ZONE_DEVICE
	 * memory. We limit the total number of pages to initialize to just
	 * those that might contain the memory mapping. We will defer the
	 * ZONE_DEVICE page initialization until after we have released
	 * the hotplug lock.
	 */
	if (zone == ZONE_DEVICE) {
		if (!altmap)
			return;

		if (start_pfn == altmap->base_pfn)
			start_pfn += altmap->reserve;
		end_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
	}
#endif

	for (pfn = start_pfn; pfn < end_pfn; ) {
		/*
		 * There can be holes in boot-time mem_map[]s handed to this
		 * function.  They do not exist on hotplugged memory.
		 */
		if (context == MEMMAP_EARLY) {
			if (overlap_memmap_init(zone, &pfn))
				continue;
			if (defer_init(nid, pfn, end_pfn))
				break;
		}

		page = pfn_to_page(pfn);
		__init_single_page(page, pfn, zone, nid);
		if (context == MEMMAP_HOTPLUG)
			__SetPageReserved(page);

		/*
		 * Mark the block movable so that blocks are reserved for
		 * movable at startup. This will force kernel allocations
		 * to reserve their blocks rather than leaking throughout
		 * the address space during boot when many long-lived
		 * kernel allocations are made.
		 *
		 * bitmap is created for zone's valid pfn range. but memmap
		 * can be created for invalid pages (for alignment)
		 * check here not to call set_pageblock_migratetype() against
		 * pfn out of zone.
		 */
		if (!(pfn & (pageblock_nr_pages - 1))) {
			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
			cond_resched();
		}

		pfn++;
	}
}

  • __init_single_page: initializes a single struct page: it clears the page, encodes the page's zone and node id into page->flags, sets the refcount to 1 and _mapcount to -1, and initializes _last_cpupid and the lru list.
  • Every pageblock is marked MIGRATE_MOVABLE; when pages of another migrate type are needed later, they are obtained by converting blocks away from MIGRATE_MOVABLE.
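
The `pfn & (pageblock_nr_pages - 1)` test in the loop is just an alignment check: with an order-9 pageblock it is true once every 512 pages, so set_pageblock_migratetype() runs once per block rather than once per page. A sketch (the order value is an assumption matching the author's machine):

```c
#define SKETCH_PAGEBLOCK_ORDER	  9
#define SKETCH_PAGEBLOCK_NR_PAGES (1UL << SKETCH_PAGEBLOCK_ORDER)

/* True when pfn is the first page of its pageblock; only then does
 * memmap_init_zone() call set_pageblock_migratetype(). */
static int is_pageblock_start(unsigned long pfn)
{
	return (pfn & (SKETCH_PAGEBLOCK_NR_PAGES - 1)) == 0;
}
```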

The kernel favors large page groups when pages must be "stolen" from different migrate zones from those the allocation is intended for. Because all pages initially belong to the movable zone, stealing pages is required when regular, unmovable kernel allocations are performed. Naturally, not too many movable allocations will have been performed during boot, so chances are good that the allocator can pick maximally sized blocks and transfer them from the movable to the nonmovable list. Because the blocks have maximal size, no fragmentation is introduced into the movable zone!

pageblock_flags

Over a long uptime memory fragments and large contiguous allocations begin to fail. To counter this, the kernel classifies pages as movable, reclaimable, unmovable, and so on: when memory is low or heavily fragmented, reclaimable memory can be reclaimed and movable memory migrated to carve out contiguous free ranges for large allocations. The initial set of page migrate types is:

enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,	/* the number of types on the pcp lists */
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
	/*
	 * MIGRATE_CMA migration type is designed to mimic the way
	 * ZONE_MOVABLE works.  Only movable pages can be allocated
	 * from MIGRATE_CMA pageblocks and page allocator never
	 * implicitly change migration type of MIGRATE_CMA pageblock.
	 *
	 * The way to use it is to change migratetype of a range of
	 * pageblocks to MIGRATE_CMA which can be done by
	 * __free_pageblock_cma() function.  What is important though
	 * is that a range of pageblocks must be aligned to
	 * MAX_ORDER_NR_PAGES should biggest page be bigger then
	 * a single pageblock.
	 */
	MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
	MIGRATE_ISOLATE,	/* can't allocate from here */
#endif
	MIGRATE_TYPES
};

Tracking a migrate type for every single page would itself cost substantial memory on large systems. To reduce this overhead, the kernel groups multiple pages into one pageblock and manages the attribute per block: all pages within a block share the same migrate type. The per-type page usage of each zone can be inspected via /proc/pagetypeinfo.

On the author's machine the pageblock order is 9, i.e. every 512 pages form one pageblock and share one migrate type. In the kernel, the pageblock order is held in the pageblock_order variable (9 on this system), and pageblock_nr_pages gives the number of pages that make up one pageblock.
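
The memory saving is easy to quantify: the kernel keeps a few bits of state per pageblock (NR_PAGEBLOCK_BITS is 4 in current kernels, 3 migrate-type bits plus a skip bit), not per page. A sketch of the arithmetic, with the constants assumed rather than taken from a real config:

```c
#define SKETCH_PAGEBLOCK_NR_PAGES 512UL	/* pageblock_order = 9 */
#define SKETCH_NR_PAGEBLOCK_BITS  4UL	/* bits of state per block */

/* Approximate pageblock_flags size in bytes for a span of pages. */
static unsigned long usemap_bytes(unsigned long pages)
{
	unsigned long blocks = (pages + SKETCH_PAGEBLOCK_NR_PAGES - 1) /
			       SKETCH_PAGEBLOCK_NR_PAGES;

	return (blocks * SKETCH_NR_PAGEBLOCK_BITS + 7) / 8;
}
```

Under these assumptions 1 GiB of 4 KiB pages (262144 pages) needs only 512 blocks, i.e. 256 bytes of pageblock_flags.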

pageblock_flags management

The data structure holding pageblock_flags differs by memory model. Under the sparse memory model, pageblock_flags lives in the usage structure of each mem_section:


struct page;
struct page_ext;
struct mem_section {
	/*
	 * This is, logically, a pointer to an array of struct
	 * pages.  However, it is stored with some other magic.
	 * (see sparse.c::sparse_init_one_section())
	 *
	 * Additionally during early boot we encode node id of
	 * the location of the section here to guide allocation.
	 * (see sparse.c::memory_present())
	 *
	 * Making it a UL at least makes someone do a cast
	 * before using it wrong.
	 */
	unsigned long section_mem_map;

	struct mem_section_usage *usage;
#ifdef CONFIG_PAGE_EXTENSION
	/*
	 * If SPARSEMEM, pgdat doesn't have page_ext pointer. We use
	 * section. (see page_ext.h about this.)
	 */
	struct page_ext *page_ext;
	unsigned long pad;
#endif
	/*
	 * WARNING: mem_section must be a power-of-2 in size for the
	 * calculation and use of SECTION_ROOT_MASK to make sense.
	 */
};

The mem_section_usage structure is defined as follows:


struct mem_section_usage {
#ifdef CONFIG_SPARSEMEM_VMEMMAP
	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
#endif
	/* See declaration of similar field in struct zone */
	unsigned long pageblock_flags[0];
};

For the other, non-sparse memory models, pageblock_flags is defined in struct zone:


struct zone {
	... ...
#ifndef CONFIG_SPARSEMEM
	/*
	 * Flags for a pageblock_nr_pages block. See pageblock-flags.h.
	 * In SPARSEMEM, this map is stored in struct mem_section
	 */
	unsigned long		*pageblock_flags;
#endif /* CONFIG_SPARSEMEM */
	... ...
} ____cacheline_internodealigned_in_smp;
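
Wherever the bitmap lives, reading and writing a block's migrate type is plain bit manipulation on that array of unsigned longs. A simplified sketch of what get/set_pageblock_migratetype boil down to (the real kernel indexes by word-aligned groups and masks only the migrate-type bits; names and the flat indexing here are ours):

```c
#define SKETCH_BITS_PER_BLOCK 4UL
#define SKETCH_BITS_PER_LONG  (8UL * sizeof(unsigned long))

/* Store a 4-bit migrate type for pageblock 'block' in 'flags'. */
static void set_block_type(unsigned long *flags, unsigned long block,
			   unsigned long type)
{
	unsigned long bit = block * SKETCH_BITS_PER_BLOCK;
	unsigned long *word = &flags[bit / SKETCH_BITS_PER_LONG];
	unsigned long shift = bit % SKETCH_BITS_PER_LONG;

	*word &= ~(0xFUL << shift);		/* clear old type */
	*word |= (type & 0xFUL) << shift;	/* set new type */
}

/* Read the 4-bit migrate type of pageblock 'block'. */
static unsigned long get_block_type(const unsigned long *flags,
				    unsigned long block)
{
	unsigned long bit = block * SKETCH_BITS_PER_BLOCK;

	return (flags[bit / SKETCH_BITS_PER_LONG] >>
		(bit % SKETCH_BITS_PER_LONG)) & 0xFUL;
}
```

This is exactly the operation memmap_init_zone() performs once per block when it marks everything MIGRATE_MOVABLE at boot.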
