Container technologies such as kubernetes let all business processes run in a shared resource pool, which raises utilization and cuts cost. To keep different processes from interfering with each other, however, this places higher demands on the isolation provided by the underlying docker and kubernetes layers. Kubernetes is still a young technology and not yet fully mature in this respect; recently an inode-exhaustion incident hit one of our staging clusters:

Symptoms

In the test cluster, many pods were being Evicted:

[root@node01 ~]$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
default-http-backend-78d96f979f-5ljx4   1/1       Running   0          8d
perfcounter-proxy-8b884c4ff-2ng4j       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-5hq5k       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-66qfw       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-6hf7f       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-6knrm       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-6m9p5       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-768g6       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-7d74k       0/2       Evicted   0          20h
perfcounter-proxy-8b884c4ff-998kx       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-bmvjc       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-cbh6m       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-cd8jb       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-d2m25       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-dgtkk       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-ftf2r       0/2       Evicted   0          20h
perfcounter-proxy-8b884c4ff-hdz9x       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-hgftx       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-ks5sq       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-kwf6x       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-lnqct       2/2       Running   0          20h
perfcounter-proxy-8b884c4ff-ngs9s       0/2       Evicted   0          2d

Pod eviction normally happens after a node goes NotReady, for example when its disk fills up or its network fails; kubernetes then evicts the pods on the faulty node so that the number of serving instances is not affected. Evictions as frequent as those shown above, however, clearly indicate a problem: the node's state was unstable and kept flapping. Running kubectl describe node node04 showed that the evictions were caused by an inode shortage on node04:

Events:
  Type     Reason                 Age               From                  Message
  ----     ------                 ----              ----                  -------
  Warning  EvictionThresholdMet   3d (x3 over 4d)   kubelet, node04.kscn  Attempting to reclaim nodefsInodes
  Normal   NodeHasDiskPressure    3d (x3 over 4d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasDiskPressure
  Normal   NodeHasNoDiskPressure  3d (x3 over 4d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasNoDiskPressure
  Warning  EvictionThresholdMet   2d (x8 over 2d)   kubelet, node04.kscn  Attempting to reclaim nodefsInodes
  Normal   NodeHasDiskPressure    2d (x2 over 2d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasDiskPressure
  Normal   NodeHasNoDiskPressure  2d (x2 over 2d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasNoDiskPressure
  Warning  EvictionThresholdMet   20h (x9 over 1d)  kubelet, node04.kscn  Attempting to reclaim nodefsInodes
  Normal   NodeHasDiskPressure    20h (x6 over 1d)  kubelet, node04.kscn  Node node04.kscn status is now: NodeHasDiskPressure
  Normal   NodeHasNoDiskPressure  20h (x6 over 1d)  kubelet, node04.kscn  Node node04.kscn status is now: NodeHasNoDiskPressure

Logging in to node04 confirmed that the /home partition indeed had few inodes left:

[root@node04 gaorong]# df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/vda1      1310720 150802 1159918   12% /
devtmpfs       4116012    359 4115653    1% /dev
tmpfs          4118414     61 4118353    1% /dev/shm
tmpfs          4118414   2083 4116331    1% /run
tmpfs          4118414     15 4118399    1% /sys/fs/cgroup
/dev/vdb        819200 762459   56741   94% /home

Resolution

The first step is to determine which service is exhausting the inodes, that is, which directory holds the most of them; the directory then points to the service. Here we used the inodes helper script, which prints the inode count and size of each directory, and found that /home/docker/aufs was the biggest consumer:

[root@node04 gaorong]# inodes  --d /home/docker/aufs/ -t 50000 -e 100
------------------------------------------
[CONFIG] Directory to scan specified as /home/docker/aufs/
------------------------------------------
[CONFIG] Tree directories above 50000 inodes
------------------------------------------
[CONFIG] Exclude directories below 100 inodes
------------------------------------------
INODE USAGE SUMMARY
------------------------------------------
INODES   |   SIZE     | DIRECTORY
------------------------------------------
493221   |   16G      | diff
--5417   |   --130M   | --016fe5bdd62c9264bcda0c44ef1548aaf8b82acfee2b0b8943e7394218118550
--166    |   --65M    | --047d364456b37521445751910a4251faa6adcd50908f4d2f064ecbe80f20d332
--10536  |   --203M   | --0a0b37e475c91a49cea4d732f83e6f87010edae86e25f4c7e14203e66c4122ea
... lines omitted ...
--2050   |   --64M    | --f5fa7a7efabad90d1dfa4519f5c7fae611bdbfc9a9b43eb58b8c8bc035a15336
--8714   |   --304M   | --f7618789addc900474c931ce9bbdc52abd5bf61ab98cd23d46e6189bd41094d1
--241    |   --174M   | --faf7ea4dc438088688dded3164ce7a11018a7a5dc346d621b43965bc3b0e60cb
--10921  |   --325M   | --fb08d1ed58bcdd3374ed835fcb27554ab52bcdd4822d502eb00066dec4d70650
386      |   1.6M     | layers
179712   |   6.4G     | mnt
--7110   |   --467M   | --06feaf7d4336e97f3b11440865b97c0fadedd5488126215216be616c656e82f5
--10764  |   --337M   | --26f394cad00c1876f832e2a4fb83816253dd8d70303a0d3b96a46edaf3564a05
--18914  |   --1.2G   | --3912a58cf254540753d8accf69ed7a8c1c9d9539d73d35be9bd105dde94effe5
--2965   |   --168M   | --401c0221bb62e36b3fcfb181420b64ec21147783860c72732299dba2f69f0280
--3759   |   --85M    | --773333aef4b7f00f1f585b4c41fcbe20906c28b53999cfb78df268034d9b59e4
--18692  |   --626M   | --79a76870faa7ed5f218b87e942d12523e5a9dd14cdb8449ccea5af279f45b526
--26462  |   --758M   | --81d5c11dcc68df6ceee547fe84236c0234e4a63844e4ffc19047b6846185871b
--2333   |   --24M    | --91bc77e078602f1e7285d2d742368a7a17ad0c0c3c736305fcbe68a11aa23e4c
--10623  |   --218M   | --97a5ff749684b366c4a3ecff5536990e1db907577065deb7839e9da3c116ff93
--18914  |   --1.2G   | --a4b945841c02a8a38463096f5cc1fd314563d94c78f8ccc92221b7f2571b2c3c
--6749   |   --157M   | --aaa03c1ad6cd6fb332cce4c033eb74e2ae9f62364d5ea6e9d434aa88b6925644
--17989  |   --488M   | --b6fb0a75102f0f25aa14a2f76b3013fab19c3f49b9ab49a2a8c230d52b84ff74
--28536  |   --674M   | --d01bdbddb09cf36411c6179cb9657390eb4ed0f440f8aa05c94e38f7de49439c
--5284   |   --113M   | --e3c555a3df172a2cd852cff210b43693bbc1cfa0b7cb5dbaee7645a4abdcfdb0
------------------------------------------
673320     | 22G        | /home/docker/aufs/
------------------------------------------
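
If this inodes helper is not at hand, stock tools give a rough equivalent; a sketch with GNU du and find (the --inodes flag needs coreutils 8.22 or later):

[root@node04 gaorong]# du --inodes -x /home/docker | sort -n | tail -5    # five heaviest directories, cumulative counts
[root@node04 gaorong]# find /home/docker/aufs/diff -xdev | wc -l          # raw entry count for a single subtree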

/home/docker/aufs is the directory where docker stores image layers when aufs is used as the storage driver; every file in an image has a corresponding file here. (Docker's default storage location is /var/lib/docker; in this environment it was changed to /home/docker for project reasons.) Inode exhaustion caused by the many small files under aufs is a fairly common problem; storage drivers backed by block devices, such as devicemapper, do not suffer from it. Kubernetes periodically cleans up imagefs by deleting unused images and containers, and only evicts pods if that still fails to free enough resources. We confirmed that all images on this node were in use, so kubelet had presumably already reclaimed everything it could; with inode pressure persisting after image GC, evicting pods was its only remaining option. After the evictions the node turned Ready again, newly created pods were scheduled onto it, inodes ran short again after a while, evictions kicked in again, and so on in a loop, destabilizing the cluster and leaving large numbers of evicted pods. Since we had no way to shrink the image inode usage ourselves, reclaiming it had to be left to kubernetes.
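
For reference, this reclaim-then-evict behavior is governed by kubelet's image GC and eviction thresholds; a sketch of the relevant flags (the values shown match kubelet's usual defaults, not necessarily what this cluster ran):

kubelet --eviction-hard=nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<15% \
        --image-gc-high-threshold=85 \
        --image-gc-low-threshold=80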
Meanwhile we still had to find out which other directories were heavy inode consumers. The aufs tree above accounts for 673,320 inodes, while df -i reported 762,459 inodes in use on /home, so something else was consuming a sizable share as well. Further digging turned up another big consumer: /home/docker/volumes.

[root@node04 docker]# inodes  --d /home/docker/volumes -t 50000 -e 500
------------------------------------------
[CONFIG] Directory to scan specified as /home/docker/volumes
------------------------------------------
[CONFIG] Tree directories above 50000 inodes
------------------------------------------
[CONFIG] Exclude directories below 500 inodes
------------------------------------------
INODE USAGE SUMMARY
------------------------------------------
INODES   |   SIZE     | DIRECTORY
------------------------------------------
10995    |   715M     | 0afe5e70b2b22d1bee735977a8e931e2f2a65da5d79c08babf08a4de1a69877a
6576     |   378M     | 0d0453ed64d4e830f408cf920a5ae13cefe1d4bfe3444464b9e612e699872a17
8497     |   414M     | 1708cd1ce59818132e93bada3db8926a9eb03e08f553ad66f9df738424408704
6862     |   289M     | 2940dea37d29501423d6eb50056034c89a9caf92cd2cd567ea3ffc02b54c813d
3224     |   102M     | 2fe657a647b3c8b9b9a4d552cdeee682bd465edff14d90477a3277d79eeed807
9522     |   409M     | 354c182b47707cc173de72ccf7ad99cabccc61d1f44e70a72d66a32334894d14
15204    |   533M     | 4d49d25d55ab02f59fef56d984bfd811c3a2c8ec02378c0d7e860fac9df9d3ee
9546     |   407M     | 4e1d2bb4ab8ca730df17b8a16910740dba8046cf8d9dc71625a017d67e16f95f
9205     |   408M     | 51acc932cc72f620c9fded3cb93ea877467b78364731d35e58c08a72b2b247c2
9801     |   481M     | 56a0e0a016c8c65cc27430dcd4053a18dec81bfd444da505148fe3fc30f4506c
17675    |   473M     | 6ee2b44c4351790ef36bb7a1774fa2647b6d4c9ecc5dce1bcb33a9a522ea0808
9035     |   484M     | 7928e4c94867ee49de041565e92a4c36517a6a7f904149dbd5b7df331b4f6a0e
7788     |   285M     | 85c76690344f6b11ccaa95153df0b317b6ade6a6f6179e19ad3c40b9c2ba4d27
11193    |   856M     | 8d51785afbeb6754e70286c73ee1fa5044b8eab02b606d692e64eea342303449
3054     |   79M      | 9252da456a675c2c6519568b1bae3c7e410f1d5dc89e47f4e13bdd26964367e4
8604     |   377M     | 962d4c2e6dd47f642aafe49557b660b9767b859c472ae0002c4f2ef4789bb350
5366     |   132M     | 9bb9e9d8eff02f63eeea8dee9c351d6608b3e061d659a013ee5370b961961707
9171     |   480M     | b8edb07ae19b7e5920b3aa254fb437670cc2f21f570cde693a778400a8e2784f
15088    |   592M     | bd017c95ed823d8bbe7e4d7ee8bb40160c86b613fe9683c9a09a1e4dde3f017e
7813     |   285M     | bf89566ba7852b07dc52fa5ef9d3a467dbcc5608532f6c2b0682feebed508401
14454    |   1.2G     | c5a858a1ea25032f90d97b3765bebd32ffcecd5b911087451cd6d4fdaee1c92c
10427    |   467M     | c9548dfca3e3de3dd1387c47a1a6771fe8aaf3908ab17a58287dad8511fa416c
4530     |   215M     | cc4bde58318575c1c5fa8dde53e05d1f082b5a9944cecec23ac50989cebd963c
8036     |   381M     | ced3dd5ecabdde6e313db4514290b30f735f8f18431ac6cb848d65d980e101b5
6439     |   374M     | d0b3ca6b65cf9a686f0d521a813cb27d91e9dfd6078b80171e1dc16fdba6a922
15891    |   589M     | e6292e24cc7d4a5ec43ed5ede58e9356b369e8e72f24327d297dd668fc20a640
11088    |   831M     | ef175210467e4b58fb2215e35d679435699f8ce1f043a6225ff4fa1033c0e345
------------------------------------------
259083     | 15G        | /home/docker/volumes
------------------------------------------

The /home/docker/volumes directory is created by docker to back anonymous volumes declared via VOLUME (the default path is /var/lib/docker/volumes; here it was changed to /home/docker/volumes for project reasons). When starting a container we normally mount a data volume with something like docker run -ti -v /home/gaorong/:/home/gaorong nginx, which binds /home/gaorong inside the container to /home/gaorong on the host. But if we do not explicitly say which host directory to mount, docker has to create a default one: with docker run -ti -v /home/gaorong nginx, the container's /home/gaorong is actually backed by an automatically created directory under /var/lib/docker/volumes on the host.
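
This is easy to reproduce in isolation; a quick sketch (the container name vol-demo is made up for the demo):

[root@node04 ~]# docker run -d -v /home/gaorong --name vol-demo nginx
[root@node04 ~]# docker inspect -f '{{ json .Mounts }}' vol-demo    # Source points into .../docker/volumes/<hash>/_data
[root@node04 ~]# docker rm -fv vol-demo                             # -v removes the anonymous volume along with the container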
Let's take a look at what is inside:

[root@node04 volumes]# tree -L 3
.
├── 0ad528934774355e22f4afa2df56a85cbe18bed0f922f37d054c1c0362baf648
│   └── _data
│       └── luhualin
├── 16d08b2bdb76ebeede9b6ee6da1378e44cf47ba825d6545f1a0ead8e21f9ffc0
│   └── _data
├── 1708cd1ce59818132e93bada3db8926a9eb03e08f553ad66f9df738424408704
│   └── _data
│       ├── global-bigdata-micsql
│       └── global-bigdata-micsql.tmp
├── 1a5c420084d42c370ac670d6e483cc2a960f4da71cf0fa0a765e1bf099a7c027
│   └── _data
│       ├── data
│       ├── meta
│       └── wal
├── 1fca6975f0af913c116aad7273a32f0a2795e62f3439ab779b39524c9434cf5d
│   └── _data
│       └── ContainerCloud
├── 3bb2d14dd39cca6d93296c742bbce8446c79a5c441b81c60d07a661a9a252b63
│   └── _data
├── 3c4845e864469457b3dac0a077e7f542db0dcfd9e20fbb112fde0e514d23a261
│   └── _data
├── 53ea91b9a833cb676d71cf336ffab97f20fe326bbd165427b64c9685e26a0f1f
│   └── _data
│       └── ContainerCloud
... lines omitted ...
├── fc82dfd844191eb8baf1a68d3378e8fe0553d9cc705eb2fdfd3e5398a17783e2
│   └── _data
│       ├── k8s-node-frigga
│       └── k8s-node-frigga.tmp
├── fd3dec13f142dc59ad8dbbc2acf3841fd893f6d548ebe74aa2348af8f26fa448
│   └── _data
└── metadata.db

As the tree shows, there are lots of small files in there. This cluster runs some CI/CD jobs that download large numbers of source files, and files that are small but numerous consume a lot of inodes. The anonymous volumes that have not yet been deleted can be listed with docker volume ls:

[root@node04 volumes]# docker volume ls
DRIVER              VOLUME NAME
local               3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae
local               2714f98661dfb0c6f94c99a0acf2c33f30d491667ce251e6499b069497095260
... lines omitted ...
local               ac04c0925a0203cd87c55eaf1c6f094cf7e0b2cf3173c80d9de459f2aca1ccd4

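As an immediate mitigation, anonymous volumes that are no longer referenced by any container can be deleted in one go (docker 1.13+); this does not touch volumes still attached to containers, so it only buys time:

[root@node04 volumes]# docker volume prune                                              # removes all unused local volumes, after confirmation
[root@node04 volumes]# docker volume ls -qf dangling=true | xargs -r docker volume rm   # scriptable equivalent
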
The next step is to find out which container/pod created these volumes, and why anonymous volumes are being created at all. Running docker ps -a | grep -v NAME | awk '{print $1}' | xargs docker inspect | grep -B 100 3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae shows which containers own these volumes; picking one of the hits and running docker inspect on it gives:

[root@node04 global-bigdata]# docker inspect 25b3d4902305
[{"Id": "25b3d4902305258371fbc71851dcb4aade62e393f567750f4776b7a3843e1ae4","Created": "2019-02-27T06:58:27.091296766Z","Path": "/docker-entrypoint.sh","Args": ["zkServer.sh","start-foreground"],...此处省略若干行..."GraphDriver": {"Data": null,"Name": "aufs"},"Mounts": [{"Type": "bind","Source": "/home/kubelet/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/volumes/kubernetes.io~secret/default-token-h7vj9","Destination": "/var/run/secrets/kubernetes.io/serviceaccount","Mode": "ro,rslave","RW": false,"Propagation": "rslave"},{"Type": "bind","Source": "/home/kubelet/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/etc-hosts","Destination": "/etc/hosts","Mode": "","RW": true,"Propagation": "rprivate"},{"Type": "bind","Source": "/home/kubelet/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/containers/zk/1d590aa4","Destination": "/dev/termination-log","Mode": "","RW": true,"Propagation": "rprivate"},{"Type": "volume","Name": "ac04c0925a0203cd87c55eaf1c6f094cf7e0b2cf3173c80d9de459f2aca1ccd4","Source": "/home/docker/volumes/ac04c0925a0203cd87c55eaf1c6f094cf7e0b2cf3173c80d9de459f2aca1ccd4/_data","Destination": "/data","Driver": "local","Mode": "","RW": true,"Propagation": ""},{"Type": "volume","Name": "3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae","Source": "/home/docker/volumes/3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae/_data",    <- 注意看这里 "Destination": "/datalog","Driver": "local","Mode": "","RW": true,"Propagation": ""},{"Type": "volume","Name": "2714f98661dfb0c6f94c99a0acf2c33f30d491667ce251e6499b069497095260","Source": "/home/docker/volumes/2714f98661dfb0c6f94c99a0acf2c33f30d491667ce251e6499b069497095260/_data","Destination": "/logs","Driver": "local","Mode": "","RW": true,"Propagation": ""}],"Config": {"Hostname": "zk-7848b46c9d-6nqhz","Domainname": "","User": "0","AttachStdin": false,"AttachStdout": false,"AttachStderr": false,"ExposedPorts": {"2181/tcp": {},"2888/tcp": {},"3888/tcp": {}},"Tty": false,"OpenStdin": false,"StdinOnce": false,"Cmd": ["zkServer.sh","start-foreground"],"Healthcheck": {"Test": ["NONE"]},"ArgsEscaped": true,"Volumes": {"/data": {},"/datalog": {},"/logs": {}},"WorkingDir": "/zookeeper-3.4.13","Entrypoint": ["/docker-entrypoint.sh"],"OnBuild": null,"Labels": {"annotation.io.kubernetes.container.hash": "ee6d75c4","annotation.io.kubernetes.container.ports": "[{\"containerPort\":2181,\"protocol\":\"TCP\"}]","annotation.io.kubernetes.container.restartCount": "0","annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log","annotation.io.kubernetes.container.terminationMessagePolicy": "File","annotation.io.kubernetes.pod.terminationGracePeriod": "30","io.kubernetes.container.logpath": "/var/log/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/zk/0.log","io.kubernetes.container.name": "zk","io.kubernetes.docker.type": "container","io.kubernetes.pod.name": "zk-7848b46c9d-6nqhz","io.kubernetes.pod.namespace": "rpc","io.kubernetes.pod.uid": "1461d38d-3a5d-11e9-a499-fa163e08f614","io.kubernetes.sandbox.id": "a0fd95b0bb06a573e385f23dc21187c23b8232d59a252d0d8790770e946851a5"}},}
]
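
As an aside, instead of grepping through full inspect dumps, a Go template makes the container-to-volume mapping much easier to scan; a sketch:

[root@node04 global-bigdata]# docker ps -aq | xargs docker inspect -f '{{ .Name }}: {{ range .Mounts }}{{ .Name }} {{ end }}' | grep 3a42fa60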

The output confirms that this container does mount that directory. Its image is zookper (a zookeeper image); let's see how its Dockerfile was written:

[root@node04 global-bigdata]# docker image  history zookper
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
06b178591ab3        3 weeks ago         /bin/sh -c #(nop)  CMD ["zkServer.sh" "start…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENTRYPOINT ["/docker-entr…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop) COPY file:e241c4b758b1c071…   1.13kB
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV PATH=/usr/local/sbin:…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  EXPOSE 2181 2888 3888        0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  VOLUME [/data /datalog /l…   0B                    <- 注意看这里
<missing>           3 weeks ago         /bin/sh -c #(nop) WORKDIR /zookeeper-3.4.13     0B
<missing>           3 weeks ago         |2 DISTRO_NAME=zookeeper-3.4.13 GPG_KEY=C61B…   61.1MB
<missing>           3 weeks ago         /bin/sh -c #(nop)  ARG DISTRO_NAME=zookeeper…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ARG GPG_KEY=C61B346552DC5…   0B
<missing>           3 weeks ago         /bin/sh -c set -ex;     adduser -D "$ZOO_USE…   4.83kB
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV ZOO_USER=zookeeper ZO…   0B
<missing>           3 weeks ago         /bin/sh -c apk add --no-cache     bash     s…   4.12MB
<missing>           3 weeks ago         /bin/sh -c set -x  && apk add --no-cache   o…   79.5MB
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV JAVA_ALPINE_VERSION=8…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV JAVA_VERSION=8u191       0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV PATH=/usr/local/sbin:…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV JAVA_HOME=/usr/lib/jv…   0B
<missing>           3 weeks ago         /bin/sh -c {   echo '#!/bin/sh';   echo 'set…   87B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV LANG=C.UTF-8             0B
<missing>           4 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>           4 weeks ago         /bin/sh -c #(nop) ADD file:2a1fc9351afe35698…   5.53MB    

So the Dockerfile declares VOLUME, but the pod/container was started without anything mounted at those paths, so docker created an anonymous volume for each of them under /home/docker/volumes; the container then wrote large numbers of small files into them, eating up the filesystem's inodes.
The official Dockerfile reference contains this passage:

The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can’t be guaranteed to be available on all hosts. For this reason, you can’t mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.

In other words, when running this image we must override those volume mountpoints; otherwise a default mountpoint is generated under /home/docker/volumes. In Kubernetes terms, we have to declare a volume explicitly and mount it at each of those paths. It can be an emptyDir, a PV, whatever; as long as the mountpoint is overridden, nothing is written to the default path.
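
Concretely, for the zookeeper pod above it is enough to mount something at every path the image declares; a minimal sketch of the relevant pod spec fragment using emptyDir (the volume names are illustrative):

    containers:
    - name: zk
      image: zookper
      volumeMounts:
      - name: data
        mountPath: /data
      - name: datalog
        mountPath: /datalog
      - name: logs
        mountPath: /logs
    volumes:
    - name: data
      emptyDir: {}
    - name: datalog
      emptyDir: {}
    - name: logs
      emptyDir: {}

With these mounts in place, docker no longer needs to generate anonymous volumes for /data, /datalog, and /logs.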
If the mountpoint is not overridden and the default configuration is used, the anonymous volume's lifecycle follows the pod's: deleting the pod automatically deletes the volume, and the corresponding data under /home/docker/volumes is removed with it. If you want that data to survive the pod's exit, be careful here.
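
The docker-level equivalent of this lifecycle, for completeness:

docker rm -v <container>    # -v deletes the container's anonymous volumes along with it
docker run --rm ...         # anonymous volumes are also removed when the container exits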
Back to the main point: to solve this class of inode exhaustion at the root, we would have to limit the number of inodes each pod/container may use, but most container storage drivers do not support inode isolation today, so block-device-based storage drivers such as devicemapper remain the better fit. Another option is to attach a dedicated disk (which can itself be virtualized) to special pods/containers with unusual IO patterns, so that workloads cannot affect each other.

Reposted from: https://www.cnblogs.com/gaorong/p/10472009.html
