Kubernetes CKA Certification Operations Engineer Notes - Kubernetes Storage
- 1. Why do we need volumes?
- 2. Volumes overview
- 3. Ephemeral, node-local, and network volumes
- 3.1 Ephemeral volume: emptyDir
- 3.2 Node-local volume: hostPath
- 3.3 Network volume: NFS
- 4. Persistent volumes overview
- 4.1 PV and PVC workflow
- 5. PV lifecycle
- 5.1 Static PV provisioning
- 5.2 Dynamic PV provisioning
- 5.3 Case study: an application storing data on a persistent volume
- 6. Stateful application deployment: the StatefulSet controller
- 7. Application configuration storage: ConfigMap
- 8. Sensitive data storage: Secret
1. Why do we need volumes?
Containers typically deal with three kinds of data:
- Initial data needed at startup, such as configuration files
- Temporary data produced at runtime that must be shared between several containers
- Persistent data produced at runtime, such as MySQL's data directory
2. Volumes overview
- A Volume in Kubernetes gives containers the ability to mount external storage
- A Pod must declare both a volume source (spec.volumes) and a mount point (spec.containers.volumeMounts) before it can use a Volume
Commonly used volume types:
- Local (hostPath, emptyDir)
- Network (NFS, Ceph, GlusterFS)
- Public cloud (AWS EBS)
- Kubernetes resources (configmap, secret)
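As a minimal sketch of the pairing described above (the Pod and volume names here are illustrative, not from the examples that follow), the volume source and the mount point are linked by the volume's name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:          # mount point: where the volume appears inside the container
    - name: cache          # must match a volume name declared under spec.volumes
      mountPath: /cache
  volumes:                 # volume source: what actually backs the mount
  - name: cache
    emptyDir: {}
```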
3. Ephemeral, node-local, and network volumes
3.1 Ephemeral volume: emptyDir
An emptyDir volume is an ephemeral volume tied to the Pod's lifecycle: when the Pod is deleted, the volume is deleted with it.
Use case: sharing data between containers in a Pod
Example: sharing data between the containers of a Pod
[root@k8s-master ~]# vi pod-v.yaml
[root@k8s-master ~]# cat pod-v.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: write
    image: centos
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
    volumeMounts:
    - name: data
      mountPath: /data
  - name: read
    image: centos
    command: ["bash","-c","tail -f /data/hello"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
[root@k8s-master ~]# kubectl apply -f pod-v.yaml
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns 1/1 Running 2 3d23h 10.244.36.74 k8s-node1 <none> <none>
my-pod 2/2 Running 2 4m32s 10.244.36.86 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl logs my-pod
error: a container name must be specified for pod my-pod, choose one of: [write read]
[root@k8s-master ~]# kubectl logs my-pod read
7
8
9
10
...
# Without -c, exec defaults to the write container
[root@k8s-master ~]# kubectl exec -it my-pod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulting container name to write.
Use 'kubectl describe pod/my-pod -n default' to see all of the containers in this pod.
[root@my-pod /]# cd /data/
[root@my-pod data]# ls
hello
[root@my-pod data]# cat hello
1
2
3
4
5
6
...
[root@my-pod data]# tail -f hello
70
71
72
73
74
75
76
...
# Switch to the read container
[root@my-pod data]# exit
exit
command terminated with exit code 130
[root@k8s-master ~]# kubectl exec -it my-pod -c read bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@my-pod /]# cd /data/
[root@my-pod data]# tail -f hello
91
92
93
94
95
96
97
....
# The data can also be inspected on the node where the Pod is scheduled
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns 1/1 Running 2 3d23h 10.244.36.74 k8s-node1 <none> <none>
my-pod 1/2 CrashLoopBackOff 5 16m 10.244.36.86 k8s-node1 <none> <none>
# On the node running the Pod
[root@k8s-node1 ~]# ls /var/lib/kubelet/
config.yaml device-plugins pki plugins_registry pods
cpu_manager_state kubeadm-flags.env plugins pod-resources
[root@k8s-node1 ~]# ls /var/lib/kubelet/pods/
075b0581-87cd-4196-a03c-007ea51584a0 a5f0ec32-818c-44af-8c4f-5b30b36f596e
1ab11649-6fbe-4538-8608-fb62399d2b8b ab151aff-ac4b-4895-a3dd-4cc8df0c09be
2e1362f8-f67c-44c0-a45f-db99c9026fa4 b9d0512c-4696-468f-9469-d31b54e8cb64
36bcdc4c-d7ac-4429-89c6-a035c6dda3c6 cb250494-02f8-404e-b2cb-8430fe62e752
5a5a4820-fb2c-48be-bbdb-94b044266f03 ffad9162-22f4-48fa-84e0-5d11cb3fd3f9
9677d5e0-6531-4351-86c5-e84e243f7e2f
[root@k8s-node1 ~]# docker ps|grep my-pod
cfa549def6de centos "bash -c 'for i in {…" 9 seconds ago Up 8 seconds k8s_write_my-pod_default_ab151aff-ac4b-4895-a3dd-4cc8df0c09be_6
037a43215f94 centos "bash -c 'tail -f /d…" 16 minutes ago Up 16 minutes k8s_read_my-pod_default_ab151aff-ac4b-4895-a3dd-4cc8df0c09be_0
47f12c464fba registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 17 minutes ago Up 17 minutes k8s_POD_my-pod_default_ab151aff-ac4b-4895-a3dd-4cc8df0c09be_0
[root@k8s-node1 ~]# cd /var/lib/kubelet/pods/ab151aff-ac4b-4895-a3dd-4cc8df0c09be
[root@k8s-node1 ab151aff-ac4b-4895-a3dd-4cc8df0c09be]# ls
containers etc-hosts plugins volumes
[root@k8s-node1 ab151aff-ac4b-4895-a3dd-4cc8df0c09be]# cd volumes/
[root@k8s-node1 volumes]# ls
kubernetes.io~empty-dir kubernetes.io~secret
[root@k8s-node1 volumes]# cd kubernetes.io~empty-dir/
[root@k8s-node1 kubernetes.io~empty-dir]# ls
data
[root@k8s-node1 kubernetes.io~empty-dir]# cd data/
[root@k8s-node1 data]# ls
hello
[root@k8s-node1 data]# tail -f hello
91
92
93
94
95
96
97
98
99
100
3.2 Node-local volume: hostPath
A hostPath volume mounts a file or directory from the node's filesystem (the node the Pod runs on) into the Pod's containers.
Use case: containers in a Pod need to access files on the host
Example: mount the host's /tmp directory at /data in the container
[root@k8s-master ~]# vi pod-h.yaml
[root@k8s-master ~]# cat pod-h.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod2
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 36000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /tmp
      type: Directory
[root@k8s-master ~]# kubectl apply -f pod-h.yaml
pod/my-pod2 created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns 1/1 Running 2 4d6h 10.244.36.74 k8s-node1 <none> <none>
my-pod 1/2 CrashLoopBackOff 63 7h 10.244.36.86 k8s-node1 <none> <none>
my-pod2 1/1 Running 0 22s 10.244.169.169 k8s-node2 <none> <none>
[root@k8s-master ~]# kubectl exec -it my-pod2 -- sh
/ # cd /data/
/data # ls
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
/data # touch a.txt
/data # ls
a.txt
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
/data #
[root@k8s-node2 ~]# cd /tmp/
[root@k8s-node2 tmp]# ls
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
[root@k8s-node2 tmp]# ls
a.txt
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
3.3 Network volume: NFS
An NFS volume provides NFS mount support and can automatically mount an NFS share into a Pod.
NFS is a mainstream file-sharing service.
# yum install nfs-utils
# vi /etc/exports
/ifs/kubernetes *(rw,no_root_squash)
# mkdir -p /ifs/kubernetes
# systemctl start nfs
# systemctl enable nfs
Note: the nfs-utils package must be installed on every Node
[root@k8s-node2 tmp]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
...
Complete!
[root@k8s-node2 tmp]# cd
[root@k8s-node2 ~]# vi /etc/exports
[root@k8s-node2 ~]# mkdir -p /ifs/kubernetes
[root@k8s-node2 ~]# systemctl start nfs
[root@k8s-node2 ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@k8s-node1 ~]# mount -t nfs 10.0.0.63:/ifs/kubernetes /mnt
[root@k8s-node1 ~]# cd /mnt/
[root@k8s-node1 mnt]# ls
[root@k8s-node1 mnt]#
[root@k8s-node2 mnt]# cd /ifs/kubernetes/
[root@k8s-node2 kubernetes]# ls
[root@k8s-node2 kubernetes]# touch a.txt
[root@k8s-node2 kubernetes]# ls
a.txt
[root@k8s-node1 mnt]# ls
a.txt
[root@k8s-master ~]# vi dep-n.yaml
[root@k8s-master ~]# cat dep-n.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        nfs:
          server: 10.0.0.63
          path: /ifs/kubernetes
[root@k8s-master ~]# kubectl apply -f dep-n.yaml
deployment.apps/nginx-deployment created
[root@k8s-master ~]# kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
service/nginx-deployment exposed
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
my-dep NodePort 10.111.199.51 <none> 80:31734/TCP 25d
my-service NodePort 10.100.228.0 <none> 80:32433/TCP 19d
nginx-deployment NodePort 10.102.245.67 <none> 80:31176/TCP 13s
web NodePort 10.96.132.243 <none> 80:31340/TCP 28d
web666 NodePort 10.106.85.63 <none> 80:30008/TCP 5d6h
[root@k8s-master ~]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.0.61:6443 29d
my-dep <none> 25d
my-service 10.244.169.164:80,10.244.169.167:80 19d
nginx-deployment 10.244.169.164:80,10.244.169.167:80 37s
web 10.244.169.158:8080 28d
web666 10.244.169.158:80 5d6h
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 0 26m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 3m12s
nginx-deployment-577f9758bc-fh4c5 1/1 Running 0 3m12s
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 3m12s
[root@k8s-master ~]# curl 10.102.245.67
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[root@k8s-master ~]# kubectl exec -it nginx-deployment-577f9758bc-8jffx bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-577f9758bc-8jffx:/# cd /usr/share/nginx/html/
root@nginx-deployment-577f9758bc-8jffx:/usr/share/nginx/html# ls
a.txt
[root@k8s-node2 kubernetes]# vi index.html
[root@k8s-node2 kubernetes]# cat index.html
<h1>hello nginx!</h1>
root@nginx-deployment-577f9758bc-8jffx:/usr/share/nginx/html# ls
a.txt index.html
4. Persistent volumes overview
- PersistentVolume (PV): an abstraction over creating and consuming storage resources, letting storage be managed as a cluster resource
- PersistentVolumeClaim (PVC): lets users consume storage without caring about the underlying Volume implementation details
4.1 PV and PVC workflow
Application workload:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-pvc
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: my-pvc
Claim template:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Volume definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes
    server: 10.0.0.63
[root@k8s-master ~]# vi dep-pvc.yaml
[root@k8s-master ~]# cat dep-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-pvc
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@k8s-master ~]# kubectl apply -f dep-pvc.yaml
deployment.apps/deploy-pvc created
persistentvolumeclaim/my-pvc created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-pvc-76846b5956-2g67t 0/1 Pending 0 7s
deploy-pvc-76846b5956-kxlmr 0/1 Pending 0 7s
deploy-pvc-76846b5956-sc2k6 0/1 Pending 0 7s
dns 0/1 Error 2 4d6h
my-pod 0/2 Error 65 7h44m
my-pod2 1/1 Running 0 44m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 21m
nginx-deployment-577f9758bc-fh4c5 0/1 ContainerCreating 0 21m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 21m
[root@k8s-master ~]# kubectl describe pod deploy-pvc-76846b5956-2g67t
Name: deploy-pvc-76846b5956-2g67t
Namespace: default
Priority: 0
Node: <none>
Labels:       app=nginx2
              pod-template-hash=76846b5956
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/deploy-pvc-76846b5956
Containers:
  nginx:
    Image:        nginx:1.14.2
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  www:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   my-pvc
    ReadOnly:    false
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  27s (x2 over 28s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
[root@k8s-master ~]# vi pv.yaml
[root@k8s-master ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes
    server: 10.0.0.63
[root@k8s-master ~]# kubectl apply -f pv.yaml
persistentvolume/my-pv created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-pvc-76846b5956-2g67t 0/1 Pending 0 3m1s
deploy-pvc-76846b5956-kxlmr 0/1 Pending 0 3m1s
deploy-pvc-76846b5956-sc2k6 0/1 Pending 0 3m1s
dns 0/1 Error 2 4d7h
my-pod 0/2 Error 65 7h47m
my-pod2 1/1 Running 0 47m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 23m
nginx-deployment-577f9758bc-fh4c5 0/1 ContainerCreating 0 23m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 23m
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-pvc-76846b5956-2g67t 0/1 ContainerCreating 0 3m6s
deploy-pvc-76846b5956-kxlmr 0/1 ContainerCreating 0 3m6s
deploy-pvc-76846b5956-sc2k6 0/1 ContainerCreating 0 3m6s
dns 0/1 Error 2 4d7h
my-pod 0/2 Error 65 7h47m
my-pod2 1/1 Running 0 47m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 23m
nginx-deployment-577f9758bc-fh4c5 0/1 ContainerCreating 0 23m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 23m
[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound my-pv 5Gi RWX 3m23s
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Bound default/my-pvc 32s
[root@k8s-master ~]# kubectl expose deployment deploy-pvc --port=80 --target-port=80 --type=NodePort
service/deploy-pvc exposed
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deploy-pvc NodePort 10.98.3.1 <none> 80:31756/TCP 17s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
my-dep NodePort 10.111.199.51 <none> 80:31734/TCP 26d
my-service NodePort 10.100.228.0 <none> 80:32433/TCP 19d
nginx-deployment NodePort 10.102.245.67 <none> 80:31176/TCP 26m
web NodePort 10.96.132.243 <none> 80:31340/TCP 28d
web666 NodePort 10.106.85.63 <none> 80:30008/TCP 5d6h
5. PV lifecycle
AccessModes:
AccessModes set how a PV may be accessed, describing the access rights a user application has to the storage resource. The following modes are supported:
- ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node
- ReadOnlyMany (ROX): read-only, mountable by multiple nodes
- ReadWriteMany (RWX): read-write, mountable by multiple nodes
RECLAIM POLICY:
PVs currently support three reclaim policies:
- Retain: keep the data; an administrator must clean it up manually
- Recycle: scrub the data in the PV, equivalent to running rm -rf /ifs/kubernetes/*
- Delete: delete the backing storage together with the PV
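The policy is set per PV through spec.persistentVolumeReclaimPolicy. A sketch reusing the NFS server from this section (the PV name is illustrative; when the field is omitted, statically created PVs default to Retain):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo                    # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # Retain | Recycle | Delete
  nfs:
    path: /ifs/kubernetes
    server: 10.0.0.63
```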
STATUS:
During its lifecycle a PV may pass through four phases:
- Available: the PV is free and not yet bound to any PVC
- Bound: the PV is bound to a PVC
- Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the PV failed
5.1 Static PV provisioning
The usage shown so far is called static provisioning: a Kubernetes operations engineer has to create a batch of PVs in advance for developers to consume.
[root@k8s-master ~]# kubectl delete -f dep-pvc.yaml
deployment.apps "deploy-pvc" deleted
persistentvolumeclaim "my-pvc" deleted
[root@k8s-master ~]# cp pv.yaml test-pv.yaml
[root@k8s-master ~]# vi test-pv.yaml
[root@k8s-master ~]# cat test-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes/pv0001
    server: 10.0.0.63
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002
spec:
  capacity:
    storage: 15Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes/pv0002
    server: 10.0.0.63
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes/pv0003
    server: 10.0.0.63
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes/pv0004
    server: 10.0.0.63
---
[root@k8s-master ~]# kubectl apply -f test-pv.yaml
persistentvolume/pv0001 created
persistentvolume/pv0002 created
persistentvolume/pv0003 created
persistentvolume/pv0004 created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 0/1 Error 2 4h40m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h17m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 3h19m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h17m
pod-taint 0/1 Completed 4 6d3h
test-76846b5956-2b458 1/1 Running 0 19m
test-76846b5956-6lkb7 0/1 ContainerCreating 0 19m
test-76846b5956-8ns4z 1/1 Running 0 19m
web-96d5df5c8-6czpv 0/1 Completed 3 4d12h
web-96d5df5c8-6f4ww 0/1 Error 1 3h19m
web-96d5df5c8-6hc68 1/1 Running 2 3h19m
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 4h45m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h21m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 3h23m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h21m
pod-taint 1/1 Running 5 6d3h
test-76846b5956-2b458 1/1 Running 0 23m
test-76846b5956-6lkb7 1/1 Running 0 23m
test-76846b5956-8ns4z 1/1 Running 0 23m
web-96d5df5c8-6czpv 1/1 Running 4 4d12h
web-96d5df5c8-6f4ww 1/1 Running 2 3h23m
web-96d5df5c8-6hc68 1/1 Running 2 3h23m
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 3h45m
pv0001 5Gi RWX Retain Bound default/my-pvc 2m29s
pv0002 15Gi RWX Retain Available 2m29s
pv0003 25Gi RWX Retain Available 2m29s
pv0004 30Gi RWX Retain Available 2m29s
[root@k8s-master ~]# kubectl exec -it test-76846b5956-2b458 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@test-76846b5956-2b458:/# cd /usr/share/nginx/html/
root@test-76846b5956-2b458:/usr/share/nginx/html# ls
root@test-76846b5956-2b458:/usr/share/nginx/html# touch a.txt
root@test-76846b5956-2b458:/usr/share/nginx/html# ls
a.txt
[root@k8s-node2 kubernetes]# ll pv0001/
total 0
-rw-r--r-- 1 root root 0 Dec 21 09:56 a.txt
[root@k8s-node2 kubernetes]# ll pv0002/
total 0
[root@k8s-node2 kubernetes]# ll pv0003/
total 0
[root@k8s-node2 kubernetes]# ll pv0004/
total 0
[root@k8s-node2 kubernetes]#
[root@k8s-master ~]# vi test-pvc.yaml
[root@k8s-master ~]# cat test-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test2
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: my-pvc2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 26Gi
[root@k8s-master ~]# kubectl apply -f test-pvc.yaml
deployment.apps/test2 created
persistentvolumeclaim/my-pvc2 created
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h5m
pv0001 5Gi RWX Retain Bound default/my-pvc 22m
pv0002 15Gi RWX Retain Available 22m
pv0003 25Gi RWX Retain Available 22m
pv0004 30Gi RWX Retain Bound default/my-pvc2 22m
1. What criteria must match between a PVC and a PV?
- Storage capacity
- Access modes
2. Does the capacity field limit the actual storage used?
- No. Capacity is only a matching label; the actually usable space depends on the backing network storage (NFS, Ceph, ...)
- Whether the capacity can be enforced automatically depends on the storage technology (Kubernetes is gradually adding such enforcement for some backends)
3. Capacity matching strategy
- The PV whose capacity is closest to the request is matched
- If no PV can satisfy the claim, the Pod stays Pending, waiting for a PV that can be bound
4. The PV-PVC relationship
- Like a monogamous marriage: strictly one-to-one
5.2 PV 动态供给
PV静态供给明显的缺点是维护成本太高了!
因此,K8s开始支持PV动态供给,使用StorageClass对象实现。
基于NFS实现PV动态供给流程图
部署NFS实现自动创建PV插件
git clone https://github.com/kubernetes-incubator/external-storage
cd nfs-client/deploy
kubectl apply -f rbac.yaml # 授权访问apiserver
kubectl apply -f deployment.yaml # 部署插件,需修改里面NFS服务器地址与共享目录
kubectl apply -f class.yaml # 创建存储类
kubectl get cs # 查看存储类
5.3 案例:应用程序使用持久卷存储数据
[root@k8s-master ~]# unzip nfs-client.zip
Archive:  nfs-client.zip
   creating: nfs-client/
  inflating: nfs-client/class.yaml
  inflating: nfs-client/deployment.yaml
  inflating: nfs-client/rbac.yaml
[root@k8s-master ~]# cd nfs-client/
[root@k8s-master nfs-client]# ls
class.yaml deployment.yaml rbac.yaml
[root@k8s-master nfs-client]# vi deployment.yaml
[root@k8s-master nfs-client]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: lizhenliang/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 10.0.0.63
        - name: NFS_PATH
          value: /ifs/kubernetes
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.0.0.63
          path: /ifs/kubernetes
[root@k8s-master nfs-client]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "true"
[root@k8s-master nfs-client]# kubectl apply -f .
storageclass.storage.k8s.io/managed-nfs-storage created
serviceaccount/nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner unchanged
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master nfs-client]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h18m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 3m22s
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h54m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 3h56m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h54m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 56m
test-76846b5956-6lkb7 1/1 Running 0 56m
test-76846b5956-8ns4z 1/1 Running 0 56m
test2-78c4694588-87b9r 1/1 Running 0 26m
[root@k8s-master nfs-client]# cd
[root@k8s-master ~]# cp test-pvc.yaml test-pvc2.yaml
[root@k8s-master ~]# vi test-pvc2.yaml
[root@k8s-master ~]# cat test-pvc2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auto-pv
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: my-pvc3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc3
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 40Gi
[root@k8s-master ~]# kubectl apply -f test-pvc2.yaml
deployment.apps/auto-pv created
persistentvolumeclaim/my-pvc3 created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
auto-pv-6969ddf4bc-4c8xx 0/1 Pending 0 8s
my-pod2 1/1 Running 3 5h21m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 7m1s
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h58m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h58m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 60m
test-76846b5956-6lkb7 1/1 Running 0 60m
test-76846b5956-8ns4z 1/1 Running 0 60m
test2-78c4694588-87b9r 1/1 Running 0 29m
[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound pv0001 5Gi RWX 60m
my-pvc2 Bound pv0004 30Gi RWX 29m
my-pvc3 Pending 17s
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h35m
pv0001 5Gi RWX Retain Bound default/my-pvc 52m
pv0002 15Gi RWX Retain Available 52m
pv0003 25Gi RWX Retain Available 52m
pv0004 30Gi RWX Retain Bound default/my-pvc2 52m
[root@k8s-master ~]# cat test-pvc2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auto-pv
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: my-pvc3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc3
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 40Gi
[root@k8s-master ~]# kubectl apply -f test-pvc2.yaml
deployment.apps/auto-pv unchanged
The PersistentVolumeClaim "my-pvc3" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    ... // 2 identical fields
    Resources:  core.ResourceRequirements{Requests: core.ResourceList{s"storage": {i: resource.int64Amount{value: 42949672960}, Format: "BinarySI"}}},
    VolumeName: "",
-   StorageClassName: &"managed-nfs-storage",
+   StorageClassName: nil,
    VolumeMode: &"Filesystem",
    DataSource: nil,
  }
[root@k8s-master ~]# kubectl delete -f test-pvc2.yaml
deployment.apps "auto-pv" deleted
persistentvolumeclaim "my-pvc3" deleted
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h26m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 11m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h3m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h5m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h3m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 65m
test-76846b5956-6lkb7 1/1 Running 0 65m
test-76846b5956-8ns4z 1/1 Running 0 65m
test2-78c4694588-87b9r 1/1 Running 0 34m
[root@k8s-master ~]# kubectl apply -f test-pvc2.yaml
deployment.apps/auto-pv created
persistentvolumeclaim/my-pvc3 created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
auto-pv-6969ddf4bc-69ndt 1/1 Running 0 16s
my-pod2 1/1 Running 3 5h26m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 12m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h3m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h5m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h3m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 65m
test-76846b5956-6lkb7 1/1 Running 0 65m
test-76846b5956-8ns4z 1/1 Running 0 65m
test2-78c4694588-87b9r 1/1 Running 0 34m
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h40m
pv0001 5Gi RWX Retain Bound default/my-pvc 57m
pv0002 15Gi RWX Retain Available 57m
pv0003 25Gi RWX Retain Available 57m
pv0004 30Gi RWX Retain Bound default/my-pvc2 57m
pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 40Gi RWX Delete Bound default/my-pvc3 managed-nfs-storage 32s
[root@k8s-node2 kubernetes]# ls
a.txt index.html pv0002 pv0004
default-my-pvc3-pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 pv0001 pv0003
[root@k8s-master ~]# kubectl delete -f test-pvc2.yaml
deployment.apps "auto-pv" deleted
persistentvolumeclaim "my-pvc3" deleted
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h41m
pv0001 5Gi RWX Retain Bound default/my-pvc 58m
pv0002 15Gi RWX Retain Available 58m
pv0003 25Gi RWX Retain Available 58m
pv0004 30Gi RWX Retain Bound default/my-pvc2 58m
[root@k8s-node2 kubernetes]# ls
archived-default-my-pvc3-pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 pv0001 pv0004
a.txt pv0002
index.html pv0003
[root@k8s-master ~]# cat nfs-client/class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "true"
Specify the storage class name when creating the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc3
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 40Gi
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
6. Stateful application deployment: the StatefulSet controller
StatefulSet:
- Deploys stateful applications
- Gives each Pod its own lifecycle while preserving startup order and uniqueness
- 1. Stable, unique network identifiers and persistent storage
- 2. Ordered, graceful deployment, scaling, deletion, and termination
- 3. Ordered rolling updates
Use cases: distributed applications, database clusters
Stable network IDs
A Headless Service (which differs from a regular Service only in that spec.clusterIP is set to None) maintains the Pods' network identity.
The StatefulSet also carries a serviceName: "nginx" field telling the controller which Headless Service to use.
DNS names resolve as: <statefulsetName-index>.<service-name>.<namespace-name>.svc.cluster.local
Stable storage
A StatefulSet's volumes are created through volumeClaimTemplates, the volume claim template: when a StatefulSet creates a PersistentVolume from the template, it also allocates and creates a numbered PVC for each Pod.
Difference between a StatefulSet and a Deployment: identity!
The three elements of identity:
- Domain name
- Hostname
- Storage (PVC)
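The DNS pattern can be spelled out with plain shell; the names match the statefulset.yaml example in this section, and cluster.local is assumed to be the default cluster domain:

```shell
# Each StatefulSet Pod gets a stable, ordinal-based DNS name:
#   <statefulset-name>-<ordinal>.<service-name>.<namespace>.svc.<cluster-domain>
sts=web; svc=nginx; ns=default; domain=cluster.local
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}.${ns}.svc.${domain}"
done
# prints web-0.nginx.default.svc.cluster.local, then web-1..., then web-2...
```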
[root@k8s-master ~]# cat statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master ~]# kubectl apply -f statefulset.yaml
service/nginx unchanged
statefulset.apps/web created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h49m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 35m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h26m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h28m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h26m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 88m
test-76846b5956-6lkb7 1/1 Running 0 88m
test-76846b5956-8ns4z 1/1 Running 0 88m
test2-78c4694588-87b9r 1/1 Running 0 57m
web-0 0/1 ContainerCreating 0 10s
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h50m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 35m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h26m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h28m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h26m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 88m
test-76846b5956-6lkb7 1/1 Running 0 88m
test-76846b5956-8ns4z 1/1 Running 0 88m
test2-78c4694588-87b9r 1/1 Running 0 58m
web-0 1/1 Running 0 29s
web-1 0/1 ContainerCreating 0 9s
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h50m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 35m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h27m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h29m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h27m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 89m
test-76846b5956-6lkb7 1/1 Running 0 89m
test-76846b5956-8ns4z 1/1 Running 0 89m
test2-78c4694588-87b9r 1/1 Running 0 58m
web-0 1/1 Running 0 49s
web-1 1/1 Running 0 29s
web-2 0/1 ContainerCreating 0 9s
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 5h5m
pv0001 5Gi RWX Retain Bound default/my-pvc 82m
pv0002 15Gi RWX Retain Available 82m
pv0003 25Gi RWX Retain Available 82m
pv0004 30Gi RWX Retain Bound default/my-pvc2 82m
pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage 2m39s
pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d 1Gi RWO Delete Bound default/www-web-2 managed-nfs-storage 2m19s
pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage 2m59s
[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound pv0001 5Gi RWX 91m
my-pvc2 Bound pv0004 30Gi RWX 60m
www-web-0 Bound pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 1Gi RWO managed-nfs-storage 3m2s
www-web-1 Bound pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb 1Gi RWO managed-nfs-storage 2m42s
www-web-2 Bound pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d 1Gi RWO managed-nfs-storage 2m22s
[root@k8s-master ~]# kubectl describe pod web-0
Name: web-0
Namespace: default
Priority: 0
Node: k8s-node1/10.0.0.62
Start Time: Tue, 21 Dec 2021 10:58:24 +0800
Labels:       app=nginx
              controller-revision-hash=web-67bb74dc
              statefulset.kubernetes.io/pod-name=web-0
Annotations:  cni.projectcalico.org/podIP: 10.244.36.109/32
              cni.projectcalico.org/podIPs: 10.244.36.109/32
Status: Running
IP: 10.244.36.109
IPs:
  IP:  10.244.36.109
Controlled By: StatefulSet/web
Containers:
  nginx:
    Container ID:   docker://678e95dad7ae4dcea2c14178d1afd1e7021962dbbbbbe05059a3b49f7f10c3c6
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 21 Dec 2021 10:58:41 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  www:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  www-web-0
    ReadOnly:   false
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From                Message
  ----     ------            ----                   ----                -------
  Warning  FailedScheduling  5m29s (x2 over 5m29s)  default-scheduler   0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         5m26s                  default-scheduler   Successfully assigned default/web-0 to k8s-node1
  Normal   Pulling           5m25s                  kubelet, k8s-node1  Pulling image "nginx"
  Normal   Pulled            5m10s                  kubelet, k8s-node1  Successfully pulled image "nginx" in 15.568844274s
  Normal   Created           5m10s                  kubelet, k8s-node1  Created container nginx
  Normal   Started           5m9s                   kubelet, k8s-node1  Started container nginx
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 5h9m
pv0001 5Gi RWX Retain Bound default/my-pvc 87m
pv0002 15Gi RWX Retain Available 87m
pv0003 25Gi RWX Retain Available 87m
pv0004 30Gi RWX Retain Bound default/my-pvc2 87m
pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage 7m4s
pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d 1Gi RWO Delete Bound default/www-web-2 managed-nfs-storage 6m44s
pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage 7m24s
[root@k8s-node2 kubernetes]# ls
archived-default-my-pvc3-pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb pv0001 pv0004
a.txt default-www-web-2-pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d pv0002
default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59  index.html
[root@k8s-node2 kubernetes]# cd default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59/
[root@k8s-node2 default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59]# ls
[root@k8s-node2 default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59]# vi index.html
[root@k8s-node2 default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59]# cd ../default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb/
[root@k8s-node2 default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb]# vi index.html
[root@k8s-node2 default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb]#
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-pod2 1/1 Running 3 6h1m 10.244.169.164 k8s-node2 <none> <none>
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 46m 10.244.36.107 k8s-node1 <none> <none>
web-0 1/1 Running 0 11m 10.244.36.109 k8s-node1 <none> <none>
web-1 1/1 Running 0 11m 10.244.36.106 k8s-node1 <none> <none>
web-2 1/1 Running 0 11m 10.244.36.105 k8s-node1 <none> <none>
[root@k8s-master ~]# curl 10.244.36.109
00000
[root@k8s-master ~]# curl 10.244.36.106
11111
[root@k8s-master ~]# kubectl run -it --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # ls
bin dev etc home proc root sys tmp usr var
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.244.36.109 web-0.nginx.default.svc.cluster.local
Address 2: 10.244.36.106 10-244-36-106.my-service.default.svc.cluster.local
Address 3: 10.244.36.105 web-2.nginx.default.svc.cluster.local
/ # nslookup web
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web
Address 1: 10.96.132.243 web.default.svc.cluster.local
web-0 -> www-web-0(pvc) -> pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59(pv) -> default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59
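The chain above is pure naming convention: the StatefulSet name plus an ordinal gives the Pod name, the volumeClaimTemplate name plus the Pod name gives the PVC name, and the Pod name plus the headless Service gives the stable DNS record. A small Python sketch (the helper is illustrative, not a Kubernetes API) that derives each stable identity from the ordinal:

```python
# Sketch of the StatefulSet naming convention; stateful_identity is a
# hypothetical helper, not part of any Kubernetes client library.
def stateful_identity(sts_name, svc_name, vct_name, namespace, ordinal):
    pod = f"{sts_name}-{ordinal}"                         # e.g. web-0
    pvc = f"{vct_name}-{pod}"                             # e.g. www-web-0
    dns = f"{pod}.{svc_name}.{namespace}.svc.cluster.local"
    return pod, pvc, dns

pod, pvc, dns = stateful_identity("web", "nginx", "www", "default", 0)
print(pod)  # web-0
print(pvc)  # www-web-0
print(dns)  # web-0.nginx.default.svc.cluster.local
```

Because the names are deterministic, web-0 gets the same PVC (and therefore the same PV data) back after a restart or reschedule.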
7. Storing Application Configuration: ConfigMap
Once a ConfigMap is created, its data is actually stored in etcd; Pods then reference that data when they are created.
Use case: application configuration
A Pod can consume ConfigMap data in two ways:
- environment-variable injection
- volume mount
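From the application's point of view, the two modes simply surface as environment variables and files. A minimal Python sketch simulating both (the variable name and file contents match the ConfigMap example below; the temp directory stands in for the /config mount):

```python
import os
import tempfile

# 1) Environment-variable injection: in a Pod this is set via configMapKeyRef;
#    here we set it directly to simulate the injected value.
os.environ["ABCD"] = "123"
print(os.environ["ABCD"])  # 123

# 2) Volume mount: each referenced key becomes a file under the mountPath.
with tempfile.TemporaryDirectory() as mount:  # stands in for /config
    path = os.path.join(mount, "redis.properties")
    with open(path, "w") as f:
        f.write("port: 6379\nhost: 10.0.0.63\n")
    with open(path) as f:
        print(f.read().splitlines()[0])  # port: 6379
```

Note one practical difference: injected variables are fixed at container start, while mounted files are updated in place when the ConfigMap changes (with some propagation delay).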
[root@k8s-master ~]# vi configmap.yaml
[root@k8s-master ~]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo
data:
  abc: "123"
  cde: "456"
  redis.properties: |
    port: 6379
    host: 10.0.0.63
[root@k8s-master ~]# kubectl apply -f configmap.yaml
configmap/configmap-demo created
[root@k8s-master ~]# kubectl get configmap
NAME DATA AGE
configmap-demo 3 17s
[root@k8s-master ~]# kubectl get cm
NAME DATA AGE
configmap-demo 3 24s
[root@k8s-master ~]# vi configmap-pod.yaml
[root@k8s-master ~]# cat configmap-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: nginx
    env:
    - name: NAME
      value: "adu"
    - name: ABCD
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: abc
    - name: CDEF
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: cde
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: configmap-demo
      items:
      - key: "redis.properties"
        path: "redis.properties"
[root@k8s-master ~]# kubectl apply -f configmap-pod.yaml
pod/configmap-demo-pod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 0/1 ContainerCreating 0 7s
my-pod2 1/1 Running 3 7h3m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 108m
pod-taint 1/1 Running 5 6d6h
sh 1/1 Running 1 59m
test-76846b5956-gftn9 1/1 Running 0 55m
test-76846b5956-r7s9k 1/1 Running 0 54m
test-76846b5956-trpbn 1/1 Running 0 55m
test2-78c4694588-87b9r 1/1 Running 0 131m
web-0 1/1 Running 0 73m
web-1 1/1 Running 0 73m
web-2 1/1 Running 0 72m
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 1/1 Running 0 66s
my-pod2 1/1 Running 3 7h4m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 109m
pod-taint 1/1 Running 5 6d6h
sh 1/1 Running 1 59m
test-76846b5956-gftn9 1/1 Running 0 56m
test-76846b5956-r7s9k 1/1 Running 0 55m
test-76846b5956-trpbn 1/1 Running 0 56m
test2-78c4694588-87b9r 1/1 Running 0 132m
web-0 1/1 Running 0 74m
web-1 1/1 Running 0 74m
web-2 1/1 Running 0 73m
[root@k8s-master ~]# kubectl exec -it configmap-demo-pod -- bash
root@configmap-demo-pod:/# echo $NAME
adu
root@configmap-demo-pod:/# echo $ABCD
123
root@configmap-demo-pod:/# echo $CDEF
456
root@configmap-demo-pod:/# echo "echo \$NAME" > a.sh
root@configmap-demo-pod:/# bash a.sh
adu
root@configmap-demo-pod:/# cd /config/
root@configmap-demo-pod:/config# ls
redis.properties
root@configmap-demo-pod:/config# cat redis.properties
port: 6379
host: 10.0.0.63
root@configmap-demo-pod:/config# pwd
/config
root@configmap-demo-pod:/config#
Inside the pod, verify that the variables were injected and the volume mounted:
# echo $ABCD
# ls /config
8. Storing Sensitive Data: Secret
Similar to ConfigMap, except that a Secret is intended for sensitive data, and all values must be base64-encoded.
Use case: credentials
kubectl create secret supports three data types:
- docker-registry: stores image registry authentication info
- generic: created from a file, directory, or literal value, e.g. usernames and passwords
- tls: stores certificates, e.g. for HTTPS
[root@k8s-master ~]# kubectl create secret --help
Create a secret using specified subcommand.

Available Commands:
  docker-registry Create a secret for use with a Docker registry
  generic         Create a secret from a local file, directory or literal value
  tls             Create a TLS secret

Usage:
  kubectl create secret [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).
Pods consume Secret data the same way they consume ConfigMap data.
Base64-encode the username and password:
[root@k8s-master ~]# echo -n 'admin' | base64
YWRtaW4=
[root@k8s-master ~]# echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
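The same encoding can be checked outside the shell. A quick Python equivalent of the two commands above, which also makes the point that base64 is an encoding, not encryption:

```python
import base64

# Equivalent of: echo -n 'admin' | base64
assert base64.b64encode(b"admin").decode() == "YWRtaW4="

# Equivalent of: echo -n '1f2d1e2e67df' | base64
assert base64.b64encode(b"1f2d1e2e67df").decode() == "MWYyZDFlMmU2N2Rm"

# Anyone with the encoded value can decode it -- protect Secrets with
# RBAC and (if needed) etcd encryption at rest, not with base64.
print(base64.b64decode("YWRtaW4=").decode())  # admin
```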
[root@k8s-master ~]# vi secret.yaml
[root@k8s-master ~]# cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
[root@k8s-master ~]# kubectl apply -f secret.yaml
secret/db-user-pass created
[root@k8s-master ~]# kubectl get secret
NAME TYPE DATA AGE
db-user-pass Opaque 2 12s
default-token-8grtj kubernetes.io/service-account-token 3 29d
nfs-client-provisioner-token-s26sh kubernetes.io/service-account-token 3 151m
[root@k8s-master ~]# vi secret-pod.yaml
[root@k8s-master ~]# cat secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
  - name: demo
    image: nginx
    env:
    - name: USER
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: password
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: db-user-pass
      items:
      - key: username
        path: my-username
[root@k8s-master ~]# kubectl apply -f secret-pod.yaml
pod/secret-demo-pod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 1/1 Running 0 56m
my-pod2 1/1 Running 3 7h59m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 165m
pod-taint 1/1 Running 5 6d6h
secret-demo-pod 0/1 ContainerCreating 0 10s
sh 1/1 Running 1 115m
test-76846b5956-gftn9 1/1 Running 0 111m
test-76846b5956-r7s9k 1/1 Running 0 111m
test-76846b5956-trpbn 1/1 Running 0 112m
test2-78c4694588-87b9r 1/1 Running 0 3h7m
web-0 1/1 Running 0 130m
web-1 1/1 Running 0 129m
web-2 1/1 Running 0 129m
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 1/1 Running 0 57m
my-pod2 1/1 Running 3 8h
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 165m
pod-taint 1/1 Running 5 6d6h
secret-demo-pod 1/1 Running 0 24s
sh 1/1 Running 1 116m
test-76846b5956-gftn9 1/1 Running 0 112m
test-76846b5956-r7s9k 1/1 Running 0 111m
test-76846b5956-trpbn 1/1 Running 0 112m
test2-78c4694588-87b9r 1/1 Running 0 3h8m
web-0 1/1 Running 0 130m
web-1 1/1 Running 0 130m
web-2 1/1 Running 0 129m
[root@k8s-master ~]# kubectl exec -it secret-demo-pod -- bash
root@secret-demo-pod:/# ls
bin config docker-entrypoint.d etc lib media opt root sbin sys usr
boot dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
root@secret-demo-pod:/# cd config/
root@secret-demo-pod:/config# ls
my-username
root@secret-demo-pod:/config# cat my-username
adminroot@secret-demo-pod:/config#
Homework:
1. Create a secret and two pods: pod1 mounts the secret at /secret; pod2 references the secret through an environment variable named ABC
- secret name: my-secret
- pod1 name: pod-volume-secret
- pod2 name: pod-env-secret
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-secret
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/secret"
  volumes:
  - name: foo
    secret:
      secretName: my-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-env-secret
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: ABC
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
2. Create a PV, then create a pod that uses it
- capacity: 5Gi
- access mode: ReadWriteOnce
(The answer below satisfies this with a PVC against the managed-nfs-storage StorageClass, which provisions the PV dynamically.)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: "/mnt"
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim
3. Create a pod with a data volume; do not use a persistent volume
- volume source: emptyDir or hostPath
- mount path: /data
apiVersion: v1
kind: Pod
metadata:
  name: no-persistent-redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache
      mountPath: /data
  volumes:
  - name: cache
    emptyDir: {}
4. Sort the PVs by name and by capacity, and save the output to /opt/pv
# sorted by name
kubectl get pv --sort-by=.metadata.name > /opt/pv
# sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage > /opt/pv
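The capacity field is a Kubernetes quantity string ("5Gi", "15Gi", ...), so ordering it is not a plain string sort. If you ever need to order such values yourself, a small Python sketch (a hypothetical parser covering only a few binary suffixes, not the full Kubernetes quantity grammar) shows why numeric comparison is needed:

```python
# Hypothetical quantity parser; handles only whole numbers with binary suffixes.
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def to_bytes(quantity):
    for suffix, factor in UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain bytes

caps = ["25Gi", "5Gi", "30Gi", "15Gi", "1Gi"]
print(sorted(caps))                # lexical: ['15Gi', '1Gi', '25Gi', '30Gi', '5Gi']
print(sorted(caps, key=to_bytes))  # numeric: ['1Gi', '5Gi', '15Gi', '25Gi', '30Gi']
```

A lexical sort puts 15Gi before 1Gi; converting to bytes first gives the order you actually want.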