Kubernetes Storage

  • Kubernetes storage
    • StorageClass: creating PVs automatically
      • StatefulSet

Kubernetes storage

Docker storage vs. Kubernetes storage
A Docker container's own layer can provide storage: data lives in the writable layer (copy-on-write).
Docker's data-persistence solutions:
data volume ----> 1. bind mount  2. Docker managed volume
The two are not fundamentally different; what differs is whether the relevant file or directory on the Docker host has to be prepared in advance.
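For reference, the two map onto docker run roughly like this (a hedged sketch; the paths are illustrative):

# bind mount: the host path must exist (or be prepared) in advance
docker run -v /host/data:/container/data busybox

# docker managed volume: Docker creates the host-side directory itself under /var/lib/docker/volumes
docker run -v /container/data busybox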
Volume
As in Docker: mount a file or directory from the host into the Pod.
emptyDir
An example:

[root@master ~]# vim emptyDir.yaml
kind: Pod
apiVersion: v1
metadata:
  name: producer-consumer
spec:
  containers:
  - name: producer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /producer_dir  # path inside the container
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /producer_dir/hello.txt; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /consumer_dir
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}

Summary: in the YAML above, volumes is the Kubernetes storage definition. The volumeMounts inside each container use the storage defined under volumes, so the directory that volumes defines on the Docker host is mounted into the producer container at /producer_dir and into the consumer container at /consumer_dir. It follows that the consumer container's /consumer_dir directory should also contain a hello.txt file.
//Verify by checking the consumer container's logs

[root@master yaml]# kubectl logs producer-consumer consumer
hello world

//Which Node is the Pod running on?

[root@master yaml]# kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
producer-consumer   2/2     Running   0          102s   10.244.1.2   node01   <none>           <none>

//On the corresponding Node, inspect the container's details (the Mounts field)
On that node, run docker ps to find the container name, then:

[root@node01 ~]# docker inspect k8s_consumer_producer-consumer_default_a7c  (container name)
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/a7c2c37b-f1cf-4777-a37b-d3",
        "Destination": "/consumer_dir",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },

PS: the container's Mounts field shows that this is equivalent to starting the container with:
docker run -v /producer_dir busybox
When to use emptyDir: if the Pod is deleted, the data is deleted with it, so it is not persistent. It is a temporary volume for containers inside one Pod that need to share data.
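A hedged variant: emptyDir can also be backed by memory (tmpfs), which is faster but counts against the Pod's memory and is likewise wiped when the Pod goes away:

volumes:
- name: shared-volume
  emptyDir:
    medium: Memory      # back the volume with tmpfs instead of node disk
    sizeLimit: 100Mi    # optional cap on the volume's size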

HostPath
Compared with emptyDir, hostPath is equivalent to starting the container with:
docker run -v /host/path:/container/path
//No new YAML file here; just change the volumes field of emptyDir.yaml to hostPath.

[root@master yaml]# mkdir /data/hostPath -p
kind: Pod
apiVersion: v1
metadata:
  name: producer-consumer
spec:
  containers:
  - name: producer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /producer_dir  # path inside the container
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /producer_dir/hello.txt; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /consumer_dir
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt; sleep 30000
  volumes:
  - name: shared-volume
    hostPath:
      path: "/data/hostPath"

Summary: compared with emptyDir, hostPath is genuinely persistent, but it couples the Pod to the host, so most workloads avoid it; it is normally used only for things tied to Docker or the Kubernetes cluster itself.
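One hedged refinement before moving on: hostPath accepts an optional type field, so the directory can be created on the node automatically instead of with mkdir:

volumes:
- name: shared-volume
  hostPath:
    path: "/data/hostPath"
    type: DirectoryOrCreate   # create the directory on the node if it does not exist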

PV and PVC
PV: PersistentVolume (persistent, durable)
A cluster-external storage system made available to Kubernetes, usually pre-provisioned storage space (e.g. a directory in a file system).
PVC: PersistentVolumeClaim (a claim, an application for space)
When an application needs persistence, it can claim space from a PV directly.

Creating a PV backed by an NFS service

Hostname   IP address
master     192.168.1.20
node01     192.168.1.21
node02     192.168.1.22

//Install the nfs-utils package and the rpcbind service on all three nodes.


[root@master ~]# yum -y install nfs-utils rpcbind
[root@node01 ~]# yum -y install nfs-utils rpcbind
[root@node02 ~]# yum -y install nfs-utils rpcbind

//The NFS service will be deployed on the master node, so plan the shared directory there in advance.

[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
[root@node02 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
[root@node01 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
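As an optional sanity check (the /mnt mount point is just an example), the export can also be mounted by hand from a node and unmounted again:

[root@node01 ~]# mount -t nfs 192.168.1.20:/nfsdata /mnt
[root@node01 ~]# umount /mnt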

//Create the pv.yaml file

[root@master ~]# mkdir yaml
[root@master ~]# mkdir -p /nfsdata/pv1
[root@master ~]# cd yaml
[root@master yaml]# vim pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv1
spec:
  capacity:                    # the PV's capacity
    storage: 1Gi
  accessModes:                 # access modes
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.20
[root@master yaml]# kubectl apply -f pv.yaml
[root@master yaml]# kubectl get pv

PS: access modes supported by a PV:
ReadWriteOnce: the PV can be mounted read-write by a single node.
ReadOnlyMany: the PV can be mounted read-only by many nodes.
ReadWriteMany: the PV can be mounted read-write by many nodes.

persistentVolumeReclaimPolicy: the PV's reclaim policy:
Recycle: scrubs the data and reclaims the PV automatically.
Retain: requires manual cleanup and reclamation.
Delete: for cloud storage backends, which delete the backing volume.
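The policy can also be switched on a live PV without editing and re-applying the file; a hedged one-liner:

[root@master yaml]# kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'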

//Create a PVC that claims space from the PV above. Note that PV/PVC binding is decided jointly by the storageClassName and accessModes fields.

[root@master yaml]# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@master yaml]# kubectl apply -f pvc.yaml

kubectl get pvc shows that the PV and PVC are now bound:

[root@master yaml]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound    pv1      1Gi        RWO            nfs            7s

Summary:
1. Once a PV in the system is bound, it cannot be bound by any other PVC.

2. If several PVs can satisfy a PVC, the system automatically picks one whose size fits the claim, so as not to waste storage space.
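To see the matching rules at work, a hedged counter-example (pvc-nomatch and other-class are hypothetical names): a PVC whose storageClassName matches no PV simply stays Pending:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nomatch              # hypothetical, for illustration only
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: other-class  # no PV carries this class, so the claim stays Pending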

//Create a Pod that uses the PVC above

[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod1
spec:
  containers:
  - name: pod1
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - name: mydata
      mountPath: "/data"
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: pvc1
[root@master yaml]# kubectl apply -f pod.yaml
[root@master yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          43s
Note: the directory named by path: under the nfs field in pv.yaml must be created in advance, otherwise the Pod fails to start.

//Verify that /nfsdata/pv1 and the Pod's /data directory hold the same content

[root@master yaml]# cd /nfsdata/pv1/
[root@master pv1]# echo "hello" > test.txt
[root@master pv1]# kubectl exec pod1 cat /data/test.txt
hello

Reclaiming PV space
When the reclaim policy is Recycle:

[root@master ~]# kubectl get pv,pvc
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
persistentvolume/pv1   1Gi        RWO            Recycle          Bound    default/pvc1   nfs                     20m

NAME                         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc1   Bound    pv1      1Gi        RWO            nfs            17m
// Check the data stored on the PV on the Docker host
[root@master ~]# ls /nfsdata/pv1/
test.txt

//Delete the Pod and the PVC

[root@master ~]# kubectl delete pod pod1
pod "pod1" deleted
[root@master ~]# kubectl delete pvc pvc1
persistentvolumeclaim "pvc1" deleted

//Watch the PV go from Released to Available

[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM          STORAGECLASS   REASON   AGE
pv1    1Gi        RWO            Recycle          Released   default/pvc1   nfs                     25m
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    1Gi        RWO            Recycle          Available           nfs                     25m
//Verify: the data has indeed been deleted
[root@master ~]# ls /nfsdata/pv1/

PS: to release the space, Kubernetes actually spins up a new Pod, and that Pod performs the data deletion.
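While a recycle is in progress you can usually catch that helper with kubectl get pod; with the legacy recycler its name is typically derived from the PV (hedged, version-dependent behavior):

[root@master ~]# kubectl get pod
# expect a short-lived helper pod named something like recycler-for-pv1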

When the reclaim policy is Retain:

  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain    # change the reclaim policy
  storageClassName: nfs
  nfs:

//Re-create the PV, PVC, and Pod resources

[root@master yaml]# kubectl apply -f pv.yaml
persistentvolume/pv1 created
[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc1 created
[root@master yaml]# kubectl apply -f pod.yaml
pod/pod1 created

Then create some data inside the Pod, delete the PVC and the Pod, and check whether the data still exists in the PV's directory.

[root@master yaml]# cd /nfsdata/pv1/
[root@master pv1]# echo "hi" > test.txt
[root@master pv1]# kubectl exec pod1 cat /data/test.txt
hi
[root@master pv1]# kubectl delete pod pod1
pod "pod1" deleted
[root@master pv1]# kubectl delete pvc pvc1
persistentvolumeclaim "pvc1" deleted
[root@master pv1]# ls
test.txt
The data is still there.

Putting PV and PVC to work
Now deploy a MySQL service and persist MySQL's data.

1. Create the PV and PVC

[root@master pv]# vim pvmysql.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.20
[root@master pv]# vim pvcmysql.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

Apply and check:

[root@master yaml]# kubectl apply -f pvmysql.yaml
persistentvolume/mysql-pv created
[root@master yaml]# kubectl apply -f pvcmysql.yaml
persistentvolumeclaim/mysql-pvc created
[root@master yaml]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   REASON   AGE
mysql-pv   1Gi        ROX            Recycle          Bound      default/mysql-pvc   nfs                     84s
[root@master yaml]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pvc   Bound    mysql-pv   1Gi        ROX            nfs            80s

2. Deploy MySQL

[root@master pv]# vim mysql.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
spec:
  template:
    metadata:
      labels:
        test: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: mysql-test
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-test
        persistentVolumeClaim:
          claimName: mysql-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: NodePort
  selector:
    test: mysql
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 31306

Check:

[root@master yaml]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
mysql-5d6667c5b-f4ttx   1/1     Running   0          11s

Add some data in the MySQL database

[root@master pv]# kubectl exec -it mysql-5d6667c5b-bw4cp bash
root@mysql-5d6667c5b-bw4cp:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW DATABASES;            //list the current databases
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.04 sec)

mysql> CREATE DATABASE TEST;               //create the TEST database
Query OK, 1 row affected (0.02 sec)

mysql> USE TEST;                           //switch to the TEST database
Database changed
mysql> SHOW TABLES;                        //list the tables in TEST
Empty set (0.00 sec)

mysql> CREATE TABLE my_id(id int(4));      //create the my_id table
Query OK, 0 rows affected (0.03 sec)

mysql> INSERT my_id values (9527);         //insert a row into my_id
Query OK, 1 row affected (0.02 sec)

mysql> SELECT * FROM my_id;                //show every row in my_id
+------+
| id   |
+------+
| 9527 |
+------+
1 row in set (0.00 sec)

mysql> exit

Simulating a MySQL failover
First find the node the MySQL Pod is running on, then suspend that node; the node's containers are then rescheduled onto a node that is still healthy.
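An alternative to powering the node off (hedged; flag names as of k8s 1.15) is to evict the Pods with a drain, which triggers the same rescheduling:

[root@master pv]# kubectl drain node02 --ignore-daemonsets --delete-local-data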

[root@master pv]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   35d   v1.15.0
node01   Ready      <none>   35d   v1.15.0
node02   NotReady   <none>   35d   v1.15.0
The node has been shut down.
[root@master pv]# kubectl get pod -o wide
NAME                           READY   STATUS        RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
mysql-5d6667c5b-5pnm6          1/1     Running       0          13m    10.244.1.6   node01   <none>           <none>
mysql-5d6667c5b-bw4cp          1/1     Terminating   0          52m    10.244.2.4   node02   <none>           <none>

Result:
The powered-off node shows as stopped, and a new container is created on a healthy node and runs normally.
Once the new Pod is up, enter it and verify the data still exists.

[root@master pv]# kubectl exec -it mysql-5d6667c5b-5pnm6 bash
root@mysql-5d6667c5b-5pnm6:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| TEST               |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.04 sec)

mysql> use TEST;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+----------------+
| Tables_in_TEST |
+----------------+
| my_id          |
+----------------+
1 row in set (0.00 sec)

mysql> select * from my_id;
+------+
| id   |
+------+
| 9527 |
+------+
1 row in set (0.01 sec)

mysql> exit
The data is there.

StorageClass: creating PVs automatically

Storage
Simply put: a StorageClass can create PVs for us automatically.
Provisioner: the provisioner (the storage provider).

Create the shared directory
[root@master ~]# mkdir /nfsdata

1. Enable NFS

[root@master ~]# yum -y install nfs-utils rpcbind
[root@node01 ~]# yum -y install nfs-utils rpcbind
[root@node02 ~]# yum -y install nfs-utils rpcbind
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
[root@node01 ~]# systemctl start rpcbind
[root@node01 ~]# systemctl enable rpcbind
[root@node01 ~]# systemctl start nfs-server
[root@node01 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@node01 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
[root@node02 ~]# systemctl start rpcbind
[root@node02 ~]# systemctl enable rpcbind
[root@node02 ~]# systemctl start nfs-server
[root@node02 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@node02 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *

2. Create a namespace

[root@master yaml]# vim ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bdqn

Apply and check:

[root@master yaml]# kubectl apply -f ns.yaml
namespace/bdqn created
[root@master yaml]# kubectl get ns
NAME              STATUS   AGE
bdqn              Active   11s

3. Grant RBAC permissions (first mkdir a directory for the YAML files)
PS: RBAC stands for Role-Based Access Control.

[root@master yaml]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: bdqn
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: bdqn    # must match the namespace the ServiceAccount lives in
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Apply:

[root@master yaml]# kubectl apply -f rbac.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created

4. Create nfs-deployment.yaml

[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: bdqn
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn-test
        - name: NFS_SERVER
          value: 192.168.1.20
        - name: NFS_PATH
          value: /nfsdata
      volumes:                # back the /persistentvolumes mount with the NFS export
      - name: nfs-client-root
        nfs:
          server: 192.168.1.20
          path: /nfsdata

Apply and check:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
[root@master yaml]# kubectl get deployments -n bdqn
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           54s

5. Create the StorageClass resource

[root@master yaml]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass    # StorageClass is cluster-scoped, so it takes no namespace
provisioner: bdqn-test
reclaimPolicy: Retain

Apply and check:

[root@master yaml]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/storageclass created
[root@master yaml]# kubectl get storageclasses -n bdqn
NAME           PROVISIONER   AGE
storageclass   bdqn-test     21s
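Optionally (a hedged extra step, not required by anything below), the class can be marked as the cluster default, so PVCs that omit storageClassName are also served by this provisioner:

[root@master yaml]# kubectl patch storageclass storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'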

6. Create a PVC to verify

[root@master yaml]# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: bdqn
spec:
  storageClassName: storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi

Apply:

[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/test-pvc created

Check:

[root@master yaml]# kubectl get pvc -n bdqn
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-b68ee3fa-8779-4aaa-90cf-eea914366441   200Mi      RWO            storageclass   23s
PS: this shows the PVC has bound to the PV that the StorageClass created automatically.
[root@master yaml]# ls /nfsdata/
bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441

Test:

[root@master nfsdata]# cd bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441/
[root@master bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441]# echo hello > index.html
[root@master nfsdata]# kubectl exec -it -n bdqn nfs-client-provisioner-856d966889-s7p2j sh
~ # cd /persistentvolumes/bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441/
/persistentvolumes/bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441 # cat index.html
hello

7. Create a Pod to test

[root@master yaml]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: bdqn
spec:
  containers:
  - name: test-pod
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 3000
    volumeMounts:
    - name: nfs-pv
      mountPath: /test
  volumes:
  - name: nfs-pv
    persistentVolumeClaim:
      claimName: test-pvc

Apply and check:

[root@master yaml]# kubectl apply -f pod.yaml
pod/test-pod created
[root@master yaml]# kubectl get pod -n bdqn
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-856d966889-s7p2j   1/1     Running   0          28m
test-pod                                  1/1     Running   0          25s

Test:

[root@master yaml]# ls /nfsdata/
bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441
[root@master yaml]# echo 123456 > /nfsdata/bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441/test.txt
[root@master yaml]# kubectl exec -n bdqn test-pod cat /test/test.txt
123456

How would the test YAML be written with a Deployment, using an image from a private registry?

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: deployment-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: deployment-svc
        image: 192.168.229.187:5000/httpd:v1
        volumeMounts:
        - name: nfs-pv
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: test-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: test-svc
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
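A quick hedged check once the Deployment is up (using node01's address from the table above; any node works for a NodePort): whatever is written into the PVC's backing directory under /nfsdata should be served by httpd:

[root@master yaml]# curl 192.168.1.21:30000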

StatefulSet

Overview
RC, Deployment, and DaemonSet are all aimed at stateless services: the IPs, names, and start/stop order of the Pods they manage are random. So what is a StatefulSet? As the name suggests, it is a stateful set: it manages stateful services such as MySQL or MongoDB clusters.
A StatefulSet is essentially a variant of Deployment, GA since v1.9, built to solve the stateful-service problem. The Pods it manages have fixed names and a fixed start/stop order; in a StatefulSet the Pod name is the network identity (hostname), and shared storage must also be used.
A Deployment is paired with a Service; a StatefulSet is paired with a headless Service. A headless Service is a Service without a ClusterIP: resolving its name returns the endpoint list of all the Pods behind it.
On top of the headless Service, the StatefulSet additionally creates a DNS domain name for every Pod replica it controls, in the format:

$(podname).(headless service name)
FQDN: $(podname).(headless service name).namespace.svc.cluster.local

Running a stateful service
An example:

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: myweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myweb
        image: nginx

Apply:

[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset created

Check:

[root@master yaml]# kubectl get pod -w
NAME            READY   STATUS              RESTARTS   AGE
statefulset-0   0/1     ContainerCreating   0          13s
statefulset-0   1/1     Running             0          36s
statefulset-1   0/1     Pending             0          0s
statefulset-1   0/1     Pending             0          0s
statefulset-1   0/1     ContainerCreating   0          0s
statefulset-1   1/1     Running             0          47s
statefulset-2   0/1     Pending             0          0s
statefulset-2   0/1     Pending             0          0s
statefulset-2   0/1     ContainerCreating   0          0s
statefulset-2   1/1     Running             0          17s
[root@master yaml]# kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
statefulset-0   1/1     Running   0          2m23s
statefulset-1   1/1     Running   0          107s
statefulset-2   1/1     Running   0          60s
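With the headless Service in place, each replica is resolvable under the DNS format shown earlier; a hedged spot-check from a throwaway Pod (busybox:1.28 is a common choice because its nslookup behaves well with cluster DNS; the default namespace is assumed):

[root@master yaml]# kubectl run -it dns-test --image=busybox:1.28 --rm --restart=Never -- nslookup statefulset-0.headless-svc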

What is a headless Service?
With a Deployment, each Pod name contains a random string, so Pod names are unordered. A StatefulSet requires order: no Pod can be arbitrarily replaced, and a rebuilt Pod keeps the same name. Since Pod IPs change, Pods are identified by name; the Pod name is the Pod's unique identifier and must be stable and durable. That is where the headless Service comes in: it gives every Pod a unique resolvable name.

What is volumeClaimTemplate?
Stateful replica sets need persistent storage. The defining trait of a distributed system is that data differs between nodes, so the nodes cannot share one storage volume: each needs its own dedicated storage. A volume defined in a Deployment's Pod template is shared by all replicas, so their data is identical, because it comes from one template. In a StatefulSet each Pod needs its own dedicated volume, which therefore cannot be created from the Pod template. Instead a StatefulSet uses volumeClaimTemplate, the volume claim template: it generates a distinct PVC for each Pod and binds it to a PV, giving every Pod its own dedicated storage. That is why volumeClaimTemplate exists.

//Next, add a volumeClaimTemplates field to the YAML above.
1. Enable NFS

[root@master ~]# mkdir /nfsdata
[root@master ~]# yum -y install nfs-utils
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *

2. Grant RBAC permissions

[root@master yaml]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac.yaml

3. Create nfs-deployment.yaml

[root@master yaml]# vim nfs-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn-test
        - name: NFS_SERVER
          value: 192.168.1.20
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.20
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deploy.yaml

4. Create the StorageClass resource

[root@master yaml]# vim storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storageclass
provisioner: bdqn-test
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f storage.yaml

5. Create the StatefulSet resource

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: myweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myweb
        image: nginx
        volumeMounts:
        - name: test-storage
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: test-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: storageclass
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

//Apply it as soon as it is written. Note that we never created a PV or PVC beforehand; check whether those two resource types now exist in the cluster.
Apply the statefulset.yaml file:

[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset created
[root@master yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                STORAGECLASS   REASON   AGE
pvc-95c6b358-e875-42f1-ab40-b740ea2b18db   100Mi      RWO            Delete           Bound      default/test-storage-statefulset-2   storageclass            3m45s
pvc-9910b735-b006-4b31-9932-19e679eddae8   100Mi      RWO            Delete           Bound      default/test-storage-statefulset-1   storageclass            4m1s
pvc-b68ee3fa-8779-4aaa-90cf-eea914366441   200Mi      RWO            Delete           Released   bdqn/test-pvc                        storageclass            83m
pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598   100Mi      RWO            Delete           Bound      default/test-storage-statefulset-0   storageclass            5m17s
[root@master yaml]# kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-storage-statefulset-0   Bound    pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598   100Mi      RWO            storageclass   33m
test-storage-statefulset-1   Bound    pvc-9910b735-b006-4b31-9932-19e679eddae8   100Mi      RWO            storageclass   4m10s
test-storage-statefulset-2   Bound    pvc-95c6b358-e875-42f1-ab40-b740ea2b18db   100Mi      RWO            storageclass   3m54s

//The output above shows that the StorageClass automatically created the PVs and the volumeClaimTemplate automatically created the PVCs. But does this satisfy our claim that every Pod has its own dedicated persistence directory, i.e. that the data inside each Pod is different?

//Create different data under each corresponding PV.

[root@master yaml]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-856d966889-xrfgf   1/1     Running   0          5m51s
statefulset-0                             1/1     Running   0          4m22s
statefulset-1                             1/1     Running   0          3m56s
statefulset-2                             1/1     Running   0          3m40s
[root@master yaml]# kubectl exec -it statefulset-0 bash
root@statefulset-0:/# echo 00000 > /usr/share/nginx/html/index.html
root@statefulset-0:/# exit
exit
[root@master yaml]# kubectl exec -it statefulset-1 bash
root@statefulset-1:/# echo 11111 > /usr/share/nginx/html/index.html
root@statefulset-1:/# exit
exit
[root@master yaml]# kubectl exec -it statefulset-2 bash
root@statefulset-2:/# echo 22222 > /usr/share/nginx/html/index.html
root@statefulset-2:/# exit
exit

Looking at each Pod's persistence directory shows that every Pod's content is different:

[root@master ~]# cd /nfsdata/
[root@master nfsdata]# ls
default-test-storage-statefulset-0-pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598
default-test-storage-statefulset-1-pvc-9910b735-b006-4b31-9932-19e679eddae8
default-test-storage-statefulset-2-pvc-95c6b358-e875-42f1-ab40-b740ea2b18db
[root@master nfsdata]# cat default-test-storage-statefulset-0-pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598/index.html
00000
[root@master nfsdata]# cat default-test-storage-statefulset-1-pvc-9910b735-b006-4b31-9932-19e679eddae8/index.html
11111
[root@master nfsdata]# cat default-test-storage-statefulset-2-pvc-95c6b358-e875-42f1-ab40-b740ea2b18db/index.html
22222

Even if a Pod is deleted, the StatefulSet controller generates a new one. Leaving the Pod IP aside, the name is guaranteed to match the old one, and most importantly the persisted data is still there.

[root@master yaml]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-856d966889-xrfgf   1/1     Running   0          21m   10.244.1.5   node01   <none>           <none>
statefulset-0                             1/1     Running   0          20m   10.244.2.5   node02   <none>           <none>
statefulset-1                             1/1     Running   0          19m   10.244.1.6   node01   <none>           <none>
statefulset-2                             1/1     Running   0          41s   10.244.2.7   node02   <none>           <none>
[root@master yaml]# kubectl delete pod statefulset-2
pod "statefulset-2" deleted
[root@master yaml]# kubectl get pod
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-856d966889-xrfgf   1/1     Running             0          21m
statefulset-0                             1/1     Running             0          20m
statefulset-1                             1/1     Running             0          19m
statefulset-2                             0/1     ContainerCreating   0          5s
[root@master yaml]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-856d966889-xrfgf   1/1     Running   0          22m
statefulset-0                             1/1     Running   0          20m
statefulset-1                             1/1     Running   0          20m
statefulset-2                             1/1     Running   0          17s
[root@master yaml]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-856d966889-xrfgf   1/1     Running   0          23m   10.244.1.5   node01   <none>           <none>
statefulset-0                             1/1     Running   0          21m   10.244.2.5   node02   <none>           <none>
statefulset-1                             1/1     Running   0          21m   10.244.1.6   node01   <none>           <none>
statefulset-2                             1/1     Running   0          99s   10.244.2.8   node02   <none>           <none>
Verify the data still exists:
[root@master yaml]# curl 10.244.2.8
22222
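Finally, scaling is just as predictable (a hedged sketch): scaling down removes the highest ordinal first, and the PVCs created by volumeClaimTemplates are deliberately left in place, so scaling back up reattaches the same data:

[root@master yaml]# kubectl scale statefulset statefulset --replicas=2
# statefulset-2 is deleted first; its PVC test-storage-statefulset-2 remains Bound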
