[Mac/Minikube] Building a ZooKeeper Cluster with K8s

Preface

There are plenty of tutorials on this topic, but they are scattered and inconsistent, so I put together my own. I also ran into a few problems along the way and record them here.


Creating the PVs

PVs are normally created on a cluster and backed by NFS shares, but this experiment runs on a single machine, so I keep things simple.

  • Create the backing directories
localhost:pv sean$ pwd
/Users/sean/Software/MiniK8s/zookeeper/pv
localhost:pv sean$ ls
zk01 zk02 zk03
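The three backing directories can be created in one loop. A minimal sketch; `BASE` is a scratch directory here for illustration, so substitute the pv path shown above:

```shell
# Create the three hostPath backing directories in one pass.
# BASE is an assumption for this sketch; use your own pv directory.
BASE="${BASE:-$(mktemp -d)}"
for i in 01 02 03; do
  mkdir -p "$BASE/zk$i"
done
ls "$BASE"
```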
  • Prepare the PV config file, zk-pv.yaml.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk01
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk02
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk02
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk03
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk03
  persistentVolumeReclaimPolicy: Recycle
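The three PV manifests differ only in their ordinal, so a small loop can generate zk-pv.yaml instead of hand-copying. A sketch; the namespace field is left out here because PVs are cluster-scoped anyway, and names and paths match the manifests above:

```shell
# Generate zk-pv.yaml: three near-identical PersistentVolume manifests
# that differ only in the ordinal (01, 02, 03).
for i in 01 02 03; do
cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk$i
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/sean/Software/MiniK8s/zookeeper/pv/zk$i
  persistentVolumeReclaimPolicy: Recycle
EOF
done > zk-pv.yaml
```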
  • Note the namespace in this file. Strictly speaking, PVs are cluster-scoped and the namespace field is ignored, but the claims and pods that bind to them live in the tools namespace, so the later queries need -n tools to find what you are looking for.
  • PV stands for PersistentVolume, Kubernetes's abstraction of a physical disk.
  • Install with kubectl create -f zk-pv.yaml
localhost:zookeeper sean$ kubectl create -f zk-pv.yaml
persistentvolume/k8s-pv-zk01 created
persistentvolume/k8s-pv-zk02 created
persistentvolume/k8s-pv-zk03 created
  • Check
localhost:zookeeper sean$ kubectl get pv -o wide
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
k8s-pv-zk01   5Gi        RWO            Recycle          Available           anything                9s    Filesystem
k8s-pv-zk02   5Gi        RWO            Recycle          Available           anything                8s    Filesystem
k8s-pv-zk03   5Gi        RWO            Recycle          Available           anything                8s    Filesystem

Installing the Cluster and Networking

  • Create the namespace
localhost:zookeeper sean$ kubectl get ns
NAME                   STATUS   AGE
default                Active   67m
kube-node-lease        Active   67m
kube-public            Active   67m
kube-system            Active   67m
kubernetes-dashboard   Active   65m
localhost:zookeeper sean$ kubectl create ns tools
namespace/tools created

This step was missing from the tutorial I followed, but skipping it is guaranteed to cause errors, so I record it here.
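As a declarative alternative to the imperative `kubectl create ns tools`, the namespace can also be kept as a small manifest alongside the others (a sketch, equivalent to the command above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tools
```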

  • Prepare zk.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
    - name: client
      port: 2181
      targetPort: 2181
      nodePort: 31811
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk # has to match .spec.template.metadata.labels
  serviceName: "zk-hs"
  replicas: 3 # by default is 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: zk
          imagePullPolicy: Always
          image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
          resources:
            requests:
              memory: "200Mi"
              cpu: "0.1"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "anything"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

Note that this YAML has three main parts (plus a PodDisruptionBudget):

  • The first is the headless Service zk-hs for the intra-cluster network on 2888 (server) and 3888 (leader election).
  • The second is the NodePort Service zk-cs, exposing 2181 outside the cluster as 31811. In the end I still could not reach it from the host; I come back to that later.
  • The third is the StatefulSet that creates the three actual pods.
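The headless Service is what gives each StatefulSet pod a stable DNS name of the form <pod>.<service>.<namespace>.svc.<cluster-domain>, which is how the ZooKeeper peers find each other. A quick sketch of the three peer names, assuming the default cluster domain cluster.local:

```shell
# Print the stable per-pod DNS names provided by the headless
# Service zk-hs (cluster.local is the Minikube default domain).
for i in 0 1 2; do
  echo "zk-$i.zk-hs.tools.svc.cluster.local"
done
```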
  • Apply it
localhost:zookeeper sean$ kubectl apply -f zk.yaml
service/zk-hs unchanged
service/zk-cs created
poddisruptionbudget.policy/zk-pdb unchanged
statefulset.apps/zk configured
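Applying the StatefulSet also creates one PersistentVolumeClaim per replica from volumeClaimTemplates, named <template-name>-<statefulset-name>-<ordinal>; these are the claims that bind the PVs created earlier (they can be listed with kubectl get pvc -n tools). A sketch of the expected names:

```shell
# PVC names follow <template>-<statefulset>-<ordinal>:
for i in 0 1 2; do
  echo "datadir-zk-$i"
done
```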

Hiccup 1

  • The Service "zk-cs" is invalid: spec.ports[0].nodePort: Invalid value: 21811: provided port is not in the valid range. The range of valid ports is 30000-32767. The nodePort was originally 21811, which is outside the allowed range, so I changed it to 31811.

Hiccup 2

  • When resources run short, the last node is left Pending.
localhost:zookeeper sean$ kubectl get pods -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          3m54s
zk-1   1/1     Running   0          3m54s
zk-2   0/1     Pending   0          3m54s
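The nodePort rejection from hiccup 1 can be caught before applying: the API server only accepts NodePorts in the default 30000-32767 range. A small sketch of that check:

```shell
# Check whether a desired nodePort falls in the default
# Kubernetes NodePort range (30000-32767) before applying.
check_nodeport() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "$1 ok"
  else
    echo "$1 out of range"
  fi
}
check_nodeport 21811   # the original value, rejected by the API server
check_nodeport 31811   # the corrected value
```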
  • Verify
# list the pods
kubectl get pod -l app=zk -o wide -n tools
localhost:zookeeper sean$ kubectl get pods -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          59s
zk-1   1/1     Running   0          2m59s
zk-2   1/1     Running   0          3m59s
# check the Services
localhost:zookeeper sean$ kubectl get svc -n tools
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
zk-cs   NodePort    10.10x.10x.19x   <none>        2181:31811/TCP      47m
zk-hs   ClusterIP   None             <none>        2888/TCP,3888/TCP   48m
localhost:zookeeper sean$ for i in 0 1 2; do kubectl exec zk-$i -n tools zkServer.sh status; done
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader
# inspect pod details
kubectl describe pods

Since I never managed to reach the cluster through NodePort 31811, I fell back to forwarding a single pod's port instead. (Most likely this is the Minikube docker driver on macOS: the node IP 192.168.49.2 is only routable inside the Docker VM, so NodePort services are not directly reachable from the host; minikube service zk-cs -n tools or minikube tunnel would be the usual workaround.) The relevant command:
kubectl port-forward -n tools zk-0 2181:2181 &

  • ZooKeeper client commands
# connect
./zkCli.sh -timeout 500000 -server 10.111.255.148:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] ls
[zk: 127.0.0.1:2181(CONNECTED) 1] create /my node
Created /my
[zk: 127.0.0.1:2181(CONNECTED) 6] ls /
[zookeeper, my]

The attempts below timed out and failed.

  • Timeout
[zk: 10.111.255.148:31811(CONNECTING) 0] ls
[zk: 10.111.255.148:31811(CONNECTING) 1] 2021-03-28 19:13:24,864 [myid:] - WARN  [main-SendThread(10.111.255.148:31811):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Operation timed out
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
  • Failure
[zk: 192.168.49.2:2181(CONNECTING) 0] ls
[zk: 192.168.49.2:2181(CONNECTING) 1] ls /
2021-03-28 19:42:11,581 [myid:] - WARN  [main-SendThread(192.168.49.2:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Operation timed out
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1500)
    at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:720)
    at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:588)
    at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360)
    at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:323)
    at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)


Others

  • Exposing the service with kubectl expose (unsuccessful)
localhost:~ sean$ kubectl expose service zk-0 --port=2181 --target-port=2181 --external-ip=127.0.0.1 --name use-zk
Error from server (NotFound): services "zk-0" not found
localhost:~ sean$ kubectl expose service zk-cs --port=2181 --target-port=2181 --external-ip=127.0.0.1 --name use-zk
Error from server (NotFound): services "zk-cs" not found
localhost:~ sean$ kubectl expose service -n tools zk-cs --port=2181 --target-port=2181 --external-ip=127.0.0.1 --name use-zk
The Service "use-zk" is invalid: spec.externalIPs[0]: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
localhost:~ sean$ kubectl expose service -n tools zk-cs --port=2181 --target-port=2181 --external-ip=10.108.102.198 --name use-zk
service/use-zk exposed
localhost:~ sean$ kubectl get services -n tools
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP      PORT(S)             AGE
use-zk   ClusterIP   10.107.97.21     10.108.102.198   2181/TCP            51s
zk-cs    NodePort    10.108.102.198   <none>           2181:31811/TCP      75m
zk-hs    ClusterIP   None             <none>           2888/TCP,3888/TCP   76m
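Two of the failures above are predictable: without -n tools the service is not found, and kubectl rejects any externalIPs value in the loopback range 127.0.0.0/8. A sketch of that second check:

```shell
# kubectl rejects externalIPs in the loopback range 127.0.0.0/8;
# a quick local check before running `kubectl expose`.
is_loopback() {
  case "$1" in
    127.*) echo "$1 loopback - rejected" ;;
    *)     echo "$1 ok" ;;
  esac
}
is_loopback 127.0.0.1
is_loopback 10.108.102.198
```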
  • Network overview
localhost:~ sean$ kubectl get svc -n tools
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
zk-cs   NodePort    10.1xx.25x.1x8   <none>        2181:31811/TCP      31m
zk-hs   ClusterIP   None             <none>        2888/TCP,3888/TCP   120m
localhost:~ sean$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h8m
localhost:~ sean$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
minikube   Ready    control-plane,master   3h10m   v1.20.2   192.168.49.2   <none>        Ubuntu 20.04.1 LTS   4.19.76-linuxkit   docker://20.10.3

