Deploying a Stateful (StatefulSet) zk-kafka Cluster on k8s

There are five servers in total:

Role    IP
node-1 192.168.10.201
node-2 192.168.10.202
node-3 192.168.10.203
node-4 192.168.10.204
NFS 192.168.10.151

Part 1: Create the NFS shared storage
1. Install NFS on all five servers:
yum -y install rpcbind nfs-utils
systemctl enable rpcbind; systemctl enable nfs-server; systemctl restart rpcbind; systemctl restart nfs-server
2. On the NFS server (192.168.10.151), create the shared directories:
vim /etc/exports
/data/zk/data1 192.168.0.1/16(rw,no_root_squash,no_all_squash,sync)
/data/zk/data2 192.168.0.1/16(rw,no_root_squash,no_all_squash,sync)
/data/zk/data3 192.168.0.1/16(rw,no_root_squash,no_all_squash,sync)
/data/kafka/data1 192.168.0.1/16(rw,no_root_squash,no_all_squash,sync)
/data/kafka/data2 192.168.0.1/16(rw,no_root_squash,no_all_squash,sync)
/data/kafka/data3 192.168.0.1/16(rw,no_root_squash,no_all_squash,sync)
A separate directory is created for each node, so every replica gets its own backing export.
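The export file only lists the paths; the directories themselves still have to exist and the export table must be reloaded. A minimal sketch on the NFS host (standard mkdir/exportfs/showmount usage, paths as above):

mkdir -p /data/zk/data1 /data/zk/data2 /data/zk/data3
mkdir -p /data/kafka/data1 /data/kafka/data2 /data/kafka/data3
exportfs -rav                      # re-export everything in /etc/exports
showmount -e 192.168.10.151        # confirm the six shares are published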

Part 2: Create the ZooKeeper cluster
1. Create the namespace:
vim namespaces.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: zk-kafka
  labels:
    name: zk-kafka
2. Create the ZooKeeper PVs:
vim zk_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: zk-data1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/zk/data1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: zk-data2
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/zk/data2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: zk-data3
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/zk/data3
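
The namespace and PV manifests can then be applied and checked with kubectl (standard usage):

kubectl apply -f namespaces.yaml
kubectl apply -f zk_pv.yaml
kubectl get pv                     # zk-data1/2/3 should show as Available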

3. Create the ZooKeeper cluster:
vim zk.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: zk-cs
  labels:
    app: zk
spec:
  type: NodePort
  ports:
  - port: 2181
    targetPort: 2181
    name: client
    nodePort: 2181
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  namespace: zk-kafka
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: zk-kafka
  name: zok
spec:
  serviceName: zk-hs
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Wait until zok-0, zok-1 and zok-2 have all started before creating the Kafka cluster.
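
A possible apply-and-verify sequence (the zkServer.sh call is an assumption about the image layout; adjust the path if the image installs ZooKeeper elsewhere):

kubectl apply -f zk.yaml
kubectl get pods -n zk-kafka -w                          # wait for zok-0/1/2 to become Running and Ready
kubectl exec -n zk-kafka zok-1 -- zkServer.sh status     # assumed to be on PATH; should report leader or follower

Note that nodePort: 2181 (and later 9092 for Kafka) lies outside the default NodePort range of 30000-32767, so the kube-apiserver must be started with a widened --service-node-port-range for these Services to be accepted.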

Part 3: Create the Kafka cluster
1. Create the Kafka PVs:
vim kafka_pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: kafka-data1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/kafka/data1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: kafka-data2
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/kafka/data2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: kafka-data3
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/kafka/data3
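
As with the ZooKeeper volumes, the PVs can be applied and checked before moving on:

kubectl apply -f kafka_pv.yaml
kubectl get pv                     # kafka-data1/2/3 should be Available, and turn Bound once claimed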

2. Create the Kafka cluster:
vim kafka.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: kafka-hs
  labels:
    app: kafka
spec:
  ports:
  - port: 1099
    name: jmx
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: kafka-cs
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - port: 9092
    targetPort: 9092
    name: client
    nodePort: 9092
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  namespace: zk-kafka
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: zk-kafka
  name: kafoka
spec:
  serviceName: kafka-hs
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8skafka
        imagePullPolicy: Always
        image: leey18/k8skafka
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 9092
          name: client
        - containerPort: 1099
          name: jmx
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zok-0.zk-hs.zk-kafka.svc.cluster.local:2181,zok-1.zk-hs.zk-kafka.svc.cluster.local:2181,zok-2.zk-hs.zk-kafka.svc.cluster.local:2181 \
          --override log.dirs=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000"
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: kafkadatadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9092"
  volumeClaimTemplates:
  - metadata:
      name: kafkadatadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Once all the pods are up and Running, the deployment is done; a quick smoke test is sketched below.
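
A possible way to apply the manifest and smoke-test the brokers (the kafka-topics.sh path is assumed from the /opt/kafka layout used by the readinessProbe above):

kubectl apply -f kafka.yaml
kubectl get pods -n zk-kafka -w                          # wait for kafoka-0/1/2 to become Ready
# assumed tool path; Kafka 0.10.x topic creation still goes through --zookeeper
kubectl exec -n zk-kafka kafoka-0 -- /opt/kafka/bin/kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --zookeeper zk-cs.zk-kafka.svc.cluster.local:2181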
