Deploying a Highly Available RabbitMQ Mirrored Cluster on Kubernetes with Helm
Prerequisites: a Kubernetes cluster, a Harbor private registry, Helm, and a StorageClass
Installation and deployment
Add the bitnami repository and search for rabbitmq
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
[kmning@k8s-register-node ~]$ helm search repo rabbitmq
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/rabbitmq 11.13.0 3.11.13 RabbitMQ is an open source general-purpose mess...
bitnami/rabbitmq-cluster-operator 3.2.10 2.2.0 The RabbitMQ Cluster Kubernetes Operator automa...
Pull the chart to the local machine
helm pull bitnami/rabbitmq --version 11.13.0
tar -zxvf rabbitmq-11.13.0.tgz
cp rabbitmq/values.yaml ./values-rabbitmq.yaml
Edit the local values-rabbitmq.yaml. There are a great many options; adjust them to your environment. For example, I mainly changed the following.
General configuration changes
global:
  imageRegistry: "k8s-register-node.com:443"
  imagePullSecrets: []
  storageClass: "managed-nfs-storage"
Find every setting that references an image and point it at the private registry, and set storageClass to the StorageClass we defined:
image:
  registry: k8s-register-node.com:443
  repository: lib-proxy/bitnami/rabbitmq
  tag: 3.11.13-debian-11-r0
persistence:
  enabled: true
  storageClass: "managed-nfs-storage"
RabbitMQ configuration changes
auth:
  username: kmning
  password: "yourpwd"
  existingPasswordSecret: ""
  erlangCookie: "secretcookie"
  existingErlangSecret: ""
If you prefer not to keep the password in the configuration file, you can supply it as parameters at install time:
--set auth.username=euht,auth.password=yourpwd,auth.erlangCookie=secretcookie
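As an aside, each dotted key in a --set string addresses a nested key in the values file. The sketch below is purely illustrative (it is not Helm's actual parser, which also handles escaping, lists, and type coercion), showing how such a string maps onto nested values:

```python
# Illustrative only: a minimal parser for simple "a.b=1,a.c=2" --set strings.
# Helm's real parser additionally supports escaped commas, array indices, etc.
def parse_set_flag(s: str) -> dict:
    """Turn "a.b=1,a.c=2" into {"a": {"b": "1", "c": "2"}}."""
    result = {}
    for pair in s.split(","):
        key, _, value = pair.partition("=")
        node = result
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating nested dicts
        node[parts[-1]] = value
    return result

overrides = parse_set_flag(
    "auth.username=euht,auth.password=yourpwd,auth.erlangCookie=secretcookie"
)
print(overrides["auth"]["erlangCookie"])  # secretcookie
```

This is why the --set form above is equivalent to the auth block shown in values-rabbitmq.yaml.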
Enable clustering.forceBoot
clustering:
  enabled: true
  addressType: hostname
  rebalance: false
  forceBoot: true
Set the time zone
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
Set the replica count
replicaCount: 3
Persistence configuration
persistence:
  enabled: true
  storageClass: "managed-nfs-storage"
  selector: {}
  accessMode: ReadWriteOnce
  existingClaim: ""
  size: 8Gi
Install the RabbitMQ cluster with Helm
kubectl create ns rabbitmq-cluster
helm -n rabbitmq-cluster install rabbitmq-cluster rabbitmq-11.13.0.tgz -f values-rabbitmq.yaml \
--set useBundledSystemChart=true
Output after installation:
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ helm -n rabbitmq-cluster install rabbitmq-cluster rabbitmq-11.13.0.tgz -f values-rabbitmq.yaml \
> --set useBundledSystemChart=true
NAME: rabbitmq-cluster
LAST DEPLOYED: Fri Apr 28 06:38:58 2023
NAMESPACE: rabbitmq-cluster
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: rabbitmq
CHART VERSION: 11.13.0
APP VERSION: 3.11.13

** Please be patient while the chart is being deployed **

Credentials:
    echo "Username      : euht"
    echo "Password      : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-cluster -o jsonpath="{.data.rabbitmq-password}" | base64 -d)"
    echo "ErLang Cookie : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-cluster -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 -d)"

Note that the credentials are saved in persistent volume claims and will not be changed upon upgrade or reinstallation unless the persistent volume claim has been deleted. If this is not the first installation of this chart, the credentials may not be valid.
This is applicable when no passwords are set and therefore the random password is autogenerated. In case of using a fixed password, you should specify it when upgrading.
More information about the credentials may be found at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases.

RabbitMQ can be accessed within the cluster on port 5672 at rabbitmq-cluster.rabbitmq-cluster.svc.cluster.local

To access for outside the cluster, perform the following steps:

To Access the RabbitMQ AMQP port:

    echo "URL : amqp://127.0.0.1:5672/"
    kubectl port-forward --namespace rabbitmq-cluster svc/rabbitmq-cluster 5672:5672

To Access the RabbitMQ Management interface:

    echo "URL : http://127.0.0.1:15672/"
    kubectl port-forward --namespace rabbitmq-cluster svc/rabbitmq-cluster 15672:15672
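The chart NOTES pipe the Secret values through `base64 -d` because Kubernetes stores Secret data base64-encoded. The same decoding step in Python (the encoded string below is just an example value, not taken from a real cluster):

```python
import base64

def decode_secret_value(encoded: str) -> str:
    """Decode a base64-encoded Kubernetes Secret value to plain text."""
    return base64.b64decode(encoded).decode("utf-8")

# Example: what a jsonpath query against .data.rabbitmq-password might return
print(decode_secret_value("eW91cnB3ZA=="))  # yourpwd
```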
List the installed charts
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ helm -n rabbitmq-cluster list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
rabbitmq-cluster rabbitmq-cluster 1 2023-04-28 06:38:58.749511901 +0000 UTC deployed rabbitmq-11.13.0 3.11.13
To uninstall if needed:
helm -n rabbitmq-cluster uninstall rabbitmq-cluster
Check the deployment
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ kubectl get pods -n rabbitmq-cluster
NAME READY STATUS RESTARTS AGE
rabbitmq-cluster-0 1/1 Running 0 3m53s
rabbitmq-cluster-1 1/1 Running 0 2m18s
rabbitmq-cluster-2 0/1 ContainerCreating 0 73s
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ kubectl get svc -n rabbitmq-cluster
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-cluster ClusterIP 10.43.65.148 <none> 5672/TCP,4369/TCP,25672/TCP,15672/TCP 4m14s
rabbitmq-cluster-headless ClusterIP None <none> 4369/TCP,5672/TCP,25672/TCP,15672/TCP 4m14s
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ kubectl get pv -n rabbitmq-cluster
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                      STORAGECLASS          REASON   AGE
pvc-7f4b0a27-c370-45d5-8af3-633954ae39ef   8Gi        RWO            Delete           Bound    rabbitmq-cluster/data-rabbitmq-cluster-2   managed-nfs-storage            114s
pvc-83f15f1e-9f16-4eeb-acaa-28c627ad90f3   8Gi        RWO            Delete           Bound    rabbitmq-cluster/data-rabbitmq-cluster-0   managed-nfs-storage            4m33s
pvc-b7245f5d-72d4-45a1-8328-83928bfdd347   8Gi        RWO            Delete           Bound    rabbitmq-cluster/data-rabbitmq-cluster-1   managed-nfs-storage            2m59s
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ kubectl get pvc -n rabbitmq-cluster
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-rabbitmq-cluster-0 Bound pvc-83f15f1e-9f16-4eeb-acaa-28c627ad90f3 8Gi RWO managed-nfs-storage 5m1s
data-rabbitmq-cluster-1 Bound pvc-b7245f5d-72d4-45a1-8328-83928bfdd347 8Gi RWO managed-nfs-storage 3m26s
data-rabbitmq-cluster-2 Bound pvc-7f4b0a27-c370-45d5-8af3-633954ae39ef 8Gi RWO managed-nfs-storage 2m21s
The services are healthy. At this point we can already reach the cluster through the Service DNS name rabbitmq-cluster.rabbitmq-cluster.svc.cluster.local.
To check the cluster status, exec into any pod:
kubectl exec -it -n rabbitmq-cluster rabbitmq-cluster-0 -- bash
# Check the cluster status
rabbitmqctl cluster_status
# List policies (mirroring has not been configured yet)
rabbitmqctl list_policies
# Set the cluster name
rabbitmqctl set_cluster_name [cluster_name]
Exposing the services
Applications inside the Kubernetes cluster can reach the AMQP service on port 5672 simply via rabbitmq-cluster.rabbitmq-cluster.svc.cluster.local:5672. For port 15672 we manually create a NodePort Service so the management UI is reachable from outside the cluster.
First get the StatefulSet's selector:
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ kubectl get svc -n rabbitmq-cluster -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
rabbitmq-cluster ClusterIP 10.43.65.148 <none> 5672/TCP,4369/TCP,25672/TCP,15672/TCP 17m app.kubernetes.io/instance=rabbitmq-cluster,app.kubernetes.io/name=rabbitmq
rabbitmq-cluster-headless ClusterIP None <none> 4369/TCP,5672/TCP,25672/TCP,15672/TCP 17m app.kubernetes.io/instance=rabbitmq-cluster,app.kubernetes.io/name=rabbitmq
Create the NodePort Service
rabbitmq-cluster-svc-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster-nodeport
  namespace: rabbitmq-cluster
spec:
  ports:
    - nodePort: 30072
      port: 5672
      name: rab-sv-port
      protocol: TCP
      targetPort: 5672
    - nodePort: 30073
      port: 15672
      name: rab-ad-port
      protocol: TCP
      targetPort: 15672
  selector:
    app.kubernetes.io/instance: rabbitmq-cluster
    app.kubernetes.io/name: rabbitmq
  type: NodePort
Once created, open the management UI with any worker node's IP: http://yourWorker:30073 (30073 is the NodePort mapped to the management port 15672; 30072 carries the AMQP traffic).
The cluster is up and running normally.
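Note that nodePort values must fall inside the cluster's service-node-port-range, which defaults to 30000-32767; a quick sanity check for the two ports chosen above:

```python
# NodePort values must be within the API server's --service-node-port-range
# (30000-32767 by default). Validate the ports used in the Service above.
def valid_nodeport(port: int, lo: int = 30000, hi: int = 32767) -> bool:
    return lo <= port <= hi

for port in (30072, 30073):
    print(port, valid_nodeport(port))  # both True
```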
Configuring mirrored mode
Exec into any pod
kubectl exec -it -n rabbitmq-cluster rabbitmq-cluster-0 -- bash
# Set the mirrored-queue policy
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all" , "ha-sync-mode":"automatic"}'
# List policies
rabbitmqctl list_policies
The output looks like this:
kmning@k8s-master-1:~/rabbitmq-k8s-cluster$ kubectl exec -it -n rabbitmq-cluster rabbitmq-cluster-0 -- bash
I have no name!@rabbitmq-cluster-0:/$ rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all" , "ha-sync-mode":"automatic"}'
Setting policy "ha-all" for pattern "^" to "{"ha-mode":"all" , "ha-sync-mode":"automatic"}" with priority "0" for vhost "/" ...
I have no name!@rabbitmq-cluster-0:/$ rabbitmqctl list_policies
Listing policies for vhost "/" ...
vhost name pattern apply-to definition priority
/ ha-all ^ all {"ha-mode":"all","ha-sync-mode":"automatic"} 0
The mirrored-queue policy is now in place. Connect to any node, create queues and exchanges, and publish messages to a queue: every node will replicate the queue's data, avoiding the risk of messages being lost before they are consumed. Likewise, once a consumer consumes the messages, the corresponding data is removed on all nodes.
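The ha-all policy applies to every queue because its pattern "^" matches any name. A small illustrative sketch (RabbitMQ itself matches patterns with Erlang's re module, but the semantics for this pattern are the same) of how the definition serializes and which names the pattern covers:

```python
import json
import re

# The policy definition passed to rabbitmqctl set_policy, as a dict
policy = {"ha-mode": "all", "ha-sync-mode": "automatic"}
definition = json.dumps(policy)

def policy_applies(pattern: str, queue_name: str) -> bool:
    """True if the policy's regex pattern matches the queue name."""
    return re.search(pattern, queue_name) is not None

print(definition)
print(policy_applies("^", "orders.created"))  # "^" matches any queue name
```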
Finally, inside the Kubernetes cluster, this RabbitMQ cluster is reachable simply through the Service DNS name:
rabbitmq-cluster.rabbitmq-cluster.svc.cluster.local:5672
When connecting through this name, Kubernetes load-balances the connections for us (via the kube-proxy component), so there is no need to set up an Nginx load balancer.
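In-cluster Service DNS names follow the pattern <service>.<namespace>.svc.<cluster-domain>, where the cluster domain defaults to cluster.local. A small helper that composes the name used throughout this article:

```python
# Compose the in-cluster DNS name of a Kubernetes Service.
# The default cluster domain is cluster.local (configurable per cluster).
def service_dns(service: str, namespace: str, domain: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{domain}"

print(service_dns("rabbitmq-cluster", "rabbitmq-cluster"))
# rabbitmq-cluster.rabbitmq-cluster.svc.cluster.local
```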
Spring Boot configuration example:
spring:
  rabbitmq:
    host: rabbitmq-cluster.rabbitmq-cluster.svc.cluster.local
    port: 5672
    virtual-host: /
    username: kmning
    password: yourpwd