KubeSphere Meets the Public Cloud Again: This Time, Alibaba Cloud SLB
1. Architecture Diagram
2. Cluster Configuration Changes
1. Add startup flags (kubeadm-installed clusters only)
(Every master node must be modified.)
1.1 kube-apiserver manifest path:
/etc/kubernetes/manifests
1.2 Add the kube-apiserver flag (the static Pod restarts automatically after the file is saved)
In kube-apiserver.yaml, add the flag under spec.containers.command:
spec:
  containers:
  - command:
    - --cloud-provider=external   # add this line
2. Add the kube-controller-manager flag (the static Pod restarts automatically after the file is saved)
(Every master node must be modified.)
kube-controller-manager manifest path:
/etc/kubernetes/manifests
2.1 In kube-controller-manager.yaml, add the flag under spec.containers.command:
spec:
  containers:
  - command:
    - --cloud-provider=external   # add this line
3. kubelet drop-in path:
(All nodes must be modified; restart the kubelet on every master and node afterwards.)
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Get the --provider-id value (e.g. cn-hangzhou.i-bp1auxi4brx5qtsvf019) from the metadata service, then fill it into the flag below:
echo `curl -s http://100.100.100.200/latest/meta-data/region-id`.`curl -s http://100.100.100.200/latest/meta-data/instance-id`
Add this Environment line to the drop-in:
Environment="KUBELET_CLOUD_PROVIDER_ARGS=--cloud-provider=external --provider-id=cn-hangzhou.i-bp1auxi4brx5qtsvf019"
Then append the variable to the end of the kubelet ExecStart line:
$KUBELET_CLOUD_PROVIDER_ARGS
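The metadata one-liner above can be wrapped in a small helper so the same value is reused when writing the kubelet drop-in on each node; a minimal sketch (the function name make_provider_id is illustrative, not part of any tool):

```shell
# Compose the kubelet --provider-id value from a region ID and an instance ID.
# make_provider_id is an illustrative helper name, not part of any tool.
make_provider_id() {
  printf '%s.%s\n' "$1" "$2"
}

# On a real ECS node the two IDs come from the metadata service:
#   region_id=$(curl -s http://100.100.100.200/latest/meta-data/region-id)
#   instance_id=$(curl -s http://100.100.100.200/latest/meta-data/instance-id)
make_provider_id cn-hangzhou i-bp1auxi4brx5qtsvf019   # prints cn-hangzhou.i-bp1auxi4brx5qtsvf019
```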
3. Create the ConfigMaps
1. Create a RAM sub-user on Alibaba Cloud (to obtain an AccessKey ID and Secret)
2. Create a dedicated RAM policy for the sub-user
{"Version": "1","Statement": [{"Action": ["ecs:Describe*","ecs:AttachDisk","ecs:CreateDisk","ecs:CreateSnapshot","ecs:CreateRouteEntry","ecs:DeleteDisk","ecs:DeleteSnapshot","ecs:DeleteRouteEntry","ecs:DetachDisk","ecs:ModifyAutoSnapshotPolicyEx","ecs:ModifyDiskAttribute","ecs:CreateNetworkInterface","ecs:DescribeNetworkInterfaces","ecs:AttachNetworkInterface","ecs:DetachNetworkInterface","ecs:DeleteNetworkInterface","ecs:DescribeInstanceAttribute"],"Resource": ["*"],"Effect": "Allow"},{"Action": ["cr:Get*","cr:List*","cr:PullRepository"],"Resource": ["*"],"Effect": "Allow"},{"Action": ["slb:*"],"Resource": ["*"],"Effect": "Allow"},{"Action": ["cms:*"],"Resource": ["*"],"Effect": "Allow"},{"Action": ["vpc:*"],"Resource": ["*"],"Effect": "Allow"},{"Action": ["log:*"],"Resource": ["*"],"Effect": "Allow"},{"Action": ["nas:*"],"Resource": ["*"],"Effect": "Allow"}]
}
3. Attach the policy to the sub-user
4. Create the ConfigMap
Note: the AccessKey ID and Secret created above must be Base64-encoded before they go into the manifest (any Base64 tool or the base64 command will do).
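The encoding can be done with the standard base64 command; the sample key below is made up for illustration:

```shell
# Encode an AccessKey ID for the ConfigMap. -w 0 (GNU coreutils) disables
# line wrapping so the output stays on a single line. The key is a fake example.
printf '%s' 'my-access-key-id' | base64 -w 0    # prints bXktYWNjZXNzLWtleS1pZA==

# Decode to verify the round trip:
printf '%s' 'bXktYWNjZXNzLWtleS1pZA==' | base64 -d    # prints my-access-key-id
```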
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud-config.conf: |-
    {
        "Global": {
            "accessKeyID": "TFRBSTV0SGV0S1k1MXFoRWd0bTluWExM",
            "accessKeySecret": "d0JjYk5ZTXNZeXlPN280T3ZYcTJvcWdsWmNrQU1q"
        }
    }
kubectl apply -f cloud-config.yaml
5. Create the cloud-controller-manager kubeconfig
vim /etc/kubernetes/cloud-controller-manager.conf
Find the master's private IP address and substitute it in the server field below.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: $CA_DATA
    server: https://192.168.1.76:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:cloud-controller-manager
  name: system:cloud-controller-manager@kubernetes
current-context: system:cloud-controller-manager@kubernetes
users:
- name: system:cloud-controller-manager
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
Query the CA certificate data and substitute it for CA_DATA:
cat /etc/kubernetes/pki/ca.crt|base64 -w 0
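The placeholder substitution can also be scripted; a sketch, assuming the template above was saved with a literal $CA_DATA placeholder (fill_ca_data is an illustrative name, not an existing tool):

```shell
# Replace the literal $CA_DATA placeholder in a kubeconfig template with the
# base64-encoded CA certificate. Typical usage on a master node:
#   fill_ca_data /etc/kubernetes/cloud-controller-manager.conf /etc/kubernetes/pki/ca.crt
fill_ca_data() {
  ca_data=$(base64 -w 0 < "$2")
  # '|' is a safe sed delimiter here: base64 output never contains it.
  sed -i "s|\$CA_DATA|${ca_data}|" "$1"
}
```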
4. Deploy the Alibaba Cloud Load Balancer Plugin
1. Find the cluster's current CIDR and fill it into the YAML file below
kubectl cluster-info dump |grep cidr
2. Get the node's internal Alibaba Cloud hostname
echo `curl -s http://100.100.100.200/latest/meta-data/region-id`.`curl -s http://100.100.100.200/latest/meta-data/instance-id`
3. Change the hostname in the cluster (every node must be modified)
hostnamectl set-hostname <Alibaba Cloud internal hostname>
4. Deploy the Alibaba Cloud plugin (replace the cluster CIDR with your own)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  - services
  - secrets
  - endpoints
  - serviceaccounts
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
  - delete
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - services/status
  verbs:
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  - endpoints
  verbs:
  - create
  - patch
  - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:shared-informers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: shared-informers
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-node-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: cloud-node-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:pvl-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: pvl-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:route-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: route-controller
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: cloud-controller-manager
    tier: control-plane
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cloud-controller-manager
      tier: control-plane
  template:
    metadata:
      labels:
        app: cloud-controller-manager
        tier: control-plane
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: cloud-controller-manager
      tolerations:
      - effect: NoSchedule
        operator: Exists
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        operator: Exists
        key: node.cloudprovider.kubernetes.io/uninitialized
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - command:
        - /cloud-controller-manager
        - --kubeconfig=/etc/kubernetes/cloud-controller-manager.conf
        - --address=127.0.0.1
        - --allow-untagged-cloud=true
        - --leader-elect=true
        - --cloud-provider=alicloud
        - --use-service-account-credentials=true
        - --cloud-config=/etc/kubernetes/config/cloud-config.conf
        - --configure-cloud-routes=true
        - --allocate-node-cidrs=true
        - --route-reconciliation-period=3m
        # replace the value below with your own cluster cidr
        - --cluster-cidr=172.20.0.0/16
        image: registry.cn-hangzhou.aliyuncs.com/acs/cloud-controller-manager-amd64:v1.9.3.339-g9830b58-aliyun
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10258
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: cloud-controller-manager
        resources:
          requests:
            cpu: 200m
        volumeMounts:
        - mountPath: /etc/kubernetes/
          name: k8s
        - mountPath: /etc/ssl/certs
          name: certs
        - mountPath: /etc/pki
          name: pki
        - mountPath: /etc/kubernetes/config
          name: cloud-config
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes
        name: k8s
      - hostPath:
          path: /etc/ssl/certs
        name: certs
      - hostPath:
          path: /etc/pki
        name: pki
      - configMap:
          defaultMode: 420
          items:
          - key: cloud-config.conf
            path: cloud-config.conf
          name: cloud-config
        name: cloud-config
kubectl apply -f cloud-controller-manager.yml
5. Create the Related Services on the KubeSphere Platform
1. Create the workload
1.1 Go to Cluster Management
1.2 Create the workload with the following manifest
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubesphere-router-rukou
  namespace: kubesphere-controls-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubesphere
      component: ks-router
      project: kubesphere-controls-system
      tier: backend
  template:
    metadata:
      labels:
        app: kubesphere
        component: ks-router
        project: kubesphere-controls-system
        tier: backend
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        sidecar.istio.io/inject: 'false'
    spec:
      containers:
      - name: nginx-ingress-controller
        image: registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.35.0
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
        - '--annotations-prefix=nginx.ingress.kubernetes.io'
        - '--update-status'
        - '--update-status-on-shutdown'
        - '--configmap=$(POD_NAMESPACE)/kubesphere-router-rukou-nginx'
        - '--watch-namespace=kubesphere-controls-system'
        - '--election-id=kubesphere-router-rukou'
        - '--publish-service=kubesphere-controls-system/rukou'
        - '--report-node-internal-ip-address'
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        resources: {}
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsNonRoot: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: kubesphere-router-serviceaccount
      serviceAccount: kubesphere-router-serviceaccount
      securityContext: {}
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
2. Create the service with the following manifest
kind: Service
apiVersion: v1
metadata:
  name: rukou
  namespace: kubesphere-controls-system
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "<SLB instance ID>"
spec:
  ports:
  - name: http-80
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https-443
    protocol: TCP
    port: 443
    targetPort: 443
  selector:
    app: kubesphere
    component: ks-router
    project: kubesphere-controls-system
    tier: backend
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
Note: the --publish-service argument in the workload manifest's args must exactly match the namespace and name of the Service created above, otherwise the setup fails!
Note: replace the SLB instance ID in the Service manifest with your own.
3. Create a gateway for each project that needs to be exposed externally
3.1 Create the application route
To configure a wildcard domain, open edit mode and modify the route as shown in the screenshot below.
6. Final Access Test