1. Deploying the Kubernetes platform requires at least two servers; this walkthrough uses three:

Kubernetes Master node: 192.168.0.111
Kubernetes Node1 node: 192.168.0.112
Kubernetes Node2 node: 192.168.0.113

2. Run the following commands on every server:

systemctl stop firewalld
systemctl disable firewalld
yum -y install ntp
ntpdate pool.ntp.org    # sync the clock so all servers agree on time
systemctl start ntpd
systemctl enable ntpd
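
To verify that time synchronization is actually working, you can check the daemon's status on each host; both tools ship with the ntp package on CentOS 7:

ntpstat      # reports whether the local clock is synchronized
ntpq -p      # lists the NTP peers and their current offsets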

3. Kubernetes Master installation and configuration. Install etcd, Kubernetes, and the flannel network components on the Master node:

yum install kubernetes-master etcd flannel -y

Master /etc/etcd/etcd.conf configuration file:

cat>/etc/etcd/etcd.conf<<EOF
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.111:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.111:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.111:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.111:2380,etcd2=http://192.168.0.112:2380,etcd3=http://192.168.0.113:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.111:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
mkdir -p /data/etcd/; chmod -R 757 /data/etcd/
systemctl restart etcd.service
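
After etcd has been restarted on all members (the Node members are configured in step 4 below; the cluster only reports healthy once all three are up), you can verify it from the Master with the etcd v2-era client that ships with these RPMs:

etcdctl -C http://192.168.0.111:2379 cluster-health   # every member should report "healthy"
etcdctl member list                                   # shows the three members and their peer URLs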

Master /etc/kubernetes/config configuration file:

cat>/etc/kubernetes/config<<EOF
# kubernetes  system config
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.111:8080"
EOF

This tells the controller-manager, scheduler, and proxy processes where to find the Kubernetes apiserver.

Master /etc/kubernetes/apiserver configuration file:

cat>/etc/kubernetes/apiserver<<EOF
# kubernetes system config
# The following values are used to configure the kube-apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.111:2379,http://192.168.0.112:2379,http://192.168.0.113:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
EOF
for i in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $i; systemctl enable $i; systemctl status $i; done

This starts the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services on the Kubernetes Master node and checks their status.
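
Once the services are up, the apiserver can report on the control-plane components; kubectl get componentstatuses (short form cs) was the standard check in this Kubernetes generation:

kubectl get cs   # scheduler, controller-manager, and the etcd members should all show Healthy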

4. Kubernetes Node1 installation and configuration

Install flannel, Docker, and Kubernetes on the Kubernetes Node1 node:

yum install kubernetes-node etcd docker flannel *rhsm* -y

Configure Node1's /etc/etcd/etcd.conf as follows:

cat>/etc/etcd/etcd.conf<<EOF
##########
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.112:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.112:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.112:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.111:2380,etcd2=http://192.168.0.112:2380,etcd3=http://192.168.0.113:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.112:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
mkdir -p /data/etcd/; chmod -R 757 /data/etcd/; service etcd restart

This configuration tells the flannel process where the etcd service lives and under which etcd key the network configuration is stored.

Node1 Kubernetes configuration, /etc/kubernetes/config:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.111:8080"
EOF

Configure /etc/kubernetes/kubelet as follows:

cat>/etc/kubernetes/kubelet<<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.112"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.111:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF
for I in etcd kube-proxy kubelet docker; do systemctl restart $I; systemctl enable $I; systemctl status $I; done
iptables -P FORWARD ACCEPT
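
The iptables -P FORWARD ACCEPT above is needed because newer Docker releases switch the FORWARD chain's default policy to DROP, which silently breaks pod-to-pod traffic; note that the policy does not survive a reboot. A minimal way to persist it, assuming you are not managing firewall rules by other means, is rc.local:

echo 'iptables -P FORWARD ACCEPT' >> /etc/rc.d/rc.local   # re-apply the policy at boot
chmod +x /etc/rc.d/rc.local                               # on CentOS 7, rc.local runs only if executable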

This starts the kube-proxy, kubelet, and docker processes on the Kubernetes Node and checks their status; flanneld is configured and restarted in step 6.

5. Kubernetes Node2 installation and configuration. Install flannel, Docker, and Kubernetes on the Node2 node:

yum install kubernetes-node etcd docker flannel *rhsm* -y

Configure Node2's /etc/etcd/etcd.conf as follows:

cat>/etc/etcd/etcd.conf<<EOF
##########
# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.113:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.113:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.113:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.111:2380,etcd2=http://192.168.0.112:2380,etcd3=http://192.168.0.113:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.113:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
mkdir -p /data/etcd/; chmod -R 757 /data/etcd/; service etcd restart

Node2 Kubernetes configuration, /etc/kubernetes/config:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.111:8080"
EOF

Configure /etc/kubernetes/kubelet as follows:

cat>/etc/kubernetes/kubelet<<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.113"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.111:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF
for I in etcd kube-proxy kubelet docker; do systemctl restart $I; systemctl enable $I; systemctl status $I; done
iptables -P FORWARD ACCEPT

At this point, running kubectl get nodes on the Master should list the two Node machines that have joined the cluster; the base Kubernetes environment is now complete.
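
For example (illustrative output; the exact columns depend on the installed version):

kubectl get nodes
# NAME            STATUS    AGE
# 192.168.0.112   Ready     2m
# 192.168.0.113   Ready     2m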

6. Kubernetes flanneld network configuration

Configure flanneld on every server in the cluster (Master and minions). /etc/sysconfig/flanneld is as follows:

cat>/etc/sysconfig/flanneld<<EOF
# Flanneld configuration options
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.0.111:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF
service flanneld restart

On the Master, test that the etcd cluster is healthy, and create the flannel network configuration in etcd, as sketched below.
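
The post does not show the exact commands here; a minimal sketch, assuming a 172.17.0.0/16 overlay range and the /atomic.io/network prefix configured above (flanneld will not start until this key exists), would be:

etcdctl -C http://192.168.0.111:2379 cluster-health                     # every member should be healthy
etcdctl set /atomic.io/network/config '{ "Network": "172.17.0.0/16" }'  # the key flanneld reads at startup
systemctl restart flanneld docker                                       # restart docker too, so it picks up the flannel subnet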

7. Kubernetes Dashboard UI

Kubernetes' most important job is the unified management and scheduling of a Docker container cluster. The cluster and its nodes are usually driven from the command line, which is inconvenient; a visual web UI makes management and maintenance much easier.

Import the two images below on the Node machines ahead of time. The complete process for setting up the Kubernetes dashboard follows.

1. Run docker load < pod-infrastructure.tgz, then retag the imported pod image:

docker tag $(docker images|grep none|awk '{print $3}') registry.access.redhat.com/rhel7/pod-infrastructure

2. Run docker load < kubernetes-dashboard-amd64.tgz, then retag the imported dashboard image:

docker tag $(docker images|grep none|awk '{print $3}') bestwu/kubernetes-dashboard-amd64:v1.6.3
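
To confirm the retags worked, the image list should now show both names:

docker images | grep -E 'pod-infrastructure|kubernetes-dashboard'   # both retagged images should appear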

Then, on the Master, create dashboard-controller.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: bestwu/kubernetes-dashboard-amd64:v1.6.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.0.111:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

Create dashboard-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Create the dashboard pods:

kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml 

After creation, inspect the Pods and Service in detail:

kubectl get namespace
kubectl get deployment --all-namespaces
kubectl get svc --all-namespaces
kubectl get pods --all-namespaces
kubectl get pod -o wide --all-namespaces
kubectl describe service/kubernetes-dashboard --namespace="kube-system"
kubectl describe pod/kubernetes-dashboard-468712587-754dc --namespace="kube-system"
kubectl delete pod/kubernetes-dashboard-468712587-754dc --namespace="kube-system" --grace-period=0 --force
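
Once the dashboard pod reports Running, it should be reachable through the apiserver's insecure port; in this Kubernetes generation the apiserver exposed a /ui redirect that proxies to the dashboard service (the exact path may vary by version):

curl -L http://192.168.0.111:8080/ui   # or open the URL in a browser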

If pulling the pod-infrastructure image fails because /etc/rhsm/ca/redhat-uep.pem is missing on the Node, the certificate can be extracted from the python-rhsm-certificates package:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
Note: rpm2cpio converts an rpm package into a cpio archive.
cpio creates and restores archives; it can copy files into an archive or extract files from one.
-i  extract files from the archive
-v  list files verbosely as they are processed

Reposted from: https://www.cnblogs.com/legenidongma/p/10713409.html
