1. Server description

The following Ubuntu 16.04 virtual machines are used (three masters and one worker):

172.16.100.238 master

172.16.100.239 master1

172.16.100.240 master2

172.16.100.241 worker

All operations are performed as the root user.

2. Install docker-ce, kubelet, kubeadm and kubectl (all nodes)

2.1 Disable swap and the firewall (so that every machine can reach every other machine on any port)

swapoff -a

To disable swap permanently, comment out the swap entry in /etc/fstab: edit the file with vi, add a # in front of the swap partition line, then save and exit.
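If you prefer to do this non-interactively, a minimal sed sketch (assuming the active swap entry has "swap" as a separate whitespace-delimited field and is not already commented out):

# comment out any active swap entry in /etc/fstab (assumes a standard fstab layout)
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab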

systemctl stop firewalld
systemctl disable firewalld
# check the status
systemctl status firewalld
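Note: Ubuntu 16.04 ships with ufw rather than firewalld, so the firewalld commands above may report that the unit does not exist. In that case, disabling ufw is a reasonable equivalent (assumption: ufw is the only firewall in use on these hosts):

# Ubuntu's default firewall front end is ufw; check it and disable it instead
ufw status
ufw disable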

2.2 Install docker-ce: add the Aliyun Docker repository

apt-get update

apt-get -y install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

apt-get -y update

2.3 List the Docker versions available in the repository

apt-cache madison docker-ce

2.4 Install Docker 18.06.3-ce

apt install docker-ce=18.06.3~ce~3-0~ubuntu

systemctl enable docker
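Optionally, you can hold the package so that a later apt-get upgrade does not move past this pinned Docker version (not part of the original steps, just a suggestion):

# prevent apt from upgrading docker-ce past the pinned version
apt-mark hold docker-ce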

2.5 Verify the Docker installation

docker version

2.6 If the installed Docker version is wrong, remove it and reinstall

apt autoremove docker-ce

2.7 Install the Kubernetes packages using the Aliyun repository

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

2.8 List the versions available in the repository

apt-cache madison kubelet

2.9 Install version 1.15.0

apt-get install kubelet=1.15.0-00 kubeadm=1.15.0-00 kubectl=1.15.0-00

systemctl enable kubelet && systemctl start kubelet
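As with Docker, it can help to hold these packages so that an unplanned apt upgrade does not bump the cluster components (optional, not in the original steps):

# keep kubelet/kubeadm/kubectl at 1.15.0-00
apt-mark hold kubelet kubeadm kubectl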

2.10 Change the Docker cgroup driver to systemd

docker info | grep Cgroup

mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Check the file
more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart Docker
systemctl daemon-reload
systemctl restart docker
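After the restart, confirm the change took effect; the output should now report "Cgroup Driver: systemd":

docker info | grep -i cgroup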

3. System settings (all master nodes)

3.1 Set the hostname

Each node must have a unique hostname, and all nodes must be able to reach each other by hostname.

# Check the current hostname
hostname

# Set the hostname (master, master1, master2)

hostnamectl set-hostname <your_hostname>

# Add host entries so that all nodes can reach each other by hostname (a quick resolution check follows the list below)

vim /etc/hosts

172.16.100.238 master

172.16.100.239 master1

172.16.100.240 master2

172.16.100.250 vip    # virtual IP (VIP)
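A quick way to confirm name resolution from each node (any hostname from the list above will do):

ping -c 1 master
ping -c 1 master1
ping -c 1 master2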

4. Install keepalived (master nodes: one MASTER, two BACKUPs)

apt-get install -y keepalived

4.1 Create the keepalived configuration file (on all three master nodes)

mkdir -p /etc/keepalived

master configuration file:

root@ubuntu:/etc/keepalived# cat keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id keepalive-master
}

vrrp_instance VI-kube-master {
    state MASTER
    interface ens3              # NIC to bind to
    virtual_router_id 68
    priority 100
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        172.16.100.250          # virtual IP
    }
}

master1 configuration file:

root@ubuntu:/etc/keepalived# cat keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id keepalive-backup01
}

vrrp_instance VI-kube-master {
    state BACKUP
    interface ens3
    virtual_router_id 68
    priority 90
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        172.16.100.250
    }
}

master2 configuration file:

root@ubuntu:/etc/keepalived# cat keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id keepalive-backup02
}

vrrp_instance VI-kube-master {
    state BACKUP
    interface ens3
    virtual_router_id 68
    priority 80
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        172.16.100.250
    }
}

4.2 Start keepalived (on all three master nodes)

systemctl enable keepalived

systemctl start keepalived

# Check the status
service keepalived status

# Follow the logs
journalctl -f -u keepalived

# Check for the virtual IP
ip a
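The VIP 172.16.100.250 should appear on ens3 of exactly one node, the current MASTER (172.16.100.238, which has the highest priority of 100). As a rough failover check, you can stop keepalived on master and confirm the address moves to master1, then restore the original state:

# on master
systemctl stop keepalived
# on master1, the VIP should now be listed
ip a | grep 172.16.100.250
# restore the original state afterwards (on master)
systemctl start keepalived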

5. Install haproxy (on all three master nodes)

apt-get install -y haproxy

5.1 Write the configuration file /etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                tcp
    log                 global
    retries             3
    timeout connect     10s
    timeout client      1m
    timeout server      1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  172.16.100.238:6443 check maxconn 2000
    server master1 172.16.100.239:6443 check maxconn 2000
    server master2 172.16.100.240:6443 check maxconn 2000

5.2 Start haproxy and check its status

systemctl enable haproxy

systemctl start haproxy

systemctl status haproxy   # or: service haproxy status

# Follow the logs
journalctl -f -u haproxy
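To confirm haproxy is actually listening on the load-balancer port, check for the bound socket. Note that the backend servers will stay DOWN in the health checks until the kube-apiserver is started in section 6, so only the listener can be verified at this point:

# haproxy should be bound to *:6443
ss -lntp | grep 6443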

6. Deploy the first master node

6.1 Write the kubeadm-config.yaml configuration file

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "172.16.100.250:6443"
networking:
  # CNI provider.
  podSubnet: "10.244.0.0/16"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
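Optionally, the control-plane images can be pre-pulled from the Aliyun mirror before initialization so the next step runs faster; this is not required, just a convenience kubeadm offers:

# pre-pull the images referenced by the config (optional)
kubeadm config images pull --config kubeadm-config.yaml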

6.2 Initialize the cluster:

kubeadm init --config=kubeadm-config.yaml

The command returns:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

kubeadm join 172.16.100.250:6443 --token 3m5ijz.5sxlq1ls9c29551x \

--discovery-token-ca-cert-hash sha256:7dd5ab3ae17ac88dfe65e619b4adc6aae9c9b41ed9c6336df04c4f4c5080af02 \

--experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.100.250:6443 --token 3m5ijz.5sxlq1ls9c29551x \

--discovery-token-ca-cert-hash sha256:7dd5ab3ae17ac88dfe65e619b4adc6aae9c9b41ed9c6336df04c4f4c5080af02

6.3 Configure kubectl (following the hints from the previous step)

As the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf

As a non-root user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

6.4 Test kubectl

kubectl get pods --all-namespaces

If a pod's status is not Running, inspect it with kubectl describe pod <pod-name> -n kube-system. Here two pods are stuck in Pending, so check the details of one of them:

kubectl describe pod coredns-6967fb4995-7mmpn -n kube-system

0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Fix: remove the taint from the master node

kubectl taint nodes --all node-role.kubernetes.io/master-

Check the details of the master node

kubectl describe node master

NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The network add-on has not been installed yet.

6.5 Install the flannel network add-on

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
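The kube-flannel.yml manifest defaults to the 10.244.0.0/16 pod network, which is why podSubnet was set to that value in kubeadm-config.yaml; if you change one, change the other to match.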

Wait a moment, then check the pod status again

kubectl get pods --all-namespaces

All pods are now in a normal state.

Check the master node status

kubectl get node -o wide

The master node's status has changed to Ready; the first master node is now fully set up.

7. The other master nodes

7.1 Copy the certificates from the first master node to the other two master nodes

# Copy the PKI certificates (run on the master1 node)
mkdir -p /etc/kubernetes/pki
scp -r root@172.16.100.238:/etc/kubernetes/pki /etc/kubernetes

scp root@172.16.100.238:/etc/kubernetes/admin.conf /etc/kubernetes/

7.2 Join master1 to the cluster

Join the cluster with the join command saved earlier:

kubeadm join 172.16.100.250:6443 --token 3m5ijz.5sxlq1ls9c29551x \

--discovery-token-ca-cert-hash sha256:7dd5ab3ae17ac88dfe65e619b4adc6aae9c9b41ed9c6336df04c4f4c5080af02 \

--experimental-control-plane

This fails with the error:

error execution phase control-plane-prepare/certs: error creating PKI assets: failed to write or validate certificate "apiserver": certificate apiserver is invalid: x509: certificate is valid for master, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not master1

master1 is reusing master's certificates; kubeadm needs to generate new certificate files for master1. Keep only the shared certificates listed below, delete all the other certificates that were copied over, and copy the shared files from master again:

scp root@172.16.100.238:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/

scp root@172.16.100.238:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/

scp root@172.16.100.238:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/

scp root@172.16.100.238:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/

scp root@172.16.100.238:/etc/kubernetes/admin.conf /etc/kubernetes/

Run the join command again:

kubeadm join 172.16.100.250:6443 --token 3m5ijz.5sxlq1ls9c29551x \

--discovery-token-ca-cert-hash sha256:7dd5ab3ae17ac88dfe65e619b4adc6aae9c9b41ed9c6336df04c4f4c5080af02 \

--experimental-control-plane

The command returns:

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.

* The Kubelet was informed of the new secure connection details.

* Control plane (master) label and taint were applied to the new node.

* The Kubernetes control plane instances scaled up.

* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Configure kubectl:

export KUBECONFIG=/etc/kubernetes/admin.conf

7.3 Check the cluster state again

kubectl get nodes -o wide

7.4 Copy the certificates from master to the master2 node

mkdir -p /etc/kubernetes/pki/etcd

scp root@172.16.100.238:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/

scp root@172.16.100.238:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/

scp root@172.16.100.238:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/

scp root@172.16.100.238:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/

scp root@172.16.100.238:/etc/kubernetes/admin.conf /etc/kubernetes/

Join the cluster:

kubeadm join 172.16.100.250:6443 --token 3m5ijz.5sxlq1ls9c29551x \

--discovery-token-ca-cert-hash sha256:7dd5ab3ae17ac88dfe65e619b4adc6aae9c9b41ed9c6336df04c4f4c5080af02 \

--experimental-control-plane

The command returns:

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.

* The Kubelet was informed of the new secure connection details.

* Control plane (master) label and taint were applied to the new node.

* The Kubernetes control plane instances scaled up.

* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Configure kubectl:

export KUBECONFIG=/etc/kubernetes/admin.conf

Check the cluster nodes:

kubectl get nodes -o wide

7.5 Join the worker node to the cluster

kubeadm join 172.16.100.250:6443 --token 3m5ijz.5sxlq1ls9c29551x \

--discovery-token-ca-cert-hash sha256:7dd5ab3ae17ac88dfe65e619b4adc6aae9c9b41ed9c6336df04c4f4c5080af02

Configure kubectl:

scp root@172.16.100.238:/etc/kubernetes/admin.conf /etc/kubernetes/

export KUBECONFIG=/etc/kubernetes/admin.conf

# Wait a moment and watch the logs
journalctl -f

# Check the cluster state
# 1. Check the nodes
kubectl get nodes

# 2. Check the pods
kubectl get pods --all-namespaces
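At this point kubectl get nodes should list all four machines (master, master1, master2 and the worker) in the Ready state, and every pod in kube-system should be Running.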

Installation complete.
