Table of Contents

  • K8S Binary Installation and Deployment
    • kubernetes
      • Plugins deployed on master nodes:
      • Plugins deployed on node nodes:
        • Optimize nodes and install Docker
        • k8s architecture diagram
    • 1. Node Planning
      • 1. Plugin planning reference
      • 2. Environment preparation
    • 2. System Optimization
    • 3. Install Docker
      • 1. Install from the Huawei mirror -- method 1
      • 2. Install Docker from the Alibaba Cloud mirror -- method 2
      • 3. Docker installation script -- method 3
      • 4. Extra: uninstalling Docker
    • 4. Generate and Issue Cluster Certificates (run on master01)
      • 1. Prepare the certificate generation tools
      • 2. Create the CA config file
      • 3. Create the root certificate signing request
      • Certificate fields explained
      • 4. Generate the root certificate
      • Parameters explained
    • 5. Deploy the ETCD Cluster
      • 1. Node planning
      • 2. Create the ETCD cluster certificate
      • Configuration options explained
        • 3. Generate the ETCD certificate
      • Parameters explained
        • 4. Distribute the ETCD certificates
      • 5. Deploy ETCD
      • 6. Register the ETCD service (on all three master nodes)
      • 7. Test the ETCD service
    • 6. Deploy the Master Nodes
      • 1. Cluster planning
    • I. Create certificates
      • 1. Create the cluster CA certificate
      • 1) Create the cluster CA config
      • 2) Create the root certificate signing request
      • 3) Generate the root certificate
      • 2. Create the cluster component certificates
      • 1) Create the kube-apiserver certificate
        • 1> Create the certificate signing config
        • 2> Generate the certificate
      • 2) Create the controller-manager certificate
        • 1> Create the certificate signing config
      • 2> Generate the certificate
      • 3) Create the kube-scheduler certificate
        • 1> Create the certificate signing config
        • 2> Generate the certificate
      • 4) Create the kube-proxy certificate
        • 1> Create the certificate signing config
        • 2> Generate the certificate
      • 5) Create the cluster administrator certificate
        • 1> Create the certificate signing config
        • 2> Generate the certificate
      • 6) Issue the certificates
        • 1> Issue certificates to the other master nodes (master02, master03)
    • 7. Download Packages and Write Configuration Files
      • 1. Download and distribute the components
        • 1) Download the packages (master01 node)
        • Installing kubernetes-server from github.com
      • 2) Create the cluster configuration files
        • 1> Create kube-controller-manager.kubeconfig (master01 node)
      • 2> Create kube-scheduler.kubeconfig (master01 node)
      • 3> Create the kube-proxy.kubeconfig cluster configuration file (master01 node)
      • 4> Create the super administrator's cluster configuration file (master01 node)
      • 5> Distribute the cluster configuration files (master01 node)
      • 6> Create the cluster token (master01 node)
    • 8. Deploy Each Component
      • 1) Deploy kube-apiserver (all master nodes)
      • 2) Make kube-apiserver highly available
        • 1> Install the HA software (all master nodes)
        • 2> Edit the keepalived configuration file (all master nodes)
        • 3> Edit the haproxy configuration file (all master nodes)
      • 3) Deploy TLS
        • 1> Create the cluster configuration file (master01 node)
        • 2> Issue the certificate (master01 node)
        • 3> Create the low-privilege TLS user (master01 node)
      • 4) Deploy controller-manager
        • 1> Edit the configuration file (all master nodes)
        • 2> Register the service (all master nodes)
        • 3> Start the service
      • 5) Deploy kube-scheduler
        • 1> Write the configuration file (all master nodes)
        • 2> Register the service (all master nodes)
        • 3> Start it (all master nodes)
      • 6) Check the cluster status
      • 7) Deploy the kubelet service
        • 1> Create the kubelet service configuration file (all master nodes)
        • 2> Create kubelet-config.yaml (all master nodes)
        • 3> Register the kubelet service (all master nodes)
        • 4> Start it (all master nodes)
      • 8) Deploy kube-proxy
        • 1> Create the configuration file (all master nodes)
        • 2> Create kube-proxy-config.yml (all master nodes)
        • 3> Register the service (all master nodes)
        • 4> Start it (all master nodes)
      • 9) Join cluster nodes
        • 1> Check pending node join requests (master01 node)
        • 2> Approve the joins (master01 node)
      • 10) Install the network plugin
        • 1> Download and install the flannel package (master01 node)
        • 2> Write the flannel configuration into the cluster database (master01 node)
        • 3> Register the flannel service (all master nodes)
        • 4> Edit the Docker startup file (all master nodes)
        • 5> Start it (all master nodes)
      • 11) Verify the cluster network
      • 12) Install the cluster DNS (master01 node)
      • 13) Verify the cluster
    • 9. Deploy the Node Nodes
      • 1. Node planning
      • 2. Cluster optimization
      • 3. Distribute the packages
      • 4. Distribute the certificates
      • 5. Distribute the configuration files
      • 6. Deploy kubelet
      • 7. Deploy kube-proxy
      • 8. Join the cluster (master01 node)
      • 9. Set the cluster roles (master01 node)
    • 10. Install the Cluster Dashboard
      • 1. Install the dashboard
      • 2. Deploy the nginx service
K8S Binary Installation and Deployment

kubernetes

What is the relationship between k8s and Docker?

Docker is a container runtime; k8s is a container management (orchestration) platform.

Cluster roles: master node: manages the cluster
node node:   mainly used to run applications

Plugins deployed on master nodes:

kube-apiserver :          # central manager, schedules and manages the cluster
kube-controller-manager : # controller: manages and monitors containers
kube-scheduler:           # scheduler: schedules containers
flannel:                  # provides the inter-node cluster network
etcd:                     # database
====================================================================================================
kubelet :                 # runs containers, monitors containers (only on its own host)
kube-proxy :              # provides networking between containers

Plugins deployed on node nodes:

kubelet :    # runs containers, monitors containers (only on its own host)
kube-proxy : # provides networking between containers

Optimize nodes and install Docker
  • The following operations are executed on all nodes
  • Prepare the machines and configure the node plan
    • Add the following entries to the hosts file on all nodes
k8s architecture diagram

In the architecture diagram, services are divided into those running on worker nodes and those forming the cluster-level control plane.
Kubernetes consists of the following core components:
1. etcd stores the state of the entire cluster
2. apiserver is the single entry point for resources and provides authentication, authorization, access control, API registration and discovery
3. controller manager maintains the cluster state, e.g. fault detection, auto scaling, rolling updates
4. scheduler handles resource scheduling and places Pods onto machines according to the configured scheduling policies
5. kubelet maintains the container lifecycle and also manages Volumes (CVI) and networking (CNI)
6. Container runtime manages images and actually runs Pods and containers (CRI)
7. kube-proxy provides in-cluster service discovery and load balancing for Services
Besides the core components, there are some recommended add-ons:
8. kube-dns provides DNS for the whole cluster
9. Ingress Controller provides an external entry point for services
10. Heapster provides resource monitoring
11. Dashboard provides a GUI
12. Federation provides clusters across availability zones
13. Fluentd-elasticsearch provides cluster log collection, storage and query

1. Node Planning

Hostname IP Alias
k8s-m-01 192.168.15.51 m1
k8s-m-02 192.168.15.52 m2
k8s-m-03 192.168.15.53 m3
k8s-n-01 192.168.15.54 n1
k8s-n-02 192.168.15.55 n2

1. Plugin planning reference

# Master node plan
kube-apiserver
kube-controller-manager
kube-scheduler
flannel
etcd
kubelet
kube-proxy

# Node node plan
kubelet
kube-proxy

2. Environment preparation

# 1. Run on all machines: set the hostname and IP
[root@k8s-m-01 ~]# vim base.sh
#!/bin/bash
# 1. Change the hostname and the NIC configuration
hostnamectl set-hostname $1 &&\
sed -i "s#111#$2#g" /etc/sysconfig/network-scripts/ifcfg-eth[01] &&\
systemctl restart network &&\
# 2. Update the local hosts file
cat >>/etc/hosts <<EOF
172.16.1.51 k8s-m-01 m1
172.16.1.52 k8s-m-02 m2
172.16.1.53 k8s-m-03 m3
172.16.1.54 k8s-n-01 n1
172.16.1.55 k8s-n-02 n2
# virtual VIP
172.16.1.56 k8s-m-vip vip
EOF
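
For reference, a hypothetical invocation of this script on the first master (the script takes the new hostname as $1 and the host part of the IP as $2, which replaces the 111 placeholder in the ifcfg files):

# set hostname k8s-m-01 and rewrite the NIC IP placeholder to 51
bash base.sh k8s-m-01 51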

2. System Optimization

  • The following operations are executed on all nodes (if the system is already up to date, this can be skipped)

Optimization part 1: base optimization
[root@k8s-m-01 ~]# vim base.sh
#!/bin/bash
# 1. Change the hostname and the NIC configuration
# hostnamectl set-hostname $1 &&\
# sed -i "s#111#$2#g" /etc/sysconfig/network-scripts/ifcfg-eth[01] &&\
# systemctl restart network &&\
# 2. Disable selinux, the firewall, and DNS lookups for ssh
setenforce 0 &&\
sed -i 's#enforcing#disabled#g' /etc/selinux/config &&\
systemctl disable --now firewalld &&\
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config &&\
systemctl restart sshd &&\
# 3. Disable the swap partition
# Once swap is used, system performance drops sharply, so K8S normally requires swap to be off
# cat /etc/fstab
# Comment out the swap line at the end; if no swap is installed this is unnecessary
swapoff -a &&\
# tell kubelet to ignore swap
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet &&\
# 4. Update the local hosts file
# cat >>/etc/hosts <<EOF
# 172.16.1.51 k8s-m-01 m1
# 172.16.1.52 k8s-m-02 m2
# 172.16.1.53 k8s-m-03 m3
# 172.16.1.54 k8s-n-01 n1
# 172.16.1.55 k8s-n-02 n2
# # virtual VIP
# 172.16.1.56 k8s-m-vip vip
# EOF
# 5. Configure the yum mirrors (domestic mirrors)
# By default CentOS uses the official yum repos, which are usually very slow inside China, so we replace them with mature domestic mirrors such as Tsinghua or NetEase
rm -rf /etc/yum.repos.d/* &&\
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo &&\
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &&\
yum clean all &&\
yum makecache &&\
# 6. Update the system
# Check the kernel version first; if the kernel is above 4.0 the --exclude option can be dropped
yum update -y --exclude=kernel* &&\
# Docker needs fairly new kernel features such as ipvs, so in general a 4.x kernel is required (4.18+ recommended); on CentOS 8 the kernel update is not needed
# 7. Install common base packages for day-to-day use
yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp ntpdate -y &&\
# 8. Update the system kernel
# If this is CentOS 8, skip the kernel upgrade
cd /opt/ &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.127-1.el7.elrepo.x86_64.rpm &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.127-1.el7.elrepo.x86_64.rpm &&\
# see https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
# 9. Install the downloaded kernel
yum localinstall /opt/kernel-lt* -y &&\
# 10. Make it the default boot kernel
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg &&\
# 11. Check the current default boot kernel
grubby --default-kernel &&\
reboot
# After installation the kernel is 5.4
=============================================================================================
Optimization part 2: passwordless ssh
# 1. Passwordless ssh
[root@k8s-master-01 ~]# ssh-keygen -t rsa
[root@k8s-master-01 ~]# for i in m1 m2 m3;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i;done
# 2. Cluster time sync: crontab -e
# refresh every minute
*/1 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null
==============================================================================================
Optimization part 3: install IPVS and tune kernel parameters
# 1. Install IPVS and load the IPVS modules (all nodes)
# ipvs is a kernel module with very high forwarding performance; it is usually the first choice
[root@k8s-n-01 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
# 2. Set permissions and load the modules (all nodes)
[root@k8s-n-01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
# 3. Kernel parameter tuning (all nodes)
# Load the IPVS modules and apply the configuration
# The main goal of the kernel tuning is to make the system better suited to running kubernetes
[root@k8s-n-01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1   # these two can be modified directly
net.bridge.bridge-nf-call-ip6tables = 1  # these two can be modified directly
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
# apply immediately
sysctl --system
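
A quick sanity check after the reboot (a minimal sketch; it only re-reads what the steps above configured) to confirm the modules loaded and the parameters took effect:

# the ip_vs and nf_conntrack modules should be listed
lsmod | egrep 'ip_vs|nf_conntrack'
# spot-check a couple of the values from /etc/sysctl.d/k8s.conf
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables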

3. Install Docker

  • The following operations are executed on all nodes

1. Install from the Huawei mirror -- method 1

Docker is one of the common container runtimes managed by k8s
# 1. CentOS 7
# Go to the Huawei mirror site, pick the docker repo, and install step by step as below
# 1. If Docker was installed before, remove it first, then install the dependencies:
sudo yum remove docker docker-common docker-selinux docker-engine &&\
sudo yum install -y yum-utils device-mapper-persistent-data lvm2   &&\
# 2. Install the docker repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo &&\
# 3. Replace the repository address
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# 4. Regenerate the repo cache
yum clean all &&\
yum makecache  &&\
# 5. Install docker
sudo yum makecache fast &&\
sudo yum install docker-ce -y &&\
# 6. Enable docker on boot
systemctl enable --now docker.service
# 7. Configure a docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
# If on CentOS 7 the steps above fail with:
GPG key retrieval failed: [Errno 14] curl#6 - "Could not resolve host: download.docker.com; Unknown error"
# Cause: the host cannot be resolved
# Fix 1: temporarily add a public DNS server to the resolver config (temporary; lost on NIC restart)
[root@k8s-master1 ~] cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114
nameserver 223.5.5.5
# Fix 2: add it to the eth0 NIC config file (permanent; requires a NIC restart)
DNS1=114.114.114.114

# 2. CentOS 8
1. Download the rpm package
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
2. Install it
yum install containerd.io-1.2.13-3.2.el7.x86_64.rpm -y
3. Install the extras
yum install -y yum-utils device-mapper-persistent-data lvm2
4. Configure the repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5. Install docker
yum install docker-ce -y
6. Start docker
systemctl enable --now docker.service
7. Configure a docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
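
To apply and verify the registry mirror (a minimal check; docker must already be running):

sudo systemctl restart docker
# the configured mirror should appear under "Registry Mirrors"
docker info | grep -A1 -i 'registry mirrors'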

2. Install Docker from the Alibaba Cloud mirror -- method 2

# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: replace the repository address
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker-CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
systemctl enable --now docker.service
# Step 6: configure a docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
# Note:
# The official repo only enables the latest stable packages by default; other channels can be enabled by editing the repo file. For example, the test channel is disabled by default and can be enabled as follows (the same applies to the other test channels):
# vim /etc/yum.repos.d/docker-ce.repo
#   change enabled=0 to enabled=1 under [docker-ce-test]
#
# Installing a specific version of Docker-CE:
# Step 1: list the available Docker-CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION as listed above, e.g. 17.03.0.ce.1-1.el7.centos)
# sudo yum -y install docker-ce-[VERSION]

3. Docker installation script -- method 3

Method 1: Huawei mirror
[root@k8s-m-01 ~]# vim docker.sh
# 1. Remove any previously installed docker
sudo yum remove docker docker-common docker-selinux docker-engine &&\
sudo yum install -y yum-utils device-mapper-persistent-data lvm2   &&\
# 2. Install the docker repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo &&\
# 3. Replace the repository address
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# 4. Regenerate the repo cache
yum clean all &&\
yum makecache  &&\
# 5. Install docker
sudo yum makecache fast &&\
sudo yum install docker-ce -y &&\
# 6. Enable docker on boot
systemctl enable --now docker.service
# 7. Create the docker directory and configure the registry mirror (all nodes) ------ run separately to speed up docker pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF

Method 2: Alibaba Cloud
[root@k8s-n-01 ~]# vim docker.sh
# step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&\
# Step 3: replace the repository address
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# Step 4: refresh the cache and install Docker-CE
sudo yum makecache fast &&\
sudo yum -y install docker-ce &&\
# Step 5: start the Docker service
systemctl enable --now docker.service &&\
# Step 6: configure a docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF

4. Extra: uninstalling Docker

# 1. Remove the old version
sudo yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine
# 2. Remove the dependencies
yum remove docker-ce docker-ce-cli containerd.io -y
# 3. Delete the data directory
rm -rf /var/lib/docker   # docker's default working path
# 4. Registry mirror (docker tuning):
#    - log in to Alibaba Cloud and open the Container Registry service
#    - find your mirror accelerator address
#    - configure and use it

4. Generate and Issue Cluster Certificates (run on master01)

  • The following commands only need to be executed on master01
Kubernetes has many components that talk to each other over HTTP/gRPC to cooperatively deploy and manage applications in the cluster.

1. Prepare the certificate generation tools

# cfssl is a certificate generation tool: the CA setup, validity periods, and so on are written into json files, which makes issuing certificates more efficient and automatable.
# cfssl is an open-source certificate management tool written in Go; cfssljson takes the json output from cfssl and writes the certificate, key, csr, and bundle to files.
# 1. Install the certificate generation tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 &&\
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# 2. Make them executable
chmod +x cfssljson_linux-amd64 &&\
chmod +x cfssl_linux-amd64
# or
chmod +x cfssl*
# 3. Move them to /usr/local/bin
mv cfssljson_linux-amd64 cfssljson  &&\
mv cfssl_linux-amd64 cfssl &&\
mv cfssljson cfssl /usr/local/bin

2. Create the CA config file

# Root certificate: the basis of the trust relationship between the CA and its users; a user's certificate is only valid if it chains to a trusted root certificate.
# A certificate contains three parts: user information, the user's public key, and the certificate signature. The CA handles approval, issuance, archival, and revocation of certificates; certificates issued by the CA carry the CA's signature, so nobody except the CA itself can tamper with them undetected.
mkdir -p /opt/cert/ca

cat > /opt/cert/ca/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
# Details:
#   default: the default policy: valid for 1 year
#   profiles: usage-specific profiles with their own expiry, usage scenarios, and other parameters
#   signing: the certificate can be used to sign other certificates (this is the generated ca.pem)
#   server auth: a client can use this CA to verify certificates presented by a server
#   client auth: a server can use this CA to verify certificates presented by a client

3. Create the root certificate signing request

cat > /opt/cert/ca/ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names":[{
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing"
  }]
}
EOF

Certificate fields explained

Field Meaning
C     country
ST    state/province
L     city
O     organization
OU    organizational unit

4. Generate the root certificate

[root@k8s-m-01 ca]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/06/29 08:49:16 [INFO] generating a new CA key and certificate from CSR
2021/06/29 08:49:16 [INFO] generate received request
2021/06/29 08:49:16 [INFO] received CSR
2021/06/29 08:49:16 [INFO] generating key: rsa-2048
2021/06/29 08:49:17 [INFO] encoded CSR
2021/06/29 08:49:17 [INFO] signed certificate with serial number 137666249701104309463931206360792420984700751682
[root@k8s-m-01 ca]# ll
total 20
-rw-r--r-- 1 root root  285 Jun 29 08:48 ca-config.json
-rw-r--r-- 1 root root  960 Jun 29 08:49 ca.csr
-rw-r--r-- 1 root root  153 Jun 29 08:48 ca-csr.json
-rw------- 1 root root 1679 Jun 29 08:49 ca-key.pem
-rw-r--r-- 1 root root 1281 Jun 29 08:49 ca.pem

Parameters explained

Parameter Meaning
gencert   generate a new key and signed certificate
--initca  initialize a new CA certificate
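
To double-check what was just issued, cfssl can print the certificate back (a minimal sketch using the certinfo subcommand of the same toolchain installed above):

# show the subject, issuer, and validity window of the new CA certificate
cfssl certinfo -cert /opt/cert/ca/ca.pem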

5. Deploy the ETCD Cluster

1. ETCD must be highly available

2. These steps run on one master node

ETCD is a Raft-based distributed key-value store, commonly used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on). Kubernetes uses etcd for state and data storage!

1. Node planning

IP          Hostname Alias
172.16.1.51 k8s-m-01 m1
172.16.1.52 k8s-m-02 m2
172.16.1.53 k8s-m-03 m3

2. Create the ETCD cluster certificate

First add etcd aliases to the hosts file:
[root@k8s-m-01 ca]# vim /etc/hosts
172.16.1.51 k8s-m-01 m1 etcd-1
172.16.1.52 k8s-m-02 m2 etcd-2
172.16.1.53 k8s-m-03 m3 etcd-3
172.16.1.54 k8s-n-01 n1
172.16.1.55 k8s-n-02 n2
# virtual VIP
172.16.1.56 k8s-m-vip vip
# Note: the etcd aliases are optional, since etcd runs on the master nodes anyway

Then create the certificate signing request:

mkdir -p /opt/cert/etcd
cd /opt/cert/etcd

cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.16.1.51",
    "172.16.1.52",
    "172.16.1.53",
    "172.16.1.54",
    "172.16.1.55",
    "172.16.1.56"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "BeiJing",
    "L": "BeiJing"
  }]
}
EOF

Configuration options explained

Option Meaning
name                        node name
data-dir                    the node's data storage directory
listen-peer-urls            address for communication with the other cluster members
listen-client-urls          local listening address serving clients
initial-advertise-peer-urls local peer URL advertised to the other cluster members
advertise-client-urls       client URL advertised to the rest of the cluster
initial-cluster             all member nodes of the cluster
initial-cluster-token       cluster token, identical across the whole cluster
initial-cluster-state       initial cluster state, "new" by default
--cert-file                 path to the client/server TLS certificate file
--key-file                  path to the client/server TLS key file
--peer-cert-file            path to the peer TLS certificate file
--peer-key-file             path to the peer TLS key file
--trusted-ca-file           CA certificate that signed the client certificates, used to verify them
--peer-trusted-ca-file      CA certificate that signed the peer certificates
3. Generate the ETCD certificate
# 1. Generate the etcd certificate
[root@k8s-m-01 etcd]# cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2021/06/29 08:59:41 [INFO] generate received request
2021/06/29 08:59:41 [INFO] received CSR
2021/06/29 08:59:41 [INFO] generating key: rsa-2048
2021/06/29 08:59:42 [INFO] encoded CSR
2021/06/29 08:59:42 [INFO] signed certificate with serial number 495333324552725195895077036503159161152536226206
2021/06/29 08:59:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# 2. Check the etcd certificate
[root@k8s-m-01 etcd]# ll
total 16
-rw-r--r-- 1 root root 1050 Jun 29 08:59 etcd.csr
-rw-r--r-- 1 root root  382 Jun 29 08:58 etcd-csr.json
-rw------- 1 root root 1675 Jun 29 08:59 etcd-key.pem
-rw-r--r-- 1 root root 1379 Jun 29 08:59 etcd.pem

Parameters explained

Parameter Meaning
gencert  generate a new key and signed certificate
-initca  initialize a new CA
-ca      the CA certificate file
-ca-key  the CA private key file
-config  the json file with the signing configuration
-profile the profile section inside -config used to generate the certificate
4. Distribute the ETCD certificates
# 1. Distribute the certificates
[root@k8s-m-01 /opt/cert/etcd]# for ip in m{1..3};do
ssh root@${ip} "mkdir -pv /etc/etcd/ssl"
scp ../ca/ca*.pem  root@${ip}:/etc/etcd/ssl
scp ./etcd*.pem  root@${ip}:/etc/etcd/ssl
done
mkdir: created directory ‘/etc/etcd’
mkdir: created directory ‘/etc/etcd/ssl’
ca-key.pem                                                      100% 1675   299.2KB/s   00:00
ca.pem                                                          100% 1281   232.3KB/s   00:00
etcd-key.pem                                                    100% 1675     1.4MB/s   00:00
etcd.pem                                                        100% 1379   991.0KB/s   00:00
mkdir: created directory ‘/etc/etcd’
mkdir: created directory ‘/etc/etcd/ssl’
ca-key.pem                                                      100% 1675     1.1MB/s   00:00
ca.pem                                                          100% 1281   650.8KB/s   00:00
etcd-key.pem                                                    100% 1675   507.7KB/s   00:00
etcd.pem                                                        100% 1379   166.7KB/s   00:00
mkdir: created directory ‘/etc/etcd’
mkdir: created directory ‘/etc/etcd/ssl’
ca-key.pem                                                      100% 1675   109.1KB/s   00:00
ca.pem                                                          100% 1281   252.9KB/s   00:00
etcd-key.pem                                                    100% 1675   121.0KB/s   00:00
etcd.pem                                                        100% 1379   180.4KB/s   00:00
# 2. Check the distributed certificates
[root@k8s-m-01 cert]# ll /etc/etcd/ssl/
total 16
-rw------- 1 root root 1679 Jun 29 09:13 ca-key.pem
-rw-r--r-- 1 root root 1281 Jun 29 09:13 ca.pem
-rw------- 1 root root 1675 Jun 29 09:13 etcd-key.pem
-rw-r--r-- 1 root root 1379 Jun 29 09:13 etcd.pem

5. Deploy ETCD

# 1. Download the ETCD package
[root@k8s-m-01 ssl]# cd /opt/
[root@k8s-m-01 opt]# wget https://mirrors.huaweicloud.com/etcd/v3.3.24/etcd-v3.3.24-linux-amd64.tar.gz
# 2. Unpack it
[root@k8s-m-01 opt]# tar xf etcd-v3.3.24-linux-amd64.tar.gz
# 3. Distribute it to the other nodes
[root@k8s-m-01 opt]# for i in m1 m2 m3;do scp ./etcd-v3.3.24-linux-amd64/etcd* root@$i:/usr/local/bin/;done
# 4. Check the ETCD version on each of the three nodes (verifies the distribution)
[root@k8s-m-01 opt]# etcd --version
etcd Version: 3.3.24
Git SHA: bdd57848d
Go Version: go1.12.17
Go OS/Arch: linux/amd64

6. Register the ETCD service (on all three master nodes)

  • Execute on all three master nodes at the same time
  • The hostname and IP variables let the same script register etcd on each master node
[root@k8s-m-01 ~]# vim etcd.sh
# 1. Create the config directory
mkdir -pv /etc/kubernetes/conf/etcd &&\
# 2. Define variables
ETCD_NAME=`hostname`
INTERNAL_IP=`hostname -i`
INITIAL_CLUSTER=k8s-m-01=https://172.16.1.51:2380,k8s-m-02=https://172.16.1.52:2380,k8s-m-03=https://172.16.1.53:2380
# 3. Write the unit file
cat << EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster \\
  --initial-cluster ${INITIAL_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
# 4. Reload systemd
systemctl daemon-reload
# 5. Start the ETCD service
systemctl enable --now etcd.service
# 6. Verify the ETCD service
systemctl status etcd.service

7. Test the ETCD service

  • Run on one master node (e.g. master01)
# Method one:
1. The command
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379" \
endpoint status --write-out='table'
2. The result
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://172.16.1.51:2379 |  2760f98de9dc762 |  3.3.24 |   20 kB |      true |        55 |          9 |
| https://172.16.1.52:2379 | 18273711b3029818 |  3.3.24 |   20 kB |     false |        55 |          9 |
| https://172.16.1.53:2379 | f42951486b449d48 |  3.3.24 |   20 kB |     false |        55 |          9 |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
# Method two:
1. The command
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379" \
member list --write-out='table'
2. The result
+------------------+---------+----------+--------------------------+--------------------------+
|        ID        | STATUS  |   NAME   |        PEER ADDRS        |       CLIENT ADDRS       |
+------------------+---------+----------+--------------------------+--------------------------+
|  2760f98de9dc762 | started | k8s-m-01 | https://172.16.1.51:2380 | https://172.16.1.51:2379 |
| 18273711b3029818 | started | k8s-m-02 | https://172.16.1.52:2380 | https://172.16.1.52:2379 |
| f42951486b449d48 | started | k8s-m-03 | https://172.16.1.53:2380 | https://172.16.1.53:2379 |
+------------------+---------+----------+--------------------------+--------------------------+
# Note:
If the test fails with an error, clear the cached data and restart:
[root@k8s-m-01 ~]# cd /var/lib/etcd/
[root@k8s-m-01 etcd]# ll
total 0
drwx------ 4 root root 29 Mar 29 20:37 member
[root@k8s-m-01 etcd]# rm -rf member/  # delete the cached data
[root@k8s-m-01 etcd]#
[root@k8s-m-01 etcd]# systemctl start etcd
[root@k8s-m-01 etcd]# systemctl status etcd
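
As a further sanity check, you can write a test key and read it back through any member (a minimal sketch with the same flags as above; the key name foo is arbitrary):

ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.1.51:2379" \
put foo bar
# should print OK; then read it back
ETCDCTL_API=3 etcdctl \
--cacert=/etc/etcd/ssl/etcd.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.1.51:2379" \
get foo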

6. Deploy the Master Nodes

Get each component on the master nodes deployed and working:
kube-apiserver, controller-manager, scheduler, flannel, etcd, kubelet, kube-proxy, DNS

1. Cluster planning

172.16.1.51 k8s-m-01 m1
172.16.1.52 k8s-m-02 m2
172.16.1.53 k8s-m-03 m3
# kube-apiserver, controller-manager, scheduler, flannel, etcd, kubelet, kube-proxy, DNS

I. Create certificates

Run on any one master node (e.g. master01)

1. Create the cluster CA certificate

The master nodes are the most important part of the cluster: they run the most components and are the most complex to deploy
# All certificates below are generated under /opt/cert/k8s

1) Create the cluster CA config

This CA signs the certificates used between the cluster components

mkdir /opt/cert/k8s
cd /opt/cert/k8s
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

2) Create the root certificate signing request

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing"
  }]
}
EOF
# 1. Check the files
[root@k8s-m-01 k8s]# ll
total 8
-rw-r--r-- 1 root root 294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root 214 Jul 16 19:16 ca-csr.json

3) Generate the root certificate

[root@k8s-m-01 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/07/16 19:17:09 [INFO] generating a new CA key and certificate from CSR  # generation log
2021/07/16 19:17:09 [INFO] generate received request
2021/07/16 19:17:09 [INFO] received CSR
2021/07/16 19:17:09 [INFO] generating key: rsa-2048
2021/07/16 19:17:10 [INFO] encoded CSR
2021/07/16 19:17:10 [INFO] signed certificate with serial number 148240958746672103928071367692758832732603343709
[root@k8s-m-01 k8s]# ll  # check the certificates
total 20
-rw-r--r-- 1 root root  294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root  960 Jul 16 19:17 ca.csr
-rw-r--r-- 1 root root  214 Jul 16 19:16 ca-csr.json
-rw------- 1 root root 1679 Jul 16 19:17 ca-key.pem
-rw-r--r-- 1 root root 1281 Jul 16 19:17 ca.pem
# the ca certificate is used to issue the component certificates

2. Create the cluster component certificates

That is, the certificates used between the various cluster components

1) Create the kube-apiserver certificate

1> Create the certificate signing config
mkdir -p /opt/cert/k8s
cd /opt/cert/k8s
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.1.51",
    "172.16.1.52",
    "172.16.1.53",
    "172.16.1.54",
    "172.16.1.55",
    "172.16.1.56",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing"
  }]
}
EOF
2> Generate the certificate
[root@k8s-m-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2021/07/16 19:19:37 [INFO] generate received request  # generation log
2021/07/16 19:19:37 [INFO] received CSR
2021/07/16 19:19:37 [INFO] generating key: rsa-2048
2021/07/16 19:19:37 [INFO] encoded CSR
2021/07/16 19:19:37 [INFO] signed certificate with serial number 526999021546243182432347319587302894152755417201
2021/07/16 19:19:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-m-01 k8s]# ll  # check the output
total 36
-rw-r--r-- 1 root root  294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root  960 Jul 16 19:17 ca.csr
-rw-r--r-- 1 root root  214 Jul 16 19:16 ca-csr.json
-rw------- 1 root root 1679 Jul 16 19:17 ca-key.pem
-rw-r--r-- 1 root root 1281 Jul 16 19:17 ca.pem
-rw-r--r-- 1 root root 1245 Jul 16 19:19 server.csr
-rw-r--r-- 1 root root  591 Jul 16 19:19 server-csr.json
-rw------- 1 root root 1679 Jul 16 19:19 server-key.pem
-rw-r--r-- 1 root root 1574 Jul 16 19:19 server.pem

2) Create the controller-manager certificate

1> Create the certificate signing config
cd /opt/cert/k8s
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "172.16.1.51",
    "172.16.1.52",
    "172.16.1.53",
    "172.16.1.54",
    "172.16.1.55",
    "172.16.1.56"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing",
    "O": "system:kube-controller-manager",
    "OU": "System"
  }]
}
EOF

2> Generate the certificate

# The ca paths must be specified correctly, otherwise generation will fail!
[root@k8s-m-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2021/07/16 19:24:21 [INFO] generate received request  # generation log
2021/07/16 19:24:21 [INFO] received CSR
2021/07/16 19:24:21 [INFO] generating key: rsa-2048
2021/07/16 19:24:21 [INFO] encoded CSR
2021/07/16 19:24:21 [INFO] signed certificate with serial number 105819922871201435896062879045495573023643038916
2021/07/16 19:24:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-m-01 k8s]# ll  # check the certificates
total 52
-rw-r--r-- 1 root root  294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root  960 Jul 16 19:17 ca.csr
-rw-r--r-- 1 root root  214 Jul 16 19:16 ca-csr.json
-rw------- 1 root root 1679 Jul 16 19:17 ca-key.pem
-rw-r--r-- 1 root root 1281 Jul 16 19:17 ca.pem
-rw-r--r-- 1 root root 1163 Jul 16 19:24 kube-controller-manager.csr
-rw-r--r-- 1 root root  493 Jul 16 19:23 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Jul 16 19:24 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1497 Jul 16 19:24 kube-controller-manager.pem
-rw-r--r-- 1 root root 1245 Jul 16 19:19 server.csr
-rw-r--r-- 1 root root  591 Jul 16 19:19 server-csr.json
-rw------- 1 root root 1679 Jul 16 19:19 server-key.pem
-rw-r--r-- 1 root root 1574 Jul 16 19:19 server.pem

3) Create the kube-scheduler certificate

1> Create the certificate signing config
cd /opt/cert/k8s
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "172.16.1.51",
    "172.16.1.52",
    "172.16.1.53",
    "172.16.1.54",
    "172.16.1.55",
    "172.16.1.56"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing",
    "O": "system:kube-scheduler",
    "OU": "System"
  }]
}
EOF
2> Generate the certificate
[root@k8s-m-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2021/07/16 19:29:45 [INFO] generate received request  # generation log
2021/07/16 19:29:45 [INFO] received CSR
2021/07/16 19:29:45 [INFO] generating key: rsa-2048
2021/07/16 19:29:45 [INFO] encoded CSR
2021/07/16 19:29:45 [INFO] signed certificate with serial number 626840840439558052435621714466927492785323876680
2021/07/16 19:29:45 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-m-01 k8s]# ll  # check the certificates
total 68
-rw-r--r-- 1 root root  294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root  960 Jul 16 19:17 ca.csr
-rw-r--r-- 1 root root  214 Jul 16 19:16 ca-csr.json
-rw------- 1 root root 1679 Jul 16 19:17 ca-key.pem
-rw-r--r-- 1 root root 1281 Jul 16 19:17 ca.pem
-rw-r--r-- 1 root root 1163 Jul 16 19:24 kube-controller-manager.csr
-rw-r--r-- 1 root root  493 Jul 16 19:23 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Jul 16 19:24 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1497 Jul 16 19:24 kube-controller-manager.pem
-rw-r--r-- 1 root root 1135 Jul 16 19:29 kube-scheduler.csr
-rw-r--r-- 1 root root  473 Jul 16 19:29 kube-scheduler-csr.json
-rw------- 1 root root 1679 Jul 16 19:29 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1468 Jul 16 19:29 kube-scheduler.pem
-rw-r--r-- 1 root root 1245 Jul 16 19:19 server.csr
-rw-r--r-- 1 root root  591 Jul 16 19:19 server-csr.json
-rw------- 1 root root 1679 Jul 16 19:19 server-key.pem
-rw-r--r-- 1 root root 1574 Jul 16 19:19 server.pem

4) Create the kube-proxy certificate

1> Create the certificate signing config
cd /opt/cert/k8s
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing",
    "O": "system:kube-proxy",
    "OU": "System"
  }]
}
EOF
2> Generate the certificate
[root@k8s-m-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2021/07/16 19:31:42 [INFO] generate received request  # generation log
2021/07/16 19:31:42 [INFO] received CSR
2021/07/16 19:31:42 [INFO] generating key: rsa-2048
2021/07/16 19:31:42 [INFO] encoded CSR
2021/07/16 19:31:42 [INFO] signed certificate with serial number 640459339169399647496256803266992860728910371233
2021/07/16 19:31:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-m-01 k8s]# ll  # check the certificates
total 84
-rw-r--r-- 1 root root  294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root  960 Jul 16 19:17 ca.csr
-rw-r--r-- 1 root root  214 Jul 16 19:16 ca-csr.json
-rw------- 1 root root 1679 Jul 16 19:17 ca-key.pem
-rw-r--r-- 1 root root 1281 Jul 16 19:17 ca.pem
-rw-r--r-- 1 root root 1163 Jul 16 19:24 kube-controller-manager.csr
-rw-r--r-- 1 root root  493 Jul 16 19:23 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Jul 16 19:24 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1497 Jul 16 19:24 kube-controller-manager.pem
-rw-r--r-- 1 root root 1029 Jul 16 19:31 kube-proxy.csr
-rw-r--r-- 1 root root  294 Jul 16 19:31 kube-proxy-csr.json
-rw------- 1 root root 1679 Jul 16 19:31 kube-proxy-key.pem
-rw-r--r-- 1 root root 1383 Jul 16 19:31 kube-proxy.pem
-rw-r--r-- 1 root root 1135 Jul 16 19:29 kube-scheduler.csr
-rw-r--r-- 1 root root  473 Jul 16 19:29 kube-scheduler-csr.json
-rw------- 1 root root 1679 Jul 16 19:29 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1468 Jul 16 19:29 kube-scheduler.pem
-rw-r--r-- 1 root root 1245 Jul 16 19:19 server.csr
-rw-r--r-- 1 root root  591 Jul 16 19:19 server-csr.json
-rw------- 1 root root 1679 Jul 16 19:19 server-key.pem
-rw-r--r-- 1 root root 1574 Jul 16 19:19 server.pem

5) Create the cluster administrator certificate

To let cluster client tools access the cluster securely, a client certificate with full cluster permissions is created for them

1> Create the certificate signing config
cd /opt/cert/k8s
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "L": "BeiJing",
    "ST": "BeiJing",
    "O": "system:masters",
    "OU": "System"
  }]
}
EOF
2> Generate the certificate
[root@k8s-m-01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2021/07/16 19:33:28 [INFO] generate received request  # generation log
2021/07/16 19:33:28 [INFO] received CSR
2021/07/16 19:33:28 [INFO] generating key: rsa-2048
2021/07/16 19:33:28 [INFO] encoded CSR
2021/07/16 19:33:28 [INFO] signed certificate with serial number 208208770401522948648550441130579134081563117219
2021/07/16 19:33:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-m-01 k8s]# ll  # check the certificates
total 100
-rw-r--r-- 1 root root 1009 Jul 16 19:33 admin.csr
-rw-r--r-- 1 root root  260 Jul 16 19:33 admin-csr.json
-rw------- 1 root root 1679 Jul 16 19:33 admin-key.pem
-rw-r--r-- 1 root root 1363 Jul 16 19:33 admin.pem
-rw-r--r-- 1 root root  294 Jul 16 19:16 ca-config.json
-rw-r--r-- 1 root root  960 Jul 16 19:17 ca.csr
-rw-r--r-- 1 root root  214 Jul 16 19:16 ca-csr.json
-rw------- 1 root root 1679 Jul 16 19:17 ca-key.pem
-rw-r--r-- 1 root root 1281 Jul 16 19:17 ca.pem
-rw-r--r-- 1 root root 1163 Jul 16 19:24 kube-controller-manager.csr
-rw-r--r-- 1 root root  493 Jul 16 19:23 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Jul 16 19:24 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1497 Jul 16 19:24 kube-controller-manager.pem
-rw-r--r-- 1 root root 1029 Jul 16 19:31 kube-proxy.csr
-rw-r--r-- 1 root root  294 Jul 16 19:31 kube-proxy-csr.json
-rw------- 1 root root 1679 Jul 16 19:31 kube-proxy-key.pem
-rw-r--r-- 1 root root 1383 Jul 16 19:31 kube-proxy.pem
-rw-r--r-- 1 root root 1135 Jul 16 19:29 kube-scheduler.csr
-rw-r--r-- 1 root root  473 Jul 16 19:29 kube-scheduler-csr.json
-rw------- 1 root root 1679 Jul 16 19:29 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1468 Jul 16 19:29 kube-scheduler.pem
-rw-r--r-- 1 root root 1245 Jul 16 19:19 server.csr
-rw-r--r-- 1 root root  591 Jul 16 19:19 server-csr.json
-rw------- 1 root root 1679 Jul 16 19:19 server-key.pem
-rw-r--r-- 1 root root 1574 Jul 16 19:19 server.pem

6) Issue the certificates

# Certificates needed on the master nodes:
1. ca, kube-apiserver
2. kube-controller-manager
3. kube-scheduler
4. user certificates
5. Etcd certificates

1> Issue certificates to the other master nodes (master02, master03)
# Master node certificates: ca, kube-apiserver, kube-controller-manager, kube-scheduler, user certificates, Etcd certificates.
# 1. Create the certificate directory
[root@k8s-m-01 /opt/cert/k8s]# mkdir -pv /etc/kubernetes/ssl
[root@k8s-m-01 /opt/cert/k8s]# cp -p ./{ca*pem,server*pem,kube-controller-manager*pem,kube-scheduler*.pem,kube-proxy*pem,admin*.pem} /etc/kubernetes/ssl  # keep a copy
# 2. Distribute the certificates
[root@k8s-m-01 /opt/cert/k8s]# for i in m1 m2 m3;do
ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
scp /etc/kubernetes/ssl/* root@$i:/etc/kubernetes/ssl
done
# 3. Check the certificates (all nodes)
[root@k8s-m-01 k8s]# ll /etc/kubernetes/ssl
total 48
-rw------- 1 root root 1679 Aug  1 14:23 admin-key.pem
-rw-r--r-- 1 root root 1359 Aug  1 14:23 admin.pem
-rw------- 1 root root 1679 Aug  1 14:23 ca-key.pem
-rw-r--r-- 1 root root 1273 Aug  1 14:23 ca.pem
-rw------- 1 root root 1679 Aug  1 14:23 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1489 Aug  1 14:23 kube-controller-manager.pem
-rw------- 1 root root 1679 Aug  1 14:23 kube-proxy-key.pem
-rw-r--r-- 1 root root 1379 Aug  1 14:23 kube-proxy.pem
-rw------- 1 root root 1675 Aug  1 14:23 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1464 Aug  1 14:23 kube-scheduler.pem
-rw------- 1 root root 1679 Aug  1 14:23 server-key.pem
-rw-r--r-- 1 root root 1570 Aug  1 14:23 server.pem

7. Download Packages and Write Configuration Files

Run on one master node (e.g. master01)

1. Download and distribute the components

1) Download the packages (master01 node)
# 1. Create the download directory
mkdir /opt/data
cd /opt/data
# 2. Download the binary components
Method 1: download the server package
[root@k8s-m-01 /opt/data]# wget https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz
Method 2: copy it out of a container (if the download fails)
[root@k8s-m-01 /opt/data]# docker run -it  registry.cn-hangzhou.aliyuncs.com/k8sos/k8s:v1.18.8.1 bash
Method 3: copy from the author's blog mirror (if the download fails)  # recommended
[root@k8s-m-01 /opt/data]#  wget http://www.mmin.xyz:81/package/k8s/kubernetes-server-linux-amd64.tar.gz
# 3. Unpack and distribute the components
[root@k8s-m-01 /opt/data]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-m-01 /opt/data]# cd kubernetes/server/bin
[root@k8s-m-01 bin]# for i in m1 m2 m3;do scp kube-apiserver kube-scheduler kubectl kube-controller-manager kubelet kube-proxy root@$i:/usr/local/bin/;done
# 4. Download the other components (optional)
[root@k8s-m-01 ~]# cd /opt/data/
[root@k8s-m-01 data]# ll
total 0
[root@k8s-m-01 data]# docker cp 4511c1146868:kubernetes-server-linux-amd64.tar.gz .  # copy the other packages the same way
[root@k8s-m-01 data]# docker cp 4511c1146868:kubernetes-client-linux-amd64.tar.gz .
[root@k8s-m-01 data]# ll
total 487492
-rw-r--r-- 1 root root  14503878 Aug 18  2020 etcd-v3.3.24-linux-amd64.tar.gz
-rw-r--r-- 1 root root   9565743 Jan 29  2019 flannel-v0.11.0-linux-amd64.tar.gz
-rw-r--r-- 1 root root  13237066 Aug 14  2020 kubernetes-client-linux-amd64.tar.gz
-rw-r--r-- 1 root root  97933232 Aug 14  2020 kubernetes-node-linux-amd64.tar.gz
-rw-r--r-- 1 root root 363943527 Aug 14  2020 kubernetes-server-linux-amd64.tar.gz
Installing kubernetes-server from github.com

(Screenshots showing how to locate the kubernetes-server download on github.com are omitted; the original images are no longer available.)

2) Create the cluster configuration files

1> Create kube-controller-manager.kubeconfig (master01 node)
# 1. In kubernetes, a kubeconfig file configures the cluster, users, namespaces, and authentication
cd /opt/cert/k8s
# 2. Create kube-controller-manager.kubeconfig
# the VIP address
export KUBE_APISERVER="https://172.16.1.56:8443"
# 3. Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
# 4. Set the client authentication parameters
kubectl config set-credentials "kube-controller-manager" \
  --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
# 5. Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-controller-manager" \
  --kubeconfig=kube-controller-manager.kubeconfig
# 6. Set the default context
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

2> Create kube-scheduler.kubeconfig (master01 node)

# 1. Create kube-scheduler.kubeconfig
export KUBE_APISERVER="https://172.16.1.56:8443"
# 2. Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig
# 3. Set the client authentication parameters
kubectl config set-credentials "kube-scheduler" \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
# 4. Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-scheduler" \
  --kubeconfig=kube-scheduler.kubeconfig
# 5. Set the default context
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

3> Create the kube-proxy.kubeconfig cluster configuration file (master01 node)

# 1. Create the kube-proxy.kubeconfig cluster configuration file
export KUBE_APISERVER="https://172.16.1.56:8443"
# 2. Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# 3. Set the client authentication parameters
kubectl config set-credentials "kube-proxy" \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# 4. Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-proxy" \
  --kubeconfig=kube-proxy.kubeconfig
# 5. Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4> Create the super administrator's cluster configuration file (master01 node)

# 1. Create the super administrator's cluster configuration file
export KUBE_APISERVER="https://172.16.1.56:8443"
# 2. Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.kubeconfig
# 3. Set the client authentication parameters
kubectl config set-credentials "admin" \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig
# 4. Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="admin" \
  --kubeconfig=admin.kubeconfig
# 5. Set the default context
kubectl config use-context default --kubeconfig=admin.kubeconfig

5> Distribute the cluster configuration files (master01 node)

[root@k8s-m-01 /opt/cert/k8s]# for i in m1 m2 m3; do
ssh root@$i  "mkdir -pv /etc/kubernetes/cfg"
scp ./*.kubeconfig root@$i:/etc/kubernetes/cfg
done
[root@k8s-m-01 /opt/cert/k8s]# ll /etc/kubernetes/cfg/
total 32
-rw------- 1 root root 6105 Jul 16 20:34 admin.kubeconfig
-rw------- 1 root root 6314 Jul 16 20:34 kube-controller-manager.kubeconfig
-rw------- 1 root root 6139 Jul 16 20:34 kube-proxy.kubeconfig
-rw------- 1 root root 6263 Jul 16 20:34 kube-scheduler.kubeconfig

6> Create the cluster token (master01 node)

# 1. Only needs to be created once
[root@k8s-m-01 /opt/cert/k8s]# TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
# 2. You must use the token created on your own machine
[root@k8s-m-01 /opt/cert/k8s]# cat > token.csv << EOF
${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# 3. Distribute the cluster token for TLS authentication (the token must be identical on all nodes)
[root@k8s-m-01 k8s]# for i in m1 m2 m3;do scp token.csv root@$i:/etc/kubernetes/cfg/;done
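
A quick way to confirm all three masters ended up with the same token (a minimal sketch reusing the m1-m3 aliases; the three checksums should match):

for i in m1 m2 m3;do ssh root@$i md5sum /etc/kubernetes/cfg/token.csv;done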

8. Deploy Each Component

Install each component so that the cluster works properly

1) Deploy kube-apiserver (all master nodes)

# 1. Create the kube-apiserver configuration file
KUBE_APISERVER_IP=`hostname -i`

cat > /etc/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--advertise-address=${KUBE_APISERVER_IP} \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=30000-52767 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--kubelet-client-certificate=/etc/kubernetes/ssl/server.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log \\
--etcd-servers=https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem"
EOF
# 2. Register the kube-apiserver service
[root@k8s-m-01 /opt/cert/k8s]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# 3. Reload systemd
[root@k8s-m-01 /opt/cert/k8s]# mkdir -p /var/log/kubernetes/
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-apiserver.service  # start the service
# 4. Check that the service started
[root@k8s-m-01 k8s]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-16 20:51:07 CST; 42s ago

2) Make kube-apiserver highly available

1> Install the HA software (all master nodes)
# Any load balancer that makes the api-server highly available will do
# Officially recommended: keepalived + haproxy
[root@k8s-m-01 ~]# yum install -y keepalived haproxy

2> Edit the keepalived configuration file (all master nodes)
# 1. The configuration differs slightly per node
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cd /etc/keepalived
KUBE_APISERVER_IP=`hostname -i`

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER  # change to BACKUP on m2 and m3
    interface eth1
    mcast_src_ip ${KUBE_APISERVER_IP}
    virtual_router_id 51
    priority 100  # weight: change to 90 on m2, 80 on m3
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.1.56
    }
}
EOF
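
The config above references /etc/keepalived/check_kubernetes.sh, which the original does not show; a minimal sketch of such a health check (assumption: any script that exits non-zero when the apiserver is unhealthy will do, and weight -5 then lowers this node's priority):

cat > /etc/keepalived/check_kubernetes.sh <<'EOF'
#!/bin/bash
# report unhealthy (non-zero exit) when no kube-apiserver process is running
pgrep -f kube-apiserver &> /dev/null || exit 1
EOF
chmod +x /etc/keepalived/check_kubernetes.sh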
# 2. Reload and start keepalived
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now keepalived
# 3. Verify that keepalived is running
[root@k8s-m-01 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-08-01 14:48:23 CST; 27s ago
[root@k8s-m-01 keepalived]# ip a | grep 56
    inet 172.16.1.56/32 scope global eth1
3> Edit the haproxy configuration file (all master nodes)
# 1. The HA software does the load balancing; in the cloud an SLB would take this role
cat > /etc/haproxy/haproxy.cfg <<EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server kubernetes-master-01    172.16.1.51:6443  check inter 2000 fall 2 rise 2 weight 100
  server kubernetes-master-02    172.16.1.52:6443  check inter 2000 fall 2 rise 2 weight 100
  server kubernetes-master-03    172.16.1.53:6443  check inter 2000 fall 2 rise 2 weight 100
EOF
# 2. Start haproxy
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now haproxy.service
# 3. Check the service status
[root@k8s-m-01 keepalived]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-16 21:12:00 CST; 27s ago
 Main PID: 4997 (haproxy-systemd)
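
With this configuration, haproxy exposes a monitor URI on port 33305 and a stats page on port 8006 (credentials admin:admin, as set above); a quick check might be:

# the monitor endpoint should return HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:33305/monitor
# the stats page lists the three apiserver backends and their health
curl -s -u admin:admin http://127.0.0.1:8006/stats | head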

3) Deploy TLS

TLS bootstrapping is a mechanism that simplifies how the administrator configures the mutually-encrypted communication between kubelet and apiserver. When TLS authentication is enabled in the cluster, every node's kubelet must use a valid certificate signed by the CA that the apiserver uses before it can talk to the apiserver; signing certificates for many nodes by hand is tedious and error-prone and destabilizes the cluster. With TLS bootstrapping, the kubelet on a node first connects to the apiserver as a predefined low-privilege user and then requests a certificate, which the apiserver signs and issues to the Node dynamically.
# the apiserver signs and issues certificates to the Node dynamically, automating certificate signing

1> Create the cluster configuration file (master01 node)
cd /opt/cert/k8s
export KUBE_APISERVER="https://172.16.1.56:8443"
# 1. Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.kubeconfig
# 2. Look up your own token value
[root@k8s-m-01 k8s]# cat token.csv
e55cf9c5dd20e985e5928b959158f0ee,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# 3. Set the client authentication parameters; the token must be the one from token.csv above
kubectl config set-credentials "kubelet-bootstrap" \
  --token=e55cf9c5dd20e985e5928b959158f0ee \
  --kubeconfig=kubelet-bootstrap.kubeconfig
# use the token from your own token.csv
# 4. Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=kubelet-bootstrap.kubeconfig
# 5. Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

2> Issue the certificate (master01 node)
# 1. Distribute the cluster configuration file
[root@k8s-m-01 /opt/cert/k8s]# for i in m1 m2 m3; do
scp kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg/
done

3> Create the low-privilege TLS user (master01 node)
# 1. Create a low-privilege user
[root@k8s-m-01 /opt/cert/k8s]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
# 2. Troubleshooting (skip if there was no error)
1. Delete the binding
kubectl delete clusterrolebindings kubelet-bootstrap
2. Recreate it
[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
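
Once kubelets start with the bootstrap kubeconfig, their certificate requests appear as CSR objects; approving them is covered later under "Join cluster nodes", and in sketch form looks like this (the node-csr-... name is a placeholder that will differ per node):

# list pending bootstrap requests
kubectl get csr
# approve one request by name
kubectl certificate approve node-csr-xxxxx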

4) Deploy controller-manager

As the management control center inside the cluster, the Controller Manager is responsible for the cluster's Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node unexpectedly goes down, the Controller Manager detects it promptly and runs the automated repair flow, keeping the cluster in its expected working state. Running multiple controller managers at the same time would cause consistency problems, so kube-controller-manager HA can only be active/standby; the kubernetes cluster uses lease locks for leader election, which requires adding --leader-elect=true to the startup arguments.

1> Edit the configuration file (all master nodes)
cat > /etc/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--cluster-name=kubernetes \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/12 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=10s \\
--horizontal-pod-autoscaler-use-rest-clients=true"
EOF

2> Register the service (all master nodes)
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
3> Start the service
# 1. Reload systemd
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload
# 2. Start the kube-controller-manager service
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-controller-manager.service
# 3. Verify that kube-controller-manager started
[root@k8s-m-03 keepalived]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-07-17 10:15:57 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4003 (kube-controller)

5) Deploy kube-scheduler

kube-scheduler is the default scheduler of a Kubernetes cluster and part of the cluster control plane. For every newly created or not-yet-scheduled Pod, kube-scheduler filters all the nodes and picks an optimal Node to run that Pod. The scheduler is policy-rich and topology- and workload-aware, and it significantly affects availability, performance, and capacity; it has to consider individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements are exposed through the API when necessary.

1> Write the configuration file (all master nodes)
cat > /etc/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--bind-address=127.0.0.1 "
EOF
2> Register the service (all master nodes)
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
3> Start it (all master nodes)
# 1. Reload systemd
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload
# 2. Start the service
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-scheduler.service
# 3. Verify that the service started
[root@k8s-m-01 keepalived]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-07-17 10:22:

6) Check the cluster status

[root@k8s-m-01 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}  

7) Deploy the kubelet service

1> Create the kubelet service configuration file (all master nodes)
KUBE_HOSTNAME=`hostname`

cat > /etc/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=${KUBE_HOSTNAME} \\
--container-runtime=docker \\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
--config=/etc/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/ssl \\
--image-pull-progress-deadline=15m \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2"
EOF
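The pause image named in --pod-infra-container-image is pulled for every Pod sandbox; optionally pre-pulling it (assuming docker is already running) avoids a delay on the first Pod and surfaces registry problems early:

# Optional pre-pull of the sandbox image (same image as configured above)
docker pull registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2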

2> Create kubelet-config.yml (all master nodes)
KUBE_HOSTNAME=`hostname -i`
cat > /etc/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${KUBE_HOSTNAME}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
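The cgroupDriver set above must match the container runtime's driver, or kubelet will fail to run Pods; a quick check (assuming docker is already running):

# Should print cgroupfs to match cgroupDriver above; if it prints systemd, align the two sides
docker info --format '{{.CgroupDriver}}'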

3> Register the kubelet service (all master nodes)
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4> Start (all master nodes)
# 1. Reload systemd
[root@k8s-m-01 k8s]#  systemctl daemon-reload
# 2. Start the service
[root@k8s-m-01 k8s]# systemctl enable --now kubelet.service
# 3. Check the service status
[root@k8s-m-01 bin]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-07-17 13:46:56 CST; 11s ago
 Main PID: 7611 (kubelet)

8) Deploy kube-proxy

kube-proxy is a core Kubernetes component deployed on every node. It implements the communication and load-balancing mechanism behind Kubernetes Services: it watches the apiserver for Service information and creates proxy rules accordingly, routing and forwarding requests from Services to Pods, thereby implementing the Kubernetes-level virtual forwarding network.
1> Create the configuration file (all master nodes)
cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF
2> Create kube-proxy-config.yml (all master nodes)
KUBE_HOSTNAME=`hostname -i`
HOSTNAME=`hostname`
cat > /etc/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ${KUBE_HOSTNAME}
healthzBindAddress: ${KUBE_HOSTNAME}:10256
metricsBindAddress: ${KUBE_HOSTNAME}:10249
clientConnection:
  burst: 200
  kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 100
hostnameOverride: ${HOSTNAME}
clusterCIDR: 10.96.0.0/16
enableProfiling: true
mode: "ipvs"
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
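mode: "ipvs" only takes effect if the IPVS kernel modules are loaded and ipset/ipvsadm are available; otherwise kube-proxy silently falls back to iptables. A minimal sketch to prepare the nodes (module names are for a stock CentOS 7 kernel; on 4.19+ kernels nf_conntrack_ipv4 is replaced by nf_conntrack):

yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules << 'MOD'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
MOD
chmod +x /etc/sysconfig/modules/ipvs.modules && /etc/sysconfig/modules/ipvs.modules
# Verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack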

3> Register the service (all master nodes)
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4> Start (all master nodes)
# 1. Reload systemd
[root@k8s-m-01 /opt/cert/k8s]# systemctl daemon-reload
# 2. Start the service
[root@k8s-m-01 /opt/cert/k8s]# systemctl enable --now kube-proxy.service
# 3. Check the service status
[root@k8s-m-01 k8s]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-07-17 13:58:27 CST; 18s ago
 Main PID: 8560 (kube-proxy)

9) Join nodes to the cluster

1> Review pending node join requests (master01 node)
[root@k8s-m-01 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-GuGw7Iw82OXhxFqrPP2aih166OiitkkeXvM39JgkgD8   24h   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-HnUNIqs10WxpoJtKxB-dqVbyAIl30jnztkTTY1xXzfM   24h   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-yU53RlO0xCJ1-y0GeRxuEtuOkd9qCoLODDYt5pqiR-Q   24h   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
2> Approve the requests (master01 node)
[root@k8s-m-01 ~]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
certificatesigningrequest.certificates.k8s.io/node-csr-GuGw7Iw82OXhxFqrPP2aih166OiitkkeXvM39JgkgD8 approved
certificatesigningrequest.certificates.k8s.io/node-csr-HnUNIqs10WxpoJtKxB-dqVbyAIl30jnztkTTY1xXzfM approved
certificatesigningrequest.certificates.k8s.io/node-csr-yU53RlO0xCJ1-y0GeRxuEtuOkd9qCoLODDYt5pqiR-Q approved
[root@k8s-m-01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
k8s-m-01   Ready    <none>   14s     v1.18.8
k8s-m-02   Ready    <none>   13s     v1.18.8
k8s-m-03   Ready    <none>   2m13s   v1.18.8

10) Install the network plugin

This deployment uses the flannel network plugin.

Kubernetes defines a network model but leaves its implementation to network plugins. The primary job of a CNI network plugin is to let Pods communicate across hosts. Common CNI network plugins include:

Flannel
Calico
Canal
Contiv
OpenContrail
NSX-T
Kube-router
1> Download and install the flannel package (master01 node)
# 1. Change into the data directory
[root@k8s-m-01 k8s]# cd /opt/data/
# 2. Download the flannel plugin
Option 1: download from GitHub
[root@k8s-m-01 data]# wget https://github.com/coreos/flannel/releases/download/v0.13.1-rc1/flannel-v0.13.1-rc1-linux-amd64.tar.gz
Option 2: download from the author's mirror   # recommended (note this mirror carries v0.11.0; adjust the tar filename below if you use it)
[root@k8s-m-01 data]# wget http://www.mmin.xyz:81/package/k8s/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-m-01 data]# tar xf flannel-v0.13.1-rc1-linux-amd64.tar.gz
# 3. Distribute the flannel binaries to the other nodes
[root@k8s-m-01 /opt/data]# for i in m1 m2 m3;do
scp flanneld mk-docker-opts.sh root@$i:/usr/local/bin/
done
# 4. Check the flannel binaries (all nodes)
[root@k8s-m-01 data]# ll /usr/local/bin/
total 545492
-rwxr-xr-x 1 root root  10376657 Jul 25 01:02 cfssl
-rwxr-xr-x 1 root root   2277873 Jul 25 01:01 cfssljson
-rwxr-xr-x 1 root root  22820544 Aug  1 14:03 etcd
-rwxr-xr-x 1 root root  18389632 Aug  1 14:03 etcdctl
-rwxr-xr-x 1 root root  35249016 Aug  1 15:14 flanneld
-rwxr-xr-x 1 root root 120684544 Aug  1 14:35 kube-apiserver
-rwxr-xr-x 1 root root 110080000 Aug  1 14:35 kube-controller-manager
-rwxr-xr-x 1 root root  44040192 Aug  1 14:35 kubectl
-rwxr-xr-x 1 root root 113300248 Aug  1 14:35 kubelet
-rwxr-xr-x 1 root root  38383616 Aug  1 14:35 kube-proxy
-rwxr-xr-x 1 root root  42962944 Aug  1 14:35 kube-scheduler
-rwxr-xr-x 1 root root      2139 Aug  1 15:14 mk-docker-opts.sh
2> Write the flannel configuration into the cluster datastore (master01 node)
etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379" \
mk /coreos.com/network/config '{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}'
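The write can be verified by reading the key back with the same v2-API flags (a quick check; the endpoints and certificate paths are the ones used above):

etcdctl \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379" \
get /coreos.com/network/config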
3> Register the flannel service (all master nodes)
cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld address
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/etcd/ssl/ca.pem \\
  -etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  -etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  -etcd-endpoints=https://172.16.1.51:2379,https://172.16.1.52:2379,https://172.16.1.53:2379 \\
  -etcd-prefix=/coreos.com/network \\
  -ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

4> Modify the docker unit file (all master nodes)
# 1. Let flannel take over the docker network, so the whole cluster shares one managed network
sed -i '/ExecStart/s/\(.*\)/#\1/' /usr/lib/systemd/system/docker.service
sed -i '/ExecReload/a ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
sed -i '/ExecReload/a EnvironmentFile=-/run/flannel/subnet.env' /usr/lib/systemd/system/docker.service
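Once flanneld starts (next step), mk-docker-opts.sh renders the node's allocated subnet into /run/flannel/subnet.env, which the modified docker unit now sources; it should look roughly like this (the subnet value is per-node and purely illustrative):

[root@k8s-m-01 ~]# cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=10.240.8.1/21 --ip-masq=false --mtu=1450"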
5> Start (all master nodes)
# 1. Start flannel first, then restart docker so it picks up the flannel network options
[root@k8s-m-01 ~]# systemctl daemon-reload
[root@k8s-m-01 ~]# systemctl enable --now flanneld.service
[root@k8s-m-01 ~]# systemctl restart docker.service
# 2. Verify flanneld started
[root@k8s-m-01 data]# systemctl status flanneld.service
● flanneld.service - Flanneld address
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-07-20 23:38:54 CST; 12s ago
  Process: 7110 ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 7084 (flanneld)

11) Verify the cluster network

# 1. Have the cluster nodes ping each other's flannel addresses: run `ip a` to find the flannel.1 IP on each of the three nodes, then verify with ping that all three can reach one another.
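For example (a sketch; the flannel.1 addresses are assigned per node, so substitute the ones `ip a` actually shows):

# On each node: find this node's flannel address
ip -4 addr show flannel.1
# Then ping the flannel.1 addresses of the other two nodes
ping -c 3 <flannel.1 IP of k8s-m-02>
ping -c 3 <flannel.1 IP of k8s-m-03>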

12) Install the cluster DNS (master01 node)

CoreDNS resolves Service names for Pods in the cluster; Kubernetes relies on CoreDNS for service discovery.

# 1. Download the DNS deployment package
Official repo: https://github.com/coredns/deployment
Option 1: download from GitHub
[root@k8s-m-01 ~]# wget https://github.com/coredns/deployment/archive/refs/heads/master.zip
Option 2: download from the author's mirror   # recommended
[root@k8s-m-01 ~]# wget http://www.mmin.xyz:81/package/k8s/deployment-master.zip
# 2. Unpack and check which image is used
[root@k8s-m-01 ~]# unzip master.zip
[root@k8s-m-01 ~]# cd deployment-master/kubernetes
[root@k8s-m-01 kubernetes]# cat coredns.yaml.sed | grep image
        image: coredns/coredns:1.8.4
        imagePullPolicy: IfNotPresent
# 3. Pull the image (run on all machines)
[root@k8s-m-02 kubernetes]# docker pull coredns/coredns:1.8.4
# 4. Run the deploy script
[root@k8s-m-01 ~/deployment-master/kubernetes]# ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -
# 5. Verify the cluster DNS
[root@k8s-m-01 ~/deployment-master/kubernetes]# kubectl get pods -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
coredns-6ff445f54-m28gw   1/1     Running   0          48s
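The DNS service can also be exercised directly from a node, without starting a Pod (a sketch; dig comes from the bind-utils package):

yum install -y bind-utils
dig @10.96.0.2 kubernetes.default.svc.cluster.local +short   # should return 10.96.0.1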

13) Verify the cluster

# 1. Bind the cluster-admin role to the kubernetes user (run on one master only)
[root@k8s-m-01 ~/deployment-master/kubernetes]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
# 2. Verify the cluster DNS and cluster network
[root@k8s-m-01 kubernetes]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/
[root@k8s-m-02 ~]# kubectl get pods   # works from k8s-m-02
NAME     READY   STATUS    RESTARTS   AGE
test     1/1     Running   0          15h
test01   1/1     Running   0          4m2s
[root@k8s-m-03 ~]# kubectl get pods -w   # works from k8s-m-03
NAME     READY   STATUS    RESTARTS   AGE
test     1/1     Running   0          15h
test01   1/1     Running   0          4m11s
# 3. Troubleshooting
Handling DNS resolution failures while setting up the cluster: nslookup: can't resolve 'kubernetes'
1. Stop NetworkManager and disable it at boot (all machines)
[root@k8s-m-01 ~]# systemctl stop NetworkManager
[root@k8s-m-01 ~]# systemctl disable NetworkManager
2. Configure DNS
[root@k8s-m-01 ~]# vim /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114
nameserver 223.5.5.5

8. Deploy the node nodes

The Node nodes mainly provide the runtime environment for applications; their key components are kube-proxy, kubelet, and flannel.

1. Node planning

Hostname     IP            Name resolution
k8s-n-01     172.16.1.54   n1
k8s-n-02     172.16.1.55   n2

2. Cluster optimization

# 1. Install and start docker (both node nodes); skip if docker is already installed
Option 1: Huawei mirror
[root@k8s-m-01 ~]# vim docker.sh
# 1. Remove any previously installed docker
sudo yum remove docker docker-common docker-selinux docker-engine &&\
sudo yum install -y yum-utils device-mapper-persistent-data lvm2   &&\
# 2. Install the docker repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo &&\
# 3. Replace the repository URL with the mirror
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# 4. Regenerate the yum cache
yum clean all &&\
yum makecache  &&\
# 5. Install docker
sudo yum makecache fast &&\
sudo yum install docker-ce -y &&\
# 6. Enable docker at boot
systemctl enable --now docker.service
# 7. Create the docker config directory and set a registry mirror (all nodes) -- run separately to speed up docker pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
Option 2: Aliyun mirror
[root@k8s-n-01 ~]# vim docker.sh
# Step 1: install the required system utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# Step 2: add the repo
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&\
# Step 3: point the repo at the mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# Step 4: refresh the cache and install Docker-CE
sudo yum makecache fast &&\
sudo yum -y install docker-ce &&\
# Step 5: enable the Docker service
systemctl enable --now docker.service &&\
# Step 6: configure a registry mirror to speed up pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
# 2. Set up passwordless SSH login (all node nodes)
[root@k8s-n-01 ~]# ssh-keygen -t rsa
[root@k8s-n-02 ~]# ssh-keygen -t rsa
# 3. Push the public key (master01)
[root@k8s-m-01 k8s]# for i in n1 n2;do
ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i
done

3. Distribute the binaries

[root@k8s-m-01 ~]# cd /opt/data/
[root@k8s-m-01 /opt/data]# for i in n1 n2;do scp flanneld mk-docker-opts.sh kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy root@$i:/usr/local/bin; done

4. Distribute the certificates

[root@k8s-m-01 /opt/data]# for i in n1 n2; do ssh root@$i "mkdir -pv /etc/kubernetes/ssl"; scp -pr /etc/kubernetes/ssl/{ca*.pem,admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl; done

5. Distribute the configuration files

# Items to distribute: the etcd certificates (for flanneld) and docker.service
# 1. Distribute the ETCD certificates
[root@k8s-m-01 data]# cd /etc/etcd/ssl/
[root@k8s-m-01 /etc/etcd/ssl]# for i in n1 n2 ;do ssh root@$i "mkdir -pv /etc/etcd/ssl"; scp ./* root@$i:/etc/etcd/ssl; done
# 2. Distribute the flannel and docker unit files
[root@k8s-m-01 /etc/etcd/ssl]# for i in n1 n2;do scp /usr/lib/systemd/system/docker.service root@$i:/usr/lib/systemd/system/docker.service; scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system/flanneld.service; done

6. Deploy kubelet

[root@k8s-m-01 ~]# for i in n1 n2 ;do ssh root@$i "mkdir -pv  /etc/kubernetes/cfg";scp /etc/kubernetes/cfg/kubelet.conf root@$i:/etc/kubernetes/cfg/kubelet.conf; scp /etc/kubernetes/cfg/kubelet-config.yml root@$i:/etc/kubernetes/cfg/kubelet-config.yml; scp /usr/lib/systemd/system/kubelet.service root@$i:/usr/lib/systemd/system/kubelet.service; scp /etc/kubernetes/cfg/kubelet.kubeconfig root@$i:/etc/kubernetes/cfg/kubelet.kubeconfig; scp /etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig; scp /etc/kubernetes/cfg/token.csv root@$i:/etc/kubernetes/cfg/token.csv;
done
# 1. Edit kubelet-config.yml and kubelet.conf on each node
1. Edit the files on k8s-n-01
[root@k8s-n-01 opt]# grep  'address' /etc/kubernetes/cfg/kubelet-config.yml
address: 172.16.1.54
[root@k8s-n-01 opt]# grep 'hostname-override' /etc/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-n-01
2. Edit the files on k8s-n-02
[root@k8s-n-02 opt]#  grep  'address' /etc/kubernetes/cfg/kubelet-config.yml
address: 172.16.1.55
[root@k8s-n-02 opt]# grep 'hostname-override' /etc/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-n-02
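Optionally, the two per-node edits can be scripted rather than made by hand; a sketch to run on each node (it derives the values from the node itself, so check that `hostname -i` returns the intended IP first):

# Set this node's IP in kubelet-config.yml
sed -i "s#^address: .*#address: $(hostname -i)#" /etc/kubernetes/cfg/kubelet-config.yml
# Set this node's hostname in kubelet.conf without touching the rest of the line
sed -ri "s#(--hostname-override=)[^ ]+#\1$(hostname)#" /etc/kubernetes/cfg/kubelet.conf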
# 2. Start kubelet (run on both nodes)
[root@k8s-n-01 ~]# systemctl daemon-reload
[root@k8s-n-02 ~]# systemctl enable --now kubelet.service
# 3. Verify the service started
[root@k8s-n-02 opt]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-07-21 17:04:04 CST; 5s ago

7. Deploy kube-proxy

[root@k8s-m-01 ~]# for i in n1 n2 ; do scp /etc/kubernetes/cfg/kube-proxy.conf root@$i:/etc/kubernetes/cfg/kube-proxy.conf; scp /etc/kubernetes/cfg/kube-proxy-config.yml root@$i:/etc/kubernetes/cfg/kube-proxy-config.yml ;  scp /usr/lib/systemd/system/kube-proxy.service root@$i:/usr/lib/systemd/system/kube-proxy.service;  scp /etc/kubernetes/cfg/kube-proxy.kubeconfig root@$i:/etc/kubernetes/cfg/kube-proxy.kubeconfig;done
# 1. Update the IP and hostname in kube-proxy-config.yml
1. Edit the file on k8s-n-01
[root@k8s-n-01 opt]# vim /etc/kubernetes/cfg/kube-proxy-config.yml
bindAddress: 172.16.1.54
healthzBindAddress: 172.16.1.54:10256
metricsBindAddress: 172.16.1.54:10249
hostnameOverride: k8s-n-01
2. Edit the file on k8s-n-02
[root@k8s-n-02 opt]# vim /etc/kubernetes/cfg/kube-proxy-config.yml
bindAddress: 172.16.1.55
healthzBindAddress: 172.16.1.55:10256
metricsBindAddress: 172.16.1.55:10249
hostnameOverride: k8s-n-02
# 2. Start kube-proxy (run on both nodes)
[root@k8s-n-01 ~]# systemctl daemon-reload
[root@k8s-n-02 ~]# systemctl enable --now kube-proxy.service
# 3. Verify the service
[root@k8s-n-02 opt]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-07-21 17:11:33 CST; 8s ago

8. Join the cluster (master01 node)

# 1. Check the cluster status
[root@k8s-m-01 ssl]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-2               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
# 2. Check pending join requests
[root@k8s-m-01 ssl]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr--Eq4oZsr9HodbE3fYtmfXbZEE461Dbjo3DCO4qOQBZs   7m43s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-GuGw7Iw82OXhxFqrPP2aih166OiitkkeXvM39JgkgD8   42h     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-HnUNIqs10WxpoJtKxB-dqVbyAIl30jnztkTTY1xXzfM   42h     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-jlDDOwALdNM55JxY_2ILSikt0NL1eRWHU-bu0A4MQzY   7m42s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-yU53RlO0xCJ1-y0GeRxuEtuOkd9qCoLODDYt5pqiR-Q   42h     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
# 3. Approve them
[root@k8s-m-01 ssl]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
certificatesigningrequest.certificates.k8s.io/node-csr--Eq4oZsr9HodbE3fYtmfXbZEE461Dbjo3DCO4qOQBZs approved
certificatesigningrequest.certificates.k8s.io/node-csr-jlDDOwALdNM55JxY_2ILSikt0NL1eRWHU-bu0A4MQzY approved
# 4. Check the CSR status
[root@k8s-m-01 ssl]#  kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr--Eq4oZsr9HodbE3fYtmfXbZEE461Dbjo3DCO4qOQBZs   8m44s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-GuGw7Iw82OXhxFqrPP2aih166OiitkkeXvM39JgkgD8   42h     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-HnUNIqs10WxpoJtKxB-dqVbyAIl30jnztkTTY1xXzfM   42h     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-jlDDOwALdNM55JxY_2ILSikt0NL1eRWHU-bu0A4MQzY   8m43s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-yU53RlO0xCJ1-y0GeRxuEtuOkd9qCoLODDYt5pqiR-Q   42h     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
# 5. Check the joined nodes
[root@k8s-m-01 ssl]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
k8s-m-01   Ready    <none>   17h   v1.18.8
k8s-m-02   Ready    <none>   17h   v1.18.8
k8s-m-03   Ready    <none>   18h   v1.18.8
k8s-n-01   Ready    <none>   32s   v1.18.8
k8s-n-02   Ready    <none>   32s   v1.18.8

9. Set cluster roles (master01 node)

[root@k8s-m-01 ~]# kubectl label nodes k8s-m-01 node-role.kubernetes.io/master=k8s-m-01
node/k8s-m-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-m-02 node-role.kubernetes.io/master=k8s-m-02
node/k8s-m-02 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-m-03 node-role.kubernetes.io/master=k8s-m-03
node/k8s-m-03 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-01 node-role.kubernetes.io/node=k8s-n-01
node/k8s-n-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-02 node-role.kubernetes.io/node=k8s-n-02
node/k8s-n-02 labeled
[root@k8s-m-01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
k8s-m-01   Ready    master   18h     v1.18.8
k8s-m-02   Ready    master   18h     v1.18.8
k8s-m-03   Ready    master   18h     v1.18.8
k8s-n-01   Ready    node     3m33s   v1.18.8
k8s-n-02   Ready    node     3m33s   v1.18.8

9. Install the cluster graphical dashboard

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot those applications, and manage the cluster itself and its resources. You can also use it to get an overview of the applications running in the cluster and to create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on).

1. Install the dashboard

From the dashboard you can scale a Deployment, trigger a rolling update, restart a Pod, or use the wizard to create new applications.

# 1. Download the resource manifest and apply it
Option 1: download from GitHub
[root@k8s-m-01 ~]#  wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Option 2: download from the author's mirror and apply
[root@k8s-m-01 ~]#  wget  http://www.mmin.xyz:81/package/k8s/recommended.yaml
[root@k8s-m-01 ~]# kubectl apply -f recommended.yaml
Option 3: apply straight from GitHub in one step
[root@k8s-m-01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# 2. Check the service ports
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.96.44.119   <none>        8000/TCP   11m
kubernetes-dashboard        ClusterIP   10.96.42.127   <none>        443/TCP    11m
# 3. Expose a NodePort for external access
[root@k8s-m-01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
type: ClusterIP   =>   type: NodePort   # change ClusterIP to NodePort
# 4. Check the port again
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.44.119   <none>        8000/TCP        12m
kubernetes-dashboard        NodePort    10.96.42.127   <none>        443:40927/TCP   12m
# 5. Create the token manifest
[root@k8s-m-01 ~]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# 6. Apply the token manifest to the cluster
[root@k8s-m-01 ~]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# 7. Retrieve the token
[root@k8s-m-01 ~]#  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1NeTJxSDZmaFc1a00zWVRXTHdQSlZlQnNjWUdQMW1zMjg5OTBZQ1JxNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxMm56Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2Q4MjIzYi1jYmY1LTQ5ZTUtYjAxMS1hZTAzMzM2MzVhYzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Q4gC_Kr_Ltl_zG0xkhSri7FQrXxdA5Zjb4ELd7-bVbc_9kAe292w0VM_fVJky5FtldsY0XOp6zbiDVCPkmJi9NXT-P09WvPc9g-ISbbQB_QRIWrEWF544TmRSTZJW5rvafhbfONtqZ_3vWtMkCiDsf7EAwDWLLqA5T46bAn-fncehiV0pf0x_X16t72Qqa-aizHBrVcMsXQU0wnYC7jt373pnhnFHYdcJXx_LgHaC1LgCzx5BfkuphiYOaj_dVB6tAlRkQo3QkFP9GIBW3LcVfhOQBmMQl8KeHvBW4QC67PQRv55IUaUDJ_lRC2QKbeJzaUto-ER4YxFwr4tncBwZQ
# 8. Verify the cluster
[root@k8s-m-01 kubernetes]# kubectl run test01 -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/
# 9. Access the dashboard with the token
https://192.168.15.51:40927   # the NodePort shown in the service output above

2. Deploy an nginx service

# 1. Deploy the service
[root@k8s-m-01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-m-01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
# 2. Check
[root@k8s-m-01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d9h
nginx        NodePort    10.105.226.251   <none>        80:32336/TCP   26s
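A quick functional check from any node, using the NodePort from the output above (32336 here; yours will differ):

[root@k8s-m-01 ~]# curl -I http://172.16.1.51:32336   # expect HTTP/1.1 200 OK from nginx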

