Component Versions && Cluster Environment

Component versions:

  • Kubernetes v1.15.3
  • Etcd v3.3.10
  • Flanneld v0.11.0
Server IP       Role
192.168.1.241 master
192.168.1.242 node1
192.168.1.243 node2

I. Deployment Nodes:

Cluster environment variables:

# It is recommended to pick unused network ranges for the Service and Pod CIDRs
# Service CIDR: not routable before deployment; after deployment it is reachable inside the cluster as IP:Port
SERVICE_CIDR="192.254.0.0/16"
# Pod CIDR (Cluster CIDR): not routable before deployment; routable afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.18.0.0/16"
# kubernetes service IP (pre-allocated; usually the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="192.254.0.1"
# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="192.254.0.2"
# flanneld network configuration prefix
FLANNEL_ETCD_PREFIX="/kubernetes/network"
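Before continuing, it is worth a quick check that neither range is already routed on the hosts (a hedged sanity check, not part of the original article; adjust the patterns to your own CIDR choices):

ip route show | grep -E '^(192\.254\.|172\.18\.)' && echo "WARNING: CIDR already in use" || echo "CIDRs look unused"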

II. Initialize the Environment

1. Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

2. Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0

3. Set kernel parameters - allow IP forwarding and make iptables see bridged traffic

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

If applying these settings fails with an error, the fix is shown below:
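(The original screenshot of the error is not reproduced here.) A common failure at this point is sysctl reporting that net.bridge.bridge-nf-call-iptables does not exist, which usually means the br_netfilter kernel module is not loaded; a hedged fix:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load the module again after reboot
sysctl -p /etc/sysctl.d/k8s.conf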

4. Create the installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

5. Install Docker

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker

6. SSH key authentication

ssh-keygen
ssh-copy-id 192.168.1.241
ssh-copy-id 192.168.1.242
ssh-copy-id 192.168.1.243

III. Create the CA Certificate and Key Pair

The Kubernetes components use TLS certificates to encrypt their communication. Here we use CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all of the other TLS certificates created later.

When creating the certificates you can place the different certificates into separate directories so they are easier to find later.

(This article does not do that.)

1. Install CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -P /usr/local/src
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -P /usr/local/src
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -P /usr/local/src
chmod +x /usr/local/src/cfssl*
mv /usr/local/src/cfssl_linux-amd64 /usr/bin/cfssl
mv /usr/local/src/cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv /usr/local/src/cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Create the CA

cat ca-config.json
{"signing": {"default": {"expiry": "87600h"},"profiles": {"kubernetes": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"]}}}
}

ca-config.json: multiple profiles can be defined, each with its own expiry, usages, and other parameters; a specific profile is picked later when signing a certificate.
signing: the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE.
server auth: a client can use this CA to verify certificates presented by servers.
client auth: a server can use this CA to verify certificates presented by clients.
Create the CA certificate signing request:

cat ca-csr.json
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}]
}

CN: Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting User Name; browsers use this field to check whether a site is legitimate.
O: Organization. kube-apiserver extracts this field from the certificate and uses it as the group (Group) the requesting user belongs to.
Generate the CA certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
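Optionally, verify the freshly generated CA before signing anything with it (a hedged check, not in the original article):

openssl x509 -in ca.pem -noout -text | grep -E 'CA:TRUE|Not After'
# CA:TRUE confirms the certificate may sign others; Not After shows its expiry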

3. Distribute the certificates:
Copy the generated CA certificate, key, and config file to /k8s/kubernetes/ssl/ on all machines:

cp ca* /k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.1.242:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.1.243:/k8s/kubernetes/ssl/

IV. Deploy the Highly Available etcd Cluster

Kubernetes stores all of its data in etcd. Here we deploy a 3-node etcd cluster, reusing the three Kubernetes nodes, named etcd01, etcd02, and etcd03:

  • 192.168.1.241 etcd01
  • 192.168.1.242 etcd02
  • 192.168.1.243 etcd03

1. Unpack the installation files

Download: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.241:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.241:2379"#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.241:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.241:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.241:2380,etcd02=https://192.168.1.242:2380,etcd03=https://192.168.1.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

2. Create the TLS key and certificate
To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, and between etcd members, must be encrypted with TLS.
Create the etcd certificate signing request:

cat > etcd-csr.json <<EOF
{"CN": "etcd","hosts": ["192.168.1.241","192.168.1.242","192.168.1.243"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "BeiJing","L": "BeiJing","O": "k8s","OU": "System"}]
}
EOF

The hosts field lists the etcd node IPs that are authorized to use this certificate.
Generate the etcd certificate and private key:

cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
-ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# A warning about the hosts value may appear here; it can be ignored
ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
cp etcd* /k8s/etcd/ssl/
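Optionally, confirm the three node IPs made it into the certificate's Subject Alternative Names (a hedged check, not in the original article):

openssl x509 -in /k8s/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'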

3. Create the etcd systemd unit file

# Make sure the variables and certificate paths below match your own environment
vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/etcd.pem \
--key-file=/k8s/etcd/ssl/etcd-key.pem \
--peer-cert-file=/k8s/etcd/ssl/etcd.pem \
--peer-key-file=/k8s/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/k8s/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

For secure communication we specify etcd's own certificate and key (cert-file and key-file), the certificate, key, and CA used for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file).

4. Copy the unit file and configuration to node1 and node2:

cd /k8s/
scp -r etcd/ 192.168.1.242:/k8s/
scp -r etcd/ 192.168.1.243:/k8s/
scp /lib/systemd/system/etcd.service 192.168.1.242:/lib/systemd/system/etcd.service
scp /lib/systemd/system/etcd.service 192.168.1.243:/lib/systemd/system/etcd.service

Modify cfg/etcd on each node accordingly:

[root@host1 ~]# cat /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"   #每个节点IP的角色都要修改
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.243:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.243:2379"#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.243:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.243:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.241:2380,etcd02=https://192.168.1.242:2380,etcd03=https://192.168.1.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"[root@host2 ~]# cat /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.242:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.242:2379"#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.242:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.242:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.241:2380,etcd02=https://192.168.1.242:2380,etcd03=https://192.168.1.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

5. Start the etcd service. Note: with etcd 3.4 and later the unit file does not need to pass these variables explicitly; they are read automatically from the environment file.

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

# Do this on all three nodes
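Before running the full cluster health check below, each member can also be probed individually over TLS (a hedged check, assuming the certificate paths used above):

curl --cacert /k8s/kubernetes/ssl/ca.pem \
--cert /k8s/etcd/ssl/etcd.pem \
--key /k8s/etcd/ssl/etcd-key.pem \
https://192.168.1.241:2379/health
# expected output is roughly {"health":"true"}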

6. Verify the service
After the etcd cluster is deployed, run the following command on any etcd node:

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/etcd.pem \
--key-file=/k8s/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
cluster-health
The output looks like this:
member 2e4d105025f61a1b is healthy: got healthy result from https://192.168.1.241:2379
member 8ad9da8a203d86d8 is healthy: got healthy result from https://192.168.1.242:2379
member c1b34b5ace31a23f is healthy: got healthy result from https://192.168.1.243:2379
cluster is healthy

Test command for etcd 3.4 and later:

/k8s/etcd/bin/etcdctl \
--cacert=/k8s/kubernetes/ssl/CA/ca.pem \
--cert=/k8s/etcd/ssl/etcd.pem \
--key=/k8s/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
endpoint health

If all three etcd members report healthy as shown above, the cluster is working correctly.

V. Deploy the Flannel Network

Kubernetes requires that all nodes in the cluster can reach each other over the Pod network. The steps below use Flannel to create an interconnected Pod network across all nodes.

1. Create the TLS key and certificate
The etcd cluster has mutual TLS authentication enabled, so flanneld needs a CA-signed certificate and key to talk to etcd.
Create the flanneld certificate signing request:

cat > flanneld-csr.json <<EOF
{"CN": "flanneld","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "BeiJing","L": "BeiJing","O": "k8s","OU": "System"}]
}
EOF

Generate the flanneld certificate and private key:

cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
-ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
ls flanneld*
flanneld.csr  flanneld-csr.json  flanneld-key.pem  flanneld.pem
mkdir -p /web/kubernetes/ssl/flanneld
cp flanneld*.pem /web/kubernetes/ssl/flanneld

2. Write the cluster Pod network configuration into etcd
This step only needs to be done once, when Flannel is first deployed; it does not need to be repeated when flanneld is deployed on the other nodes.

/web/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/web/kubernetes/ssl/CA/ca.pem \
--cert-file=/web/kubernetes/ssl/flanneld/flanneld.pem \
--key-file=/web/kubernetes/ssl/flanneld/flanneld-key.pem \
set /kubernetes/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
Output:
{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}
/k8s/etcd/bin/etcdctl \
--cacert=/k8s/kubernetes/ssl/CA/ca.pem \
--cert=/k8s/flannel/ssl/flanneld.pem \
--key=/k8s/flannel/ssl/flanneld-key.pem \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
put /kubernetes/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
Output: OK

The command above is the etcd 3.4.4 (v3 API) form.

The Pod network written here (${CLUSTER_CIDR}, 172.18.0.0/16) must match the --cluster-cidr option of kube-controller-manager.

3. Install and configure flanneld

Download: https://github.com/coreos/flannel/releases
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /web/kubernetes/bin/

Create the flanneld systemd unit file:

cat /lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
#EnvironmentFile=/k8s/kubernetes/cfg/flanneld
#ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStart=/web/kubernetes/bin/flanneld --etcd-cafile=/web/kubernetes/ssl/CA/ca.pem --etcd-certfile=/web/kubernetes/ssl/flanneld/flanneld.pem --etcd-keyfile=/web/kubernetes/ssl/flanneld/flanneld-key.pem --etcd-endpoints=https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379 --etcd-prefix=/kubernetes/network
ExecStartPost=/web/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • The mk-docker-opts.sh script writes the Pod subnet leased to flanneld into /run/flannel/docker; when Docker starts later, it uses the options in that file to configure the docker0 bridge (a sample of that file is shown below).
  • flanneld talks to the other nodes over the interface that carries the system default route; on machines with several interfaces (internal and public), the --iface option selects the interface (the unit file above does not set it).
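For reference, a hedged example of what /run/flannel/docker typically ends up containing (the exact bip/mtu values depend on the /24 leased to the node; this sample is illustrative, not taken from the original article):

cat /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=172.18.100.1/24 --ip-masq=true --mtu=1450"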
Configure Docker to start on the flannel-assigned subnet:
cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
4. Start flanneld
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker

Do this on all three nodes.

5. Check the flanneld service and the Pod subnet assigned to each node
ifconfig flannel.1
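After Docker has been restarted, a quick cross-check (hedged, not in the original article) is that docker0 picked up an address inside the same /24 that flannel leased to this node:

ip -4 addr show flannel.1
ip -4 addr show docker0
# both addresses should fall inside this node's flannel /24, e.g. 172.18.100.0/24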

View the cluster Pod network (/16):

/web/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/web/kubernetes/ssl/CA/ca.pem \
--cert-file=/web/kubernetes/ssl/flanneld/flanneld.pem \
--key-file=/web/kubernetes/ssl/flanneld/flanneld-key.pem \
get /kubernetes/network/config

{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}

List the allocated Pod subnets (/24):

/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/flanneld/ssl/flanneld.pem \
--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
ls /kubernetes/network/subnets

/kubernetes/network/subnets/172.18.100.0-24

View the IP and network parameters of the flanneld process serving a given Pod subnet:

/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/flanneld/ssl/flanneld.pem \
--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
get /kubernetes/network/subnets/172.18.100.0-24

{"PublicIP":"192.168.1.241","BackendType":"vxlan","BackendData":{"VtepMAC":"e2:29:21:80:5e:d1"}}

6. Copy the files to the other nodes

scp -r /k8s/kubernetes/bin/* 192.168.1.242:/k8s/kubernetes/bin/
scp -r /k8s/kubernetes/bin/* 192.168.1.243:/k8s/kubernetes/bin/
scp -r /k8s/flanneld/ssl/* 192.168.1.242:/k8s/flanneld/ssl/
scp -r /k8s/flanneld/ssl/* 192.168.1.243:/k8s/flanneld/ssl/
scp /lib/systemd/system/flanneld.service 192.168.1.242:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/flanneld.service 192.168.1.243:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/docker.service 192.168.1.242:/lib/systemd/system/docker.service
scp /lib/systemd/system/docker.service 192.168.1.243:/lib/systemd/system/docker.service

7. Make sure the Pod subnets on all nodes can reach each other
After flanneld has been deployed on every node, list the allocated Pod subnets:

/k8s/etcd/bin/etcdctl \
--endpoints="https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379" \
--ca-file=/k8s/kubernetes/ssl/ca.pem \
--cert-file=/k8s/flanneld/ssl/flanneld.pem \
--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
ls /kubernetes/network/subnets

/kubernetes/network/subnets/172.18.88.0-24
/kubernetes/network/subnets/172.18.85.0-24
/kubernetes/network/subnets/172.18.100.0-24
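A simple hedged connectivity test: from one node, ping the docker0 gateway (the .1 address) of a subnet leased to a different node, using the subnets from the list above:

# run on 192.168.1.241, assuming 172.18.85.0/24 was leased to another node
ping -c 3 172.18.85.1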

VI. Deploy the Master Node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one working process while the other processes stay in standby.

Download and unpack the binaries

wget https://dl.k8s.io/v1.15.3/kubernetes-server-linux-amd64.tar.gz

If the download fails you will have to find the file elsewhere; sometimes a proxy is needed.

tar -xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /web/kubernetes/bin/

Create the kubernetes certificate
Create the kubernetes certificate signing request:

cat > kubernetes-csr.json <<EOF
{"CN": "kubernetes","hosts": ["127.0.0.1","192.168.1.240","k8s-api.virtual.local","192.254.0.1","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "BeiJing","L": "BeiJing","O": "k8s","OU": "System"}]
}
EOF
  • If the hosts field is not empty, it must list every IP and domain name authorized to use the certificate; the entries above therefore include the master node IP being deployed and the internal domain name used for the apiserver load balancer.
  • The Service Cluster IP that kube-apiserver registers for the built-in kubernetes service must also be included; it is normally the first IP of the network given to --service-cluster-ip-range, e.g. "10.254.0.1".

Generate the kubernetes certificate and private key:
cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | \
cfssljson -bare kubernetes
ls kub*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
cp kubernetes*.pem /web/kubernetes/ssl/kubernetes

VII. Configure and Start kube-apiserver

1. Create the client token file used by kube-apiserver:
When kubelet starts for the first time it sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches its token.csv; if it matches, kube-apiserver automatically issues a certificate and key for the kubelet.
The token used for TLS bootstrapping can be generated with:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9d3d0413211c8d92ed1b33a913154ce5

cat /web/kubernetes/cfg/token.csv
9d3d0413211c8d92ed1b33a913154ce5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
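The two steps can also be combined; a hedged sketch that regenerates the token and writes token.csv in one go (paths as used in this article):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /web/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF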

2. Create the apiserver configuration file

cat /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.241:2379,https://192.168.1.242:2379,https://192.168.1.243:2379 \
--bind-address=192.168.1.240 \
--secure-port=6443 \
--advertise-address=192.168.1.240 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/web/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/web/kubernetes/ssl/kubernetes/kubernetes.pem  \
--tls-private-key-file=/web/kubernetes/ssl/kubernetes/kubernetes-key.pem \
--client-ca-file=/web/kubernetes/ssl/CA/ca.pem \
--service-account-key-file=/web/kubernetes/ssl/CA/ca-key.pem \
--etcd-cafile=/web/kubernetes/ssl/CA/ca.pem \
--etcd-certfile=/web/kubernetes/ssl/kubernetes/kubernetes.pem \
--etcd-keyfile=/web/kubernetes/ssl/kubernetes/kubernetes-key.pem"

3. Create the kube-apiserver systemd unit file

cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/web/kubernetes/cfg/kube-apiserver
ExecStart=/web/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
4. Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

1. Create the kube-controller-manager (resource controller) configuration file

cat /web/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/24 \
--cluster-cidr=172.18.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/web/kubernetes/ssl/CA/ca.pem \
--cluster-signing-key-file=/web/kubernetes/ssl/CA/ca-key.pem  \
--root-ca-file=/web/kubernetes/ssl/CA/ca.pem \
--service-account-private-key-file=/web/kubernetes/ssl/CA/ca-key.pem"

2. Create the kube-controller-manager systemd unit file

cat /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/web/kubernetes/cfg/kube-controller-manager
ExecStart=/web/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

3. Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

1. Create the kube-scheduler (resource scheduler) configuration file

cat /web/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true"

2. Create the kube-scheduler systemd unit file

cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/web/kubernetes/cfg/kube-scheduler
ExecStart=/web/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
3. Start the service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service

Verify the master node
Add the binaries to the PATH variable:

echo "export PATH=$PATH:/web/kubernetes/bin/" >>/etc/profile
source /etc/profile

Verify:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

If <unknown> appears in this output, it is not a problem for now.

VIII. Deploy the Nodes

A Kubernetes node runs the following components:

  • kubelet
  • kube-proxy
Install and configure kubelet
When kubelet starts it automatically registers the node with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage.
kubelet runs on the Node machines; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. This step is therefore performed on all Node machines; if you also want the master to act as a Node, you can install it on the master too.
When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests (certificatesigningrequests):
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Copy the kubelet binaries to the node machines:

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.1.242:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.1.243:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file (the following commands must all be run in the same directory):

cat environment.sh
BOOTSTRAP_TOKEN=9d3d0413211c8d92ed1b33a913154ce5
KUBE_APISERVER="https://192.168.1.241:6443"   source environment.sh

Set the cluster parameters:

kubectl config set-cluster kubernetes \
--certificate-authority=/web/kubernetes/ssl/CA/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

Set the client authentication parameters:

kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

Set the context parameters:

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

Set the default context:

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
  • When --embed-certs is true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file.
  • No key or certificate is given when setting the kubelet client authentication parameters; kube-apiserver issues them automatically later.

Copy the bootstrap kubeconfig file to all nodes:
cp bootstrap.kubeconfig /web/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.1.242:/web/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.1.243:/web/kubernetes/cfg/

Create the kubelet parameter configuration file and copy it to all nodes
Create the kubelet parameter configuration template:

vim /web/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.241
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.2"]
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: true

The format must be exactly right, otherwise authentication will fail.
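If PyYAML happens to be installed, the file can be parsed before starting kubelet to catch indentation mistakes early (a hedged check, not in the original article):

python -c 'import yaml; yaml.safe_load(open("/web/kubernetes/cfg/kubelet.config")); print("kubelet.config parses OK")'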

Create the kubelet configuration file

vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.241 \
--kubeconfig=/web/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/web/kubernetes/cfg/bootstrap.kubeconfig \
--config=/web/kubernetes/cfg/kubelet.config \
--cert-dir=/web/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Create the kubelet systemd unit file

vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/web/kubernetes/cfg/kubelet
ExecStart=/web/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target

Copy the files:

scp /k8s/kubernetes/cfg/kubelet* 192.168.1.242:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kubelet* 192.168.1.243:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kubelet.service 192.168.1.242:/lib/systemd/system/kubelet.service
scp /lib/systemd/system/kubelet.service 192.168.1.243:/lib/systemd/system/kubelet.service

On the other nodes, change address and hostname-override to match the node.
Start kubelet:

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Approve the kubelet TLS certificate requests
When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node is only added to the cluster after the request is approved.
View the pending CSR requests (once approved, they disappear after a while):

kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   2m37s   kubelet-bootstrap   Pending
kubectl get nodes
No resources found.
Approve the CSR request:
kubectl certificate approve node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I
certificatesigningrequest.certificates.k8s.io/node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I approved
kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   4m9s   kubelet-bootstrap   Approved,Issued
kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.1.240   Ready    <none>   13s   v1.13.1
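When several CSRs are pending at once, they can also be approved in a single pass instead of one by one (a hedged convenience one-liner; approve only requests you expect):

kubectl get csr -o name | xargs kubectl certificate approve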

After the other two nodes have started, approve their CSR requests:

kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   3m22s   kubelet-bootstrap   Pending
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   12m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   3m35s   kubelet-bootstrap   Pending
kubectl certificate approve node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo
certificatesigningrequest.certificates.k8s.io/node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo approved
kubectl certificate approve node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U
certificatesigningrequest.certificates.k8s.io/node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U approved
kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   4m40s   kubelet-bootstrap   Approved,Issued
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   14m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   4m53s   kubelet-bootstrap   Approved,Issued
kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.1.240   Ready    <none>   9s    v1.13.1
192.168.1.241   Ready    <none>   22s   v1.13.1
192.168.1.242   Ready    <none>   10m   v1.13.1

Configure kube-proxy
kube-proxy runs on every node; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance traffic to services.
Create the kube-proxy certificate signing request:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

The predefined RoleBinding system:node-proxier in kube-apiserver binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.
Generate the kube-proxy client certificate and private key:

cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
-ca-key=/k8s/kubernetes/ssl/ca-key.pem \
-config=/k8s/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
cp kube-proxy*.pem /k8s/kubernetes/ssl/
scp kube-proxy*.pem 192.168.1.242:/k8s/kubernetes/ssl/
scp kube-proxy*.pem 192.168.1.243:/k8s/kubernetes/ssl/

Create the kube-proxy kubeconfig file

Set the cluster parameters:
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

Set the client authentication parameters:

kubectl config set-credentials kube-proxy \
--client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
--client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

Set the context parameters:

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

Set the default context:

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the kube-proxy kubeconfig file to all nodes:

cp kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.1.242:/k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.1.243:/k8s/kubernetes/cfg/

Create the kube-proxy configuration file

cat /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.241 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • --cluster-cidr must match the --service-cluster-ip-range option of kube-apiserver.

Create the kube-proxy systemd unit file:
cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start the service:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
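Once kube-proxy is running (iptables mode by default with this configuration), a hedged way to confirm it is programming rules is to look at the nat table it maintains:

iptables -t nat -L KUBE-SERVICES -n | head -20
# rules for the kubernetes service cluster IP should show up here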

Start the service on the other two nodes:

scp /k8s/kubernetes/cfg/kube-proxy 192.168.1.242:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kube-proxy 192.168.1.243:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kube-proxy.service 192.168.1.242:/lib/systemd/system/kube-proxy.service
scp /lib/systemd/system/kube-proxy.service 192.168.1.243:/lib/systemd/system/kube-proxy.service

Change hostname-override on each node to its own address.
Cluster status
Label the node or master role on each node:

kubectl label node 192.168.1.241  node-role.kubernetes.io/master='master'
kubectl label node 192.168.1.242  node-role.kubernetes.io/node='node'
kubectl label node 192.168.1.243  node-role.kubernetes.io/node='node'

View the cluster status:

kubectl get node,cs
NAME                  STATUS   ROLES    AGE   VERSION
node/192.168.1.243   Ready    node     42m   v1.13.1
node/192.168.1.242   Ready    node     42m   v1.13.1
node/192.168.1.241   Ready    master   52m   v1.13.1

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

Deploy the dashboard visualization add-on

cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31620
  selector:
    k8s-app: kubernetes-dashboard
cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

# View the dashboard service and port

kubectl get service -n kube-system | grep dashboard

Create the login user

kubectl create -f dashboard-adminuser.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Copy the token obtained above into the Token field on the dashboard login page.

Dashboard deployment reference:
https://blog.csdn.net/networken/article/details/85607593

Kubernetes cluster deployment reference:
https://blog.csdn.net/wfs1994/article/details/86408254
