K8S: Binary Deployment of a Single-Master Cluster, Extended to Dual Masters (a local lab; to keep the VMs light, the multi-master tier stops at two masters rather than three)

  • I. Preparation
  • II. The etcd cluster
    • 1. On the master
    • 2. On the nodes
  • III. Flannel network deployment
  • IV. Testing container-to-container connectivity
  • V. Single-master deployment
    • 1. Deploying the master components
    • 2. Deploying the nodes
      • ①. node1
      • ②. node2
  • VI. Dual-master deployment on top of the single-master setup
    • 1. Setting up master2
    • 2. Deploying the nginx load balancers
    • 3. Node configuration
  • VII. Testing
    • 1. Operations on master1
    • 2. curl test from node1

I. Preparation

Role          OS / software     IP
k8s-master    centos7:1708      192.168.184.140
k8s-master2   centos7:1708      192.168.184.145
k8s-node01    centos7:1708      192.168.184.141
k8s-node02    centos7:1708      192.168.184.142
nginx_lbm     nginx (primary)   192.168.184.146
nginx_lbb     nginx (backup)    192.168.184.147
VIP           —                 192.168.184.200

II. The etcd cluster

1. On the master

# Create the k8s directory
mkdir k8s
cd k8s

# Create the certificate-generation script
vim etcd-cert.sh

# CA configuration file: the "www" profile is valid for 87600h (10 years) and
# allows signing, key encipherment, server-side auth, and client-side auth
# (key encipherment must be set in the CA certificate).
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# CA signing request: the CA is created for etcd (all three nodes use it);
# RSA with a 2048-bit key, and the subject fields (C/L/ST) in the standard format.
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Server-side signing request: "hosts" lists the IP addresses of all three nodes.
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.184.140",
    "192.168.184.141",
    "192.168.184.142"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

# cfssl is the certificate-generation tool
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

# Create the etcd startup script
cat etcd.sh
#!/bin/bash
# Usage: etcd name, this node's IP, then the full list of the remaining cluster members
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1                        # positional argument 1: etcd node name
ETCD_IP=$2                          # positional argument 2: this node's address
ETCD_CLUSTER=$3                     # positional argument 3: the rest of the cluster
WORK_DIR=/opt/etcd                  # working directory

cat <<EOF >$WORK_DIR/cfg/etcd       # write the etcd config file into the working directory
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# port 2380: peer communication between cluster members
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
# port 2379: opened to external clients
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
# the client URL advertised externally is served over https
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
# the full member list of the cluster
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
# cluster token name: etcd-cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# state "new": the cluster is created from scratch
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service     # write the etcd systemd unit
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
# The cluster token below also secures intra-cluster communication (guards
# against man-in-the-middle attacks); the cert/key/ca options point at the TLS
# material generated earlier.
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
# raise the open-file limit
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload             # reload unit files
systemctl enable etcd
systemctl restart etcd

# Create the certificate directory and move the cert script from the k8s directory into it
mkdir etcd-cert
cd etcd-cert/
mv ../etcd-cert.sh ./

# Download the certificate tooling from the official source
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssljson

# Run the certificate script (inside the etcd-cert directory)
bash etcd-cert.sh
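Before deploying the certificates, they can be sanity-checked; an optional sketch using the cfssl-certinfo tool downloaded above — the SANs should list all three etcd node IPs and the expiry should be about ten years out:

cfssl-certinfo -cert server.pem     # prints the parsed certificate; inspect "sans" and "not_after"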
# etcd deployment
# (download the packages into the k8s directory: etcd-v3.3.10-linux-amd64.tar.gz,
#  flannel-v0.10.0-linux-amd64.tar.gz, kubernetes-server-linux-amd64.tar.gz)

# Unpack etcd-v3.3.10-linux-amd64.tar.gz
cd ~/k8s
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz

# Create the etcd working directory (cfg: config files, bin: binaries, ssl: certificates)
mkdir /opt/etcd/{cfg,bin,ssl} -p

# Move the binaries into place
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin

# Copy the certificates
cp etcd-cert/*.pem /opt/etcd/ssl

# This blocks, waiting for the other members to join
bash etcd.sh etcd01 192.168.184.140 etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380


# In another terminal, inspect the generated configuration file
cd /opt/etcd/cfg
cat etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.184.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.184.140:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.184.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.184.140:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.184.140:2380,etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Check the etcd process
ps -ef | grep etcd

# Push the certificates and the systemd unit to both node machines
scp -r /opt/etcd/ root@192.168.184.141:/opt
scp -r /opt/etcd/ root@192.168.184.142:/opt
scp -r /usr/lib/systemd/system/etcd.service root@192.168.184.141:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service root@192.168.184.142:/usr/lib/systemd/system/


2. On the nodes

# Inspect and edit the configuration file
ls /usr/lib/systemd/system/ | grep etcd
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"                                       # change the node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.184.141:2380"     # change the :2380 URL to this node's IP (141)
ETCD_LISTEN_CLIENT_URLS="https://192.168.184.141:2379"   # change the :2379 URL to this node's IP (141)
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.184.141:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.184.141:2379"
# the two advertise URLs above also change to this node's IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.184.140:2380,etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Edit the node2 configuration file the same way (etcd03, IP 142)

# Start the services: first run the script on the master (it blocks, waiting for members to join),
# then start etcd on the two node machines
[root@k8s-master ~/k8s]# bash etcd.sh etcd01 192.168.184.140 etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380
[root@k8s-node01 /opt/etcd/cfg]# systemctl start etcd
[root@k8s-node02 /opt/etcd/cfg]# systemctl start etcd

# Check cluster health (run on the master)
[root@k8s-master ~/k8s]# cd etcd-cert/
[root@k8s-master ~/k8s/etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" cluster-health
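If all three members report healthy, the cluster is formed. As a companion check, the member list can be printed with the same TLS flags (etcdctl v2 syntax, the default API mode for the etcd 3.3 build deployed here):

# Each line should show a member name (etcd01..etcd03) with its peer and client URLs
/opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.184.140:2379" \
  member list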

III. Flannel network deployment

# Both node machines need the Docker engine installed first; see: docker容器简介及安装

# Write the subnet range into etcd for flannel to use (on the master)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

#Command breakdown----------------------------------------
# etcdctl authenticates with the CA certificate and targets the three etcd endpoints on port 2379
# set /coreos.com/network/config   writes the network configuration key
# "Network": "172.17.0.0/16"       the aggregate range; each Pod subnet is a smaller block carved out of it
# "Backend": {"Type": "vxlan"}     inter-host traffic is encapsulated with VXLAN
#----------------------------------------------------------

# Read the key back (on the master)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" get /coreos.com/network/config
# Upload the flannel package to all node machines and unpack it (on every node)
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz

# Create the k8s working directory (on every node)
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

# Create the startup script (on both nodes)
vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld       # write the config file
# flanneld needs the CA certificate to talk to etcd
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service     # write the systemd unit
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
# mk-docker-opts.sh exports the flannel subnet so Docker can use the flannel network
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
# Enable the flannel network (on both nodes)
bash flannel.sh https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379

# Hook Docker up to flannel (on both nodes)
vim /usr/lib/systemd/system/docker.service
#----- add at line 12
EnvironmentFile=/run/flannel/subnet.env
#----- modify line 13 (add the $DOCKER_NETWORK_OPTIONS parameter)
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

# Check the subnet flannel was allocated
cat /run/flannel/subnet.env

# Reload units and restart Docker
systemctl daemon-reload
systemctl restart docker
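After the restart, docker0 should have moved onto this node's flannel subnet. A hedged verification sketch (node-side checks, plus an etcd query runnable from the master's ~/k8s/etcd-cert directory):

# On a node: flannel.1 and docker0 should sit in the same /24 as FLANNEL_SUBNET
cat /run/flannel/subnet.env
ip addr show flannel.1
ip addr show docker0

# On the master: list the per-node subnet leases flannel registered in etcd
/opt/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.184.140:2379" \
  ls /coreos.com/network/subnets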





IV. Testing container-to-container connectivity
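With flannel and Docker up on both nodes, cross-node connectivity can be checked with two throwaway containers; a minimal sketch, assuming the busybox image can be pulled (the address you ping is whatever the first container reports):

# On node1: start a test container and note its IP (a 172.17.x.x address from this node's subnet)
docker run -itd --name test1 busybox
docker exec test1 ip addr show eth0

# On node2: start a second container and ping node1's container across the overlay
docker run -itd --name test2 busybox
docker exec test2 ping -c 3 <IP-of-test1>     # substitute the address printed above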


V. Single-master deployment

1. Deploying the master components

# Create the k8s working directory and the apiserver certificate directory
cd ~/k8s
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mkdir k8s-cert

# Generate the certificates
cd k8s-cert
vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Server CSR: "hosts" must cover every address the apiserver will be reached at:
#   10.0.0.1          first IP of the service CIDR
#   127.0.0.1         loopback
#   192.168.184.140   master1
#   192.168.184.145   master2 (prepared for the multi-master step later)
#   192.168.184.200   VIP (floating address)
#   192.168.184.146   nginx load balancer 1 (primary)
#   192.168.184.147   nginx load balancer 2 (backup)
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.184.140",
    "192.168.184.145",
    "192.168.184.200",
    "192.168.184.146",
    "192.168.184.147",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Run the script to generate the K8S certificates
bash k8s-cert.sh

# The local directory should now contain 8 .pem files
ls *.pem

# Copy the CA and server certificates into the k8s working directory
cp ca*.pem server*.pem /opt/kubernetes/ssl
ls /opt/kubernetes/ssl/

# Unpack the kubernetes package
cd ../
tar zxvf kubernetes-server-linux-amd64.tar.gz

# Copy the key binaries into the k8s working directory
cd kubernetes/server/bin
cp kube-controller-manager kubectl kube-apiserver kube-scheduler /opt/kubernetes/bin
# Generate a random serial number with head -c 16 /dev/urandom | od -An -t x | tr -d ' '
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

# Create the token file
cd /opt/kubernetes/cfg
vim token.csv
<serial-from-previous-step>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#------------------------------
# The role defined here works as follows:
# ① the bootstrap role is created on the master
# ② it manages the kubelet on the node machines
# ③ kubelet-bootstrap manages and authorizes system:kubelet-bootstrap
# ④ system:kubelet-bootstrap in turn manages the node kubelets
# ⑤ the token authorizes the system:kubelet-bootstrap role; without the token,
#   the role cannot manage the kubelets on the nodes
#------------------------------
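The two steps above can be collapsed into one; a minimal sketch that writes token.csv directly:

# Generate the random serial and write token.csv in one step
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat /opt/kubernetes/cfg/token.csv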
# With the binaries, token, and certificates ready, the apiserver can be started
# Upload master.zip
cd /root/k8s
unzip master.zip
chmod +x controller-manager.sh

# apiserver.sh overview -------------------------------------
#!/bin/bash
MASTER_ADDRESS=$1                 # local master address
ETCD_SERVERS=$2                   # etcd cluster endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver     # write the config file into the k8s working directory
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
# Option notes: --etcd-servers reads from / writes to etcd; --bind-address and
# --advertise-address are the master's local address; --allow-privileged permits
# privileged containers; --enable-admission-plugins loads the admission plugins
# (namespace lifecycle, limits, service accounts, quotas, node restriction);
# --authorization-mode=RBAC,Node validates nodes via RBAC; --kubelet-https talks
# to kubelets over https; --enable-bootstrap-token-auth plus --token-auth-file
# enable bootstrap-token authorization; --service-node-port-range is the open
# NodePort range; the remaining options point at the certificate files.

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service     # service startup unit
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
#---------------------------------------------------------
# Start the apiserver
bash apiserver.sh 192.168.184.140 https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379

# Check the api process to verify it started
ps aux | grep kube

# Check that the generated config file looks right
cat /opt/kubernetes/cfg/kube-apiserver

# Check that the secure port is listening
netstat -natp | grep 6443

# Look at the scheduler startup script
vim scheduler.sh
#!/bin/bash
MASTER_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF
# --logtostderr enables logging; --master points at the apiserver's local :8080;
# --leader-elect turns on leader election

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service     # service startup unit
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

# Start the scheduler service
./scheduler.sh 127.0.0.1

# Check the process
ps aux | grep sch

# Check the service
systemctl status kube-scheduler.service

# Start the controller-manager service
./controller-manager.sh

# Check the service
systemctl status kube-controller-manager.service

# Finally, check the master component status
/opt/kubernetes/bin/kubectl get cs
# Copy kubelet and kube-proxy from the master to the node machines
cd kubernetes/server/bin/
scp kubelet kube-proxy root@192.168.184.141:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.184.142:/opt/kubernetes/bin/

# Build the kubeconfig files
cd ~/k8s
mkdir kubeconfig
cd kubeconfig
# (upload the kubeconfig.sh script)
mv kubeconfig.sh kubeconfig
vim kubeconfig
BOOTSTRAP_TOKEN=b0bff184cbd37dae1351103ad3458685
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials (only the token is changed here, taken from /opt/kubernetes/cfg/token.csv)
kubectl config set-credentials kubelet-bootstrap \
  --token=664f2017059d58e78f6cce2e47ef383b \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Add the environment variable
export PATH=$PATH:/opt/kubernetes/bin/

# kubectl now works directly
kubectl get cs

# Run the kubeconfig script
bash kubeconfig 192.168.184.140 /root/k8s/k8s-cert/

# Copy the two generated config files to the node machines
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.184.141:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.184.142:/opt/kubernetes/cfg/

# Create the bootstrap clusterrolebinding so nodes can request certificate signing from the apiserver
# (only after this authorization are node machines fully joined to the cluster and manageable by the master)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
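A quick confirmation that the binding landed (standard kubectl usage):

# Should show role system:node-bootstrapper bound to user kubelet-bootstrap
kubectl describe clusterrolebinding kubelet-bootstrap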






2. Deploying the nodes

①. node1

# Upload node.zip and unpack it
unzip node.zip

# Run the kubelet script; it sends a join request to the master
bash kubelet.sh 192.168.184.141

# Check the kubelet process
ps aux | grep kubelet

# Check the service status
systemctl status kubelet.service

# On the master: check node1's signing request (run on master)
kubectl get csr

# Approve the certificate (run on master)
kubectl certificate approve <name-of-node1-request>

# Look at the csr again (run on master)
kubectl get csr

# List the cluster nodes (run on master)
kubectl get node

# On node1: start the kube-proxy service
bash proxy.sh 192.168.184.141
systemctl status kube-proxy.service


②. node2

# Copy node1's /opt/kubernetes directory to node2 (run on node1)
scp -r /opt/kubernetes/ root@192.168.184.142:/opt

# Copy the service units (run on node1)
scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.184.142:/usr/lib/systemd/system/

# Delete all the copied certificates (node2 receives its own once approved)
cd /opt/kubernetes/ssl/
rm -rf *

# Change the IP address in the kubelet config file
cd ../cfg
vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.184.142 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
# --hostname-override is changed to node2's own address

# Edit the kubelet.config file
vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.184.142              # change to the local address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2                            # cluster DNS address; note it down
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
# Edit the kube-proxy config file
vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.184.142 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
# --hostname-override is changed to node2's own address

# Start the services
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

# Authorize node2 on the master (run on master)
kubectl get csr
kubectl certificate approve <name-of-node2-request>
kubectl get csr

# Check cluster status (run on master)
kubectl get node

VI. Dual-master deployment on top of the single-master setup

1. Setting up master2

# Copy the key files and directories (run on master1)
scp -r /opt/kubernetes/ root@192.168.184.145:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.184.145:/usr/lib/systemd/system/

# Change the IP addresses in the kube-apiserver config file (run on master2)
cd /opt/kubernetes/cfg
vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379 \
--bind-address=192.168.184.145 \
--secure-port=6443 \
--advertise-address=192.168.184.145 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
# only two lines change: --bind-address (the bind address) and --advertise-address
# (the externally advertised address) both become master2's IP, 192.168.184.145
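Equivalently, a hedged sed sketch for the two address changes; it anchors on the flag names because a bare IP substitution would also touch the .140 entry in --etcd-servers:

cd /opt/kubernetes/cfg
sed -i 's/--bind-address=192.168.184.140/--bind-address=192.168.184.145/' kube-apiserver
sed -i 's/--advertise-address=192.168.184.140/--advertise-address=192.168.184.145/' kube-apiserver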
# Copy master1's existing etcd certificates for master2 to use (run on master1)
scp -r /opt/etcd/ root@192.168.184.145:/opt/

# Start the kube-apiserver, kube-controller-manager, and kube-scheduler services (run on master2)
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl status kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl status kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
systemctl status kube-scheduler.service

# Add the environment variable (run on master2)
echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
source /etc/profile

# List the cluster nodes
kubectl get node



2. Deploying the nginx load balancers

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Add the official nginx yum repo and install nginx
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

yum list
yum install nginx -y

# Add a layer-4 (stream) proxy to the nginx config, inserted between the events and http blocks
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.184.140:6443;
        server 192.168.184.145:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
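Before wiring up keepalived, it is worth confirming that the stream block parses and the proxy is listening; a short check using nginx's standard config test:

nginx -t                        # syntax check of the edited config
systemctl start nginx
netstat -natp | grep 6443       # nginx should now be listening on 6443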
# Deploy keepalived on both nginx machines
## nginx-master
yum install keepalived -y

# Edit the keepalived config on nginx-master
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.184.200/24
    }
    track_script {
        check_nginx
    }
}

## nginx-slave
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.184.200/24
    }
    track_script {
        check_nginx
    }
}

# Create the nginx check script under /etc/nginx/
vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
# Start the keepalived service
systemctl start keepalived.service

# On nginx-master, use ip a to confirm the floating address (VIP) is present
# On nginx-master, simulate a failure with pkill nginx
# On nginx-slave, use ip a to confirm the VIP has floated over
# To recover, start nginx on nginx-master first, then start keepalived again
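The failover drill above, spelled out as commands (interface ens33 as configured in keepalived.conf; run each block on the host named in its comment):

# On nginx-master: the VIP should be bound here first
ip a show ens33 | grep 192.168.184.200

# On nginx-master: simulate a failure; check_nginx.sh sees nginx gone and stops keepalived
pkill nginx

# On nginx-slave: the VIP should have floated over
ip a show ens33 | grep 192.168.184.200

# On nginx-master: recover (nginx first, then keepalived); the higher priority reclaims the VIP
systemctl start nginx
systemctl start keepalived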




3. Node configuration

# Point both node machines' config files at the VIP
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
server: https://192.168.184.200:6443
vim /opt/kubernetes/cfg/kubelet.kubeconfig
server: https://192.168.184.200:6443
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
server: https://192.168.184.200:6443

# Self-check: every kubeconfig should now reference the .200 VIP
cd /opt/kubernetes/cfg
grep 200 *.kubeconfig
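The same repointing as a hedged sed one-liner, followed by a restart so kubelet and kube-proxy pick up the new server address:

cd /opt/kubernetes/cfg
sed -i 's#https://192.168.184.140:6443#https://192.168.184.200:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
systemctl restart kubelet kube-proxy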

VII. Testing

1. Operations on master1

# Create a pod
kubectl run nginx --image=nginx

# Check its status
kubectl get pods

# See which node the pod landed on
kubectl get pods -o wide

# On node1, list the containers
docker ps -a


2. curl test from node1

# From node1, access the Pod's IP address
curl 172.17.9.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
