Kubernetes officially provides three ways to deploy a cluster:

  • minikube
    Minikube is a tool that quickly runs a single-node Kubernetes locally. It is only intended for trying out Kubernetes or for day-to-day development. Deployment docs: https://kubernetes.io/docs/setup/minikube/
  • kubeadm
    Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Deployment docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
  • Binary packages
    Recommended: download the release binary packages from the official site and deploy each component by hand to form a Kubernetes cluster. Download address: https://github.com/kubernetes/kubernetes/releases

(minikube is generally not used; kubeadm is popular with small and medium-sized companies because it is quick and convenient, but when something goes wrong it is harder to troubleshoot; the binary-package approach is the one recommended here: deploying each component by hand lets you configure every component, and when a failure occurs you can locate the faulty component to troubleshoot it, but it requires familiarity with binary installation.)

Single-master cluster architecture diagram

Multi-master cluster architecture diagram

Self-signed SSL certificates

Etcd database cluster deployment

  • Binary package download address
    https://github.com/etcd-io/etcd/releases
  • Check the cluster status
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.0.x:2379,https://192.168.0.x:2379,https://192.168.0.x:2379" \
cluster-health

Flannel container cluster network deployment

  • Overlay Network: an overlay network, a virtual networking technique layered on top of the underlying (physical) network, in which hosts are connected by virtual links.
  • VXLAN: encapsulates the original packet inside UDP, uses the IP/MAC of the underlay network as the outer header, transmits it over Ethernet, and at the destination the tunnel endpoint decapsulates it and delivers the data to the target address.
  • Flannel: one kind of overlay network; it likewise wraps the original packet inside another network packet for routing, forwarding and communication. It currently supports UDP, VXLAN, AWS VPC and GCE routing as data-forwarding backends. (A quick check of the VXLAN device is shown right after this list.)
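Once flannel is running later in this walkthrough, the VXLAN encapsulation described above can be observed directly on a node. A minimal check, assuming the default interface name flannel.1 (standard iproute2 commands, not part of the original steps):

# Show the VXLAN parameters (VNI, UDP port, local VTEP address) of the flannel interface
ip -d link show flannel.1
# Show the forwarding-database entries flannel programs for the other nodes
bridge fdb show dev flannel.1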

Kubernetes binary deployment


Server planning:

Server                     IP
master1                    20.0.0.31/24
master2                    20.0.0.34/24
node1                      20.0.0.32/24
node2                      20.0.0.33/24
lb1                        20.0.0.35/24
lb2                        20.0.0.36/24
Harbor private registry    20.0.0.37/24

Download the officially built binary packages:
Official address: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1

//On the master node

[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/
[root@localhost ~]# vim etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

[root@localhost ~]# vim etcd-cert.sh
cat > ca-config.json <<EOF
{"signing": {"default": {"expiry": "87600h"},"profiles": {"www": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"]}}}
}
EOF

cat > ca-csr.json <<EOF
{"CN": "etcd CA","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "Beijing","ST": "Beijing"}]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{"CN": "etcd","hosts": ["10.206.240.188","10.206.240.189","10.206.240.111"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing"}]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

[root@localhost k8s]# ls
etcd.sh  etcd-cert.sh

# Download the certificate generation tools

[root@localhost k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

# Download the official cfssl packages

[root@localhost k8s]# bash cfssl.sh
[root@localhost k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
# cfssl: the tool that generates certificates
# cfssljson: generates certificate files from the JSON passed to it
# cfssl-certinfo: displays certificate information
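Once certificates have been generated in the steps below, cfssl-certinfo is handy for double-checking what was actually signed. A small example, assuming ca.pem and server.pem are in the current directory:

# Inspect the CA certificate (subject, validity period, usages)
cfssl-certinfo -cert ca.pem
# Inspect the etcd server certificate and verify the embedded hosts/SANs
cfssl-certinfo -cert server.pem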

# Define the CA configuration

cat > ca-config.json <<EOF
{"signing": {"default": {"expiry": "87600h"},"profiles": {"www": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"     ]  } }         }
}
EOF

# Create the CA certificate signing request

cat > ca-csr.json <<EOF
{   "CN": "etcd CA","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "Beijing","ST": "Beijing"}]
}
EOF

# Generate the CA certificate; this produces ca-key.pem and ca.pem

[root@localhost etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
(output)
2020/01/13 16:32:56 [INFO] generating a new CA key and certificate from CSR
2020/01/13 16:32:56 [INFO] generate received request
2020/01/13 16:32:56 [INFO] received CSR
2020/01/13 16:32:56 [INFO] generating key: rsa-2048
2020/01/13 16:32:56 [INFO] encoded CSR
2020/01/13 16:32:56 [INFO] signed certificate with serial number 595395605361409801445623232629543954602649157326

# Specify the hosts used for communication between the three etcd members (we are deploying the single-master cluster first, 1 master + 2 nodes, and the 3 etcd members run on those 3 machines, so those 3 server IPs are listed here)

cat > server-csr.json <<EOF
{"CN": "etcd","hosts": ["20.0.0.31","20.0.0.32","20.0.0.33"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing"}]
}
EOF

# Generate the etcd server certificate: server-key.pem and server.pem

[root@localhost etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
(output)
2020/01/13 17:01:30 [INFO] generate received request
2020/01/13 17:01:30 [INFO] received CSR
2020/01/13 17:01:30 [INFO] generating key: rsa-2048
2020/01/13 17:01:30 [INFO] encoded CSR
2020/01/13 17:01:30 [INFO] signed certificate with serial number 202782620910318985225034109831178600652439985681
2020/01/13 17:01:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# etcd binary package address: https://github.com/etcd-io/etcd/releases
Download these packages:
etcd-v3.3.10-linux-amd64.tar.gz
flannel-v0.10.0-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz

# Put these three packages in the current directory

[root@localhost etcd-cert]# ls
ca-config.json  etcd-cert.sh                          server-csr.json
ca.csr          etcd-v3.3.10-linux-amd64.tar.gz       server-key.pem
ca-csr.json     flannel-v0.10.0-linux-amd64.tar.gz    server.pem
ca-key.pem      kubernetes-server-linux-amd64.tar.gz
ca.pem          server.csr      cfssl.sh
(everything now in this directory)

[root@localhost etcd-cert]# mv *.tar.gz ../
[root@localhost etcd-cert]# cd ..
[root@localhost k8s]# ls
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz

[root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

[root@localhost k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p    //config files, binaries, certificates
[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/

# Copy the certificates

[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/

# Run the script; it waits for the other members to join, so it blocks and the shell will not accept further input

[root@localhost k8s]# bash etcd.sh etcd01 20.0.0.31 etcd02=https://20.0.0.32:2380,etcd03=https://20.0.0.33:2380

# Open another session; you will see that the etcd process is already running

[root@localhost ~]# ps -ef | grep etcd

# Copy the certificates and configuration to the other nodes (node1, node2).
# The nodes need the whole /opt/etcd directory (cfg, bin, ssl), so copy it over as well if it is not already there, e.g. scp -r /opt/etcd/ root@20.0.0.32:/opt/ and scp -r /opt/etcd/ root@20.0.0.33:/opt/

[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.32:/usr/lib/systemd/system/
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.33:/usr/lib/systemd/system/

//On node1, edit the etcd configuration file that was copied over

[root@localhost ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" (这里原本是etcd01,因为是复制过来的所以需要改一下,改为etcd02)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.32:2380" (IP改为node1的IP)
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.32:2379" (IP改为node1的IP)#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.32:2380" (IP改为node1的IP)
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.32:2379" (IP改为node1的IP)
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.31:2380,etcd02=https://20.0.0.32:2380,etcd03=https://20.0.0.33:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service

[root@localhost ssl]# systemctl start etcd
[root@localhost ssl]# systemctl status etcd

//On node2, make the corresponding changes

[root@localhost ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" (这里原本是etcd01,因为是复制过来的所以需要改一下,改为etcd03)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.33:2380" (IP改为node2的IP)
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.33:2379" (IP改为node2的IP)#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.33:2380" (IP改为node2的IP)
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.33:2379" (IP改为node2的IP)
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.31:2380,etcd02=https://20.0.0.32:2380,etcd03=https://20.0.0.33:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service

[root@localhost ssl]# systemctl start etcd
[root@localhost ssl]# systemctl status etcd

//On the master, check the cluster status

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379" cluster-health
(output)
member 3eae9a550e2e3ec is healthy: got healthy result from https://20.0.0.33:2379
member 26cd4dcf17bc5cbd is healthy: got healthy result from https://20.0.0.32:2379
member 2fcd2df8a9411750 is healthy: got healthy result from https://20.0.0.31:2379
cluster is healthy
(if all members are listed and the output ends with "cluster is healthy", the etcd cluster is OK)
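The command above uses the etcdctl v2 API that this etcd 3.3 build exposes by default. For reference, roughly the same health check with the v3 API would look like the sketch below (note the different flag names; adjust the paths if yours differ):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379" \
endpoint health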

# Deploy the Docker engine on all node servers

1. Install dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2

2. Configure the Aliyun mirror repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker-CE
yum install -y docker-ce

4. Start the docker service and enable it at boot
systemctl start docker.service
systemctl enable docker.service

5. Configure a registry mirror (accelerator)
Find the accelerator address under your own Aliyun account and fill it in below.
tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors":["https://xxxxxxxx.aliyuncs.com"]}
EOF
(the accelerator address is shown in your Aliyun account; every account has its own address)
systemctl daemon-reload
systemctl restart docker

6. Network tuning
vim /etc/sysctl.conf
net.ipv4.ip_forward=1

sysctl -p
service network restart
systemctl restart docker

Flannel network configuration

//Write the allocated subnet into etcd for flannel to use

//On the master
# Write the network configuration

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
(output)
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

# Read back the written configuration

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379" get /coreos.com/network/config
(output)
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

# Copy the flannel package to all node servers (flannel only needs to be deployed on the nodes)

[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@20.0.0.32:/root
[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@20.0.0.33:/root

//On all node servers, here node1 and node2

# Upload the flannel package to the node servers and extract it

[root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md

# Create a Kubernetes working directory

[root@localhost ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

# Write the flannel script (flannel.sh)

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

# Enable the flannel network

[root@localhost ~]# bash flannel.sh https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379
(output)
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

# Configure docker to use the flannel network

[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
(add the required configuration in the location shown below)
Where to add EnvironmentFile=/run/flannel/subnet.env and $DOCKER_NETWORK_OPTIONS:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env    (this line is added)
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    ($DOCKER_NETWORK_OPTIONS is added)
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@localhost ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"

# Note: bip specifies the docker bridge subnet used at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"

# Restart the docker service

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

# Check the flannel network (each node gets a different subnet; go by what you actually see)

[root@localhost ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.84.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::fc7c:e1ff:fe1d:224  prefixlen 64  scopeid 0x20<link>
        ether fe:7c:e1:1d:02:24  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26 overruns 0  carrier 0  collisions 0
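Besides flannel.1, docker0 should end up inside the flannel-assigned /24 once docker has been restarted with DOCKER_NETWORK_OPTIONS. A quick way to confirm the two line up (the addresses will differ on each node):

# The docker0 address should fall inside the subnet recorded in /run/flannel/subnet.env
ip addr show docker0 | grep inet
cat /run/flannel/subnet.env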

# Test that the docker0 address on the other node can be pinged, to verify that flannel provides the routing

[root@localhost ~]# docker run -it centos:7 /bin/bash
[root@5f9a65565b53 /]# yum install net-tools -y
Then ping the flannel address of the other node.
If the ping succeeds, flannel networking is working.
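A concrete version of this test, with the container IP used purely as an example; use whatever addresses your own containers actually receive:

# On node1: start a test container and note its IP (for example 172.17.84.2)
docker run -it centos:7 /bin/bash
yum install net-tools -y
ifconfig eth0

# On node2: start another container and ping the node1 container's address
docker run -it centos:7 /bin/bash
ping 172.17.84.2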

Deploying the master components

//On the master: generate the api-server certificates
# Upload master.zip to the master server

[root@localhost k8s]# unzip master.zip
[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s]# mkdir k8s-cert
[root@localhost k8s]# cd k8s-cert/
[root@localhost k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{"signing": {"default": {"expiry": "87600h"},"profiles": {"kubernetes": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"]}}}
}
EOF

cat > ca-csr.json <<EOF
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "Beijing","ST": "Beijing","O": "k8s","OU": "System"}]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "20.0.0.31",      //master1
    "20.0.0.34",      //master2
    "20.0.0.100",     //vip
    "20.0.0.35",      //lb (load balancer master)
    "20.0.0.36",      //lb (load balancer backup)
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {"algo": "rsa","size": 2048},
  "names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}]
}
EOF
(the // notes are annotations for the reader; do not leave them in the actual file)

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{"CN": "admin","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "system:masters","OU": "System"}]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{"CN": "system:kube-proxy","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@localhost k8s-cert]# ls
k8s-cert.sh

# Generate the Kubernetes certificates

[root@localhost k8s-cert]# bash k8s-cert.sh

## View all the certificates

[root@localhost k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem

## Copy all certificates whose names start with ca and server to /opt/kubernetes/ssl/

[root@localhost k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/

# Extract the kubernetes archive

[root@localhost k8s-cert]# cd ..
[root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@localhost k8s]# cd /root/k8s/kubernetes/server/bin

# Copy the key command binaries to /opt/kubernetes/bin/

[root@localhost bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

# Create the bootstrap role token file token.csv

[root@localhost k8s]# cd /root/k8s
[root@localhost k8s]# vim /opt/kubernetes/cfg/token.csv
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
(serial number, user name, UID, role) The serial number is different in every deployment; before editing the file, generate a random one with the following command:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
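A minimal sketch that generates a fresh serial number and writes token.csv in one step (the path is the one used in this walkthrough; adjust it if yours differs):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat /opt/kubernetes/cfg/token.csv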

# With the binaries, token and certificates in place, start the apiserver

[root@localhost k8s]# bash apiserver.sh 20.0.0.31 https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379
(output)
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

# Check that the process started successfully

[root@localhost k8s]# ps aux | grep kube

# View the configuration file

[root@localhost k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379 \
--bind-address=20.0.0.31 \
--secure-port=6443 \
--advertise-address=20.0.0.31 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

# The HTTPS port it listens on

[root@localhost k8s]# netstat -ntap | grep 6443
tcp        0      0 192.168.195.149:6443    0.0.0.0:*               LISTEN      46459/kube-apiserve
tcp        0      0 192.168.195.149:6443    20.0.0.31:36806   ESTABLISHED 46459/kube-apiserve
tcp        0      0 192.168.195.149:36806   20.0.0.31:6443    ESTABLISHED 46459/kube-apiserve

[root@localhost k8s]# netstat -ntap | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN
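Because this kube-apiserver version still serves the local insecure port 8080 (see the netstat output above), a simple health probe can be run from the master itself without certificates; this is just a quick sanity check, not a monitoring setup:

# Should print "ok" when the apiserver is healthy
curl http://127.0.0.1:8080/healthz
# Show the version served locally
curl http://127.0.0.1:8080/version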

# Start controller-manager (kube-scheduler is started the same way with its scheduler.sh script)

[root@localhost k8s]# chmod +x controller-manager.sh
[root@localhost k8s]# ./controller-manager.sh 127.0.0.1
(output)
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

# Check the status of the master components

[root@localhost k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Node deployment

//On the master
# Copy kubelet and kube-proxy to the nodes

[root@localhost bin]# scp kubelet kube-proxy root@20.0.0.32:/opt/kubernetes/bin/
[root@localhost bin]# scp kubelet kube-proxy root@20.0.0.33:/opt/kubernetes/bin/

//On node01 (upload node.zip to /root and extract it there)

[root@localhost ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  视频  文档  音乐
flannel.sh       initial-setup-ks.cfg                README.md  模板  图片  下载  桌面

# Extract node.zip to obtain kubelet.sh and proxy.sh

[root@localhost ~]# unzip node.zip

//On the master

[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/

# Copy in the kubeconfig.sh file and rename it

[root@localhost kubeconfig]# mv kubeconfig.sh kubeconfig
[root@localhost kubeconfig]# vim kubeconfig
Delete the following section from the script:
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Get the token; check it on the master

[root@localhost ~]# cat /opt/kubernetes/cfg/token.csv
6351d652249951f79c33acdab329e4c4,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=6351d652249951f79c33acdab329e4c4 \    (fill in the token you just looked up)
  --kubeconfig=bootstrap.kubeconfig
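For context, bootstrap.kubeconfig is assembled from a handful of kubectl config calls. The sketch below shows the rough sequence, assuming the APISERVER and SSL_DIR values used later in this section; the actual kubeconfig script from the course material may differ in details:

APISERVER=20.0.0.31
SSL_DIR=/root/k8s/k8s-cert
BOOTSTRAP_TOKEN=6351d652249951f79c33acdab329e4c4    # the token from token.csv
export KUBE_APISERVER="https://$APISERVER:6443"

# bootstrap.kubeconfig, used by kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig is built the same way, using the kube-proxy client
# certificate and key (--client-certificate/--client-key) instead of the token.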

# Set the environment variable (it can be written into /etc/profile)

[root@localhost kubeconfig]# vim /etc/profile
(add export PATH=$PATH:/opt/kubernetes/bin/ as the last line, then source /etc/profile)
//[root@localhost kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/    (it can also be exported directly like this)

[root@localhost kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

# Generate the kubeconfig files

[root@localhost kubeconfig]# bash kubeconfig 20.0.0.31 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

[root@localhost kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

# Copy the kubeconfig files to the nodes

[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.32:/opt/kubernetes/cfg/
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.33:/opt/kubernetes/cfg/

# Create the bootstrap clusterrolebinding that grants permission to connect to the apiserver and request certificate signing (critical)

[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
(output)
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

//On node01

[root@localhost ~]# bash kubelet.sh 20.0.0.32
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

# Check that the kubelet service started

[root@localhost ~]# ps aux | grep kube
root     106845  1.4  1.1 371744 44780 ?        Ssl  00:34   0:01 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.195.150 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfgkubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root     106876  0.0  0.0 112676   984 pts/0    S+   00:35   0:00 grep --color=auto kube

//On the master

[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   4m27s   kubelet-bootstrap   Pending   (waiting for the cluster to issue this node a certificate)

[root@localhost kubeconfig]# kubectl certificate approve node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A
(the NAME is different every time; copy it from the output above)
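The CSR name is unique each time, so it has to be copied from the kubectl get csr output. If several nodes are bootstrapping at once, a shortcut is to approve everything that is pending; fine in a lab, not something to run blindly in production:

kubectl get csr -o name | xargs -n 1 kubectl certificate approve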

# Check the certificate request status again

[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   8m56s   kubelet-bootstrap   Approved,Issued   (the node has been allowed to join the cluster)

# Check the cluster nodes; node1 has joined successfully

[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE    VERSION
20.0.0.32   Ready    <none>   118s   v1.12.3

# On node1, start the kube-proxy service

[root@localhost ~]# bash proxy.sh 20.0.0.32
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@localhost ~]# systemctl status kube-proxy.service
(should be in the running state)

Node2 deployment

//On node1

# Copy the existing /opt/kubernetes directory to node2 and modify it there

[root@localhost ~]# scp -r /opt/kubernetes/ root@20.0.0.33:/opt/

# Copy the kubelet and kube-proxy service unit files to node2

[root@localhost ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@20.0.0.33:/usr/lib/systemd/system/

//On node2, make the changes

# First delete the copied certificates; node2 will request its own certificate later

[root@localhost ~]# cd /opt/kubernetes/ssl/
[root@localhost ssl]# rm -rf *

# Modify the three configuration files: kubelet, kubelet.config and kube-proxy

[root@localhost ssl]# cd ../cfg/
[root@localhost cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.33 \    (was 20.0.0.32; change it to 20.0.0.33)
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@localhost cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 20.0.0.33    (was 20.0.0.32; change it to 20.0.0.33)
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@localhost cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.33 \    (was 20.0.0.32; change it to 20.0.0.33)
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

# Start the services

[root@localhost cfg]# systemctl start kubelet.service
[root@localhost cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@localhost cfg]# systemctl start kube-proxy.service
[root@localhost cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
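Because kube-proxy runs with --proxy-mode=ipvs here, the virtual-server rules it programs can be inspected on the node once the ipvsadm tool is installed (a quick sanity check; the output grows as services are created):

yum install -y ipvsadm
ipvsadm -Ln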

//On the master, check the pending request

[root@localhost k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-OaH9HpIKh6AKlfdjEKm4C6aJ0UT_1YxNaa70yEAxnsU   15s   kubelet-bootstrap   Pending
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   4m27s   kubelet-bootstrap   approve
(there is another request in the Pending state; approve it so the node can join the cluster, just as before)

# Approve the request so the node can join the cluster

[root@localhost k8s]# kubectl certificate approve node-csr-OaH9HpIKh6AKlfdjEKm4C6aJ0UT_1YxNaa70yEAxnsU

# View the nodes in the cluster

[root@localhost k8s]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.32   Ready    <none>   11h   v1.12.3
20.0.0.33   Ready    <none>   11h   v1.12.3

At this point, the single-master deployment is complete.

Deploying the master2 node

# Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vim /etc/selinux/config
SELINUX=disabled

//On master1
# Copy the kubernetes directory to master2

[root@localhost k8s]# scp -r /opt/kubernetes/ root@20.0.0.34:/opt

# Copy the three master component service unit files

kube-apiserver.service     kube-controller-manager.service      kube-scheduler.service
[root@localhost k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@20.0.0.34:/usr/lib/systemd/system/

//On master2

# Modify the IP addresses in the kube-apiserver configuration file

[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.31:2379,https://20.0.0.32:2379,https://20.0.0.33:2379 \
--bind-address=20.0.0.34 \      (was master1's IP; change it to master2's IP)
--secure-port=6443 \
--advertise-address=20.0.0.34 \ (was master1's IP; change it to master2's IP)
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

# master2 must have the etcd certificates; copy the existing etcd certificates from master1 for master2 to use

[root@localhost k8s]# scp -r /opt/etcd/ root@20.0.0.34:/opt/

# Start the three component services on master2

[root@localhost cfg]# systemctl start kube-apiserver.service
[root@localhost cfg]# systemctl start kube-controller-manager.service
[root@localhost cfg]# systemctl start kube-scheduler.service

# Add the environment variable

[root@localhost cfg]# vim /etc/profile
Add at the end: export PATH=$PATH:/opt/kubernetes/bin/

[root@localhost cfg]# source /etc/profile
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
20.0.0.32   Ready    <none>   2d12h   v1.12.3
20.0.0.33   Ready    <none>   38h     v1.12.3

At this point, the multi-master deployment is complete.

There are now two masters and two nodes, with the three etcd members running on master1, node1 and node2.

Kubernetes load balancer deployment

//On lb1 and lb2

# Install the nginx service

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

[root@localhost ~]# yum -y install nginx

# Add layer-4 (TCP) forwarding

[root@localhost ~]# vim /etc/nginx/nginx.conf
(add a stream block between the existing events block and the http block)

events {
    worker_connections  1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 20.0.0.31:6443;
        server 20.0.0.34:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
...    (the http block below stays unchanged)

[root@localhost ~]# systemctl start nginx
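Before pointing the nodes at the load balancers, it is worth validating the nginx configuration and confirming that the stream proxy really listens on 6443. A quick check on each lb host:

nginx -t
netstat -ntap | grep 6443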

# Deploy the keepalived service

[root@localhost ~]# yum -y install keepalived

# Edit the configuration file

[root@localhost ~]# vim keepalived.conf
! Configuration File for keepalived

global_defs {
   # recipient email addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender email address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 100            # priority; set to 90 on the backup server
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.188/24
    }
    track_script {
        check_nginx
    }
}

mkdir /usr/local/nginx/sbin/ -p
vim /usr/local/nginx/sbin/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    /etc/init.d/keepalived stop
fi

chmod +x /usr/local/nginx/sbin/check_nginx.sh

[root@localhost ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes

# lb1 is the MASTER; its configuration is as follows

! Configuration File for keepalived

global_defs {
   # recipient email addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender email address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 100            # priority; set to 90 on the backup server
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.100/24
    }
    track_script {
        check_nginx
    }
}

# lb2 is the BACKUP; its configuration is as follows

! Configuration File for keepalived

global_defs {
   # recipient email addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender email address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 90             # priority; the backup server uses 90
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.100/24
    }
    track_script {
        check_nginx
    }
}

[root@localhost ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh
[root@localhost ~]# systemctl start keepalived

# Check the address information on lb1

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
    inet 20.0.0.35/24 brd 20.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 20.0.0.100/24 scope global secondary ens33    //the floating VIP is currently on lb1
       valid_lft forever preferred_lft forever
    inet6 fe80::53ba:daab:3e22:e711/64 scope link
       valid_lft forever preferred_lft forever

# Check the address information on lb2

[root@localhost nginx]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c9:9d:88 brd ff:ff:ff:ff:ff:ff
    inet 20.0.0.36/24 brd 20.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::55c0:6788:9feb:550d/64 scope link
       valid_lft forever preferred_lft forever

# Verify VIP failover (run pkill nginx on lb1, then run ip a on lb2 and the VIP should appear there)
# Recovery (on lb1, start the nginx service first, then start the keepalived service)
# The nginx web root is /usr/share/nginx/html
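A concrete version of the failover drill described above (a sketch; run each command on the host indicated):

# On lb1: kill nginx; check_nginx.sh then stops keepalived and the VIP is released
pkill nginx
ip a                       # 20.0.0.100 should disappear from ens33

# On lb2: the VIP should now be present
ip a | grep 20.0.0.100

# Recovery on lb1: start nginx first, then keepalived; the VIP moves back
systemctl start nginx
systemctl start keepalived
ip a | grep 20.0.0.100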

# Now point the node configuration files at the unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)

[root@localhost cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

# Change the server line in all of them to the VIP
server: https://20.0.0.100:6443
[root@localhost cfg]# systemctl restart kubelet.service
[root@localhost cfg]# systemctl restart kube-proxy.service

# Self-check after the replacement
[root@localhost cfg]# grep 100 *
bootstrap.kubeconfig: server: https://20.0.0.100:6443
kubelet.kubeconfig: server: https://20.0.0.100:6443
kube-proxy.kubeconfig: server: https://20.0.0.100:6443

# On lb1, check nginx's k8s access log

[root@localhost ~]# tail /var/log/nginx/k8s-access.log

//On master01
# Test creating a pod

[root@localhost ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created

# Check the status

[root@localhost ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-nf9sk   0/1     ContainerCreating   0          33s   //still being created
[root@localhost ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-nf9sk   1/1     Running   0          80s  //created and running

Note:
# About viewing pod logs

[root@localhost ~]# kubectl logs nginx-dbddb74b8-nf9sk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-nf9sk)
(fix:)
[root@localhost ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

# View the pod network

[root@localhost ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
nginx-dbddb74b8-nf9sk   1/1     Running   0          11m   172.17.71.3   20.0.0.32   <none>

# The pod can be accessed directly from the node on the corresponding subnet

[root@localhost cfg]# curl 172.17.71.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head><body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>

# The access generates a log entry; check it on master1

[root@localhost ~]# kubectl logs nginx-dbddb74b8-nf9sk
172.17.31.1 - - [05/Feb/2020:10:12:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

Deploying the Harbor private registry

1. Install dependency packages

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# vim /etc/selinux/config
SELINUX=disabled    (change this parameter)
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

Notes:
· yum-utils provides yum-config-manager
· The device mapper storage driver requires device-mapper-persistent-data and lvm2
· Device Mapper is the generic device-mapping mechanism in the Linux 2.6 kernel that supports logical volume management;
  it provides a highly modular kernel framework for the block device drivers used in storage resource management.

2. Configure the Aliyun mirror repository

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker-CE

yum install -y docker-ce

4. Start the docker service

[root@localhost ~]# systemctl start docker.service
[root@localhost ~]# systemctl enable docker.service

5. Registry mirror (accelerator)

Configure the image accelerator:
Find the accelerator address under your own Aliyun account and fill it in below.

[root@localhost ~]# tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors":["https://xxxxxxx.mirror.aliyuncs.com"]}
EOF

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

6. Network tuning

[root@localhost ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1

[root@localhost ~]# sysctl -p
[root@localhost ~]# service network restart
[root@localhost ~]# systemctl restart docker

7. Download the Harbor installer

[root@localhost ~]# wget http://harbor.orientsoft.cn/harbor-1.2.2/harbor-offline-installer-v1.2.2.tgz
(you can also download the package ahead of time and upload it to the server)

8. Deploy Harbor

[root@localhost ~]# tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/

9. Configure the Harbor parameter file

[root@localhost ~]# vim /usr/local/harbor/harbor.cfg
//5 hostname=20.0.0.37    (change the parameter on line 5)

The parameters in harbor.cfg fall into two groups: required parameters and optional parameters.

(1) Required parameters. These must be set in harbor.cfg. If you change them and re-run the install.sh script to reinstall Harbor, the new values take effect. Specifically:
· hostname: used to access the UI and the registry service. It should be the target machine's IP address or fully qualified domain name (FQDN), e.g. 192.168.195.128 or hub.kgc.cn. Do not use localhost or 127.0.0.1 as the hostname.
· ui_url_protocol: http or https, default http. The protocol used to access the UI and the token/notification service. If Notary is enabled, this must be https.
· max_job_workers: the number of image-replication job worker threads.
· db_password: the password of the MySQL root user used by db_auth.
· customize_crt: on or off, default on. When on, the prepare script creates a private key and root certificate used to generate and verify registry tokens; set it to off when the key and root certificate are supplied externally.
· ssl_cert: path to the SSL certificate, applied only when the protocol is https.
· ssl_cert_key: path to the SSL key, applied only when the protocol is https.
· secretkey_path: path of the key used to encrypt or decrypt the remote registry password in replication policies.

(2) Optional parameters. These are optional for updates: you can keep the defaults and change them later in the Web UI after Harbor starts. Values in harbor.cfg only take effect the first time Harbor starts; subsequent edits to these parameters in harbor.cfg are ignored. Note: if you choose to set them through the UI, do so immediately after starting Harbor. In particular, set the required auth_mode before registering or creating any new user in Harbor; once there are users in the system (other than the default admin), auth_mode can no longer be changed. Specifically:
· Email: Harbor needs these settings to send "password reset" emails to users, and they are only required if that feature is used. Note that SSL is not enabled for the connection by default; if the SMTP server requires SSL but does not support STARTTLS, enable SSL by setting email_ssl = TRUE.
· harbor_admin_password: the administrator's initial password, effective only the first time Harbor starts. After that, this setting is ignored and the admin password should be changed in the UI. The default username/password is admin/Harbor12345.
· auth_mode: the authentication type. The default is db_auth, i.e. credentials are stored in the database. For LDAP authentication set it to ldap_auth.
· self_registration: enables/disables user self-registration. When disabled, new users can only be created by the admin user. Note: when auth_mode is set to ldap_auth, self-registration is always disabled and this flag is ignored.
· token_expiration: expiration time (in minutes) of tokens created by the token service, default 30 minutes.
· project_creation_restriction: controls which users may create projects. By default everyone can create projects; set it to "adminonly" so that only admins can.
· verify_remote_cert: on or off, default on. Determines whether Harbor verifies the SSL/TLS certificate when communicating with a remote registry instance. Setting it to off bypasses SSL/TLS verification, which is often used when the remote instance has a self-signed or untrusted certificate.

In addition, Harbor stores images on the local filesystem by default. In production, consider a different storage backend such as S3, OpenStack Swift or Ceph instead of the local filesystem; this requires updating the common/templates/registry/config.yml file.
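For orientation, the handful of harbor.cfg lines that typically get touched for a plain-HTTP lab install look roughly like this; the hostname is this walkthrough's value, the rest are illustrative defaults that can usually stay as shipped:

hostname = 20.0.0.37
ui_url_protocol = http
db_password = root123
harbor_admin_password = Harbor12345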

10. Start Harbor

[root@localhost ~]# sh /usr/local/harbor/install.sh

11. Check the images started by Harbor
# View the images

[root@localhost ~]# docker images

# View the containers

[root@localhost ~]# docker ps -a
[root@localhost ~]# cd /usr/local/harbor/
[root@localhost ~]# docker-compose ps

If everything is normal, you can open a browser and visit the management page at http://20.0.0.37. The default admin username/password is admin/Harbor12345.
(For more on building a Harbor private registry, see my other Harbor article: https://blog.csdn.net/KY05QK/article/details/109718475)

//In Harbor's web management UI, create a project and give it a name of your choice.

# Configure the nodes to connect to the private registry (run on the node servers)

[root@localhost ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://xxxxxxxx.mirror.aliyuncs.com"],
  "insecure-registries": ["20.0.0.37"]    //add this line, and add a comma at the end of the previous line
}
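Docker only re-reads daemon.json on restart, so after adding the insecure-registries entry reload and restart the daemon on each node:

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker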

# Log in to the Harbor private registry

[root@localhost ~]# docker login 20.0.0.37
Username: admin
Password:     //enter the password Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

# Pull the Tomcat image to push as a test

[root@localhost ~]# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
dc65f448a2e2: Pull complete
346ffb2b67d7: Pull complete
dea4ecac934f: Pull complete
8ac92ddf84b3: Pull complete
d8ef64070a18: Pull complete
6577248b0d6e: Pull complete
576c0a3a6af9: Pull complete
6e0159bd18db: Pull complete
8c831308dd9e: Pull complete
c603174def53: Pull complete
Digest: sha256:e895bcbfa20cf4f3f19ca11451dabc166fc8e827dfad9dd714ecaa8c065a3b18
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest

# Push format

 docker tag SOURCE_IMAGE[:TAG] 20.0.0.37/project/IMAGE[:TAG]
//After opening the project you created, click the "Push Image" option on the right; it shows this command template with a button to copy it.

# Tag the image

[root@localhost ~]# docker tag tomcat 20.0.0.37/project/tomcat

# Push the image; it succeeds

[root@localhost ~]# docker push 20.0.0.37/project/tomcat
The push refers to repository [192.168.195.80/project/tomcat]
462b69413f6f: Pushed
d378747b2549: Pushed
78f5460c83b5: Pushed
c601709dd5d2: Pushed
72ce39f2b7f6: Pushed
33783834b288: Pushed
5c813a85f7f0: Pushed
bdca38f94ff0: Pushed
faac394a1ad3: Pushed
ce8168f12337: Pushed
latest: digest: sha256:8ffa1b72bf611ac305523ed5bd6329afd051c7211fbe5f0b5c46ea5fb1adba46 size: 2421

# The uploaded image now shows up under that project in the Harbor UI.

# Pull this image on the other node

[root@localhost ~]# docker pull 20.0.0.37/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 20.0.0.37/project/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
//the pull fails because this node is not logged in; log in first
[root@localhost ~]# docker login 20.0.0.37
Username: admin
Password:     //enter the password Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

[root@localhost ~]# docker push 20.0.0.37/project/tomcat
The push refers to repository [192.168.195.80/project/tomcat]
462b69413f6f: Pushed
d378747b2549: Pushed
78f5460c83b5: Pushed
c601709dd5d2: Pushed
72ce39f2b7f6: Pushed
33783834b288: Pushed
5c813a85f7f0: Pushed
bdca38f94ff0: Pushed
faac394a1ad3: Pushed
ce8168f12337: Pushed
latest: digest: sha256:8ffa1b72bf611ac305523ed5bd6329afd051c7211fbe5f0b5c46ea5fb1adba46 size: 2421

Deploying the Kubernetes web UI (Dashboard)

//On master01

# Create a dashboard working directory

[root@localhost k8s]# mkdir dashboard

# Fetch the official manifest files
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

# Copy these files into the dashboard directory

[root@localhost dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml

# Create the resources (the order must not be changed)

[root@localhost dashboard]# kubectl create -f dashboard-rbac.yaml
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

[root@localhost dashboard]# kubectl create -f dashboard-secret.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created

[root@localhost dashboard]# kubectl create -f dashboard-configmap.yaml
configmap/kubernetes-dashboard-settings created

[root@localhost dashboard]# kubectl create -f dashboard-controller.yaml
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created

[root@localhost dashboard]# kubectl create -f dashboard-service.yaml
service/kubernetes-dashboard created

# When done, check the resources created in the kube-system namespace

[root@localhost dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
kubernetes-dashboard-65f974f565-m9gm8   0/1     ContainerCreating   0          88s

# See how to access it

[root@localhost dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-m9gm8   1/1     Running   0          2m49s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.243   <none>        443:30001/TCP   2m24s

# The dashboard can be reached on any node's IP at the NodePort (Firefox can open it directly)
https://20.0.0.32:30001

# Workaround for Chrome not being able to open the page

[root@localhost dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{"CN": "Dashboard","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing"}]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard

kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

[root@localhost dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
2020/02/05 15:29:08 [INFO] received CSR
2020/02/05 15:29:08 [INFO] generating key: rsa-2048
2020/02/05 15:29:09 [INFO] encoded CSR
2020/02/05 15:29:09 [INFO] signed certificate with serial number 150066859036029062260457207091479364937405390263
2020/02/05 15:29:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created

[root@localhost dashboard]# vim dashboard-controller.yaml
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          - --tls-key-file=dashboard-key.pem     (added)
          - --tls-cert-file=dashboard.pem        (added)

# Redeploy (note: if apply does not take effect, delete the resources first and then apply again)

[root@localhost dashboard]# kubectl apply -f dashboard-controller.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kubernetes-dashboard configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kubernetes-dashboard configured

When you open the page again, the browser shows a "Proceed to 20.0.0.32 (unsafe)" link; after clicking it you get two login options, "Kubeconfig" and "Token". We use the token login.

# Generate the login token

[root@localhost dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

# Look up the generated secret

[root@localhost dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-qctfr        kubernetes.io/service-account-token   3      65s
default-token-mmvcg                kubernetes.io/service-account-token   3      7d15h
kubernetes-dashboard-certs         Opaque                                11     10m
kubernetes-dashboard-key-holder    Opaque                                2      63m
kubernetes-dashboard-token-nsc84   kubernetes.io/service-account-token   3      62m

# View the token

[root@localhost dashboard]# kubectl describe secret dashboard-admin-token-qctfr -n kube-system
Name:         dashboard-admin-token-qctfr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 73f19313-47ea-11ea-895a-000c297a15fb

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcWN0ZnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzNmMTkzMTMtNDdlYS0xMWVhLTg5NWEtMDAwYzI5N2ExNWZiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.v4YBoyES2etex6yeMPGfl7OT4U9Ogp-84p6cmx3HohiIS7sSTaCqjb3VIvyrVtjSdlT66ZMRzO3MUgj1HsPxgEzOo9q6xXOCBb429m9Qy-VK2JxuwGVD2dIhcMQkm6nf1Da5ZpcYFs8SNT-djAjZNB_tmMY_Pjao4DBnD2t_JXZUkCUNW_O2D0mUFQP2beE_NE2ZSEtEvmesB8vU2cayTm_94xfvtNjfmGrPwtkdH0iy8sH-T0apepJ7wnZNTGuKOsOJf76tU31qF4E5XRXIt-F2Jmv9pEOFuahSBSaEGwwzXlXOVMSaRF9cBFxn-0iXRh0Aq0K21HdPHW1b4-ZQwA

# Copy everything after "token:" above into the field under "Token" on the login page, then click Sign in.
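The two kubectl commands above can also be combined so the token is printed without looking up the secret name by hand (a convenience one-liner, assuming the service account is named dashboard-admin as in k8s-admin.yaml):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') | awk '/^token:/{print $2}'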
