Binary High-Availability Kubernetes Cluster Deployment
Reference: Kubernetes Full-Stack Architect (binary high-availability k8s cluster deployment) — study notes
I. Basic Configuration
For the k8s high-availability architecture analysis, HA cluster planning, and static IP setup, see the previous article.
1. Configure the hosts file on all nodes (send input to all sessions)
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.218.22.212 k8s-master01
10.218.22.234 k8s-master02
10.218.22.252 k8s-master03
10.218.22.218 k8s-node01
10.218.22.225 k8s-node02
2. Configure the yum repositories on CentOS 7:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
3. Install required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
4. Disable firewalld, dnsmasq, and SELinux on all nodes (CentOS 7 also needs NetworkManager disabled; CentOS 8 does not)
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Check the status (must show Disabled):
getenforce
Disable the swap partition on all nodes and comment out the swap entry in fstab:
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
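To confirm swap is really off, a quick check with standard util-linux commands:
free -h         # the Swap line should read all zeros
swapon --show   # prints nothing when no swap device is active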
5. Synchronize time on all nodes (company machines usually have time sync configured already)
Install ntpdate:
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
Configure time synchronization on all nodes:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
Check the time:
date
Add it to crontab:
crontab -e
# add the following line
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
6. Configure limits on all nodes:
ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
7. Set up passwordless SSH from Master01 to the other nodes (turn off send-input-to-all-sessions). All configuration files and certificates generated during the installation are created on Master01, and the cluster is also managed from Master01. On Alibaba Cloud or AWS you need a separate kubectl server. Key configuration:
ssh-keygen -t rsa
Configure passwordless login from Master01 to the other nodes:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
8. Install basic tools on all nodes (send input to all sessions)
yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git -y
9. Download the installation files on Master01 (turn off send-input-to-all-sessions)
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
10. Upgrade the system on all nodes (send input to all sessions) and reboot. This upgrade excludes the kernel; the kernel is upgraded separately in the next step:
yum update -y --exclude=kernel* && reboot # CentOS 7 needs this upgrade; on CentOS 8 upgrade as needed
11. System and kernel upgrade
CentOS 7 needs the kernel upgraded to 4.18+; here we upgrade to 4.19.
Download the kernel on the master01 node (turn off send-input-to-all-sessions):
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install the kernel on all nodes:
cd /root && yum localinstall -y kernel-ml*
Change the kernel boot order on all nodes:
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19:
grubby --default-kernel
Reboot all nodes, then confirm the running kernel is 4.19:
reboot
uname -a
12. Install ipvsadm on all nodes (for IPVS load balancing):
yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the IPVS modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run:
systemctl enable --now systemd-modules-load.service
Check that the modules are loaded (the systemd unit reloads them after a reboot):
lsmod | grep -e ip_vs -e nf_conntrack
13. Enable the kernel parameters required by a k8s cluster; configure on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
Without net.ipv4.ip_forward enabled, cross-host communication will not work.
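After sysctl --system you can spot-check that a value actually took effect, for example:
sysctl net.ipv4.ip_forward
# expected: net.ipv4.ip_forward = 1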
After configuring the kernel parameters on all nodes, reboot and verify the modules are still loaded:
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
II. Basic Component Installation
1. Install Docker
Install Docker CE 19.03 (the officially recommended version) on all nodes:
yum install docker-ce-19.03.* -y
Newer kubelet versions recommend the systemd cgroup driver, so change Docker's cgroup driver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker to start on boot on all nodes:
systemctl daemon-reload && systemctl enable --now docker
2. Install Kubernetes and etcd
(1) Download the Kubernetes packages on Master01 (turn off send-input-to-all-sessions)
Get the latest version from the official site: https://github.com/kubernetes/kubernetes
Open the CHANGELOG; at the time of writing the latest version is 1.22. Click Server Binaries to get the download link; if a newer version exists, download that instead.
wget https://dl.k8s.io/v1.22.0-beta.1/kubernetes-server-linux-amd64.tar.gz
(2) Download the etcd package (3.4.13 is the officially recommended, verified version)
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
(3) Extract the Kubernetes files; for a binary install, extracting is essentially the whole installation
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
(4) Extract the etcd files
tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
(5) Check versions
# kubelet --version
Kubernetes v1.22.0-beta.1
# etcdctl version
etcdctl version: 3.4.13
API version: 3.4
(6) Copy the components to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
(7) Create /opt/cni/bin on all nodes (send input to all sessions)
mkdir -p /opt/cni/bin
Check the branches:
cd k8s-ha-install/
git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/manual-installation
  remotes/origin/manual-installation-v1.16.x
  remotes/origin/manual-installation-v1.17.x
  remotes/origin/manual-installation-v1.18.x
  remotes/origin/manual-installation-v1.19.x
  remotes/origin/manual-installation-v1.20.x
  remotes/origin/manual-installation-v1.20.x-csi-hostpath
  remotes/origin/manual-installation-v1.21.x
  remotes/origin/master
Switch Master01 to the 1.20.x branch (switch to another branch for other versions) (turn off send-input-to-all-sessions):
git checkout manual-installation-v1.20.x
III. Generating Certificates
This is the most critical part of a binary install — one wrong step ruins everything — so make sure every step is correct.
1. Download the certificate generation tools on Master01
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfsslwget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljsonchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
2. Create the etcd certificate directory on all Master nodes (send input to all sessions, excluding node sessions)
mkdir /etc/etcd/ssl -p
3. Create the Kubernetes directories on all nodes (send input to all sessions)
mkdir -p /etc/kubernetes/pki
4. Generate the etcd certificates on Master01 (turn off send-input-to-all-sessions)
(1) The CSR files are certificate signing requests; they configure the domains, company, and organizational unit
# this directory contains the CSR files needed to generate the certificates
cd /root/k8s-ha-install/pki
# list the files
[root@k8s-master01 pki]# ls
admin-csr.json ca-config.json etcd-ca-csr.json front-proxy-ca-csr.json kubelet-csr.json manager-csr.json
apiserver-csr.json ca-csr.json etcd-csr.json front-proxy-client-csr.json kube-proxy-csr.json scheduler-csr.json
# generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2021/07/23 13:58:44 [INFO] generating a new CA key and certificate from CSR
2021/07/23 13:58:44 [INFO] generate received request
2021/07/23 13:58:44 [INFO] received CSR
2021/07/23 13:58:44 [INFO] generating key: rsa-2048
2021/07/23 13:58:44 [INFO] encoded CSR
2021/07/23 13:58:44 [INFO] signed certificate with serial number 65355458767171380149641516060181865353335743374
(2) Check the generated files
ls /etc/etcd/ssl/
etcd-ca.csr etcd-ca-key.pem etcd-ca.pem
(3) Issue the etcd server certificate
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.218.22.212,10.218.22.234,10.218.22.252 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
(4) Check the generated certificates
ls /etc/etcd/ssl/
# generated files
etcd-ca.csr etcd-ca-key.pem etcd-ca.pem etcd.csr etcd-key.pem etcd.pem
(5) Copy the certificates to the other Master nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'

for NODE in $MasterNodes; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
5. Kubernetes component certificates
(1) Generate the Kubernetes CA on Master01
cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
(2) Check the generated files
ls /etc/kubernetes/pki
ca.csr ca-key.pem ca.pem
(3) Generate the apiserver certificate
10.96.0.0/12 is the k8s Service network; if you need a different Service network, change 10.96.0.1 accordingly. 10.218.3.205 here is the high-availability VIP; if this is not an HA cluster, use Master01's IP instead.
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,10.218.3.205,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.218.22.212,10.218.22.234,10.218.22.252 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
(4) Check the generated certificates
ls /etc/kubernetes/pki
apiserver.csr apiserver-key.pem apiserver.pem ca.csr ca-key.pem ca.pem
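If you want to confirm that the VIP and all master IPs actually made it into the certificate, plain openssl can print the SANs (nothing cluster-specific is assumed here):
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'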
(5) Generate the apiserver aggregation (front-proxy) certificates. requestheader-client-xxx / requestheader-allowed-xxx: aggregator
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
(6) Generate the controller-manager certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Note: if this is not an HA cluster, change 10.218.3.205:6443 to master01's address; 6443 is the default apiserver port
# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.218.3.205:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-context: define a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# use a context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
(7) Generate the scheduler certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Note: if this is not an HA cluster, change 10.218.3.205:6443 to master01's address; 6443 is the default apiserver port
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.218.3.205:6443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
(8) Generate the admin certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Note: if this is not an HA cluster, change 10.218.3.205:6443 to master01's address; 6443 is the default apiserver port
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.218.3.205:6443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
6. How the certificates are distinguished
We generated admin.kubeconfig, scheduler.kubeconfig, and controller-manager.kubeconfig with the same commands; how are they told apart?
Look at admin-csr.json:
cat admin-csr.json
{
  "CN": "admin",               # the user name (Common Name)
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",   # the group this user belongs to
      "OU": "Kubernetes-manual"
    }
  ]
}
The certificate defines a user named admin that belongs to the system:masters group. When k8s is installed it creates a ClusterRole — a cluster-wide role, effectively a permission set — with the highest cluster privileges, together with a ClusterRoleBinding that binds the system:masters group to that role, so every user in the group has full cluster access.
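Once the apiserver is up (later in this guide), you can verify this binding yourself; the default ClusterRoleBinding is named cluster-admin:
kubectl get clusterrolebinding cluster-admin -o yaml
# roleRef points at the cluster-admin ClusterRole; subjects contain the Group system:masters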
7. Create the ServiceAccount key and secret
(1) ServiceAccount is one of the k8s authentication mechanisms; creating a ServiceAccount creates a secret bound to it, and that secret carries a token
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
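A quick sanity check of the generated key pair (plain openssl, no cluster assumptions):
openssl rsa -in /etc/kubernetes/pki/sa.key -noout -check          # validates the private key
openssl rsa -pubin -in /etc/kubernetes/pki/sa.pub -noout -text | head -1   # shows the key size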
(2) Copy the certificates to the other Master nodes
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
(3) Check the certificate files (23 files in total)
ls /etc/kubernetes/pki/
ls /etc/kubernetes/pki/ | wc -l

[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr apiserver-key.pem ca.pem front-proxy-ca.csr front-proxy-client-key.pem scheduler.csr
admin-key.pem apiserver.pem controller-manager.csr front-proxy-ca-key.pem front-proxy-client.pem scheduler-key.pem
admin.pem ca.csr controller-manager-key.pem front-proxy-ca.pem sa.key scheduler.pem
apiserver.csr ca-key.pem controller-manager.pem front-proxy-client.csr sa.pub
[root@k8s-master01 pki]#
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ |wc -l
23
(4) Check the certificate expiry (876000h is 100 years)
cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
IV. High Availability and etcd Configuration
1. etcd configuration
In production, always run an odd number of etcd nodes; otherwise split-brain is likely.
The etcd configuration is largely the same on each node; change the hostname and IP addresses in each Master node's config — the three nodes' configurations differ in those fields.
(1) Master01
vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.218.22.212:2380'
listen-client-urls: 'https://10.218.22.212:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.218.22.212:2380'
advertise-client-urls: 'https://10.218.22.212:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.218.22.212:2380,k8s-master02=https://10.218.22.234:2380,k8s-master03=https://10.218.22.252:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
(2) Master02
vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.218.22.234:2380'
listen-client-urls: 'https://10.218.22.234:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.218.22.234:2380'
advertise-client-urls: 'https://10.218.22.234:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.218.22.212:2380,k8s-master02=https://10.218.22.234:2380,k8s-master03=https://10.218.22.252:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
(3) Master03
vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.218.22.252:2380'
listen-client-urls: 'https://10.218.22.252:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.218.22.252:2380'
advertise-client-urls: 'https://10.218.22.252:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.218.22.212:2380,k8s-master02=https://10.218.22.234:2380,k8s-master03=https://10.218.22.252:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Create and start the etcd service on all Master nodes (send input to all sessions, excluding node sessions):
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on all Master nodes and start the service:
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
Check the etcd status:
export ETCDCTL_API=3
etcdctl --endpoints="10.218.22.212:2379,10.218.22.234:2379,10.218.22.252:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.218.22.212:2379 | d47768538fbe6e4f | 3.4.13 | 20 kB | true | false | 3 | 9 | 9 | |
| 10.218.22.234:2379 | 84a44ce3d2a87a83 | 3.4.13 | 20 kB | false | false | 3 | 9 | 9 | |
| 10.218.22.252:2379 | 5a8e242cc814d239 | 3.4.13 | 20 kB | false | false | 3 | 9 | 9 | |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
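Besides endpoint status, endpoint health is a quick liveness check using the same flags:
etcdctl --endpoints="10.218.22.212:2379,10.218.22.234:2379,10.218.22.252:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
# each endpoint should report something like: is healthy: successfully committed proposal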
V. High-Availability Configuration (skip if not used)
I am using the company's internal load balancer (F5 or layer-4 nginx) instead of haproxy and keepalived, but I record the steps here; they work on your own machines.
If this is not an HA cluster, haproxy and keepalived are unnecessary. On public clouds, use the provider's load balancer (e.g. Alibaba Cloud SLB, Tencent Cloud ELB) instead of haproxy and keepalived, since most public clouds do not support keepalived. Note that on Alibaba Cloud the kubectl control endpoint cannot sit on a master node, because SLB has a loopback problem: servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this problem and is recommended.
Install HAProxy and KeepAlived via yum on all Master nodes (deselect node sessions from send-input-to-all-sessions):
yum install keepalived haproxy -y
Configure HAProxy on all Master nodes (see the HAProxy documentation for details; the configuration is identical on every Master node). Open the file and clear its contents with ggdG in vim:
vim /etc/haproxy/haproxy.cfg
ggdG
Add the following; make sure the first line, global, is copied completely:
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.232.128:6443 check
  server k8s-master02 192.168.232.129:6443 check
  server k8s-master03 192.168.232.130:6443 check
Configure KeepAlived on all Master nodes. The configuration differs per node; mind each node's IP and network interface (the interface parameter).
mkdir /etc/keepalived
vim /etc/keepalived/keepalived.conf
ggdG
Master01 configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.232.128
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.232.236
    }
    track_script {
        chk_apiserver
    }
}
Master02 configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.232.129
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.232.236
    }
    track_script {
        chk_apiserver
    }
}
Master03 configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.232.130
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.232.236
    }
    track_script {
        chk_apiserver
    }
}
Configure the KeepAlived health-check script on all master nodes (send input to all sessions, excluding node sessions):
vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
KeepAlived provides a virtual IP (VIP) that lands on one master node; haproxy listens on port 16443 and reverse-proxies to the three master nodes, so the API server can be reached at the VIP on port 16443.
The health check monitors haproxy's status; after three consecutive failures it stops KeepAlived, and the VIP then fails over to another node.
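Before going further it is worth verifying the VIP from another master node (telnet was installed with the basic tools earlier; 192.168.232.236 is this example's VIP):
ping 192.168.232.236 -c 4
telnet 192.168.232.236 16443
If the telnet connection fails, troubleshoot haproxy; if the VIP does not answer ping, troubleshoot keepalived.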
Make the script executable:
chmod +x /etc/keepalived/check_apiserver.sh
Start haproxy:
systemctl daemon-reload
systemctl enable --now haproxy
Start keepalived:
systemctl enable --now keepalived
Check the system log (look for "Sending gratuitous ARP on ens33 for 192.168.232.236"):
tail -f /var/log/messages
cat /var/log/messages | grep 'ens33' -5
Check the IP:
ip a
VI. Kubernetes Component Configuration
- Apiserver
- ControllerManager
- Scheduler
Create the required directories on all nodes (send input to all sessions):
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
1. Apiserver
Create the kube-apiserver service on all Master nodes. Note: if this is not an HA cluster, change 10.218.3.205 to master01's address.
(1) Master01 configuration (turn off send-input-to-all-sessions)
Note: the k8s Service network is 10.96.0.0/12; it must not overlap with the host network or the Pod network. Adjust as needed.
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=10.218.22.212 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://10.218.22.212:2379,https://10.218.22.234:2379,https://10.218.22.252:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
(2) Master02 and (3) Master03 configuration
Master02 and Master03 use exactly the same unit file as Master01; the only line that changes is --advertise-address, which is 10.218.22.234 on Master02 and 10.218.22.252 on Master03.
(4) Enable kube-apiserver on all Master nodes (send input to all sessions, excluding node sessions)
systemctl daemon-reload && systemctl enable --now kube-apiserver
(5) Check the kube-apiserver status
systemctl status kube-apiserver

● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-23 16:23:47 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 10956 (kube-apiserver)
    Tasks: 13
   Memory: 302.4M
   CGroup: /system.slice/kube-apiserver.service
           └─10956 /usr/local/bin/kube-apiserver --v=2 --logtostderr=true --allow-privileged=true --bind-address=0.0.0.0 --secure-port=64...

Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.176533 10956 storage_rbac.go:331] created rolebinding.rbac.aut...system
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.185158 10956 storage_rbac.go:331] created rolebinding.rbac.aut...system
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.190144 10956 healthz.go:257] poststarthook/rbac/bootstrap-role...readyz
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.192942 10956 storage_rbac.go:331] created rolebinding.rbac.aut...system
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.201305 10956 storage_rbac.go:331] created rolebinding.rbac.aut...system
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.209273 10956 storage_rbac.go:331] created rolebinding.rbac.aut...public
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: W0723 16:23:54.322312 10956 lease.go:233] Resetting endpoints for master serv...2.212]
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.323449 10956 controller.go:611] quota admission added evaluato...points
Jul 23 16:23:54 k8s-master01 kube-apiserver[10956]: I0723 16:23:54.334503 10956 controller.go:611] quota admission added evaluato...k8s.io
Hint: Some lines were ellipsized, use -l to show in full.
2. ControllerManager
Configure the kube-controller-manager service on all Master nodes.
Note: the k8s Pod network is 172.16.0.0/12; it must not overlap with the host network or the k8s Service network. Adjust as needed.
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
Start kube-controller-manager on all Master nodes:
systemctl daemon-reload
systemctl enable --now kube-controller-manager
Check the status:
systemctl status kube-controller-manager
3. Scheduler
Configure the kube-scheduler service on all Master nodes:
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
Start it:
systemctl daemon-reload
systemctl enable --now kube-scheduler
Check the status:
systemctl status kube-scheduler
VII. Automatic Certificate Issuance with TLS Bootstrapping
(1) Bootstrapping automatically issues certificates to nodes — that is, it issues the kubelet certificates.
Why not manage these certificates manually? Master nodes are relatively fixed — once created, they stay the same few machines — but nodes are added, removed, and serviced far more often, and issuing certificates by hand each time would be tedious. A kubelet certificate is bound to the hostname, and every host's name is different, so we need a mechanism that automatically signs the certificate requests the kubelets send in.
(2) Create the bootstrap on Master01 (turn off send-input-to-all-sessions)
Note: if this is not an HA cluster, change 10.218.3.205:6443 to master01's address; 6443 is the default apiserver port.
cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.218.3.205:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
bootstrap-kubelet.kubeconfig is the file the kubelet uses to request a certificate from the apiserver.
Note: if you modify the token-id and token-secret in bootstrap.secret.yaml, keep the strings the same length as c8ad9c / 2e4d610cf3e7426e, and make sure the token in the command above (c8ad9c.2e4d610cf3e7426e) matches your modified values.
cat bootstrap.secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
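If you do generate your own token, a small sketch that produces values of the right shape (6 hex characters for token-id, 16 for token-secret; openssl rand is the only tool assumed):
TOKEN_ID=$(openssl rand -hex 3)       # e.g. c8ad9c
TOKEN_SECRET=$(openssl rand -hex 8)   # e.g. 2e4d610cf3e7426e
echo ${TOKEN_ID}.${TOKEN_SECRET}      # use this value for --token= and in bootstrap.secret.yaml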
(3) Create the kubectl configuration file; without it kubectl get node fails ("The connection to the server localhost:8080 was refused"). Copy the admin kubeconfig into place:
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Only one node needs the kubectl credentials — that node is the control endpoint. Do not give every node a copy; that would be very dangerous. The file can live on any machine outside the cluster that can reach the k8s API — it does not have to be a k8s node; copy it there and you can manage the cluster from that machine.
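For example, from Master01 the kubeconfig could be handed to a separate operations host (ops-host below is purely illustrative):
scp /etc/kubernetes/admin.kubeconfig ops-host:/root/.kube/config
ssh ops-host kubectl get node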
(4) Create the bootstrap resources
kubectl create -f bootstrap.secret.yaml
VIII. Node Configuration and Calico
Binary node setup:
- Copy certificates
- Kubelet configuration
- kube-proxy configuration
1. Copy certificates
Node certificates are issued automatically through bootstrapping.
Copy the certificates from Master01 to the Node nodes (turn off send-input-to-all-sessions):
cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
  for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
    scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
  done
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done
2. Kubelet configuration
(1) Create the required directories on all nodes (send input to all sessions)
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
(2) Configure the kubelet service on all nodes
vim /usr/lib/systemd/system/kubelet.service
# add the following
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
(3) Configure the kubelet service drop-in file on all nodes
vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
# add the following
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
(4) Create the kubelet configuration file
Note: if you changed the k8s Service network, update the clusterDNS entry in kubelet-conf.yml to the tenth address of your Service network (e.g. 10.96.0.10 for the default network), as sketched below.
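If you did change the Service network, one way to apply it after creating the file below is a simple substitution (just a sketch; 10.80.0.10 stands in for the tenth address of an assumed 10.80.0.0/12 Service network):
sed -i 's#10.96.0.10#10.80.0.10#g' /etc/kubernetes/kubelet-conf.yml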
vim /etc/kubernetes/kubelet-conf.yml
# add the following
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
(5) Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
(6) Check the system log
tail -f /var/log/messages
Seeing only the following message is normal, since Calico is not installed yet:
Unable to update cni config" err="no networks found in /etc/cni/net.d
(7) Check the cluster status
kubectl get node
The nodes are NotReady because Calico is not installed yet:
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady <none> 2m23s v1.22.0-beta.1
k8s-master02 NotReady <none> 2m16s v1.22.0-beta.1
k8s-master03 NotReady <none> 2m16s v1.22.0-beta.1
k8s-node01 NotReady <none> 2m16s v1.22.0-beta.1
k8s-node02 NotReady <none> 2m16s v1.22.0-beta.1
3. kube-proxy configuration
Note: if this is not an HA cluster, change 10.218.3.205:6443 to master01's address; 6443 is the default apiserver port.
Run on Master01 (turn off send-input-to-all-sessions):
cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
  --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.218.3.205:6443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
From master01, send the kube-proxy systemd service file to the other nodes.
If you changed the cluster's Pod network, update the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your Pod network, as sketched below.
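A sketch of that substitution (192.168.0.0/16 is only an example; substitute your actual Pod network):
sed -i 's#clusterCIDR: 172.16.0.0/12#clusterCIDR: 192.168.0.0/16#g' kube-proxy/kube-proxy.conf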
for NODE in k8s-master01 k8s-master02 k8s-master03; do
  scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
  scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
  scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done

for NODE in k8s-node01 k8s-node02; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
  scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
  scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
Start kube-proxy on all nodes (send input to all sessions):
systemctl daemon-reload
systemctl enable --now kube-proxy
Check the status:
systemctl status kube-proxy
4. Calico configuration
Run on master01 (turn off send-input-to-all-sessions):
cd /root/k8s-ha-install/calico/
# modify the following places in calico-etcd.yaml
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://10.218.22.212:2379,https://10.218.22.234:2379,https://10.218.22.252:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

# change this to your own Pod network
POD_SUBNET="172.16.0.0/12"
# the next step rewrites CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your Pod network (replacing 192.168.x.x/16 with your cluster's network) and uncomments it:
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
kubectl apply -f calico-etcd.yaml
Check the pod status:
kubectl get po -n kube-system
Pod status:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-cdd5755b9-4fzg9 1/1 Running 0 113s
calico-node-8xg62 1/1 Running 0 113s
calico-node-dczxz 1/1 Running 0 113s
calico-node-gn8ws 1/1 Running 0 113s
calico-node-qmwkd 1/1 Running 0 113s
calico-node-zfw8n 1/1 Running 2 (78s ago) 113s
If a pod is in an abnormal state, inspect it with kubectl describe or kubectl logs.
IX. Installing CoreDNS, Metrics Server, and Dashboard
- Install CoreDNS
- Install Metrics Server
- Install dashboard
1. Install CoreDNS
(1) Install the matching version (recommended)
cd /root/k8s-ha-install/
If you changed the k8s Service network, change the CoreDNS service IP to the tenth IP of your Service network (the sed below is a no-op with the default; put your own address as the second value):
sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml
Install CoreDNS:
kubectl create -f CoreDNS/coredns.yaml
(2) Install the latest CoreDNS (not recommended)
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
(3) Check the status
kubectl get po -n kube-system -l k8s-app=kube-dns
Status:
NAME READY STATUS RESTARTS AGE
coredns-fb4874468-nr5nx 1/1 Running 0 49s
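To verify that resolution actually works, a throwaway pod can query the cluster DNS (busybox:1.28 is an assumption; any image with nslookup works):
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes.default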
(4) Force-delete a pod stuck in Terminating (if needed)
[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-fb4874468-fgs2h 1/1 Terminating 0 6d20h

[root@k8s-master01 ~]# kubectl delete pods coredns-fb4874468-fgs2h --grace-period=0 --force -n kube-system
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "coredns-fb4874468-fgs2h" force deleted[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
No resources found in kube-system namespace.
2. Install Metrics Server
In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports node and Pod CPU, memory, disk, and network usage.
(1) Install metrics-server
cd /root/k8s-ha-install/metrics-server-0.4.x/
kubectl create -f .
Wait for metrics-server to start, then check:
kubectl top node
Node metrics:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 263m 13% 1239Mi 66%
k8s-master02 213m 10% 1065Mi 57%
k8s-master03 207m 10% 1050Mi 56%
k8s-node01 89m 4% 514Mi 27%
k8s-node02 158m 7% 493Mi 26%
Check the pod metrics:
kubectl top po -A
Pod metrics:
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system calico-kube-controllers-cdd5755b9-4fzg9 3m 18Mi
kube-system calico-node-8xg62 26m 60Mi
kube-system calico-node-dczxz 24m 60Mi
kube-system calico-node-gn8ws 23m 62Mi
kube-system calico-node-qmwkd 26m 60Mi
kube-system calico-node-zfw8n 25m 59Mi
kube-system coredns-fb4874468-nr5nx 3m 10Mi
kube-system metrics-server-64c6c494dc-9x727 2m 18Mi
3. Install dashboard
- Install a specific version of dashboard
- Install the latest dashboard
- Log in to the dashboard
Dashboard displays the various resources in the cluster; you can also use it to view Pod logs in real time and run commands inside containers.
(1) Install the bundled version of dashboard
cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# ls
dashboard-user.yaml dashboard.yaml
kubectl create -f .
(2) Install the latest dashboard
Official GitHub: https://github.com/kubernetes/dashboard (the latest release is listed there)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
Create an administrator user (when installing the latest version):
vim admin.yaml
# add the following
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Apply it:
kubectl apply -f admin.yaml -n kube-system
(3) Log in to the dashboard
Add a launch flag to Chrome to work around the self-signed certificate blocking access (Properties -> Shortcut -> Target; append at the end):
--test-type --ignore-certificate-errors
Change the dashboard Service to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort.
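Equivalently, the change can be made non-interactively with kubectl patch (standard kubectl; the service name matches the one used above):
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'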
After the change a port number is exposed; look it up:
[root@k8s-master01 dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.98.129.195 <none> 443:30711/TCP 85s

[root@k8s-master01 dashboard]# kubectl get pod -A -owide | grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-7b4bbf8954-zvgm8 1/1 Running 0 13m 172.27.14.193 k8s-node02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-6c65b776bd-797lx 1/1 Running 0 13m 172.18.195.1 k8s-master03 <none> <none>
Using the NodePort from your own output, access the dashboard through any host running kube-proxy, or through the VIP — e.g. https://10.218.22.252:30711 (use your own port) — and choose token as the login method.
Get the token value:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
The token:
Name:        admin-user-token-9c4tz
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
             kubernetes.io/service-account.uid: d1f2e528-0ef8-4c6b-a384-a18fbca6bc54

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1411 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlNCbEdFa1RQZElhbTBRb29aTTNCTUE1dTJ2enBCeGZxMWJwbmpfZHBXdkEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTljNHR6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkMWYyZTUyOC0wZWY4LTRjNmItYTM4NC1hMThmYmNhNmJjNTQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.KFH5ed0kJEaU1HSpxkitJxqKJGnSNAWogNSGjGn1wEh7R9zKYkAfNLES6Vl3GU9jvxBCEZW415ZFILr96kpgl_88mD-K-AMgQxKLdpghYDx_CnsLtI6e8rLTNkaPS2Uo3sYAy9U280Niop14Yzuar5FQ3AfSbeXGcF_9Jrgyeh5XWPA0h69Au8pUEOkVdpADmuIaFSqfTnmkOSdGqCgFb_QsUqvjo4ifIxKnN6uW8wfR1s4esWkPq569xhCINaUY6g3rnT1jfVTU2XmrURrKOVok0OfSmtXTKCSs2jliEdmx7qEFTrw2KCPnTfORUtTnmdZ2ZnGGx9Fvf_hGaKk1FQ
X. Using the Kuboard UI
Kuboard is another dashboard UI, developed in China.
Official site: https://www.kuboard.cn/install/v3/install-in-k8s.html
See the official site for details; here I install it directly with one command:
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
Open http://your-node-ip-address:30080 in a browser.
Log in with the initial credentials:
Username: admin
Password: Kuboard123