Installing Kubernetes with kubeadm

kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest official best practices for cluster configuration.

In the recently released Kubernetes 1.13, kubeadm's main features have reached GA, although high availability is not yet included. Still, this shows that kubeadm is getting ever closer to being usable in production.

Area                       Maturity Level
-------------------------  --------------
Command line UX            GA
Implementation             GA
Config file API            beta
CoreDNS                    GA
kubeadm alpha subcommands  alpha
High availability          alpha
DynamicKubeletConfig       alpha
Self-hosting               alpha

Our production Kubernetes clusters are highly available clusters deployed from binaries with ansible. The point of trying out kubeadm in Kubernetes 1.13 here is to follow the official best practices for cluster initialization and configuration, and to use them to further improve our ansible deployment scripts.

1. Preparation

1.1 System configuration

Before installing, do the following preparation. The two CentOS 7.4 hosts are:

cat /etc/hosts
192.168.61.11 node1
192.168.61.12 node2

If the firewall is enabled on the hosts, you need to open the ports required by the Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on each node here:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0
vi /etc/selinux/config
SELINUX=disabled

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to make the changes take effect:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
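
Note that loading br_netfilter with modprobe does not persist across reboots by default. A minimal sketch of one way to make it load automatically at boot on CentOS 7 (the file name below is an arbitrary choice):

cat > /etc/modules-load.d/k8s.conf <<EOF
# load br_netfilter at boot so the bridge-nf sysctls above keep working
br_netfilter
EOF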

1.2 Prerequisites for enabling ipvs in kube-proxy

Since ipvs has been merged into the mainline kernel, enabling ipvs for kube-proxy requires loading the following kernel modules first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes, node1 and node2:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates the file /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.

You also need to make sure the ipset package is installed on each node (yum install ipset). To make it easier to inspect the ipvs proxy rules, it is also a good idea to install the management tool ipvsadm (yum install ipvsadm).

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
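
A quick way to verify the prerequisites on a node before enabling ipvs, simply combining the checks mentioned above:

# kernel modules loaded?
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# ipset and ipvsadm installed?
rpm -q ipset ipvsadm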

1.3 Install Docker

Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, using the dockershim CRI implementation built into the kubelet.

Install the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Check the available Docker versions:

yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable

Kubernetes 1.12 was validated against Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06; note that the minimum Docker version supported by Kubernetes 1.12 is 1.11.1. Kubernetes 1.13 has no changes in its Docker version requirements. Here we install Docker 18.06.1 on each node.

yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT.

iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Starting with version 1.13, Docker changed its default firewall rules and disabled the FORWARD chain of the iptables filter table, which breaks cross-node Pod communication in a Kubernetes cluster. After installing Docker 18.06 here, however, the default policy turns out to be ACCEPT again; it is not clear in which version this was changed back. Our production environment runs Docker 17.06, which still needs this policy adjusted manually.
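
For older Docker versions (such as the 17.06 we run in production) whose default FORWARD policy is DROP, one possible way to set the policy back to ACCEPT persistently is a systemd drop-in; this is only a sketch, and the drop-in file name is an arbitrary choice:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/10-forward-accept.conf <<EOF
[Service]
# re-apply the ACCEPT policy every time Docker (re)starts
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker
# the FORWARD chain should now report "policy ACCEPT"
iptables -nvL FORWARD | head -1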

2. Deploying Kubernetes with kubeadm

2.1 Install kubeadm and kubelet

Install kubeadm and kubelet on each node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy or some other way around the network restriction.

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.13.0-0        kubectl.x86_64 0:1.13.0-0        kubelet.x86_64 0:1.13.0-0

Dependency Installed:
  cri-tools.x86_64 0:1.12.0-0      kubernetes-cni.x86_64 0:0.6.0-0  socat.x86_64 0:1.7.3.2-2.el7
  • From the install output you can see that three dependencies were also installed: cri-tools, kubernetes-cni and socat:

    • the cni dependency was bumped to 0.6.0 back in Kubernetes 1.9 and is still at that version
    • socat is a dependency of the kubelet
    • cri-tools is the command-line tool for the CRI (Container Runtime Interface)

Running kubelet --help shows that most of the kubelet's original command-line flags are now DEPRECATED, for example:

......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......

The official recommendation is instead to use --config to specify a configuration file and put the contents of those flags into the file; see Set Kubelet parameters via a config file for details. Kubernetes did this to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.

The kubelet configuration file must be in JSON or YAML format; see the documentation for details.
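
For reference, a minimal sketch of what such a kubelet configuration file can look like; the field values below are illustrative and this is not the file that kubeadm later generates:

cat <<EOF > /tmp/kubelet-config-example.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0             # replaces the deprecated --address flag
failSwapOn: false            # replaces the deprecated --fail-swap-on flag
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10
EOF
# the kubelet would then be started with: kubelet --config=/tmp/kubelet-config-example.yaml ...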

Since Kubernetes 1.8, the system's swap must be turned off; with the default configuration the kubelet will not start otherwise.

To turn swap off:

  swapoff -a

Then edit /etc/fstab and comment out the automatic swap mount, and confirm with free -m that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

  vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
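
A hedged example of commenting out the swap entry in /etc/fstab non-interactively (back the file up first; the exact swap line in your fstab may differ):

cp /etc/fstab /etc/fstab.bak
# prefix every line containing a swap mount with '#' (idempotent if run twice)
sed -i '/\sswap\s/s/^#*/#/' /etc/fstab
# after swapoff -a, the Swap line of free -m should show 0
free -m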

Because the two hosts used for this test also run other services and turning swap off might affect them, we instead change the kubelet configuration to remove this restriction. In earlier Kubernetes versions we removed it with the kubelet startup flag --fail-swap-on=false. As discussed above, Kubernetes no longer recommends startup flags but a configuration file, so we would prefer to switch to the configuration-file form.

Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, we see the following:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

The drop-in above shows that the kubelet deployed by kubeadm uses the configuration file --config=/var/lib/kubelet/config.yaml, yet neither /var/lib/kubelet nor this config.yaml exists at this point. Presumably the file is generated automatically when kubeadm initializes the cluster, which means that if we do not turn off swap, the first cluster initialization is bound to fail.

So we go back to removing the mandatory swap-off restriction with the kubelet startup flag --fail-swap-on=false. Edit /etc/sysconfig/kubelet and add:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

2.2 Initialize the cluster with kubeadm init

Enable the kubelet service on each node so that it starts at boot:

systemctl enable kubelet.service

Next, initialize the cluster with kubeadm. We pick node1 as the Master Node and run the following command on node1:

kubeadm init \
  --kubernetes-version=v1.13.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11

Because we will use flannel as the Pod network add-on, the command above specifies --pod-network-cidr=10.244.0.0/16.
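
As an aside, since the config file API is beta in 1.13, the same settings could also be expressed as a kubeadm configuration file passed with --config. The sketch below is only an illustration and is not what was run here; compare the field names against the output of kubeadm config print init-defaults on your version:

cat <<EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.61.11
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
networking:
  podSubnet: 10.244.0.0/16
EOF
# kubeadm init --config /root/kubeadm-init.yaml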

Running kubeadm init reported the following error:

[init] using Kubernetes version: v1.13.0
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

One of the error messages is running with swap on is not supported. Please disable swap. Since we have decided to configure failSwapOn: false, we add the --ignore-preflight-errors=Swap argument to ignore this error and run the command again.

kubeadm init \
  --kubernetes-version=v1.13.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11 \
  --ignore-preflight-errors=Swap

[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.506551 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 702gz5.49zhotgsiyqimwqw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f

The complete initialization output is recorded above; from it you can more or less see the key steps needed to install a Kubernetes cluster by hand.

The key items are:

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [bootstrap-token] generates the bootstrap token; write it down, as it will be needed later when adding nodes to the cluster with kubeadm join
  • the following commands configure how a regular user uses kubectl to access the cluster:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • finally, the command for adding nodes to the cluster is given: kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f

Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Confirm that all components are in the Healthy state.

If cluster initialization runs into problems, you can clean up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2.3 Install a Pod Network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is 0.10.0: quay.io/coreos/flannel:v0.10.0-amd64.

If a node has more than one network interface, then per flannel issue 39701 you currently need to add the --iface argument in kube-flannel.yml to specify the name of the host's internal interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.

kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m    10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m    10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m     192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>

2.4 Letting the master node take workloads

In a cluster initialized with kubeadm, Pods are not scheduled onto the Master Node for security reasons; in other words, the Master Node does not take workloads. This is because the current master node node1 carries the taint node-role.kubernetes.io/master:NoSchedule:

kubectl describe node node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is only a test environment, remove the taint so that node1 can take workloads:

kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted
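
If you later want to restore the default behaviour, the taint can be put back (a sketch, using the same taint shown above):

kubectl taint nodes node1 node-role.kubernetes.io/master=:NoSchedule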

2.5 Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$ 

Inside the container, run nslookup kubernetes.default to confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.6 Adding a Node to the Kubernetes cluster

Now add the host node2 to the Kubernetes cluster. Because we also removed the mandatory swap-off restriction from the kubelet startup arguments on node2, we again need the --ignore-preflight-errors=Swap argument. Run on node2:

kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw \
  --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f \
  --ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

node2 joined the cluster without a hitch. Now check the cluster's nodes from the master node:

kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    master   16m    v1.13.0
node2   Ready    <none>   4m5s   v1.13.0
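
If you join more nodes later and the original bootstrap token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master node, for example:

kubeadm token create --print-join-command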

Removing a Node from the cluster

To remove node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

On node1:

kubectl delete node node2

2.7 Enable ipvs in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit cm kube-proxy -n kube-system
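
If you prefer not to edit interactively, a rough non-interactive equivalent is sketched below; it assumes config.conf still contains the default mode: "" line:

kubectl -n kube-system get cm kube-proxy -o yaml | \
  sed 's/mode: ""/mode: "ipvs"/' | \
  kubectl apply -f -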

Then restart the kube-proxy pods on each node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-pf55q                1/1     Running   0          9s
kube-proxy-qjnnc                1/1     Running   0          14s

kubectl logs kube-proxy-pf55q -n kube-system
I1208 06:12:23.516444       1 server_others.go:189] Using ipvs Proxier.
W1208 06:12:23.516738       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1208 06:12:23.516840       1 server_others.go:216] Tearing down inactive rules.
I1208 06:12:23.575222       1 server.go:464] Version: v1.13.0
I1208 06:12:23.585142       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1208 06:12:23.586203       1 config.go:202] Starting service config controller
I1208 06:12:23.586243       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1208 06:12:23.586269       1 config.go:102] Starting endpoints config controller
I1208 06:12:23.586275       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1208 06:12:23.686959       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1208 06:12:23.687056       1 controller_utils.go:1034] Caches are synced for service config controller

The log shows Using ipvs Proxier, which confirms that ipvs mode is enabled.
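
The ipvs rules themselves can be inspected with the ipvsadm tool installed earlier; the virtual and real server addresses will of course differ per cluster:

ipvsadm -Ln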

3. Deploying common Kubernetes components

More and more companies and teams are adopting Helm, the package manager for Kubernetes, and we will use Helm to install the common Kubernetes components as well.

3.1 Installing Helm

Helm consists of the helm command-line client and the server-side component tiller, and installing it is very simple. Download the helm CLI to /usr/local/bin on the master node node1; here we use version 2.12.0:

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.0-linux-amd64.tar.gz
tar -zxvf helm-v2.12.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

To install the server-side tiller, kubectl and a kubeconfig also need to be set up on this machine so that kubectl can reach the apiserver and work normally. node1 already has kubectl configured.

Because the Kubernetes API server has RBAC access control enabled, we need to create a service account for tiller (named tiller) and grant it an appropriate role; see Role-based Access Control in the Helm documentation for details. For simplicity we bind the built-in cluster-admin ClusterRole to it directly. Create rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next, deploy tiller with helm:

helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

By default tiller is deployed in the kube-system namespace of the cluster:

kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

Note that this step requires network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are unreachable, you can use a tiller image from a private registry instead: helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.11.0 --skip-refresh

3.2 Deploying Nginx Ingress with Helm

To make it easy to expose services in the cluster to the outside and access them from outside the cluster, we next use Helm to deploy Nginx Ingress onto Kubernetes. The Nginx Ingress Controller is deployed on the Kubernetes edge nodes; for high availability of Kubernetes edge nodes, see my earlier article on highly available Kubernetes Ingress edge nodes on bare metal (based on IPVS).

We will use both node1 (192.168.61.11) and node2 (192.168.61.12) as edge nodes and label them:

kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl label node node2 node-role.kubernetes.io/edge=
node/node2 labeled

kubectl get node
NAME    STATUS   ROLES         AGE   VERSION
node1   Ready    edge,master   24m   v1.13.0
node2   Ready    edge          11m   v1.13.0

The values file ingress-nginx.yaml for the stable/nginx-ingress chart:

controller:
  replicaCount: 2
  service:
    externalIPs:
    - 192.168.61.10
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

The nginx ingress controller's replicaCount is 2, so the replicas will be scheduled onto the two edge nodes node1 and node2. The 192.168.61.10 specified in externalIPs is the VIP, which will be bound to the kube-ipvs0 interface created by kube-proxy.

helm repo update

helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx  \
-f ingress-nginx.yaml
kubectl get pod -n ingress-nginx -o wide
NAME                                             READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-85f8597fc6-g2kcx        1/1     Running   0          5m2s   10.244.1.3   node2   <none>           <none>
nginx-ingress-controller-85f8597fc6-g7pp5        1/1     Running   0          5m2s   10.244.0.5   node1   <none>           <none>
nginx-ingress-default-backend-6dc6c46dcc-7plm8   1/1     Running   0          5m2s   10.244.1.4   node2   <none>           <none>

If visiting http://192.168.61.10 returns the default backend, the deployment is complete.
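
Two quick, hedged checks from an edge node: confirm that the VIP has been bound to the kube-ipvs0 interface and that the VIP answers HTTP:

ip addr show kube-ipvs0 | grep 192.168.61.10
# a working deployment should answer with the nginx-ingress "default backend - 404" page
curl http://192.168.61.10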

In the actual test, however, the address could not be reached. Suspecting that kube-proxy was at fault, we checked its logs, which kept printing the following:

I1208 07:59:28.902970       1 graceful_termination.go:160] Trying to delete rs: 10.104.110.193:80/TCP/10.244.1.5:80
I1208 07:59:28.903037       1 graceful_termination.go:170] Deleting rs: 10.104.110.193:80/TCP/10.244.1.5:80
I1208 07:59:28.903072       1 graceful_termination.go:160] Trying to delete rs: 10.104.110.193:80/TCP/10.244.0.6:80
I1208 07:59:28.903105       1 graceful_termination.go:170] Deleting rs: 10.104.110.193:80/TCP/10.244.0.6:80
I1208 07:59:28.903713       1 graceful_termination.go:160] Trying to delete rs: 192.168.61.10:80/TCP/10.244.1.5:80
I1208 07:59:28.903764       1 graceful_termination.go:170] Deleting rs: 192.168.61.10:80/TCP/10.244.1.5:80
I1208 07:59:28.903798       1 graceful_termination.go:160] Trying to delete rs: 192.168.61.10:80/TCP/10.244.0.6:80
I1208 07:59:28.903824       1 graceful_termination.go:170] Deleting rs: 192.168.61.10:80/TCP/10.244.0.6:80
I1208 07:59:28.904654       1 graceful_termination.go:160] Trying to delete rs: 10.0.2.15:31698/TCP/10.244.0.6:80
I1208 07:59:28.904837       1 graceful_termination.go:170] Deleting rs: 10.0.2.15:31698/TCP/10.244.0.6:80

We found this issue on the Kubernetes GitHub: https://github.com/kubernetes/kubernetes/issues/71071. Roughly, the recent change "IPVS proxier mode now support connection based graceful termination" introduced a bug, so Kubernetes 1.11.5, 1.12.1~1.12.3 and 1.13.0 are all affected, i.e. kube-proxy is unusable in ipvs mode. At the same time, 1.11.5, 1.12.3 and 1.13.0 are the versions announced to fix the Kubernetes privilege escalation vulnerability of December 4 (CVE-2018-1002105), so anyone upgrading because of that vulnerability should check whether ipvs is enabled and be careful not to break the cluster network with the upgrade. Since our production clusters run 1.11 with ipvs already enabled, for now we have only upgraded the production master nodes to 1.11.5, while kube-proxy stays at 1.11.4.

The issue https://github.com/kubernetes/kubernetes/issues/71071 already mentions PRs that address the problem, so for now we can only track the patch releases after 1.11.5, 1.12.3 and 1.13.0.

References

  • Installing kubeadm
  • Using kubeadm to Create a Cluster
  • Get Docker CE for CentOS

Reposted from: https://www.kubernetes.org.cn/4956.html
