Original post on my personal blog: http://www.lampnick.com/php/760

Goals of this article

Install Docker and configure a Docker proxy
Install kubeadm
Initialize the Kubernetes master node with kubeadm
Install the weave-kube network plugin
Deploy Kubernetes worker nodes
Deploy kubernetes-dashboard
Deploy the monitoring component prometheus-operator (https://github.com/coreos/prometheus-operator)

Environment

VirtualBox on macOS
VM configuration:
CPU: 2 cores
Memory: 1 GB
Disk: 8 GB

Prerequisites

For pulling code and images from behind the firewall on Linux, see: https://blog.liuguofeng.com/p/4010

1. Configure Shadowsocks to start automatically

[root@centos7vm ~]# vim /etc/systemd/system/shadowsocks.service
Add the following content:
[Unit]
Description=Shadowsocks
[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/sslocal -c /etc/shadowsocks.json
[Install]
WantedBy=multi-user.target
Then run the following commands in the shell:
[root@centos7vm ~]#  systemctl enable shadowsocks.service
[root@centos7vm ~]#  systemctl start shadowsocks.service
[root@centos7vm ~]#  systemctl status shadowsocks.service
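Before moving on, it helps to confirm the local SOCKS proxy is actually listening. A quick check along these lines works (a sketch: 1080 is an assumed local_port from /etc/shadowsocks.json, adjust it to your own value):
[root@centos7vm ~]# ss -lntp | grep sslocal
[root@centos7vm ~]# curl -sI --socks5 127.0.0.1:1080 https://www.google.com | head -n 1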

2. Disable SELinux, the firewall and swap (otherwise you will run into permission problems)

[root@centos7vm ~]# sestatus
SELinux status: enabled
So disable SELinux permanently (setenforce 0 did not take effect on my CentOS 7.6):
[root@centos7vm ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
[root@centos7vm ~]# sestatus
SELinux status: disabled
[root@centos7vm ~]# systemctl stop firewalld
[root@centos7vm ~]# systemctl disable firewalld
Permanently disable swap:
[root@centos7vm ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
Comment out the swap partition entry as shown above.
A reboot is required for these changes to take effect:
[root@centos7vm ~]# reboot
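For reference, the same SELinux and swap changes can also be made non-interactively. This is a sketch that assumes the default config paths and a standard swap line in /etc/fstab:
[root@centos7vm ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@centos7vm ~]# sed -i '/ swap / s/^/#/' /etc/fstab
[root@centos7vm ~]# swapoff -a    # also turn swap off for the running system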

3. Install Docker and configure a Docker proxy

[root@centos7vm ~]#  yum -y install docker
[root@centos7vm ~]#  mkdir -p /etc/systemd/system/docker.service.d
[root@centos7vm ~]#  vim /etc/systemd/system/docker.service.d/http-proxy.conf
Add:
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,172.16.0.0/16,127.0.0.1,10.244.0.0/16"
[root@centos7vm ~]# vim /etc/systemd/system/docker.service.d/https-proxy.conf
Add:
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,172.16.0.0/16,127.0.0.1,10.244.0.0/16"
Reload systemd and restart Docker:
[root@centos7vm ~]#  systemctl daemon-reload && systemctl restart docker
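To confirm that Docker actually picked up the proxy after the restart, you can inspect the environment that systemd passes to the service:
[root@centos7vm ~]# systemctl show --property=Environment docker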

4. Install kubeadm

Configure the yum repository for kubeadm:
[root@centos7vm ~]#  vim /etc/yum.repos.d/kubernetes.repo
Add:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Install kubeadm:
[root@centos7vm ~]# yum install -y kubeadm
This step automatically installs kubectl, kubernetes-cni and kubelet as dependencies.
Check the versions:
[root@centos7vm ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@centos7vm ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@centos7vm ~]# kubelet --version
Kubernetes v1.14.1
Enable kubelet to start on boot and start it:
[root@centos7vm ~]#  systemctl enable kubelet.service && systemctl start kubelet
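Note that yum install -y kubeadm installs whatever version is newest in the repository. To reproduce the exact v1.14.1 setup used in this post, the packages can be pinned instead (a sketch; the -0 release suffix is an assumption and may differ in the mirror):
[root@centos7vm ~]# yum install -y kubeadm-1.14.1-0 kubelet-1.14.1-0 kubectl-1.14.1-0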

5. Initialize the Kubernetes master node with kubeadm

Pull the images first, then turn the proxy off. If it still does not work with the proxy off, just reboot the server.
[root@centos7vm ~]# kubeadm config images pull
[root@centos7vm ~]# reboot
[root@centos7vm ~]# kubeadm init
I0424 05:44:25.902645 4348 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0424 05:44:25.902789 4348 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos7vm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.222]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503495 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node centos7vm as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos7vm as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: smz672.4uxlpw056eykpqi3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.222:6443 --token smz672.4uxlpw056eykpqi3 \
    --discovery-token-ca-cert-hash sha256:8c0b46999ce4cb50f9add92c7c6b28b15fdfeec49c9b09e605e227e667bc0e6b
Output like the above means initialization succeeded. The kubeadm join command shown is what you use to add more worker nodes to this master. We will need it shortly when deploying the worker nodes, so record it somewhere.

6. Run the configuration commands from the kubeadm success message

These commands are needed because a Kubernetes cluster requires authenticated (encrypted) access by default. They copy the security configuration file generated during deployment into the current user's .kube directory, which is where kubectl looks for credentials when talking to the cluster.
Without this, we would have to point kubectl at the configuration file every time via the KUBECONFIG environment variable.
[root@centos7vm ~]# mkdir -p $HOME/.kube
[root@centos7vm ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@centos7vm ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@centos7vm ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos7vm NotReady master 28m v1.14.1
If you skip the commands above, kubectl get nodes fails with the following error:
[root@centos7vm ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
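Alternatively, instead of copying admin.conf, kubectl can be pointed at it explicitly, which is the KUBECONFIG approach mentioned above:
[root@centos7vm ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@centos7vm ~]# kubectl get nodes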

7. Debugging

As you can see, in the output of the get command the centos7vm node is in the NotReady state. Why is that?
The most important technique when debugging a Kubernetes cluster is to use kubectl describe to inspect the node object's details, status and events. Let's try it:
[root@centos7vm ~]# kubectl describe node centos7vm
.....
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            False   Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
......
As you can see, the node is NotReady because we have not yet deployed any network plugin.
We can also use kubectl to check the status of each system Pod on this node. kube-system is the namespace Kubernetes reserves for system Pods (note that this is not a Linux namespace; it is simply the unit Kubernetes uses to separate workloads):
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-swp2s 0/1 Pending 0 65m
coredns-fb8b8dccf-wcftx 0/1 Pending 0 65m
etcd-centos7vm 1/1 Running 0 64m
kube-apiserver-centos7vm 1/1 Running 0 64m
kube-controller-manager-centos7vm 1/1 Running 0 64m
kube-proxy-xhlxf 1/1 Running 0 65m
kube-scheduler-centos7vm 1/1 Running 0 64m
As you can see, CoreDNS and the other network-dependent Pods are in the Pending state, i.e. they could not be scheduled. That is expected, because the master node's network is not ready yet.
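If you want to confirm why a particular Pod is Pending, its events can be inspected directly (a sketch using one of the CoreDNS pod names from the listing above; yours will differ):
[root@centos7vm ~]# kubectl describe pod coredns-fb8b8dccf-swp2s -n kube-system
The Events section at the bottom of the output reports the scheduling result.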

8. Install the weave-kube network plugin

Note: the proxy needs to be on to pull the image.
[root@centos7vm ~]# kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
Wait a while and check the Pod status again:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-swp2s 1/1 Running 0 83m
coredns-fb8b8dccf-wcftx 1/1 Running 0 83m
etcd-centos7vm 1/1 Running 0 82m
kube-apiserver-centos7vm 1/1 Running 0 82m
kube-controller-manager-centos7vm 1/1 Running 0 82m
kube-proxy-xhlxf 1/1 Running 0 83m
kube-scheduler-centos7vm 1/1 Running 0 82m
weave-net-8s4cl 2/2 Running 0 2m5s
All the system Pods have now started successfully, and the Weave plugin we just deployed created a Pod named weave-net-8s4cl under kube-system. Generally speaking, Pods like this are the per-node control components of the container network plugin.
Kubernetes supports container network plugins through a generic interface called CNI, which is the de facto standard for container networking. Any open-source container network project, such as Flannel, Calico, Canal or Romana, can plug into Kubernetes via CNI, and they are all deployed in a similar way.

At this point the Kubernetes master node is fully deployed. If a single-node Kubernetes is all you need, you can start using it now. By default, however, the master node cannot run user Pods, so one small extra operation is needed, as sketched below.
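The extra operation for a single-node setup is normally to remove the NoSchedule taint that kubeadm placed on the master (the taint shown in the init output), so that user Pods can be scheduled onto this node. A sketch, using the taint name from the init log:
[root@centos7vm ~]# kubectl taint nodes --all node-role.kubernetes.io/master-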

9. Deploy the Kubernetes worker nodes

A Kubernetes worker node is almost identical to the master: both run the kubelet component. The difference is that during kubeadm init, after the kubelet starts, the master also runs the kube-apiserver, kube-scheduler and kube-controller-manager system Pods.
Deploying a worker node is therefore the simplest part and takes only two steps. First, on every worker node, repeat all of the steps from the "Install Docker" and "Install kubeadm" sections. Second, run the kubeadm join command generated when the master was deployed:
kubeadm join 172.16.0.222:6443 --token smz672.4uxlpw056eykpqi3 \
    --discovery-token-ca-cert-hash sha256:8c0b46999ce4cb50f9add92c7c6b28b15fdfeec49c9b09e605e227e667bc0e6b
If no worker node is joined within 24 hours of installing the master, the token expires and a new one must be generated:
kubeadm token create    # generate a new token
kubeadm token list      # list existing tokens
Then run kubeadm join with the new token; a shortcut is shown below.
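A convenient shortcut is to have kubeadm print a complete join command together with a fresh token:
[root@centos7vm ~]# kubeadm token create --print-join-command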

10. Deploy kubernetes-dashboard

[root@centos7vm manifests]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the downloaded kubernetes-dashboard.yaml: change the RoleBinding into a ClusterRoleBinding, and change the kind and name in roleRef to the cluster-admin ClusterRole (a superuser role with full access to kube-apiserver). Like this:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
[root@centos7vm manifests]# kubectl apply -f kubernetes-dashboard.yaml
Check the Pod status once the deployment is done:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-swp2s 1/1 Running 3 16h
coredns-fb8b8dccf-wcftx 1/1 Running 3 16h
etcd-centos7vm 1/1 Running 3 16h
kube-apiserver-centos7vm 1/1 Running 3 16h
kube-controller-manager-centos7vm 1/1 Running 4 16h
kube-proxy-hpz9k 1/1 Running 0 13h
kube-proxy-xhlxf 1/1 Running 3 16h
kube-scheduler-centos7vm 1/1 Running 3 16h
kubernetes-dashboard-5f7b999d65-c759c 1/1 Running 0 2m34s
weave-net-8s4cl 2/2 Running 9 15h
weave-net-d67vs 2/2 Running 1 13h
To create a dashboard user, follow: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
Once the user is created, access the Dashboard from your local workstation by creating a secure channel to the Kubernetes cluster. Run the following command:
[root@centos7vm manifests]# kubectl proxy --address=0.0.0.0 --disable-filter=true &
Then open the following URL in a browser:
http://172.16.0.222:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
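Logging in to the dashboard requires the bearer token of the user created from the wiki page above. Assuming that page's sample admin-user service account in kube-system (adjust the name to whatever you created), the token can be read like this:
[root@centos7vm ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Paste the printed token into the dashboard login page.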

11. Deploy the monitoring component prometheus-operator (https://github.com/coreos/prometheus-operator)

[root@centos7vm ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml
Note: make sure to adapt the namespace in the ClusterRoleBinding if deploying in a namespace other than the default namespace.
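Since the bundle above installs into the default namespace, a quick way to confirm the operator came up is a simple filter over its objects (a sketch):
[root@centos7vm ~]# kubectl get deployments,pods | grep prometheus-operator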
Screenshot: the dashboard after the deployment completes (kubernetes-deployment image omitted)

Problems encountered:

Problem 1: connection timeouts during docker pull

Configure the Docker proxy as described in step 3 ("Install Docker and configure a Docker proxy"); that resolves it. The failing run looked like this:
[root@centos7vm ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING HTTPProxy]: Connection to "https://172.16.0.222" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
	[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
	[WARNING Hostname]: hostname "centos7vm" could not be reached
	[WARNING Hostname]: hostname "centos7vm": lookup centos7vm on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-apiserver ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout
	, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
	, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-scheduler ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
	, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-proxy ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout
	, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
	, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Trying to pull repository k8s.gcr.io/etcd ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
	, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Trying to pull repository k8s.gcr.io/coredns ...
	Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
	, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
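If a working proxy is not available, a common workaround (not used in this post; the mirror path registry.aliyuncs.com/google_containers is an assumption and may not carry every tag) is to pull the images listed in the error from a domestic mirror and re-tag them so kubeadm finds them under k8s.gcr.io:
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 \
           kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull registry.aliyuncs.com/google_containers/$img
    docker tag  registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done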

Problem 2: kubeadm init reports that the kubelet has not started

[root@centos7vm ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING HTTPProxy]: Connection to "https://172.16.0.222" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
	[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
	[WARNING Hostname]: hostname "centos7vm" could not be reached
	[WARNING Hostname]: hostname "centos7vm": lookup centos7vm on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos7vm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.222]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

So check the kubelet status with systemctl status kubelet:
[root@centos7vm ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-04-24 01:03:57 EDT; 9min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 13947 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─13947 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi...
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.528123 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.556646 13947 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.636400 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: I0424 01:13:23.636889 13947 kubelet_node_status.go:283] Setting node annotation to enable volum...h/detach
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.637776 13947 controller.go:115] failed to ensure node lease exists, will retry i... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: I0424 01:13:23.639660 13947 kubelet_node_status.go:72] Attempting to register node centos7vm
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.699581 13947 kubelet_node_status.go:94] Unable to register node "centos7vm" with... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.737079 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: W0424 01:13:23.799325 13947 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.800451 13947 kubelet.go:2170] Container runtime network not ready: NetworkReady=...tialized
Hint: Some lines were ellipsized, use -l to show in full.
Check the container status with docker ps -a | grep kube | grep -v pause:
[root@centos7vm ~]# docker ps -a | grep kube | grep -v pause
a7bd1c323bfa 2c4adeb21b4f "etcd --advertise-..." 3 minutes ago Exited (1) 2 minutes ago k8s_etcd_etcd-centos7vm_kube-system_0298d5694df46086cda3a73b7025fd1a_6
4ff843a990ac efb3887b411d "kube-controller-m..." 3 minutes ago Exited (1) 3 minutes ago k8s_kube-controller-manager_kube-controller-manager-centos7vm_kube-system_b9130a6f5c1174f73db1e98992b49b1c_6
0ae5b5dd5df4 cfaa4ad74c37 "kube-apiserver --..." 3 minutes ago Exited (1) 3 minutes ago k8s_kube-apiserver_kube-apiserver-centos7vm_kube-system_c125074e5c436480a1e85165a5af5b9a_6
484743a36cae 8931473d5bdb "kube-scheduler --..." 8 minutes ago Up 8 minutes k8s_kube-scheduler_kube-scheduler-centos7vm_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
Check the logs with docker logs a7bd1c323bfa; it turns out /etc/kubernetes/pki cannot be read (permission denied):
[root@centos7vm ~]# docker logs a7bd1c323bfa
2019-04-24 07:05:54.108645 I | etcdmain: etcd Version: 3.3.10
2019-04-24 07:05:54.108695 I | etcdmain: Git SHA: 27fc7e2
2019-04-24 07:05:54.108698 I | etcdmain: Go Version: go1.10.4
2019-04-24 07:05:54.108700 I | etcdmain: Go OS/Arch: linux/amd64
2019-04-24 07:05:54.108703 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-04-24 07:05:54.108747 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2019-04-24 07:05:54.109323 C | etcdmain: open /etc/kubernetes/pki/etcd/peer.crt: permission denied
This turned out to be caused by SELinux (an article pointed me in that direction). The fix is as follows:
[root@centos7vm ~]# sestatus
SELinux status: enabled
So disable SELinux (setenforce 0 did not take effect on my CentOS 7.6):
vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
A reboot is required for the change to take effect.
[root@centos7vm ~]# sestatus
SELinux status: disabled
# kubeadm reset
# kubeadm init
Problem solved.

Please credit when reprinting: lampNick » Notes on installing Kubernetes master and worker nodes with kubeadm on CentOS 7.6, and the pitfalls encountered
