Kubernetes - Manual Deployment [3]

  • 1 Deploy the worker nodes
    • 1.1 Create working directories and copy binaries
    • 1.2 Deploy kubelet
      • 1.2.1 Create the configuration file
      • 1.2.2 Configuration file
      • 1.2.3 Generate the bootstrap kubeconfig for kubelet's first join
      • 1.2.4 Manage kubelet with systemd
      • 1.2.5 Start and enable on boot
      • 1.2.6 Approve the kubelet certificate request and join the cluster
    • 1.3 Deploy kube-proxy
      • 1.3.1 Create the configuration file
      • 1.3.2 Configure the parameter file
      • 1.3.3 Generate kube-proxy certificates
      • 1.3.4 Generate the kube-proxy.kubeconfig file
      • 1.3.5 Manage kube-proxy with systemd
      • 1.3.6 Start and enable on boot
    • 1.4 Deploy the network component (Calico)
    • 1.5 Authorize apiserver access to kubelet
  • 2 Add new worker nodes
    • 2.1 Copy deployed files to the new nodes
    • 2.2 Delete the kubelet certificate and kubeconfig file
    • 2.3 Change the hostname
    • 2.4 Start and enable on boot
    • 2.5 Approve the new nodes' kubelet certificate requests on the master
    • 2.6 Check node status
  • 3 Deploy the Dashboard
    • 3.1 Deploy the Dashboard
    • 3.2 Access the Dashboard
  • 4 Deploy CoreDNS

<continued from the previous part…>

Environment

Hostname   OS           IP address        Components
node-251   CentOS 7.9   192.168.71.251    all components (to make full use of resources)
node-252   CentOS 7.9   192.168.71.252    all components
node-253   CentOS 7.9   192.168.71.253    docker, kubelet, kube-proxy

We have already deployed the etcd cluster on node-251 and node-252, installed docker on every machine, and deployed kube-apiserver, kube-controller-manager and kube-scheduler on node-251 (the master node).

Next we will deploy kubelet, kube-proxy and the remaining components on node-252 and node-253.

1 Deploy the worker nodes

We configure everything on the master node first and sync it to the other nodes afterwards; the master node also serves as one of the worker nodes.

1.1 Create working directories and copy binaries

Note: create the working directories on all worker nodes.

Copy the binaries from the k8s-server package on the master node to all worker nodes:

cd /opt/kubernetes/server/bin/
for master_ip in {1..3}
do
  echo ">>> node-25${master_ip}"
  ssh root@node-25${master_ip} "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}"
  scp kubelet kube-proxy root@node-25${master_ip}:/opt/kubernetes/bin/
done

1.2 Deploy kubelet

1.2.1 Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node-251 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
  • --hostname-override: display name; must be unique within the cluster.
  • --network-plugin: enable CNI.
  • --kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver.
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver.
  • --config: path to the configuration parameter file.
  • --cert-dir: directory for kubelet certificates.
  • --pod-infra-container-image: image for the infra (pause) container that holds the Pod's network namespace.

1.2.2 Configuration file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
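Note that cgroupDriver must match the container runtime's cgroup driver, or kubelet will fail to run Pods. A quick check on each node (cgroupfs is Docker's default on a stock CentOS 7 install):

docker info 2>/dev/null | grep -i 'cgroup driver'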

1.2.3 Generate the bootstrap kubeconfig for kubelet's first join

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.71.251:6443" # apiserver IP:PORT
TOKEN="4136692876ad4b01bb9dd0988480ebba" # 与token.csv里保持一致  /opt/kubernetes/cfg/token.csv# 生成 kubelet bootstrap kubeconfig 配置文件
kubectl config set-cluster kubernetes \--certificate-authority=/opt/kubernetes/ssl/ca.pem \--embed-certs=true \--server=${KUBE_APISERVER} \--kubeconfig=${KUBE_CONFIG}kubectl config set-credentials "kubelet-bootstrap" \--token=${TOKEN} \--kubeconfig=${KUBE_CONFIG}kubectl config set-context default \--cluster=kubernetes \--user="kubelet-bootstrap" \--kubeconfig=${KUBE_CONFIG}kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
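Before moving on it is worth a sanity check: the TOKEN must match the first field of token.csv, and the generated kubeconfig can be inspected (embedded secrets are redacted in the output):

cat /opt/kubernetes/cfg/token.csv
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig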

1.2.4 Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

1.2.5 Start and enable on boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
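If kubelet does not come up, the systemd journal and the log directory configured in kubelet.conf are the places to look:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20
ls /opt/kubernetes/logs/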

1.2.6 Approve the kubelet certificate request and join the cluster

# Check kubelet certificate requests
[root@k8s-master1 bin]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4   107s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the kubelet node's request
[root@k8s-master1 bin]# kubectl certificate approve node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4
certificatesigningrequest.certificates.k8s.io/node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4 approved

# Check the request
[root@k8s-master1 bin]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4   2m35s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# Check the nodes
[root@k8s-master1 bin]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   <none>   2m11s   v1.20.10

Note:
Because the network plugin has not been deployed yet, the node reports NotReady.
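In section 2.5 several requests will be pending at once; rather than approving each by name, pending CSRs can be approved in bulk — a sketch:

# Approve every CSR still in the Pending state in one shot
kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve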

1.3 Deploy kube-proxy

1.3.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

1.3.2 Configure the parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node-251
clusterCIDR: 10.244.0.0/16
EOF

1.3.3 Generate kube-proxy certificates

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenyang",
      "ST": "Shenyang",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificates:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
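cfssljson writes kube-proxy.pem and kube-proxy-key.pem into the current directory; the subject can be verified with openssl. The CN must be system:kube-proxy, the identity that Kubernetes' default node-proxier RBAC binding expects:

ls kube-proxy*.pem
openssl x509 -in kube-proxy.pem -noout -subject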

1.3.4 Generate the kube-proxy.kubeconfig file

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.71.251:6443"kubectl config set-cluster kubernetes \--certificate-authority=/opt/kubernetes/ssl/ca.pem \--embed-certs=true \--server=${KUBE_APISERVER} \--kubeconfig=${KUBE_CONFIG}kubectl config set-credentials kube-proxy \--client-certificate=./kube-proxy.pem \--client-key=./kube-proxy-key.pem \--embed-certs=true \--kubeconfig=${KUBE_CONFIG}kubectl config set-context default \--cluster=kubernetes \--user=kube-proxy \--kubeconfig=${KUBE_CONFIG}kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

1.3.5 Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

1.3.6 Start and enable on boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
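With kube-proxy running, its metrics endpoint (bound to 0.0.0.0:10249 in kube-proxy-config.yml) reports the proxy mode in use; since no mode is set above, kube-proxy falls back to iptables. A quick check:

curl 127.0.0.1:10249/proxyMode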

1.4 Deploy the network component (Calico)

Calico is a pure layer-3 data center networking solution and is currently the mainstream network choice for Kubernetes.

How it works

The core of Calico networking is IP routing: every container or virtual machine is assigned a workload-endpoint (wl).

When container A on nodeA accesses container B on nodeB:

The key question is how nodeA learns the next-hop address. The answer is that nodes exchange routing information over the BGP protocol.
Every node runs BIRD, a software router configured as a BGP speaker, which exchanges routes with the other nodes via BGP.
Loosely speaking, every node announces to the others:

"I am X.X.X.X; this IP or subnet lives behind me, and I am its next hop."

In this way every node learns the next-hop address of every workload-endpoint.
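Once Calico is running (deployed below), this is visible on any node: routes to other nodes' Pod subnets carry proto bird. The peering check assumes calicoctl has been installed separately — a sketch:

# Pod-subnet routes learned over BGP are tagged "proto bird"
ip route | grep bird

# BGP peer status (calicoctl is not installed by the manifest below)
calicoctl node status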

Deployment

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods -n kube-system

Applying calico.yaml may fail with an error:

error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"

Check which Calico version matches your Kubernetes release: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements

Once all the Calico Pods are Running, the node becomes Ready. The author's VMs are low-spec, so this took a while.

[root@node-251 kubernetes]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-577f77cb5c-xcgn5   1/1     Running   0          8m26s
calico-node-7dfjw                          1/1     Running   0          8m26s
[root@node-251 kubernetes]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
node-251   Ready    <none>   61m   v1.20.15

1.5 Authorize apiserver access to kubelet

This is needed for operations such as kubectl logs.

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
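To confirm the binding took effect, impersonate the User named in the binding's subjects (kubernetes) — a sketch:

kubectl auth can-i get pods/log --as=kubernetes    # expect "yes"
kubectl auth can-i get nodes/proxy --as=kubernetes # expect "yes"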

2 Add new worker nodes

2.1 Copy deployed files to the new nodes

On the master node, copy the worker-node files to the new nodes 71.252/71.253:

for i in {2..3}; do scp -r /opt/kubernetes root@192.168.71.25$i:/opt/; done
for i in {2..3}; do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.71.25$i:/usr/lib/systemd/system; done
for i in {2..3}; do scp -r /opt/kubernetes/ssl/ca.pem root@192.168.71.25$i:/opt/kubernetes/ssl/; done

2.2 Delete the kubelet certificate and kubeconfig file

Delete these files on the worker nodes:

for i in {2..3}; do
ssh root@node-25$i "rm -f /opt/kubernetes/cfg/kubelet.kubeconfig && rm -f /opt/kubernetes/ssl/kubelet*"
done

Note:
These files are generated automatically when the certificate request is approved and are unique to each node, so they must be deleted.

2.3 Change the hostname

On each worker node, update the hostname in the configuration files:

[root@node-251 kubernetes]# grep 'node-251' /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node-251 \
[root@node-251 kubernetes]# grep 'node-251' /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node-251
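The same substitution on each new node, sketched with sed (use node-253 on the third machine):

# On node-252
sed -i 's/node-251/node-252/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/node-251/node-252/' /opt/kubernetes/cfg/kube-proxy-config.yml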

2.4 Start and enable on boot

Run on the worker nodes:

systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy

2.5 Approve the new nodes' kubelet certificate requests on the master

# Check certificate requests
[root@node-251 kubernetes]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-0nA37A70PmadfExLLiUFejGF0vggS-3O-zMHma5AMnc   85m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34   99s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE   2m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve them
[root@node-251 kubernetes]# kubectl certificate approve node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34
certificatesigningrequest.certificates.k8s.io/node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34 approved
[root@node-251 kubernetes]# kubectl certificate approve node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE
certificatesigningrequest.certificates.k8s.io/node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE approved

# Check again
[root@node-251 kubernetes]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-0nA37A70PmadfExLLiUFejGF0vggS-3O-zMHma5AMnc   85m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34   2m24s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE   2m55s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

2.6 Check node status

It takes a little while for the nodes to become Ready; they first pull some initialization images.

[root@node-251 kubernetes]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
node-251   Ready      <none>   86m   v1.20.15
node-252   NotReady   <none>   84s   v1.20.15
node-253   NotReady   <none>   72s   v1.20.15

3 Deploy the Dashboard

Deploy on the master node.

3.1 Deploy the Dashboard

Official tutorial: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

[root@node-251 kubernetes]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

We expose the dashboard through a NodePort Service, following:
https://www.cnblogs.com/wucaiyun1/p/11692204.html

The modified recommended.yaml:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Apply the manifest:

[root@node-251 kubernetes]# kubectl apply -f recommended.yaml

Check that the Pod and Service came up:

[root@node-251 kubernetes]# kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.0.0.117   <none>        443:30001/TCP   27s
[root@node-251 kubernetes]# kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7b9b99d599-bgdsq   1/1     Running   0          41s   172.16.29.195   node-252   <none>           <none>
kubernetes-dashboard-6d4799d74-d86zt         1/1     Running   0          41s   172.16.101.69   node-253   <none>           <none>

3.2 Access the Dashboard

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@node-251 kubernetes]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-nkxqf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7ddf0af3-423d-4bc2-b9b0-0dde859b2e44

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1363 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImJ6R0VRc2tXRURINE5uQmVBMDNhdl9IX3FRRl9HRVh3RFpKWDZMcmhMX2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbmt4cWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2RkZjBhZjMtNDIzZC00YmMyLWI5YjAtMGRkZTg1OWIyZTQ0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.qNxZy8eARwZRh1wObRwn3i8OeOcnnD8ubrgOu2ZVmwmERJ3sHrNMXG5UDJph_SzaNtEo43o22zigxUdct8QJ9c-p9A5oYghuKBnDY1rR6h34mH4QQUpET2E8scNW3vaYxZmqZi1qpOzC72KL39m_cpbhMdfdyNweUY3vUDHrfIXfvDCS82v2jiCa4sjn5aajwwlZhywOPJXN7d1JGZKgg1tzwcMVkhtIYOP8RB3z-SfA1biAy8Xf7bTCPlaFGGuNlgWhgOxTv8M7r6U_KuFfV7S-cQqtEEp1qeBdI70Bk95euH3UJAx55_OkkjLx2dwFrgZiKFXoTNLSUFIVdsQVpQ
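For later logins, the token alone can be pulled out of the secret — a sketch:

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d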

Access: https://NodeIP:30001 and log in to the Dashboard with the token printed above (if the browser rejects the HTTPS certificate, Firefox usually lets you proceed).

4 Deploy CoreDNS

CoreDNS provides name resolution for Services inside the cluster.
Reference: https://blog.csdn.net/weixin_46476452/article/details/127884162

coredns.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
    app.kubernetes.io/name: coredns
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
      app.kubernetes.io/name: coredns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        app.kubernetes.io/name: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
        - name: coredns
          image: coredns/coredns:1.9.4
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
    app.kubernetes.io/name: coredns
spec:
  selector:
    k8s-app: kube-dns
    app.kubernetes.io/name: coredns
  clusterIP: 10.0.0.2
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP
kubectl apply -f coredns.yaml

[root@node-251 kubernetes]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-577f77cb5c-xcgn5   1/1     Running   3          157m
calico-node-48b86                          1/1     Running   0          125m
calico-node-7dfjw                          1/1     Running   2          157m
calico-node-9d66z                          1/1     Running   0          125m
coredns-6b9bb479b9-gz8zd                   1/1     Running   0          3m20s

Test that resolution works:

[root@node-251 kubernetes]# kubectl run -it --rm dns-test --image=docker.io/library/busybox:latest sh
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ # ls
bin    dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # pwd
/
/ #
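The session above stops short of an actual lookup; inside the pod, Service names should resolve through the clusterDNS address 10.0.0.2 set in kubelet-config.yml — for example:

/ # nslookup kubernetes
/ # nslookup kubernetes.default.svc.cluster.local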


At this point a single-master Kubernetes cluster is complete; later we will also deploy a highly available control plane.
