Installing Kubernetes and Accessing the Web UI
Environment:
192.168.253.110 k8s-master
192.168.253.120 k8s-node1
192.168.253.130 k8s-node2
1. Set the hostname on each server
[root@localhost ~]# hostnamectl set-hostname k8s-master   #on 192.168.253.110
[root@localhost ~]# hostnamectl set-hostname k8s-node1    #on 192.168.253.120
[root@localhost ~]# hostnamectl set-hostname k8s-node2    #on 192.168.253.130
2. Synchronize the clocks on all three servers, then stop the firewall and set SELinux to permissive
[root@k8s-master ~]# yum -y install ntpdate
[root@k8s-master ~]# ntpdate pool.ntp.org
[root@k8s-node1 ~]# yum -y install ntpdate
[root@k8s-node1 ~]# ntpdate pool.ntp.org
[root@k8s-node2 ~]# yum -y install ntpdate
[root@k8s-node2 ~]# ntpdate pool.ntp.org
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# systemctl stop firewalld   #shown on the master only; run both commands on the two node servers as well
3. Edit the hosts file
[root@k8s-master ~]# vim /etc/hosts
192.168.253.110 k8s-master
192.168.253.120 k8s-node1
192.168.253.130 k8s-node2   #add the same entries on both node servers
4. Disable swap
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# echo "swapoff -a" >> /etc/rc.local   #re-disable swap at boot; do this on both node servers too
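The `rc.local` line above re-runs `swapoff` at every boot; an alternative, and arguably more robust, way to keep swap off permanently is to comment the swap entry out of `/etc/fstab`. A sketch, assuming a standard fstab layout (the `disable_swap` helper is illustrative, not part of the original procedure):

```shell
# Comment out any active swap entries in an fstab-format file
# so swap stays disabled after a reboot.
disable_swap() {
  fstab="$1"
  sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"
}

# On a real node (run as root):
#   swapoff -a
#   disable_swap /etc/fstab
```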
5. Install Docker (pinned to version 18.06) and configure registry mirrors
[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo   #add the Docker repo; required on every server
[root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r   #list the available versions
[root@k8s-master ~]# yum -y install docker-ce-18.06.3.ce-3.el7   #install the pinned version
[root@k8s-master ~]# docker --version   #verify the version
Docker version 18.06.3-ce, build d7080c1
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# cd /etc/docker
[root@k8s-master docker]# vim daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
#repeat the Docker setup on both node servers
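A stray comma or quote in daemon.json will stop Docker from starting at all, so it is worth validating the file before restarting the daemon. A minimal sketch, assuming `python3` is available (the `check_json` helper is illustrative, not part of the original procedure):

```shell
# Validate a JSON file; prints "valid JSON" on success and
# returns non-zero (with python's error on stderr) otherwise.
check_json() {
  python3 -m json.tool "$1" > /dev/null && echo "valid JSON"
}

# On each server:
#   check_json /etc/docker/daemon.json && systemctl restart docker
```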
6. Configure the k8s.repo repository and install kubeadm, kubelet, and kubectl
[root@k8s-node2 docker]# cd /etc/yum.repos.d
[root@k8s-node2 yum.repos.d]# vim k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-node2 yum.repos.d]# yum -y install kubeadm-1.17.0 kubelet-1.17.0 kubectl-1.17.0
[root@k8s-master yum.repos.d]# systemctl enable kubelet
#repeat step 6 on both node servers
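Not shown in the original, but on CentOS 7 the `kubeadm init` preflight checks commonly also require the bridge-netfilter sysctls; if initialization later complains about `net.bridge.bridge-nf-call-iptables`, a typical fix (an assumption based on standard kubeadm setup, run as root on every server) is:

```shell
# Let iptables see bridged traffic, as kubeadm expects.
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl --system
```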
7. Load the images
[root@k8s-master src]# yum -y install unzip
[root@k8s-master src]# unzip k8s-v1.17.0.zip
[root@k8s-master src]# cd k8s-v1.17.0/jingxiang
[root@k8s-master jingxiang]# docker load -i k8s_v1.17.0.tar
[root@k8s-master jingxiang]# docker images   #the following images were loaded
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 4 months ago 52.8MB
registry.aliyuncs.com/google_containers/kube-proxy v1.17.0 7d54289267dc 7 months ago 116MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.17.0 5eb3b7486872 7 months ago 161MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.17.0 0cae8d5cc64c 7 months ago 171MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.17.0 78c190f736b1 7 months ago 94.4MB
registry.aliyuncs.com/google_containers/coredns 1.6.5 70f311871ae1 9 months ago 41.6MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 9 months ago 288MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
#repeat step 7 on both node servers
8. Initialize the cluster (and how to recover if initialization fails)
[root@k8s-master jingxiang]# kubeadm init --apiserver-advertise-address=192.168.253.110 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
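For reference, the same `kubeadm init` command with each flag annotated (the comments are explanatory only; the values are unchanged from above):

```shell
# --apiserver-advertise-address : the master's own IP (192.168.253.110)
# --image-repository            : pull control-plane images from the Aliyun mirror
# --kubernetes-version          : must match the installed kubeadm/kubelet (v1.17.0)
# --service-cidr                : virtual IP range handed out to Services
# --pod-network-cidr            : pod subnet; must match "Network" in kube-flannel.yml
kubeadm init \
  --apiserver-advertise-address=192.168.253.110 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
```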
[root@k8s-master jingxiang]# mkdir -p $HOME/.kube
[root@k8s-master jingxiang]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master jingxiang]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#if initialization fails, reset and retry:
kubeadm reset
rm -rf $HOME/.kube/config
#the commands above are run on the master only
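The bootstrap token printed by `kubeadm init` expires after 24 hours. If a node has to be joined later, a fresh join command can be generated on the master (standard kubeadm functionality, not shown in the original):

```shell
# Prints a complete "kubeadm join <ip>:6443 --token ...
# --discovery-token-ca-cert-hash sha256:..." line to run on the node.
kubeadm token create --print-join-command
```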
9. Join the nodes to the cluster
[root@k8s-node1 jingxiang]# kubeadm join 192.168.253.110:6443 --token apsfks.fopl7ipi6d6sp3rj \
> --discovery-token-ca-cert-hash sha256:5ad5565c9997dbdae13d8ec2f8e60a73147aeba1c8c76423fadfe7fc2771c351
[root@k8s-node2 jingxiang]# kubeadm join 192.168.253.110:6443 --token apsfks.fopl7ipi6d6sp3rj \
> --discovery-token-ca-cert-hash sha256:5ad5565c9997dbdae13d8ec2f8e60a73147aeba1c8c76423fadfe7fc2771c351
[root@k8s-master jingxiang]# kubectl get nodes   #check the cluster; a STATUS of NotReady means the network still has to be configured
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 3m56s v1.17.0
k8s-node1 NotReady <none> 62s v1.17.0
k8s-node2 NotReady <none> 60s v1.17.0
10. Configure the cluster network (the flannel manifest below)
[root@k8s-master k8s-v1.17.0]# vim kube-flannel.yml   #the network manifest
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.12.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.12.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
[root@k8s-master k8s-v1.17.0]# kubectl create -f kube-flannel.yml   #apply the manifest
[root@k8s-master k8s-v1.17.0]# kubectl get nodes   #nodes flip to Ready as flannel starts on each one (k8s-node2 is still coming up here)
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 12m v1.17.0
k8s-node1 Ready <none> 10m v1.17.0
k8s-node2 NotReady <none> 9m58s v1.17.0
11. Web UI (Dashboard)
[root@k8s-master k8s-v1.17.0]# vim dashboard.yaml   #the Dashboard manifest
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc7
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
[root@k8s-master k8s-v1.17.0]# kubectl create -f dashboard.yaml   #create the Dashboard resources
[root@k8s-master k8s-v1.17.0]# kubectl get pod -A   #a STATUS of Running means the Dashboard is up
kubernetes-dashboard dashboard-metrics-scraper-b68468655-jd26n 1/1 Running 0 67s
kubernetes-dashboard kubernetes-dashboard-64999dbccd-r8m78 1/1 Running 0 67s
12. Access from a browser
[root@k8s-master k8s-v1.17.0]# ss -tnl | grep 30002   #the Dashboard is reachable on this NodePort
LISTEN 0 128 :::30002 :::*
https://192.168.253.110:30002
Create a token:
[root@k8s-master k8s-v1.17.0]# kubectl get secret -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-8kjlc kubernetes.io/service-account-token 3 4m34s
[root@k8s-master k8s-v1.17.0]# kubectl describe secret dashboard-admin-token-8kjlc -n kubernetes-dashboard
Name: dashboard-admin-token-8kjlc
Namespace: kubernetes-dashboard
Labels: <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 05b26c6b-cb02-4e43-a9ba-38c7553e2c29

Type:  kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InR4bWdmaVZaRFl5R1pYcXB1ZWk5R01YZXQzZVl0enNVWjk3VnRldjJpSG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOGtqbGMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDViMjZjNmItY2IwMi00ZTQzLWE5YmEtMzhjNzU1M2UyYzI5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.GWCIVR0j7iwePm4lONHSLsHo_6UqIoDztWrcW3fp2TApMy96Df60Z-ZQpozTYCj3C9JaS8Q1pd0c4MHYyYN_yD6L8XxJUkYRB8Kz39wabmvnbsIODI5ldbN41V1z4HK9kFcNgcngvKbriK5WpffQXCHUohYQwBMViHXVrZy4TWHUmz9DiCM6MSlAgmzFlY2znxkTtRNS91SiEuC5xYlasQseH5SITPwmmPYoNSugdykdPnkU61auFfVrYyrFyaT6uJjK-XA7lfQjDdB3mY_Pf0uE4xKCs96gySpdG7Zlws7-F-gJoLiEjYCKnzVqbF-rFNqpDnK93tsbMndn29S6MQ
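Looking the secret up and copying the token by hand can be scripted; a sketch that parses the token straight out of the `kubectl describe` output (the `get_token` helper is illustrative, not part of the original procedure):

```shell
# Print only the token value from `kubectl describe secret` output.
get_token() {
  awk '$1 == "token:" {print $2}'
}

# On the master:
#   kubectl -n kubernetes-dashboard describe secret \
#     "$(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')" | get_token
```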