Learning Kubernetes Step by Step (Part 2)

I. Three Basic Methods for Managing K8S Core Resources

Pods, pod controllers, Services, Ingresses

  • 1. Imperative management: relies mainly on the command-line tool (CLI); contrasted with the declarative style in the sketch below
  • 2. Declarative management: relies mainly on unified resource configuration manifests
  • 3. GUI management: relies mainly on a graphical web interface
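For a first feel for the difference, here is the same nginx Deployment managed both ways (a sketch; the manifest file name nginx-dp.yaml is illustrative, and the image path follows this series' harbor.od.com registry):

# imperative: one command, all options spelled out on the command line
kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
# declarative: the same intent written down in a manifest, then applied
kubectl apply -f nginx-dp.yaml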

Detailed usage of the kubectl command-line tool

1.1. Imperative Management

1.1.1. Managing Namespace Resources

1.1.1.1. Viewing namespaces
Use get for basic information:
Usage:  kubectl get resource [-o wide|json|yaml] [-n namespace]
Man:    gets information about a resource. -n specifies the namespace; -o specifies the output format. resource can be a concrete resource name (e.g. pod nginx-xxx), a resource type (e.g. pod), or all (which covers only a few core resource kinds and is not complete). -A, --all-namespaces shows all namespaces.
Use describe for a more detailed view:
Usage: kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME) [-n namespace]
Man:   describes a resource in detail

Log in to any compute node; here I use 10.4.7.21.

[root@hdss7-21 ~]# kubectl get namespace        # short form: kubectl get ns
NAME              STATUS   AGE
default           Active   6d17h
kube-node-lease   Active   6d17h
kube-public       Active   6d17h
kube-system       Active   6d17h
1.1.1.2. Viewing resources within a namespace

View all resources in the default namespace:

[root@hdss7-21 ~]# kubectl get all [-n default]
NAME                 READY   STATUS    RESTARTS   AGE
pod/nginx-ds-t8mwg   1/1     Running   2          44h
pod/nginx-ds-tjkt2   1/1     Running   2          44h

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   6d17h

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ds   2         2         2       2            2           <none>          44h
1.1.1.3. Creating a namespace
[root@hdss7-21 ~]# kubectl create namespace app
namespace/app created
[root@hdss7-21 ~]# kubectl get ns
NAME              STATUS   AGE
app               Active   12s
default           Active   6d17h
kube-node-lease   Active   6d17h
kube-public       Active   6d17h
kube-system       Active   6d17h
1.1.1.4. Deleting a namespace
[root@hdss7-21 ~]# kubectl delete ns app
namespace "app" deleted

1.1.2. Managing Deployments

1.1.2.1. Creating and viewing a Deployment
[root@hdss7-21 ~]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
deployment.apps/nginx-dp created
[root@hdss7-21 ~]# kubectl get deploy -n kube-public
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-dp   1/1     1            1           69s
[root@hdss7-21 ~]# kubectl get pods -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-mxtnz   1/1     Running   0          2m46s
[root@hdss7-21 ~]# kubectl get pods -n kube-public -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE                NOMINATED NODE   READINESS GATES
nginx-dp-5dfc689474-mxtnz   1/1     Running   0          4m1s   172.7.21.3   hdss7-21.host.com   <none>           <none>
1.1.2.2. Detailed view
[root@hdss7-21 ~]# kubectl describe deployment nginx-dp -n kube-public
Name:                   nginx-dp
Namespace:              kube-public
CreationTimestamp:      Wed, 21 Apr 2021 21:10:41 +0800
Labels:                 app=nginx-dp
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx-dp
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-dp
  Containers:
   nginx:
    Image:        harbor.od.com/public/nginx:v1.7.9
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-dp-5dfc689474 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  14m   deployment-controller  Scaled up replica set nginx-dp-5dfc689474 to 1
1.1.2.3. Entering a pod
[root@hdss7-21 ~]# kubectl exec -it nginx-dp-5dfc689474-mxtnz /bin/bash -n kube-public
root@nginx-dp-5dfc689474-mxtnz:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:07:15:03 brd ff:ff:ff:ff:ff:ff
    inet 172.7.21.3/24 brd 172.7.21.255 scope global eth0
       valid_lft forever preferred_lft forever

You can also enter the container with docker exec -it.

1.1.2.4. Deleting a pod (a restart)

This amounts to a restart: delete the pod and look again, and the pod's name has changed.

[root@hdss7-21 ~]# kubectl delete pod nginx-dp-5dfc689474-mxtnz -n kube-public
pod "nginx-dp-5dfc689474-mxtnz" deleted
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl get pods -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-vcqw5   1/1     Running   0          3s
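To watch the controller pull a replacement pod up in real time, run a watch in a second terminal before deleting (a sketch):

[root@hdss7-21 ~]# kubectl get pods -n kube-public -w
# the deleted pod goes Terminating while a new one appears as Pending → ContainerCreating → Running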
1.1.2.5. Deleting a Deployment
[root@hdss7-21 ~]# kubectl delete deployment nginx-dp -n kube-public
deployment.extensions "nginx-dp" deleted
[root@hdss7-21 ~]# kubectl get all -n kube-public
No resources found.

1.1.3. Managing Service Resources

1.1.3.1. Creating a service

First create the pod controller:

[root@hdss7-21 ~]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
deployment.apps/nginx-dp created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl get all -n kube-public
NAME                            READY   STATUS    RESTARTS   AGE
pod/nginx-dp-5dfc689474-twtg5   1/1     Running   0          24s
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-dp   1/1     1            1           24s
NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-dp-5dfc689474   1         1         1       24s

Then create the service:

[root@hdss7-21 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public
service/nginx-dp exposed
1.1.3.2. Viewing the service
[root@hdss7-21 ~]# kubectl get all -n kube-public
NAME                            READY   STATUS    RESTARTS   AGE
pod/nginx-dp-5dfc689474-twtg5   1/1     Running   0          5m3s

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-dp   ClusterIP   192.168.44.93   <none>        80/TCP    59s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-dp   1/1     1            1           5m3s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-dp-5dfc689474   1         1         1       5m3s
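With the cluster IP known, a quick sanity check from any node (a sketch using the IP shown above):

[root@hdss7-21 ~]# curl -I 192.168.44.93
# an HTTP response from the nginx pod behind the service is expected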
1.1.3.3. Viewing the port proxy (IPVS)
[root@hdss7-22 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0
  -> 10.4.7.22:6443               Masq    1      0          0
TCP  192.168.44.93:80 nq
  -> 172.7.21.3:80                Masq    1      0          0
1.1.3.4. Scaling the service
[root@hdss7-21 ~]# kubectl scale deployment nginx-dp --replicas=2 -n kube-public
deployment.extensions/nginx-dp scaled
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0
  -> 10.4.7.22:6443               Masq    1      0          0
TCP  192.168.44.93:80 nq
  -> 172.7.21.3:80                Masq    1      0          0
  -> 172.7.22.3:80                Masq    1      0          0
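Scaling down is the same command with a smaller replica count; kubectl also accepts the TYPE/NAME form (a sketch):

[root@hdss7-21 ~]# kubectl scale deployment/nginx-dp --replicas=1 -n kube-public
# one of the two real servers disappears from the ipvsadm output again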

1.1.4. kubectl usage summary

  • The sole entry point for managing resources in a Kubernetes cluster is calling the apiserver's interfaces by the appropriate means
  • kubectl is the official CLI tool. It communicates with the apiserver, turning the commands a user types on the command line into requests the apiserver can understand, and is thus an effective way to manage every kind of K8S resource.
  • kubectl command reference
    • kubectl --help
    • http://docs.kubernetes.org.cn
  • Imperative management can satisfy more than 90% of resource-management needs, but its drawbacks are obvious:
    • commands are long, complex, and hard to remember
    • in certain scenarios, it cannot express the desired management operation
    • creating, deleting, and querying resources is easy, but modifying them is painful

1.2. Declarative Management

Declarative resource management relies on resource configuration manifests (yaml/json).

1.2.1. Viewing a resource's configuration manifest

1.2.1.1. Viewing pods
[root@hdss7-21 ~]# kubectl get pods -n kube-public
NAME                      READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-twtg5 1/1     Running   1          23h
[root@hdss7-21 ~]# kubectl get pods nginx-dp-5dfc689474-twtg5 -o yaml -n kube-public
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-04-21T14:01:09Z"
  generateName: nginx-dp-5dfc689474-
  labels:
    app: nginx-dp
    pod-template-hash: 5dfc689474
  name: nginx-dp-5dfc689474-twtg5
  namespace: kube-public
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-dp-5dfc689474
    uid: fd61affd-bb07-494b-bd80-cca2830a53a0
  resourceVersion: "51413"
  selfLink: /api/v1/namespaces/kube-public/pods/nginx-dp-5dfc689474-twtg5
  uid: 4c1b4cb9-44c8-4772-b448-16ae4073ac9c
spec:
  containers:
  - image: harbor.od.com/public/nginx:v1.7.9
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-rxkkz
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: hdss7-21.host.com
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-rxkkz
    secret:
      defaultMode: 420
      secretName: default-token-rxkkz
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-04-21T14:01:09Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-04-22T13:14:49Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-04-22T13:14:49Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-04-21T14:01:09Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://4f8cbfdc4c1fc3c8ba8b196f76a3f0c5ec269ba069344849f3b5d3a29f9ca826
    image: harbor.od.com/public/nginx:v1.7.9
    imageID: docker-pullable://harbor.od.com/public/nginx@sha256:b1f5935eb2e9e2ae89c0b3e2e148c19068d91ca502e857052f14db230443e4c2
    lastState:
      terminated:
        containerID: docker://3b965ed0b5ce249c19cf0c543ae52578f4224d8e12360267384dcfdefe730b76
        exitCode: 255
        finishedAt: "2021-04-22T13:14:41Z"
        reason: Error
        startedAt: "2021-04-21T14:01:10Z"
    name: nginx
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: "2021-04-22T13:14:48Z"
  hostIP: 10.4.7.21
  phase: Running
  podIP: 172.7.21.2
  qosClass: BestEffort
  startTime: "2021-04-21T14:01:09Z"
1.2.1.2. Viewing a service
[root@hdss7-22 ~]# kubectl get svc -n kube-public
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-dp   ClusterIP   192.168.44.93   <none>        80/TCP    23h
[root@hdss7-22 ~]# kubectl get svc nginx-dp -o yaml -n kube-public
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-04-21T14:05:13Z"
  labels:
    app: nginx-dp
  name: nginx-dp
  namespace: kube-public
  resourceVersion: "48685"
  selfLink: /api/v1/namespaces/kube-public/services/nginx-dp
  uid: ed0dc55e-3cae-4c6c-85f1-cf4e5d902267
spec:
  clusterIP: 192.168.44.93
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-dp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

1.2.2. Explaining a resource configuration manifest

[root@hdss7-22 ~]# kubectl explain service
KIND:     Service
VERSION:  v1

DESCRIPTION:
     Service is a named abstraction of software service (for example, mysql)
     consisting of local port (for example 3306) that the proxy listens on, and
     the selector that determines which pods will answer requests sent through
     the proxy.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind   <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata   <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec   <Object>
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status   <Object>
     Most recently observed status of the service. Populated by the system.
     Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

1.2.3. Creating a resource configuration manifest

[root@hdss7-21 ~]# vim nginx-ds-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ds
  name: nginx-ds
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ds
  sessionAffinity: None
  type: ClusterIP
[root@hdss7-21 ~]# kubectl create -f nginx-ds-svc.yaml
service/nginx-ds created
[root@hdss7-21 ~]# kubectl get svc -n default
NAME         TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1       <none>        443/TCP   7d18h
nginx-ds     ClusterIP   192.168.167.187   <none>        80/TCP    55s
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl get svc nginx-ds -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-04-22T14:05:32Z"
  labels:
    app: nginx-ds
  name: nginx-ds
  namespace: default
  resourceVersion: "55757"
  selfLink: /api/v1/namespaces/default/services/nginx-ds
  uid: 78abf20e-2e7c-4632-b00c-8a377a4fbe0d
spec:
  clusterIP: 192.168.167.187
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ds
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

1.2.4. Applying a resource configuration manifest

kubectl apply -f nginx-ds-svc.yaml
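Unlike create, apply is idempotent: it can be run repeatedly against the same file, creating the object the first time and patching it on later runs. A sketch (on an object first made with create, kubectl prints a one-time warning before taking it over):

[root@hdss7-21 ~]# kubectl apply -f nginx-ds-svc.yaml
service/nginx-ds unchanged        # nothing changed in the file, so nothing is done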

1.2.5. Modifying a manifest and applying it

  • Offline modification
Edit the nginx-ds-svc.yaml file, then run kubectl apply -f nginx-ds-svc.yaml to make it take effect
  • Online modification
Edit the live manifest directly with kubectl edit service nginx-ds and save

1.2.6. Deleting a resource via its manifest

  • Imperative delete
kubectl delete service nginx-ds -n default
  • Declarative delete
kubectl delete -f nginx-ds-svc.yaml

1.2.7. Summary

  • Declarative resource management relies on unified resource configuration manifest files to manage resources
  • Resources are defined ahead of time in unified resource configuration manifests, which are then applied to the K8S cluster with imperative commands
  • Syntax: kubectl create/apply/delete -f /path/to/yaml
  • How to learn resource configuration manifests (see the kubectl explain sketch after this list):
    • tip 1: read plenty of manifests written by others (including the official ones) until you can understand them
    • tip 2: take an existing file and adapt it to your needs
    • tip 3: when something is unclear, look it up with kubectl explain …
    • tip 4: as a beginner, never try to conjure a manifest out of thin air by yourself
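kubectl explain drills into nested fields with dot notation, which is what makes tip 3 practical (a sketch):

[root@hdss7-21 ~]# kubectl explain service.spec
[root@hdss7-21 ~]# kubectl explain service.spec.ports.targetPort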

II. K8S Core Add-ons

Kubernetes defines a network model but leaves its implementation to network plugins. The key job of a CNI network plugin is to let pod resources communicate across hosts.

Common CNI network plugins:

  • Flannel
  • Calico
  • Canal
  • Contiv
  • OpenContrail
  • NSX-T
  • Kube-router

2.1. The Kubernetes CNI Network Plugin: flannel

Cluster plan

Hostname            Role     IP
HDSS7-21.host.com   flannel  10.4.7.21
HDSS7-22.host.com   flannel  10.4.7.22

Deployment on 10.4.7.21 is shown below as an example; 10.4.7.22 is done the same way.

Download the software, extract it, and create a symlink

Download: https://github.com/coreos/flannel/releases/

[root@hdss7-21 src]# cd /opt/src
[root@hdss7-21 src]# mkdir /opt/flannel-v0.11.0/
[root@hdss7-21 src]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
[root@hdss7-21 src]# ln -s /opt/flannel-v0.11.0 /opt/flannel

Copy the certificates into the flannel cert directory (create /opt/flannel/cert on 10.4.7.21 and 10.4.7.22 first)

Log in to 10.4.7.200

[root@hdss7-200 certs]# cd /opt/certs
[root@hdss7-200 certs]# scp -r client.pem client-key.pem ca.pem 10.4.7.21:/opt/flannel/cert/
[root@hdss7-200 certs]# scp -r client.pem client-key.pem ca.pem 10.4.7.22:/opt/flannel/cert/

Config file, startup script, and log directory

[root@hdss7-21 flannel]# vim subnet.env         # adjust the IPs for each node
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.21.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
[root@hdss7-21 flannel]# vim flanneld.sh        # adjust the IP addresses for each node
#!/bin/sh
./flanneld \
  --public-ip=10.4.7.21 \
  --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --etcd-keyfile=./cert/client-key.pem \
  --etcd-certfile=./cert/client.pem \
  --etcd-cafile=./cert/ca.pem \
  --iface=ens33 \
  --subnet-file=./subnet.env \
  --healthz-port=2401
[root@hdss7-22 flannel]# chmod +x flanneld.sh
[root@hdss7-21 flannel]# mkdir -p /home/logs/flanneld

Operate etcd: add the host-gw backend

This only needs to be done on one etcd node; here it is done on 10.4.7.21.

[root@hdss7-21 ~]# cd /opt/etcd
[root@hdss7-21 ~]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
[root@hdss7-21 etcd]# ./etcdctl get /coreos.com/network/config
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}

Start it via supervisord

[root@hdss7-21 ~]# vim /etc/supervisord.d/flannel.ini       # adjust the paths as needed
[program:flanneld-7-21]
command=/opt/flannel/flanneld.sh                             ; the program (relative uses PATH, can take args)
numprocs=1                                                   ; number of processes copies to start (def 1)
directory=/opt/flannel                                       ; directory to cwd to before exec (def no cwd)
autostart=true                                               ; start at supervisord start (default: true)
autorestart=true                                             ; restart at unexpected quit (default: true)
startsecs=30                                                 ; number of secs prog must stay running (def. 1)
startretries=3                                               ; max # of serial start failures (default 3)
exitcodes=0,2                                                ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                              ; signal used to kill process (default TERM)
stopwaitsecs=10                                              ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                    ; setuid to this UNIX account to run the program
redirect_stderr=true                                         ; redirect proc stderr to stdout (default false)
stdout_logfile=/home/logs/flanneld/flanneld.stdout.log       ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                 ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                     ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                  ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                  ; emit events on stdout writes (default false)
[root@hdss7-21 ~]# supervisorctl update
flanneld-7-21: added process group
[root@hdss7-21 ~]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 6523, uptime 1:43:52
flanneld-7-21                    RUNNING   pid 36736, uptime 0:00:34
kube-apiserver-7-21              RUNNING   pid 6528, uptime 1:43:52
kube-controller-manager-7-21     RUNNING   pid 6536, uptime 1:43:52
kube-kubelet-7-21                RUNNING   pid 6520, uptime 1:43:52
kube-proxy-7-21                  RUNNING   pid 6549, uptime 1:43:51
kube-scheduler-7-21              RUNNING   pid 6564, uptime 1:43:51

Check the routes

[root@hdss7-22 flannel]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.4.7.254      0.0.0.0         UG    100    0        0 ens33
10.4.7.0        0.0.0.0         255.255.255.0   U     100    0        0 ens33
172.7.21.0      10.4.7.21       255.255.255.0   UG    0      0        0 ens33
172.7.22.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0

SNAT optimization

[root@hdss7-22 flannel]# yum -y install iptables-services
[root@hdss7-22 flannel]# systemctl start iptables
[root@hdss7-22 flannel]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
[root@hdss7-22 flannel]#
[root@hdss7-22 flannel]# iptables-save |grep -i postrouting
:POSTROUTING ACCEPT [9:546]
:KUBE-POSTROUTING - [0:0]
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.7.22.0/24 ! -o docker0 -j MASQUERADE     # this is the rule we want to optimize
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE
[root@hdss7-22 flannel]#           # delete that rule
[root@hdss7-22 flannel]# iptables -t nat -D POSTROUTING -s 172.7.22.0/24 ! -o docker0 -j MASQUERADE
[root@hdss7-22 flannel]#           # insert a new rule in its place
[root@hdss7-22 flannel]# iptables -t nat -I POSTROUTING -s 172.7.22.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
[root@hdss7-22 flannel]# iptables-save |grep -i postrouting
:POSTROUTING ACCEPT [4:240]
:KUBE-POSTROUTING - [0:0]
-A POSTROUTING -s 172.7.22.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE
[root@hdss7-22 flannel]#           # save the rules to file
[root@hdss7-22 flannel]# iptables-save  > /etc/sysconfig/iptables
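The same rule change is applied on hdss7-21 for its subnet 172.7.21.0/24. Afterwards, cross-node pod-to-pod traffic is no longer masqueraded, so containers see each other's real pod IPs. One way to confirm (a sketch, using the pod names and IPs from the examples above):

[root@hdss7-21 ~]# kubectl exec -it nginx-ds-t8mwg /bin/bash
root@nginx-ds-t8mwg:/# curl -s 172.7.22.3 >/dev/null
# in the access log of the nginx pod on hdss7-22, the client address should now be
# the pod IP (172.7.21.x) rather than the node address 10.4.7.21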

III. The Kubernetes Service Discovery Plugin: CoreDNS

Service discovery is the process by which services (applications) locate one another.
It is not unique to the cloud-computing era; traditional monolithic architectures used it too. The following scenarios need service discovery all the more:

  • services (applications) that are highly dynamic
  • services (applications) that are updated and released frequently
  • services (applications) that auto-scale

In a K8S cluster, pods change constantly. How do we keep a stable handle on them?

  • The Service resource is abstracted out; through a label selector it tracks a group of pods
  • A cluster network is abstracted out; through a relatively fixed "cluster IP", the service gets a stable access point

So how do we automatically associate a Service's "name" with its "cluster IP", so that services are discovered automatically by the cluster?

  • Consider the traditional DNS model: hdss7-21.host.com → 10.4.7.21
  • Can we build the same model inside K8S: nginx-ds → 192.168.0.5?
  • That is K8S's approach to service discovery: DNS

Plugins (software) that implement DNS inside K8S

  • kube-dns: Kubernetes v1.2 through v1.10
  • CoreDNS: Kubernetes v1.11 to the present

1. Installing and deploying CoreDNS

Serve the K8S resource configuration manifests over an internal HTTP service.
On the ops host 10.4.7.200, configure an nginx virtual host that provides a single access point for the cluster's resource configuration manifests.

2. Configure nginx to serve the yaml online

[root@hdss7-200 ~]# vim /etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
    listen       80;
    server_name  k8s-yaml.od.com k8s-yaml.grep.pro;

    location / {
        autoindex on;
        default_type text/plain;
        root /home/k8s-yaml;
    }
}
[root@hdss7-200 ~]# mkdir -p /home/k8s-yaml/coredns

On the DNS host 10.4.7.11, add a DNS record at the bottom of the zone

[root@hdss7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600  ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021032902 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A k8s-yaml.od.com @10.4.7.11 +short
10.4.7.200

3. Deploying CoreDNS

CoreDNS on GitHub: https://github.com/coredns/coredns
CoreDNS on Docker Hub: https://hub.docker.com/r/coredns/coredns/tags?page=1&ordering=last_updated
Log in to the ops host 10.4.7.200

[root@hdss7-200 coredns]# docker pull docker.io/coredns/coredns:1.6.1
[root@hdss7-200 coredns]# docker images |grep coredns
coredns/coredns                 1.6.1     c0f6e815079e   21 months ago   42.2MB
[root@hdss7-200 coredns]# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
[root@hdss7-200 coredns]# docker push harbor.od.com/public/coredns:v1.6.1
The push refers to repository [harbor.od.com/public/coredns]
da1ec456edc8: Pushed
225df95e717c: Pushed
v1.6.1: digest: sha256:c7bf0ce4123212c87db74050d4cbab77d8f7e0b49c041e894a35ef15827cf938 size: 739

4. Prepare the resource configuration manifests

Reference: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

4.1. rbac.yaml
[root@hdss7-200 coredns]# cd /home/k8s-yaml/coredns
[root@hdss7-200 coredns]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
4.2. cm.yaml
[root@hdss7-200 coredns]# vim cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.4.7.11
        cache 30
        loop
        reload
        loadbalance
    }
4.3. dp.yaml
[root@hdss7-200 coredns]# vim dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
4.4. svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
4.5. Apply them in order

Log in to 10.4.7.21

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
configmap/coredns created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
deployment.apps/coredns created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created
4.6. Check
[root@hdss7-21 ~]# kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-6b6c4f9648-xklfx   1/1     Running   0          5m9s

NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   5m1s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           5m9s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-6b6c4f9648   1         1         1       5m9s
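A quick resolution test against the cluster DNS (a sketch; kubernetes.default is the built-in apiserver Service, and 192.168.0.2 is the clusterIP fixed in svc.yaml):

[root@hdss7-21 ~]# dig -t A kubernetes.default.svc.cluster.local @192.168.0.2 +short
192.168.0.1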

IV. The Kubernetes Service Exposure Plugin: Traefik

K8S DNS gives services automatic discovery inside the cluster. How can a service be used and accessed from outside the cluster?

  • Use a NodePort-type Service

    • Note: this cannot be combined with kube-proxy's ipvs model; only the iptables model works
  • Use an Ingress resource
    • Note: Ingress can only schedule and expose layer-7 applications, specifically the http and https protocols
  • Ingress is one of the standard K8S API resource types and a core resource; it is essentially a set of rules, keyed on domain name and URL path, that forward user requests to a specified Service resource
  • It forwards request traffic from outside the cluster to the inside, achieving "service exposure"
  • An Ingress controller is the component that listens on a socket for Ingress resources and then routes and schedules traffic according to the Ingress rule-matching mechanism
  • Frankly, there is nothing mysterious about Ingress: it is just nginx plus a bit of Go code
  • Commonly used Ingress controller implementations
    • Ingress-nginx
    • HAProxy
    • Traefik

1. Deploying Traefik (the Ingress controller)

Traefik on GitHub: https://github.com/traefik/traefik
Traefik on Docker Hub:
Log in to the ops host 10.4.7.200

[root@hdss7-200 k8s-yaml]# cd /home/k8s-yaml/
[root@hdss7-200 k8s-yaml]# mkdir traefik && cd traefik
[root@hdss7-200 traefik]# docker pull traefik:v1.7.2-alpine
[root@hdss7-200 traefik]# docker images |grep traefik
traefik                         v1.7.2-alpine   add5fac61ae5   2 years ago     72.4MB
[root@hdss7-200 traefik]# docker tag add5fac61ae5 harbor.od.com/public/traefik:v1.7.2
[root@hdss7-200 traefik]# docker push harbor.od.com/public/traefik:v1.7.2

2. Prepare the resource configuration manifests

2.1. rbac.yaml
[root@hdss7-200 traefik]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
2.2. ds.yaml
[root@hdss7-200 traefik]# vim ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress
        name: traefik-ingress
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.od.com/public/traefik:v1.7.2
        name: traefik-ingress
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://10.4.7.10:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
2.3. svc.yaml
[root@hdss7-200 traefik]# vim svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress
  ports:
  - protocol: TCP
    port: 80
    name: controller
  - protocol: TCP
    port: 8080
    name: admin-web
2.4. ingress.yaml
[root@hdss7-200 traefik]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080

3. Apply them in order

Log in to the 10.4.7.21 machine

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml
daemonset.extensions/traefik-ingress created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
service/traefik-ingress-service created
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
ingress.extensions/traefik-web-ui created

4. Check

[root@hdss7-21 ~]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6b6c4f9648-xklfx   1/1     Running   1          23h
traefik-ingress-vqk7m      1/1     Running   0          3m57s
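Before the front-end nginx is configured, the controller can already be tested against a node's hostPort 81 by faking the Host header (a sketch):

[root@hdss7-21 ~]# curl -I -H "Host: traefik.od.com" http://10.4.7.21:81/
# the Traefik dashboard should answer once the ingress rule has propagated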

5. Configure the reverse proxy

The nginx on both 10.4.7.11 and 10.4.7.12 needs this configuration

[root@hdss7-11 ~]# vim /etc/nginx/conf.d/od.com.conf
upstream default_backend_traefik {
    server 10.4.7.21:81    max_fails=3 fail_timeout=10s;
    server 10.4.7.22:81    max_fails=3 fail_timeout=10s;
}
server {
    server_name *.od.com *.grep.pro;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
[root@hdss7-11 ~]# nginx -s reload

6. Update the DNS configuration

[root@hdss7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600  ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021032904 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named

Visit http://traefik.od.com in a browser; the Traefik web UI should appear.

V. The Kubernetes GUI Management Tool: Dashboard

Log in to the 10.4.7.200 machine

1. Deploying kubernetes-dashboard

1.1. Prepare the image

[root@hdss7-200 k8s-yaml]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@hdss7-200 k8s-yaml]# docker images |grep dashboard
k8scn/kubernetes-dashboard-amd64   v1.8.3          fcac9aa03fd6   2 years ago     102MB
[root@hdss7-200 k8s-yaml]# docker tag fcac9aa03fd6 harbor.od.com/public/dashboard:v1.8.3
[root@hdss7-200 k8s-yaml]# docker push harbor.od.com/public/dashboard:v1.8.3
The push refers to repository [harbor.od.com/public/dashboard]
23ddb8cbb75a: Pushed
v1.8.3: digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ff size: 529

1.2. Prepare the resource configuration manifests

1.2.1. rbac.yaml
[root@hdss7-200 k8s-yaml]# mkdir dashboard
[root@hdss7-200 k8s-yaml]# cd dashboard/
[root@hdss7-200 dashboard]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
1.2.2. dp.yaml
[root@hdss7-200 dashboard]# vim dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
1.2.3. svc.yaml
[root@hdss7-200 dashboard]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
1.2.4. ingress.yaml
[root@hdss7-200 dashboard]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

1.3. Apply the resource configuration manifests

Pick any compute node; here I choose 10.4.7.22.

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
[root@hdss7-22 ~]#
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
deployment.apps/kubernetes-dashboard created
[root@hdss7-22 ~]#
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
service/kubernetes-dashboard created
[root@hdss7-22 ~]#
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
ingress.extensions/kubernetes-dashboard created

2. DNS resolution

[root@hdss7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600  ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021032905 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
dashboard          A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A dashboard.od.com @10.4.7.11 +short
10.4.7.10

3. Browser access


4. Configure authentication

Log in to the 10.4.7.200 host to create the certificate

[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 certs]# (umask 077; openssl genrsa -out dashboard.od.com.key 2048)
Generating RSA private key, 2048 bit long modulus
...................................................................+++
...+++
e is 65537 (0x10001)
[root@hdss7-200 certs]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OldboyEdu/OU=ops"
[root@hdss7-200 certs]# openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
Signature ok
subject=/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OldboyEdu/OU=ops
[root@hdss7-200 certs]# ll dashboard.od.com.*
-rw-r--r-- 1 root root 1196 May 10 22:09 dashboard.od.com.crt
-rw-r--r-- 1 root root 1005 May 10 22:05 dashboard.od.com.csr
-rw------- 1 root root 1675 May 10 21:57 dashboard.od.com.key

Switch to the 10.4.7.11 server

[root@hdss7-11 ~]# cd /etc/nginx/
[root@hdss7-11 nginx]# mkdir certs
[root@hdss7-11 nginx]# cd certs/
[root@hdss7-11 certs]# scp -r hdss7-200:/opt/certs/dashboard.od.com.* ./
[root@hdss7-11 certs]# rm -f dashboard.od.com.csr
[root@hdss7-11 conf.d]# cd /etc/nginx/conf.d
[root@hdss7-11 conf.d]# vim dashboard.od.com.conf
server {
    listen      80;
    server_name dashboard.od.com;

    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen      443 ssl;
    server_name dashboard.od.com;

    ssl_certificate "certs/dashboard.od.com.crt";
    ssl_certificate_key "certs/dashboard.od.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

Refresh the page, click through the security warning, and continue.

[root@hdss7-22 ~]# kubectl get secret -n kube-system
NAME                                     TYPE                                  DATA   AGE
coredns-token-snpx8                      kubernetes.io/service-account-token   3      20h
default-token-z6pmn                      kubernetes.io/service-account-token   3      21d
kubernetes-dashboard-admin-token-pbr2v   kubernetes.io/service-account-token   3      16m
kubernetes-dashboard-key-holder          Opaque                                2      6m19s
traefik-ingress-controller-token-t27zn   kubernetes.io/service-account-token   3      8h
[root@hdss7-22 ~]# kubectl describe secret kubernetes-dashboard-admin-token-pbr2v  -n kube-system
Name:         kubernetes-dashboard-admin-token-pbr2v
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: 1f03a210-3dae-4b10-9a19-1c5b6679edd4

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1346 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImtYmVybmV0ZXMuaW8vc2VydmljZWFY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1wYnIydiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFmMDNhMjEwLTNkYWUtNGIxMC05YTE5LTFjNWI2Njc5ZWRkNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.G76b_oYCaqIL2h6ejhak5qeO4BnibLxv9RNmi-y23DLvcekzs_wKk7D1KSUDTF_yGF9GnQZ_ECA_4d8yH2q3l0vwpCcitXw0H_YsOaGw5t8wZbATSUKEEZfjAULXXnZREP9Aa8as14i1tcgw2DGcHxyBCcP9bvhZcj3INsat3lBcmotr3Y3ynDGXAkE-8CSRFnK2YbnUtCc0CijC2nPgugBNR-wV9SMhoLQ1L5SZHOQgmaC9OKlQhGCDvWukDXUdBtaNdBW1UJUMHrg1UV5iwFAtQccpOxfoUa8WJVkGBQYsDlpT3LqG21sPUt6HwD4228MbiWvqRbgNBL3IQcgowg
Log in with the token shown above.
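To avoid hunting for the secret name by hand, the lookup and describe steps can be chained (a sketch):

[root@hdss7-22 ~]# kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin-token/ {print $1}')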

Log in to the ops host 10.4.7.200

[root@hdss7-200 dashboard]# vim rbac-minamal.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Apply it on the compute node 10.4.7.22

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac-minamal.yaml
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

Modify dp.yaml on 10.4.7.200

[root@hdss7-200 dashboard]# vim dp.yaml
# change the serviceAccountName line to:  serviceAccountName: kubernetes-dashboard

Re-apply dp.yaml on the compute node 10.4.7.22

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
deployment.apps/kubernetes-dashboard configured

5. Deploying heapster, a small dashboard add-on

Log in to the ops host 10.4.7.200

[root@hdss7-200 dashboard]# cd /home/k8s-yaml/dashboard
[root@hdss7-200 dashboard]# mkdir heapster
[root@hdss7-200 dashboard]# cd heapster/
[root@hdss7-200 heapster]# docker pull quay.io/bitnami/heapster:1.5.4
[root@hdss7-200 heapster]# docker images |grep heap
quay.io/bitnami/heapster                1.5.4           c359b95ad38b   2 years ago     136MB
[root@hdss7-200 heapster]# docker tag c359b95ad38b harbor.od.com/public/heapster:v1.5.4
[root@hdss7-200 heapster]# docker push harbor.od.com/public/heapster:v1.5.4

5.1. Prepare the resource configuration manifests

rbac.yaml
[root@hdss7-200 heapster]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
dp.yaml
[root@hdss7-200 heapster]# vim dp.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.od.com/public/heapster:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /opt/bitnami/heapster/bin/heapster
        - --source=kubernetes:https://kubernetes.default
svc.yaml
[root@hdss7-200 heapster]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

Back on a compute node, apply them; here I use the 10.4.7.22 server

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/rbac.yaml
serviceaccount/heapster created
clusterrolebinding.rbac.authorization.k8s.io/heapster created
[root@hdss7-22 ~]#
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/dp.yaml
deployment.extensions/heapster created
[root@hdss7-22 ~]#
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/svc.yaml
service/heapster created

VI. How to Upgrade the Cluster Smoothly

First check the current version

[root@hdss7-22 ~]# kubectl get nodes -o wide
NAME                STATUS   ROLES         AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
hdss7-21.host.com   Ready    master,node   22d   v1.15.11   10.4.7.21     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://20.10.5
hdss7-22.host.com   Ready    master,node   22d   v1.15.11   10.4.7.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://20.10.5
[root@hdss7-21 ~]#
[root@hdss7-21 ~]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
coredns-6b6c4f9648-xklfx   1/1     Running   5          15d   172.7.21.4   hdss7-21.host.com   <none>           <none>
heapster-b5b9f794-8dvvh    1/1     Running   0          23m   172.7.22.4   hdss7-22.host.com   <none>           <none>
traefik-ingress-rn7fq      1/1     Running   4          14d   172.7.22.3   hdss7-22.host.com   <none>           <none>
traefik-ingress-vqk7m      1/1     Running   4          14d   172.7.21.3   hdss7-21.host.com   <none>           <none>

Delete one of the nodes and look again: the pods have all been rescheduled onto the other node

[root@hdss7-21 ~]# kubectl delete node hdss7-21.host.com

Copy the new release into the same directory, extract it, copy over the scripts and other files from the old release directory, then restart the components under supervisord; a sketch follows.
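A sketch of the per-node flow (the exact paths and version numbers depend on the layout chosen in part 1 of this series):

# 1. take the node out of the cluster; its pods reschedule onto the remaining node
[root@hdss7-21 ~]# kubectl delete node hdss7-21.host.com
# 2. unpack the new release next to the old one, copy the certs, configs and
#    start scripts over from the old directory, and repoint the symlink
# 3. restart everything managed by supervisord
[root@hdss7-21 ~]# supervisorctl restart all
# 4. once the node re-registers and is Ready, repeat on the next node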

VII. Production Maintenance Experience for Kubernetes Clusters

VIII. Delivering Dubbo Microservices to K8S

1. Architecture diagram:

  • Provider: the service provider that exposes services
  • Consumer: the consumer that calls remote services
  • Registry: the registration center for service registration and discovery
  • Monitor: the monitoring center that counts service calls and call durations
  • Container: the container (carrier) the services run in

2. Service delivery flow:

3. Deploying the ZooKeeper cluster

Cluster plan

Hostname             Role                              IP
HDSS7-11.host.com    k8s proxy node 1, zk1             10.4.7.11
HDSS7-12.host.com    k8s proxy node 2, zk2             10.4.7.12
HDSS7-21.host.com    k8s compute node 1, zk3           10.4.7.21
HDSS7-22.host.com    k8s compute node 2, jenkins       10.4.7.22
HDSS7-200.host.com   k8s ops node (docker registry)    10.4.7.200

Install JDK 1.8 (on the 3 zk hosts)

Official site: https://www.oracle.com/cn/java/technologies/javase/javase-jdk8-downloads.html
Official download: https://download.oracle.com/otn/java/jdk/8u291-b10/d7fc238d0cbf4b0dac67be84580cfb4b/jdk-8u291-linux-x64.tar.gz
Baidu Netdisk: https://pan.baidu.com/s/1HIgHE3bGS19hz0P09dntMQ, extraction code: lszs

The installation on 10.4.7.11 is shown as an example.
Extract jdk-8u291-linux-x64.tar.gz:

[root@hdss7-11 src]# mkdir /usr/java
[root@hdss7-11 src]# tar zxf jdk-8u291-linux-x64.tar.gz -C /usr/java/
[root@hdss7-11 src]# ln -s /usr/java/jdk1.8.0_291 /usr/java/jdk
[root@hdss7-11 src]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
[root@hdss7-11 src]# source /etc/profile
[root@hdss7-11 src]# java -version
java version "1.8.0_291"
Java(TM) SE Runtime Environment (build 1.8.0_291-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.291-b10, mixed mode)

Install ZooKeeper

Download: https://archive.apache.org/dist/zookeeper/
Netdisk: https://pan.baidu.com/s/14HmIcZi225p2wIUzXIzfjQ, extraction code: lszs
Installing zookeeper-3.4.11.tar.gz on the 10.4.7.11 server is shown as an example

[root@hdss7-11 src]# tar zxf zookeeper-3.4.11.tar.gz -C /opt/
[root@hdss7-11 src]# cd ..
[root@hdss7-11 opt]# ln -s zookeeper-3.4.11 zookeeper
[root@hdss7-11 opt]# mkdir -vp /home/zookeeper/data /home/zookeeper/logs
[root@hdss7-11 opt]# vim /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/zookeeper/data
dataLogDir=/home/zookeeper/logs
clientPort=2181
server.1=zk1.od.com:2888:3888
server.2=zk2.od.com:2888:3888
server.3=zk3.od.com:2888:3888

Configure DNS resolution

[root@hdss7-11 zookeeper]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600  ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021032906 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
dashboard          A    10.4.7.10
zk1                A    10.4.7.11
zk2                A    10.4.7.12
zk3                A    10.4.7.21
[root@hdss7-11 zookeeper]# systemctl restart named

Join the 3 ZooKeeper nodes into one cluster

On 10.4.7.11:

[root@hdss7-11 zookeeper]# vim /home/zookeeper/data/myid
1

On 10.4.7.12:

[root@hdss7-12 zookeeper]# vim /home/zookeeper/data/myid
2

On 10.4.7.21:

[root@hdss7-21 zookeeper]# vim /home/zookeeper/data/myid
3

Start them one after another

[root@hdss7-11 zookeeper]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
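Once all three are up, each node reports its role; one becomes the leader and the others followers (a sketch):

[root@hdss7-11 zookeeper]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower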

4. Deploying Jenkins

Prepare the image:
Jenkins site: https://www.jenkins.io/
Jenkins image: https://hub.docker.com/r/jenkins/jenkins/tags?page=1&ordering=last_updated&name=2.190.3
Pull command: docker pull jenkins/jenkins:2.190.3

Log in to the 10.4.7.200 machine

[root@hdss7-200 ~]# docker pull jenkins/jenkins:2.190.3
[root@hdss7-200 ~]# docker tag 22b8b9a84dbe harbor.od.com/public/jenkins:v2.190.3
[root@hdss7-200 ~]# docker push harbor.od.com/public/jenkins:v2.190.3
The push refers to repository [harbor.od.com/public/jenkins]
e0485b038afa: Pushed
2950fdd45d03: Pushed
cfc53f61da25: Pushed
29c489ae7aae: Pushed
473b7de94ea9: Pushed
6ce697717948: Pushed
0fb3a3c5199f: Pushed
23257f20fce5: Pushed
b48320151ebb: Pushed
911119b5424d: Pushed
5051dc7ca502: Pushed
a8902d6047fe: Pushed
99557920a7c5: Pushed
7e3c900343d0: Pushed
b8f8aeff56a8: Pushed
687890749166: Pushed
2f77733e9824: Pushed
97041f29baff: Pushed
v2.190.3: digest: sha256:64576b8bd0a7f5c8ca275f4926224c29e7aa3f3167923644ec1243cd23d611f3 size: 4087

Generate the SSH key pair

[root@hdss7-200 ~]# ssh-keygen -t rsa -b 2048 -C "1162841083@qq.com" -N "" -f /root/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:XGLKu1260LNjoyti/iQk5k7/qAwB/sOhHn/XNCbDjH4 1162841083@qq.com
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|.       o .      |
|o    . + o       |
|oo..  * S        |
|o++ .. B +       |
|.=.+o o O o      |
|* *o+o E+*       |
| Bo*+o*+=+       |
+----[SHA256]-----+

Custom Dockerfile

[root@hdss7-200 ~]#  cd /home/dockerfile/jenkins
[root@hdss7-200 jenkins]# vim Dockerfile
FROM harbor.od.com/public/jenkins:v2.190.3
USER root
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
    echo 'Asia/Shanghai' > /etc/timezone
ADD id_rsa /root/.ssh/id_rsa
ADD config.json /root/.docker/config.json
ADD get-docker.sh /get-docker.sh
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&\
    /get-docker.sh
[root@hdss7-200 jenkins]# cp /root/.ssh/id_rsa ./
[root@hdss7-200 jenkins]# cp /root/.docker/config.json ./
[root@hdss7-200 jenkins]# curl -fsSL get.docker.com -o get-docker.sh

This Dockerfile mainly does a few things:

  • sets the container user to root
  • sets the time zone inside the container
  • adds the ssh private key (needed when pulling code with git; the matching public key should be configured in gitlab)
  • adds the config file for logging in to the self-hosted harbor registry
  • modifies the ssh client configuration
  • installs a docker client

Build a new image from the Dockerfile

[root@hdss7-200 jenkins]# docker build . -t harbor.od.com/infra/jenkins:v2.190.3
[root@hdss7-200 jenkins]# docker push harbor.od.com/infra/jenkins:v2.190.3

Create the Kubernetes namespace

On any compute node; here I choose 10.4.7.21

[root@hdss7-21 conf]# kubectl create ns infra
namespace/infra created
[root@hdss7-21 conf]#
[root@hdss7-21 conf]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 -n infra
secret/harbor created
[root@hdss7-21 conf]# kubectl describe secrets harbor -n infra
Name:         harbor
Namespace:    infra
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  107 bytes

Prepare shared storage (NFS)

Run on the ops host and on all compute nodes

[root@hdss7-21 conf]# yum install -y nfs-utils

On the ops host 10.4.7.200

[root@hdss7-200 ~]# vim /etc/exports
/home/nfs-volume 10.4.7.0/24(rw,no_root_squash)
[root@hdss7-200 ~]# mkdir /home/nfs-volume
[root@hdss7-200 ~]# systemctl start nfs
[root@hdss7-200 ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
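From any compute node, confirm the export is visible before pointing pods at it (a sketch):

[root@hdss7-21 ~]# showmount -e 10.4.7.200
Export list for 10.4.7.200:
/home/nfs-volume 10.4.7.0/24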

Prepare the Jenkins resource configuration manifests

[root@hdss7-200 ~]# cd /home/k8s-yaml/
[root@hdss7-200 k8s-yaml]# mkdir jenkins
[root@hdss7-200 k8s-yaml]# cd jenkins
[root@hdss7-200 jenkins]#
dp.yaml
[root@hdss7-200 jenkins]# vim dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        name: jenkins
    spec:
      volumes:
      - name: data
        nfs:
          server: hdss7-200
          path: /home/nfs-volume/jenkins_home
      - name: docker
        hostPath:
          path: /run/docker.sock
          type: ''
      containers:
      - name: jenkins
        image: harbor.od.com/infra/jenkins:v2.190.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx512m -Xms512m
        volumeMounts:
        - name: data
          mountPath: /var/jenkins_home
        - name: docker
          mountPath: /run/docker.sock
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
svc.yaml
[root@hdss7-200 jenkins]# vim svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: jenkins
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: jenkins
ingress.yaml
[root@hdss7-200 jenkins]# vim ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
spec:
  rules:
  - host: jenkins.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80

Apply the configuration manifests

[root@hdss7-200 jenkins]# mkdir /home/nfs-volume/jenkins_home
[root@hdss7-200 jenkins]# kubectl create -f http://k8s-yaml.od.com/jenkins/dp.yaml
[root@hdss7-200 jenkins]# kubectl create -f http://k8s-yaml.od.com/jenkins/svc.yaml
[root@hdss7-200 jenkins]# kubectl create -f http://k8s-yaml.od.com/jenkins/ingress.yaml
[root@hdss7-200 jenkins]# kubectl get pod -n infra
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-66799565d8-n7w6m   1/1     Running   0          111s
[root@hdss7-200 jenkins]# kubectl exec -it -n infra jenkins-66799565d8-n7w6m /bin/bash
root@jenkins-66799565d8-n7w6m:/# date
Sat Feb 27 09:23:26 CST 2021
root@jenkins-66799565d8-n7w6m:/# whoami
root

Configure DNS resolution

Log in to the 10.4.7.11 server

[root@hdss7-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600  ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2021032907 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
dashboard          A    10.4.7.10
zk1                A    10.4.7.11
zk2                A    10.4.7.12
zk3                A    10.4.7.21
jenkins            A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A jenkins.od.com @10.4.7.11 +short
10.4.7.10

IX. Important Images
