Topics

  • 1. How the components work
  • 2. flannel networking
  • 3. Nginx + Tomcat + NFS for static/dynamic content separation

1. How the components work

1.1 How the master components work

1.1.1 kube-apiserver

  • The Kubernetes API Server exposes HTTP REST interfaces for adding, deleting, modifying, querying and watching every kind of resource object (Pod, RC, Service, etc.); it is the data bus and data hub of the whole system.
    The apiserver currently listens on two ports on the master. With --insecure-port it listens on an insecure local port on 127.0.0.1 (default 8080):

  • That port accepts plain HTTP requests;
    its default value 8080 can be changed with the API Server startup parameter "--insecure-port";
    its default address "localhost" can be changed with the startup parameter "--insecure-bind-address";
    unauthenticated and unauthorized HTTP requests reach the API Server through this port (used by kube-controller-manager and kube-scheduler).

  • With the parameter --bind-address=192.168.7.101 it listens on an externally reachable, secure (HTTPS) port (default 6443):

  • Its default value 6443 can be changed with the startup parameter "--secure-port";
    its default address is a non-localhost network address, set with the startup parameter "--bind-address";
    this port accepts external HTTPS requests from clients, the dashboard and so on;
    it is used for authentication based on token files, client certificates or HTTP Basic auth;
    it is used for policy-based authorization.

  • Functions and usage of the Kubernetes API Server:

Provides the REST API for cluster management (including authentication/authorization, data validation and cluster state changes);
acts as the hub for data exchange and communication between the other modules (the other modules query or modify data through the API Server; only the API Server operates on etcd directly);
is the entry point for resource quota control;
has a complete cluster security mechanism.
# curl 127.0.0.1:8080/apis                 # API groups
# curl 127.0.0.1:8080/api/v1               # API with a specific version
# curl 127.0.0.1:8080/                     # list of core API paths
# curl 127.0.0.1:8080/version              # API version information
# curl 127.0.0.1:8080/healthz/etcd         # heartbeat check against etcd
# curl 127.0.0.1:8080/apis/autoscaling/v1  # details of one API group/version
# curl 127.0.0.1:8080/metrics              # metrics data
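The secure port (6443) requires authentication. A hedged sketch of the same kind of query against it, reusing the CA and the admin client certificate paths that appear in the unit file below (whether that certificate is authorized for this call depends on the cluster's RBAC setup):

# curl --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem https://172.16.62.201:6443/api/v1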
  • Startup unit file
root@master1:~# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-apiserver \
  --advertise-address=172.16.62.201 \
  --allow-privileged=true \
  --anonymous-auth=false \
  --authorization-mode=Node,RBAC \
  --bind-address=172.16.62.201 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://172.16.62.210:2379,https://172.16.62.211:2379,https://172.16.62.212:2379 \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/admin.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/admin-key.pem \
  --kubelet-https=true \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --service-cluster-ip-range=172.28.0.0/16 \    # service subnet
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/aggregator-proxy.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/aggregator-proxy-key.pem \
  --enable-aggregator-routing=true \
  --v=2
Restart=always     # restart policy
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@master1:~

1.1.2 kube-controller-manager

  • The Controller Manager is the management and control center inside the cluster (insecure default port 10252). It manages Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount) and resource quotas (ResourceQuota). When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs an automated repair flow, keeping the cluster in its desired working state.

  • Startup unit file

root@master1:~# cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.20.0.0/16 \          # pod subnet
  --cluster-name=kubernetes \            # cluster name
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \             # per-node pod subnet mask
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=172.28.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=always     # restart policy
RestartSec=5       # restart after 5 seconds

[Install]
WantedBy=multi-user.target
root@master1:~# 

1.1.3 kube-scheduler

  • The Scheduler is responsible for Pod scheduling and plays a "bridging" role in the system:

  • Upstream: it receives the new Pods created via the Controller Manager and selects a suitable Node for each of them;
    downstream: the kubelet on that Node then takes over the Pod's lifecycle.

  • Startup unit file

root@master1:~# cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
root@master1:~# 
  • Using its scheduling algorithm, the Scheduler picks the most suitable Node from the list of available Nodes for every Pod in the pending list and writes the binding into etcd.
    The kubelet on that node learns about the Pod binding produced by the Scheduler through the API Server, fetches the corresponding Pod manifest, pulls the image and starts the container.
    Priority functions:
    1. LeastRequestedPriority
    Prefer the candidate node with the lowest resource consumption (CPU + memory).
    2. CalculateNodeLabelPriority
    Prefer nodes that carry the specified label (a label example follows this list).
    3. BalancedResourceAllocation
    Prefer the candidate node whose resource usage is most evenly balanced.
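Label-based placement (the idea behind CalculateNodeLabelPriority, and the nodeSelector seen in the YAML later in this document) can be tried with kubectl; the label key/value below is hypothetical:

root@master1:~# kubectl label node 172.16.62.207 group=web              # attach a (hypothetical) label to one node
root@master1:~# kubectl get nodes --show-labels | grep group=web        # confirm the label is present
# a Pod spec can then request that label with:
#   nodeSelector:
#     group: web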

1.2 How the node components work

1.2.1 kubelet

  • In a Kubernetes cluster, every Node runs a kubelet process that handles the tasks the Master sends down to this node and manages the Pods and their containers. The kubelet registers the node with the API Server, periodically reports the node's resource usage to the Master, and monitors container and node resources through cAdvisor. Think of the kubelet as the agent in a Server/Agent architecture: it is the Pod housekeeper on the Node (see the quick check below).
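A quick way to see the result of this registration and reporting from a master (a sketch; node names are the node IPs in this environment):

root@master1:~# kubectl get nodes -o wide               # nodes registered by their kubelets
root@master1:~# kubectl describe node 172.16.62.207     # capacity, conditions and resource usage reported for one node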

  • Startup unit file

root@node1:~# cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStartPre=/bin/mount -o remount,rw '/sys/fs/cgroup'
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service
ExecStart=/usr/bin/kubelet \
  --config=/var/lib/kubelet/config.yaml \
  --cni-bin-dir=/usr/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --hostname-override=172.16.62.207 \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1 \
  --root-dir=/var/lib/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
root@node1:~#

1.2.2 kube-proxy

  • https://kubernetes.io/zh/docs/concepts/services-networking/service/

  • kube-proxy runs on every node, watches the API Server for changes to Service objects, and implements the forwarding by managing iptables (or IPVS) rules.
    Depending on the version, kube-proxy supports three working modes (a quick check follows the list):

UserSpace
deprecated since k8s v1.2
IPtables
the current default; supported since 1.1 and the default since 1.2
IPVS
introduced in 1.9, GA since 1.11; requires the ipvsadm and ipset packages and the ip_vs kernel module
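To check which mode a node is actually using (a sketch; 10249 is kube-proxy's default metrics port and may differ in other setups):

root@node1:~# curl -s http://127.0.0.1:10249/proxyMode                       # mode reported by the running kube-proxy
root@node1:~# grep proxy-mode /etc/systemd/system/kube-proxy.service         # mode requested in the unit file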
  • Startup unit file
root@node1:~# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; when --cluster-cidr or --masquerade-all is set, kube-proxy SNATs requests that access Service IPs
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/bin/kube-proxy \
  --bind-address=172.16.62.207 \
  --cluster-cidr=10.20.0.0/16 \
  --hostname-override=172.16.62.207 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --proxy-mode=ipvs
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@node1:~#  

1.3 iptables

Kube-proxy watches the Kubernetes master for Services and Endpoints being added or removed. For every Service, kube-proxy creates the corresponding iptables rules and forwards traffic sent to the Service's Cluster IP to the matching port of one of the backend Pods serving it (see the inspection commands below).
Note:
Although the backend Pods can be reached via the Service's Cluster IP and service port, the Cluster IP itself cannot be pinged.
The reason is that a Cluster IP only exists as iptables rules; it is not bound to any network device.
In IPVS mode the Cluster IP can be pinged.
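On a node where kube-proxy runs in iptables mode, these rules can be inspected directly (a sketch using the standard chain names kube-proxy creates; this lab actually runs IPVS mode, so the per-service DNAT chains will be absent there):

root@node1:~# iptables -t nat -L KUBE-SERVICES -n | head     # per-Service entry points
root@node1:~# iptables -t nat -S | grep KUBE-SEP | head      # per-endpoint DNAT chains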

1.4 IPVS

  • Kubernetes began testing IPVS support in 1.9 ("Graduate kube-proxy IPVS mode to beta", https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#ipvs) and made it GA in 1.11 ("IPVS-based in-cluster load balancing is now GA", https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#ipvs).
    IPVS is somewhat more efficient than iptables. Using IPVS mode requires the ipvsadm and ipset packages to be installed and the ip_vs kernel module to be loaded on the nodes running kube-proxy. When kube-proxy starts in IPVS proxy mode, it verifies that the IPVS modules are present on the node; if they are not, kube-proxy falls back to the iptables proxy mode.
In IPVS mode, kube-proxy watches Kubernetes Service and Endpoints objects, calls the host kernel's netlink interface to create the corresponding IPVS rules, and periodically synchronizes the IPVS rules with the Service and Endpoints objects so that the IPVS state matches the desired state. When a service is accessed, traffic is redirected to one of the backend Pods. IPVS uses a hash table as its underlying data structure and works in kernel space, which means it redirects traffic faster and performs better when synchronizing proxy rules. IPVS also offers more load-balancing algorithms, such as rr (round robin), lc (least connections), dh (destination hashing), sh (source hashing), sed (shortest expected delay) and nq (never queue).
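The prerequisites and the resulting rules can be checked on a node (a hedged sketch):

root@node1:~# lsmod | grep ip_vs                      # ip_vs kernel modules loaded?
root@node1:~# ipvsadm -Ln                             # one virtual server per ClusterIP:port, one real server per backend Pod
root@node1:~# ipset list | grep -E '^Name: KUBE'      # ipset sets maintained by kube-proxy in IPVS mode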

1.5 How etcd works

  • etcd is an open-source project started by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value database. Internally etcd uses the Raft protocol as its consensus algorithm, and it is implemented in Go.
  • GitHub: https://github.com/etcd-io/etcd
  • Website: https://etcd.io/
etcd has the following properties:
Fully replicated: every node in the cluster has access to the complete data set.
Highly available: etcd can be used to avoid single points of failure caused by hardware or network problems.
Consistent: every read returns the latest write across the hosts.
Simple: includes a well-defined, user-facing API (gRPC).
Secure: implements automated TLS with optional client-certificate authentication.
Fast: benchmarked at 10,000 writes per second.
Reliable: uses the Raft algorithm to distribute the store sensibly across nodes.
  • Startup unit file
root@etcd1:/tmp/netplan_5juwqwqg# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name=etcd1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://172.16.62.210:2380 \   # peer URL advertised to the rest of the cluster
  --listen-peer-urls=https://172.16.62.210:2380 \              # port for communication between cluster members
  --listen-client-urls=https://172.16.62.210:2379,http://127.0.0.1:2379 \   # client listen addresses
  --advertise-client-urls=https://172.16.62.210:2379 \         # client URL advertised by this member
  --initial-cluster-token=etcd-cluster-0 \                     # token used when creating the cluster; must match on all members
  --initial-cluster=etcd1=https://172.16.62.210:2380,etcd2=https://172.16.62.211:2380,etcd3=https://172.16.62.212:2380 \   # all cluster members
  --initial-cluster-state=new \                                # "new" when creating the cluster, "existing" when joining an existing one
  --data-dir=/var/lib/etcd                                     # data directory
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
root@etcd1:/tmp/netplan_5juwqwqg#

1.5.1 Viewing member information

  • etcd has several API versions. v1 is deprecated; etcd v2 and v3 are essentially two independent applications that share the same Raft code, with different interfaces, different storage, and mutually isolated data. In other words, after upgrading from etcd v2 to etcd v3, data written through the v2 API is still only accessible through the v2 API, and data created through the v3 API is only accessible through the v3 API (a member-list example follows).
    WARNING:
    Environment variable ETCDCTL_API is not set; defaults to etcdctl v2.   # v2 is the default
    Set environment variable ETCDCTL_API=3 to use v3 API or ETCDCTL_API=2 to use v2 API.   # select the API version
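A hedged example of actually listing the members with the v3 API, reusing the certificate paths from the unit file above:

root@etcd1:~# ETCDCTL_API=3 /usr/bin/etcdctl member list --endpoints=https://172.16.62.210:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem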

1.5.2 Checking the health of all etcd members

root@etcd3:~# export NODE_IPS="172.16.62.210 172.16.62.211 172.16.62.212"
root@etcd3:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/bin/etcdctl endpoint health --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem; done
https://172.16.62.210:2379 is healthy: successfully committed proposal: took = 30.252449ms
https://172.16.62.211:2379 is healthy: successfully committed proposal: took = 29.714374ms
https://172.16.62.212:2379 is healthy: successfully committed proposal: took = 28.290729ms
root@etcd3:~# 

1.5.3 Viewing the data stored in etcd

root@etcd3:/var/lib/etcd# ETCDCTL_API=3 etcdctl get / --prefix --keys-only
/registry/apiregistration.k8s.io/apiservices/v1./registry/apiregistration.k8s.io/apiservices/v1.admissionregistration.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.apiextensions.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.apps/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.autoscaling/registry/apiregistration.k8s.io/apiservices/v1.batch/registry/apiregistration.k8s.io/apiservices/v1.coordination.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.scheduling.k8s.io/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.admissionregistration.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.authorization.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.batch/registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.coordination.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.discovery.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.events.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.extensions/registry/apiregistration.k8s.io/apiservices/v1beta1.networking.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.node.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.policy/registry/apiregistration.k8s.io/apiservices/v1beta1.rbac.authorization.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.scheduling.k8s.io/registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io/registry/apiregistration.k8s.io/apiservices/v2beta1.autoscaling/registry/apiregistration.k8s.io/apiservices/v2beta2.autoscaling/registry/clusterrolebindings/admin-user/registry/clusterrolebindings/cluster-admin/registry/clusterrolebindings/flannel/registry/clusterrolebindings/kubernetes-dashboard/registry/clusterrolebindings/system:basic-user/registry/clusterrolebindings/system:controller:attachdetach-controller/registry/clusterrolebindings/system:controller:certificate-controller/registry/clusterrolebindings/system:controller:clusterrole-aggregation-controller/registry/clusterrolebindings/system:controller:cronjob-controller/registry/clusterrolebindings/system:controller:daemon-set-controller/registry/clusterrolebindings/system:controller:deployment-controller/registry/clusterrolebindings/system:controller:disruption-controller/registry/clusterrolebindings/system:controller:endpoint-controller/registry/clusterrolebindings/system:controller:expand-controller/registry/clusterrolebindings/system:controller:generic-garbage-collector/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler/registry/clusterrolebindings/system:controller:job-controller/registry/clusterrolebindings/system:controller:namespace-controller/registry/clusterrolebindings/system:controller:node-controller/registry/clusterrolebindings/system:controller:persistent-volume-binder/registry/clusterrolebindings/system:controller:pod-garbage-collector/registry/clusterrolebindings/system:controller:pv-protection-controller/registry/clusterrolebindings/system:controller:pvc-protection-controller/registry/clusterrolebi
ndings/system:controller:replicaset-controller/registry/clusterrolebindings/system:controller:replication-controller/registry/clusterrolebindings/system:controller:resourcequota-controller/registry/clusterrolebindings/system:controller:route-controller/registry/clusterrolebindings/system:controller:service-account-controller/registry/clusterrolebindings/system:controller:service-controller/registry/clusterrolebindings/system:controller:statefulset-controller/registry/clusterrolebindings/system:controller:ttl-controller/registry/clusterrolebindings/system:coredns/registry/clusterrolebindings/system:discovery/registry/clusterrolebindings/system:kube-controller-manager/registry/clusterrolebindings/system:kube-dns/registry/clusterrolebindings/system:kube-scheduler/registry/clusterrolebindings/system:node/registry/clusterrolebindings/system:node-proxier/registry/clusterrolebindings/system:public-info-viewer/registry/clusterrolebindings/system:volume-scheduler/registry/clusterroles/admin/registry/clusterroles/cluster-admin/registry/clusterroles/edit/registry/clusterroles/flannel/registry/clusterroles/kubernetes-dashboard/registry/clusterroles/system:aggregate-to-admin/registry/clusterroles/system:aggregate-to-edit/registry/clusterroles/system:aggregate-to-view/registry/clusterroles/system:auth-delegator/registry/clusterroles/system:basic-user/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient/registry/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient/registry/clusterroles/system:controller:attachdetach-controller/registry/clusterroles/system:controller:certificate-controller/registry/clusterroles/system:controller:clusterrole-aggregation-controller/registry/clusterroles/system:controller:cronjob-controller/registry/clusterroles/system:controller:daemon-set-controller/registry/clusterroles/system:controller:deployment-controller/registry/clusterroles/system:controller:disruption-controller/registry/clusterroles/system:controller:endpoint-controller/registry/clusterroles/system:controller:expand-controller/registry/clusterroles/system:controller:generic-garbage-collector/registry/clusterroles/system:controller:horizontal-pod-autoscaler/registry/clusterroles/system:controller:job-controller/registry/clusterroles/system:controller:namespace-controller/registry/clusterroles/system:controller:node-controller/registry/clusterroles/system:controller:persistent-volume-binder/registry/clusterroles/system:controller:pod-garbage-collector/registry/clusterroles/system:controller:pv-protection-controller/registry/clusterroles/system:controller:pvc-protection-controller/registry/clusterroles/system:controller:replicaset-controller/registry/clusterroles/system:controller:replication-controller/registry/clusterroles/system:controller:resourcequota-controller/registry/clusterroles/system:controller:route-controller/registry/clusterroles/system:controller:service-account-controller/registry/clusterroles/system:controller:service-controller/registry/clusterroles/system:controller:statefulset-controller/registry/clusterroles/system:controller:ttl-controller/registry/clusterroles/system:coredns/registry/clusterroles/system:discovery/registry/clusterroles/system:heapster/registry/clusterroles/system:kube-aggregator/registry/clusterroles/system:kube-controller-manager/registry/clusterroles/system:kube-dns/registry/clusterroles/system:kube-scheduler/registry/clusterroles/system:kubelet-api-admin/registry/clusterroles/system:node/registry/clusterroles/sys
tem:node-bootstrapper/registry/clusterroles/system:node-problem-detector/registry/clusterroles/system:node-proxier/registry/clusterroles/system:persistent-volume-provisioner/registry/clusterroles/system:public-info-viewer/registry/clusterroles/system:volume-scheduler/registry/clusterroles/view/registry/configmaps/kube-system/coredns/registry/configmaps/kube-system/extension-apiserver-authentication/registry/configmaps/kube-system/kube-flannel-cfg/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings/registry/controllerrevisions/kube-system/kube-flannel-ds-amd64-fcb99d957/registry/csinodes/172.16.62.201/registry/csinodes/172.16.62.202/registry/csinodes/172.16.62.203/registry/csinodes/172.16.62.207/registry/csinodes/172.16.62.208/registry/csinodes/172.16.62.209/registry/daemonsets/kube-system/kube-flannel-ds-amd64/registry/deployments/default/net-test1/registry/deployments/default/net-test2/registry/deployments/default/net-test3/registry/deployments/default/nginx-deployment/registry/deployments/kube-system/coredns/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper/registry/deployments/kubernetes-dashboard/kubernetes-dashboard/registry/events/default/busybox.162762ce821f3622/registry/events/default/busybox.162762cf683a6f3e/registry/events/default/busybox.162762cf7640b61e/registry/events/default/busybox.162762cf9c878ced/registry/leases/kube-node-lease/172.16.62.201/registry/leases/kube-node-lease/172.16.62.202/registry/leases/kube-node-lease/172.16.62.203/registry/leases/kube-node-lease/172.16.62.207/registry/leases/kube-node-lease/172.16.62.208/registry/leases/kube-node-lease/172.16.62.209/registry/leases/kube-system/kube-controller-manager/registry/leases/kube-system/kube-scheduler/registry/masterleases/172.16.62.201/registry/masterleases/172.16.62.202/registry/masterleases/172.16.62.203/registry/minions/172.16.62.201/registry/minions/172.16.62.202/registry/minions/172.16.62.203/registry/minions/172.16.62.207/registry/minions/172.16.62.208/registry/minions/172.16.62.209/registry/namespaces/default/registry/namespaces/kube-node-lease/registry/namespaces/kube-public/registry/namespaces/kube-system/registry/namespaces/kubernetes-dashboard/registry/pods/default/busybox/registry/pods/default/net-test1-5fcc69db59-9mr5d/registry/pods/default/net-test1-5fcc69db59-dqrf8/registry/pods/default/net-test1-5fcc69db59-mbt9f/registry/pods/default/net-test2-8456fd74f7-229tw/registry/pods/default/net-test2-8456fd74f7-r8d2d/registry/pods/default/net-test2-8456fd74f7-vxnsk/registry/pods/default/net-test3-59c6947667-jjf4n/registry/pods/default/net-test3-59c6947667-ll4tm/registry/pods/default/net-test3-59c6947667-pg7x8/registry/pods/default/nginx-deployment-795b7c6c68-zgtzj/registry/pods/kube-system/coredns-cb9d89598-gfqw5/registry/pods/kube-system/kube-flannel-ds-amd64-2htr5/registry/pods/kube-system/kube-flannel-ds-amd64-72qbc/registry/pods/kube-system/kube-flannel-ds-amd64-dqmg5/registry/pods/kube-system/kube-flannel-ds-amd64-jsm4f/registry/pods/kube-system/kube-flannel-ds-amd64-nh6j6/registry/pods/kube-system/kube-flannel-ds-amd64-rnf4b/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-7b8b58dc8b-pj9mg/registry/pods/kubernetes-dashboard/kubernetes-dashboard-6dccc48d7-xgkhz/registry/podsecuritypolicy/psp.flannel.unprivileged/registry/priorityclasses/system-cluster-critical/registry/priorityclasses/system-node-critical/registry/ranges/serviceips/registry/ranges/servicenodeports/registry/replicasets/default/net-test1-5fcc69db59/registry/replicasets/default/net-test2-84
56fd74f7/registry/replicasets/default/net-test3-59c6947667/registry/replicasets/default/nginx-deployment-795b7c6c68/registry/replicasets/kube-system/coredns-cb9d89598/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-7b8b58dc8b/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-5f5f847d57/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-6dccc48d7/registry/rolebindings/kube-public/system:controller:bootstrap-signer/registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader/registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler/registry/rolebindings/kube-system/system:controller:bootstrap-signer/registry/rolebindings/kube-system/system:controller:cloud-provider/registry/rolebindings/kube-system/system:controller:token-cleaner/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard/registry/roles/kube-public/system:controller:bootstrap-signer/registry/roles/kube-system/extension-apiserver-authentication-reader/registry/roles/kube-system/system::leader-locking-kube-controller-manager/registry/roles/kube-system/system::leader-locking-kube-scheduler/registry/roles/kube-system/system:controller:bootstrap-signer/registry/roles/kube-system/system:controller:cloud-provider/registry/roles/kube-system/system:controller:token-cleaner/registry/roles/kubernetes-dashboard/kubernetes-dashboard/registry/secrets/default/default-token-ddvdz/registry/secrets/kube-node-lease/default-token-7kpl4/registry/secrets/kube-public/default-token-wq894/registry/secrets/kube-system/attachdetach-controller-token-mflx8/registry/secrets/kube-system/certificate-controller-token-q85tw/registry/secrets/kube-system/clusterrole-aggregation-controller-token-72qkv/registry/secrets/kube-system/coredns-token-r6jnw/registry/secrets/kube-system/cronjob-controller-token-tnphb/registry/secrets/kube-system/daemon-set-controller-token-dz5qp/registry/secrets/kube-system/default-token-65hrl/registry/secrets/kube-system/deployment-controller-token-5klk8/registry/secrets/kube-system/disruption-controller-token-jz2kp/registry/secrets/kube-system/endpoint-controller-token-q27vg/registry/secrets/kube-system/expand-controller-token-jr47v/registry/secrets/kube-system/flannel-token-2wjp4/registry/secrets/kube-system/generic-garbage-collector-token-96pbt/registry/secrets/kube-system/horizontal-pod-autoscaler-token-g7rmw/registry/secrets/kube-system/job-controller-token-9ktbt/registry/secrets/kube-system/namespace-controller-token-42ncg/registry/secrets/kube-system/node-controller-token-sb64t/registry/secrets/kube-system/persistent-volume-binder-token-gwpch/registry/secrets/kube-system/pod-garbage-collector-token-w4np7/registry/secrets/kube-system/pv-protection-controller-token-6x5wt/registry/secrets/kube-system/pvc-protection-controller-token-969b6/registry/secrets/kube-system/replicaset-controller-token-bvb2d/registry/secrets/kube-system/replication-controller-token-qgsnj/registry/secrets/kube-system/resourcequota-controller-token-bhth8/registry/secrets/kube-system/service-account-controller-token-4ltvx/registry/secrets/kube-system/service-controller-token-gk5h9/registry/secrets/kube-system/statefulset-controller-token-kmv7q/registry/secrets/kube-system/ttl-controller-token-k4rjd/registry/secrets/kubernetes-dashboard/admin-user-token-x4fpc/registry/secrets/kubernetes-dashboard/default-token-xcv2x/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-certs/registry/secrets/k
ubernetes-dashboard/kubernetes-dashboard-csrf/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-key-holder/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-token-bsxzt/registry/serviceaccounts/default/default/registry/serviceaccounts/kube-node-lease/default/registry/serviceaccounts/kube-public/default/registry/serviceaccounts/kube-system/attachdetach-controller/registry/serviceaccounts/kube-system/certificate-controller/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller/registry/serviceaccounts/kube-system/coredns/registry/serviceaccounts/kube-system/cronjob-controller/registry/serviceaccounts/kube-system/daemon-set-controller/registry/serviceaccounts/kube-system/default/registry/serviceaccounts/kube-system/deployment-controller/registry/serviceaccounts/kube-system/disruption-controller/registry/serviceaccounts/kube-system/endpoint-controller/registry/serviceaccounts/kube-system/expand-controller/registry/serviceaccounts/kube-system/flannel/registry/serviceaccounts/kube-system/generic-garbage-collector/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler/registry/serviceaccounts/kube-system/job-controller/registry/serviceaccounts/kube-system/namespace-controller/registry/serviceaccounts/kube-system/node-controller/registry/serviceaccounts/kube-system/persistent-volume-binder/registry/serviceaccounts/kube-system/pod-garbage-collector/registry/serviceaccounts/kube-system/pv-protection-controller/registry/serviceaccounts/kube-system/pvc-protection-controller/registry/serviceaccounts/kube-system/replicaset-controller/registry/serviceaccounts/kube-system/replication-controller/registry/serviceaccounts/kube-system/resourcequota-controller/registry/serviceaccounts/kube-system/service-account-controller/registry/serviceaccounts/kube-system/service-controller/registry/serviceaccounts/kube-system/statefulset-controller/registry/serviceaccounts/kube-system/ttl-controller/registry/serviceaccounts/kubernetes-dashboard/admin-user/registry/serviceaccounts/kubernetes-dashboard/default/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard/registry/services/endpoints/default/jack-nginx-service/registry/services/endpoints/default/kubernetes/registry/services/endpoints/kube-system/kube-controller-manager/registry/services/endpoints/kube-system/kube-dns/registry/services/endpoints/kube-system/kube-scheduler/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard/registry/services/specs/default/jack-nginx-service/registry/services/specs/default/kubernetes/registry/services/specs/kube-system/kube-dns/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper/registry/services/specs/kubernetes-dashboard/kubernetes-dashboardroot@etcd3:/var/lib/etcd#

1.5.4 etcd CRUD operations

# add data
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl put /testkey "test for linux"
OK
# read data
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl get /testkey
/testkey
test for linux
# update data
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl put /testkey "test for linux202008"
OK
# read data again: it has changed
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl get /testkey
/testkey
test for linux202008
# delete data
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl del /testkey
1
# read data (nothing left)
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl get /testkey
root@etcd3:/var/lib/etcd# 

1.5.5 The etcd watch mechanism

  • The watch mechanism continuously observes data and proactively notifies the client when it changes. etcd v3 can watch a single fixed key and can also watch a key range.
    The main changes in etcd v3 compared with v2:

  • The interface is exposed as RPC over gRPC, abandoning v2's HTTP interface; the advantage is a clear efficiency gain from long-lived connections, the drawback is that it is less convenient to use, especially where maintaining long-lived connections is awkward.
    The old directory structure is gone; storage is now a flat key-value space, and users can emulate directories with prefix matching.
    Values are no longer kept in memory, so the same amount of memory can hold more keys.
    The watch mechanism is more stable; data can essentially be fully synchronized through watches alone.
    Batch operations and a transaction mechanism were added; users can implement etcd v2's CAS semantics with transactional batch requests (transactions support if-conditions).

  • Watch test:

# add data on etcd2
root@etcd2:~#
root@etcd2:~# ETCDCTL_API=3 /usr/bin/etcdctl put /testkey "test for data"
OK
root@etcd2:~#
# watch on etcd3
root@etcd3:/var/lib/etcd# ETCDCTL_API=3 /usr/bin/etcdctl watch /testkey
PUT
/testkey
test for data

1.5.6 etcd data backup and restore

  • WAL stands for write-ahead log: a log entry is written before the actual write operation is performed.
    wal: holds the write-ahead log, whose main role is to record the entire history of data changes. In etcd, every data modification must be written to the WAL before it is committed.

1.5.7 Backing up and restoring etcd v3 data

  • Backup
root@etcd1:/tmp# ETCDCTL_API=3 etcdctl snapshot save snapshot-0806.db
{"level":"info","ts":1596709497.3054078,"caller":"snapshot/v3_snapshot.go:110","msg":"created temporary db file","path":"snapshot-0806.db.part"}
{"level":"warn","ts":"2020-08-06T18:24:57.307+0800","caller":"clientv3/retry_interceptor.go:116","msg":"retry stream intercept"}
{"level":"info","ts":1596709497.3073182,"caller":"snapshot/v3_snapshot.go:121","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":1596709497.3965096,"caller":"snapshot/v3_snapshot.go:134","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","took":0.090895066}
{"level":"info","ts":1596709497.396825,"caller":"snapshot/v3_snapshot.go:143","msg":"saved","path":"snapshot-0806.db"}
Snapshot saved at snapshot-0806.db
# restore into a new directory
root@etcd1:/tmp# ETCDCTL_API=3 etcdctl snapshot restore snapshot-0806.db --data-dir=/opt/test
{"level":"info","ts":1596709521.3448675,"caller":"snapshot/v3_snapshot.go:287","msg":"restoring snapshot","path":"snapshot-0806.db","wal-dir":"/opt/test/member/wal","data-dir":"/opt/test","snap-dir":"/opt/test/member/snap"}
{"level":"info","ts":1596709521.418283,"caller":"mvcc/kvstore.go:378","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1373577}
{"level":"info","ts":1596709521.4332154,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1596709521.4604409,"caller":"snapshot/v3_snapshot.go:300","msg":"restored snapshot","path":"snapshot-0806.db","wal-dir":"/opt/test/member/wal","data-dir":"/opt/test","snap-dir":"/opt/test/member/snap"}#验证
root@etcd1:/tmp# cd /opt/test/member/wal/
root@etcd1:/opt/test/member/wal# ll
total 62508
drwx------ 2 root root     4096 Aug  6 18:25 ./
drwx------ 4 root root     4096 Aug  6 18:25 ../
-rw------- 1 root root 64000000 Aug  6 18:25 0000000000000000-0000000000000000.wal
root@etcd1:/opt/test/member/wal# 

1.5.8 Automated backups

  • Back up automatically with a script; a cron job can take a backup every 6 hours (a sketch of such a script follows the run below)
root@etcd1:/data# bash etcd_backup.sh
{"level":"info","ts":1596710073.3407273,"caller":"snapshot/v3_snapshot.go:110","msg":"created temporary db file","path":"/data/etcd-backup/etcdsnapshot-2020-08-06_18-34-33.db.part"}
{"level":"warn","ts":"2020-08-06T18:34:33.343+0800","caller":"clientv3/retry_interceptor.go:116","msg":"retry stream intercept"}
{"level":"info","ts":1596710073.3440814,"caller":"snapshot/v3_snapshot.go:121","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":1596710073.4372525,"caller":"snapshot/v3_snapshot.go:134","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","took":0.096120761}
{"level":"info","ts":1596710073.4379556,"caller":"snapshot/v3_snapshot.go:143","msg":"saved","path":"/data/etcd-backup/etcdsnapshot-2020-08-06_18-34-33.db"}
Snapshot saved at /data/etcd-backup/etcdsnapshot-2020-08-06_18-34-33.db
root@etcd1:/data# 
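The etcd_backup.sh script itself is not shown above; a minimal sketch of what it might look like, assuming only the /data/etcd-backup path visible in the output:

#!/bin/bash
# /data/etcd_backup.sh -- hypothetical sketch of the backup script used above
BACKUP_DIR=/data/etcd-backup
DATE=$(date +%Y-%m-%d_%H-%M-%S)
mkdir -p ${BACKUP_DIR}
ETCDCTL_API=3 /usr/bin/etcdctl snapshot save ${BACKUP_DIR}/etcdsnapshot-${DATE}.db
# keep only the last 7 days of snapshots
find ${BACKUP_DIR} -name 'etcdsnapshot-*.db' -mtime +7 -delete

# crontab entry for a backup every 6 hours:
# 0 */6 * * * /bin/bash /data/etcd_backup.sh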

2. flannel networking

2.1 The flannel network

  • Website: https://coreos.com/flannel/docs/latest/

  • Documentation: https://coreos.com/flannel/docs/latest/kubernetes.html

  • flannel is a network service for Kubernetes open-sourced by CoreOS. Its purpose is to let Pods on different hosts in a k8s cluster communicate with each other. It relies on etcd to maintain the network IP address allocation and assigns a different IP subnet to every node.
    Flannel network models (backends): flannel currently has three implementations, UDP/VXLAN/host-gw (a check follows this list):

# UDP: early flannel versions used UDP encapsulation to forward packets between hosts; its security and performance are somewhat lacking.
# VXLAN: the Linux kernel added VXLAN support in v3.7.0 at the end of 2012, so newer flannel versions moved from UDP to VXLAN. VXLAN is essentially a tunnel protocol that builds a virtual layer-2 network on top of a layer-3 network; flannel's current network model is a VXLAN-based overlay network.
# Host-gw: Host Gateway; packets are forwarded via routes created on each node towards the destination container subnets, so this mode requires all nodes to be in the same LAN (layer-2 network). It is therefore unsuitable for frequently changing or fairly large networks, but its performance is better.
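Which backend a running cluster uses can be confirmed from the flannel ConfigMap (the kube-flannel-cfg name appears in the etcd dump above; a sketch):

root@master1:~# kubectl get configmap kube-flannel-cfg -n kube-system -o yaml | grep -A4 Backend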

2.1.1 Flannel components explained

  • cni0: a bridge device. Every time a Pod is created, a veth pair is created as well; one end is eth0 inside the Pod, the other end is a port (interface) on the cni0 bridge. All traffic leaving the Pod through eth0 arrives at that port on cni0. The IP address on the cni0 device is the first address of the subnet assigned to this node.

  • flannel.1: the overlay network device. It handles VXLAN packet processing (encapsulation and decapsulation); Pod traffic between different nodes is sent through this overlay device as a tunnel to the peer node (both devices are inspected below).
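Both devices can be inspected on a node with standard iproute2/bridge tooling (a sketch):

root@node1:~# ip -d link show flannel.1        # VXLAN details: VNI, UDP port, local VTEP address
root@node1:~# ip link show master cni0         # veth ports attached to the bridge, one per Pod on this node
root@node1:~# bridge fdb show dev flannel.1    # forwarding entries flannel programs for remote VTEPs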

2.1.2 VXLAN configuration

2.1.2.1 node1 subnet information: 10.20.2.1/24

root@node1:/run/flannel# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.20.0.0/16
FLANNEL_SUBNET=10.20.2.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

2.2 Host routes on node1

  • cni0 is 10.20.2.0/24
root@node1:/run/flannel# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.62.1     0.0.0.0         UG    0      0        0 eth0
10.20.0.0       10.20.0.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.1.0       10.20.1.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.2.0       0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.20.3.0       10.20.3.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.4.0       10.20.4.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.5.0       10.20.5.0       255.255.255.0   UG    0      0        0 flannel.1
172.16.62.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
# inspect the cni0 interface
root@node1:/run/flannel# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.20.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether ae:a3:87:c4:bd:84  txqueuelen 1000  (Ethernet)
        RX packets 204920  bytes 17820806 (17.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 226477  bytes 23443847 (23.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

2.3 CNI information on node1

root@node1:/run/flannel# cat /var/lib/cni/flannel/2f649aea6ca393a45663bc9591f0d086714c0e34d445c2d4bb996e1c7aafd6d5
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"routes":[{"dst":"10.20.0.0/16"}],"subnet":"10.20.2.0/24","type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}root@node1:/run/flannel# 

2.4 Verifying cross-host Pod communication

# list the pods
root@master1:/etc/ansible/roles/flannel/templates# kubectl get pod -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
default                busybox                                      1/1     Running   36         4d9h    10.20.3.4       172.16.62.208   <none>           <none>
default                net-test1-5fcc69db59-9mr5d                   1/1     Running   4          4d8h    10.20.2.2       172.16.62.207   <none>           <none>
default                net-test1-5fcc69db59-dqrf8                   1/1     Running   2          4d8h    10.20.3.3       172.16.62.208   <none>           <none>
default                net-test1-5fcc69db59-mbt9f                   1/1     Running   2          4d8h    10.20.3.2       172.16.62.208   <none>           <none>
default                net-test2-8456fd74f7-229tw                   1/1     Running   6          4d6h    10.20.5.4       172.16.62.209   <none>           <none>
default                net-test2-8456fd74f7-r8d2d                   1/1     Running   3          4d6h    10.20.2.3       172.16.62.207   <none>           <none>
default                net-test2-8456fd74f7-vxnsk                   1/1     Running   6          4d6h    10.20.5.2       172.16.62.209   <none>           <none>
default                net-test3-59c6947667-jjf4n                   1/1     Running   2          4d4h    10.20.2.4       172.16.62.207   <none>           <none>
default                net-test3-59c6947667-ll4tm                   1/1     Running   2          4d4h    10.20.5.5       172.16.62.209   <none>           <none>
default                net-test3-59c6947667-pg7x8                   1/1     Running   2          4d4h    10.20.2.6       172.16.62.207   <none>           <none>
default                nginx-deployment-795b7c6c68-zgtzj            1/1     Running   1          2d23h   10.20.5.21      172.16.62.209   <none>           <none>
kube-system            coredns-cb9d89598-gfqw5                      1/1     Running   0          4d3h    10.20.3.6       172.16.62.208   <none>           <none>
# exec into the pod on node2
root@master1:/etc/ansible/roles/flannel/templates# kubectl exec -it net-test1-5fcc69db59-dqrf8 sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 06:F8:6C:28:AA:05
          inet addr:10.20.3.3  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1154 (1.1 KiB)  TX bytes:224 (224.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

# ping a pod on node1
/ # ping 10.20.2.2
PING 10.20.2.2 (10.20.2.2): 56 data bytes
64 bytes from 10.20.2.2: seq=0 ttl=62 time=1.197 ms
64 bytes from 10.20.2.2: seq=1 ttl=62 time=0.627 ms
^C
--- 10.20.2.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.627/0.912/1.197 ms
/ # traceroute 10.20.2.2
traceroute to 10.20.2.2 (10.20.2.2), 30 hops max, 46 byte packets
 1  10.20.3.1 (10.20.3.1)  0.019 ms  0.011 ms  0.008 ms
 2  10.20.2.0 (10.20.2.0)  0.391 ms  0.718 ms  0.267 ms
 3  10.20.2.2 (10.20.2.2)  0.343 ms  0.681 ms  0.417 ms
/ # traceroute 223.6.6.6
traceroute to 223.6.6.6 (223.6.6.6), 30 hops max, 46 byte packets
 1  10.20.3.1 (10.20.3.1)  0.015 ms  0.056 ms  0.008 ms
 2  172.16.62.1 (172.16.62.1)  0.321 ms  0.229 ms  0.189 ms
 3  *  *  *

2.5 vxlan + DirectRouting

  • DirectRouting enables direct routing between nodes that sit on the same layer-2 network, similar to host-gw mode.
  • Modify flannel to enable DirectRouting
root@master1:/etc/ansible/roles/flannel/defaults# more main.yml
# partial flannel configuration, see docs/setup/network-plugin/flannel.md
# set the flannel backend
#FLANNEL_BACKEND: "host-gw"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: true   # changed to true
#flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flanneld_image: "easzlab/flannel:v0.11.0-amd64"
# offline image tarball
flannel_offline: "flannel_v0.11.0-amd64.tar"
root@master1:/etc/ansible/roles/flannel/defaults# 

2.6 Reinstalling the network plugin

  • The nodes need to be rebooted after the installation completes
root@master1:/etc/ansible# ansible-playbook 06.network.yml 

2.7 vxlan + DirectRouting test

# exec into the container on node2
root@master1:~# kubectl exec -it net-test1-5fcc69db59-dqrf8 sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:99:BB:82:5B:2B
          inet addr:10.20.3.9  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:126 (126.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
# test the network
/ # ping 10.20.2.8
PING 10.20.2.8 (10.20.2.8): 56 data bytes
64 bytes from 10.20.2.8: seq=0 ttl=62 time=1.709 ms
64 bytes from 10.20.2.8: seq=1 ttl=62 time=0.610 ms
# traceroute to a pod on another node
- The path no longer goes through flannel.1; it reaches the other node's eth0 network directly
/ # traceroute 10.20.2.8
traceroute to 10.20.2.8 (10.20.2.8), 30 hops max, 46 byte packets
 1  10.20.3.1 (10.20.3.1)  0.014 ms  0.011 ms  0.008 ms
 2  172.16.62.207 (172.16.62.207)  0.424 ms  0.415 ms  0.222 ms
 3  10.20.2.8 (10.20.2.8)  0.242 ms  0.329 ms  0.280 ms

2.8 Host route comparison

  • Host routes before the change
root@node1:/run/flannel# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.62.1     0.0.0.0         UG    0      0        0 eth0
10.20.0.0       10.20.0.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.1.0       10.20.1.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.2.0       0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.20.3.0       10.20.3.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.4.0       10.20.4.0       255.255.255.0   UG    0      0        0 flannel.1
10.20.5.0       10.20.5.0       255.255.255.0   UG    0      0        0 flannel.1
172.16.62.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
  • Host routes after the change
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.62.1     0.0.0.0         UG    0      0        0 eth0
10.20.0.0       172.16.62.202   255.255.255.0   UG    0      0        0 eth0
10.20.1.0       172.16.62.201   255.255.255.0   UG    0      0        0 eth0
10.20.2.0       0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.20.3.0       172.16.62.208   255.255.255.0   UG    0      0        0 eth0
10.20.4.0       172.16.62.203   255.255.255.0   UG    0      0        0 eth0
10.20.5.0       172.16.62.209   255.255.255.0   UG    0      0        0 eth0
172.16.62.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
root@node1:~# 

3. Nginx + Tomcat + NFS for static/dynamic content separation

  • Environment

Role          Hostname                       IP              Remark
k8s-master1   kubeadm-master1.haostack.com   172.16.62.201
k8s-master2   kubeadm-master2.haostack.com   172.16.62.202
k8s-master3   kubeadm-master3.haostack.com   172.16.62.203
ha1           ha1.haostack.com               172.16.62.204
ha2           ha2.haostack.com               172.16.62.205
node1         node1.haostack.com             172.16.62.207
node2         node2.haostack.com             172.16.62.208
node3         node3.haostack.com             172.16.62.209
etc1          etc1.haostack.com              172.16.62.210
etc2          etc2.haostack.com              172.16.62.211
etc3          etc3.haostack.com              172.16.62.212
harbor        harbor.haostack.com            172.16.62.26
dns           haostack.com                   172.16.62.24
NFS           haostack.com                   172.16.62.24

3.1 Building the CentOS base image

3.1.1 CentOS base image Dockerfile

root@master1:/data/web/centos# cat Dockerfile
# custom CentOS base image
FROM harbor.haostack.com/official/centos:7.8.2003
MAINTAINER Jack.liu <jack_liu@qq.com>

ADD filebeat-7.6.1-x86_64.rpm /tmp
RUN yum install -y /tmp/filebeat-7.6.1-x86_64.rpm vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop && rm -rf /etc/localtime /tmp/filebeat-7.6.1-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd nginx -u 2019 && useradd www -u 2020
# build and push the CentOS image
root@master1:/data/web/centos# cat build-command.sh
#!/bin/bash
docker build -t  harbor.haostack.com/k8s/jack_k8s_base-centos:v1 .
sleep 3
docker push harbor.haostack.com/k8s/jack_k8s_base-centos:v1
root@master1:/data/web/centos#
  • Files
root@master1:/data/web/centos# tree
.
├── build-command.sh
├── Dockerfile
└── filebeat-7.6.1-x86_64.rpm

0 directories, 3 files
root@master1:/data/web/centos#

3.2 Building the Nginx base image

root@master1:/data/web/nginx-base# cat Dockerfile
# Nginx base image
FROM harbor.haostack.com/k8s/jack_k8s_base-centos:v1
MAINTAINER jack liu<jack_liu@qq.com>

RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.14.2.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.14.2 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx && rm -rf /usr/local/src/nginx-1.14.2.tar.gz
root@master1:/data/web/nginx-base# 
  • Files
root@master1:/data/web/nginx-base# tree
.
├── build-command.sh
├── Dockerfile
└── nginx-1.14.2.tar.gz

0 directories, 3 files

3.3 Building the Nginx application image

root@master1:/data/web/nginx-web1# cat Dockerfile
# custom Nginx application image
FROM harbor.haostack.com/k8s/jack_k8s_base-nginx:v1
MAINTAINER Jack.liu <jack_liu@qq.com>

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz /usr/local/nginx/html/webapp/
ADD index.html /usr/local/nginx/html/index.html
# mount points for static resources
RUN mkdir -p /usr/local/nginx/html/webapp/images /usr/local/nginx/html/webapp/static

EXPOSE 80 443
CMD ["/usr/sbin/nginx"]
root@master1:/data/web/nginx-web1
  • Files
root@master1:/data/web/nginx-web1# tree
.
├── app1.tar.gz
├── build-command.sh
├── Dockerfile
├── index.html
├── nginx.conf
├── nginx.yaml
├── ns-uat.yaml
└── webapp

1 directory, 7 files

3.3.1 Creating the nginx Pod

3.3.1.1 Creating the namespace

root@master1:/data/web/nginx-web1# cat ns-uat.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-uat
root@master1:/data/web/nginx-web1#

3.3.1.2 Creating the nginx Pod

  • kubectl apply -f nginx.yaml
root@master1:/data/web/nginx-web1# cat nginx.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: uat-nginx-deployment-label
  name: uat-nginx-deployment
  namespace: ns-uat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uat-nginx-selector
  template:
    metadata:
      labels:
        app: uat-nginx-selector
    spec:
      containers:
      - name: uat-nginx-container
        image: harbor.haostack.com/k8s/jack_k8s_nginx-web1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: volume-nginx-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: volume-nginx-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: volume-nginx-images
        nfs:
          server: 172.16.62.24
          path: /nfsdata/k8s/images
      - name: volume-nginx-static
        nfs:
          server: 172.16.62.24
          path: /nfsdata/k8s/static
      #nodeSelector:
      #  group: magedu
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: uat-nginx-service-label
  name: uat-nginx-service
  namespace: ns-uat
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30016
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30443
  selector:
    app: uat-nginx-selector
root@master1:/data/web/nginx-web1#

3.4 NFS server

  • 172.16.62.24
[root@node24 ~]# cat /etc/exports
/nfsdata/node11 172.16.62.*(rw,sync,no_root_squash)
/nfsdata/node12 172.16.62.*(rw,sync,no_root_squash)
/nfsdata/node13 172.16.62.*(rw,sync,no_root_squash)
/nfsdata/harbor25 172.16.62.*(rw,sync,no_root_squash)
/nfsdata/harbor26 172.16.62.*(rw,sync,no_root_squash)
/nfsdata/k8s *(rw,sync,no_root_squash)
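Before the Pods mount these shares, they can be verified from a node (a sketch; requires the NFS client utilities on the node):

root@node1:~# showmount -e 172.16.62.24                                    # list the exports offered by the NFS server
root@node1:~# mount -t nfs 172.16.62.24:/nfsdata/k8s /mnt && ls /mnt && umount /mnt   # manual test mount of the share used by the nginx Pod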

3.5 HAProxy configuration

listen uat-nginx-80
  bind 172.16.62.191:80
  mode tcp
  balance roundrobin
  server node1 172.16.62.207:30016 check inter 3s fall 3 rise 5
  server node2 172.16.62.208:30016 check inter 3s fall 3 rise 5
  server node3 172.16.62.209:30016 check inter 3s fall 3 rise 5
root@ha1:/etc/haproxy# 

3.6 Testing nginx

# nginx default page
[root@node24 ~]# curl http://172.16.62.191
k8s lab  nginx web v1
# nginx webapp page
[root@node24 ~]# curl http://172.16.62.191/webapp/index.html
webapp nginx v1
[root@node24 ~]#

3.7 Building the JDK base image

3.7.1 JDK Dockerfile

root@master1:/data/web/jdk-1.8.212# more Dockerfile
# JDK base image
FROM harbor.haostack.com/k8s/jack_k8s_base-centos:v1
MAINTAINER jack liu<jack_liu@qq.com>

ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
root@master1:/data/web/jdk-1.8.212#
  • Files
root@master1:/data/web/jdk-1.8.212# tree
.
├── build-command.sh
├── Dockerfile
├── jdk-8u212-linux-x64.tar.gz
└── profile

0 directories, 4 files
root@master1:/data/web/jdk-1.8.212#

3.8 Building the Tomcat base image

root@master1:/data/web/tomcat-base# cat Dockerfile
# Tomcat 8.5.43 base image
FROM harbor.haostack.com/k8s/jack_k8s_base-jdk:v8.212
MAINTAINER jack liu<jack_liu@qq.com>

RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz  /apps
RUN useradd tomcat -u 2021 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data -R
root@master1:/data/web/tomcat-base# 
  • Files
root@master1:/data/web/tomcat-base#  tree
.
├── apache-tomcat-8.5.43.tar.gz
├── build-command.sh
└── Dockerfile

0 directories, 3 files
root@master1:/data/web/tomcat-base#

3.9 Building the tomcat-app1 image

3.9.1 Dockerfile

root@master1:/data/web/tomcat-app1# cat Dockerfile
# tomcat-app1
FROM harbor.haostack.com/k8s/jack_k8s_base-tomcat:v8.5.43

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R tomcat.tomcat /data/ /apps/

EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
root@master1:/data/web/tomcat-app1# 

3.9.2 Startup script run_tomcat.sh

root@master1:/data/web/tomcat-app1# more run_tomcat.sh
#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
root@master1:/data/web/tomcat-app1#

3.9.3 server.xml

  • The application base path (appBase) needs to be changed to the project path
root@master1:/data/web/tomcat-app1# more server.xml
<?xml version='1.0' encoding='utf-8'?>
<!--Licensed to the Apache Software Foundation (ASF) under one or morecontributor license agreements.  See the NOTICE file distributed withthis work for additional information regarding copyright ownership.The ASF licenses this file to You under the Apache License, Version 2.0(the "License"); you may not use this file except in compliance withthe License.  You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may notdefine subcomponents such as "Valves" at this level.Documentation at /docs/config/server.html-->
<Server port="8005" shutdown="SHUTDOWN"><Listener className="org.apache.catalina.startup.VersionLoggerListener" /><!-- Security listener. Documentation at /docs/config/listeners.html<Listener className="org.apache.catalina.security.SecurityListener" />--><!--APR library loader. Documentation at /docs/apr.html --><Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /><!-- Prevent memory leaks due to use of particular java/javax APIs--><Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /><Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /><Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" /><!-- Global JNDI resourcesDocumentation at /docs/jndi-resources-howto.html--><GlobalNamingResources><!-- Editable user database that can also be used byUserDatabaseRealm to authenticate users--><Resource name="UserDatabase" auth="Container"type="org.apache.catalina.UserDatabase"description="User database that can be updated and saved"factory="org.apache.catalina.users.MemoryUserDatabaseFactory"pathname="conf/tomcat-users.xml" /></GlobalNamingResources><!-- A "Service" is a collection of one or more "Connectors" that sharea single "Container" Note:  A "Service" is not itself a "Container",so you may not define subcomponents such as "Valves" at this level.Documentation at /docs/config/service.html--><Service name="Catalina"><!--The connectors can use a shared executor, you can define one or more named thread pools--><!--<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"maxThreads="150" minSpareThreads="4"/>--><!-- A "Connector" represents an endpoint by which requests are receivedand responses are returned. Documentation at :Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)Java AJP  Connector: /docs/config/ajp.htmlAPR (HTTP/AJP) Connector: /docs/apr.htmlDefine a non-SSL/TLS HTTP/1.1 Connector on port 8080--><Connector port="8080" protocol="HTTP/1.1"connectionTimeout="20000"redirectPort="8443" /><!-- A "Connector" using the shared thread pool--><!--<Connector executor="tomcatThreadPool"port="8080" protocol="HTTP/1.1"connectionTimeout="20000"redirectPort="8443" />--><!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443This connector uses the NIO implementation that requires the JSSEstyle configuration. When using the APR/native implementation, theOpenSSL style configuration is required as described in the APR/nativedocumentation --><!--<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"maxThreads="150" SSLEnabled="true" scheme="https" secure="true"clientAuth="false" sslProtocol="TLS" />--><!-- Define an AJP 1.3 Connector on port 8009 --><Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /><!-- An Engine represents the entry point (within Catalina) that processesevery request.  
The Engine implementation for Tomcat stand aloneanalyzes the HTTP headers included with the request, and passes themon to the appropriate Host (virtual host).Documentation at /docs/config/engine.html --><!-- You should set jvmRoute to support load-balancing via AJP ie :<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">--><Engine name="Catalina" defaultHost="localhost"><!--For clustering, please take a look at documentation at:/docs/cluster-howto.html  (simple how to)/docs/config/cluster.html (reference documentation) --><!--<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>--><!-- Use the LockOutRealm to prevent attempts to guess user passwordsvia a brute-force attack --><Realm className="org.apache.catalina.realm.LockOutRealm"><!-- This Realm uses the UserDatabase configured in the global JNDIresources under the key "UserDatabase".  Any editsthat are performed against this UserDatabase are immediatelyavailable for use by the Realm.  --><Realm className="org.apache.catalina.realm.UserDatabaseRealm"resourceName="UserDatabase"/></Realm><Host name="localhost"  appBase="/data/tomcat/webapps"  unpackWARs="true" autoDeploy="true"><!-- SingleSignOn valve, share authentication between web applicationsDocumentation at: /docs/config/valve.html --><!--<Valve className="org.apache.catalina.authenticator.SingleSignOn" />--><!-- Access log processes all example.Documentation at: /docs/config/valve.htmlNote: The pattern used is equivalent to using pattern="common" --><Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"prefix="localhost_access_log" suffix=".txt"pattern="%h %l %u %t &quot;%r&quot; %s %b" /></Host></Engine></Service>
</Server>
root@master1:/data/web/tomcat-app1# 

3.9.4 YAML file

root@master1:/data/web/tomcat-app1# more tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: uat-tomcat-app1-deployment-label
  name: uat-tomcat-app1-deployment
  namespace: ns-uat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uat-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: uat-tomcat-app1-selector
    spec:
      containers:
      - name: uat-tomcat-app1-container
        image: harbor.haostack.com/k8s/jack_k8s_tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: uat-tomcat-app1-service-label
  name: uat-tomcat-app1-service
  namespace: ns-uat
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30017
  selector:
    app: uat-tomcat-app1-selector
root@master1:/data/web/tomcat-app1# 

3.10 Nginx + Tomcat + NFS static/dynamic content separation

3.10.1 nginx.conf: set the upstream server to the Service name so requests are forwarded

  • Service name: uat-tomcat-app1-service.ns-uat.svc.haostack.com
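Before wiring this name into nginx.conf, it can be checked that it resolves inside the cluster (a sketch; the haostack.com cluster domain matches this environment's CoreDNS setup, and the net-test pod is assumed to contain nslookup):

root@master1:~# kubectl exec -it net-test1-5fcc69db59-9mr5d -- nslookup uat-tomcat-app1-service.ns-uat.svc.haostack.com   # should resolve to the Service Cluster IP
root@master1:~# kubectl get svc -n ns-uat uat-tomcat-app1-service                                                         # the Cluster IP and NodePort of that Service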
root@master1:/data/web/nginx-web1# cat nginx.conf
user  nginx nginx;
worker_processes  auto;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;
daemon off;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream  tomcat_webserver {
        server   uat-tomcat-app1-service.ns-uat.svc.haostack.com:80;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /webapp {
            root   html;
            index  index.html index.htm;
        }

        location /myapp {
            proxy_pass  http://tomcat_webserver;
            proxy_set_header   Host    $host;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Real-IP $remote_addr;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
root@master1:/data/web/nginx-web1#

3.10.2 Dockerfile

3.10.3 nginx.yaml

root@master1:/data/web/nginx-web1# cat nginx.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: uat-nginx-deployment-label
  name: uat-nginx-deployment
  namespace: ns-uat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uat-nginx-selector
  template:
    metadata:
      labels:
        app: uat-nginx-selector
    spec:
      containers:
      - name: uat-nginx-container
        image: harbor.haostack.com/k8s/jack_k8s_nginx-web1:v2
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: volume-nginx-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: volume-nginx-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: volume-nginx-images
        nfs:
          server: 172.16.62.24
          path: /nfsdata/k8s/images
      - name: volume-nginx-static
        nfs:
          server: 172.16.62.24
          path: /nfsdata/k8s/static
      #nodeSelector:
      #  group: magedu
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: uat-nginx-service-label
  name: uat-nginx-service
  namespace: ns-uat
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30016
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30443
  selector:
    app: uat-nginx-selector
root@master1:/data/web/nginx-web1#

3.11 Testing

  • The proxy on 172.16.62.191:80 is configured on HAProxy
  • Test the nginx default page
[root@node24 ~]# curl http://172.16.62.191
k8s lab  nginx web v1
  • Test the nginx webapp
[root@node24 ~]# curl http://172.16.62.191/webapp/
webapp nginx v1
  • Test forwarding from nginx to the Tomcat page
[root@node24 ~]# curl http://172.16.62.191/myapp/index.html
k8s lab  tomcat app1 v1
[root@node24 ~]#
