1. Prerequisites for enabling ipvs in kube-proxy
By default, kube-proxy runs in iptables mode in a cluster deployed with kubeadm. Note that on kernels 4.19 and later the nf_conntrack_ipv4 module has been removed; Kubernetes recommends loading nf_conntrack instead, otherwise module loading fails with an error that nf_conntrack_ipv4 cannot be found. kube-proxy implements the forwarding from a Service (svc) to its Pods, and the ipvs (LVS-based) scheduling mode greatly improves forwarding efficiency, so switching kube-proxy to ipvs mode is the approach we use here.
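
A minimal sketch of choosing the right conntrack module from the kernel version (the version parsing here is an illustrative assumption; adjust it for your distribution):

# Load nf_conntrack on kernels >= 4.19 (where nf_conntrack_ipv4 was removed), otherwise nf_conntrack_ipv4
major=$(uname -r | cut -d. -f1); minor=$(uname -r | cut -d. -f2)
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
    modprobe -- nf_conntrack
else
    modprobe -- nf_conntrack_ipv4
fi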

modprobe br_netfilter  # load the br_netfilter module
yum install -y ipset ipvsadm  # install the ipvs userspace tools
# Create a bootstrap file that loads the kernel modules ipvs depends on; note these are kernel-module dependencies, not rpm package dependencies.
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules  
lsmod | grep -e ip_vs -e nf_conntrack_ipv4  # use lsmod to confirm the modules are loaded (on kernels >= 4.19 grep for nf_conntrack instead)

[root@k8smaster yum]# modprobe br_netfilter
[root@k8smaster yum]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack
> EOF

[root@k8smaster yum]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8smaster yum]# bash /etc/sysconfig/modules/ipvs.modules 
[root@k8smaster yum]# lsmod |grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      20480  0 
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_conntrack          114688  3 ip_vs,xt_conntrack,nf_conntrack_ipv4
libcrc32c              16384  2 xfs,ip_vs
[root@k8smaster yum]#

2. Install Docker
# Install the dependencies
yum install yum-utils device-mapper-persistent-data lvm2 -y
# (to undo this step later, the same packages can be removed again)
# yum remove yum-utils device-mapper-persistent-data lvm2 -y

Add the Aliyun Docker CE yum repository
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce  # install Docker

Create the /etc/docker directory
[ ! -d /etc/docker ] && mkdir /etc/docker

Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

exec-opts: set the default cgroup driver to systemd. CentOS ships two cgroup drivers, cgroupfs and one managed by systemd; handing everything to systemd keeps the management uniform.
log-driver: store container logs as JSON files.
log-opts: cap each log file at 100 MB, so the per-container logs can later be found under /var/log/containers/ and searched through an EFK stack.

# Reload systemd, restart Docker and enable it at boot
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
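
An optional quick check that Docker came back up with the systemd cgroup driver configured above:

docker info | grep -i "cgroup driver"   # expected output: Cgroup Driver: systemd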

3. Install kubeadm (master and node setup)
Install kubelet, kubeadm and kubectl. kubelet runs on every node in the cluster and is responsible for starting Pods and containers. kubeadm is used to bootstrap the cluster. kubectl is the Kubernetes command-line tool; with it you can deploy and manage applications, inspect resources, and create, delete and update components.

# Add the Aliyun Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# yum would install the latest version by default; pin the versions to 1.15.1 here
yum install -y kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
systemctl enable kubelet && systemctl start kubelet
Because kubelet has to talk to the container runtime to start containers, and everything kubeadm installs runs as Pods (i.e. as containers underneath), kubelet must be enabled to start at boot; otherwise the Kubernetes cluster will not come back up after a reboot.
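
Until kubeadm init has run, kubelet keeps restarting because /var/lib/kubelet/config.yaml does not exist yet (see problem (2) in section 8); this is expected. A quick way to watch it:

systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 20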

4. Enable kubectl command auto-completion
# Install and configure bash-completion
yum install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
source /etc/profile
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
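
Optionally (an addition of my own, not part of the original steps) the completion can also be bound to a short alias:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc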

5. Initialize the master
kubeadm config print init-defaults prints the default configuration used for cluster initialization.
Generate the default kubeadm-config.yaml with the following command (kubernetesVersion must match the kubelet and kubectl versions installed above):
kubeadm config print init-defaults  > kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.23.100  # the master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 192.168.23.100:5000 # local private registry
kind: ClusterConfiguration
kubernetesVersion: v1.15.1 # the Kubernetes version being installed
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # the Pod network CIDR (note: this line must be added). The flannel overlay network installed later uses this subnet by default; if the two do not match, every Pod would have to be fixed up individually afterwards
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---  # switch kube-proxy from the default iptables mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

What the parts of kubeadm-config.yaml mean:
InitConfiguration: initialization settings such as the bootstrap token and the apiserver advertise address.
ClusterConfiguration: settings for the master components: apiserver, etcd, network, scheduler, controller-manager, etc.
KubeletConfiguration: settings for the kubelet component.
KubeProxyConfiguration: settings for the kube-proxy component.
As you can see, the default kubeadm-config.yaml contains only the InitConfiguration and ClusterConfiguration sections. Example files for the other two can be generated as follows:

# Generate a KubeletConfiguration example file
kubeadm config print init-defaults --component-configs KubeletConfiguration

# Generate a KubeProxyConfiguration example file
kubeadm config print init-defaults --component-configs KubeProxyConfiguration
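
Optionally (not part of the original steps), the control-plane images can be listed and pre-pulled from the registry configured above so that kubeadm init itself runs faster:

kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml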

# Initialize using the specified yaml file, automatically upload certificates (supported since 1.13), and write all output to kubeadm-init.log
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
--experimental-upload-certs is deprecated; upstream recommends --upload-certs instead, see the official release notes: https://v1-15.docs.kubernetes.io/docs/setup/release/notes/

[init] Using Kubernetes version: v1.15.1   # install log; the Kubernetes version is reported first
[preflight] Running pre-flight checks   # check the current environment
[preflight] Pulling images required for setting up a Kubernetes cluster  # pull the images the cluster needs
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'  # the images can also be pre-pulled
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" # kubelet environment variables are saved in /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" # the kubelet configuration is saved in /var/lib/kubelet/config.yaml
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"  # all certificates used by Kubernetes are stored in /etc/kubernetes/pki; the components are built in a client/server style over HTTP, and for security all component communication uses mutually authenticated HTTPS, so Kubernetes needs a large number of CA certificates and private keys
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.23.100 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.23.100 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.23.100] # DNS names and the default domain (the default names of the kubernetes Service)
[certs] Generating "apiserver-kubelet-client" certificate and key  # generate keys for the Kubernetes components
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"  # kubeconfig files for the components are generated under /etc/kubernetes
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.006263 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7daa5684ae8ff1835af697c3b3bd017c471adcf8bb7b28eee7e521b03694ef8c
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles # RBAC authorization
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!  # initialization succeeded

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dda2582a61959ec547f8156288a32fe2b1d04febeca03476ebd1b3a754244588 

Alternatively, initialize directly from the command line: kubeadm init --kubernetes-version=1.15.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.23.100 --ignore-preflight-errors=NumCPU --image-repository=192.168.23.100:5000
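
After initialization you can optionally confirm that kube-proxy really picked up the ipvs mode configured in section 1 (a sanity check, using the ipvsadm installed earlier):

ipvsadm -Ln                                                     # the virtual-server table should list the service addresses
kubectl -n kube-system get cm kube-proxy -o yaml | grep mode    # should show: mode: ipvs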

Check the health status of the components
[root@k8smaster ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8smaster ~]#

View node information
[root@k8smaster ~]# kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8smaster   NotReady   master   10h   v1.15.1
[root@k8smaster ~]# 
The status here is NotReady because no network plugin (such as flannel) has been installed yet. The flannel project lives at https://github.com/coreos/flannel; it is installed with the commands in section 6 below.
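
A quick way to confirm the cause (the exact wording of the condition message varies slightly between versions):

kubectl describe node k8smaster | grep -i -A 3 "Ready"   # the NotReady condition mentions the missing CNI network plugin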

Run the following command to list all running Pods in the kube-system namespace (the system-level Pods):
[root@k8smaster ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-75b6d67b6d-9zznq            0/1     Pending   0          11h
coredns-75b6d67b6d-r2hkz            0/1     Pending   0          11h
etcd-k8smaster                      1/1     Running   0          10h
kube-apiserver-k8smaster            1/1     Running   0          10h
kube-controller-manager-k8smaster   1/1     Running   0          10h
kube-proxy-5nrvf                    1/1     Running   0          11h
kube-scheduler-k8smaster            1/1     Running   0          10h
[root@k8smaster ~]#

Run the following command to list the cluster's namespaces:
[root@k8smaster ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
[root@k8smaster ~]#

6. Install the flannel network plugin

(1) Download the flannel YAML file
[root@k8smaster ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After downloading, change the image references to the local registry (replace quay.io with 192.168.23.100:5000 so the images are pulled from the private registry).
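
A minimal sketch of that substitution, assuming the flannel images have already been pushed to the private registry:

sed -i 's#quay.io#192.168.23.100:5000#g' kube-flannel.yml
grep 'image:' kube-flannel.yml   # verify the image references after the change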

(2) Create flannel
kubectl create -f kube-flannel.yml
Check whether flannel was deployed successfully (system components live in the kube-system namespace by default); the flannel interface can also be seen with ip addr.
[root@k8smaster test]# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-75b6d67b6d-9hmmd            1/1     Running   0          131m
coredns-75b6d67b6d-rf2q5            1/1     Running   0          131m
etcd-k8smaster                      1/1     Running   0          130m
kube-apiserver-k8smaster            1/1     Running   0          130m
kube-controller-manager-k8smaster   1/1     Running   0          130m
kube-flannel-ds-amd64-kvffl         1/1     Running   0          101m
kube-flannel-ds-amd64-trjfx         1/1     Running   0          105m
kube-proxy-5zkhj                    1/1     Running   0          131m
kube-proxy-h2r8g                    1/1     Running   0          101m
kube-scheduler-k8smaster            1/1     Running   0          130m
[root@k8smaster test]#

[root@k8smaster test]# ip addr
8: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noqueue state DOWN group default 
    link/ether 06:18:40:93:ec:ff brd ff:ff:ff:ff:ff:ff

[root@k8smaster ~]# kubectl get node  # the node status is now Ready
NAME        STATUS   ROLES    AGE    VERSION
k8smaster   Ready    master   134m   v1.15.1

7. Join the worker nodes to the master
Run the following command on each node machine (it can be found in kubeadm-init.log):
kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:dda2582a61959ec547f8156288a32fe2b1d04febeca03476ebd1b3a754244588
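
If the bootstrap token has expired (the default TTL is 24h), a new join command can be generated on the master at any time:

kubeadm token create --print-join-command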

[root@k8snode01 log]# kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:dda2582a61959ec547f8156288a32fe2b1d04febeca03476ebd1b3a754244588
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8snode01 log]#
[root@k8snode01 log]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
192.168.23.100:5000/kube-proxy       v1.15.1             89a062da739d        6 months ago        82.4MB
192.168.23.100:5000/coreos/flannel   v0.11.0-amd64       ff281650a721        12 months ago       52.5MB
192.168.23.100:5000/pause            3.1                 da86e6ba6ca1        2 years ago         742kB
[root@k8snode01 log]# docker ps
CONTAINER ID        IMAGE                            COMMAND                  CREATED              STATUS              PORTS               NAMES
ba9d285b313f        ff281650a721                     "/opt/bin/flanneld -…"   About a minute ago   Up About a minute                       k8s_kube-flannel_kube-flannel-ds-amd64-kvffl_kube-system_f7f3aa12-fd16-41fa-a577-559156d545d0_0
677fe835f591        192.168.23.100:5000/kube-proxy   "/usr/local/bin/kube…"   About a minute ago   Up About a minute                       k8s_kube-proxy_kube-proxy-h2r8g_kube-system_a13b5efa-3e14-40e2-b109-7f067ba6ad82_0
357321f007c9        192.168.23.100:5000/pause:3.1    "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-h2r8g_kube-system_a13b5efa-3e14-40e2-b109-7f067ba6ad82_0
01ab31239bfd        192.168.23.100:5000/pause:3.1    "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-flannel-ds-amd64-kvffl_kube-system_f7f3aa12-fd16-41fa-a577-559156d545d0_0
[root@k8snode01 log]#

[root@k8smaster ~]# kubectl get node  # the node has joined the cluster
NAME        STATUS   ROLES    AGE    VERSION
k8smaster   Ready    master   134m   v1.15.1
k8snode01   Ready    <none>   103m   v1.15.1
[root@k8smaster ~]#

8. Problems encountered during installation
(1) Image pull failures
The images are pulled from quay.io and gcr.io, which are blocked from within mainland China. They can be replaced with quay-mirror.qiniu.com and registry.aliyuncs.com respectively, and the downloaded images then re-tagged to the original names.
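
A hedged example of that mirror-pull-and-retag workflow (the image name and tag are illustrative; use whatever kubeadm config images list reports for your version):

docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1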

(2) The error that /var/lib/kubelet/config.yaml is not found can be ignored; the file is generated automatically during kubeadm init.
The following error appears in /var/log/messages:
Feb 11 05:17:44 k8smaster kubelet: F0211 05:17:44.750462    1547 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

(3) Installation fails when the VM has too little memory or CPU
The following error appears in /var/log/messages:
Feb 11 05:24:44 k8smaster kubelet: E0211 05:24:44.762078    2876 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "k8smaster" not found
This indicates insufficient memory; at least 2 GB of RAM is recommended.
