Requirements for a Local Cluster



  • Lightweight (a small download)

  • Fast startup (ideally docker-based rather than a VM)

  • Low resource usage (ideally docker-based rather than a VM, and a trimmed-down distribution that still provides full Kubernetes functionality)



I have used minikube: the VM is slow to start, and when I tried to download the latest version, the Aliyun mirror did not yet carry the newest image, so the cluster never came up. Very frustrating.



K3D, which is built on K3S, meets all of the requirements above.



K3S Overview - Lightweight Kubernetes



Lightweight Kubernetes. Installation is simple, it uses half the memory, and everything is in a binary of less than 200 MB. The complete K3S image is this small:



REPOSITORY      TAG            IMAGE ID       CREATED        SIZE
rancher/k3s     v1.18.2-k3s1   e9f6bccce7de   6 months ago   151MB



On my machine, after installation (plus Traefik, the Kubernetes Dashboard, and one demo deployment), resource consumption was:



  • CPU: 0.3 cores

  • Memory: 1.2 GB





Good for:



  • Edge computing

  • IoT

  • CI

  • Development

  • ARM

  • Embedding K8s

  • Anyone who does not want to get bogged down in k8s operations



K3s is a fully conformant Kubernetes distribution with the following enhancements:



  • Packaged as a single binary.

  • A lightweight storage backend based on sqlite3 as the default datastore; etcd3, MySQL, and Postgres are also available.

  • Wrapped in a simple launcher that handles much of the TLS and option complexity.

  • Secure by default, with sensible defaults for lightweight environments.

  • Simple but powerful "batteries-included" features, such as a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.

  • All Kubernetes control-plane components run in a single binary and process, which lets K3s automate and manage complex cluster operations such as distributing certificates.

  • External dependencies are minimized (only a modern kernel and cgroup mounts are required); K3s packages the dependencies it needs.
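To give a feel for the single-binary packaging: on a real host, a K3s server can be installed with the official one-line script, and the same binary embeds kubectl. A minimal sketch (not part of the k3d flow used below; the install commands are shown as comments because they need root and network access):

```shell
# Official one-line install (downloads the k3s binary and sets it up as a service):
#   curl -sfL https://get.k3s.io | sh -
# The same binary embeds kubectl:
#   sudo k3s kubectl get nodes
# K3s writes its kubeconfig to a fixed path; point a standalone kubectl at it:
k3s_kubeconfig=/etc/rancher/k3s/k3s.yaml
echo "export KUBECONFIG=$k3s_kubeconfig"
```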



K3D - K3S in docker



k3d creates containerized k3s clusters. This means you can use docker to start a multi-node k3s cluster on a single machine.



K3D Quick Start



We will use k3d to build a k3s cluster. k3d is a tool for quickly building containerized k3s clusters: with Docker, it can start a multi-node k3s cluster on a single machine.
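For context, the quick-start script used below is essentially a wrapper around k3d v1 commands roughly like these (the cluster name edge and 3 workers match the script's log output; this is a sketch, not the script's actual contents):

```shell
# Roughly what the quick-start script does, in k3d v1 syntax:
cluster=edge
workers=3
echo "k3d create --name $cluster --workers $workers"
# Then point kubectl at the new cluster:
#   export KUBECONFIG="$(k3d get-kubeconfig --name=$cluster)"
#   kubectl cluster-info
```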



📝 Note:

My environment:

  • Windows 10 Pro, version 2004

  • WSL2 + Ubuntu 20.04 + Docker Desktop



  1. Run the following command to start a local k3s cluster with 3 worker nodes. (I got thoroughly sick of building clusters by hand; there is a ready-made official script you can use directly, and I have verified it works well from inside China.) Run as root:

curl -fL https://octopus-assets.oss-cn-beijing.aliyuncs.com/k3d/cluster-k3s-spinup.sh | bash -

⚠️ Note:

If the installation succeeds, you should see the following log line:

please input CTRL+C to stop the local cluster

To stop the K3S cluster, press CTRL+C.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13549  100 13549    0     0   6784      0  0:00:01  0:00:01 --:--:--  6781
[INFO] [1107 17:02:03] cleanup proxy config
[INFO] [1107 17:02:03] creating edge cluster with v1.18.2
[INFO] [1107 17:02:03] INGRESS_HTTP_PORT is 54836
[INFO] [1107 17:02:03] INGRESS_HTTPS_PORT is 54837
INFO[0000] Created cluster network with ID ba03de48d65b8e1fbef6ff03cbba0b9e9ad008e7cc81d67d8393c69272a1c4b9
INFO[0000] Add TLS SAN for 0.0.0.0
INFO[0000] Created docker volume  k3d-edge-images
INFO[0000] Creating cluster [edge]
INFO[0000] Creating server using docker.io/rancher/k3s:v1.18.2-k3s1...
INFO[0006] SUCCESS: created cluster [edge]
INFO[0006] You can now use the cluster with:
export KUBECONFIG="$(k3d get-kubeconfig --name='edge')"
kubectl cluster-info
[WARN] [1107 17:02:09] default kubeconfig has been backup in /root/.kube/config_k3d_bak
[INFO] [1107 17:02:09] edge cluster's kubeconfig wrote in /root/.kube/config now
[INFO] [1107 17:02:09] waiting node edge-control-plane for ready
INFO[0000] Adding 1 agent-nodes to k3d cluster edge...
INFO[0000] Created agent-node with ID 752aebb8f9bb1af1c5fcf62ff9313163c243835373872595f38de03004257514
[INFO] [1107 17:02:21] waiting node edge-worker for ready
INFO[0000] Adding 1 agent-nodes to k3d cluster edge...
INFO[0000] Created agent-node with ID 7d0aa70e24f387217d3094911a7c0f5fa2f504c1fe3e106b08d00f3a6b11158c
[INFO] [1107 17:02:34] waiting node edge-worker1 for ready
INFO[0000] Adding 1 agent-nodes to k3d cluster edge...
INFO[0000] Created agent-node with ID 7b880c8966f9b8b252c5385ee10167384d9517c87ff60763989b69f5c3f344ab
[INFO] [1107 17:02:47] waiting node edge-worker2 for ready
[WARN] [1107 17:02:59] please input CTRL+C to stop the local cluster
  2. Open a new terminal and set KUBECONFIG to access the local k3s cluster.

export KUBECONFIG="$(k3d get-kubeconfig --name='edge')"
kubectl cluster-info

The output looks like this:

Kubernetes master is running at https://0.0.0.0:54835
CoreDNS is running at https://0.0.0.0:54835/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:54835/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
  3. Run kubectl get node to check that the local k3s cluster's nodes are healthy:

# kubectl get node
NAME                 STATUS   ROLES    AGE     VERSION
edge-worker          Ready    <none>   3h17m   v1.18.2+k3s1
edge-worker2         Ready    <none>   3h17m   v1.18.2+k3s1
edge-control-plane   Ready    master   3h17m   v1.18.2+k3s1
edge-worker1         Ready    <none>   3h17m   v1.18.2+k3s1
  4. Run kubectl get pod -A to check that the local k3s cluster's pods are healthy (traefik is already deployed by default):

kubectl get pod -A
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-6h776          1/1     Running     0          3h18m
kube-system   local-path-provisioner-6d59f47c7-sz5tp   1/1     Running     0          3h18m
kube-system   coredns-8655855d6-lmrkq                  1/1     Running     0          3h18m
kube-system   svclb-traefik-wxp6k                      2/2     Running     0          133m
kube-system   svclb-traefik-jls5w                      2/2     Running     0          133m
kube-system   svclb-traefik-j776k                      2/2     Running     0          133m
kube-system   svclb-traefik-qbfx4                      2/2     Running     0          133m
kube-system   helm-install-traefik-jxptl               0/1     Completed   0          120m
kube-system   traefik-6cbfb44969-r9fj2                 1/1     Running     0          118m

📝 Note:



The K3D quick-start script involves the following docker images (only the first one is pulled on the host; the others are pulled inside the running k3s containers):

# docker images
REPOSITORY                       TAG            IMAGE ID       CREATED         SIZE
rancher/k3s                      v1.18.2-k3s1   e9f6bccce7de   6 months ago    151MB
rancher/klipper-helm             v0.2.5         6207e2a3f522   6 months ago    136MB
rancher/library-traefik          1.7.19-amd64   aa764f7db305   12 months ago   85.7MB
rancher/metrics-server           v0.3.6         9dd718864ce6   13 months ago   39.9MB
rancher/local-path-provisioner   v0.0.11        9d12f9848b99   13 months ago   36.2MB
rancher/coredns-coredns          1.6.3          c4d3d16fe508   14 months ago   44.3MB

The K3D quick-start script starts 4 docker containers as the 4 nodes:

sudo docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED       STATUS       PORTS                                                                     NAMES
7b880c8966f9   rancher/k3s:v1.18.2-k3s1   "/bin/k3s agent --no…"   3 hours ago   Up 3 hours                                                                             k3d-edge-worker-3
7d0aa70e24f3   rancher/k3s:v1.18.2-k3s1   "/bin/k3s agent --no…"   3 hours ago   Up 3 hours                                                                             k3d-edge-worker-2
752aebb8f9bb   rancher/k3s:v1.18.2-k3s1   "/bin/k3s agent --no…"   3 hours ago   Up 3 hours                                                                             k3d-edge-worker-1
dca9851cf5d6   rancher/k3s:v1.18.2-k3s1   "/bin/k3s server --h…"   3 hours ago   Up 3 hours   0.0.0.0:54835->54835/tcp, 0.0.0.0:54836->80/tcp, 0.0.0.0:54837->443/tcp   k3d-edge-server

As shown above, there is 1 k3s server (the control plane) and 3 k3s agents. The k3s server exposes 3 random ports:



  1. 54835->54835: the K8S API

  2. 54836->80: the HTTP port of the K8S Ingress

  3. 54837->443: the HTTPS port of the K8S Ingress
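Since the ports are random per run, it is handier to read them back from docker than from the script log. A sketch: run docker port k3d-edge-server against the live cluster and parse the mapping (the sample output below is taken from this run):

```shell
# Sample of `docker port k3d-edge-server` output from this run;
# against a live cluster, capture it with: port_output=$(docker port k3d-edge-server)
port_output='54835/tcp -> 0.0.0.0:54835
80/tcp -> 0.0.0.0:54836
443/tcp -> 0.0.0.0:54837'
# Pick out the host port mapped to the container's 443 (the ingress HTTPS entry point):
https_port=$(printf '%s\n' "$port_output" | awk -F: '/^443\/tcp/ {print $2}')
echo "$https_port"   # -> 54837
```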



Deploying the Traefik Dashboard



So to reach applications deployed in the cluster, we use these two random ports: http://localhost:54836 or https://localhost:54837. Right after setup there is no Ingress at all, so both addresses return: 404



Also, the default script does not enable the Traefik dashboard, which makes management inconvenient. Let's turn it on.



First, get into the k3s server container. This container has no /bin/bash, only /bin/sh:

# docker exec -it <k3s server container id> ls /bin
addgroup   ash   busybox   cat   containerd   containerd-shim   crictl   ctr   flannel   ip
iptables   k3s   k3s-agent   k3s-server   kubectl   ls   ps   runc   sh   ...
(several hundred more busybox applets; note that sh is present but bash is not)

So enter the container through /bin/sh:

# docker exec -it <k3s server container id> /bin/sh
---------------now inside the container--------------
/ # cd /var/lib/rancher/k3s/server/manifests
/var/lib/rancher/k3s/server/manifests # vi traefik.yaml

The edited traefik.yaml looks like this (the addition is dashboard.enabled: "true"):

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
  valuesContent: |-
    rbac:
      enabled: true
    ssl:
      enabled: true
    metrics:
      prometheus:
        enabled: true
    kubernetes:
      ingressEndpoint:
        useDefaultPublishedService: true
    dashboard:
      enabled: true
    image: "rancher/library-traefik"
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"

Saving the file triggers a redeployment of traefik.yaml:

# kubectl get events -n kube-system
LAST SEEN   TYPE      REASON              OBJECT                           MESSAGE
43s         Normal    Pulled              pod/helm-install-traefik-jxptl   Successfully pulled image "rancher/klipper-helm:v0.2.5"
43s         Normal    Created             pod/helm-install-traefik-jxptl   Created container helm
43s         Normal    Started             pod/helm-install-traefik-jxptl   Started container helm
43s         Normal    ScalingReplicaSet   deployment/traefik               Scaled up replica set traefik-6cbfb44969 to 1
43s         Normal    SuccessfulCreate    replicaset/traefik-6cbfb44969    Created pod: traefik-6cbfb44969-r9fj2
<unknown>   Normal    Scheduled           pod/traefik-6cbfb44969-r9fj2     Successfully assigned kube-system/traefik-6cbfb44969-r9fj2 to edge-worker2
42s         Normal    Pulling             pod/traefik-6cbfb44969-r9fj2     Pulling image "rancher/library-traefik:1.7.19"
42s         Normal    Completed           job/helm-install-traefik         Job completed
41s         Normal    SandboxChanged      pod/helm-install-traefik-jxptl   Pod sandbox changed, it will be killed and re-created.
9s          Normal    Pulled              pod/traefik-6cbfb44969-r9fj2     Successfully pulled image "rancher/library-traefik:1.7.19"
9s          Normal    Created             pod/traefik-6cbfb44969-r9fj2     Created container traefik
9s          Normal    Started             pod/traefik-6cbfb44969-r9fj2     Started container traefik

After the deployment, an ingress is configured automatically:

# kubectl get ingress -A
NAMESPACE     NAME                CLASS    HOSTS                 ADDRESS      PORTS   AGE
kube-system   traefik-dashboard   <none>   traefik.example.com   172.18.0.2   80      149m

So we add a hosts entry, 127.0.0.1 traefik.example.com, and can then open http://traefik.example.com:54836/dashboard/, as shown below:
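The hosts-file step from the shell (inside WSL2 here; on the Windows side the file is C:\Windows\System32\drivers\etc\hosts instead), with the live-cluster commands shown as comments:

```shell
# The entry to add; traefik.example.com is the host the auto-created ingress expects.
hosts_entry='127.0.0.1 traefik.example.com'
echo "$hosts_entry"
# Append it (needs root) and test against the live cluster:
#   echo "$hosts_entry" | sudo tee -a /etc/hosts
#   curl http://traefik.example.com:54836/dashboard/
```

Alternatively, curl's --resolve option (e.g. `curl --resolve traefik.example.com:54836:127.0.0.1 ...`) tests the ingress without touching the hosts file at all.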





📝 Note:

There is another way to reach it, kubectl port-forward:

 $ kubectl port-forward $(kubectl get pods --selector "app=traefik" --output=name -n kube-system) --address 0.0.0.0 8080:8080 -n kube-system

The traefik management page is then available at http://localhost:8080/dashboard/.



Deploying an Application



Deploy the whoami application as a test:

$ kubectl create deploy whoami --image containous/whoami
deployment.apps/whoami created
$ kubectl expose deploy whoami --port 80
service/whoami exposed

Then we define an Ingress rule to use our new Traefik; Traefik can read both its own IngressRoute CRD and the traditional Ingress resource.



 vi whoami-ingress.yaml

With the following content:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whoami
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: whoami
          servicePort: 80

Apply it with kubectl apply:

 kubectl apply -f whoami-ingress.yaml -n default

In this example we expose the whoami service on both the HTTP and HTTPS entry points; every URL is routed to the service, and the new Ingress shows up in the Traefik Dashboard.





To test the application, just open http://localhost:54836/ in a browser. This works because installing Traefik automatically created a LoadBalancer Service. The port number is needed because the k3s server runs inside a container, and its HTTP entry point is mapped to host port 54836.
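The same test from the command line; ports 54836/54837 are this run's random HTTP/HTTPS host ports, so substitute your own:

```shell
# The two entry points of this run's ingress:
http_url='http://localhost:54836/'
https_url='https://localhost:54837/'
echo "$http_url $https_url"
# Against the live cluster:
#   curl "$http_url"        # whoami echoes back the request details
#   curl -k "$https_url"    # -k accepts Traefik's self-signed certificate
```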



Deploying the Kubernetes Dashboard



GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

The output is as follows:

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Verify that the pods started correctly:

# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-ltk42   1/1     Running   0          14m
kubernetes-dashboard-7d8574ffd9-sptn6        1/1     Running   0          98s

Dashboard RBAC Configuration



⚠️ Important:

The admin-user created in this guide will have administrative privileges on the Dashboard.



Create the following resource manifests:



vi dashboard.admin-user.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard



vi dashboard.admin-user-role.yml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Apply the admin-user configuration:



 kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

Obtain the Bearer Token:

 kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token

The result looks like this:

 token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im9XNENjc0VlSzVBTDJGRWpPT2VuY1pkbzNJblYybFFwY2YxQnBvZVlMVlEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXA1Y253Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0NmI4NGFkYS02MDQ3LTQzN2EtODk2My1lY2NmZWQ4MjE0ZDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.N8Zhsf2JU5Hoa8yfhrspJbMGP7AFmfs2JeWXVpDksAEMfWf5mI-MXYcqMkbZ9_Qbwp-h9S7k7oZE41lUp8UXlDWi0Ovm4I4fsuoWqq-aJoyt-c060bWNla1edVZ5BzMTanIYzJHPjS7-cOnsxqg-EtXfdN3JRsiE0QevLvJLhYU37HFc7-cImJ8iH8-r-GHCD8MmuBbTV0EBidLmSo-BdWC5hcZoYghgNtfnMkN0p1e3O23EPRO2XDmaw_lVN4TNgZXPS9hirBD1AZxm1ZE1Iyo2mSOgYjCNQOF8IcaUtjTGqt4RzK4R9AWRbL9z-HMbK_JamcQvDz3fnW3aauCezQ
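An alternative to grepping the describe output: read the token straight out of the secret with jsonpath and base64-decode it (the token secret's name carries a random suffix, hence the ServiceAccount lookup; this matches the k8s versions of that era, where the ServiceAccount still lists its token secret). A sketch, with the decode step demonstrated on a sample value:

```shell
# Live-cluster form:
#   secret=$(kubectl -n kubernetes-dashboard get sa admin-user \
#       -o jsonpath='{.secrets[0].name}')
#   kubectl -n kubernetes-dashboard get secret "$secret" \
#       -o jsonpath='{.data.token}' | base64 --decode
# The final decode step, shown against a sample base64 value:
sample=$(printf '%s' 'eyJhbGciOi.sample-token' | base64)
printf '%s' "$sample" | base64 --decode
```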

Access the dashboard locally:

 kubectl proxy

The dashboard is now available at:

 http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Sign in with the admin-user Bearer Token.





More Ways to Access the Dashboard

Via port-forward

 $ kubectl port-forward $(kubectl get pods --selector "k8s-app=kubernetes-dashboard" --output=name -n kubernetes-dashboard) --address 0.0.0.0 8443:8443 -n kubernetes-dashboard

Deploying Applications with Helm

# helm repo add stable http://mirror.azure.cn/kubernetes/charts
# helm repo update
# helm install jenkins stable/jenkins
WARNING: This chart is deprecated
NAME: jenkins
LAST DEPLOYED: Sat Nov  7 22:25:02 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
***********************DEPRECATED*************************
The Jenkins chart is deprecated. Future development has been moved to https://github.com/jenkinsci/helm-charts
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenkins" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080
  kubectl --namespace default port-forward $POD_NAME 8080:8080
3. Login with the password from step 1 and the username: admin
4. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/

Summary



With K3S/K3D you get the following advantages:



  1. Clusters are fast to create, deploy, and start;

  2. The cluster consumes few resources;

  3. The created cluster has complete basic functionality;

  4. It behaves consistently with a standard K8S cluster.
