Manually Deploying Kubernetes on a Multi-Host Ubuntu 16.04 Cluster, with a Private Docker Registry and the Kubernetes Dashboard Web UI

Published 2017-03-17 20:51. Tags: kubernetes, dashboard, k8s, etcd, cluster, docker
Main steps:
- Deploy the etcd cluster
- Deploy the K8s master
- Configure the flannel service
- Deploy the K8s nodes
- Deploy DNS
- Deploy the Dashboard
Environment:

Component | Version |
---|---|
etcd | 3.1.2 |
Docker | 17.03.0-ce |
flannel | v0.7.0 |
Kubernetes | v1.4.9 |
Cluster host IPs:

Name | IP | OS |
---|---|---|
master | 10.107.20.5 | Ubuntu 16.04.2 LTS |
node1 | 10.107.20.6 | Ubuntu 16.04.2 LTS |
node2 | 10.107.20.7 | Ubuntu 16.04.2 LTS |
node3 | 10.107.20.8 | Ubuntu 16.04.2 LTS |
node4 | 10.107.20.9 | Ubuntu 16.04.2 LTS |

Install the latest Docker Engine on every host (for manual installation, see my other posts).
Deploying the etcd Cluster
We will install and deploy an etcd cluster across all five hosts.
Downloading etcd
On the staging machine, download etcd:
ETCD_VERSION=${ETCD_VERSION:-"3.1.2"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
for h in master node1 node2 node3 node4; do ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube; done
for h in master node1 node2 node3 node4; do ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'; done
Alternatively, download the etcd release .tar.gz by hand from GitHub (https://github.com/coreos/etcd/releases/), extract it, and scp the etcd and etcdctl binaries to each host (each host needs SSH access configured; see my other posts), then move them into /opt/bin.
Configuring the etcd service
On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service (remember to adjust the IP addresses and names per host).
/opt/config/etcd.conf
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=etcd5
ETCD_INITIAL_CLUSTER=etcd5=http://10.107.20.5:2380,etcd6=http://10.107.20.6:2380,etcd7=http://10.107.20.7:2380,etcd8=http://10.107.20.8:2380,etcd9=http://10.107.20.9:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://10.107.20.5:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.107.20.5:2380
ETCD_ADVERTISE_CLIENT_URLS=http://10.107.20.5:2379
ETCD_LISTEN_CLIENT_URLS=http://10.107.20.5:2379,http://127.0.0.1:2379
GOMAXPROCS=$(nproc)
EOF
Here the five hosts are named etcd5 through etcd9 (after the last octet of their IPs); you may pick your own names for ETCD_NAME, as long as ETCD_INITIAL_CLUSTER lists the same five. On each host, set ETCD_LISTEN_PEER_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to that host's own IP.
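The per-host edits above can be scripted instead of done by hand. A minimal sketch (host IPs and the etcd5–etcd9 naming are the values assumed from the table above; it writes each host's etcd.conf to a local staging directory for review):

```shell
#!/bin/sh
# Generate one etcd.conf per host into a local staging directory.
set -e
OUT=./etcd-conf
mkdir -p "$OUT"
CLUSTER="etcd5=http://10.107.20.5:2380,etcd6=http://10.107.20.6:2380,etcd7=http://10.107.20.7:2380,etcd8=http://10.107.20.8:2380,etcd9=http://10.107.20.9:2380"
for ip in 10.107.20.5 10.107.20.6 10.107.20.7 10.107.20.8 10.107.20.9; do
  octet=${ip##*.}                      # last octet: 5..9 -> etcd5..etcd9
  cat > "$OUT/etcd.conf.$ip" <<EOF
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=etcd$octet
ETCD_INITIAL_CLUSTER=$CLUSTER
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://$ip:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://$ip:2380
ETCD_ADVERTISE_CLIENT_URLS=http://$ip:2379
ETCD_LISTEN_CLIENT_URLS=http://$ip:2379,http://127.0.0.1:2379
GOMAXPROCS=$(nproc)
EOF
done
ls "$OUT"
```

Note that GOMAXPROCS expands on the machine running this script; regenerate per host if core counts differ. Each generated file can then be copied to its host as /opt/config/etcd.conf.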
/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
Then run the following on every host to enable etcd at boot and start it:
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
Deploying the K8s Master
Downloading Flannel
FLANNEL_VERSION=${FLANNEL_VERSION:-"v0.7.0"}
curl -L https://github.com/coreos/flannel/releases/download/${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
As with etcd, this can also be downloaded from GitHub and extracted by hand.
Download the K8s release tarball and extract it (https://github.com/kubernetes/kubernetes/releases):
cd /tmp
scp kubernetes/server/bin/kube-apiserver \
    kubernetes/server/bin/kube-controller-manager \
    kubernetes/server/bin/kube-scheduler \
    kubernetes/server/bin/kubelet \
    kubernetes/server/bin/kube-proxy user@10.107.20.5:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@10.107.20.5:~/kube
ssh -t user@10.107.20.5 'sudo mv ~/kube/* /opt/bin/'
This copies kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and flanneld to the master host.
Creating certificates
On the master host, run the following commands to create the certificates:
mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=10.107.20.5
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
Configuring the kube-apiserver service
We use the following Service and flannel subnets:
SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16
FLANNEL_NET=192.168.0.0/16
On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://10.107.20.5:2379,http://10.107.20.6:2379,http://10.107.20.7:2379,http://10.107.20.8:2379,http://10.107.20.9:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=10.107.20.5 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
--etcd-servers: the etcd endpoint URLs, as a single comma-separated list. --insecure-bind-address: the insecure bind address; 0.0.0.0 binds all interfaces. --insecure-port: the apiserver's insecure port, 8080 by default. --service-cluster-ip-range: the virtual IP range for Kubernetes Services, in CIDR notation; it must not overlap any real host IP range.
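The --etcd-servers value must be one comma-joined string with no spaces; a space-separated list silently truncates the flag after the first endpoint. A small sketch for building the list from the host IPs (assumed from the table above):

```shell
#!/bin/sh
# Join the etcd client endpoints with commas (no spaces) for --etcd-servers.
set -e
ENDPOINTS=""
for ip in 10.107.20.5 10.107.20.6 10.107.20.7 10.107.20.8 10.107.20.9; do
  ENDPOINTS="${ENDPOINTS:+${ENDPOINTS},}http://${ip}:2379"
done
echo "$ENDPOINTS" > ./etcd-servers.txt
cat ./etcd-servers.txt
```

The resulting string can be pasted directly into the unit file, or substituted into it with sed when templating units for several environments.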
Configuring the kube-controller-manager service
On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Configuring the kube-scheduler service
On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Configuring the flanneld service
On the master host, create /lib/systemd/system/flanneld.service with the following content:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://10.107.20.5:2379" \
  --iface=10.107.20.5 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Starting the services
/opt/bin/etcdctl --endpoints="http://10.107.20.5:2379,http://10.107.20.6:2379,http://10.107.20.7:2379,http://10.107.20.8:2379,http://10.107.20.9:2379" mk /coreos.com/network/config '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
Updating the Docker service
source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
    ip link set dev docker0 down
    ip link delete docker0
fi
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
If Docker was installed manually, you also need to download docker.service and docker.socket from GitHub into /lib/systemd/system/.
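The sed edit above can be dry-run against a copy of docker.service before touching the real unit. A sketch with example flannel values (on a real node, FLANNEL_SUBNET and FLANNEL_MTU come from sourcing /run/flannel/subnet.env instead):

```shell
#!/bin/sh
# Dry-run the ExecStart rewrite against a local copy of the docker unit.
set -e
FLANNEL_SUBNET=192.168.34.1/24   # example value; real nodes source
FLANNEL_MTU=1450                 # /run/flannel/subnet.env instead
cat > ./docker.service.test <<'EOF'
[Service]
ExecStart=/usr/bin/dockerd -H fd://
EOF
sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" ./docker.service.test
grep '^ExecStart=' ./docker.service.test
```

The printed ExecStart line should carry the --bip and --mtu values from the flannel subnet file; if it still shows `-H fd://`, the pattern did not match your installed unit and the sed needs adjusting before running it for real.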
Deploying the K8s Nodes
Copying the binaries
cd /tmp
for h in master node1 node2 node3 node4; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
for h in master node1 node2 node3 node4; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube;done
for h in master node1 node2 node3 node4; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done
This copies kube-proxy and kubelet to every node.
Configuring flanneld and updating the Docker service
Follow the corresponding master steps: configure the flanneld service, start it, and update the Docker service. Remember to change the --iface address per node.
Configuring the kubelet service (configure kubelet and kube-proxy on the master, too)
/lib/systemd/system/kubelet.service — change --hostname-override to each host's own IP:
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=10.107.20.5 \
  --api-servers=http://10.107.20.5:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Start the service:
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
Configuring the kube-proxy service
/lib/systemd/system/kube-proxy.service — again, change the IP addresses per host:
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
  --hostname-override=10.107.20.5 \
  --master=http://10.107.20.5:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service:
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
Deploying DNS
We set DNS_SERVER_IP="172.18.8.8", DNS_DOMAIN="cluster.local", and DNS_REPLICAS=1.
On the master, create skydns.yml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: 10.107.20.5:5000/mritd/kubedns-amd64
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=cluster.local
        - --dns-port=10053
        - --kube-master-url=http://10.107.20.5:8080
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: 10.107.20.5:5000/kube-dnsmasq-amd64
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: 10.107.20.5:5000/exechealthz-amd64
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.18.8.8
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Create the pod and service:
kubectl create -f skydns.yml
Then edit kubelet.service on each node, adding --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local to the kubelet arguments.
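For instance, a node's kubelet ExecStart then looks like this (node1's IP shown; adjust --hostname-override per host):

```ini
ExecStart=/opt/bin/kubelet \
  --hostname-override=10.107.20.6 \
  --api-servers=http://10.107.20.5:8080 \
  --cluster-dns=172.18.8.8 \
  --cluster-domain=cluster.local \
  --logtostderr=true
```

Afterwards run sudo systemctl daemon-reload && sudo systemctl restart kubelet on each node so the new flags take effect.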
A problem can show up here: when creating a pod, Kubernetes also starts a google_containers/pause image to implement the pod abstraction, so an offline environment must have the pause image available in advance. I used a machine with Internet access to pull a Docker registry image, exported it with docker save -o to a tar archive, copied that to the master host, and imported it with docker load,
then ran it to stand up a private registry. Pull a mritd/pause-amd64 image from Docker Hub, tag it as 10.107.20.5:5000/mritd/pause-amd64 with docker tag, and push it to the private registry.
After that, add the --pod-infra-container-image parameter to every node's kubelet startup arguments, pointing it at the pause image in the private registry: --pod-infra-container-image=10.107.20.5:5000/mritd/pause-amd64.
The other images used by kube-dns (kubedns-amd64, kube-dnsmasq-amd64, exechealthz-amd64) were likewise downloaded offline and pushed to the private registry; you will need to do those steps yourself.
If clients cannot pull from or push to the private registry, create /etc/docker/daemon.json on the client and add {"insecure-registries":["<registry-ip>:5000"]}, e.g. {"insecure-registries":["10.107.20.5:5000"]}.
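Writing that entry can be scripted; a minimal sketch (it writes a local ./daemon.json here for illustration — on a real node the target is /etc/docker/daemon.json, and any keys already present there must be merged in rather than overwritten):

```shell
#!/bin/sh
# Write an insecure-registries entry for the private registry.
set -e
REGISTRY=10.107.20.5:5000
DAEMON_JSON=./daemon.json        # real nodes: /etc/docker/daemon.json
cat > "$DAEMON_JSON" <<EOF
{
  "insecure-registries": ["${REGISTRY}"]
}
EOF
cat "$DAEMON_JSON"
```

After writing the file on a node, restart Docker (sudo systemctl restart docker) for the setting to take effect.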
Deploying the Dashboard
On the master, create kube-dashboard.yml:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 10.107.20.5:5000/mritd/kubernetes-dashboard-amd64
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://10.107.20.5:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
kubectl create -f kube-dashboard.yml
Deploy it the same way as DNS.
Use kubectl describe -f kube-dashboard.yml to see which node the pod was scheduled on and which NodePort was mapped; the UI is then reachable at that node's IP and port.