Kubernetes is a distributed cluster system from Google built on top of Docker. It consists of the following components:

  etcd: a highly available store for shared configuration and service discovery. It works together with flannel on the minion machines: it ensures that the Docker daemon on each minion gets a different IP range, so that ultimately every running Docker container has an IP address distinct from any container running on any other minion.

  flannel: overlay network support.

  kube-apiserver: every request, whether issued through kubectl or through the remote API directly, goes through the apiserver.

  kube-controller-manager: runs the control loops for the replication controller, endpoints controller, namespace controller, and serviceaccounts controller, and talks to kube-apiserver to keep these controllers working.

  kube-scheduler: assigns pods to specific worker nodes (minions) according to a scheduling algorithm; this step is also called binding (bind).

  kubelet: runs on every Kubernetes minion node; it is the logical successor of the container agent.

  kube-proxy: a component that runs on each minion node and acts as a service proxy.

The diagram shows the GIT + Jenkins + Kubernetes + Docker + Etcd + confd + Nginx + GlusterFS architecture.

1. Environment introduction and preparation:

1.1 Host operating system

  The physical machines run 64-bit CentOS 7.4; details below.

[root@etcd ~]# uname -a
Linux etcd 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@etcd ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

1.2 Host information

  Five machines are used for the k8s environment (Master, etcd, and the registry can share one machine); details:

| Node              | Hostname   | IP        |
| ----------------- | ---------- | --------- |
| Etcd              | etcd       | 10.0.0.10 |
| Kubernetes Master | k8s-master | 10.0.0.11 |
| Kubernetes Node 1 | k8s-node-1 | 10.0.0.12 |
| Kubernetes Node 2 | k8s-node-2 | 10.0.0.13 |
| Kubernetes Node 3 | k8s-node-3 | 10.0.0.14 |

Set the hostname on each of the five machines:

  On the etcd machine:

[root@localhost ~]#  hostnamectl --static set-hostname  etcd

  On the master:

[root@localhost ~]#  hostnamectl --static set-hostname  k8s-master

  On node 1:

[root@localhost ~]#  hostnamectl --static set-hostname  k8s-node-1

  On node 2:

[root@localhost ~]# hostnamectl --static set-hostname  k8s-node-2

  On node 3:

[root@localhost ~]# hostnamectl --static set-hostname  k8s-node-3

  Add the hosts entries on all five machines by running:

echo '10.0.0.10    etcd                 # hosts entries for the cluster
10.0.0.11    k8s-master
10.0.0.12    k8s-node-1
10.0.0.13    k8s-node-2
10.0.0.14    k8s-node-3
10.0.0.10    kube-registry' >> /etc/hosts
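
An optional sanity check, not in the original write-up: verify from any machine that every hostname resolves and answers, e.g. with a small ping loop:

for h in etcd k8s-master k8s-node-1 k8s-node-2 k8s-node-3 kube-registry; do
    ping -c 1 -W 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"
done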

1.3 Disable the firewall on all five machines

systemctl disable firewalld.service
systemctl stop firewalld.service

Install NTP and start it

# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd
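
To confirm that time is actually synchronizing (an added check; the original stops at enabling ntpd), list the peers ntpd has selected; an asterisk in the first column marks the active sync source:

# ntpq -p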

2. Deploy etcd

  k8s depends on etcd, so etcd must be deployed first. Here it is installed via yum:

[root@etcd ~]# yum install etcd -y

The yum-installed etcd keeps its default configuration file at /etc/etcd/etcd.conf. Edit it and change the marked lines:

# [member]
ETCD_NAME=master                             #<---- change this line
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
# Change this line: 2379 is the default client port; 4001 is added as a backup in case the port is already taken.
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"       #<---- change this line
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
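
To double-check the edit, print just the active (uncommented) settings; this grep is a convenience added here, not part of the original post, and should show exactly the four lines below:

[root@etcd ~]# grep -Ev '^#|^$' /etc/etcd/etcd.conf
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"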

Start etcd and verify its status

[root@etcd ~]# systemctl enable etcd
[root@etcd ~]# systemctl start etcd
[root@etcd ~]# etcdctl set testdir/testkey0 0
0
[root@etcd ~]# etcdctl get testdir/testkey0
0
[root@etcd ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@etcd ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy

3. Deploy the master

3.1 Install Docker

[root@k8s-master ~]# yum install -y docker

Edit the Docker configuration file so that the daemon is allowed to pull images from the registry.

[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'   #<---- append --insecure-registry
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

Enable Docker at boot and start the service

[root@k8s-master ~]# chkconfig docker on
Note: Forwarding request to 'systemctl enable docker.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# service docker start
Redirecting to /bin/systemctl start docker.service

3.2 Install kubernetes

[root@k8s-master ~]# yum install -y kubernetes

3.3 Configure and start kubernetes

The following components must run on the kubernetes master:

    Kubernetes API Server

    Kubernetes Controller Manager

    Kubernetes Scheduler

Change the marked lines in the following configuration files accordingly:

3.3.1 /etc/kubernetes/apiserver

[root@k8s-master ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"          #<---- change this line
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"                                 #<---- change this line
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"        #<---- change this line
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"   #<---- change this line
# Add your own!
KUBE_API_ARGS=""

3.3.2  /etc/kubernetes/config

[root@k8s-master ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"                  #<---- change this line

Start the services and enable them at boot

[root@k8s-master ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master ~]# systemctl start kube-scheduler.service
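
Before moving on, it is worth confirming that all three master services are active and that the apiserver answers on the insecure port. This check is an added convenience; /version is a standard apiserver endpoint:

[root@k8s-master ~]# for s in kube-apiserver kube-controller-manager kube-scheduler; do echo -n "$s: "; systemctl is-active $s; done
[root@k8s-master ~]# curl -s http://k8s-master:8080/version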

4. Deploy the nodes

4.1 Install Docker

  See 3.1.

[root@k8s-node-1 ~]# yum install -y docker

Edit the Docker configuration file so that the daemon is allowed to pull images from the registry.

[root@k8s-node-1 ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry kube-registry:5000'   #<---- append --insecure-registry; kube-registry:5000 must resolve via /etc/hosts
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
[root@k8s-node-1 ~]# chkconfig docker on
[root@k8s-node-1 ~]# service docker start

4.2 Install kubernetes

  See 3.2.

[root@k8s-node-1 ~]# yum install -y kubernetes

4.3 Configure and start kubernetes

  The following components must run on each kubernetes node:

    Kubelet

    Kubernetes Proxy

Change the marked lines in the following configuration files accordingly:

4.3.1 /etc/kubernetes/config

[root@k8s-node-1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"        #<---- change this line

4.3.2 /etc/kubernetes/kubelet

[root@k8s-node-1 ~]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"        #<---- change this line
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"        #<---- change this line
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"        #<---- change this line
# pod infrastructure container
# KUBELET_POD_INFRA_CONTAINER is the base (pause) container started with every pod; the default
# Red Hat image below cannot be downloaded here, so a self-built pod-infrastructure image served
# from the local registry is used instead.
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kube-registry:5000/pod-infrastructure:latest"       #<---- change this line
# Add your own!
KUBELET_ARGS=""

Start the services and enable them at boot

[root@k8s-node-1 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node-1 ~]# systemctl start kube-proxy.service
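
The equivalent quick check on the node (both units should report active):

[root@k8s-node-1 ~]# systemctl is-active kubelet kube-proxy
active
active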

4.4 Check the status and query the health of the cluster

  On the master, list the nodes in the cluster and their status:

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     5m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     5m

For the remaining k8s-node-2 and k8s-node-3, only the kubelet hostname override needs to change:

[root@k8s-node-2 ~]# vi /etc/kubernetes/kubelet
......
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-2"
......
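
After editing the kubelet file on k8s-node-2 and k8s-node-3, enable and start the services there exactly as in 4.3; for example on k8s-node-2:

[root@k8s-node-2 ~]# systemctl enable kubelet.service kube-proxy.service
[root@k8s-node-2 ~]# systemctl restart kubelet.service kube-proxy.service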

After the node components have been deployed on k8s-node-1, k8s-node-2, and k8s-node-3, check the status:

#<--- list the nodes registered with the master and their status
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     40m
k8s-node-2   Ready     5m
k8s-node-3   Ready     23s
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     40m
k8s-node-2   Ready     5m
k8s-node-3   Ready     25s

At this point a kubernetes cluster has been set up, but it cannot work properly yet; continue with the steps below.

5. Create the overlay network: Flannel

5.1 Install Flannel

  Run the following command on the master and on every node to install it:

[root@k8s-master ~]# yum install -y flannel

5.2 Configure Flannel

  On the master and on every node, edit /etc/sysconfig/flanneld and change the marked line:

[root@k8s-master ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"             #<---- change this line
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

5.3 Configure the flannel key in etcd

The Kubernetes network model requires every Pod to have an IP, the PodIP, in a flat, shared network namespace, so that a Pod can communicate across the network directly with other physical machines and Pods. To implement this model, an overlay network must be created in the Kubernetes cluster to connect the nodes; third-party network plugins such as Flannel and Open vSwitch can currently provide such an overlay.

Flannel assigns the Docker bridge on each Node a different IP range so that container IPs are unique within the cluster. Because Flannel reconfigures the Docker bridge, the previously created Docker bridge must be deleted first (run on every machine):

# iptables -t nat -F
# ifconfig docker0 down
# brctl delbr docker0

  Flannel stores its configuration in etcd so that multiple flannel instances stay consistent, so the following key must be set in etcd (the key '/atomic.io/network/config' corresponds to FLANNEL_ETCD_PREFIX in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start):

[root@etcd ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/24" }'
{ "Network": "10.0.0.0/24" }
[root@etcd ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/24" }
[root@k8s-node-1 ~]# systemctl status flanneld.service    # the flannel network must not share a range with the physical network
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 17:01:42 CST; 4s ago
  Process: 74016 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
 Main PID: 73955 (flanneld)
   Memory: 16.7M
   CGroup: /system.slice/flanneld.service
           └─73955 /usr/bin/flanneld -etcd-endpoints=http://etcd:2379 -etcd-prefix=/atomic.io/network --iface=ens33
May 31 17:01:36 k8s-node-1 flanneld-start[73955]: E0531 17:01:36.996778   73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:38 k8s-node-1 flanneld-start[73955]: E0531 17:01:38.000029   73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:39 k8s-node-1 flanneld-start[73955]: E0531 17:01:39.003675   73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:40 k8s-node-1 flanneld-start[73955]: E0531 17:01:40.006161   73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:41 k8s-node-1 flanneld-start[73955]: E0531 17:01:41.007743   73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.016595   73955 local_manager.go:179] Picking subnet in range 10.0.1.0 ... 10.0.255.0
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.055551   73955 manager.go:250] Lease acquired: 10.0.74.0/24
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.056475   73955 network.go:98] Watching for new subnet leases
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.081693   73955 network.go:191] Subnet added: 10.0.0.128/25
May 31 17:01:42 k8s-node-1 systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@etcd ~]# etcdctl rm /atomic.io/network/config
PrevNode.Value: { "Network": "10.0.0.0/24" }
[root@etcd ~]# etcdctl get /atomic.io/network/config
Error:  100: Key not found (/atomic.io/network/config) [11633]

Because 10.0.0.0/24 overlaps the physical network, the key was removed above and is re-created with a different range. For example:

etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16", "SubnetMin": "172.17.1.0", "SubnetMax": "172.17.254.0"}'
[root@etcd ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
[root@etcd ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/16" }
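
Once the flanneld instances have been started against this key (section 5.4 below), each node's acquired lease appears under the subnets subdirectory of the prefix, which makes a handy sanity check; the subnet values shown here are illustrative, not from the original run:

[root@etcd ~]# etcdctl ls /atomic.io/network/subnets
/atomic.io/network/subnets/10.0.20.0-24
/atomic.io/network/subnets/10.0.74.0-24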

5.4 Start the services

After starting Flannel, docker and the kubernetes services must be restarted in turn.

  On the master:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

  On each node:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
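
On each node, docker0 should now sit inside that node's flannel lease. flanneld writes the lease to /run/flannel/subnet.env, so a quick cross-check is possible; the sketch below uses illustrative values:

[root@k8s-node-1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.0.0.0/16
FLANNEL_SUBNET=10.0.74.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@k8s-node-1 ~]# ip -4 addr show docker0 | grep inet    # should fall inside FLANNEL_SUBNET
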
A quick recap of the components:

  1. kube-apiserver: runs on the master node and accepts user requests.

  2. kube-scheduler: runs on the master node and handles resource scheduling, i.e. deciding which node each pod is created on.

  3. kube-controller-manager: runs on the master node and contains the ReplicationManager, Endpoints controller, Namespace controller, Node controller, and others.

  4. etcd: a distributed key-value store that shares the resource-object information of the whole cluster.

  5. kubelet: runs on each node and maintains the pods running on that particular host.

  6. kube-proxy: runs on each node and acts as a service proxy.

Workflow:

① kubectl sends a deployment request to the API Server.

② The API Server notifies the Controller Manager to create a deployment resource.

③ The Scheduler performs the scheduling and assigns the two replica Pods to k8s-node-1 and k8s-node-2.

④ The kubelet on k8s-node-1 and k8s-node-2 creates and runs the Pods on its own node.

[root@k8s-master ~]# kubectl get cs        # query the health of each master component
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

---------------------------------------

[root@k8s-master ~]# kube-apiserver --version
Kubernetes v1.5.2
[root@k8s-master ~]# flanneld -version
0.7.1
[root@etcd ~]# etcd -version
etcd Version: 3.2.18
Git SHA: eddf599
Go Version: go1.9.4
Go OS/Arch: linux/amd64

--------------------------------------------

Further notes:

Flannel assigns the Docker bridge on each Node a different IP range so that container IPs are unique within the cluster. Because Flannel reconfigures the Docker bridge, the original docker0 bridge must be deleted:

[root@k8s-master ~]# ip link set docker0 down      // bring the docker0 bridge down
[root@k8s-master ~]# ip link delete docker0        // delete the docker0 bridge
[root@k8s-master ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f0:be:d7 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f:685e:be9c:a067/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::25bc:f2cb:4ac9:4aff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5095:3b33:58b4:46a4/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 10.0.20.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ab2:fede:73d1:9c7/64 scope link flags 800
       valid_lft forever preferred_lft forever
[root@k8s-master ~]# systemctl restart flanneld docker

Once this is done, the result can be verified:
every Node now has both a docker0 and a flannel0 interface, and their addresses differ from Node to Node.

[root@k8s-master ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f0:be:d7 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f:685e:be9c:a067/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::25bc:f2cb:4ac9:4aff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5095:3b33:58b4:46a4/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 10.0.20.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::48ef:305f:a9af:6b4c/64 scope link flags 800
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:62:01:91:e2 brd ff:ff:ff:ff:ff:ff
    inet 10.0.20.1/24 scope global docker0
       valid_lft forever preferred_lft forever
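
With flannel up, addresses inside one node's docker0/flannel0 range should be reachable from the other machines. An illustrative cross-host test (the target address is an example taken from another node's lease):

[root@k8s-master ~]# ping -c 2 10.0.74.1    # docker0 address on k8s-node-1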

[root@k8s-master ~]# kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.254.0.1   <none>        443/TCP   7h
[root@k8s-master ~]# kubectl get pod -o wide
No resources found.
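
Nothing has been scheduled yet, hence "No resources found.". A minimal smoke test for this kubernetes 1.5-era cluster is to run a small deployment and watch where the pods land; the name and image below are arbitrary examples, and the image must be pullable, e.g. from the insecure registry configured above:

[root@k8s-master ~]# kubectl run test-nginx --image=kube-registry:5000/nginx --replicas=2
[root@k8s-master ~]# kubectl get pod -o wide    # the two pods should be spread across the nodes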

Reposted from: https://blog.51cto.com/sf1314/2122143
