kubernetes: 1.17.4

docker:20.10.8

etcd:3.4.9

etcdctl:3.4.9

kubectl:1.17.4

1. Download the release

wget https://github.com/kubernetes/kubernetes/releases/download/v1.17.4/kubernetes.tar.gz

2. Download the server binaries

./kubernetes/cluster/get-kube-binaries.sh
This downloads:
kubernetes/server/kubernetes-server-linux-amd64.tar.gz

3. Extract and copy the binaries
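The server tarball has to be unpacked before the binaries can be copied. A minimal sketch, assuming the archive is still where get-kube-binaries.sh left it:

tar -zxf kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C kubernetes/server/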

cd kubernetes/server/kubernetes/server/bin && cp kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl /usr/bin
cp etcd etcdctl /usr/bin

4. Configuration

etcd:

[root@rhc kubernetes]# cat /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://0.0.0.0:2380"
ETCD_NAME="master"
ETCD_INITIAL_CLUSTER="master=http://0.0.0.0:2380"

apiserver:

[root@rhc kubernetes]# cat apiserver
KUBE_API_ARGS="--etcd-servers=http://172.16.10.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=10.254.0.0/16 --service-node-port-range=1-65535 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=0"KUBE_API_ARGS="--etcd-servers=http://172.16.10.1:2379 --insecure-bind-address=0.0.0.0 --advertise-address=172.16.10.1 --insecure-port=8080 --service-cluster-ip-range=10.254.0.0/16 --service-node-port-range=1024-65535 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=0 --allow-privileged=true"

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
## The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"# The port on the local server to listen on.
#KUBE_API_PORT="port=8080"
KUBE_API_PORT="--insecure-port=8080"# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379,http://127.0.0.1:4001"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.10.1:2379"# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"# default admission control policies
#用k8s1.5 默认的admission-controlKUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"# Add your own!
KUBE_API_ARGS="--service-node-port-range=1-65535"

controller-manager:

[root@rhc kubernetes]# cat controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"
#KUBE_CONTROLLER_MANAGER_ARGS="--logtostderr=false --log-dir=/var/log/kubernetes --v=0 --master=172.16.10.1:8080 --cluster-name=kubernetes"   # kubeconfig does not need to be specified
KUBE_CONTROLLER_MANAGER_ARGS="--logtostderr=false --log-dir=/var/log/kubernetes --v=0 --master=172.16.10.1:8080 --cluster-name=kubernetes --allocate-node-cidrs=true --cluster-cidr=10.244.0.1/16 --service-cluster-ip-range=10.254.0.0/16 --bind-address=0.0.0.0 --secure-port=10257"

Configuration in the k8s 1.5 style:

[root@localhost kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

scheduler:

[root@rhc kubernetes]# cat scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"
#KUBE_SCHEDULER_ARGS="--logtostderr=false --log-dir=/var/log/kubernetes --v=0 --master=172.16.10.1:8080 --cluster-name=kubernetes"   # kubeconfig does not need to be specified
KUBE_SCHEDULER_ARGS="--logtostderr=false --log-dir=/var/log/kubernetes --v=0 --master=172.16.10.1:8080 --bind-address=0.0.0.0  --secure-port=10259"

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS=""

Node components

kubelet:

[root@rhc kubernetes]# cat kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig  --hostname-override=172.16.10.1 --logtostderr=false --log-dir=/var/log/kubernetes --v=0 --fail-swap-on=false"

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.10.1"
# Edit the kubelet.kubeconfig to have correct cluster server address
KUBELET_KUBECONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=harbor.jettech.com/rancher/pause:3.1"
# Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false --cluster-dns=10.254.254.254 --cluster-domain=jettech.com"
KUBELET_ARGS="--cgroup-driver=cgroupfs --fail-swap-on=false --cluster-dns=10.254.254.254 --cluster-domain=jettech.com"

kubelet.kubeconfig

[root@rhc kubernetes]# cat kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://172.16.10.1:8080/
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local

proxy:

[root@rhc kubernetes]# cat proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""

Shared kubeconfig:

[root@rhc kubernetes]# cat kubeconfig
apiVersion: v1
kind: Config
users:
- name: client
  user:
clusters:
- name: default
  cluster:
    server: http://172.16.10.1:8080
contexts:
- context:
    cluster: default
    user: client
  name: default
current-context: default

Configuration in the k8s 1.5 style, shared by all components:

[root@rhc kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.16.10.1:8080"

5. systemd units:

kube-apiserver

[root@rhc kubernetes]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat  /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver https://kubernetes.io/docs/reference/generated/kube-apiserver/
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
#User=kube
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
#       $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

kube-controller-manager

[root@rhc kubernetes]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat  /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager https://kubernetes.io/docs/reference/generated/kube-controller-manager/

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
#User=kube
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

kube-scheduler

[root@rhc kubernetes]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat  /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler https://kubernetes.io/docs/reference/generated/kube-scheduler/

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
#User=kube
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

kubelet:

[root@rhc kubernetes]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat  /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kubelet https://kubernetes.io/docs/reference/generated/kubelet/
After=docker.service crio.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_KUBECONFIG \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

kube-proxy:

[root@rhc kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration in the k8s 1.5 style:

[root@rhc kubernetes]# cat  /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-proxy https://kubernetes.io/docs/reference/generated/kube-proxy/
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

6. Start the services

systemctl enable docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy --now

7. Check the node status

[root@rhc kubernetes]# echo "alias kubectl='kubectl -s http://172.16.10.1:8080' " >>/etc/bashrc
[root@rhc kubernetes]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
172.16.10.1   Ready    <none>   38m   v1.17.4

8. Inspecting etcd with etcdctl (v3 API)

export ETCDCTL_API=3
Or add the environment variable to the /etc/profile file:
vi /etc/profile
...
ETCDCTL_API=3
...
source /etc/profile
ENDPOINTS=http://0.0.0.0:2379
etcdctl --endpoints=$ENDPOINTS member list

CRUD operations:
Create:
etcdctl --endpoints=$ENDPOINTS put foo "Hello World!"
Read:
etcdctl --endpoints=$ENDPOINTS get foo
etcdctl --endpoints=$ENDPOINTS --write-out="json" get foo
Delete:
etcdctl --endpoints=$ENDPOINTS put key myvalue
etcdctl --endpoints=$ENDPOINTS del key
etcdctl --endpoints=$ENDPOINTS put k1 value1
etcdctl --endpoints=$ENDPOINTS put k2 value2
etcdctl --endpoints=$ENDPOINTS del k --prefix

Cluster status is checked mainly with the etcdctl endpoint status and etcdctl endpoint health commands:

[root@rhc k8s]# etcdctl --write-out=table --endpoints=172.16.10.1:2379  endpoint status
+------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT     |       ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.16.10.1:2379 | 1c70f9bbb41018f |   3.4.9 |  762 kB |      true |      false |         5 |    1654380 |            1654380 |        |
+------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

[root@rhc k8s]# etcdctl --endpoints=172.16.10.1:2379 endpoint health
172.16.10.1:2379 is healthy: successfully committed proposal: took = 1.316833ms

Cluster membership is listed with etcdctl member list:
[root@rhc k8s]# etcdctl --endpoints=172.16.10.1:2379  member list -w table
+-----------------+---------+--------+---------------------+-----------------------------------------+------------+
|       ID        | STATUS  |  NAME  |     PEER ADDRS      |              CLIENT ADDRS               | IS LEARNER |
+-----------------+---------+--------+---------------------+-----------------------------------------+------------+
| 1c70f9bbb41018f | started | master | http://0.0.0.0:2380 | http://0.0.0.0:2379,http://0.0.0.0:4001 |      false |
+-----------------+---------+--------+---------------------+-----------------------------------------+------------+

9. flannel

[root@rhc test]# yum install flannel
[root@rhc test]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://172.16.10.1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Starting the flanneld service fails; check the log with tail -f /var/log/messages.

[root@rhc test]# systemctl enable flanneld --now

tail -f /var/log/messages

The flannel log shows the following error: Couldn't fetch network config: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint. timed out

Check the versions

[root@rhc test]# etcd --version
etcd Version: 3.4.9
Git SHA: 54ba95891
Go Version: go1.12.17
Go OS/Arch: linux/amd64

[root@rhc test]# flanneld --version
0.7.1

The flannel documentation notes that this flannel version cannot talk to the etcd v3 API, which is what causes the error above. Fix it as follows:

1) Stop flannel

Stop it however you prefer, for example systemctl stop flanneld.

2) Delete the key created earlier in etcd

It was created earlier with:

[root@rhc test]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
[root@rhc test]#  etcdctl get --prefix /atomic.io
[root@rhc test]#  etcdctl del /atomic.io/network/config

3) Enable the etcd v2 API

Add the following to the etcd start command, then restart the etcd cluster:

--enable-v2
[root@rhc test]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
#Type=simple
Type=notify
#WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd --enable-v2

[Install]
WantedBy=multi-user.target
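After adding the flag, reload systemd and restart etcd so the v2 API becomes available; a quick sanity check with the v2 client (a sketch, endpoint taken from the config above):

systemctl daemon-reload
systemctl restart etcd
ETCDCTL_API=2 etcdctl --endpoints=http://172.16.10.1:2379 ls /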

4) Create the flannel network configuration with the etcd v2 API

[root@rhc test]#  ETCDCTL_API=2 etcdctl set /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

5) Start the flannel service

6) Check that the data in etcd is correct

[root@rhc test]# ETCDCTL_API=2 etcdctl ls /atomic.io/network
/atomic.io/network/config
/atomic.io/network/subnets
[root@rhc test]# ETCDCTL_API=2 etcdctl ls /atomic.io/network/config
/atomic.io/network/config
[root@rhc test]# ETCDCTL_API=2 etcdctl ls /atomic.io/network/subnets
/atomic.io/network/subnets/10.0.59.0-24

Check that the subnet shown above is in the same range as the network configured in subnet.env on the node; if it is, check the routing table to see whether the route was added automatically.
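A quick way to do that check (a sketch; file paths follow the defaults used above):

cat /run/flannel/subnet.env                # FLANNEL_SUBNET should match the subnet key listed in etcd
ip route | grep -E 'flannel0|docker0'      # the /16 route via flannel0 and the /24 route via docker0 should both exist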

Extra: uninstalling the flannel network plugin

1) Stop the flannel network plugin

2) Delete the flannel data from etcd

ETCDCTL_API=2 etcdctl rm /atomic.io/network/config
ETCDCTL_API=2 etcdctl rm /atomic.io/network/subnets/10.0.59.0-24
ETCDCTL_API=2 etcdctl rmdir /coreos.com/network/subnets

3) Delete the route

route del -net 10.0.0.0/16 gw 10.0.0.1

4) Remove the flannel package

yum erase flannel -y

Problems

1. flannel and docker were started, but docker0 still has its original IP and is not in the same subnet as flannel0 on the same node.
Cause:

Looking at /usr/lib/systemd/system/docker.service shows that it does not source this file. (If you wrote the docker unit file yourself, you need to add these variables to it.)

So I added the following lines (adjust the placement of the parameters as appropriate):

[root@rhc test]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.0.59.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.0.59.1/24 --ip-masq=true --mtu=1472"

Modify the docker unit file:

[root@rhc test]# cat /usr/lib/systemd/system/docker.service
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS

Or:

[root@localhost test]# cat /etc/docker/daemon.json
{"insecure-registries": [ "harbor.jettech.com", "192.168.99.41"],"registry-mirrors": ["https://registry.docker-cn.com"],"bip":"172.17.0.1/16"
}

After restarting docker, the problem is solved; docker0 and flannel0 are now in the same subnet.

[root@rhc test]# ip a  | grep -E 'docker|flan'
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default
    inet 10.0.59.1/24 brd 10.0.59.255 scope global docker0
113: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    inet 10.0.59.0/16 scope global flannel0

How it works:

flannel

flannel provides networking for containers. In its model, all containers share one network; on each host a subnet is carved out of that network, and when a container is created on the host it is given an IP from that subnet.

flannel uses the currently popular "no server" approach: there is no dedicated control node. Instead, flanneld on each host fetches the relevant data from etcd, claims its own subnet, and records it in etcd.

When another host forwards traffic, it looks up in etcd the IP of the host that owns the destination subnet and sends the data to flanneld on that host, which then forwards it.

Kubernetes' model gives every pod its own IP, which matches flannel's model exactly; this makes flannel the simplest and easiest network solution for a Kubernetes cluster.
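The registrations flanneld makes can be inspected directly; a sketch using the etcd v2 API and the prefix configured in this setup (the exact subnet keys will differ on your hosts):

ETCDCTL_API=2 etcdctl ls /atomic.io/network/subnets
ETCDCTL_API=2 etcdctl get /atomic.io/network/subnets/10.0.74.0-24   # returns the owning host, e.g. {"PublicIP":"172.16.10.1"}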

How flannel and docker fit together

Starting the flannel service

The flannel service must start before docker. When it starts, flanneld mainly does the following:

  • Fetch the network configuration from etcd
  • Carve out a subnet and register it in etcd
  • Write the subnet information to /run/flannel/subnet.env
[root@localhost kubernetes]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.0.0.0/16
FLANNEL_SUBNET=10.0.74.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
  • A script then rewrites subnet.env into a docker environment file, /run/flannel/docker (see the sketch after the output below)
[root@localhost kubernetes]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.0.74.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.0.74.1/24 --ip-masq=true --mtu=1472"
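flannel ships a helper script (mk-docker-opts.sh) for this conversion; the following hypothetical snippet only illustrates the mapping it performs, including the inversion of the IPMASQ flag (if flannel does not masquerade, docker should):

# Hypothetical illustration of the subnet.env -> /run/flannel/docker mapping
. /run/flannel/subnet.env
[ "${FLANNEL_IPMASQ}" = "false" ] && DOCKER_IPMASQ=true || DOCKER_IPMASQ=false
cat > /run/flannel/docker <<EOF
DOCKER_OPT_BIP="--bip=${FLANNEL_SUBNET}"
DOCKER_OPT_IPMASQ="--ip-masq=${DOCKER_IPMASQ}"
DOCKER_OPT_MTU="--mtu=${FLANNEL_MTU}"
DOCKER_NETWORK_OPTIONS=" --bip=${FLANNEL_SUBNET} --ip-masq=${DOCKER_IPMASQ} --mtu=${FLANNEL_MTU}"
EOF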

Starting the docker service

Next, the docker daemon starts, using the variables from /run/flannel/docker as its startup parameters; the resulting process is shown further below.

Container startup

Container creation from then on is handled by the docker daemon. Because --bip is configured (this is simply the CIDR way of writing an IP range, as opposed to the traditional class A/B/C/D notation), newly created containers get IPs from that range. In other words, containers are still created in the normal bridge mode.
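A quick way to confirm that a newly created container really gets an address from the --bip range (a sketch, reusing the nginx image from this article):

docker run -d --name biptest harbor.jettech.com/jettechtools/nginx:1.21.4
docker inspect -f '{{.NetworkSettings.IPAddress}}' biptest    # expect an address inside the flannel-assigned /24
docker rm -f biptest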

Newer docker versions not picking up the IP range assigned by flannel

After starting, check whether docker is actually managed by flannel:

If the dockerd command line contains a --bip=… option, flannel is managing docker.

[root@jettodevops-prod-31 kubernetes]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b9:87:b3 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.31/24 brd 172.16.10.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::5c97:2a:3e80:666c/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::50dc:de4b:6459:584c/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ff83:804c:be7c:d2b4/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:75:39:22:7a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:75ff:fe39:227a/64 scope link
       valid_lft forever preferred_lft forever
6272: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.0.92.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::c583:8201:cfb2:403f/64 scope link flags 800
       valid_lft forever preferred_lft forever

Here the docker0 and flannel0 IPs are not in the same range, which means docker is not getting its IP range from flannel.

[root@jettodevops-prod-31 kubernetes]# ps -aux | grep docker
root     17673  0.8  0.8 796268 66844 ?        Ssl  09:42   0:19 /usr/bin/dockerd
root     17681  0.9  0.3 427584 26488 ?        Ssl  09:42   0:20 docker-containerd --config /var/run/docker/containerd/containerd.toml
root     22613  0.0  0.0 112728   972 pts/0    S+   10:19   0:00 grep --color=auto docker

As you can see, the dockerd process has no flannel network options at all.

Edit the service file

vim /usr/lib/systemd/system/docker.service

Add the DOCKER_NETWORK_OPTIONS configuration to docker's startup parameters:

EnvironmentFile=-/run/flannel/docker

ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS

Reload and restart docker

[root@jettodevops-prod-31 kubernetes]# vim /usr/lib/systemd/system/docker.service
[root@jettodevops-prod-31 kubernetes]#
[root@jettodevops-prod-31 kubernetes]# systemctl daemon-reload; systemctl restart docker
[root@jettodevops-prod-31 kubernetes]# ps -aux | grep docker
root     23392  3.1  0.6 781472 51804 ?        Ssl  10:22   0:00 /usr/bin/dockerd --bip=10.0.92.1/24 --ip-masq=true --mtu=1472
root     23400  1.7  0.3 344536 25260 ?        Ssl  10:22   0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
root     23543  0.0  0.0 112728   976 pts/0    S+   10:22   0:00 grep --color=auto docker

docker0 and flannel0 are now in the same subnet:

[root@jettodevops-prod-31 kubernetes]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b9:87:b3 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.31/24 brd 172.16.10.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::5c97:2a:3e80:666c/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::50dc:de4b:6459:584c/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ff83:804c:be7c:d2b4/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:75:39:22:7a brd ff:ff:ff:ff:ff:ff
    inet 10.0.92.1/24 brd 10.0.92.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:75ff:fe39:227a/64 scope link
       valid_lft forever preferred_lft forever
6272: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.0.92.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::c583:8201:cfb2:403f/64 scope link flags 800
       valid_lft forever preferred_lft forever

Everything is OK now.

How flannel and docker interoperate

Now the question is: how do containers reach each other? It starts with flanneld, which creates a flannel0 device on the host.

[root@localhost kubernetes]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b9:3c:30 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.1/24 brd 172.16.10.255 scope global ens192
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default
    link/ether 02:42:f7:34:d6:f4 brd ff:ff:ff:ff:ff:ff
    inet 10.0.74.1/24 brd 10.0.74.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f7ff:fe34:d6f4/64 scope link
       valid_lft forever preferred_lft forever
5: br-f5b72b528fe0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:08:f7:8b:5c brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-f5b72b528fe0
       valid_lft forever preferred_lft forever
44: veth47bf1f1@veth302d64a: <BROADCAST,MULTICAST> mtu 1472 qdisc noop state DOWN group default
    link/ether f6:22:2f:b5:e0:2b brd ff:ff:ff:ff:ff:ff
45: veth302d64a@veth47bf1f1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1472 qdisc noqueue master docker0 state LOWERLAYERDOWN group default
    link/ether 06:e6:e7:f8:ba:5e brd ff:ff:ff:ff:ff:ff
106: veth5971efe@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue master docker0 state UP group default
    link/ether 92:8d:75:cd:02:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::908d:75ff:fecd:20f/64 scope link
       valid_lft forever preferred_lft forever
107: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.0.74.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::8864:bdc7:cb76:3689/64 scope link flags 800
       valid_lft forever preferred_lft forever

Next, look at the routing table on the host.

[root@localhost kubernetes]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.10.254   0.0.0.0         UG    0      0        0 ens192
10.0.0.0        0.0.0.0         255.255.0.0     U     0      0        0 flannel0
10.0.74.0       0.0.0.0         255.255.255.0   U     0      0        0 docker0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens192
172.16.10.0     0.0.0.0         255.255.255.0   U     0      0        0 ens192
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-f5b72b528fe0

Now suppose there are three containers, A, B and C:

Container   IP
A           10.0.74.4
B           10.0.74.6
C           10.0.92.3

When container A sends traffic to container B in the same subnet, the two containers are on the same host (they share a subnet) and both are bridged onto docker0. Here docker0 acts as a layer-2 switch; veth302d64a and veth5971efe are like two network cables, one end plugged into the switch and the other end inside the containers. Through the docker0 bridge, containers A and B can communicate directly.

[root@localhost kubernetes]# brctl show
bridge name bridge id       STP enabled interfaces
br-f5b72b528fe0     8000.024208f78b5c   no
docker0     8000.0242f734d6f4   no      veth302d64a
                                        veth5971efe

So how do containers A and C, which sit on different hosts, communicate? This is where the flannel0 device comes in.
When container A sends traffic to container C, the routing table shows the packet must go out through the flannel0 interface, so it is sent to flannel0.
The flanneld process picks up the data arriving on flannel0 and looks up in etcd which host owns the 10.0.92.0-24 subnet: 172.16.10.31.

[root@localhost kubernetes]#  ETCDCTL_API=2 etcdctl ls /atomic.io/network/subnets/
/atomic.io/network/subnets/10.0.74.0-24
/atomic.io/network/subnets/10.0.92.0-24
[root@localhost kubernetes]#  ETCDCTL_API=2 etcdctl get /atomic.io/network/subnets/10.0.92.0-24
{"PublicIP":"172.16.10.31"}

It then encapsulates the data and sends it to the corresponding port on 172.16.10.31, where flanneld receives it, unpacks it, and forwards it to the target container.
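If you want to watch this happening, the cross-host traffic shows up as flanneld-to-flanneld UDP packets; a sketch, assuming flannel's default UDP-backend port 8285 and the ens192 interface used on these hosts:

tcpdump -ni ens192 udp port 8285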

Problem 2: kubernetes cluster IP not within the service CIDR

Problem: the service-cluster-ip-range is clearly configured as 10.254.0.0/16, yet the kubernetes Service cluster IP is 169.169.0.1.

Analysis:

Check the kube-apiserver logs:
tail -f /data/logs/kubernetes/kube-apiserver/apiserver.stdout.log

the cluster IP 192.168.0.1 for service kubernetes/default is not within the service CIDR x.x.x.x/24; please recreate

[root@localhost kubernetes]# kubectl describe  service kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                169.169.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         172.16.10.1:6443
Session Affinity:  None
Events:
  Type     Reason                 Age                From                           Message
  ----     ------                 ----               ----                           -------
  Warning  ClusterIPOutOfRange    58m (x5 over 67m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.254.0.0/16; please recreate service
  Warning  ClusterIPOutOfRange    46m (x6 over 58m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 192.168.0.0/16; please recreate service
  Warning  ClusterIPNotAllocated  44m                ipallocator-repair-controller  Cluster IP 169.169.0.1 is not allocated; repairing
  Warning  ClusterIPOutOfRange    22m (x4 over 28m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.43.0.0/16; please recreate service
  Warning  ClusterIPOutOfRange    21m (x2 over 21m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.43.0.0/16; please recreate service
  Warning  ClusterIPOutOfRange    20m (x2 over 20m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.43.0.0/16; please recreate service
  Warning  ClusterIPOutOfRange    19m (x2 over 19m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.43.0.0/16; please recreate service
  Warning  ClusterIPOutOfRange    50s (x8 over 18m)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.43.0.0/16; please recreate service
  Warning  ClusterIPOutOfRange    43s (x2 over 43s)  ipallocator-repair-controller  Cluster IP 169.169.0.1 is not within the service CIDR 10.43.0.0/16; please recreate service

Fix:
Run kubectl delete service kubernetes; the system automatically recreates the service with a new IP, which resolves the problem.
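In command form (the kubernetes service is recreated automatically a moment later):

kubectl delete service kubernetes
kubectl get service kubernetes    # the new CLUSTER-IP should now fall inside 10.254.0.0/16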

You can also leave it as is.

Delete the stale ipvsadm rule:

ipvsadm -D -t  169.169.0.1:443

Cause: thinking back, the --service-cluster-ip-range value was changed at some point. At first I suspected the order of the IPs in the hosts list of apiserver-csr.json, but that was not the issue; however, hosts must contain the first IP of the service-cluster-ip-range network. Deleting and recreating the service solved it.
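For reference, a hypothetical hosts list in apiserver-csr.json that satisfies this requirement (10.254.0.1 is the first allocatable IP of 10.254.0.0/16; the other entries are examples only):

"hosts": [
  "127.0.0.1",
  "172.16.10.1",
  "10.254.0.1",
  "kubernetes",
  "kubernetes.default",
  "kubernetes.default.svc"
]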

10. Testing with nginx and busybox

[root@rhc test]# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
  labels: {name: nginx}
  name: nginx
spec:
  ports:
  - {name: t9080, nodePort: 9080, port: 80, protocol: TCP, targetPort: 80}
  selector: {name: nginx}
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels: {name: nginx}
spec:
  replicas: 1
  selector:
    matchLabels: {name: nginx}
  template:
    metadata:
      name: nginx
      labels: {name: nginx}
    spec:
      containers:
      - name: nginx
        image: harbor.jettech.com/jettechtools/nginx:1.21.4

[root@rhc test]# kubectl run nginx --image=harbor.jettech.com/jettechtools/nginx:1.21.4 --port=80 --replicas=1
[root@rhc test]# kubectl run kubernetes-bootcamp --image=jocatalin/kubernetes-bootcamp:v1 --port=8080
[root@rhc kubernetes]# kubectl get deploy,pod
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           3m30s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-54fc66fbfc-8r6mc   1/1     Running   0          3m30s
pod/nginx-54fc66fbfc-g2wsh   1/1     Running   0          3m30s
pod/nginx-54fc66fbfc-z84jk   1/1     Running   0          3m30s

Accessing the application

Pods run on Kubernetes' internal network; they can be reached by other Pods in the same cluster, but not from outside.

The kubectl command can create a proxy that forwards traffic into the cluster-wide private network. The proxy can be terminated with Ctrl-C and produces no output while running.

Open a second terminal window to run the proxy.

[root@rhc kubernetes]# kubectl proxy
Starting to serve on 127.0.0.1:8001

The API server automatically creates an endpoint for every pod, based on the pod name, which is also reachable through the proxy.

First, get the pod names and store them in the POD_NAME environment variable:

[root@rhc kubernetes]# export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
[root@rhc kubernetes]# echo POD_NAME
POD_NAME
[root@rhc kubernetes]# echo $POD_NAME
nginx-54fc66fbfc-8r6mc nginx-54fc66fbfc-g2wsh nginx-54fc66fbfc-z84jk

Access the application by pod name:

[root@rhc kubernetes]# curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

You can also do it this way:

[root@rhc kubernetes]# kubectl expose deployment nginx --port=80 --type=LoadBalancer
[root@rhc kubernetes]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.254.0.0     <none>        443/TCP        9d
nginx        LoadBalancer   10.254.193.41   <pending>     80:25954/TCP   2m12s
[root@rhc kubernetes]# kubectl describe service nginx
Name:                     nginx
Namespace:                default
Labels:                   name=nginx
Annotations:              <none>
Selector:                 name=nginx
Type:                     LoadBalancer
IP:                       10.254.193.41
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  25954/TCP
Endpoints:                10.0.59.7:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Then access it from a browser.
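From the command line the NodePort works just as well (25954 is the port allocated in the output above):

curl -I http://172.16.10.1:25954/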

11. Deploying CoreDNS for in-cluster service discovery

1) By default the DNS service uses a ServiceAccount, ClusterRole and ClusterRoleBinding for authentication and authorization. If the internal network does not use authentication, the ServiceAccount, ClusterRole and ClusterRoleBinding sections can be removed, along with serviceAccountName: coredns in the Deployment.
2)

kubectl create -f coredns.yaml

If kube-apiserver token authentication was not enabled when Kubernetes was installed, this will fail with "/var/run/secrets/kubernetes.io/serviceaccount/token does not exist", and a serviceaccount needs to be created.

3) Create the ServiceAccount and secret

Run kubectl get serviceaccount; if the result looks like this:

NAME      SECRETS
default   0

then there is no ServiceAccount secret, and the following needs to be added to the apiserver startup parameters:

--admission-control=ServiceAccount

[root@rhc kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
## The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"# The port on the local server to listen on.
#KUBE_API_PORT="port=8080"
KUBE_API_PORT="--insecure-port=8080"# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379,http://127.0.0.1:4001"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.10.1:2379"# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota,ServiceAccount"#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"# Add your own!
KUBE_API_ARGS="--service-node-port-range=1-65535"

Restart the kube-apiserver service; you will find that /var/run/kubernetes/apiserver.crt and apiserver.key have been created.

The current flanneld version (v0.10.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API; the Pod network ${CLUSTER_CIDR} written there must be a /16 range and must match the kube-controller-manager --cluster-cidr value.

Next, modify the kube-controller-manager service and add:

--service-account-private-key-file=/var/run/kubernetes/apiserver.key

[root@localhost kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--cluster-cidr=10.0.0.0/16 --service-cluster-ip-range=10.43.0.0/16 --service-account-private-key-file=/var/run/kubernetes/apiserver.key"
#KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key"

Then restart the kube-controller-manager service.

Run kubectl get serviceaccount again; the result should look like this:

[root@rhc kubernetes]# kubectl get sa,secrets --all-namespaces
NAMESPACE         NAME                     SECRETS   AGE
default           serviceaccount/default   1         9d
kube-node-lease   serviceaccount/default   1         56m
kube-public       serviceaccount/default   1         9d
kube-system       serviceaccount/default   1         9d

NAMESPACE         NAME                         TYPE                                  DATA   AGE
default           secret/default-token-gpbrn   kubernetes.io/service-account-token   2      28m
kube-node-lease   secret/default-token-2zbj2   kubernetes.io/service-account-token   2      28m
kube-public       secret/default-token-br6w8   kubernetes.io/service-account-token   2      28m
kube-system       secret/default-token-x6j52   kubernetes.io/service-account-token   2      28m

The complete coredns.yaml:

[root@rhc test]# cat coredns.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cclinux.com.cn 10.254.0.0/16 {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: coredns
        image: harbor.jettech.com/jettechtools/kubernetes/coredns:1.6.7
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.254.254.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Check:

[root@rhc kubernetes]# kubectl get deploy,pod,svc --all-namespaces
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   1/1     1            1           6m8s

NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-5b4c58fb9b-24l8h   1/1     Running   0          6m8s

NAMESPACE     NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.254.0.0      <none>        443/TCP         9d
kube-system   service/kube-dns     ClusterIP   10.254.254.254   <none>        53/UDP,53/TCP   6m8s

Test
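A minimal DNS check, assuming a busybox image is reachable from the cluster (the image tag is an assumption) and querying the zone configured in the Corefile above:

kubectl run dns-test --image=busybox:1.28 --restart=Never --command -- sleep 3600
kubectl exec dns-test -- nslookup kubernetes.default.svc.cclinux.com.cn 10.254.254.254
kubectl delete pod dns-test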

Secure (TLS) setup

[root@localhost ssl]# cat readme
ca:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=172.16.10.1" -days 5000 -out ca.crt

api-server:
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=172.16.10.1" -config openssl.cnf  -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -extensions v3_req -extfile openssl.cnf  -out server.crt -days 5000

controller-manager, scheduler:
openssl genrsa -out cs_client.key 2048
openssl req -new -key cs_client.key -subj "/CN=172.16.10.1"  -out cs_client.csr
openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial    -out cs_client.crt -days 5000

kubelet:
openssl genrsa -out kubelet_client.key 2048
openssl req -new -key kubelet_client.key -subj "/CN=172.16.10.1"  -out kubelet_client.csr
openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out kubelet_client.crt -days 5000
[root@localhost kubernetes]# cat apiserver
KUBE_API_ARGS="--etcd-servers=http://172.16.10.1:2379 --insecure-bind-address=0.0.0.0 --advertise-address=172.16.10.1 --insecure-port=0 --secure-port=6443  --service-cluster-ip-range=10.254.0.0/16 --service-node-port-range=1024-65535 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=0 --allow-privileged=true  --client-ca-file=/var/run/kubernetes/ca.crt --tls-private-key-file=/var/run/kubernetes/server.key --tls-cert-file=/var/run/kubernetes/server.crt"
[root@localhost kubernetes]# cat controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0 --service-account-private-key-file=/var/run/kubernetes/server.key  --root-ca-file=/var/run/kubernetes/ca.crt"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig   --hostname-override=172.16.10.1 --logtostderr=false --log-dir=/var/log/kubernetes  --v=0 --fail-swap-on=false --pod-infra-container-image=harbor.jettech.com/rancher/pause:3.1"
[root@localhost kubernetes]# cat proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
[root@localhost kubernetes]# cat scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"
#KUBE_SCHEDULER_ARGS="--logtostderr=false --log-dir=/var/log/kubernetes --v=0 --master=172.16.10.1:8080 --bind-address=0.0.0.0  --secure-port=10259"
#KUBE_SCHEDULER_MANAGER_ARGS="--logtostderr=false --log-dir=/var/log/kubernetes --v=0 --master=172.16.10.1:8080 --cluster-name=kubernetes"
[root@localhost kubernetes]# cat kubeconfig
apiVersion: v1
kind: Config
users:
- name: client
  user:
    client-certificate: /var/run/kubernetes/cs_client.crt
    client-key: /var/run/kubernetes/cs_client.key
clusters:
- name: default
  cluster:
    certificate-authority: /var/run/kubernetes/ca.crt
    server: https://172.16.10.1:6443
contexts:
- context:
    cluster: default
    user: client
  name: default
current-context: default
