Kubernetes v1.10.4 Installation Notes

2024-06-06 07:02:21

#Reference: https://github.com/opsnull/follow-me-install-kubernetes-cluster

#Conventions:

#1. Every command is explicitly labeled with the host(s) it should be run on;

#2. This installation does not set up a highly available apiserver;

#3. The etcd cluster is configured without TLS encryption;

#4. Default image names are used; images that are blocked must be imported in advance — see the image section at the end of this document;

#5. Master host: 172.16.3.150; node hosts: 172.16.3.151, 172.16.3.152; host for testing node addition/removal: 172.16.3.153

#######################-----Initialize the System-----#######################

#Run the commands in this section on all three servers

#Hosts to initialize: 172.16.3.150, 172.16.3.151, 172.16.3.152

#Set the hostnames

hostnamectl set-hostname dev-test-3-150 #run on 172.16.3.150

hostnamectl set-hostname dev-test-3-151 #run on 172.16.3.151

hostnamectl set-hostname dev-test-3-152 #run on 172.16.3.152

#yum update and install required packages

yum install epel-release -y

yum update -y

yum install lrzsz vim wget net-tools ntp python-pip conntrack ipvsadm ipset jq sysstat curl iptables libseccomp conntrack-tools -y

yum install -y bash-completion

pip install --upgrade pip

#Tune kernel parameters

echo "net.ipv4.tcp_fin_timeout=30">>/etc/sysctl.conf

echo "net.ipv4.tcp_tw_recycle=1">>/etc/sysctl.conf

echo "net.ipv4.tcp_tw_reuse=1">>/etc/sysctl.conf

echo "net.ipv4.icmp_echo_ignore_broadcasts=1">>/etc/sysctl.conf

echo "net.ipv4.conf.all.rp_filter=1">>/etc/sysctl.conf

echo "net.ipv4.tcp_keepalive_time=300">>/etc/sysctl.conf

echo "net.ipv4.tcp_synack_retries=2">>/etc/sysctl.conf

echo "net.ipv4.tcp_syn_retries=2">>/etc/sysctl.conf

echo "net.ipv4.ip_forward=1">>/etc/sysctl.conf

sysctl -p
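#(Optional sanity check, a minimal sketch: confirm one representative key actually took effect)

sysctl net.ipv4.ip_forward
#expected: net.ipv4.ip_forward = 1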

echo "* soft nofile 65536">>/etc/security/limits.conf

echo "* hard nofile 65536">>/etc/security/limits.conf

echo "ulimit -n 65536">>/etc/profile

source /etc/profile

#Configure NTP

service ntpd start

chkconfig ntpd on

#Configure iptables

systemctl stop firewalld

systemctl disable firewalld

yum install -y iptables-services iptables-devel.x86_64 iptables.x86_64

service iptables start

chkconfig iptables on

iptables -F &&  iptables -X &&  iptables -F -t nat &&  iptables -X -t nat

iptables -P FORWARD ACCEPT

#Add hosts entries

vim /etc/hosts

#Append the following

172.16.3.150 dev-test-3-150

172.16.3.151 dev-test-3-151

172.16.3.152 dev-test-3-152

#Disable the swap partition

swapoff -a

#To keep swap from being mounted again at boot, comment out the corresponding entry in /etc/fstab

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

#Passwordless SSH login to the other nodes

#Generate a key pair

ssh-keygen -t rsa

#Copy the key to each node

ssh-copy-id root@dev-test-3-150

ssh-copy-id root@dev-test-3-151

ssh-copy-id root@dev-test-3-152

#Create the installation directories

mkdir -p /opt/k8s/bin

mkdir -p /etc/kubernetes/cert

#mkdir -p /etc/etcd/cert

mkdir -p /var/lib/etcd

#Create the environment variable file; adjust NODE_IP and NODE_NAME for the actual host

vim /opt/k8s/env.sh

# Encryption key used by the EncryptionConfig

export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Token used for TLS bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '

#export BOOTSTRAP_TOKEN="261e41eb026ccacc5d71c7698392c00e"

# Use unused network ranges for the service and Pod networks

# Service network (Service CIDR); unroutable before deployment, reachable inside the cluster via IP:Port afterwards

export SERVICE_CIDR="10.253.0.0/16"

# Pod network (Cluster CIDR); unroutable before deployment, routable **after** deployment (guaranteed by flanneld)

export CLUSTER_CIDR="172.50.0.0/16"

# Service port range (NodePort range)

export NODE_PORT_RANGE="10000-30000"

# etcd cluster client endpoints

export ETCD_ENDPOINTS="http://172.16.3.150:2379,http://172.16.3.151:2379,http://172.16.3.152:2379"

# etcd prefix for the flanneld network config

export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# kubernetes service IP (pre-allocated; normally the first IP in SERVICE_CIDR)

export CLUSTER_KUBERNETES_SVC_IP="10.253.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)

export CLUSTER_DNS_SVC_IP="10.253.0.2"

# Cluster DNS domain

export CLUSTER_DNS_DOMAIN="cluster.local."

# Hostname of the host being deployed (change per host)

export NODE_NAME=dev-test-3-150

# IP of the host being deployed (change per host)

export NODE_IP=172.16.3.150

# IPs of all etcd cluster hosts

export NODE_IPS="172.16.3.150 172.16.3.151 172.16.3.152"

# Hostnames of all etcd cluster hosts

export NODE_NAMES="dev-test-3-150 dev-test-3-151 dev-test-3-152"

# etcd peer URLs for cluster communication

export ETCD_NODES=dev-test-3-150=http://172.16.3.150:2380,dev-test-3-151=http://172.16.3.151:2380,dev-test-3-152=http://172.16.3.152:2380

# Temporary Kubernetes API endpoint

export KUBE_APISERVER="https://172.16.3.150:6443"

#Master API Server address

export MASTER_URL="k8s-api.virtual.local"

#Master IP address

export MASTER_IP="172.16.3.150"

#Binary directory

export PATH=/opt/k8s/bin:$PATH

#Save and exit

#Disable SELinux

vim /etc/selinux/config

#Change SELINUX=enforcing to SELINUX=disabled

#Save and exit
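#(Optional, a small sketch: turn SELinux off immediately without waiting for the reboot below; getenforce should then report Permissive)

setenforce 0
getenforce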

#Reboot the servers

init 6

#######################-----Create the CA Certificate and Key-----#######################

#The CA certificate is shared by every node in the cluster; create it once, and all subsequent certificates are signed by it.

#Install CFSSL (on all three nodes)

mkdir /root/ssl/ && cd /root/ssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod u+x *linux-amd64

mv cfssl_linux-amd64 /usr/bin/cfssl

mv cfssljson_linux-amd64 /usr/bin/cfssljson

mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

#Configure the CA certificate (only on 3.150)

#Create the config file

#The CA config file defines the usage profiles and parameters for the root certificate (usages, expiry, server auth, client auth, encryption, etc.); a specific profile is selected later when signing other certificates.

cd /home

source /opt/k8s/env.sh

cfssl print-defaults config > config.json

cfssl print-defaults csr > csr.json

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

#signing: the certificate can be used to sign other certificates; CA=TRUE in the generated ca.pem;

#server auth: a client can use this certificate to verify certificates presented by servers;

#client auth: a server can use this certificate to verify certificates presented by clients;

#Create the certificate signing request file

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "QRXD"
    }
  ]
}
EOF

#CN: Common Name; kube-apiserver extracts this field from the certificate as the request's User Name, and browsers use it to validate sites;

#O: Organization; kube-apiserver extracts this field as the Group the requesting user belongs to;

#kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

#Generate the CA certificate and private key (only on 3.150)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

ls ca*
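#(Optional check, a sketch using cfssl-certinfo: confirm the subject CN is "kubernetes" and not_after is roughly ten years out)

cfssl-certinfo -cert ca.pem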

scp ca*.pem ca-config.json  root@172.16.3.150:/etc/kubernetes/cert/

scp ca*.pem ca-config.json  root@172.16.3.151:/etc/kubernetes/cert/

scp ca*.pem ca-config.json  root@172.16.3.152:/etc/kubernetes/cert/

#######################-----Deploy the kubectl CLI-----#######################

#Only needs to be done once (configure on 3.150)

#Download

cd /home

wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz

tar -zxvf kubernetes-server-linux-amd64.tar.gz

scp /home/kubernetes/server/bin/kubectl root@172.16.3.150:/usr/bin/

scp /home/kubernetes/server/bin/kubectl root@172.16.3.151:/usr/bin/

scp /home/kubernetes/server/bin/kubectl root@172.16.3.152:/usr/bin/

#Create the admin certificate and private key (only on 3.150)

#kubectl talks to the apiserver's secure https port; the apiserver authenticates and authorizes the presented certificate.

#As the cluster management tool, kubectl needs full privileges, so an admin certificate with the highest privileges is created here.

cd /home

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "QRXD"
    }
  ]
}
EOF

#O is system:masters: on receiving this certificate, kube-apiserver sets the request's Group to system:masters;

#the predefined ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants access to all APIs;

#the certificate is only used by kubectl as a client certificate, so the hosts field is empty;

#Generate the certificate and private key

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

ls admin*

#Create the kubeconfig file

#kubeconfig is kubectl's config file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate;

source /opt/k8s/env.sh

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

#--certificate-authority: the root certificate used to verify the kube-apiserver certificate;

#--client-certificate, --client-key: the admin certificate and key just generated, used when connecting to kube-apiserver;

#--embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig (without it, only the file paths are written);
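#(Optional, a quick sketch to inspect the generated file; certificate data is shown as REDACTED by default)

kubectl config view --kubeconfig=kubectl.kubeconfig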

mkdir /root/.kube

scp kubectl.kubeconfig root@172.16.3.150:/root/.kube/config

scp kubectl.kubeconfig root@172.16.3.151:/root/.kube/config

scp kubectl.kubeconfig root@172.16.3.152:/root/.kube/config

#scp kubectl.kubeconfig root@172.16.3.153:/root/.kube/config

#Enable kubectl tab completion

source /usr/share/bash-completion/bash_completion

source <(kubectl completion bash)

#echo "source <(kubectl completion bash)" >> ~/.bashrc

#

#######################-----Deploy the etcd Cluster (no TLS)-----#######################

#Run on all three nodes

cd /home

wget https://github.com/coreos/etcd/releases/download/v3.1.19/etcd-v3.1.19-linux-amd64.tar.gz

tar -zxvf etcd-v3.1.19-linux-amd64.tar.gz

cp etcd-v3.1.19-linux-amd64/etcd* /usr/local/bin

cd /etc/systemd/system/

source /opt/k8s/env.sh

cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name=${NODE_NAME} \\
  --initial-advertise-peer-urls=http://${NODE_IP}:2380 \\
  --listen-peer-urls=http://${NODE_IP}:2380 \\
  --listen-client-urls=http://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=http://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#If the ports are reported unreachable, open 2379 and 2380 in the firewall

#iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT

#iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT

#service iptables save

systemctl daemon-reload

systemctl enable etcd

systemctl restart etcd

etcdctl --endpoints=${ETCD_ENDPOINTS} cluster-health

#member 224d21bc6c7cdd08 is healthy: got healthy result from http://172.16.3.151:2379

#member 254a600a75e4f39f is healthy: got healthy result from http://172.16.3.152:2379

#member b6f4f1b2c016095e is healthy: got healthy result from http://172.16.3.150:2379

#cluster is healthy
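#(Optional smoke test, a sketch using the v2 API that etcdctl 3.1 defaults to; /test/hello is an arbitrary illustrative key)

etcdctl --endpoints=${ETCD_ENDPOINTS} set /test/hello world
etcdctl --endpoints=${ETCD_ENDPOINTS} get /test/hello
etcdctl --endpoints=${ETCD_ENDPOINTS} rm /test/hello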

#######################-----Deploy the flannel Network-----#######################

#kubernetes requires that every node in the cluster (including the master) can reach every other node over the Pod network. flannel uses vxlan to build a Pod network that connects all nodes.

#On first start, flannel reads the Pod network config from etcd, allocates an unused /24 subnet for the node, and creates the flannel.1 interface (the name may vary, e.g. flannel1).

#flannel writes the allocated Pod subnet into /run/flannel/docker; docker later uses the environment variables in that file to configure the docker0 bridge.

#Run on all three servers

cd /home

wget https://github.com/coreos/flannel/releases/download/v0.9.0/flannel-v0.9.0-linux-amd64.tar.gz

tar -zxvf flannel-v0.9.0-linux-amd64.tar.gz

cp flanneld /usr/local/bin/

cp mk-docker-opts.sh /usr/local/bin/

mkdir -p /etc/flanneld/cert

#Create the flannel certificate and private key (run on 3.150)

cd /home

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "QRXD"
    }
  ]
}
EOF

#The certificate is only used by flanneld as a client certificate, so the hosts field is empty;

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

ls flanneld*pem

scp flanneld*pem root@172.16.3.150:/etc/flanneld/cert/

scp flanneld*pem root@172.16.3.151:/etc/flanneld/cert/

scp flanneld*pem root@172.16.3.152:/etc/flanneld/cert/

#Write the cluster Pod network config into etcd (run on 3.150)

source /opt/k8s/env.sh

etcdctl --endpoints=${ETCD_ENDPOINTS} set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

#Verify

etcdctl --endpoints=${ETCD_ENDPOINTS} get ${FLANNEL_ETCD_PREFIX}/config

#Create the flanneld systemd unit file (all 3 nodes)

source /opt/k8s/env.sh

cd /etc/systemd/system/

cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX}
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

#Start flanneld

systemctl daemon-reload

systemctl enable flanneld

systemctl start flanneld

#Check

etcdctl --endpoints=${ETCD_ENDPOINTS} ls ${FLANNEL_ETCD_PREFIX}/subnets
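#(Illustrative: each node should also have written docker bridge options derived from its subnet; the values below are examples and will differ per node)

cat /run/flannel/docker
#e.g. DOCKER_NETWORK_OPTIONS=" --bip=172.50.34.1/24 --ip-masq=true --mtu=1450"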

#######################-----Deploy the Master Node-----#######################

#Deploy on 3.150

cd /home

#wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz

#tar -zxvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

tar -xzvf  kubernetes-src.tar.gz

cp server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} /usr/local/bin/

#Deploy the kube-apiserver component

#Create the kubernetes certificate and private key

cd /home

source /opt/k8s/env.sh

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.16.3.150",
    "172.16.3.151",
    "172.16.3.152",
    "172.16.0.0/16",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "QRXD"
    }
  ]
}
EOF

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

ls kubernetes*pem

cp kubernetes*pem /etc/kubernetes/cert

#Create the encryption config file

# cat > token.csv <<EOF

# ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# EOF

# mv token.csv /etc/kubernetes/

source /opt/k8s/env.sh

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

cp encryption-config.yaml /etc/kubernetes/encryption-config.yaml

#Create the kube-apiserver systemd unit file

cd /etc/systemd/system

mkdir /var/log/kube

source /opt/k8s/env.sh

cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --insecure-bind-address=${MASTER_IP} \\
  --advertise-address=${NODE_IP} \\
  --bind-address=${MASTER_IP} \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-apiserver-audit.log \\
  --event-ttl=1h \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kube \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

systemctl start kube-apiserver

systemctl enable kube-apiserver

systemctl status kube-apiserver

#Checks and tests

ETCDCTL_API=3 etcdctl --endpoints=${ETCD_ENDPOINTS} get /registry/ --prefix --keys-only

kubectl cluster-info

#Kubernetes master is running at https://172.16.3.150:6443

#To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get all --all-namespaces

#NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

#default     service/kubernetes   ClusterIP   10.253.0.1   <none>        443/TCP   35m

kubectl get componentstatuses

#NAME                 STATUS      MESSAGE                                                                                        ERROR

#controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused

#scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused

#etcd-1               Healthy     {"health":"true"}

#etcd-0               Healthy     {"health":"true"}

#etcd-2               Healthy     {"health":"true"}

netstat -lnpt|grep kube

#Grant the kubernetes certificate access to the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

#Deploy kube-controller-manager

#Create and distribute the kube-controller-manager systemd unit file

source /opt/k8s/env.sh

cd /etc/systemd/system

cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_IP}:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --use-service-account-credentials=true \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kube \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl restart kube-controller-manager

#Check

systemctl status kube-controller-manager

netstat -lnpt|grep kube-controll

#curl -s --cacert /etc/kubernetes/cert/ca.pem https://127.0.0.1:10252/metrics |head

#Deploy kube-scheduler

#Exposes Prometheus-format metrics on the insecure port (http, 10251);

#No dedicated certificate or CSR is created here, since the scheduler connects to the apiserver over the insecure port

cd /home

#Create and distribute the kube-scheduler systemd unit file

source /opt/k8s/env.sh

cd /etc/systemd/system

cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_IP}:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

systemctl enable kube-scheduler

systemctl restart kube-scheduler

#Check

systemctl status kube-scheduler

netstat -lnpt|grep kube-sche

#curl -s http://127.0.0.1:10251/metrics |head

#######################-----Deploy the Worker Nodes-----#######################

#Run on all three servers

#Install Docker (via yum)

#Remove old versions

yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-selinux \
  docker-engine-selinux \
  docker-engine

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum-config-manager --enable docker-ce-stable

yum-config-manager --disable docker-ce-test

yum-config-manager --disable docker-ce-edge

yum install docker-ce -y

rm -rf /usr/lib/systemd/system/docker.service

vim /usr/lib/systemd/system/docker.service

#Enter the following content

#cat > docker.service <<"EOF"

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=info $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

#EOF

systemctl daemon-reload

systemctl start docker.service

systemctl enable docker.service

#Check

systemctl status docker.service

ip addr show flannel.1 && ip addr show docker0

#Deploy the kubelet component

#Download and distribute the kubelet binaries (run only on 3.150)

cd /home/kubernetes

scp /home/kubernetes/server/bin/{kube-proxy,kubelet,kubeadm} root@172.16.3.150:/usr/local/bin/

scp /home/kubernetes/server/bin/{kube-proxy,kubelet,kubeadm} root@172.16.3.151:/usr/local/bin/

scp /home/kubernetes/server/bin/{kube-proxy,kubelet,kubeadm} root@172.16.3.152:/usr/local/bin/

#Create the kubelet bootstrap kubeconfig files (run only on 3.150)

source /opt/k8s/env.sh

for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"

  # Create a token
  export BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${node_name} \
    --kubeconfig ~/.kube/config)

  # Set cluster parameters
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # Set client authentication parameters
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # Set context parameters
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # Set the default context
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done

kubeadm token list --kubeconfig ~/.kube/config

#Create a clusterrolebinding that binds group system:bootstrappers to clusterrole system:node-bootstrapper: (run on 3.150)

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

#Distribute the bootstrap kubeconfig files to all worker nodes (run only on 3.150)

scp kubelet-bootstrap-dev-test-3-150.kubeconfig root@172.16.3.150:/etc/kubernetes/kubelet-bootstrap.kubeconfig

scp kubelet-bootstrap-dev-test-3-151.kubeconfig root@172.16.3.151:/etc/kubernetes/kubelet-bootstrap.kubeconfig

scp kubelet-bootstrap-dev-test-3-152.kubeconfig root@172.16.3.152:/etc/kubernetes/kubelet-bootstrap.kubeconfig

#Create and distribute the kubelet config file (run on all three servers)

source /opt/k8s/env.sh

cd /etc/kubernetes

cat > kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "${NODE_IP}",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "${CLUSTER_DNS_DOMAIN}",
  "clusterDNS": ["${CLUSTER_DNS_SVC_IP}"]
}
EOF
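#(Hedged note: cgroupDriver above must match docker's cgroup driver; a quick check follows — if docker reports systemd instead of cgroupfs, adjust cgroupDriver accordingly)

docker info 2>/dev/null | grep -i 'cgroup driver'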

#Create the kubelet systemd unit file (run on all three servers)

source /opt/k8s/env.sh

mkdir /var/lib/kubelet

cd /etc/systemd/system

cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet.config.json \\
  --hostname-override=${NODE_NAME} \\
  --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \\
  --allow-privileged=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kube \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#If kubelet fails to start on the other nodes, check the firewall

iptables -F &&  iptables -X &&  iptables -F -t nat &&  iptables -X -t nat

iptables -P FORWARD ACCEPT

systemctl daemon-reload

systemctl start kubelet.service

systemctl enable kubelet.service

#Check

systemctl status kubelet.service

#Approve the kubelet CSR requests (run on 3.150)

kubectl get csr

#NAME                                                   AGE       REQUESTOR                 CONDITION

#node-csr-LY1jhbfkcVBthGoveaEhUycp8yg_7D4atQkmQSz3N3Y   13m       system:bootstrap:faum34   Pending

#node-csr-WY5RcykyS66oKc_wHydYiK6QkAFDluCIP856QB_0QRM   6m        system:bootstrap:aedkze   Pending

#node-csr-bwIGn4TJECVhWUIU5j0ckIAtzR5Qkgt8ZQzuHpg-NcA   6m        system:bootstrap:esigku   Pending

#Manually approve the CSR requests (run on 3.150)

for i in `kubectl get csr | grep -v NAME | awk '{print $1}'`; do kubectl certificate approve ${i}; done

#List the nodes

kubectl get nodes

#NAME             STATUS    ROLES     AGE       VERSION

#dev-test-3-150   Ready     <none>    27s       v1.10.4

#dev-test-3-151   Ready     <none>    27s       v1.10.4

#dev-test-3-152   Ready     <none>    25s       v1.10.4

#Set up automatic CSR approval (run on 3.150)

cd /home

source /opt/k8s/env.sh

cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

#auto-approve-csrs-for-group: automatically approves a node's first CSR; note that the first CSR requests Group system:bootstrappers;

#node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the auto-generated certificates have Group system:nodes;

#node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the auto-generated certificates have Group system:nodes;

kubectl apply -f csr-crb.yaml

#Deploy kube-proxy (run on 3.150)

#Create the kube-proxy certificate

#Create the certificate signing request:

cd /home

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "QRXD"
    }
  ]
}
EOF

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

source /opt/k8s/env.sh

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

#Distribute the config file

scp kube-proxy.kubeconfig root@172.16.3.150:/etc/kubernetes/

scp kube-proxy.kubeconfig root@172.16.3.151:/etc/kubernetes/

scp kube-proxy.kubeconfig root@172.16.3.152:/etc/kubernetes/

#Create the kube-proxy config file (run on all three)

cd /etc/kubernetes

source /opt/k8s/env.sh

cat >kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ${NODE_IP}
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ${NODE_IP}:10256
hostnameOverride: ${NODE_NAME}
kind: KubeProxyConfiguration
metricsBindAddress: ${NODE_IP}:10249
mode: "ipvs"
EOF

#Create the kube-proxy systemd unit file (run on all three)

mkdir /var/lib/kube-proxy

source /opt/k8s/env.sh

cd /etc/systemd/system

cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.config.yaml \\
  --alsologtostderr=true \\
  --hostname-override=${NODE_NAME} \\
  --logtostderr=false \\
  --log-dir=/var/log/kube \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

systemctl enable kube-proxy

systemctl restart kube-proxy

#Check

systemctl status kube-proxy

netstat -lnpt|grep kube-proxy

#Inspect the IPVS rules

source /opt/k8s/env.sh

for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
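#(Illustrative shape of the expected output; virtual server and real server addresses will differ)

#TCP  10.253.0.1:443 rr
#  -> 172.16.3.150:6443      Masq    1      0          0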

#######################-----Verify Cluster Functionality-----#######################

kubectl get nodes

#Test manifest

cd /home

mkdir yaml

cd yaml

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

kubectl apply -f nginx-ds.yml -n default

kubectl get pod
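#(Optional end-to-end check, a sketch: find the NodePort assigned to nginx-ds, then curl it on any node IP; substitute the port kubectl prints)

kubectl get svc nginx-ds
#curl -s http://172.16.3.150:<NodePort> | head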

#######################-----Deploy Cluster Add-ons-----#######################

#-------------------------------

#Deploy the DNS add-on (CoreDNS)

mkdir /home/yaml/coredns

cp /home/kubernetes/cluster/addons/dns/coredns.yaml.base /home/yaml/coredns/coredns.yaml

#Make the following changes:

#line 61

__PILLAR__DNS__DOMAIN__  ==> cluster.local.

#line 153

__PILLAR__DNS__SERVER__  ==> 10.253.0.2

#Create coredns

kubectl apply -f /home/yaml/coredns/coredns.yaml -n kube-system

#Check

kubectl get all -n kube-system
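#(Optional DNS check, a sketch: resolve the kubernetes service from a throwaway busybox pod; some busybox builds have a flaky nslookup, so a failure here is not always conclusive)

kubectl run -it --rm busybox-dns-test --image=busybox --restart=Never -- nslookup kubernetes.default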

#-------------------------------

#Deploy the DNS add-on (sky-dns)

mkdir /home/kube-system/kube-dns

cp  /home/kubernetes/cluster/addons/dns/

#-------------------------------

#Deploy the dashboard add-on

mkdir /home/yaml/dashboard

#yaml file sources:

#official v1.10.0

#wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

#modified v1.8.3

#wget https://github.com/gh-Devin/kubernetes-dashboard/blob/master/kubernetes-dashboard.yaml

#Also change the image name to match the image you downloaded in advance

#kubectl apply -f kubernetes-dashboard.yaml

cd /home/kubernetes/cluster/addons/dashboard

cp dashboard-* /home/yaml/dashboard/

cd /home/yaml/dashboard

vim dashboard-service.yaml

#Append at the end, aligned with the ports block

type: NodePort

#Create the dashboard

kubectl apply -f .

#Check the pods and services

kubectl get pod -n kube-system

kubectl get svc -n kube-system

kubectl cluster-info

#Kubernetes master is running at https://172.16.3.150:6443

#CoreDNS is running at https://172.16.3.150:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

#kubernetes-dashboard is running at https://172.16.3.150:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

#Note: only when cluster-info shows "kubernetes-dashboard is running ..." can the dashboard be reached through kube-apiserver

#Create the token and kubeconfig file used to log in to the Dashboard

cd /home

kubectl create sa dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')

echo ${DASHBOARD_LOGIN_TOKEN}

#The printed token can be used to log in to the Dashboard

#Create a kubeconfig file that uses the token

source /opt/k8s/env.sh

cd /home

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=dashboard.kubeconfig

# Set client authentication parameters, using the token created above
kubectl config set-credentials dashboard_user \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=dashboard.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard_user \
  --kubeconfig=dashboard.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=dashboard.kubeconfig

#Export dashboard.kubeconfig and upload it on the login page

#Allow browser access (3.150)

cd /home

openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem

#You will be prompted for an export password

#Export admin.pfx

#Import admin.pfx into the local certificate store (on Windows, via the mmc snap-in)

#Open the following URL in a browser

https://172.16.3.150:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

#Access via NodePort

kubectl get svc -n kube-system

#Find the corresponding NodePort

#Access via browser (use Firefox; Chrome has some issues with this setup)

https://172.16.3.150:13054

#Use the exported dashboard.kubeconfig to authenticate at login

#-------------------------------

#Deploy the heapster add-on

#Download the required images in advance

cd /home/

wget https://github.com/kubernetes/heapster/archive/v1.5.3.tar.gz

tar -xzvf v1.5.3.tar.gz

mkdir /home/yaml/heapster

cp /home/heapster-1.5.3/deploy/kube-config/influxdb/* /home/yaml/heapster/

#cp /home/heapster-1.5.3/deploy/kube-config/rbac/* /home/yaml/heapster/

cd /home/yaml/heapster

vim grafana.yaml

#Uncomment line 67, aligned with the ports block

type: NodePort

vim heapster.yaml

#Change line 27 to the following

- --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250

#influxdb needs no changes

cat > heapster-rbac.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  #name: cluster-admin
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF

#Binds serviceAccount kube-system:heapster to ClusterRole system:kubelet-api-admin, granting it permission to call the kubelet API;

kubectl apply -f .

#Can be accessed via NodePort

kubectl get svc -n kube-system|grep -E 'monitoring|heapster'

#Error message: "1 namespace based enricher.go:75] Namespace doesn't exist: default"

#means insufficient permissions; binding the cluster-admin role via RBAC works around it

#-------------------------------

#Install Prometheus

#Prometheus discovers the monitoring endpoints already exposed inside the cluster through service discovery, then actively scrapes all of the metrics.

#With this architecture we only need to deploy a single Prometheus instance into the Kubernetes cluster:

#it queries the apiserver for cluster state and pulls Pod metrics from every kubelet that exposes Prometheus metrics. To also collect host-level metrics,

#run the companion node-exporter on every server via a DaemonSet, and Prometheus picks up that data automatically.

#Detailed explanations are given in the config files

##node-exporter.yaml

cat > node-exporter.yaml <<EOF
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  #Runs in kube-system here; change the namespace to suit your setup
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      #This image can be pulled without a proxy
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 21672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
EOF

##prometheus-configmap.yaml

cat > prometheus-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  #The namespace here must match node-exporter's
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-node-exporter'
      scheme: http
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__meta_kubernetes_role]
        action: replace
        target_label: kubernetes_role
      - source_labels: [__address__]
        regex: '(.*):10250'
        #The port below must be the actual NodePort of node-exporter
        replacement: '${1}:21672'
        target_label: __address__
EOF

##prometheus.yaml

cat >prometheus.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=15d"
        ports:
        - containerPort: 9090
          protocol: TCP
          name: http
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
          subPath: prometheus/data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 200m
            memory: 1Gi
      volumes:
      - name: data
        emptyDir: {}
      - configMap:
          name: prometheus-config
        name: config-volume
EOF

##prometheus-rbac.yaml

cat >prometheus-rbac.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
  namespace: kube-system
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
EOF

##prometheus-service.yaml

cat >prometheus-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: prometheus-svc
  labels:
    k8s-app: prometheus
spec:
  selector:
    k8s-app: prometheus
  ports:
  - port: 9090
    targetPort: 9090
  type: NodePort
EOF

kubectl apply -f .
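#(Optional check, a sketch: confirm the pod is running, find the NodePort, then open http://<node-ip>:<NodePort>/targets in a browser to verify the scrape jobs are up)

kubectl get pods -n kube-system -l k8s-app=prometheus
kubectl get svc prometheus-svc -n kube-system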

#-------------------------------

#######################-----Kubernetes Images-----#######################

#Download the images first

docker pull majun9129/kubernetes-images:pause-3.0

docker pull majun9129/kubernetes-images:kube-dashboard-1.10.0

docker pull majun9129/kubernetes-images:dashboard-1.8.3

docker pull majun9129/kubernetes-images:kube-heapster-influxdb-1.3.3

docker pull majun9129/kubernetes-images:kube-heapster-grafana-4.4.3

docker pull majun9129/kubernetes-images:kube-heapster-1.5.3

#Re-tag the images with their upstream names

docker tag majun9129/kubernetes-images:pause-3.0 gcr.io/google_containers/pause-amd64:3.0

docker tag  majun9129/kubernetes-images:kube-dashboard-1.10.0  k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

docker tag majun9129/kubernetes-images:dashboard-1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

docker tag majun9129/kubernetes-images:kube-heapster-influxdb-1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3

docker tag majun9129/kubernetes-images:kube-heapster-grafana-4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3

docker tag majun9129/kubernetes-images:kube-heapster-1.5.3 gcr.io/google_containers/heapster-amd64:v1.5.3

#######################-----Kubernetes Add-on YAML Files-----#######################

#####core-dns#####

cat >coredns.yaml <<EOF
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.0.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.253.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

###dashboard###

cat >dashboard-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-settings
  namespace: kube-system
EOF

cat >dashboard-controller.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        # PLATFORM-SPECIFIC ARGS HERE
        - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
EOF

cat >dashboard-rbac.yaml <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

cat >dashboard-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
EOF

cat >dashboard-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
  type: NodePort
EOF

#####heapster#####

cat >grafana.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
EOF

cat >heapster.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.5.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250
        #- --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  #name: system:kubelet-api-admin
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF

cat >influxdb.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
EOF

Reposted from: https://blog.51cto.com/agent/2314508
