Preface

The steps are the same as for the earlier v1.13 install.
The difference lies in the kubeadm init configuration file.
kubeadm init with a configuration file is currently in beta, and the format moves to v1beta2 in Kubernetes 1.15.
Although it has not reached GA yet, compared with configuring a k8s cluster by hand kubeadm not only simplifies the steps but also reduces the chance of manual-deployment errors, so why not use it?

Environment:

OS: CentOS 7.6
Kernel: 4.18.7-1.el7.elrepo.x86_64
Kubernetes: v1.14.1
Docker-ce: 18.09

Keepalived provides a highly available IP for the apiserver
Haproxy load-balances the apiserver
3 masters and 3 etcd members keep the cluster available

192.168.1.1        master
192.168.1.2        master2
192.168.1.3        master3
192.168.1.4        Keepalived + Haproxy
192.168.1.5        Keepalived + Haproxy
192.168.1.6        etcd1
192.168.1.7        etcd2
192.168.1.8        etcd3
192.168.1.9        node1
192.168.1.10       node2
192.168.1.100      VIP, the apiserver address

1. Preparation

For convenience, all commands are run as root.
The following steps only need to be run on the Kubernetes cluster nodes.

  • Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
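The sed expression above rewrites the SELINUX= line in place via a capture group; if you want to see the rewrite before touching the real /etc/selinux/config, you can dry-run it against a sample line:

```shell
# Dry run of the SELinux sed expression on a sample line (no file touched):
# the capture group keeps "SELINUX=" and replaces everything after it.
echo 'SELINUX=enforcing' | sed -r 's#(SELINUX=).*#\1disabled#'
# prints: SELINUX=disabled
```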
  • Disable swap
swapoff -a
  • Configure the forwarding-related kernel parameters, otherwise things may fail later
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#vm.swappiness=0
EOF

sysctl --system
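A quick self-contained sanity check (a temp file stands in for /etc/sysctl.d/k8s.conf here) that both bridge-nf keys made it into the fragment before reloading with sysctl --system:

```shell
# Sketch: count the bridge-nf keys in the sysctl fragment. A temporary file
# is used as a stand-in for /etc/sysctl.d/k8s.conf in this illustration.
tmpconf=$(mktemp)
cat << EOF > "$tmpconf"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c 'bridge-nf-call' "$tmpconf"   # prints 2 when both keys are present
rm -f "$tmpconf"
```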
  • Load the ipvs modules
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed  -r 's#(.*).ko.*#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
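The loop above derives module names from .ko filenames with a sed capture group; the same expression can be checked against a few sample filenames (hypothetical names for illustration, not a claim about which modules your kernel ships):

```shell
# Check the filename-to-module-name sed expression on sample .ko filenames.
# Works for both compressed (.ko.xz) and uncompressed (.ko) module files.
for f in ip_vs.ko.xz ip_vs_rr.ko nf_conntrack_ipv4.ko.xz; do
    echo "$f" | sed -r 's#(.*).ko.*#\1#'
done
# prints: ip_vs, ip_vs_rr, nf_conntrack_ipv4 (one per line)
```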
  • Install cfssl
# install on the master nodes only!!!
wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo  https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
for cfssl in `ls /bin/cfssl*`;do chmod +x $cfssl;done;
  • Install the Kubernetes packages from the Aliyun mirror
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y  kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
  • Install docker and disable the docker0 bridge
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir /etc/docker/
cat << EOF > /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "live-restore": true,
    "default-shm-size": "128M",
    "bridge": "none",
    "max-concurrent-downloads": 10,
    "oom-score-adjust": -1000,
    "debug": false
}
EOF

# restart docker
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
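Docker refuses to start if daemon.json is malformed, so it is worth validating the JSON before the restart. Assuming python3 is available, its json.tool module works as a validator; demonstrated on an inline sample here:

```shell
# Validate JSON with python3's json.tool (assumes python3 is installed).
# Shown on an inline sample; on a real host point it at /etc/docker/daemon.json.
echo '{"bridge": "none", "live-restore": true}' \
    | python3 -m json.tool > /dev/null && echo "valid JSON"
```

Against the real file: `python3 -m json.tool /etc/docker/daemon.json > /dev/null && echo ok`.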
  • Configure the hosts file
# add these entries to /etc/hosts on every node
192.168.1.1    master
192.168.1.2    master2
192.168.1.3    master3
192.168.1.4    lb1
192.168.1.5    lb2
192.168.1.6    etcd1
192.168.1.7    etcd2
192.168.1.8    etcd3
192.168.1.9    node1
192.168.1.10   node2
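Since the hostnames map to consecutive addresses in 192.168.1.0/24, the entries can also be generated rather than typed by hand; a small sketch using the names and IPs from this article:

```shell
# Generate the /etc/hosts entries for this article's ten hosts:
# 192.168.1.1..10 map to master..node2 in order.
hosts="master master2 master3 lb1 lb2 etcd1 etcd2 etcd3 node1 node2"
i=1
for h in $hosts; do
    printf '192.168.1.%s\t%s\n' "$i" "$h"
    i=$((i + 1))
done
```

Redirect the output with `>> /etc/hosts` once it matches the table above.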

2. Configuring etcd

  • Generate the etcd certificates
mkdir -pv $HOME/ssl && cd $HOME/ssl

cat << EOF > ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry": "87600h"
            }
        }
    }
}
EOF

cat << EOF > etcd-ca-csr.json
{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

cat << EOF > etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.1.6",
        "192.168.1.7",
        "192.168.1.8"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

# generate the certificates and copy them to the other etcd nodes
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

mkdir -pv /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl
mkdir -pv /etc/kubernetes/pki/etcd
cp etcd*.pem /etc/kubernetes/pki/etcd

scp -r /etc/etcd 192.168.1.6:/etc/
scp -r /etc/etcd 192.168.1.7:/etc/
scp -r /etc/etcd 192.168.1.8:/etc/
  • Start etcd on the etcd1 host
yum install -y etcd

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd:etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Start etcd on the etcd2 host
yum install -y etcd

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.7:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.7:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.7:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.7:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd:etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Start etcd on the etcd3 host
yum install -y etcd

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.8:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.8:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.8:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.8:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd:etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Check the etcd cluster
etcdctl --endpoints "https://192.168.1.6:2379,https://192.168.1.7:2379,https://192.168.1.8:2379" \
  --ca-file=/etc/etcd/ssl/etcd-ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health

# sample output:
member 3639deb1869a1bda is healthy: got healthy result from https://127.0.0.1:2379
member b75e13f1faa57bd8 is healthy: got healthy result from https://127.0.0.1:2379
member e31fec5bb4c882f2 is healthy: got healthy result from https://127.0.0.1:2379

3. Configuring Keepalived

  • Configure the lb1 host
yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost                                  # alert recipient
    }
    notification_email_from keepalived@localhost        # sender address
    smtp_server 127.0.0.1                               # mail server address
    smtp_connect_timeout 30
    router_id node1                     # hostname; just has to differ per node
    vrrp_mcast_group4 224.0.100.100     # multicast group address
}

vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other node
    interface eth0          # the NIC the VIP floats to
    virtual_router_id 6     # must be identical on all nodes
    priority 100            # the backup node's value must be lower than the master's
    advert_int 1            # advertise every 1 second
    authentication {
        auth_type PASS      # pre-shared key authentication
        auth_pass 571f97b2  # the key
    }
    virtual_ipaddress {
        192.168.1.100/24    # the VIP address
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
  • Configure the lb2 host
yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost                                  # alert recipient
    }
    notification_email_from keepalived@localhost        # sender address
    smtp_server 127.0.0.1                               # mail server address
    smtp_connect_timeout 30
    router_id node2                     # hostname; just has to differ per node
    vrrp_mcast_group4 224.0.100.100     # multicast group address
}

vrrp_instance VI_1 {
    state BACKUP            # MASTER on the other node
    interface eth0          # the NIC the VIP floats to
    virtual_router_id 6     # must be identical on all nodes
    priority 80             # the backup node's value must be lower than the master's
    advert_int 1            # advertise every 1 second
    authentication {
        auth_type PASS      # pre-shared key authentication
        auth_pass 571f97b2  # the key
    }
    virtual_ipaddress {
        192.168.1.100/24    # the IP address that floats over
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
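How the two configs interact: both nodes advertise on virtual_router_id 6, and the one with the higher priority (lb1 at 100 vs lb2 at 80) holds the VIP; lb2 promotes itself only when lb1's advertisements stop. A toy sketch of that election rule (elect_master is a hypothetical helper for illustration, not keepalived code):

```shell
# Toy model of the VRRP election: among the candidates that are still
# advertising, the highest priority wins and owns the VIP.
elect_master() {
    # args: name:priority pairs; prints the name with the highest priority
    best_name=""
    best_prio=-1
    for pair in "$@"; do
        name=${pair%%:*}
        prio=${pair##*:}
        if [ "$prio" -gt "$best_prio" ]; then
            best_name=$name
            best_prio=$prio
        fi
    done
    echo "$best_name"
}

elect_master lb1:100 lb2:80   # both alive: prints lb1 (holds the VIP)
elect_master lb2:80           # lb1 gone: prints lb2 (takes over)
```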

4. Configuring Haproxy

  • On the lb1 host
yum install -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  192.168.1.1:6443 check maxconn 2000
    server master2 192.168.1.2:6443 check maxconn 2000
    server master3 192.168.1.3:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
  • On the lb2 host
yum install -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  192.168.1.1:6443 check maxconn 2000
    server master2 192.168.1.2:6443 check maxconn 2000
    server master3 192.168.1.3:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
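The `balance roundrobin` line hands successive connections to the three masters in turn. A toy model of that policy (pick_backend is a hypothetical helper for illustration, not haproxy code):

```shell
# Toy model of haproxy's roundrobin policy: request N goes to backend N mod 3,
# so the fourth request lands back on the first master.
backends="192.168.1.1:6443 192.168.1.2:6443 192.168.1.3:6443"

pick_backend() {
    req=$1              # 0-based request number
    set -- $backends    # positional params become the backend list
    n=$(( req % $# + 1 ))
    eval echo \"\${$n}\"
}

for req in 0 1 2 3; do
    pick_backend "$req"
done
```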

5. Initializing the masters

  • Initialize master1
# kubeadm init config file reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
# listen address and port of the local apiserver
localAPIEndpoint:
  advertiseAddress: 192.168.1.1
  bindPort: 6443
# this node's registration info, i.e. what kubectl get node shows
nodeRegistration:
  # if the name field is omitted the hostname is used; the name should be unique within the cluster
  #  name: master1
  criSocket: /var/run/dockershim.sock
  # taints: NoSchedule means no Pods are scheduled onto this node
  # details: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# cluster name
clusterName: kubernetes
# the address controllers use to reach the apiserver;
# for a multi-master cluster this must be the front-end LB address
controlPlaneEndpoint: "192.168.1.100:6443"
apiServer:
  # list all master IPs, the LB IPs, and any other address, domain name or
  # hostname you may use to reach the apiserver
  certSANs:
  - "master"
  - "master2"
  - "master3"
  - "192.168.1.1"
  - "192.168.1.2"
  - "192.168.1.3"
  - "192.168.1.4"
  - "192.168.1.5"
  - "192.168.1.100"
  - "127.0.0.1"
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
dns:
  type: CoreDNS
etcd:
# local lets k8s start etcd itself; a multi-master cluster must use external
#  local:
#    imageRepository: "k8s.gcr.io"
#    dataDir: "/var/lib/etcd"
# external etcd, which every apiserver connects to
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: "/etc/kubernetes/pki/etcd/etcd-ca.pem"
    certFile: "/etc/kubernetes/pki/etcd/etcd.pem"
    keyFile: "/etc/kubernetes/pki/etcd/etcd-key.pem"
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.1
networking:
  # service subnet
  serviceSubnet: "10.96.0.0/12"
  # pod subnet
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
# whether an enabled swap makes kubelet fail to start
failSwapOn: false
EOF

systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config /root/kubeadm-init.yaml

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh

# copy the certificates to the other masters
scp -r /etc/kubernetes/pki 192.168.1.2:/etc/kubernetes/
scp -r /etc/kubernetes/pki 192.168.1.3:/etc/kubernetes/
  • Initialize master2
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.2
  bindPort: 6443
nodeRegistration:
  #  name: master1
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.100:6443"
apiServer:
  certSANs:
  - "master"
  - "master2"
  - "master3"
  - "192.168.1.1"
  - "192.168.1.2"
  - "192.168.1.3"
  - "192.168.1.4"
  - "192.168.1.5"
  - "192.168.1.100"
  - "127.0.0.1"
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: "/etc/kubernetes/pki/etcd/etcd-ca.pem"
    certFile: "/etc/kubernetes/pki/etcd/etcd.pem"
    keyFile: "/etc/kubernetes/pki/etcd/etcd-key.pem"
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.1
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
failSwapOn: false
EOF

systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config /root/kubeadm-init.yaml

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh
  • Initialize master3
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.3
  bindPort: 6443
nodeRegistration:
  #  name: master1
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.100:6443"
apiServer:
  certSANs:
  - "master"
  - "master2"
  - "master3"
  - "192.168.1.1"
  - "192.168.1.2"
  - "192.168.1.3"
  - "192.168.1.4"
  - "192.168.1.5"
  - "192.168.1.100"
  - "127.0.0.1"
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: "/etc/kubernetes/pki/etcd/etcd-ca.pem"
    certFile: "/etc/kubernetes/pki/etcd/etcd.pem"
    keyFile: "/etc/kubernetes/pki/etcd/etcd-key.pem"
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.1
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
failSwapOn: false
EOF

systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config /root/kubeadm-init.yaml

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# configure kubectl command completion
cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh

6. Joining all the worker nodes to the cluster

  • Get the join token
# run on a master host to print the join command
kubeadm token create --print-join-command

# sample session:
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.100:6443 --token zpru0r.jkvrdyy2caexr8kk --discovery-token-ca-cert-hash sha256:a45c091dbd8a801152aacd877bcaaaaf152697bfa4536272c905a83612b3bf22
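The printed join command embeds the bootstrap token and the CA cert hash; when scripting node joins it can be handy to pull them back out. A small sketch, run against the sample output above:

```shell
# Extract the --token and --discovery-token-ca-cert-hash values from a
# kubeadm join command line (the string is the sample output shown above).
join='kubeadm join 192.168.1.100:6443 --token zpru0r.jkvrdyy2caexr8kk --discovery-token-ca-cert-hash sha256:a45c091dbd8a801152aacd877bcaaaaf152697bfa4536272c905a83612b3bf22'
token=$(echo "$join" | awk '{for (i = 1; i < NF; i++) if ($i == "--token") print $(i + 1)}')
hash=$(echo "$join" | awk '{for (i = 1; i < NF; i++) if ($i == "--discovery-token-ca-cert-hash") print $(i + 1)}')
echo "$token"   # zpru0r.jkvrdyy2caexr8kk
echo "$hash"    # sha256:a45c091d...
```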

For the remaining steps, refer to: kubeadm install of kubernetes v1.11.3 HA multi-master with ipvs enabled

Reposted from: https://blog.51cto.com/bigboss/2395872
