Note: the Docker images and YAML files used in this document can be downloaded from:

Link: https://pan.baidu.com/s/1QuVelCG43_VbHiOs04R3-Q  Password: 70q2


This post is just an installation record.

1. Environment

1.1 Servers

Hostname  IP address      OS version                             Role
k8s01     172.16.50.131   CentOS Linux release 7.4.1708 (Core)   master
k8s02     172.16.50.132   CentOS Linux release 7.4.1708 (Core)   master
k8s03     172.16.50.104   CentOS Linux release 7.4.1708 (Core)   master
k8s04     172.16.50.111   CentOS Linux release 7.4.1708 (Core)   node

1.2 Software versions

Name        Version
kubernetes  v1.10.4
docker      1.13.1

2. Master deployment

2.1 Server initialization

Install the base packages

yum install vim net-tools git -y

Disable SELinux

Edit /etc/sysconfig/selinux and set:
SELINUX=disabled
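The SELinux edit above can be scripted with sed instead of done by hand. A minimal sketch, demonstrated on a throwaway copy in /tmp (on a real node, point it at /etc/sysconfig/selinux as root and follow with a reboot or `setenforce 0`):

```shell
# Sample copy of the file (the /tmp path is ours, for demonstration only);
# on a node you would target /etc/sysconfig/selinux instead.
conf=/tmp/selinux.conf
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"

# Flip whatever mode is set to "disabled"; the SELINUXTYPE line is left
# untouched because the pattern anchors on "SELINUX=" exactly.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$conf"

grep '^SELINUX=' "$conf"   # SELINUX=disabled
```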

Disable the firewalld firewall

systemctl disable firewalld && systemctl stop firewalld

Configure passwordless SSH login from k8s01 to all nodes

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:uKdVyzKOYp1YiuvmKuYQDice4UX2aKbzAxmzdeou3uo root@k8s01
The key's randomart image is:
+---[RSA 2048]----+
|   o             |
|  o o            |
| + * o           |
|. % o  .         |
|+O..  . S .      |
|++*   .. o .     |
|.o = =..= o      |
|oo= * o* o       |
|BE*= .o .        |
+----[SHA256]-----+
[root@k8s01 ~]# for i in 131 132 104 111 ;  do ssh-copy-id root@172.16.50.$i ; done

Update the servers and reboot

yum upgrade -y    &&    reboot

Upload the v1.10.4.zip archive to the /data directory on server k8s01

# unzip v1.10.4.zip && cd v1.10.4
Archive:  v1.10.4.zip
   creating: v1.10.4/
   creating: v1.10.4/images/
  inflating: v1.10.4/images/etcd-amd64_3.1.12.tar
  inflating: v1.10.4/images/flannel_v0.10.0-amd64.tar
  inflating: v1.10.4/images/heapster-amd64_v1.5.3.tar
  inflating: v1.10.4/images/k8s-dns-dnsmasq-nanny-amd64_1.14.8.tar
  inflating: v1.10.4/images/k8s-dns-kube-dns-amd64_1.14.8.tar
  inflating: v1.10.4/images/k8s-dns-sidecar-amd64_1.14.8.tar
  inflating: v1.10.4/images/kube-apiserver-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kube-controller-manager-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kube-proxy-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kube-scheduler-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kubernetes-dashboard-amd64_v1.8.3.tar
  inflating: v1.10.4/images/pause-amd64_3.1.tar
   creating: v1.10.4/pkg/
 extracting: v1.10.4/pkg/kubeadm-1.10.4-0.x86_64.rpm
  inflating: v1.10.4/pkg/kubectl-1.10.4-0.x86_64.rpm
 extracting: v1.10.4/pkg/kubelet-1.10.4-0.x86_64.rpm
  inflating: v1.10.4/pkg/kubernetes-cni-0.6.0-0.x86_64.rpm
   creating: v1.10.4/yaml/
  inflating: v1.10.4/yaml/deploy.yaml
  inflating: v1.10.4/yaml/grafana.yaml
  inflating: v1.10.4/yaml/heapster.yaml
  inflating: v1.10.4/yaml/influxdb.yaml
  inflating: v1.10.4/yaml/kube-flannel.yaml
  inflating: v1.10.4/yaml/kubernetes-dashboard.yaml

Create a directory for the configuration files

# mkdir config  && cd config/

Create k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Distribute it to all hosts

for i in 131 132 104 111 ;  do scp k8s.conf root@172.16.50.$i:/etc/sysctl.d/k8s.conf ; done

Apply the settings

for i in 131 132 104 111 ;  do ssh root@172.16.50.$i  "modprobe br_netfilter &&  sysctl -p /etc/sysctl.d/k8s.conf " ; done
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
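The same four-octet loop is repeated throughout this walkthrough; it can be factored into a small helper. The function name and the dry run below are our own sketch, not part of the original scripts:

```shell
NODES="131 132 104 111"   # last octets of the four servers
PREFIX="172.16.50"

# Run the given command once per node, with the node IP appended
# as the final argument.
each_node() {
  for i in $NODES; do
    "$@" "$PREFIX.$i"
  done
}

# Dry run: print the target IPs instead of connecting.
each_node echo
```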

Create the Kubernetes yum repository file kubernetes.repo with the following content:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

The Aliyun mirror is used here, so the packages are reachable without a proxy.

Distribute it to all servers

for i in 131 132 104 111 ;  do scp kubernetes.repo root@172.16.50.$i:/etc/yum.repos.d/kubernetes.repo ; done

Configure /etc/hosts

for i in 131 132 104 111 ;  do ssh root@172.16.50.$i  'echo "172.16.50.131 k8s01" >> /etc/hosts && echo "172.16.50.132 k8s02" >> /etc/hosts && echo "172.16.50.104 k8s03" >> /etc/hosts && echo "172.16.50.111 k8s04" >> /etc/hosts' ; done
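Equivalently, the hosts entries can be staged in one file and appended in a single pass per node. The /tmp/k8s-hosts file name is an assumption; the per-node ssh step is shown as a comment to avoid a live connection here:

```shell
# Build the hosts block once.
cat > /tmp/k8s-hosts <<'EOF'
172.16.50.131 k8s01
172.16.50.132 k8s02
172.16.50.104 k8s03
172.16.50.111 k8s04
EOF

# Then append it on every node, e.g.:
# for i in 131 132 104 111; do
#   scp /tmp/k8s-hosts root@172.16.50.$i:/tmp/ && \
#     ssh root@172.16.50.$i 'cat /tmp/k8s-hosts >> /etc/hosts'
# done

wc -l < /tmp/k8s-hosts
```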

Install Docker and the Kubernetes components on all servers

yum install docker -y
systemctl start docker.service && systemctl enable docker.service
yum install kubeadm kubectl kubelet -y
systemctl enable kubelet

Distribute the Docker images

cd ../images/ && for i in 131 132 104 111 ;  do  scp ./* root@172.16.50.$i:/mnt ; done

Import the Docker images (run on all hosts)

for j in `ls /mnt`; do docker load --input /mnt/$j ; done

Generate the certificates manually on host k8s01

git clone   && cd k8s-tls/

Distribute the binaries to all servers

 for i in 131 132 104 111 ;  do scp ./bin/* root@172.16.50.$i:/usr/bin/ ; done

Edit the apiserver.json file; after modification it reads:

{
  "CN": "kube-apiserver",
  "hosts": [
    "172.16.50.131",
    "172.16.50.132",
    "172.16.50.104",
    "k8s01",
    "k8s02",
    "k8s03",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}

Run ./run.sh to generate the certificates

Change to the /etc/kubernetes/pki/ directory

Edit the node.sh file:

ip="172.16.50.131"
NODE="k8s01"

Edit the kubelet.json file:

"CN": "system:node:k8s01",

Run ./node.sh to generate the configuration files

Change to the /data/v1.10.4/config directory and create config.yaml with the following content:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.4
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- k8s01
- k8s02
- k8s03
- 172.16.50.131
- 172.16.50.132
- 172.16.50.104
- 172.16.50.227
apiServerExtraArgs:
  endpoint-reconciler-type: "lease"
etcd:
  endpoints:
  - http://172.16.50.132:2379
  - http://172.16.50.131:2379
  - http://172.16.50.104:2379
token: "deed3a.b3542929fcbce0f0"
tokenTTL: "0"
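The static token in config.yaml must match kubeadm's `[a-z0-9]{6}.[a-z0-9]{16}` bootstrap-token format. On a host with kubeadm installed, `kubeadm token generate` emits one; a plain-shell sketch of the same format, for illustration only:

```shell
# Generate a kubeadm-style bootstrap token: 6 lowercase alphanumerics,
# a dot, then 16 more (sketch; `kubeadm token generate` is the
# canonical way to do this).
rand() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }
tok="$(rand 6).$(rand 16)"
echo "$tok"
```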

Initialize the cluster on host k8s01

# kubeadm init --config config.yaml
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 15.506913 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s01 as master by adding a label and a taint
[markmaster] Master k8s01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: deed3a.b3542929fcbce0f0
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.16.50.131:6443 --token deed3a.b3542929fcbce0f0 --discovery-token-ca-cert-hash sha256:0334022c7eb4f2b20865f1784c64b1e81ad87761b9e8ffd50ecefabca5cfad5c

Distribute the certificate files to the k8s02 and k8s03 servers

for i in 131 132 104 111 ;  do ssh root@172.16.50.$i  "mkdir /etc/kubernetes/pki/ " ; done
for i in 132 104 ;  do scp /etc/kubernetes/pki/* root@172.16.50.$i:/etc/kubernetes/pki/ ; done

Distribute the config.yaml file to the k8s02 and k8s03 servers

for i in 132 104 ;  do scp config.yaml root@172.16.50.$i:/mnt ; done

Follow-up steps

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Initialize the cluster on host k8s02

cd /etc/kubernetes/pki/

Edit the node.sh file:

ip="172.16.50.132"
NODE="k8s02"

Edit the kubelet.json file:

"CN": "system:node:k8s02",

Generate the configuration files

./node.sh

Initialize the cluster

kubeadm init --config /mnt/config.yaml

Initialize node k8s03 the same way

Join the worker node

kubeadm join 172.16.50.131:6443 --token deed3a.b3542929fcbce0f0 --discovery-token-ca-cert-hash sha256:0334022c7eb4f2b20865f1784c64b1e81ad87761b9e8ffd50ecefabca5cfad5c
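If the token or CA-certificate hash from the init output is lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA. On a master the input is /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed certificate so the pipeline can be shown end to end:

```shell
# Stand-in for /etc/kubernetes/pki/ca.crt: a throwaway self-signed cert
# (the /tmp paths are ours, for demonstration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# The discovery hash is SHA-256 over the DER encoding of the CA's
# public key.
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```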

No load balancer was set up for the Kubernetes API here; nodes join master k8s01 directly.

Reposted from: https://blog.51cto.com/11889458/2130621
