k8s offline installation package: three-step install, unbelievably simple

kubeadm source code analysis

To be honest, the kubeadm code is fairly mediocre; the quality is not very high.

First, a few key points about the core things kubeadm does:

  • kubeadm generates the certificates under /etc/kubernetes/pki
  • kubeadm generates the static pod yaml configs, all under /etc/kubernetes/manifests
  • kubeadm generates the kubelet config, kubectl config, etc. under /etc/kubernetes
  • kubeadm starts DNS through client-go

kubeadm init

The code entry point is cmd/kubeadm/app/cmd/init.go; I recommend taking a look at cobra.

Find the Run function to analyze the main flow:

If the certificates do not exist, they are created. So if we have our own certificates, we can simply place them under /etc/kubernetes/pki. How the certificates are generated is covered in detail below.

    if res, _ := certsphase.UsingExternalCA(i.cfg); !res {
        if err := certsphase.CreatePKIAssets(i.cfg); err != nil {
            return err
        }
    }

Create the kubeconfig files:

    if err := kubeconfigphase.CreateInitKubeConfigFiles(kubeConfigDir, i.cfg); err != nil {
        return err
    }

Create the manifest files; etcd, apiserver, controller-manager, and scheduler are all created here. Note that if your config file already lists etcd endpoints, the local etcd is not created, so we can run our own etcd cluster instead of the default single-node one. Very useful.

    controlplanephase.CreateInitStaticPodManifestFiles(manifestDir, i.cfg)
    if len(i.cfg.Etcd.Endpoints) == 0 {
        if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(manifestDir, i.cfg); err != nil {
            return fmt.Errorf("error creating local etcd static pod manifest file: %v", err)
        }
    }

Wait for the apiserver and kubelet to start. This is where the familiar "image cannot be pulled" error shows up; in fact, kubelet sometimes reports this error for entirely different reasons, misleading you into thinking the image is the problem.

    if err := waitForAPIAndKubelet(waiter); err != nil {
        ctx := map[string]string{
            "Error":                  fmt.Sprintf("%v", err),
            "APIServerImage":         images.GetCoreImage(kubeadmconstants.KubeAPIServer, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
            "ControllerManagerImage": images.GetCoreImage(kubeadmconstants.KubeControllerManager, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
            "SchedulerImage":         images.GetCoreImage(kubeadmconstants.KubeScheduler, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
        }
        kubeletFailTempl.Execute(out, ctx)
        return fmt.Errorf("couldn't initialize a Kubernetes cluster")
    }

Label and taint the master. So if you want pods to be scheduled onto the master, you can remove the taint (kubectl taint nodes <node> node-role.kubernetes.io/master-), or patch it away with client-go as sketched below.

    if err := markmasterphase.MarkMaster(client, i.cfg.NodeName); err != nil {
        return fmt.Errorf("error marking master: %v", err)
    }
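
If you prefer doing the taint removal programmatically, here is a minimal client-go sketch (the node name and kubeconfig path are placeholders, and it is written against a recent client-go, so older releases need the context-free signatures):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        node, err := client.CoreV1().Nodes().Get(context.TODO(), "master-node-name", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Keep every taint except the master one kubeadm added.
        kept := node.Spec.Taints[:0]
        for _, t := range node.Spec.Taints {
            if t.Key != "node-role.kubernetes.io/master" {
                kept = append(kept, t)
            }
        }
        node.Spec.Taints = kept

        if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("master taint removed")
    }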

Generate the token:

    if err := nodebootstraptokenphase.UpdateOrCreateToken(client, i.cfg.Token, false, i.cfg.TokenTTL.Duration, kubeadmconstants.DefaultTokenUsages, []string{kubeadmconstants.NodeBootstrapTokenAuthGroup}, tokenDescription); err != nil {
        return fmt.Errorf("error updating or creating token: %v", err)
    }
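
The bootstrap token itself is just two runs of random lowercase alphanumerics in the form abcdef.0123456789abcdef. A toy generator for illustration (my own sketch, not kubeadm's exact helper):

    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

    // randString returns n cryptographically random characters from alphabet.
    func randString(n int) (string, error) {
        b := make([]byte, n)
        for i := range b {
            idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
            if err != nil {
                return "", err
            }
            b[i] = alphabet[idx.Int64()]
        }
        return string(b), nil
    }

    func main() {
        id, _ := randString(6)      // token id
        secret, _ := randString(16) // token secret
        fmt.Printf("%s.%s\n", id, secret)
    }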

Call client-go to create DNS and kube-proxy:

    if err := dnsaddonphase.EnsureDNSAddon(i.cfg, client); err != nil {
        return fmt.Errorf("error ensuring dns addon: %v", err)
    }
    if err := proxyaddonphase.EnsureProxyAddon(i.cfg, client); err != nil {
        return fmt.Errorf("error ensuring proxy addon: %v", err)
    }

My criticism: the code mindlessly runs one straight flow to the end. If I were doing it, I would abstract it into interfaces such as RenderConf, Save, Run, and Clean, and have DNS, kube-proxy, and the other components implement them. A further problem is that the DNS and kube-proxy configs are never rendered out to disk, probably because they are not static pods. And then there is the bug at join time, discussed below.
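
For what it's worth, the abstraction I have in mind looks something like the following hypothetical sketch (Config stands in for kubeadm's MasterConfiguration; nothing like this exists in the kubeadm tree):

    // Config stands in for kubeadm's MasterConfiguration in this sketch.
    type Config struct{ /* ... */ }

    // Component is a small lifecycle interface each addon would implement.
    type Component interface {
        RenderConf(cfg *Config) error // render configs/manifests
        Save(dir string) error        // persist what was rendered
        Run() error                   // create the component in the cluster
        Clean() error                 // tear it down again on reset
    }

    // init then walks dns, kube-proxy, etcd, ... uniformly:
    func runAll(cfg *Config, dir string, components ...Component) error {
        for _, c := range components {
            if err := c.RenderConf(cfg); err != nil {
                return err
            }
            if err := c.Save(dir); err != nil {
                return err
            }
            if err := c.Run(); err != nil {
                return err
            }
        }
        return nil
    }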

Certificate generation

It loops over this batch of functions; we only need to look at one or two since the rest are much the same:

    certActions := []func(cfg *kubeadmapi.MasterConfiguration) error{
        CreateCACertAndKeyfiles,
        CreateAPIServerCertAndKeyFiles,
        CreateAPIServerKubeletClientCertAndKeyFiles,
        CreateServiceAccountKeyAndPublicKeyFiles,
        CreateFrontProxyCACertAndKeyFiles,
        CreateFrontProxyClientCertAndKeyFiles,
    }

Root certificate generation:


    // returns the root CA certificate and private key
    func NewCACertAndKey() (*x509.Certificate, *rsa.PrivateKey, error) {
        caCert, caKey, err := pkiutil.NewCertificateAuthority()
        if err != nil {
            return nil, nil, fmt.Errorf("failure while generating CA certificate and key: %v", err)
        }
        return caCert, caKey, nil
    }

The k8s.io/client-go/util/cert library has two functions: one generates the key, the other the cert:

    key, err := certutil.NewPrivateKey()
    config := certutil.Config{
        CommonName: "kubernetes",
    }
    cert, err := certutil.NewSelfSignedCACert(config, key)

We can also fill in other certificate information in the config:

    type Config struct {
        CommonName   string
        Organization []string
        AltNames     AltNames
        Usages       []x509.ExtKeyUsage
    }

The private key simply wraps the rsa library:

    "crypto/rsa""crypto/x509"
func NewPrivateKey() (*rsa.PrivateKey, error) {return rsa.GenerateKey(cryptorand.Reader, rsaKeySize)
}

The certificate is self-signed, so the root certificate carries only the CommonName; Organization is effectively unset:

    func NewSelfSignedCACert(cfg Config, key *rsa.PrivateKey) (*x509.Certificate, error) {
        now := time.Now()
        tmpl := x509.Certificate{
            SerialNumber: new(big.Int).SetInt64(0),
            Subject: pkix.Name{
                CommonName:   cfg.CommonName,
                Organization: cfg.Organization,
            },
            NotBefore:             now.UTC(),
            NotAfter:              now.Add(duration365d * 10).UTC(),
            KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &tmpl, &tmpl, key.Public(), key)
        if err != nil {
            return nil, err
        }
        return x509.ParseCertificate(certDERBytes)
    }

Once generated, write them to files:

    pkiutil.WriteCertAndKey(pkiDir, baseName, cert, key)
    certutil.WriteCert(certificatePath, certutil.EncodeCertPEM(cert))

This uses the pem library for the encoding:

    import "encoding/pem"

    func EncodeCertPEM(cert *x509.Certificate) []byte {
        block := pem.Block{
            Type:  CertificateBlockType,
            Bytes: cert.Raw,
        }
        return pem.EncodeToMemory(&block)
    }
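
Putting the pieces together: generating and persisting a self-signed CA with nothing but the standard library looks roughly like this (a sketch; the key size and file paths are my own choices):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // generate the CA private key
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        // self-signed CA template, 10-year validity like kubeadm's
        tmpl := x509.Certificate{
            SerialNumber:          big.NewInt(0),
            Subject:               pkix.Name{CommonName: "kubernetes"},
            NotBefore:             time.Now().UTC(),
            NotAfter:              time.Now().AddDate(10, 0, 0).UTC(),
            KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }

        // template is both subject and issuer: self-signed
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, key.Public(), key)
        if err != nil {
            panic(err)
        }

        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})

        os.WriteFile("ca.crt", certPEM, 0644)
        os.WriteFile("ca.key", keyPEM, 0600)
    }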

Now let's look at the apiserver certificate generation:

    caCert, caKey, err := loadCertificateAuthorithy(cfg.CertificatesDir, kubeadmconstants.CACertAndKeyBaseName)
    // generate the apiserver certificate from the root CA
    apiCert, apiKey, err := NewAPIServerCertAndKey(cfg, caCert, caKey)

Here the AltNames become important: every address and domain name that will be used to reach the master must be added, corresponding to the apiServerCertSANs field in the config file. Everything else is the same as for the root certificate.

    config := certutil.Config{
        CommonName: kubeadmconstants.APIServerCertCommonName,
        AltNames:   *altNames,
        Usages:     []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
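
For reference, altNames is assembled from the node addresses, the first IP of the service subnet, and whatever is listed in apiServerCertSANs. Built by hand it would look something like this (the addresses are examples only):

    altNames := &certutil.AltNames{
        DNSNames: []string{
            "kubernetes",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster.local",
            "apiserver.example.com", // from apiServerCertSANs
        },
        IPs: []net.IP{
            net.ParseIP("10.96.0.1"),    // first IP of the service subnet
            net.ParseIP("192.168.0.10"), // node address or HA virtual IP
        },
    }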

Creating the k8s config files

You can see these files being created:

    return createKubeConfigFiles(
        outDir,
        cfg,
        kubeadmconstants.AdminKubeConfigFileName,
        kubeadmconstants.KubeletKubeConfigFileName,
        kubeadmconstants.ControllerManagerKubeConfigFileName,
        kubeadmconstants.SchedulerKubeConfigFileName,
    )

k8s wraps two functions for rendering these configs. The difference is whether the resulting kubeconfig carries a token: if you log into the dashboard with a token, or call the API with one, generate the token-bearing config. The generated conf files are basically identical except for things like the ClientName, so the encoded certificates differ as well; the ClientName is encoded into the certificate, and k8s later extracts it to use as the user.

So here is the key point: when building multi-tenancy we should generate kubeconfigs the same way, then bind a role to each tenant.

    return kubeconfigutil.CreateWithToken(
        spec.APIServer,
        "kubernetes",
        spec.ClientName,
        certutil.EncodeCertPEM(spec.CACert),
        spec.TokenAuth.Token,
    ), nil

    return kubeconfigutil.CreateWithCerts(
        spec.APIServer,
        "kubernetes",
        spec.ClientName,
        certutil.EncodeCertPEM(spec.CACert),
        certutil.EncodePrivateKeyPEM(clientKey),
        certutil.EncodeCertPEM(clientCert),
    ), nil
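
For the multi-tenancy idea above: once a tenant kubeconfig is issued with CreateWithCerts, bind the certificate's CommonName to a role. A sketch with made-up names, using recent client-go signatures:

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // The Subject name must equal the ClientName / CN baked into the tenant's cert.
    binding := &rbacv1.RoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: "tenant-a-binding", Namespace: "tenant-a"},
        Subjects: []rbacv1.Subject{{
            Kind: rbacv1.UserKind,
            Name: "tenant-a",
        }},
        RoleRef: rbacv1.RoleRef{
            APIGroup: rbacv1.GroupName,
            Kind:     "Role",
            Name:     "tenant-a-role",
        },
    }
    _, err := client.RbacV1().RoleBindings("tenant-a").Create(context.TODO(), binding, metav1.CreateOptions{})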

Then it is just a matter of filling in the Config struct and finally writing it to a file (details omitted):

"k8s.io/client-go/tools/clientcmd/api
return &clientcmdapi.Config{Clusters: map[string]*clientcmdapi.Cluster{clusterName: {Server: serverURL,CertificateAuthorityData: caCert,},},Contexts: map[string]*clientcmdapi.Context{contextName: {Cluster:  clusterName,AuthInfo: userName,},},AuthInfos:      map[string]*clientcmdapi.AuthInfo{},CurrentContext: contextName,
}

Creating the static pod yaml files

This returns the pod structs for the apiserver, controller-manager, and scheduler:

    specs := GetStaticPodSpecs(cfg, k8sVersion)

    staticPodSpecs := map[string]v1.Pod{
        kubeadmconstants.KubeAPIServer: staticpodutil.ComponentPod(v1.Container{
            Name:          kubeadmconstants.KubeAPIServer,
            Image:         images.GetCoreImage(kubeadmconstants.KubeAPIServer, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
            Command:       getAPIServerCommand(cfg, k8sVersion),
            VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeAPIServer)),
            LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeAPIServer, int(cfg.API.BindPort), "/healthz", v1.URISchemeHTTPS),
            Resources:     staticpodutil.ComponentResources("250m"),
            Env:           getProxyEnvVars(),
        }, mounts.GetVolumes(kubeadmconstants.KubeAPIServer)),
        kubeadmconstants.KubeControllerManager: staticpodutil.ComponentPod(v1.Container{
            Name:          kubeadmconstants.KubeControllerManager,
            Image:         images.GetCoreImage(kubeadmconstants.KubeControllerManager, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
            Command:       getControllerManagerCommand(cfg, k8sVersion),
            VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeControllerManager)),
            LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeControllerManager, 10252, "/healthz", v1.URISchemeHTTP),
            Resources:     staticpodutil.ComponentResources("200m"),
            Env:           getProxyEnvVars(),
        }, mounts.GetVolumes(kubeadmconstants.KubeControllerManager)),
        kubeadmconstants.KubeScheduler: staticpodutil.ComponentPod(v1.Container{
            Name:          kubeadmconstants.KubeScheduler,
            Image:         images.GetCoreImage(kubeadmconstants.KubeScheduler, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
            Command:       getSchedulerCommand(cfg),
            VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeScheduler)),
            LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeScheduler, 10251, "/healthz", v1.URISchemeHTTP),
            Resources:     staticpodutil.ComponentResources("100m"),
            Env:           getProxyEnvVars(),
        }, mounts.GetVolumes(kubeadmconstants.KubeScheduler)),
    }

    // fetch the image for a specific version
    func GetCoreImage(image, repoPrefix, k8sVersion, overrideImage string) string {
        if overrideImage != "" {
            return overrideImage
        }
        kubernetesImageTag := kubeadmutil.KubernetesVersionToImageTag(k8sVersion)
        etcdImageTag := constants.DefaultEtcdVersion
        etcdImageVersion, err := constants.EtcdSupportedVersion(k8sVersion)
        if err == nil {
            etcdImageTag = etcdImageVersion.String()
        }
        return map[string]string{
            constants.Etcd:                  fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "etcd", runtime.GOARCH, etcdImageTag),
            constants.KubeAPIServer:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-apiserver", runtime.GOARCH, kubernetesImageTag),
            constants.KubeControllerManager: fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-controller-manager", runtime.GOARCH, kubernetesImageTag),
            constants.KubeScheduler:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-scheduler", runtime.GOARCH, kubernetesImageTag),
        }[image]
    }

Then the pod is simply written to a file: staticpodutil.WriteStaticPodToDisk(componentName, manifestDir, spec).
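
That write boils down to "serialize the pod and drop it into the manifests directory". A rough sketch of the idea (using sigs.k8s.io/yaml here; kubeadm's own helper differs in the details):

    import (
        "os"
        "path/filepath"

        v1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    // writeStaticPod marshals the pod to yaml and writes it where kubelet watches.
    func writeStaticPod(manifestDir, name string, pod *v1.Pod) error {
        data, err := yaml.Marshal(pod)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(manifestDir, name+".yaml"), data, 0600)
    }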

Creating etcd's is the same; no need to dwell on it.

Waiting for kubelet to start successfully

This error is very easy to hit, and seeing it basically means kubelet did not come up. We need to check selinux, swap, and the Cgroup driver:

    setenforce 0 && swapoff -a && systemctl restart kubelet

If that does not fix it, make sure kubelet's Cgroup driver matches docker's (check with docker info | grep Cg):

    go func(errC chan error, waiter apiclient.Waiter) {
        // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
        if err := waiter.WaitForHealthyKubelet(40*time.Second, "http://localhost:10255/healthz"); err != nil {
            errC <- err
        }
    }(errorChan, waiter)

    go func(errC chan error, waiter apiclient.Waiter) {
        // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
        if err := waiter.WaitForHealthyKubelet(60*time.Second, "http://localhost:10255/healthz/syncloop"); err != nil {
            errC <- err
        }
    }(errorChan, waiter)
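
The waiter is nothing more than polling the kubelet healthz endpoint until it answers 200 or a deadline passes; roughly:

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthy polls url once a second until it returns 200 or the timeout hits.
    func waitForHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not healthy after %v", url, timeout)
    }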

Creating DNS and kube-proxy

This is where I first discovered CoreDNS:

    if features.Enabled(cfg.FeatureGates, features.CoreDNS) {
        return coreDNSAddon(cfg, client, k8sVersion)
    }
    return kubeDNSAddon(cfg, client, k8sVersion)

The CoreDNS yaml template is written directly into the code, in app/phases/addons/dns/manifests.go:

    CoreDNSDeployment = `
    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          serviceAccountName: coredns
          tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
          - key: {{ .MasterTaintKey }}
    ...

Then the template is rendered and the k8s API is finally called to create the objects. This way of creating resources is worth learning, though a bit clumsy; this part is nowhere near as well written as kubectl.
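
The rendering is plain text/template substitution; a minimal reproduction with a single placeholder (my illustration, not kubeadm's exact code):

    package main

    import (
        "bytes"
        "fmt"
        "text/template"
    )

    func main() {
        // The {{ .MasterTaintKey }} style placeholders are ordinary text/template fields.
        tmpl := template.Must(template.New("toleration").Parse("- key: {{ .MasterTaintKey }}\n"))
        var buf bytes.Buffer
        if err := tmpl.Execute(&buf, struct{ MasterTaintKey string }{"node-role.kubernetes.io/master"}); err != nil {
            panic(err)
        }
        fmt.Print(buf.String()) // -> "- key: node-role.kubernetes.io/master"
    }

The rendered bytes are then decoded into typed objects and created: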

    coreDNSConfigMap := &v1.ConfigMap{}
    if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), configBytes, coreDNSConfigMap); err != nil {
        return fmt.Errorf("unable to decode CoreDNS configmap %v", err)
    }

    // Create the ConfigMap for CoreDNS or update it in case it already exists
    if err := apiclient.CreateOrUpdateConfigMap(client, coreDNSConfigMap); err != nil {
        return err
    }

    coreDNSClusterRoles := &rbac.ClusterRole{}
    if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), []byte(CoreDNSClusterRole), coreDNSClusterRoles); err != nil {
        return fmt.Errorf("unable to decode CoreDNS clusterroles %v", err)
    }
    ...
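
apiclient.CreateOrUpdateConfigMap is the usual create-then-update-on-AlreadyExists dance; with a bare clientset it looks roughly like this (recent client-go signatures, so adjust for older releases):

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createOrUpdateConfigMap tries a Create first and falls back to Update
    // when the object already exists.
    func createOrUpdateConfigMap(client kubernetes.Interface, cm *v1.ConfigMap) error {
        _, err := client.CoreV1().ConfigMaps(cm.Namespace).Create(context.TODO(), cm, metav1.CreateOptions{})
        if apierrors.IsAlreadyExists(err) {
            _, err = client.CoreV1().ConfigMaps(cm.Namespace).Update(context.TODO(), cm, metav1.UpdateOptions{})
        }
        return err
    }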

Worth mentioning: the kube-proxy configmap really should accept the apiserver address as a parameter and allow customization, because an HA setup needs to point at a virtual IP, and having to edit it by hand is a pain. kube-proxy is otherwise much the same; if you want to change it, edit app/phases/addons/proxy/manifests.go.

kubeadm join

kubeadm join is much simpler and fits in one sentence: fetch the cluster info, create the kubeconfig (how it is created was covered under kubeadm init), and carry the token so kubeadm is authorized to pull it.

    return https.RetrieveValidatedClusterInfo(cfg.DiscoveryFile)

The cluster info contents:

    type Cluster struct {
        // LocationOfOrigin indicates where this object came from. It is used for round tripping config post-merge, but never serialized.
        LocationOfOrigin string
        // Server is the address of the kubernetes cluster (https://hostname:port).
        Server string `json:"server"`
        // InsecureSkipTLSVerify skips the validity check for the server's certificate. This will make your HTTPS connections insecure.
        // +optional
        InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify,omitempty"`
        // CertificateAuthority is the path to a cert file for the certificate authority.
        // +optional
        CertificateAuthority string `json:"certificate-authority,omitempty"`
        // CertificateAuthorityData contains PEM-encoded certificate authority certificates. Overrides CertificateAuthority
        // +optional
        CertificateAuthorityData []byte `json:"certificate-authority-data,omitempty"`
        // Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields
        // +optional
        Extensions map[string]runtime.Object `json:"extensions,omitempty"`
    }

    return kubeconfigutil.CreateWithToken(
        clusterinfo.Server,
        "kubernetes",
        TokenUser,
        clusterinfo.CertificateAuthorityData,
        cfg.TLSBootstrapToken,
    ), nil

CreateWithToken was covered above, so no need to repeat it. With this, the kubelet config file can be generated, and then kubelet just has to be started.

The problem with kubeadm join is that when rendering the config it uses the address from cluster-info rather than the apiserver address passed on the command line. That is bad for HA: we may pass in a virtual IP, yet the config still contains the apiserver's own address.
