Table of Contents

  • 1. Introduction to Kubernetes
  • 2. Quick Kubernetes Deployment
  • 3. Prepare the Environment
  • 4. Install Docker/kubeadm/kubelet on All Nodes
    • 4.1 Install Docker
    • 4.2 Add the Alibaba Cloud Kubernetes YUM Repository
    • 4.3 Install kubeadm, kubelet, and kubectl
  • 5. Deploy the Kubernetes Master
  • 6. Install a Pod Network Plugin (CNI)
  • 7. Join the Kubernetes Nodes
  • 8. Test the Kubernetes Cluster
  • 9. Using kubectl Commands
  • 10. Resource Management Approaches
    • 10.1 Imperative Object Configuration
    • 10.2 Declarative Object Configuration
  • 11. Using Labels

1. Introduction to Kubernetes

Kubernetes is a leading distributed architecture solution built on container technology. It is an open-source descendant of Borg, the cluster manager Google kept secret for over a decade. Kubernetes was first released in 2014, and its first stable version (v1.0) followed in July 2015.

In essence, Kubernetes manages a cluster of servers: it runs specific programs on every node of the cluster to manage the containers on that node. Its goal is to automate resource management, and it provides the following main features:

Self-healing: when a container crashes, a replacement container is started quickly (on the order of a second)
Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed
Service discovery: a service can locate the services it depends on automatically
Load balancing: when a service runs multiple containers, requests are load-balanced across them automatically
Version rollback: if a newly released version turns out to be faulty, you can roll back to the previous version immediately
Storage orchestration: storage volumes can be created automatically based on the containers' needs
1.3 Kubernetes components
A Kubernetes cluster consists of control nodes (master) and worker nodes (node), each running a different set of components.

master: the cluster's control plane, responsible for cluster decisions (management)

ApiServer: the single entry point for resource operations; it receives user commands and provides authentication, authorization, API registration, and discovery

Scheduler: responsible for cluster resource scheduling; it places Pods onto the appropriate nodes according to the configured scheduling policy

ControllerManager: responsible for maintaining cluster state, e.g. application deployment, failure detection, autoscaling, and rolling updates

Etcd: responsible for storing information about all the resource objects in the cluster

node: the cluster's data plane, responsible for providing the runtime environment for containers (doing the work)

Kubelet: responsible for managing container lifecycles, i.e. creating, updating, and destroying containers by driving docker

KubeProxy: responsible for service discovery and load balancing inside the cluster

Docker: responsible for the container operations on its node

The following walks through deploying an nginx service to illustrate how the Kubernetes components interact:

First, note that once the Kubernetes environment starts, the master and the nodes store their own information in the etcd database

A request to install an nginx service is first sent to the apiServer component on the master node

The apiServer calls the scheduler component to decide which node the service should be installed on

At this point the scheduler reads the information about each node from etcd, makes a choice according to its algorithm, and reports the result back to the apiServer

The apiServer then calls the controller-manager to have the chosen node install the nginx service

The kubelet on that node receives the instruction and tells docker to start an nginx pod

A pod is the smallest unit of operation in Kubernetes; containers always run inside pods

At this point an nginx service is running. To access nginx, kube-proxy sets up a proxy to the pod

This way, external users can access the nginx service in the cluster

1.4 Kubernetes concepts
Master: a cluster control node; every cluster needs at least one master to manage it

Node: a workload node; the master assigns containers to these nodes, and docker on each node runs them

Pod: the smallest unit of control in Kubernetes; containers run inside pods, and a pod can hold one or more containers

Controller: controllers manage pods, e.g. starting pods, stopping pods, and scaling the number of pods

Service: the unified entry point to a set of pods; a service fronts multiple pods of the same kind

Label: labels classify pods; pods of the same kind carry the same label

NameSpace: namespaces isolate the environments pods run in
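
To make these concepts concrete, here is a minimal illustrative manifest (all names are hypothetical, not from the deployment below) with a Pod carrying a Label inside a NameSpace, and a Service selecting pods by that label:

apiVersion: v1
kind: Namespace
metadata:
  name: demo                 # NameSpace: isolates the pod's environment
---
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  namespace: demo
  labels:
    app: web                 # Label: classifies this pod
spec:
  containers:
  - name: nginx              # one container inside the pod
    image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: demo
spec:
  selector:
    app: web                 # Service: unified entry point for all pods with this label
  ports:
  - port: 80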

2. Quick Kubernetes Deployment

Learning objectives

  • Install Docker and kubeadm on all nodes
  • Deploy the Kubernetes master
  • Deploy a container network plugin
  • Deploy the Kubernetes nodes and join them to the cluster
  • Deploy the Dashboard web UI to view Kubernetes resources visually

3. Prepare the Environment

Lab environment:
Minimum configuration used here
master: 4 CPUs, 4 GB RAM
node1:  2 CPUs, 2 GB RAM
node2:  2 CPUs, 2 GB RAM

Role        IP
k8s-master  192.168.70.134
k8s-node1   192.168.70.138
k8s-node2   192.168.70.139
//The following operations must be performed on all hosts
//Disable the firewall and SELinux on all hosts
[root@k8s-master ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# vim /etc/selinux/config     //set SELINUX=disabled
[root@k8s-node1 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node1 ~]# setenforce 0
[root@k8s-node1 ~]# vim /etc/selinux/config
[root@k8s-node2 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node2 ~]# setenforce 0
[root@k8s-node2 ~]# vim /etc/selinux/config
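The vim edits above set SELINUX=disabled in /etc/selinux/config; if you prefer to script the change, a sed one-liner should do the same thing (assuming the default line reads SELINUX=enforcing):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # persists across reboots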
//Disable the swap partition on all hosts:
# vim /etc/fstab
//comment out the swap line
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3752         556        2656          10         539        2956
Swap:          4051           0        4051
[root@k8s-master ~]# vim /etc/fstab
[root@k8s-node1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1800         550         728          10         521        1084
Swap:          2047           0        2047
[root@k8s-node1 ~]# vim /etc/fstab
[root@k8s-node2 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1800         559         711          10         529        1072
Swap:          2047           0        2047
[root@k8s-node2 ~]# vim /etc/fstab
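Equivalent to the manual fstab edits above, the swap entry can be commented out and swap turned off for the current session in one go (a sketch; re-run free -m afterwards to confirm Swap shows 0):

sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab   # comment out the swap line permanently
swapoff -a                                         # disable swap immediately, without rebooting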
//Add hosts entries on the master (and repeat on every node):
[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.70.134 k8s-master
192.168.70.138 k8s-node1
192.168.70.139 k8s-node2
EOF
[root@k8s-node1 ~]# cat >> /etc/hosts << EOF
> 192.168.70.134 k8s-master
> 192.168.70.138 k8s-node1
> 192.168.70.139 k8s-node2
> EOF
[root@k8s-node2 ~]# cat >> /etc/hosts << EOF
> 192.168.70.134 k8s-master
> 192.168.70.138 k8s-node1
> 192.168.70.139 k8s-node2
> EOF
[root@k8s-master ~]# ping k8s-master      //test name resolution
PING k8s-master (192.168.70.134) 56(84) bytes of data.
64 bytes from k8s-master (192.168.70.134): icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from k8s-master (192.168.70.134): icmp_seq=2 ttl=64 time=0.080 ms
^C
--- k8s-master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 41ms
rtt min/avg/max/mdev = 0.072/0.076/0.080/0.004 ms
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (192.168.70.138) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.70.138): icmp_seq=1 ttl=64 time=0.512 ms
64 bytes from k8s-node1 (192.168.70.138): icmp_seq=2 ttl=64 time=0.285 ms
^C
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (192.168.70.139) 56(84) bytes of data.
64 bytes from k8s-node2 (192.168.70.139): icmp_seq=1 ttl=64 time=1.60 ms
64 bytes from k8s-node2 (192.168.70.139): icmp_seq=2 ttl=64 time=0.782 ms
64 bytes from k8s-node2 (192.168.70.139): icmp_seq=3 ttl=64 time=1.32 ms
//Pass bridged IPv4 traffic to the iptables chains:
[root@k8s-master ~]#  cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-master ~]# sysctl --system   //apply the settings
#output omitted
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...    //this line shows the new file was applied
* Applying /etc/sysctl.conf ...
//Time synchronization, on all hosts
[root@k8s-master ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst       //point chrony at the Alibaba Cloud NTP server
[root@k8s-master ~]# systemctl enable chronyd
[root@k8s-master ~]# systemctl restart chronyd
[root@k8s-master ~]# systemctl status chronyd
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enab>
     Active: active (running) since Tue 2022-09-06 15:54:27 CST; 9s ago
[root@k8s-node1 ~]# vim /etc/chrony.conf
[root@k8s-node1 ~]# systemctl enable chronyd
[root@k8s-node1 ~]# systemctl restart chronyd
[root@k8s-node1 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enab>
     Active: active (running) since Tue 2022-09-06 15:57:52 CST; 8s ago
[root@k8s-node2 ~]# vim /etc/chrony.conf
[root@k8s-node2 ~]# systemctl enable chronyd
[root@k8s-node2 ~]# systemctl restart chronyd
[root@k8s-node2 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enab>
     Active: active (running) since Tue 2022-09-06
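
To confirm each host is actually syncing against the configured server, chronyc can be queried (output omitted here):

chronyc sources -v    # lists the NTP sources with their reachability and offset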
//Configure passwordless SSH login
[root@k8s-master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LZeVhmrafNhs4eAGG8dNQltVYcGX/sXbKj/dPzR/wNo root@k8s-master
The key's randomart image is:
+---[RSA 3072]----+
|        . ...o=o.|
|       . o . o...|
|        o o + .o |
|       . * +   .o|
|      o S *  .  =|
|       @ O .  o+o|
|      o * *  o.++|
|       . o  o E.=|
|             o..=|
+----[SHA256]-----+
[root@k8s-master ~]# ssh-copy-id k8s-master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-master (192.168.70.134)' can't be established.
ECDSA key fingerprint is SHA256:1x2Tw0BYQrGTk7wpwsIy+TtFN72hWbHYYiU6WtI/Ojk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (192.168.70.138)' can't be established.
ECDSA key fingerprint is SHA256:75svPGZTNSPdFX6K4lCDkoQfG10Y478mu0NzQD7HpnA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (192.168.70.139)' can't be established.
ECDSA key fingerprint is SHA256:75svPGZTNSPdFX6K4lCDkoQfG10Y478mu0NzQD7HpnA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh k8s-master
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register

Last login: Tue Sep  6 15:10:17 2022 from 192.168.70.1
[root@k8s-master ~]# ssh k8s-node1
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register

Last login: Tue Sep  6 15:10:18 2022 from 192.168.70.1
[root@k8s-node1 ~]# exit
logout
Connection to k8s-node1 closed.
[root@k8s-master ~]# ssh k8s-node2
Activate the web console with: systemctl enable --now cockpit.socket

This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register

Last login: Tue Sep  6 15:10:18 2022 from 192.168.70.1
[root@k8s-node2 ~]# exit
logout
Connection to k8s-node2 closed.
[root@k8s-master ~]# reboot   //reboot so the SELinux and swap changes take effect permanently
[root@k8s-node1 ~]# reboot
[root@k8s-node2 ~]# reboot
#Note: after rebooting, verify that the firewall, SELinux, and swap are all still disabled

4. Install Docker/kubeadm/kubelet on All Nodes

This guide uses Docker as the container engine, so Docker is installed first. Note that since v1.24 Kubernetes no longer talks to Docker directly (the dockershim was removed); kubelet uses containerd, which is installed alongside Docker, as its CRI runtime, which is why the containerd configuration is adjusted in sections 5 and 7.

4.1 Install Docker

##Note: the Docker version must match on every node; use dnf list all|grep docker to check the docker-ce.x86_64 version
#Perform the following operations on all nodes
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo      //download the Docker repo file
//output omitted
[root@k8s-master ~]# dnf list all|grep docker
containerd.io.x86_64                                              1.6.8-3.1.el8                                          @docker-ce-stable
docker-ce.x86_64     //this one                                   3:20.10.17-3.el8                                       @docker-ce-stable
docker-ce-cli.x86_64                                              1:20.10.17-3.el8                                       @docker-ce-stable
[root@k8s-master ~]# dnf -y install docker-ce --allowerasing      //--allowerasing (replace conflicting packages) is normally unnecessary; it is only needed here because of a conflict in my repos
[root@k8s-master ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.17      //the version must match on every node
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:03:11 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:01:29 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
//Configure the registry mirror (accelerator)
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
[root@k8s-master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],  //registry mirror
  "exec-opts": ["native.cgroupdriver=systemd"],                  //cgroup driver
  "log-driver": "json-file",                                     //log format
  "log-opts": {
    "max-size": "100m"                                           //rotate container logs at 100 MB
  },
  "storage-driver": "overlay2"                                   //storage driver
}
(The // annotations are for explanation only; the real file must be pure JSON with no comments.)
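
Docker only reads daemon.json at startup, so restart it and confirm the cgroup driver took effect:

systemctl restart docker
docker info | grep -i cgroup    # should report: Cgroup Driver: systemd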

4.2 Add the Alibaba Cloud Kubernetes YUM Repository

#Configure the following on all nodes
[root@k8s-master ~]#  cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.3 Install kubeadm, kubelet, and kubectl

#Configure the following on all nodes
[root@k8s-master ~]# dnf list all|grep kubelet   //check; the version must match on every node
kubelet.x86_64                                                    1.25.0-0                                               kubernetes
[root@k8s-master ~]# dnf list all|grep kubeadm
kubeadm.x86_64                                                    1.25.0-0                                               kubernetes
[root@k8s-master ~]# dnf list all|grep kubectl
kubectl.x86_64                                                    1.25.0-0                                               kubernetes
[root@k8s-master ~]# dnf -y install kubelet kubeadm kubectl
[root@k8s-master ~]# systemctl enable kubelet      //enable only; kubelet cannot start successfully until kubeadm init runs
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor>
    Drop-In: /usr/lib/systemd/system/kubelet.service.d

5. Deploy the Kubernetes Master

Run the following on 192.168.70.134 (the master).

[root@k8s-master ~]# kubeadm init -h     //read the help first
[root@k8s-master ~]# cd /etc/containerd/
[root@k8s-master containerd]# containerd config default > config.toml    //generate the default config
[root@k8s-master containerd]# vim config.toml
sandbox_image = "k8s.gcr.io/pause:3.6"     //change to: sandbox_image = "registry.cn-beijing.aliyuncs.com/abcdocker/pause:3.6"
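
The same edit can be scripted instead of done in vim (a sketch, using the exact strings from the step above):

sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.cn-beijing.aliyuncs.com/abcdocker/pause:3.6"#' /etc/containerd/config.toml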
[root@k8s-master manifests]# systemctl stop kubelet
[root@k8s-master manifests]# systemctl restart containerd
[root@k8s-master manifests]# kubeadm init --apiserver-advertise-address 192.168.70.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.25.0 --service-cidr 10.96.0.0/12 --pod-network-cidr 10.244.0.0/16
//output omitted
//apart from 192.168.70.134, the command above is essentially boilerplate

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube                                      //regular users run these three commands
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf              //root runs this instead

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
//a network plugin is required; search for flannel on github.com

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.70.134:6443 --token h9utko.9esdw3ge9j0urwae \
        --discovery-token-ca-cert-hash sha256:8c36d378e51b8d01f1fe904e51e1b5d7215fc76dcbaf105c798c4cda70e84ca1

//seeing this output means initialization succeeded
//if init fails, you can reset kubeadm and try again, with: kubeadm reset
//Set the environment variable permanently (the root approach)
[root@k8s-master ~]# vim /etc/profile.d/k8s.sh
[root@k8s-master ~]# cat /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master ~]# source /etc/profile.d/k8s.sh
[root@k8s-master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf

Because the default image registry k8s.gcr.io is unreachable from inside China, the Alibaba Cloud mirror registry was specified instead.
Using the kubectl tool:

//These steps are only for regular users; root does not need them
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   14m   v1.25.0

6. Install a Pod Network Plugin (CNI)

Deploying the flannel network plugin lets pods communicate with each other directly, without extra forwarding.

//the output below indicates success
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Make sure the quay.io registry is reachable.
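
To confirm that the flannel DaemonSet pods come up on every node, the namespace created by the apply output above can be inspected:

kubectl -n kube-flannel get pods -o wide    # expect one kube-flannel-ds pod per node, all Running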

7. Join the Kubernetes Nodes

Run the following on 192.168.70.138 and 192.168.70.139 (the nodes).

To add the new nodes to the cluster, run the kubeadm join command printed by kubeadm init:
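
Note that the bootstrap token printed by kubeadm init expires (after 24 hours by default). If it has expired, a fresh join command can be generated on the master:

kubeadm token create --print-join-command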

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   15h   v1.25.0    //the network is up, so the status is now Ready
//on every node, back up or delete this file
[root@k8s-node2 containerd]# ls
config.toml
[root@k8s-node2 containerd]# mv config.toml{,.bak}    //back up the file (its default contents are all commented out)
[root@k8s-node2 containerd]# ls
config.toml.bak
[root@k8s-master ~]# cd /etc/containerd/
[root@k8s-master containerd]# ls
config.toml
//copy the known-good config generated on the master over to the nodes
[root@k8s-master containerd]# scp /etc/containerd/config.toml k8s-node1:/etc/containerd/
config.toml                        100% 6952     3.4MB/s   00:00
[root@k8s-master containerd]# scp /etc/containerd/config.toml k8s-node2:/etc/containerd/
config.toml                        100% 6952     3.8MB/s   00:00
[root@k8s-node1 containerd]# ls
config.toml
[root@k8s-node1 containerd]# systemctl restart containerd   //restart so the new config takes effect
[root@k8s-node1 containerd]# kubeadm join 192.168.70.134:6443 --token h9utko.9esdw3ge9j0urwae         --discovery-token-ca-cert-hash sha256:8c36d378e51b8d01f1fe904e51e1b5d7215fc76dcbaf105c798c4cda70e84ca1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:    //the lines below indicate success
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 containerd]# ls
config.toml  config.toml.bak
[root@k8s-node2 containerd]# systemctl restart containerd
[root@k8s-node2 containerd]# kubeadm join 192.168.70.134:6443 --token h9utko.9esdw3ge9j0urwae         --discovery-token-ca-cert-hash sha256:8c36d378e51b8d01f1fe904e51e1b5d7215fc76dcbaf105c798c4cda70e84ca1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master containerd]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   Ready      control-plane   15h     v1.25.0
k8s-node1    Ready      <none>          2m39s   v1.25.0
k8s-node2    NotReady   <none>          81s     v1.25.0     //NotReady here is not a network failure; the node is still pulling images, so wait a moment
[root@k8s-master containerd]# kubectl get nodes       //all nodes have now joined and are Ready
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   15h     v1.25.0
k8s-node1    Ready    <none>          4m35s   v1.25.0
k8s-node2    Ready    <none>          3m17s   v1.25.0
[root@k8s-node1 ~]# kubectl get nodes    //fails because the KUBECONFIG variable and the file it points to do not exist on the node yet
The connection to the server localhost:8080 was refused - did you specify the right host or port?
//copy the kubeconfig file and the environment-variable script to the nodes
[root@k8s-master ~]# scp /etc/kubernetes/admin.conf k8s-node1:/etc/kubernetes/admin.conf
admin.conf                         100% 5638     2.7MB/s   00:00
[root@k8s-master ~]# scp /etc/kubernetes/admin.conf k8s-node2:/etc/kubernetes/admin.conf
admin.conf                         100% 5638     2.9MB/s   00:00
[root@k8s-master ~]# scp /etc/profile.d/k8s.sh k8s-node2:/etc/profile.d/k8s.sh
k8s.sh                             100%   45    23.6KB/s   00:00
[root@k8s-master ~]# scp /etc/profile.d/k8s.sh k8s-node1:/etc/profile.d/k8s.sh
k8s.sh                             100%   45     3.8KB/s   00:00
//verify on the nodes
[root@k8s-node1 ~]# bash   //start a new shell so the variable takes effect
[root@k8s-node1 ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
[root@k8s-node1 ~]# kubectl get nodes     //success; the node can now query the cluster
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   16h   v1.25.0
k8s-node1    Ready    <none>          21m   v1.25.0
k8s-node2    Ready    <none>          20m   v1.25.0
[root@k8s-node2 ~]# bash
[root@k8s-node2 ~]#  echo $KUBECONFIG
/etc/kubernetes/admin.conf
[root@k8s-node2 ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   16h   v1.25.0
k8s-node1    Ready    <none>          21m   v1.25.0
k8s-node2    Ready    <none>          20m   v1.25.0

8. Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs normally:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created      //deployment: resource type   //nginx: deployment name   //--image=nginx: image
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort     //--port: port to expose   //--type=NodePort: service type
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc   //list pod and service information
//READY 0/1 means the image is still being pulled; check again in a moment   //ContainerCreating: the pod is not up yet
NAME                        READY   STATUS              RESTARTS   AGE
pod/nginx-76d6c9b8c-6vnnf   0/1     ContainerCreating   0          17s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        16h
service/nginx        NodePort    10.98.79.60   <none>        80:30274/TCP   7s
[root@k8s-master ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-76d6c9b8c-6vnnf   1/1     Running   0          5m28s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        16h
service/nginx        NodePort    10.98.79.60   <none>        80:30274/TCP   5m18s
//NodePort: maps node port 30274 to cluster-IP port 80

Accessing the port succeeds.

The cluster IP is only reachable from inside the cluster, on port 80 by default.

[root@k8s-master ~]# curl 10.98.79.60
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
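
Because the service type is NodePort, the same page should also be reachable from outside the cluster on any node's IP at the mapped port (30274 in the output above), for example:

curl http://192.168.70.134:30274    # same nginx welcome page, via the node port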

9. Using kubectl Commands

The get command:

//view detailed pod information
//the -o wide output shows which node hosts each pod and the pod IP; kube-proxy load-balances requests randomly across pods
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS      AGE   IP           NODE        NOMINATED NODE   READINESS GATES
nginx-76d6c9b8c-6vnnf   1/1     Running   1 (10m ago)   39m   10.244.2.3   k8s-node2   <none>           <none>
//view the controller (deployment)
[root@k8s-node1 ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           44m
// AVAILABLE: how many replicas are available
// UP-TO-DATE: how many replicas have been updated to the latest state

The create command

[root@k8s-master ~]# kubectl create deployment test1 --image busybox
deployment.apps/test1 created
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS       AGE
nginx-76d6c9b8c-6vnnf    1/1     Running             2 (157m ago)   6h54m
test1-5b6fc8994b-7fdf4   0/1     ContainerCreating   0              13s
[root@k8s-master ~]# 

The run command

[root@k8s-master ~]# kubectl run nginx --image nginx
pod/nginx created         //starts a pod named nginx
[root@k8s-master ~]# kubectl delete pods nginx
pod "nginx" deleted        //deletes the nginx pod

The delete command

[root@k8s-master ~]# kubectl delete deployment test1 //delete test1 via its deployment
deployment.apps "test1" deleted
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-254qg   1/1     Running   0          24m

The expose command

[root@k8s-master ~]# kubectl create deployment web --image busybox    //create a deployment named web
deployment.apps/web created
[root@k8s-master ~]# kubectl expose deployment web --port 8080 --target-port 80   //expose it as a service: port 8080 on the cluster IP, forwarding to container port 80
service/web exposed
[root@k8s-master ~]# kubectl get svc   //list all services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        23h
nginx        NodePort    10.98.79.60      <none>        80:30274/TCP   7h1m
web          ClusterIP   10.105.214.159   <none>        8080/TCP       48s

10. Resource Management Approaches

Imperative object management: operate on Kubernetes resources directly with commands

kubectl run nginx-pod --image=nginx:1.17.1 --port=80

Imperative object configuration: operate on Kubernetes resources with commands plus configuration files

kubectl create/patch -f nginx-pod.yaml

Declarative object configuration: operate on Kubernetes resources with the apply command plus configuration files

kubectl apply -f nginx-pod.yaml


Imperative object management
The kubectl command

kubectl is the command-line tool for Kubernetes clusters. It can manage the cluster itself and install and deploy containerized applications on it. The kubectl syntax is as follows:

kubectl [command] [type] [name] [flags]

command: the operation to perform on the resource, e.g. create, get, delete

type: the resource type, e.g. deployment, pod, service

name: the resource name; names are case-sensitive

flags: optional extra flags

# view all pods
kubectl get pod

# view a specific pod
kubectl get pod pod_name

# view a specific pod, rendered as YAML
kubectl get pod pod_name -o yaml
Resource types

Everything in Kubernetes is abstracted as a resource; the full list can be viewed with the following command:

kubectl api-resources

The most frequently used resource types include Pod, Deployment, Service, Namespace, and Node.

Operations

Kubernetes supports many operations on resources; the detailed list is available via --help:

kubectl --help

The most frequently used operations include create, get, delete, edit, apply, describe, logs, and exec.

10.1 Imperative Object Configuration

[root@k8s-master ~]# vi nginxpod.yml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: nginxpod
  namespace: dev
spec:
  containers:
  - name: nginx-containers
    image: nginx:latest

//run the create command to create the resources
[root@k8s-master ~]# kubectl create -f nginxpod.yml
namespace/dev created
pod/nginxpod created
//the get command shows the resources
[root@k8s-master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
dev               Active   61s
kube-flannel      Active   22h
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h

[root@k8s-master ~]# kubectl get ns default
NAME      STATUS   AGE
default   Active   22h
//view the resource objects defined in the file
[root@k8s-master manifest]# kubectl get -f nginxpod.yml
NAME            STATUS   AGE
namespace/dev   Active   99s

NAME           READY   STATUS    RESTARTS   AGE
pod/nginxpod   0/1     Pending   0          99s
//the delete command removes the resources
[root@k8s-master manifest]# kubectl delete -f nginxpod.yml
namespace "dev" deleted
pod "nginxpod" deleted
[root@k8s-master manifest]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
kube-flannel      Active   22h
kube-node-lease   Active   22h
kube-public       Active   22h

10.2 Declarative Object Configuration

Declarative object configuration is very similar to imperative object configuration, but it uses only one command, apply: if a resource does not exist yet it is created, and if it already exists the differences are patched in.

//create the resources
[root@k8s-master ~]# kubectl apply -f nginxpod.yml
namespace/dev created
pod/nginxpod created

//running it a second time produces no error
[root@k8s-master ~]# kubectl apply -f nginxpod.yml
namespace/dev unchanged
pod/nginxpod unchanged

[root@k8s-master ~]# kubectl get -f nginxpod.yml
NAME            STATUS   AGE
namespace/dev   Active   8s

NAME           READY   STATUS    RESTARTS   AGE
pod/nginxpod   0/1     Pending   0          8s

11. Using Labels

[root@k8s-master ~]# kubectl get pods -n dev
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          66m
[root@k8s-master ~]# kubectl describe pod nginx -n dev|grep -i label
Labels:           <none>
[root@k8s-master ~]# kubectl get pods -n dev
NAME       READY   STATUS    RESTARTS   AGE
mynginx    1/1     Running   0          2m29s
nginx      1/1     Running   0          72m
nginx1     1/1     Running   0          2m48s
nginxpod   1/1     Running   0          4m4s
[root@k8s-master ~]# # add labels to pod resources
[root@k8s-master ~]# kubectl label pod nginx -n dev app=nginx
pod/nginx labeled
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl label pod nginx1 -n dev app=nginx1
pod/nginx1 labeled
[root@k8s-master ~]# kubectl label pod nginxpod -n dev app=nginxpod
pod/nginxpod labeled
[root@k8s-master ~]# kubectl label pod mynginx -n dev app=mynginx
pod/mynginx labeled
[root@k8s-master ~]# # update a pod's label
[root@k8s-master ~]# kubectl label pod nginx -n dev app=test
error: 'app' already has a value (nginx), and --overwrite is false
[root@k8s-master ~]# kubectl label pod nginx -n dev app=test --overwrite
pod/nginx labeled
[root@k8s-master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          74m   app=nginx
[root@k8s-master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          74m   app=test
[root@k8s-master ~]# # view the labels
[root@k8s-master ~]# kubectl describe pod nginx -n dev|grep -i label
Labels:           app=nginx
[root@k8s-master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          71m   app=nginx
[root@k8s-master ~]# # view all labels
[root@k8s-master ~]# kubectl get pods -n dev --show-labels
NAME       READY   STATUS    RESTARTS   AGE     LABELS
mynginx    1/1     Running   0          7m10s   app=mynginx
nginx      1/1     Running   0          76m     app=test
nginx1     1/1     Running   0          7m29s   app=nginx1
nginxpod   1/1     Running   0          8m45s   app=nginxpod
[root@k8s-master ~]# # filter by label: kubectl get pods -n dev -l app=test --show-labels
[root@k8s-master ~]# kubectl get pods -n dev --show-labels
NAME       READY   STATUS    RESTARTS   AGE     LABELS
mynginx    1/1     Running   0          7m48s   app=mynginx
nginx      1/1     Running   0          77m     app=test
nginx1     1/1     Running   0          8m7s    app=nginx1
nginxpod   1/1     Running   0          9m23s   app=nginxpod
//filter on one specific label
[root@k8s-master ~]# kubectl get pods -n dev -l app=test --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          78m   app=test
[root@k8s-master ~]#
//everything except the filtered label
[root@k8s-master ~]# kubectl get pods -n dev -l app!=test --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
mynginx    1/1     Running   0          10m   app=mynginx
nginx1     1/1     Running   0          10m   app=nginx1
nginxpod   1/1     Running   0          11m   app=nginxpod
[root@k8s-master ~]# # delete a label (append - to the key name)
[root@k8s-master ~]# kubectl label pod nginx1 -n dev app-
pod/nginx1 unlabeled
[root@k8s-master ~]# kubectl get pods -n dev --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
mynginx    1/1     Running   0          12m   app=mynginx
nginx      1/1     Running   0          81m   app=test
nginx1     1/1     Running   0          12m   <none>
nginxpod   1/1     Running   0          13m   app=nginxpod
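
Besides the equality selectors (app=test, app!=test) shown above, kubectl also accepts set-based selectors; a quick illustrative example:

kubectl get pods -n dev -l 'app in (mynginx,nginxpod)' --show-labels   # pods whose app label matches one of the listed values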
