Deploying multi-node Kubernetes v1.18.6 and KubeSphere v3.0.0 with KubeSphere's kk (KubeKey): a record of the pitfalls, and some reflections
Preface
KubeSphere® is one of the CNCF-certified mainstream open-source Kubernetes distributions. On top of Kubernetes it provides a range of container-centric business function modules, such as multi-tenant management, cluster operations, application management, DevOps, and microservice governance.
Recently our microservices needed to move onto Kubernetes, and we chose KubeSphere as an application-centric container management platform. My first deployment succeeded but seemed unstable, so I restored the images of all four servers and deployed again from scratch. Along the way I hit quite a few problems and setbacks, which I record here for reference.
Preparing the servers
master: 172.16.7.12, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 200 GB (data)
node5: 172.16.7.15, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 1 TB (data)
node3: 172.16.7.16, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 1 TB (data)
node4: 172.16.7.17, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 1 TB (data)
System tuning and prerequisite software
Automated system tuning
Following my earlier post "Deploying a CDH cluster with Ansible vs. manually, plus CM configuration and user permissions", first tune the systems:
sh deploy_robot.sh init_ssh
sh deploy_robot.sh init_sys
The deploy_robot.sh script is available at https://github.com/fleapx/cdh-deploy-robot.git.
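Roughly speaking, init_ssh sets up passwordless SSH from the control machine to every node (a prerequisite for Ansible) and init_sys applies kernel and system tweaks. A minimal, hypothetical equivalent of the SSH step, not the actual script:
# Generate a key pair on the control machine and push it to every node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 172.16.7.12 172.16.7.15 172.16.7.16 172.16.7.17; do
    ssh-copy-id root@$ip
done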
Installing Ansible
# Install Ansible on the control machine
yum install -y ansible
Edit /etc/ansible/hosts to configure the hosts to be managed:
[all]
master ansible_host=172.16.7.12
node5 ansible_host=172.16.7.15
node3 ansible_host=172.16.7.16
node4 ansible_host=172.16.7.17
Then edit /etc/ansible/ansible.cfg and change one default:
# uncomment this to disable SSH key host checking
host_key_checking = False
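With the inventory in place, a quick connectivity check against all four nodes:
ansible all -m ping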
Installing other system packages
ansible all -m shell -a "wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo"
ansible all -m shell -a "sed -i 's/^.*aliyuncs*/#&/g' /etc/yum.repos.d/CentOS-Base.repo"
ansible all -m shell -a "wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo"
ansible all -m shell -a "yum -y install ebtables socat ipset conntrack nfs-utils rpcbind"
ansible all -m shell -a "yum install -y vim wget yum-utils device-mapper-persistent-data lvm2"
Cluster time synchronization
ansible all -m shell -a "yum install chrony -y"
ansible all -m shell -a "systemctl start chronyd"
ansible all -m shell -a "sed -i -e '/^server/s/^/#/' -e '1a server ntp.aliyun.com iburst' /etc/chrony.conf"
ansible all -m shell -a "systemctl restart chronyd"
ansible all -m shell -a "timedatectl set-timezone Asia/Shanghai"
Additional system tuning
ansible all -m shell -a "echo '* soft nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft nproc 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nproc 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft memlock unlimited' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard memlock unlimited' >> /etc/security/limits.conf"
ansible all -m shell -a "echo 'DefaultLimitNOFILE=1024000' >> /etc/systemd/system.conf"
ansible all -m shell -a "echo 'DefaultLimitNPROC=1024000' >> /etc/systemd/system.conf"
Open all ports
ansible all -m shell -a "iptables -P INPUT ACCEPT"
ansible all -m shell -a "iptables -P FORWARD ACCEPT"
ansible all -m shell -a "iptables -P OUTPUT ACCEPT"
ansible all -m shell -a "iptables -F"
Installing Docker
For installing Docker on CentOS 7 and configuring the Aliyun registry accelerator, see the dedicated article.
ansible all -m shell -a "yum remove docker docker-common docker-selinux docker-engine"
ansible all -m shell -a "yum -y install docker-ce-19.03.8-3.el7"
Adding the Kubernetes yum repository
tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
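The tee above only writes the repo file on the control machine; push it to the other nodes with the Ansible copy module:
ansible all -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"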
Installing Kubernetes v1.18.6 and KubeSphere v3.0.0
The official KubeSphere site is https://kubesphere.com.cn/; the installation guide used here: https://kubesphere.com.cn/docs/quick-start/all-in-one-on-linux/
Download the kk installer
# Inside mainland China, set this environment variable first
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
Then make kk executable and move it onto the PATH:
chmod +x kk
mv ./kk /usr/local/bin
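A quick check that the binary works:
kk version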
Generate the multi-node cluster configuration
# Create a configuration file template
kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f ./config-kubesphere.yaml
Edit the configuration file
Edit config-kubesphere.yaml as follows:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.16.7.12, internalAddress: 172.16.7.12, user: root, password: azdebug_it}
  - {name: node3, address: 172.16.7.16, internalAddress: 172.16.7.16, user: root, password: azdebug_it}
  - {name: node4, address: 172.16.7.17, internalAddress: 172.16.7.17, user: root, password: azdebug_it}
  - {name: node5, address: 172.16.7.15, internalAddress: 172.16.7.15, user: root, password: azdebug_it}
  roleGroups:
    etcd:
    - node3
    master:
    - master
    worker:
    - node4
    - node3
    - node5
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "172.16.7.12"
    port: "6443"
  kubernetes:
    version: v1.18.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    #privateRegistry: dockerhub.kubekey.local
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 172.16.7.16
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: true # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryLim: 5Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1024m
    jenkinsJavaOpts_Xmx: 1024m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none # host | member | none
  networkpolicy:
    enabled: true
  notification:
    enabled: true
  openpitrix:
    enabled: true
  servicemesh:
    enabled: true
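One note on the hosts list: instead of plaintext passwords, KubeKey also accepts a per-host SSH private key. A sketch of such an entry (same fields otherwise):
- {name: master, address: 172.16.7.12, internalAddress: 172.16.7.12, user: root, privateKeyPath: "~/.ssh/id_rsa"}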
Run the installation of Kubernetes v1.18.6 and KubeSphere v3.0.0
# With the config file updated with node information (node names, IPs, etc.), create the cluster
kk create cluster -f ./config-kubesphere.yaml
Output of a successful run. (The transcript below starts with an earlier failed kubeadm init attempt; the etcd certificate errors it shows are analyzed in the troubleshooting section further down.)
[root@master ~]# sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W0105 22:45:15.009277 22248 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0105 22:45:15.009521 22248 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@master ~]# cd /home/k
k8s-script/ kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz
[root@master ~]# cd /home/k8s-script/
[root@master k8s-script]# export KKZONE=cn
[root@master k8s-script]# ./kk create cluster -f ./k8s-config.yaml
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node5 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |
| node4 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |
| node3 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |
| master | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[22:57:10 CST] Downloading Installation Files
INFO[22:57:10 CST] Downloading kubeadm ...
INFO[22:57:10 CST] Downloading kubelet ...
INFO[22:57:10 CST] Downloading kubectl ...
INFO[22:57:11 CST] Downloading helm ...
INFO[22:57:11 CST] Downloading kubecni ...
INFO[22:57:11 CST] Configurating operating system ...
[node5 172.16.7.15] MSG:
vm.swappiness = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
fs.file-max = 655350
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_fin_timeout = 2
net.core.somaxconn = 32768
kernel.threads-max = 655360
kernel.pid_max = 655360
vm.max_map_count = 393210
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
(The same sysctl settings are then printed for node4 172.16.7.17, master 172.16.7.12, and node3 172.16.7.16; identical output omitted.)
INFO[22:57:25 CST] Installing docker ...
INFO[22:57:35 CST] Start to download images on all nodes
[node5] Downloading image: kubesphere/pause:3.2
[master] Downloading image: kubesphere/pause:3.2
[node3] Downloading image: kubesphere/etcd:v3.3.12
[node4] Downloading image: kubesphere/pause:3.2
[node5] Downloading image: kubesphere/kube-proxy:v1.18.6
[node4] Downloading image: kubesphere/kube-proxy:v1.18.6
[node3] Downloading image: kubesphere/pause:3.2
[master] Downloading image: kubesphere/kube-apiserver:v1.18.6
[node5] Downloading image: coredns/coredns:1.6.9
[node4] Downloading image: coredns/coredns:1.6.9
[master] Downloading image: kubesphere/kube-controller-manager:v1.18.6
[node5] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: kubesphere/kube-proxy:v1.18.6
[master] Downloading image: kubesphere/kube-scheduler:v1.18.6
[node5] Downloading image: calico/kube-controllers:v3.15.1
[node3] Downloading image: coredns/coredns:1.6.9
[node4] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master] Downloading image: kubesphere/kube-proxy:v1.18.6
[node5] Downloading image: calico/cni:v3.15.1
[node4] Downloading image: calico/kube-controllers:v3.15.1
[master] Downloading image: coredns/coredns:1.6.9
[node5] Downloading image: calico/node:v3.15.1
[node4] Downloading image: calico/cni:v3.15.1
[node3] Downloading image: calico/kube-controllers:v3.15.1
[node3] Downloading image: calico/cni:v3.15.1
[master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node5] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node4] Downloading image: calico/node:v3.15.1
[master] Downloading image: calico/kube-controllers:v3.15.1
[node3] Downloading image: calico/node:v3.15.1
[node4] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master] Downloading image: calico/cni:v3.15.1
[master] Downloading image: calico/node:v3.15.1
[master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[23:01:26 CST] Generating etcd certs
INFO[23:01:32 CST] Synchronizing etcd certs
INFO[23:01:36 CST] Creating etcd service
INFO[23:01:49 CST] Starting etcd cluster
[node3 172.16.7.16] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[23:01:58 CST] Refreshing etcd configuration
INFO[23:01:59 CST] Backup etcd data regularly
INFO[23:02:00 CST] Get cluster status
[master 172.16.7.12] MSG:
Cluster will be created.
INFO[23:02:01 CST] Installing kube binaries
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.16:/tmp/kubekey/kubeadm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.15:/tmp/kubekey/kubeadm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.17:/tmp/kubekey/kubeadm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.12:/tmp/kubekey/kubeadm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.16:/tmp/kubekey/kubelet Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.12:/tmp/kubekey/kubelet Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.15:/tmp/kubekey/kubelet Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.17:/tmp/kubekey/kubelet Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.16:/tmp/kubekey/kubectl Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.17:/tmp/kubekey/kubectl Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.15:/tmp/kubekey/kubectl Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.12:/tmp/kubekey/kubectl Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.12:/tmp/kubekey/helm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.16:/tmp/kubekey/helm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.15:/tmp/kubekey/helm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.17:/tmp/kubekey/helm Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.16:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.17:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.12:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[23:02:50 CST] Initializing kubernetes cluster
[master 172.16.7.12] MSG:
W0105 23:02:51.587457 23847 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0105 23:02:51.587685 23847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master master.cluster.local node3 node3.cluster.local node4 node4.cluster.local node5 node5.cluster.local] and IPs [10.233.0.1 172.16.7.12 127.0.0.1 172.16.7.12 172.16.7.16 172.16.7.17 172.16.7.15 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0105 23:03:00.466175 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0105 23:03:00.474746 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0105 23:03:00.476002 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 32.002873 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6zsarg.gxg5eijglkupq85j
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \
--discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \
--discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9
[master 172.16.7.12] MSG:
service "kube-dns" deleted
[master 172.16.7.12] MSG:
service/coredns created
[master 172.16.7.12] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master 172.16.7.12] MSG:
configmap/nodelocaldns created
[master 172.16.7.12] MSG:
I0105 23:04:05.247536 26174 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.18
W0105 23:04:06.468801 26174 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
13a993ef56fb292d7ecb9947a3095a0eca6c419dfa148569699c474a8d6c28df
[master 172.16.7.12] MSG:
secret/kubeadm-certs patched
[master 172.16.7.12] MSG:
secret/kubeadm-certs patched
[master 172.16.7.12] MSG:
secret/kubeadm-certs patched
[master 172.16.7.12] MSG:
W0105 23:04:08.563212 26292 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join lb.kubesphere.local:6443 --token cfoibt.jdyzk3oc1aze53ri --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9
[master 172.16.7.12] MSG:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master NotReady master 39s v1.18.6 172.16.7.12 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.9
INFO[23:04:09 CST] Deploying network plugin ...
[master 172.16.7.12] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[23:04:13 CST] Joining nodes to cluster
[node5 172.16.7.15] MSG:
W0105 23:04:14.035833 52825 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0105 23:04:20.030576 52825 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node4 172.16.7.17] MSG:
W0105 23:04:14.432448 53936 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0105 23:04:19.870838 53936 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 172.16.7.16] MSG:
W0105 23:04:14.376894 57091 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0105 23:04:20.949568 57091 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 172.16.7.16] MSG:
node/node3 labeled
[node5 172.16.7.15] MSG:
node/node5 labeled
[node4 172.16.7.17] MSG:
node/node4 labeled
[master 172.16.7.12] MSG:
storageclass.storage.k8s.io/local created
serviceaccount/openebs-maya-operator created
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-localpv-provisioner created
INFO[23:04:51 CST] Deploying KubeSphere ...
v3.0.0
[master 172.16.7.12] MSG:
namespace/kubesphere-system created
namespace/kubesphere-monitoring-system created
[master 172.16.7.12] MSG:
secret/kube-etcd-client-certs created
[master 172.16.7.12] MSG:
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
INFO[23:10:23 CST] Installation is complete.
Please check the result using the command:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
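Once the installer reports completion, sanity-check the cluster from the master node:
kubectl get nodes -o wide
kubectl get pods --all-namespaces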
Problems encountered and solutions
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.35:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.36:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[16:20:31 CST] Initializing kubernetes cluster
[master 172.16.8.36] MSG:
[preflight] Running pre-flight checks
W0105 16:20:38.445541 19396 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0105 16:20:38.453323 19396 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master 172.16.8.36] MSG:
[preflight] Running pre-flight checks
W0105 16:20:40.305612 19617 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0105 16:20:40.310273 19617 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[16:20:41 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W0105 16:20:40.826437 19657 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0105 16:20:40.826682 19657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.16.8.36
WARN[16:20:41 CST] Task failed ...
WARN[16:20:41 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]
Flags:
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes
--with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)
-y, --yes Skip pre-check of the installation
Global Flags:
--debug Print detailed information (default true)
Failed to init kubernetes cluster: interrupted by error
Running the failing step manually on the master node:
sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
yields the key errors:
[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist
Check again on the master node:
[root@master ~]# ls -lh /etc/ssl/etcd/ssl/
total 32K
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 admin-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 admin-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 ca-key.pem
-rw-r--r--. 1 root root 1.1K Jan  5 19:59 ca.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 member-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 member-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 node-master-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 node-master.pem
So node-node3.pem and node-node3-key.pem really do not exist in /etc/ssl/etcd/ssl/. What now? It turns out only member-mode certificates (member-node3*.pem) had been generated for node3.
Solution
[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3-key.pem /etc/ssl/etcd/ssl/node-node3-key.pem
[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3.pem /etc/ssl/etcd/ssl/node-node3.pem
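Confirm the copies are in place before re-running the installer:
ls -lh /etc/ssl/etcd/ssl/ | grep node-node3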
Run again:
export KKZONE=cn
./kk create cluster -f ./k8s-config.yaml
Watch the KubeSphere installation logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
KubeSphere 3.0.0 installed successfully.
Logging in to KubeSphere
# URL: http://<node-ip>:30880
# Username: admin
# Default password: P@88w0rd
KubeSphere prompts you to change this default password at first login.