Kubernetes Installation (based on v1.23.1)
Preparation

Heads-up: if your host has less than 32 GB of RAM, don't even bother... heh.

Servers (CentOS 7)

- master server
- k8s-Node-01
- k8s-Node-02

Routing

- Router

Harbor registry
Installation
| Hostname | OS | Spec | IP | Notes |
|---|---|---|---|---|
| k8s-master-01 | CentOS 8.2 | 2 CPU / 4 GB RAM / 100 GB disk | 192.168.66.140 | k8s master node |
| k8s-node-01 | CentOS 8.2 | 2 CPU / 4 GB RAM / 100 GB disk | 192.168.66.141 | k8s worker node |
| k8s-node-02 | CentOS 8.2 | 2 CPU / 4 GB RAM / 100 GB disk | 192.168.66.142 | k8s worker node |
| k8s-harbor | CentOS 8.2 | 2 CPU / 4 GB RAM / 100 GB disk | 192.168.66.143 | registry |
| koolshare | Win10 64-bit | 1 CPU / 4 GB RAM / 20 GB disk | 192.168.66.144 | soft router |
Basic environment configuration

Set a static IP

Edit the config file

On CentOS 7 the NIC configuration files live under /etc/sysconfig/network-scripts. Edit the interface file and set:

```
BOOTPROTO="static"
DNS1="192.168.66.2"
IPADDR="192.168.66.141"
NETMASK="255.255.255.0"
GATEWAY="192.168.66.2"
```

Restart the network:

```shell
service network restart
```
Disable firewalld and enable iptables

Stop and disable firewalld:

```shell
systemctl stop firewalld && systemctl disable firewalld
```

Install and enable iptables:

```shell
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
```
Hostnames / hosts-file resolution

In larger environments, it is better to map hostnames to IPs through DNS.

Set the hostnames:

```shell
hostnamectl set-hostname k8s-master-01
hostnamectl set-hostname k8s-node-01
hostnamectl set-hostname k8s-node-02
hostnamectl set-hostname k8s-harbor
hostnamectl set-hostname koolshare
```

Add hosts entries (vim /etc/hosts):

```
192.168.66.140 k8s-master-01
192.168.66.141 k8s-node-01
192.168.66.142 k8s-node-02
192.168.66.143 k8s-harbor
192.168.66.144 koolshare
```

Copy the file to the other servers:

```shell
scp /etc/hosts root@k8s-node-01:/etc/hosts
```
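The per-node commands above can be scripted. A minimal sketch (POSIX sh; the hostnames and IPs are hardcoded from the planning table, and the privileged rollout steps are left commented out so the script can be dry-run anywhere):

```shell
#!/bin/sh
# Hostname -> IP pairs, taken from the planning table above.
nodes="k8s-master-01:192.168.66.140
k8s-node-01:192.168.66.141
k8s-node-02:192.168.66.142
k8s-harbor:192.168.66.143
koolshare:192.168.66.144"

hosts_file=$(mktemp)

# Emit one "IP hostname" line per node, the format /etc/hosts expects.
echo "$nodes" | while IFS=: read -r name ip; do
    printf '%s %s\n' "$ip" "$name"
done > "$hosts_file"

cat "$hosts_file"
# To roll out for real (requires root), append to /etc/hosts and copy it on:
#   cat "$hosts_file" >> /etc/hosts
#   scp /etc/hosts root@k8s-node-01:/etc/hosts
```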
Disable swap

Turn off swap now and disable it permanently (you can also just comment out the fstab line by hand):

```shell
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

Confirm swap is off (all zeros means disabled):

```shell
free -m
```
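The sed expression does nothing more than prefix every fstab line containing ` swap ` with `#`. A quick dry-run against a fabricated fstab fragment:

```shell
#!/bin/sh
# Fabricated fstab fragment -- for demonstrating the sed expression only.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF

# Same substitution as above: comment out any line containing " swap ".
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"

cat "$fstab"   # the root line is untouched, the swap line is commented out
```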
Disable SELinux

```shell
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```
Cluster time synchronization

Pick one node as the time server; here master01 is the server and all other nodes are clients.

Install chrony:

```shell
yum install -y chrony
```

Edit the config on the master (vi /etc/chrony.conf):

```
server 192.168.66.140 iburst
allow 192.168.66.0/24
local stratum 10
```

Edit the config on the nodes (vi /etc/chrony.conf):

```
server 192.168.66.140 iburst
```

Confirm that sync works:

```shell
chronyc sources
```

Start the service, verify it is listening, and enable it at boot:

```shell
systemctl start chronyd
ss -unl | grep 123
systemctl enable chronyd
```

```shell
# Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
```
System log storage configuration

Background: since CentOS 7 the init system is systemd, so two logging systems (systemd-journald and rsyslogd) run in parallel; the steps below keep just one of them (journald).

Configure rsyslogd and systemd-journald:

```shell
# Directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single log file at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward to syslog
ForwardToSyslog=no
EOF
# Restart journald to apply the config
systemctl restart systemd-journald
```
Upgrade the kernel (if needed)

- Check the current kernel version:

```shell
uname -r
```

- Upgrade the kernel. Install the ELRepo repository:

```shell
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
```

List the kernels available from the ELRepo repo:

```shell
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
```

Install the new (mainline) kernel:

```shell
yum -y --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel
```

Barring surprises, the latest kernel is now installed. Update the grub configuration to boot it. Check the current default kernel:

```shell
yum install dnf
dnf install grubby
grubby --default-kernel
```

If it is not the new one, list all installed kernels:

```shell
grubby --info=ALL
```

then set the new kernel as the default:

```shell
grubby --set-default /boot/vmlinuz-5.3.8-1.el8.elrepo.x86_64
```
Install utility packages

```shell
yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
```
Install Docker

```shell
# Docker dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce repository
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# CentOS 8's yum repos have no containerd.io new enough for the latest docker-ce
# (docker-ce-3:19.03.11-3.el7.x86_64 needs containerd.io >= 1.2.2-3),
# so install a matching containerd.io from the Aliyun mirror:
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
# Install
yum -y install docker-ce docker-ce-cli
# Start
systemctl start docker
# Enable at boot
systemctl enable docker
# Configure the registry mirror in daemon.json
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://4bsnyw1n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
```
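A malformed daemon.json will keep dockerd from starting at all, so it is worth validating the JSON before restarting the daemon. A sketch that checks the same content with Python's stdlib json module (assumes python3 is on the PATH; jq, installed above, works just as well):

```shell
#!/bin/sh
# Write the same daemon.json as above to a temp file and check it parses.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://4bsnyw1n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Any JSON validator works; python3's stdlib json.tool is usually available.
python3 -m json.tool "$f" > /dev/null && echo "daemon.json: valid JSON"
```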
kube-proxy开启ipvs的前置条件
//1、加载netfilter模块 modprobe br_netfilter //2、添加配置文件 cat > /etc/sysconfig/modules/ipvs.modules <<EOF #!/bin/bash modprobe -- ip_vs modprobe -- ip_vs_rr modprobe -- ip_vs_wrr modprobe -- ip_vs_sh modprobe -- nf_conntrack_ipv4 EOF//3、赋予权限并引导 chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4# 报错 高版本的centos内核nf_conntrack_ipv4被nf_conntrack替换了,所以装不了 Module nf_conntrack_ipv4 not found. # 解决方法 modprobe -- nf_conntrack
Cluster installation

Configure the K8S yum repos (all nodes)

- Add the Aliyun YUM repositories

Method 1:

```shell
# Go to /etc/yum.repos.d/ and back up the existing CentOS-Base.repo
cd /etc/yum.repos.d/
mv CentOS-Base.repo CentOS-Base.repo.bak
# Download the Aliyun repo file
# CentOS 8:
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
# CentOS 7:
wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Cache the package metadata locally to speed up searching and installing
yum makecache
```

If `yum makecache` fails with `Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again`, check whether /etc/yum.repos.d/ contains an epel.repo file. If it does, rename it to epel.repo_bak (never back it up with a .repo extension!) and rerun the command.

CentOS 7 repo: http://mirrors.aliyun.com/repo/Centos-7.repo
CentOS 8 repo: http://mirrors.aliyun.com/repo/Centos-8.repo

Method 2 (the Kubernetes repo itself):

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
安装命令工具 (所有节点)
# 安装初始化工具、命令行管理工具、与docker的cri交互创建容器kubelet yum -y install kubeadm kubectl kubelet --disableexcludes=kubernetes# k8s开机自启 systemctl enable kubelet.service & systemctl start kubelet.service
Tab completion (all nodes)

kubectl and kubeadm do not come with tab completion enabled by default:

```shell
kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
# Takes effect after reopening the terminal
```
Download the required images (all nodes)

List the images that are needed:

```
[root@k8s-master-01 kubernetes]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
```
Generate a config file for downloading the images

```shell
# Print the default init configuration
kubeadm config print init-defaults > init.default.yaml
# Keep a working copy named init-config.yaml
cp init.default.yaml init-config.yaml
```
Edit the config file

Point imageRepository at the Aliyun mirror:

```yaml
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
```

Full file contents:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.66.140
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Image repository (Aliyun mirror)
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Kubernetes version
kubernetesVersion: 1.23.0
# Pod and service CIDR ranges
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```
下载k8s镜像(所有节点)
#下载镜像,使用上一步创建的配置文件 kubeadm config images pull --config=init-config.yaml# 拉取镜像信息 [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.0 [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.0 [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.0 [config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.0 [config/images] Pulled k8s.gcr.io/pause:3.2 [config/images] Pulled k8s.gcr.io/etcd:3.4.9-1 [config/images] Pulled k8s.gcr.io/coredns:1.7.0#镜像下载完成后就可以进行安装了
Initialize the master node

Initialization

```shell
# Older kubeadm versions:
kubeadm init --config=init-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# Newer kubeadm versions:
kubeadm init --config=init-config.yaml --upload-certs | tee kubeadm-init.log
```
The generated output looks like this:
```
[root@k8s-master-01 conf]# kubeadm init --config=init-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [192.168.66.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [192.168.66.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.503905 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ed51127d80b0fd5841cf3caf3b024e5cdf1e0883fc146a2577018dbb25c46400
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.66.140:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5ce43af4ee1c8d7e0185e6149dd697571e801480ebcf38c69d65977a1cdb749d
```
As the output instructs, run:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Inspect the certificates

```
ll /etc/kubernetes/pki
total 56
-rw-r--r--. 1 root root 1273 Aug 29 22:19 apiserver.crt
-rw-r--r--. 1 root root 1135 Aug 29 22:19 apiserver-etcd-client.crt
-rw-------. 1 root root 1675 Aug 29 22:19 apiserver-etcd-client.key
-rw-------. 1 root root 1679 Aug 29 22:19 apiserver.key
-rw-r--r--. 1 root root 1143 Aug 29 22:19 apiserver-kubelet-client.crt
-rw-------. 1 root root 1679 Aug 29 22:19 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1066 Aug 29 22:19 ca.crt
-rw-------. 1 root root 1679 Aug 29 22:19 ca.key
drwxr-xr-x. 2 root root  162 Aug 29 22:19 etcd
-rw-r--r--. 1 root root 1078 Aug 29 22:19 front-proxy-ca.crt
-rw-------. 1 root root 1679 Aug 29 22:19 front-proxy-ca.key
-rw-r--r--. 1 root root 1103 Aug 29 22:19 front-proxy-client.crt
-rw-------. 1 root root 1679 Aug 29 22:19 front-proxy-client.key
-rw-------. 1 root root 1675 Aug 29 22:19 sa.key
-rw-------. 1 root root  451 Aug 29 22:19 sa.pub
```
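The client certificates kubeadm issues expire after a year by default, so it helps to know how to read a certificate's validity window with openssl. A sketch against a throwaway self-signed cert; on the master you would point it at the files under /etc/kubernetes/pki instead:

```shell
#!/bin/sh
# Throwaway self-signed cert, generated only so the command below has input;
# on the master, use the real certs under /etc/kubernetes/pki.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout "$dir/apiserver.key" -out "$dir/apiserver.crt" -days 365 2>/dev/null

# Print who the cert is for and when it expires.
openssl x509 -noout -subject -startdate -enddate -in "$dir/apiserver.crt"
```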
At this point Kubernetes is installed on the master, but the cluster has no usable worker nodes yet and the container network is not configured.

Note the last few lines printed by kubeadm init: they contain the join command (kubeadm join) and the token needed to add nodes.

You can now verify the generated ConfigMaps with kubectl:

```shell
kubectl get -n kube-system configmap
```

A configMap named kubeadm-config is among them:

```
[root@k8s-master-01]# kubectl get -n kube-system configmap
NAME                                 DATA   AGE
coredns                              1      2m42s
extension-apiserver-authentication   6      2m44s
kube-proxy                           2      2m41s
kubeadm-config                       2      2m43s
kubelet-config-1.19                  1      2m43s
```
Join the worker nodes to the cluster

Run the join command on each node to add it to the master:

```shell
kubeadm join 192.168.66.140:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5ce43af4ee1c8d7e0185e6149dd697571e801480ebcf38c69d65977a1cdb749d
```
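The --discovery-token-ca-cert-hash value is not secret state: it is the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be recomputed from /etc/kubernetes/pki/ca.crt at any time. A sketch, demonstrated here against a throwaway self-signed CA since the real one only exists on the master:

```shell
#!/bin/sh
# Throwaway CA cert purely for demonstration; on the master you would use
# /etc/kubernetes/pki/ca.crt instead.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

# The hash is the SHA-256 digest of the CA's DER-encoded public key:
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

If the bootstrap token itself has expired, `kubeadm token create --print-join-command` on the master prints a complete fresh join command.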
Install a network plugin (flannel) (master node)

Plugin comparison

| Plugin | Performance | Isolation policy | Developer |
|---|---|---|---|
| kube-router | highest | supported | |
| calico | 2 | supported | |
| canal | 3 | supported | |
| flannel | 3 | none | CoreOS |
| romana | 3 | supported | |
| Weave | 3 | supported | Weaveworks |

If `kubectl get nodes` reports the master as NotReady, that is because no CNI network plugin has been installed yet.
Install the flannel plugin

Method 1:

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Verify the flannel plugin deployed successfully (Running means success)
kubectl get pods -n kube-system | grep flannel
```

Method 2 (offline):

```shell
# GitHub releases: https://github.com/flannel-io/flannel/tags
# Download flanneld-v0.15.1-amd64.docker, then load it into docker:
docker load -i flanneld-v0.15.1-amd64.docker
# Edit the local kube-flannel.yml to use the image you just loaded,
# then apply it:
kubectl apply -f kube-flannel.yml
```
Verify the plugin status

```shell
kubectl get pod -n kube-system
```

- Verify the cluster installation is complete

```
# List all nodes
kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   33m   v1.19.0
k8s-node-01     Ready    <none>   34s   v1.19.0
k8s-node-02     Ready    <none>   28s   v1.19.0

kubectl get pod -n kube-system -o wide
```

If any pod is in a failed state, run `kubectl --namespace=kube-system describe pod <pod_name>` to find the cause; the most common one is an image that has not finished downloading. If the installation failed, reset back to the initial state (kubeadm reset) and run the kubeadm init step again.
Common commands

```shell
# List pods in kube-system
kubectl get pod -n kube-system
# Watch for changes
kubectl get pod -n kube-system -w
# Detailed output
kubectl get pod -n kube-system -o wide
kubectl describe pod [pod name]
kubectl delete pod [pod name]
kubectl create -f [file name]
```
A tarball of the k8s 1.23.1 docker images is attached; load it with `docker load -i k8s-images.tar`.

Link: https://pan.baidu.com/s/1Cu0rf8m2CGD3fmyFmEpwuA
Extraction code: 0x0z