Dig into k8s! The Five Kubernetes Controller Types Explained
Table of contents
- 1: The five k8s controllers
- 1.1: Controller types in k8s
- 1.2: The Deployment controller
- 1.2.1: Testing the Deployment controller
- 1.3: The StatefulSet controller
- 1.3.1: Creating the headless Service and DNS resources
- 1.3.2: Creating the StatefulSet resource
- 1.4: The DaemonSet controller
- 1.4.1: Testing
- 1.5: The Job controller
- 1.5.1: Testing
- 1.6: The CronJob controller
- 1.6.1: Testing
- Questions are welcome in the comments!
1: The five k8s controllers
1.1: Controller types in k8s
- Kubernetes ships with many built-in controllers. Each one acts like a state machine that drives Pods toward a desired state and behavior.
1. Deployment: for deploying stateless services
2. StatefulSet: for deploying stateful services
3. DaemonSet: deploy once and every node runs a copy; typical use cases:
- running a cluster storage daemon on every node, e.g. glusterd or ceph
- running a log-collection daemon on every node, e.g. fluentd or logstash
- running a monitoring daemon on every node, e.g. Prometheus Node Exporter
4. Job: runs a task once
5. CronJob: runs a task on a schedule
Controllers are also called workloads. Day-to-day operations on an application's pods, such as scaling and upgrades, are carried out through its controller.
1.2: The Deployment controller
- Suited to stateless applications. A Deployment manages Pods and ReplicaSets, and provides rollout, replica management, rolling updates, and rollback. It also supports declarative updates, for example updating just the image.
1.2.1: Testing the Deployment controller
1. Write the YAML file and create an nginx pod resource
```
[root@master test]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3            # three replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master test]# kubectl create -f nginx-deployment.yaml    # create the pods
deployment.apps/nginx-deployment created
[root@master test]# kubectl get pod -w
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78cdb5b557-7tr9h   1/1     Running   0          13s
nginx-deployment-78cdb5b557-kbt7m   1/1     Running   0          13s
nginx-deployment-78cdb5b557-knd7n   1/1     Running   0          13s
^C
[root@master test]# kubectl get all    # lists every resource type
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-78cdb5b557-7tr9h   1/1     Running   0          44s
pod/nginx-deployment-78cdb5b557-kbt7m   1/1     Running   0          44s
pod/nginx-deployment-78cdb5b557-knd7n   1/1     Running   0          44s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   11d

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3         3         3            3           44s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       44s
```
2. View the controller's parameters, with either describe or edit
```
[root@master test]# kubectl describe deploy nginx-deployment    # or:
[root@master test]# kubectl edit deploy nginx-deployment
# Both show the controller's details: names, resources, events, and so on
...output omitted...
  strategy:
    rollingUpdate:          # the rolling-update mechanism
      maxSurge: 25%         # percentage of the desired pod count; the deployment may temporarily grow to 125%
      maxUnavailable: 25%   # at most 25% of the pods may be unavailable, so at least 75% stay up
    type: RollingUpdate
...output omitted...
```
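The percentages above are relative to the replica count. As a sketch (this stanza is not from the transcript above), the same limits can also be given as absolute pod counts, which is sometimes easier to reason about:

```yaml
# Hypothetical fragment: absolute counts instead of percentages.
# With replicas: 3, this allows at most 4 pods during an update
# and guarantees at least 2 stay available the whole time.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count
      maxUnavailable: 1    # at most one pod down at any moment
```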
3. View the controller's revision history; rolling updates build on it
```
[root@master test]# kubectl rollout history deploy/nginx-deployment
deployment.extensions/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
# Only one revision: no rolling update has happened yet; after one there would be two
```
1.3: The StatefulSet controller
1. Suited to stateful applications
2. Gives each Pod an independent lifecycle, with a stable startup order and a unique identity
3. Stable, unique network identifiers and persistent storage (for example, an etcd configuration becomes unusable if node addresses change)
4. Ordered, graceful deployment, scaling, deletion, and termination (for example, a MySQL primary/replica setup starts the primary first, then the replicas)
5. Ordered rolling updates
6. Typical use case: databases
Characteristics of a stateless service:
1) the Deployment treats all of its pods as identical
2) no ordering requirements
3) no constraint on which node a pod runs on
4) free to scale out and in
Characteristics of a stateful service:
1) instances differ from one another; each has its own identity and metadata, e.g. etcd or zookeeper
2) instances are not interchangeable, and the application may depend on external storage.
A regular Service versus a headless Service
Service: an access policy for a set of Pods. It provides a cluster IP for in-cluster communication, plus load balancing and service discovery.
Headless Service: needs no cluster IP and resolves directly to the Pods' own IPs; commonly used for stateful deployments with a StatefulSet.
1.3.1: Creating the headless Service and DNS resources
Because the pod IPs behind a stateful service are dynamic, a headless service has to be paired with DNS.
1. Write the YAML file and create the Service resource
```
[root@master test]# vim nginx-headless.yaml
apiVersion: v1
kind: Service            # a Service resource
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None        # no cluster IP
  selector:
    app: nginx
[root@master test]# kubectl create -f nginx-headless.yaml
service/nginx-headless created
[root@master test]# kubectl get svc    # list the service resources
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   11d
nginx-headless   ClusterIP   None         <none>        80/TCP    19s
# The freshly created headless service has no cluster IP
```
2. Set up the DNS service from a YAML file
```
[root@master test]# vim coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount     # provides an identity for processes in pods and for external users
metadata:
  name: coredns
  namespace: kube-system # target namespace
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole        # the role granting access rights
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding # binds the cluster role to the service account
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap          # controls how service discovery behaves
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |            # the CoreDNS configuration file
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@master test]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master test]# kubectl get pod,svc -n kube-system    # pods and services in the kube-system namespace
NAME                                        READY   STATUS    RESTARTS   AGE
pod/coredns-56684f94d6-8p44x                1/1     Running   0          30s
pod/kubernetes-dashboard-7dffbccd68-58qms   1/1     Running   2          11d

NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   31s
service/kubernetes-dashboard   NodePort    10.0.0.139   <none>        443:30005/TCP   11d
```
3. Create a test pod and verify DNS resolution
```
[root@master test]# kubectl run -it --image=busybox:1.28.4 --rm --restart=Never sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local    # resolution succeeded
/ # exit
pod "sh" deleted
```
1.3.2: Creating the StatefulSet resource
1. Write the YAML files and clean up the previous resources
```
[root@master test]# vim statefulset-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None      # make it a headless service
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3          # three replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
[root@master test]# vim pod-dns-test.yaml    # a pod resource for testing DNS
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master test]# kubectl delete -f .    # first delete all the earlier resources
```
2. Create the resources and test
```
[root@master test]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master test]# kubectl create -f statefulset-test.yaml
service/nginx created
statefulset.apps/nginx-statefulset created
[root@master test]# kubectl create -f pod-dns-test.yaml
pod/dns-test created
[root@master test]# kubectl get pod,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/dns-test              1/1     Running   0          37s
pod/nginx-statefulset-0   1/1     Running   0          56s
pod/nginx-statefulset-1   1/1     Running   0          39s
pod/nginx-statefulset-2   1/1     Running   0          21s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   12d
service/nginx        ClusterIP   None         <none>        80/TCP    56s
[root@master test]# kubectl exec -it dns-test sh    # enter the pod to test
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-statefulset-0.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-0.nginx
Address 1: 172.17.78.2 nginx-statefulset-0.nginx.default.svc.cluster.local
/ # exit
```
Unlike a Deployment's pods, a StatefulSet's pods have identity (an ordinal index makes each one unique).
The three elements of that identity:
1. A domain name: nginx-statefulset-0.nginx
2. A hostname: nginx-statefulset-0
3. Storage (a PVC)
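The test StatefulSet above does not request any storage, so as a sketch of the third element, here is roughly what a volumeClaimTemplates section could look like (the claim name `www` and the storage class `managed-nfs-storage` are assumptions for illustration, not from this article):

```yaml
# Hypothetical fragment inside the StatefulSet spec: each replica gets its
# own PVC, e.g. www-nginx-statefulset-0, which survives pod rescheduling.
# This per-pod claim is the "storage" part of the identity.
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage   # assumed storage class
      resources:
        requests:
          storage: 1Gi
```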
Ordered deployment and scaling in a StatefulSet
Ordered deployment (from 0 up to N-1)
Ordered scale-in and deletion (from N-1 down to 0)
Whether deploying or deleting, the StatefulSet controller waits for each Pod to be Running and Ready, or fully terminated, before moving on to the next one.
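This ordering corresponds to the default pod management policy. As a sketch (not part of the transcript above), the relevant knobs in the StatefulSet spec look roughly like this:

```yaml
# Hypothetical fragment: OrderedReady is the default and gives the
# 0..N-1 / N-1..0 behavior described above; Parallel would disable it.
spec:
  podManagementPolicy: OrderedReady   # create in order, delete in reverse order
  updateStrategy:
    type: RollingUpdate               # pods are updated one at a time, highest ordinal first
```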
1.4: The DaemonSet controller
Runs one Pod on every node
A newly added node automatically gets a Pod as well
Use cases: monitoring, distributed storage, log collection, and so on
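One practical note: for use cases like monitoring, the daemon usually needs to run on the master node too, which the NoSchedule taint normally prevents. A hedged sketch of the toleration that is commonly added to the DaemonSet's pod template (not part of the example below):

```yaml
# Hypothetical fragment for the DaemonSet's pod template: tolerate the
# master taint so the daemon pod is scheduled on the master as well.
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
```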
1.4.1: Testing
1. Write the YAML file and create the resource
```
[root@master test]# vim daemonset-test.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master test]# kubectl create -f daemonset-test.yaml
daemonset.apps/nginx-daemonset created
```
2. Check where the pods were placed
```
[root@master test]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
dns-test                1/1     Running   0          18m   172.17.14.4   192.168.233.133   <none>
nginx-daemonset-m8lm5   1/1     Running   0          41s   172.17.78.5   192.168.233.132   <none>
nginx-daemonset-sswfq   1/1     Running   0          41s   172.17.14.5   192.168.233.133   <none>
nginx-statefulset-0     1/1     Running   0          18m   172.17.78.2   192.168.233.132   <none>
nginx-statefulset-1     1/1     Running   0          18m   172.17.14.3   192.168.233.133   <none>
nginx-statefulset-2     1/1     Running   0          18m   172.17.78.4   192.168.233.132   <none>
# The DaemonSet pods have been placed on both node machines
```
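If the daemon should only run on a subset of nodes rather than all of them, a nodeSelector can be added to the pod template. A hedged sketch (the `disk=ssd` label is an assumption for illustration, not from this article):

```yaml
# Hypothetical fragment: first label a node, e.g.
#   kubectl label node 192.168.233.132 disk=ssd
# then the DaemonSet only schedules pods where the label matches.
    spec:
      nodeSelector:
        disk: ssd    # assumed label
```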
1.5: The Job controller
- Runs a task exactly once, similar to a one-off job in Linux
- Use cases: offline data processing, video transcoding, and other batch work
1.5.1: Testing
1. Write the YAML file and create the resource
```
[root@master test]# vim job-test.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]    # computes pi to 2000 digits
      restartPolicy: Never
  backoffLimit: 4    # retry limit; the default is 6. With restartPolicy: Never a failed
                     # pod is recreated on error, so cap the number of retries
[root@master test]# kubectl create -f job-test.yaml
job.batch/pi created
```
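The Job above runs to completion once. As a sketch (these two fields are not in the example above), a Job can also be asked to run several times, with bounded parallelism:

```yaml
# Hypothetical fragment for the Job spec: run the task 5 times,
# at most 2 pods at once; the Job completes after 5 successes.
spec:
  completions: 5    # total number of successful runs required
  parallelism: 2    # at most 2 pods running concurrently
```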
2. Watch the Job's pod
```
[root@master test]# kubectl get pod -w
NAME                    READY   STATUS              RESTARTS   AGE
dns-test                1/1     Running             0          23m
nginx-daemonset-m8lm5   1/1     Running             0          5m33s
nginx-daemonset-sswfq   1/1     Running             0          5m33s
nginx-statefulset-0     1/1     Running             0          23m
nginx-statefulset-1     1/1     Running             0          23m
nginx-statefulset-2     1/1     Running             0          23m
pi-dhzrg                0/1     ContainerCreating   0          50s
pi-dhzrg                1/1     Running             0          61s
pi-dhzrg                0/1     Completed           0          65s    # the Job ends once the run succeeds
^C
[root@master test]# kubectl logs pi-dhzrg    # the result is in the pod's log
3.1415926535897932384626433832795028841971693993751058209749445923078164062862...
(remaining digits of the 2000-digit output omitted)
```
3. View and delete the Job resource
```
[root@master test]# kubectl get job
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           65s        2m49s
[root@master test]# kubectl delete -f job-test.yaml
job.batch "pi" deleted
[root@master test]# kubectl get job
No resources found.
```
1.6: The CronJob controller
- Runs tasks on a schedule, like Linux crontab.
- Use cases: notifications, backups, and so on
1.6.1: Testing
1. Write the YAML file and create the resource (a job that prints hello every minute)
```
[root@master test]# vim cronjob-test.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@master test]# kubectl create -f cronjob-test.yaml
cronjob.batch/hello created
```
2. Watch the pods
```
[root@master test]# kubectl get pod -w
NAME                     READY   STATUS              RESTARTS   AGE
dns-test                 1/1     Running             0          32m
nginx-daemonset-m8lm5    1/1     Running             0          14m
nginx-daemonset-sswfq    1/1     Running             0          14m
nginx-statefulset-0      1/1     Running             0          32m
nginx-statefulset-1      1/1     Running             0          32m
nginx-statefulset-2      1/1     Running             0          32m
hello-1589946540-6wn5h   0/1     Pending             0          0s
hello-1589946540-6wn5h   0/1     ContainerCreating   0          0s
hello-1589946540-6wn5h   0/1     Completed           0          14s
hello-1589946600-dlt4c   0/1     Pending             0          0s
hello-1589946600-dlt4c   0/1     ContainerCreating   0          0s
hello-1589946600-dlt4c   0/1     Completed           0          16s
```
3. Check the logs
```
^C
[root@master test]# kubectl logs hello-1589946540-6wn5h
Wed May 20 03:49:18 UTC 2020
Hello from the Kubernetes cluster
[root@master test]# kubectl logs hello-1589946600-dlt4c
Wed May 20 03:50:22 UTC 2020
Hello from the Kubernetes cluster
[root@master test]# kubectl delete -f cronjob-test.yaml
cronjob.batch "hello" deleted
# Use CronJobs carefully and delete them when done, otherwise
# the finished pods pile up and consume resources
```
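The resource buildup from finished runs can also be capped in the CronJob spec itself. A hedged sketch (these fields are not in the example above):

```yaml
# Hypothetical fragment for the CronJob spec: keep only a few finished
# jobs around, and skip a run if the previous one is still going.
spec:
  successfulJobsHistoryLimit: 3   # finished successful jobs to retain (default 3)
  failedJobsHistoryLimit: 1       # finished failed jobs to retain (default 1)
  concurrencyPolicy: Forbid       # don't start a run while one is in progress (default Allow)
```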
Questions are welcome in the comments!