Introduction

prometheus-operator
Prometheus: an excellent monitoring tool, or more accurately a complete monitoring solution, providing data collection, storage, processing, visualization, and alerting in one package. As the monitoring system recommended by the Kubernetes project, Prometheus is a natural fit for watching both the state of a Kubernetes cluster and the applications running on it.

Prometheus architecture diagram

So what does Prometheus Operator do?
An Operator, a pattern introduced by CoreOS, is an application-specific controller that extends the Kubernetes API to create, configure, and manage complex stateful applications such as databases, caches, and monitoring systems.
Put simply, Prometheus Operator is a tool for deploying and managing Prometheus on Kubernetes, with the goal of simplifying and automating the maintenance of the Prometheus components.
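This declarative model shows up in the Prometheus custom resource itself: you describe the server you want and the Operator reconciles the underlying StatefulSet for you. A minimal, illustrative sketch follows; the field names come from the monitoring.coreos.com/v1 API, but the name, namespace, and serviceAccountName values are assumptions for the demo, and the apply step is left as a comment because it needs a cluster with the Operator's CRDs installed.

```shell
# Write a minimal Prometheus custom resource to a scratch file.
cat > /tmp/prometheus-demo.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: demo
  namespace: monitoring
spec:
  replicas: 2                        # the Operator runs an HA pair
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector: {}         # pick up every ServiceMonitor
  resources:
    requests:
      memory: 400Mi
EOF

# kubectl apply -f /tmp/prometheus-demo.yaml   # needs a cluster + CRDs
grep 'kind:' /tmp/prometheus-demo.yaml
```

From this one object the Operator generates the StatefulSet, the configuration secret, and the scrape configuration, and keeps them in sync as the resource changes.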

Prometheus Operator architecture diagram

Pre-deployment preparation

1. Clone the kube-prometheus project

[root@k8s-master001 opt]# git clone https://github.com/prometheus-operator/kube-prometheus.git

2. Go to the kube-prometheus/manifests directory. It contains a pile of yaml files; since there are so many, we sort them into per-component subdirectories:

[root@k8s-master001 manifests]# ls -al
total 20
drwxr-xr-x. 10 root root  140 Sep 14 21:25 .
drwxr-xr-x. 12 root root 4096 Sep 14 21:11 ..
drwxr-xr-x.  2 root root 4096 Sep 14 21:23 adapter
drwxr-xr-x.  2 root root  189 Sep 14 21:22 alertmanager
drwxr-xr-x.  2 root root  241 Sep 14 21:22 exporter
drwxr-xr-x.  2 root root  254 Sep 14 21:23 grafana
drwxr-xr-x.  2 root root  272 Sep 14 21:22 metrics
drwxr-xr-x.  2 root root 4096 Sep 14 21:25 prometheus
drwxr-xr-x.  2 root root 4096 Sep 14 21:23 serviceMonitor
drwxr-xr-x.  2 root root 4096 Sep 14 21:11 setup

3. Change the nodeSelector in the yaml files
First, look at the labels currently on the nodes:

[root@k8s-master001 manifests]# kubectl get node --show-labels=true
NAME            STATUS   ROLES    AGE     VERSION   LABELS
k8s-master001   Ready    master   4d16h   v1.19.0   app.storage=rook-ceph,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master001,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-master002   Ready    master   4d16h   v1.19.0   app.storage=rook-ceph,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master002,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-master003   Ready    master   4d16h   v1.19.0   app.storage=rook-ceph,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master003,kubernetes.io/os=linux,node-role.kubernetes.io/master=,role=ingress-controller

Then change the nodeSelector in the yaml files under manifests to kubernetes.io/os: linux.
For example, vim setup/prometheus-operator-deployment.yaml:

      nodeSelector:
        kubernetes.io/os: linux

Adjust the remaining files the same way. A grep like the following shows which files may still need changing:

[root@k8s-master001 manifests]# grep -A1 nodeSelector prometheus/*
prometheus/prometheus-prometheus.yaml:  nodeSelector:
prometheus/prometheus-prometheus.yaml-    kubernetes.io/os: linux
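When many files need the same change, the edit can be scripted with sed. Below is a hedged sketch that rehearses the substitution on a scratch copy first; the file content is a made-up fragment that assumes a manifest still carries the deprecated beta.kubernetes.io/os key. Diff the real manifests before and after to verify the result.

```shell
# Rehearse the nodeSelector rewrite on a scratch directory.
mkdir -p /tmp/manifests-demo
cat > /tmp/manifests-demo/prometheus-operator-deployment.yaml <<'EOF'
      nodeSelector:
        beta.kubernetes.io/os: linux
EOF

# Replace the deprecated beta label key with the stable one in every file.
sed -i 's|beta\.kubernetes\.io/os: linux|kubernetes.io/os: linux|' /tmp/manifests-demo/*.yaml

grep 'kubernetes.io/os' /tmp/manifests-demo/prometheus-operator-deployment.yaml
```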

Deploying kube-prometheus

1. Install the operator

[root@k8s-master001 manifests]# kubectl apply -f setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
[root@k8s-master001 manifests]# kubectl get po -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-74d54b5cfc-xgqg7   2/2     Running   0          2m40s

2. Install the adapter

[root@k8s-master001 manifests]# kubectl apply -f adapter/
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
[root@k8s-master001 manifests]# kubectl get po -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-adapter-557648f58c-9x446    1/1     Running   0          41s
prometheus-operator-74d54b5cfc-xgqg7   2/2     Running   0          4m33s

3. Install alertmanager

[root@k8s-master001 manifests]# kubectl apply -f alertmanager/
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
[root@k8s-master001 ~]# kubectl get po -n monitoring
NAME                  READY   STATUS    RESTARTS   AGE
alertmanager-main-0   2/2     Running   0          53m
alertmanager-main-1   2/2     Running   0          3m3s
alertmanager-main-2   2/2     Running   0          53m

4. Install the exporter

[root@k8s-master001 manifests]# kubectl apply -f exporter/
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
[root@k8s-master001 manifests]# kubectl get po -n monitoring
NAME                  READY   STATUS    RESTARTS   AGE
node-exporter-2rvtt   2/2     Running   0          108s
node-exporter-9kwb6   2/2     Running   0          108s
node-exporter-9zlbb   2/2     Running   0          108s

5. Install metrics (kube-state-metrics)

[root@k8s-master001 manifests]# kubectl apply -f metrics
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
[root@k8s-master001 manifests]# kubectl get po -n monitoring
NAME                                  READY   STATUS    RESTARTS   AGE
kube-state-metrics-85cb9cfd7c-v9c4f   3/3     Running   0          2m8s

6. Install prometheus

[root@k8s-master001 manifests]# kubectl apply -f prometheus/
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
[root@k8s-master001 manifests]# kubectl get po -n monitoring
NAME               READY   STATUS    RESTARTS   AGE
prometheus-k8s-0   3/3     Running   1          94s
prometheus-k8s-1   3/3     Running   1          94s

7. Install grafana

[root@k8s-master001 manifests]# kubectl apply -f grafana/
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
[root@k8s-master001 manifests]# kubectl get po -n monitoring
NAME                      READY   STATUS    RESTARTS   AGE
grafana-b558fb99f-87spq   1/1     Running   0          3m14s

8. Install the serviceMonitors

[root@k8s-master001 manifests]# kubectl apply -f serviceMonitor/
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

9. Check everything that is running

[root@k8s-master001 manifests]# kubectl get po,svc -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          90m
alertmanager-main-1                    2/2     Running   0          40m
alertmanager-main-2                    2/2     Running   0          90m
grafana-b558fb99f-87spq                1/1     Running   0          4m56s
kube-state-metrics-85cb9cfd7c-v9c4f    3/3     Running   0          10m
node-exporter-2rvtt                    2/2     Running   0          35m
node-exporter-9kwb6                    2/2     Running   0          35m
node-exporter-9zlbb                    2/2     Running   0          35m
prometheus-adapter-557648f58c-9x446    1/1     Running   0          91m
prometheus-k8s-0                       3/3     Running   1          7m49s
prometheus-k8s-1                       3/3     Running   1          7m49s
prometheus-operator-74d54b5cfc-xgqg7   2/2     Running   0          95m

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-main       ClusterIP   10.98.96.94     <none>        9093/TCP                     91m
service/alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   91m
service/grafana                 ClusterIP   10.108.204.33   <none>        3000/TCP                     6m30s
service/kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            12m
service/node-exporter           ClusterIP   None            <none>        9100/TCP                     36m
service/prometheus-adapter      ClusterIP   10.98.16.117    <none>        443/TCP                      93m
service/prometheus-k8s          ClusterIP   10.109.119.37   <none>        9090/TCP                     9m22s
service/prometheus-operated     ClusterIP   None            <none>        9090/TCP                     9m24s
service/prometheus-operator     ClusterIP   None            <none>        8443/TCP                     97m

10. Expose the grafana and prometheus services with NodePort to reach their UIs

---
apiVersion: v1
kind: Service
metadata:
  name: grafana-svc
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: grafana
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    prometheus: k8s

Check the result:

[root@k8s-master001 manifests]# kubectl get svc -n monitoring
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
grafana-svc      NodePort   10.99.31.100   <none>        3000:30438/TCP   9s
prometheus-svc   NodePort   10.102.245.8   <none>        9090:32227/TCP   3s

You can now open the UIs in a browser at NodeIP:30438 (grafana) and NodeIP:32227 (prometheus), where NodeIP is the IP of any Kubernetes node. You could also expose the services through an Ingress, as covered in an earlier post.
For example:
prometheus: http://10.26.25.20:32227

grafana: http://10.26.25.20:30438 (default credentials admin/admin; you must change the admin password after the first login)

That completes the kube-prometheus deployment; the monitoring data is now visible through Prometheus.

A few pitfalls

Pitfall 1

1. The Prometheus Targets page shows that kube-controller-manager and kube-scheduler are not being monitored

Fix
A ServiceMonitor selects Services by label. The relevant ServiceMonitors look for Services in the kube-system namespace carrying matching k8s-app labels, and no such Services exist by default:

[root@k8s-master001 manifests]# grep -A2 -B2 selector serviceMonitor/prometheus-serviceMonitorKube*
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml-    matchNames:
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml-    - kube-system
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml:  selector:
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml-    matchLabels:
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml-      k8s-app: kube-controller-manager
--
serviceMonitor/prometheus-serviceMonitorKubelet.yaml-    matchNames:
serviceMonitor/prometheus-serviceMonitorKubelet.yaml-    - kube-system
serviceMonitor/prometheus-serviceMonitorKubelet.yaml:  selector:
serviceMonitor/prometheus-serviceMonitorKubelet.yaml-    matchLabels:
serviceMonitor/prometheus-serviceMonitorKubelet.yaml-      k8s-app: kubelet
--
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml-    matchNames:
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml-    - kube-system
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml:  selector:
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml-    matchLabels:
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml-      k8s-app: kube-scheduler
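What has to line up is the ServiceMonitor's spec.selector.matchLabels against the labels on a Service (not on the pods), and that Service must live in a namespace listed under namespaceSelector.matchNames. Below is a trimmed, illustrative sketch of the two sides of that match, checked locally; the real shipped ServiceMonitor also sets endpoint details such as tlsConfig and bearerTokenFile, omitted here.

```shell
# Side 1: the ServiceMonitor shipped with kube-prometheus selects by label.
cat > /tmp/sm-demo.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-scheduler
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: kube-scheduler
EOF

# Side 2: the Service we have to create must carry exactly that label.
cat > /tmp/svc-demo.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler
EOF

# Cheap local sanity check: the label value is identical on both sides.
sm=$(grep 'k8s-app:' /tmp/sm-demo.yaml | awk '{print $2}')
svc=$(grep 'k8s-app:' /tmp/svc-demo.yaml | awk '{print $2}')
[ "$sm" = "$svc" ] && echo "labels match"
```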

2. Create Services for kube-controller-manager and kube-scheduler
Kubernetes v1.19 serves these metrics over HTTPS by default: kube-controller-manager on port 10257 and kube-scheduler on port 10259.
kube-controller-manager-scheduler.yml:

apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-controller-manager
  labels:
    k8s-app: kube-controller-manager
spec:
  selector:
    component: kube-controller-manager
  type: ClusterIP
  clusterIP: None
  ports:
  - name: https-metrics
    port: 10257
    targetPort: 10257
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-scheduler
  labels:
    k8s-app: kube-scheduler
spec:
  selector:
    component: kube-scheduler
  type: ClusterIP
  clusterIP: None
  ports:
  - name: https-metrics
    port: 10259
    targetPort: 10259
    protocol: TCP

Apply it:

[root@k8s-master001 manifests]# kubectl apply -f kube-controller-manager-scheduler.yml
[root@k8s-master001 manifests]# kubectl get svc -n kube-system
NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kube-controller-manager   ClusterIP   None         <none>        10257/TCP   37m
kube-scheduler            ClusterIP   None         <none>        10259/TCP   37m

3. Create Endpoints for kube-controller-manager and kube-scheduler
Note: change the addresses to your cluster's actual node IPs.
kube-ep.yml:

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: kube-controller-manager
  name: kube-controller-manager
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.26.25.20
  - ip: 10.26.25.21
  - ip: 10.26.25.22
  ports:
  - name: https-metrics
    port: 10257
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: kube-scheduler
  name: kube-scheduler
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.26.25.20
  - ip: 10.26.25.21
  - ip: 10.26.25.22
  ports:
  - name: https-metrics
    port: 10259
    protocol: TCP

[root@k8s-master001 manifests]# kubectl apply -f kube-ep.yml
endpoints/kube-controller-manager created
endpoints/kube-scheduler created
[root@k8s-master001 manifests]# kubectl get ep -n kube-system
NAME                      ENDPOINTS                                               AGE
kube-controller-manager   10.26.25.20:10257,10.26.25.21:10257,10.26.25.22:10257   16m
kube-scheduler            10.26.25.20:10259,10.26.25.21:10259,10.26.25.22:10259   16m

The Prometheus Targets page now shows kube-controller-manager and kube-scheduler being scraped.

Pitfall 2

1. By default, kube-controller-manager and kube-scheduler bind to 127.0.0.1. To monitor these two services you need to change their configuration so that they bind to 0.0.0.0.
2. The configuration files live in /etc/kubernetes/manifests:
Set --bind-address=0.0.0.0 in kube-controller-manager.yaml
Set --bind-address=0.0.0.0 in kube-scheduler.yaml
3. Restart the kubelet: systemctl restart kubelet
4. Check whether the change took effect; a 200 response means success:

[root@k8s-master002 manifests]# curl -I -k https://10.26.25.20:10257/healthz
HTTP/1.1 200 OK
Cache-Control: no-cache, private
Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
Date: Tue, 15 Sep 2020 06:19:32 GMT
Content-Length: 2
[root@k8s-master002 manifests]# curl -I -k https://10.26.25.20:10259/healthz
HTTP/1.1 200 OK
Cache-Control: no-cache, private
Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
Date: Tue, 15 Sep 2020 06:19:36 GMT
Content-Length: 2
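The --bind-address edit in step 2 can also be scripted, but a typo in a static pod manifest takes the component down, so rehearse on a scratch copy first. A sketch: the fragment below is a made-up excerpt of a kubeadm static pod manifest, and the real files are /etc/kubernetes/manifests/kube-controller-manager.yaml and kube-scheduler.yaml.

```shell
# Rehearse the bind-address change on a scratch copy.
mkdir -p /tmp/k8s-manifests
cat > /tmp/k8s-manifests/kube-scheduler.yaml <<'EOF'
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1
EOF

sed -i 's|--bind-address=127\.0\.0\.1|--bind-address=0.0.0.0|' /tmp/k8s-manifests/kube-scheduler.yaml

grep 'bind-address' /tmp/k8s-manifests/kube-scheduler.yaml
```

When the real files change, the kubelet normally picks up edits to the manifest directory by itself; restarting it as in step 3 simply forces the reload.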

Finally

kube-prometheus has many configuration options; this post only covers the most basic setup. For anything beyond that, consult the official documentation.

Note: the images in this article come from the internet. If any infringe your rights, please contact me and I will remove them promptly.
