1.1 A CNI network problem keeps CoreDNS from starting

NetworkPlugin cni failed to set up pod "coredns-5c98db65d4-fr9nk_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 172.21.0.1/24

Fix: after resetting the k8s cluster, redeploy kube-flannel.

Turn off flannel's resource limits; otherwise it will be OOM-killed.
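A minimal recovery sketch, assuming flannel was deployed from a kube-flannel.yml manifest (the manifest path is an assumption; adjust to however flannel was originally deployed):

ifconfig cni0 down                 # take the stale bridge offline
ip link delete cni0                # delete it so the CNI plugin recreates it with the right subnet
kubectl apply -f kube-flannel.yml  # redeploy flannel (manifest path is an assumption)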

1.2 Another way CoreDNS fails to start: DNS resolution problems

The nameserver xxx.xxx.xxx.xxx value must not be empty; if it is, CoreDNS will also fail to start.

vim /etc/resolv.conf        # make sure it contains a valid nameserver entry, e.g.:
nameserver 114.114.114.114
systemctl daemon-reload
systemctl restart kubelet

2. Completely cleaning up the previous initialization after a k8s reset

kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/*        # CNI runtime state
rm -rf /var/lib/kubelet/*    # kubelet state
rm -rf /etc/cni/*            # CNI config
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker

Then run kubeadm init again.
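For reference, a minimal re-init sketch; the CIDR must match the Network value in flannel's ConfigMap (10.244.0.0/16 is flannel's shipped default, though the cni0 error in section 1.1 suggests this cluster used a different range):

kubeadm init --pod-network-cidr=10.244.0.0/16   # adjust to your flannel Network value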

3. journalctl -u kubelet shows the following kubelet error

Kubernetes fails to start with:

kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Cause:

docker and k8s (the kubelet) are configured with different cgroup drivers.

Fix:

Make the two consistent: use either systemd or cgroupfs for both. Since the official k8s docs point out that managing docker and k8s resources with cgroupfs while systemd manages the node's other processes can become unstable under heavy resource pressure, the recommendation is to switch both docker and k8s to systemd.

Cgroup drivers

When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. Systemd has a tight integration with cgroups and will allocate cgroups per process. It’s possible to configure your container runtime and the kubelet to use cgroupfs. Using cgroupfs alongside systemd means that there will then be two different cgroup managers.

Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources. When we have two managers we end up with two views of those resources. We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes running on the node becomes unstable under resource pressure.

Changing docker:

Edit or create /etc/docker/daemon.json and add the following:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Restart docker:

systemctl restart docker

Changing k8s:

Modify the kubelet:

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

vim /var/lib/kubelet/kubeadm-flags.env

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --hostname-override=10.249.176.86 --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1"

That is, add --cgroup-driver=systemd to the existing arguments.

Then restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet
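A quick sanity check that docker now agrees with the kubelet (the output wording may vary by docker version):

docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd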

4. Switching the kubeconfig default namespace from default to a custom namespace (yujia-k8s)

Example: kubectl get pod -n yujia-k8s becomes equivalent to kubectl get pod

vim /root/.kube/config

Under contexts:, add the namespace you want:

contexts:
#- context:
#    cluster: kubernetes
#    user: kubernetes-admin
#  name: kubernetes-admin@kubernetes
#current-context: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    namespace: yujia-k8s
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:

This makes our commands operate on resources in the yujia-k8s namespace instead of the default namespace, which is much more convenient.
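On reasonably recent kubectl, the same edit can be made without touching the file by hand; a one-line sketch:

kubectl config set-context --current --namespace=yujia-k8s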

5. A node cannot join the cluster

'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused

The kubelet is missing this file:

open /var/lib/kubelet/pki/kubelet.crt: no such file or directory

Fix: copy this certificate over from another node.
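A hedged sketch of the copy, where healthy-node is a placeholder hostname and copying the matching .key alongside the .crt is an assumption:

scp root@healthy-node:/var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/
scp root@healthy-node:/var/lib/kubelet/pki/kubelet.key /var/lib/kubelet/pki/   # assumed to pair with the .crt
systemctl restart kubelet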

6. Containers fail to start: OCI runtime state failed: fork/exec /usr/bin/runc: resource temporarily unavailable: : unknown

failed to start shim: fork/exec /usr/bin/containerd-shim: resource temporarily unavailable: unknown

Assessment: most likely a resource limit on the node, e.g. the process count exceeding what the host allows.
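Some quick checks for that hypothesis (a sketch; which limit actually bites depends on the distro and configuration):

ulimit -u                                   # per-user max processes
cat /proc/sys/kernel/pid_max                # system-wide PID ceiling
systemctl show docker --property=TasksMax   # systemd task limit on the docker unit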

After the Pod is created it stays stuck in ContainerCreating; running describe on the Pod shows the following Events:

Events:
Type     Reason                  Age               From                            Message
----     ------                  ----              ----                            -------
Normal   Scheduled               2m                default-scheduler               Successfully assigned 61f983b5-19ca-4b33-8647-6b279ae93812 to k8node3
Normal   SuccessfulMountVolume   2m                kubelet, k8node3                MountVolume.SetUp succeeded for volume "default-token-7r9jt"
Warning  FailedCreatePodSandBox  2m (x12 over 2m)  kubelet, k8node3                Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Normal   SandboxChanged          2m (x12 over 2m)  kubelet, k8node3                Pod sandbox changed, it will be killed and re-created.

The Events above carry very little usable information:

  • Failed create pod sandbox: the Google-provided pause container failed to start
  • oci runtime error: something went wrong at the runtime layer; the runtime in my environment is docker
  • connection reset by peer: the connection was reset
  • Pod sandbox changed, it will be killed and re-created: the Pod environment bootstrapped by the pause container changed, so the pause sandbox is killed and re-created

These messages do not pinpoint the root cause; all they really say is that SandBox creation failed. The next step is to look at the kubelet logs:

Oct 31 16:33:57 k8node3 kubelet[1865]: E1031 16:33:57.551282    1865 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Oct 31 16:33:57 k8node3 kubelet[1865]: E1031 16:33:57.551415    1865 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Oct 31 16:33:57 k8node3 kubelet[1865]: E1031 16:33:57.551459    1865 kuberuntime_manager.go:646] createPodSandbox for pod "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Oct 31 16:33:57 k8node3 kubelet[1865]: E1031 16:33:57.551581    1865 pod_workers.go:186] Error syncing pod 77b2b948-dce4-11e8-afec-b82a72cf3061 ("61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)"), skipping: failed to "CreatePodSandbox" for "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)" with CreatePodSandboxError: "CreatePodSandbox for pod \"61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"61f983b5-19ca-4b33-8647-6b279ae93812\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:286: decoding sync type from init pipe caused \\\"read parent: connection reset by peer\\\"\""
Oct 31 16:33:58 k8node3 kubelet[1865]: E1031 16:33:58.718255    1865 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Oct 31 16:33:58 k8node3 kubelet[1865]: E1031 16:33:58.718406    1865 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Oct 31 16:33:58 k8node3 kubelet[1865]: E1031 16:33:58.718443    1865 kuberuntime_manager.go:646] createPodSandbox for pod "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "61f983b5-19ca-4b33-8647-6b279ae93812": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:286: decoding sync type from init pipe caused \"read parent: connection reset by peer\""
Oct 31 16:33:58 k8node3 kubelet[1865]: E1031 16:33:58.718597    1865 pod_workers.go:186] Error syncing pod 77b2b948-dce4-11e8-afec-b82a72cf3061 ("61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)"), skipping: failed to "CreatePodSandbox" for "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)" with CreatePodSandboxError: "CreatePodSandbox for pod \"61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"61f983b5-19ca-4b33-8647-6b279ae93812\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:286: decoding sync type from init pipe caused \\\"read parent: connection reset by peer\\\"\""
Oct 31 16:36:02 k8node3 kubelet[1865]: E1031 16:36:02.114171    1865 kubelet.go:1644] Unable to mount volumes for pod "61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)": timeout expired waiting for volumes to attach or mount for pod "default"/"61f983b5-19ca-4b33-8647-6b279ae93812". list of unmounted volumes=[default-token-7r9jt]. list of unattached volumes=[default-token-7r9jt]; skipping pod
Oct 31 16:36:02 k8node3 kubelet[1865]: E1031 16:36:02.114262    1865 pod_workers.go:186] Error syncing pod 77b2b948-dce4-11e8-afec-b82a72cf3061 ("61f983b5-19ca-4b33-8647-6b279ae93812_default(77b2b948-dce4-11e8-afec-b82a72cf3061)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"61f983b5-19ca-4b33-8647-6b279ae93812". list of unmounted volumes=[default-token-7r9jt]. list of unattached volumes=[default-token-7r9jt]

The kubelet logs say roughly the same thing as describe; tailing them makes the constant Sandbox re-creation much more tangible. Since this is an OCI runtime error, the only place left to look is the docker logs:

Oct 31 16:33:58 k8node3 dockerd[1715]: time="2018-10-31T16:33:58.671146675+08:00" level=error msg="containerd: start container" error="oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:286: decoding sync type from init pipe caused \\\"read parent: connection reset by peer\\\"\"\n" id=029d9e843eedb822370c285b5abf1f37556461083d3bda2c7af38b3b00695b0f
Oct 31 16:33:58 k8node3 dockerd[1715]: time="2018-10-31T16:33:58.671871096+08:00" level=error msg="Create container failed with error: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:286: decoding sync type from init pipe caused \\\"read parent: connection reset by peer\\\"\"\n"
Oct 31 16:33:58 k8node3 dockerd[1715]: time="2018-10-31T16:33:58.717553371+08:00" level=error msg="Handler for POST /v1.27/containers/029d9e843eedb822370c285b5abf1f37556461083d3bda2c7af38b3b00695b0f/start returned error: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:286: decoding sync type from init pipe caused \\\"read parent: connection reset by peer\\\"\"\n"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.759631102+08:00" level=error msg="Handler for POST /v1.27/containers/207f0ffb4b5ecc5f8261af40cd7a2c4c2800a2c30b027c4fb95648f8c1b00274/stop returned error: Container 207f0ffb4b5ecc5f8261af40cd7a2c4c2800a2c30b027c4fb95648f8c1b00274 is already stopped"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.768603351+08:00" level=error msg="Handler for POST /v1.27/containers/03bf9bfcf4e3f66655b0124d6779ff649b2b00219b83645ca18b4bb08d1cc573/stop returned error: Container 03bf9bfcf4e3f66655b0124d6779ff649b2b00219b83645ca18b4bb08d1cc573 is already stopped"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.777073508+08:00" level=error msg="Handler for POST /v1.27/containers/7b37f5aee7afe01f209bcdc6b3568b522fb0bbda5cb4b322e10b05ec603f5728/stop returned error: Container 7b37f5aee7afe01f209bcdc6b3568b522fb0bbda5cb4b322e10b05ec603f5728 is already stopped"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.785774443+08:00" level=error msg="Handler for POST /v1.27/containers/1a01419973e4701b231556d74c619c30e0966889948e810b46567f08475ec431/stop returned error: Container 1a01419973e4701b231556d74c619c30e0966889948e810b46567f08475ec431 is already stopped"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.794198279+08:00" level=error msg="Handler for POST /v1.27/containers/c3c4049e7b1942395b3cc3a45cf0cc69b34bab6271cb940a70c7d9aed3ba6176/stop returned error: Container c3c4049e7b1942395b3cc3a45cf0cc69b34bab6271cb940a70c7d9aed3ba6176 is already stopped"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.802698120+08:00" level=error msg="Handler for POST /v1.27/containers/8d2c8a4cd5b43b071a9976251932955937d5b1f0f34dca1482cde4195df4747d/stop returned error: Container 8d2c8a4cd5b43b071a9976251932955937d5b1f0f34dca1482cde4195df4747d is already stopped"
Oct 31 16:34:22 k8node3 dockerd[1715]: time="2018-10-31T16:34:22.811103238+08:00" level=error msg="Handler for POST /v1.27/containers/7fdb697e251cec249c0a17f1fdcc6d76fbec13a60929eb0217c744c181702c1f/stop returned error: Container 7fdb697e251cec249c0a17f1fdcc6d76fbec13a60929eb0217c744c181702c1f is already stopped"

Besides the connection reset by peer we have already seen many times, the docker logs contain something new:

  • xxx is already stopped: it looks like a POST request was sent to the container API to stop a container that had already been stopped

What the docker logs and the kubelet logs have in common: the kubelet is recreating the Sandbox over and over.

Running docker container ls -a reveals a large number of pause containers in the Created state.

dmesg -T shows lots of oom-killer entries, so the initial judgment is an out-of-memory condition causing the system to kill processes.
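The relevant entries can be filtered out directly, e.g.:

dmesg -T | grep -iE 'oom-killer|killed process'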

This does not happen often. There are generally two kinds of OOM kill:

  • a process inside the pod exceeds the pod's Limit, triggering an OOM kill; the pod's exit Reason then shows OOMKilled
  • a process inside the pod sets its own memory cap, e.g. a JVM heap limit of 2G inside a pod whose Limit is 6G; if the program pushes usage past 2G, that also triggers an OOM kill

The difference between the two: the first shows up directly in the pod's Events; the second is nowhere in the Events, and instead you find invoked oom-killer entries in the host's dmesg.

This case looks like the second kind, so describe the pod again and check its Limits:

Containers:
  61f983b5-19ca-4b33-8647-6b279ae93812:
    Container ID:
    Image:          reg.lvrui.io/public/testpublish:latest
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2k
    Requests:
      cpu:     1
      memory:  2k
    Environment:
      key:  value
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7r9jt (ro)

Found it: the memory Limit. It shows as 2k because the resource was created with memory: 2000 (with no unit, the default is bytes), and k8s converted that to the lowercase-k notation, hence 2k.
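To illustrate the pitfall, a minimal hypothetical pod spec in the same heredoc style used earlier (the name, path, and image are all illustrative):

cat > /tmp/mem-limit-demo.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mem-limit-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    resources:
      limits:
        memory: 2000              # no unit: parsed as 2000 bytes, displayed as "2k"
        # memory: 2000Mi          # what was almost certainly intended
EOF
kubectl apply -f /tmp/mem-limit-demo.yaml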

In theory, going by earlier experience, this situation (actual usage exceeding the memory Limit) should be the first kind: a Terminated state with Reason OOMKilled shown in the Events. In practice there is no oom-kill entry in the Events, and the pod sits in ContainerCreating.

  • A pod Terminated with OOMKilled was bootstrapped normally first, and only then overflowed its memory
  • A pod stuck in ContainerCreating because the system invoked oom-killer was never created properly: not even its pause container could bootstrap the pod before the cgroup memory limit got it killed

Errors caused by memory resource limits in Kubernetes | Polar Snow Documentation

Fix (for the fork/exec resource temporarily unavailable error in item 6): we had set the k8s max pod count to 300, but the docker systemd service file's task limit defaulted to only 226, so containers could not be started; comment out that line and restart docker.
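A hedged sketch of where that limit lives (the unit file path varies by distro; TasksMax is the systemd directive that caps a unit's task count, and naming it here is an assumption about which line was commented out):

# in /usr/lib/systemd/system/docker.service (path is distro-dependent):
#   comment out the TasksMax line, or raise it, e.g. TasksMax=infinity
systemctl daemon-reload
systemctl restart docker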

Fixing docker's read unix @->/run/containerd/s/xxx read: connection reset by peer: unknown · zhangguanzhang's Blog

Troubleshooting docker exec failures | Stupig
