Integrating Kubernetes with Calico

Overview

My previous Kubernetes clusters were all built on flannel, where the containers of every application can reach one another, which is a security concern. Calico supports network policy, so this document records building a Kubernetes cluster on Calico instead.

All required files have already been downloaded and are available in my GitHub repository, calico-kubernetes.

Environment

  • Host OS: CentOS 7.1 64-bit
  • VirtualBox 5.0.14
  • Vagrant 1.8.1
  • CoreOS alpha 928.0.0
  • Kubernetes v1.1.7
  • calicoctl v0.15.0
  • calico v1.0
  • calico-ipam v1.0

Installation

After the configuration files and components have been downloaded, the directory structure looks like this:

➜  coreos  tree
.
├── cloud-config
│   ├── calico
│   ├── calicoctl
│   ├── calico-ipam
│   ├── easy-rsa.tar.gz
│   ├── key.sh
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   ├── kubelet
│   ├── kube-proxy
│   ├── kube-scheduler
│   ├── make-ca-cert.sh
│   ├── master-config.yaml
│   ├── master-config.yaml.tmpl
│   ├── network-environment
│   ├── node-config.yaml_calico-02
│   ├── node-config.yaml_calico-03
│   ├── node-config.yaml.tmpl
│   └── setup-network-environment
├── manifests
│   ├── busybox.yaml
│   ├── kube-ui-rc.yaml
│   ├── kube-ui-svc.yaml
│   └── skydns.yaml
├── synced_folders.yaml
└── Vagrantfile

Downloading the required binaries

# Create the directory
mkdir cloud-config && cd cloud-config

## Download the Calico components
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
wget https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
wget https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam

## Download the Kubernetes components
wget http://storage.googleapis.com/kubernetes-release/release/v1.1.7/bin/linux/amd64/kubectl
wget http://storage.googleapis.com/kubernetes-release/release/v1.1.7/bin/linux/amd64/kubelet
wget http://storage.googleapis.com/kubernetes-release/release/v1.1.7/bin/linux/amd64/kube-proxy
wget http://storage.googleapis.com/kubernetes-release/release/v1.1.7/bin/linux/amd64/kube-apiserver
wget http://storage.googleapis.com/kubernetes-release/release/v1.1.7/bin/linux/amd64/kube-controller-manager
wget http://storage.googleapis.com/kubernetes-release/release/v1.1.7/bin/linux/amd64/kube-scheduler

## Download the network environment setup tool
wget https://github.com/kelseyhightower/setup-network-environment/releases/download/1.0.1/setup-network-environment

## Download the certificate generation tool (CoreOS's built-in tooling could also be used; this document does not cover that yet)
wget https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
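The downloaded files are plain binaries with no executable bit set. A minimal sketch (a hypothetical helper, not part of the original setup scripts) that verifies every required file is present in cloud-config and marks it executable before key.sh copies them into the VMs:

```shell
# Hypothetical helper: verify all downloaded binaries are present in the
# current directory (cloud-config) and mark them executable.
check_binaries() {
  missing=0
  for bin in calicoctl calico calico-ipam kubectl kubelet kube-proxy \
             kube-apiserver kube-controller-manager kube-scheduler \
             setup-network-environment; do
    if [ -f "$bin" ]; then
      chmod +x "$bin"
    else
      echo "missing: $bin" >&2
      missing=1
    fi
  done
  return $missing
}
```

Run it from inside cloud-config; it fails loudly if any wget above was skipped.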

cloud-init configuration templates

The files master-config.yaml, node-config.yaml_calico-02 and node-config.yaml_calico-03 in the directory are generated automatically from the corresponding .tmpl files when the cluster is started.
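The generation step is just a sed over the __HOSTNAMT__ placeholder (that spelling, typo and all, is what both the templates and the Vagrantfile use). Rendering a config by hand can be sketched as follows; GNU sed syntax is assumed:

```shell
# Render a config from a template by substituting the __HOSTNAMT__ placeholder,
# mirroring what the Vagrantfile does on `vagrant up`. Assumes GNU sed.
render_config() {  # usage: render_config <template> <output> <vm_name>
  cp "$1" "$2"
  sed -i -e "s|__HOSTNAMT__|$3|g" "$2"
}
```

For example: `render_config node-config.yaml.tmpl node-config.yaml_calico-02 calico-02`.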

Master cloud-init template

~/cloud-config/master-config.yaml.tmpl contains:

#cloud-config
---
write_files:
  # Network config file for the Calico CNI plugin.
  - path: /etc/cni/net.d/10-calico.conf
    owner: root
    permissions: 0755
    content: |
      {
          "name": "calico-k8s-network",
          "type": "calico",
          "etcd_authority": "172.18.18.101:2379",
          "log_level": "info",
          "ipam": {
              "type": "calico-ipam"
          }
      }
  # Kubeconfig file.
  - path: /etc/kubernetes/worker-kubeconfig.yaml
    owner: root
    permissions: 0755
    content: |
      apiVersion: v1
      kind: Config
      clusters:
      - name: local
        cluster:
          server: http://172.18.18.101:8080
      users:
      - name: kubelet
      contexts:
      - context:
          cluster: local
          user: kubelet
        name: kubelet-context
      current-context: kubelet-context
hostname: __HOSTNAMT__
coreos:
  update:
    reboot-strategy: off
  etcd2:
    name: "etcdserver"
    advertise-client-urls: http://$private_ipv4:2379
    initial-cluster: etcdserver=http://$private_ipv4:2380
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://0.0.0.0:2380
  fleet:
    metadata: "role=master"
    etcd_servers: "http://localhost:2379"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: setup-network-environment.service
      command: start
      content: |
        [Unit]
        Description=Setup Network Environment
        Documentation=https://github.com/kelseyhightower/setup-network-environment
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/chmod +x /opt/bin/setup-network-environment
        ExecStart=/opt/bin/setup-network-environment
        RemainAfterExit=yes
        Type=oneshot

        [Install]
        WantedBy=multi-user.target
    - name: kube-apiserver.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes API Server
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=etcd2.service
        After=etcd2.service

        [Service]
        ExecStart=/opt/bin/kube-apiserver \
          --allow-privileged=true \
          --etcd-servers=http://$private_ipv4:2379 \
          --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
          --insecure-bind-address=0.0.0.0 \
          --advertise-address=$private_ipv4 \
          --service-account-key-file=/srv/kubernetes/kubecfg.key \
          --tls-cert-file=/srv/kubernetes/server.cert \
          --tls-private-key-file=/srv/kubernetes/server.key \
          --service-cluster-ip-range=10.100.0.0/16 \
          --client-ca-file=/srv/kubernetes/ca.crt \
          --kubelet-https=true \
          --secure-port=443 \
          --runtime-config=extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/ingress=true \
          --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: kube-controller-manager.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Controller Manager
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=kube-apiserver.service
        After=kube-apiserver.service

        [Service]
        ExecStart=/opt/bin/kube-controller-manager \
          --master=$private_ipv4:8080 \
          --service_account_private_key_file=/srv/kubernetes/kubecfg.key \
          --root-ca-file=/srv/kubernetes/ca.crt \
          --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: kube-scheduler.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Scheduler
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=kube-apiserver.service
        After=kube-apiserver.service

        [Service]
        ExecStart=/opt/bin/kube-scheduler --master=$private_ipv4:8080 --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: calico-node.service
      command: start
      content: |
        [Unit]
        Description=calicoctl node
        After=docker.service
        Requires=docker.service

        [Service]
        #EnvironmentFile=/etc/network-environment
        User=root
        Environment="ETCD_AUTHORITY=127.0.0.1:2379"
        PermissionsStartOnly=true
        #ExecStartPre=/opt/bin/calicoctl pool add 192.168.0.0/16 --ipip --nat-outgoing
        ExecStart=/opt/bin/calicoctl node --ip=$private_ipv4 --detach=false
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: kubelet.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Kubelet
        Documentation=https://github.com/kubernetes/kubernetes
        After=docker.service
        Requires=docker.service

        [Service]
        ExecStart=/opt/bin/kubelet \
          --register-node=true \
          --pod-infra-container-image="shenshouer/pause:0.8.0" \
          --allow-privileged=true \
          --config=/opt/kubernetes/manifests \
          --cluster-dns=10.100.0.10 \
          --hostname-override=$private_ipv4 \
          --api-servers=http://localhost:8080 \
          --cluster-domain=cluster.local \
          --network-plugin-dir=/etc/cni/net.d \
          --network-plugin=cni \
          --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: kube-proxy.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Proxy
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=kubelet.service
        After=kubelet.service

        [Service]
        ExecStart=/opt/bin/kube-proxy \
          --master=http://$private_ipv4:8080 \
          --proxy-mode=iptables \
          --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
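One easy thing to get wrong in the template above is the CNI network config: the kubelet's CNI plugin expects /etc/cni/net.d/10-calico.conf to be valid JSON, and a stray comma from hand-editing silently breaks pod networking. A quick standalone check, assuming python3 is available on the workstation:

```shell
# Recreate the CNI config from the template above and verify it parses as JSON.
cat > 10-calico.conf <<'EOF'
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "172.18.18.101:2379",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    }
}
EOF
python3 -m json.tool 10-calico.conf > /dev/null && echo "10-calico.conf: valid JSON"
```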

~/cloud-config/node-config.yaml.tmpl contains:

#cloud-config
---
write_files:
  # Network config file for the Calico CNI plugin.
  - path: /etc/cni/net.d/10-calico.conf
    owner: root
    permissions: 0755
    content: |
      {
          "name": "calico-k8s-network",
          "type": "calico",
          "etcd_authority": "172.18.18.101:2379",
          "log_level": "info",
          "ipam": {
              "type": "calico-ipam"
          }
      }
  # Kubeconfig file.
  - path: /etc/kubernetes/worker-kubeconfig.yaml
    owner: root
    permissions: 0755
    content: |
      apiVersion: v1
      kind: Config
      clusters:
      - name: local
        cluster:
          server: http://172.18.18.101:8080
      users:
      - name: kubelet
      contexts:
      - context:
          cluster: local
          user: kubelet
        name: kubelet-context
      current-context: kubelet-context
hostname: __HOSTNAMT__
coreos:
  etcd2:
    proxy: on
    listen-client-urls: http://localhost:2379
    initial-cluster: etcdserver=http://172.18.18.101:2380
  fleet:
    metadata: "role=node"
    etcd_servers: "http://localhost:2379"
  update:
    reboot-strategy: off
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: setup-network-environment.service
      command: start
      content: |
        [Unit]
        Description=Setup Network Environment
        Documentation=https://github.com/kelseyhightower/setup-network-environment
        Requires=network-online.target
        After=network-online.target

        [Service]
        ExecStartPre=-/usr/bin/chmod +x /opt/bin/setup-network-environment
        ExecStart=/opt/bin/setup-network-environment
        RemainAfterExit=yes
        Type=oneshot

        [Install]
        WantedBy=multi-user.target
    - name: calico-node.service
      command: start
      content: |
        [Unit]
        Description=calicoctl node
        After=docker.service
        Requires=docker.service

        [Service]
        #EnvironmentFile=/etc/network-environment
        User=root
        Environment=ETCD_AUTHORITY=172.18.18.101:2379
        PermissionsStartOnly=true
        ExecStart=/opt/bin/calicoctl node --ip=$private_ipv4 --detach=false
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: kubelet.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Kubelet
        Documentation=https://github.com/kubernetes/kubernetes
        After=docker.service
        Requires=docker.service

        [Service]
        ExecStart=/opt/bin/kubelet \
          --address=0.0.0.0 \
          --allow-privileged=true \
          --cluster-dns=10.100.0.10 \
          --cluster-domain=cluster.local \
          --config=/opt/kubernetes/manifests \
          --hostname-override=$private_ipv4 \
          --api-servers=http://172.18.18.101:8080 \
          --pod-infra-container-image="shenshouer/pause:0.8.0" \
          --network-plugin-dir=/etc/cni/net.d \
          --network-plugin=cni \
          --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
    - name: kube-proxy.service
      command: start
      content: |
        [Unit]
        Description=Kubernetes Proxy
        Documentation=https://github.com/GoogleCloudPlatform/kubernetes
        Requires=kubelet.service
        After=kubelet.service

        [Service]
        ExecStart=/opt/bin/kube-proxy \
          --master=http://172.18.18.101:8080 \
          --proxy-mode=iptables \
          --logtostderr=true
        Restart=always
        RestartSec=10

        [Install]
        WantedBy=multi-user.target
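Both rendered files are consumed by coreos-cloudinit, which only treats a file as cloud-config if its first line is exactly `#cloud-config`; a render step that mangles or strips that comment line produces a file that is silently ignored. A small sanity check (a hypothetical helper, not part of the original files):

```shell
# Hypothetical check: a CoreOS cloud-config file must begin with the literal
# line "#cloud-config" or coreos-cloudinit will not process it.
is_cloud_config() {  # usage: is_cloud_config <file>
  [ "$(head -n 1 "$1")" = "#cloud-config" ]
}
```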

Cluster add-ons

The manifests folder contains the configuration files for the test tools and for add-ons such as DNS and kube-ui.

When the cluster is started, these files are automatically copied to the master node's home directory.

Vagrantfile configuration

require 'fileutils'
require 'yaml'

# Size of the cluster created by Vagrant
num_instances = 3

# Read YAML file with mountpoint details
MOUNT_POINTS = YAML::load_file('synced_folders.yaml')

module OS
  def OS.windows?
    (/cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM) != nil
  end

  def OS.mac?
    (/darwin/ =~ RUBY_PLATFORM) != nil
  end

  def OS.unix?
    !OS.windows?
  end

  def OS.linux?
    OS.unix? and not OS.mac?
  end
end

# Change basename of the VM
instance_name_prefix = "calico"

Vagrant.configure("2") do |config|
  # Always use Vagrant's insecure key
  config.ssh.insert_key = false
  # The box used to create the cluster VMs
  config.vm.box = "coreos-alpha-928.0.0"

  config.vm.provider :virtualbox do |v|
    # On VirtualBox, we don't have guest additions or a functional vboxsf
    # in CoreOS, so tell Vagrant that so it can be smarter.
    v.check_guest_additions = false
    v.memory = 2048
    v.cpus = 2
    v.functional_vboxsf = false
  end

  # Set up each box
  (1..num_instances).each do |i|
    vm_name = "%s-%02d" % [instance_name_prefix, i]
    config.vm.define vm_name do |host|
      host.vm.hostname = vm_name
      host.vm.synced_folder ".", "/vagrant", disabled: true
      # Mount the current directory into the VM at /vagrant for automated deployment
      begin
        MOUNT_POINTS.each do |mount|
          mount_options = ""
          disabled = false
          nfs = true
          if mount['mount_options']
            mount_options = mount['mount_options']
          end
          if mount['disabled']
            disabled = mount['disabled']
          end
          if mount['nfs']
            nfs = mount['nfs']
          end
          if File.exist?(File.expand_path("#{mount['source']}"))
            if mount['destination']
              host.vm.synced_folder "#{mount['source']}", "#{mount['destination']}",
                id: "#{mount['name']}",
                disabled: disabled,
                mount_options: ["#{mount_options}"],
                nfs: nfs
            end
          end
        end
      rescue
      end

      # IP address range for the VMs
      ip = "172.18.18.#{i+100}"
      host.vm.network :private_network, ip: ip
      host.vm.provision :shell, :inline => "/usr/bin/timedatectl set-timezone Asia/Shanghai ", :privileged => true
      # Copy the binaries into the right directories automatically
      host.vm.provision :shell, :inline => "chmod +x /vagrant/cloud-config/key.sh && /vagrant/cloud-config/key.sh ", :privileged => true
      #host.vm.provision :shell, :inline => "cp /vagrant/cloud-config/network-environment /etc/network-environment", :privileged => true
      # docker pull the required images
      host.vm.provision :docker, images: ["busybox:latest", "shenshouer/pause:0.8.0", "calico/node:v0.15.0"]

      sedInplaceArg = OS.mac? ? " ''" : ""

      if i == 1
        # Configure the master.
        system "cp cloud-config/master-config.yaml.tmpl cloud-config/master-config.yaml"
        system "sed -e 's|__HOSTNAMT__|#{vm_name}|g' -i#{sedInplaceArg} ./cloud-config/master-config.yaml"
        host.vm.provision :file, :source => "./manifests/skydns.yaml", :destination => "/home/core/skydns.yaml"
        host.vm.provision :file, :source => "./manifests/busybox.yaml", :destination => "/home/core/busybox.yaml"
        host.vm.provision :file, :source => "./cloud-config/master-config.yaml", :destination => "/tmp/vagrantfile-user-data"
        host.vm.provision :shell, :inline => "mv /tmp/vagrantfile-user-data /var/lib/coreos-vagrant/", :privileged => true
      else
        # Configure a node.
        system "cp cloud-config/node-config.yaml.tmpl cloud-config/node-config.yaml_#{vm_name}"
        system "sed -e 's|__HOSTNAMT__|#{vm_name}|g' -i#{sedInplaceArg} ./cloud-config/node-config.yaml_#{vm_name}"
        host.vm.provision :file, :source => "./cloud-config/node-config.yaml_#{vm_name}", :destination => "/tmp/vagrantfile-user-data"
        host.vm.provision :shell, :inline => "mv /tmp/vagrantfile-user-data /var/lib/coreos-vagrant/", :privileged => true
      end
    end
  end
end
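The sedInplaceArg switch in the Vagrantfile exists because BSD sed on macOS requires an explicit (possibly empty) backup suffix after -i, while GNU sed does not. The same logic as a standalone shell helper (an illustration, not taken from the repository):

```shell
# Portable in-place sed: BSD sed (macOS) needs `sed -i ''`, GNU sed just `-i`.
sed_inplace() {  # usage: sed_inplace <expression> <file>
  case "$(uname -s)" in
    Darwin) sed -i '' -e "$1" "$2" ;;
    *)      sed -i    -e "$1" "$2" ;;
  esac
}
```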

Starting the cluster

Run vagrant up in the directory containing the Vagrantfile. During startup you will be prompted for the host password, because host directories are mounted into the VMs.

During startup, docker inside each VM pulls the required images; wait for the automated deployment to finish.

Deploying the cluster add-ons

Run vagrant ssh calico-01 in the current directory to enter the master host:

core@calico-01 ~ $ ls
busybox.yaml  kube-ui-rc.yaml  kube-ui-svc.yaml  skydns.yaml

# Check the nodes
core@calico-01 ~ $ kubectl get node
NAME            LABELS                                 STATUS    AGE
172.18.18.101   kubernetes.io/hostname=172.18.18.101   Ready     3h
172.18.18.102   kubernetes.io/hostname=172.18.18.102   Ready     3h
172.18.18.103   kubernetes.io/hostname=172.18.18.103   Ready     3h

# Create the kube-system namespace and deploy the skydns rc and svc:
kubectl create -f skydns.yaml

# Deploy kube-ui
kubectl create -f kube-ui-rc.yaml
kubectl create -f kube-ui-svc.yaml

# Check the result
core@calico-01 ~ $ kubectl get po -o wide --namespace=kube-system
NAME                READY     STATUS    RESTARTS   AGE       NODE
kube-dns-v9-qd8i3   4/4       Running   0          3h        172.18.18.103
kube-ui-v5-8spea    1/1       Running   0          3h        172.18.18.102

# Deploy the busybox test pod
kubectl create -f busybox.yaml

core@calico-01 ~ $ kubectl get po -o wide
NAME      READY     STATUS    RESTARTS   AGE       NODE
busybox   1/1       Running   3          3h        172.18.18.101

# Verify DNS
core@calico-01 ~ $ kubectl exec busybox -- nslookup kubernetes
Server:    10.100.0.10
Address 1: 10.100.0.10

Name:      kubernetes
Address 1: 10.100.0.1

# Check Calico status
core@calico-01 ~ $ calicoctl status
calico-node container is running. Status: Up 3 hours
Running felix version 1.3.0rc6

IPv4 BGP status
IP: 172.18.18.101    AS Number: 64511 (inherited)
+---------------+-------------------+-------+----------+-------------+
|  Peer address |     Peer type     | State |  Since   |     Info    |
+---------------+-------------------+-------+----------+-------------+
| 172.18.18.102 | node-to-node mesh |   up  | 03:57:16 | Established |
| 172.18.18.103 | node-to-node mesh |   up  | 03:59:36 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 address configured.

# Inspect the busybox pod; it was allocated the address 192.168.0.0
core@calico-01 ~ $ kubectl describe po busybox
Name:               busybox
Namespace:          default
Image(s):           busybox
Node:               172.18.18.101/172.18.18.101
Start Time:         Tue, 02 Feb 2016 12:02:36 +0800
Labels:             <none>
Status:             Running
Reason:
Message:
IP:             192.168.0.0
Replication Controllers:    <none>
Containers:
  busybox:
    Container ID:       docker://2ffad63169095e31816bd10e45270e2d6add39480f61a5370ba76d7e4c5dd86b
    Image:              busybox
    Image ID:           docker://b175bcb790231169e232739bd2172bded9669c25104a9b723999c5f366ed7543
    State:              Running
      Started:          Tue, 02 Feb 2016 15:03:03 +0800
    Last Termination State:  Terminated
      Reason:           Error
      Exit Code:        0
      Started:          Tue, 02 Feb 2016 14:02:52 +0800
      Finished:         Tue, 02 Feb 2016 15:02:52 +0800
    Ready:              True
    Restart Count:      3
    Environment Variables:
Conditions:
  Type      Status
  Ready     True
Volumes:
  default-token-9u1ub:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-9u1ub
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath             Reason   Message
  ─────────  ────────  ─────  ────                     ─────────────             ──────   ───────
  3h         37m       4      {kubelet 172.18.18.101}  spec.containers{busybox}  Pulled   Container image "busybox" already present on machine
  37m        37m       1      {kubelet 172.18.18.101}  spec.containers{busybox}  Created  Created with docker id 2ffad6316909
  37m        37m       1      {kubelet 172.18.18.101}  spec.containers{busybox}  Started  Started with docker id 2ffad6316909

# Check the externally exposed services in the cluster info
core@calico-01 ~ $ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui

On the host, open http://172.18.18.101:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui in a browser to see the cluster information.
Deployment is complete and everything checks out. Have fun.
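A note on the busybox pod's address, 192.168.0.0: calicoctl's default IP pool is 192.168.0.0/16 (the same pool the commented-out `calicoctl pool add` line in the master template would create), and because Calico routes each workload endpoint as a /32, the pool's network address is handed out like any other. It is not an error. A quick containment check, assuming python3 on the workstation:

```shell
# Confirm 192.168.0.0 falls inside the default Calico pool 192.168.0.0/16.
python3 - <<'EOF'
import ipaddress
pool = ipaddress.ip_network("192.168.0.0/16")
addr = ipaddress.ip_address("192.168.0.0")
assert addr in pool
print(addr, "is inside pool", pool)
EOF
```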
