Introduction to Kubernetes

Kubernetes has the following features:
Service discovery and load balancing
Kubernetes can expose a container using a DNS name or its own IP address. If traffic into a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.

Storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage or a public cloud provider.

Automated rollouts and rollbacks
You can describe the desired state of your deployed containers with Kubernetes, and it changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove the existing containers, and adopt all of their resources into the new containers.

Automatic bin packing
Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions about managing the resources for containers.

Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.

Secret and configuration management
Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.

Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

1.0 Container Environment Installation

Docker environment installation: see the Docker container environment installation script (linked in the original post).
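
The script itself is only linked above; as a rough, hedged sketch of what such a script typically does on a CentOS 7 host (the mirror URL and package choices here are assumptions, not the author's script):

# Minimal Docker CE install sketch for CentOS 7 (assumed commands, not the linked script)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
docker --version    # confirm the daemon is installed and running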

1.1 Installing kubeadm

Prerequisites
1.1.1
A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as for some distributions that do not ship a package manager.
2 GB or more of RAM per machine (anything less will leave little room for your applications).
2 or more CPU cores.
1.1.2
Full network connectivity between all machines in the cluster (a public or a private network is fine).
1.1.3
Configure the firewall to allow the required traffic.
No duplicate hostnames, MAC addresses, or product_uuid values across nodes. See here for more details.
1.1.4
Set a distinct hostname on each node.
Open the required ports on each machine. See here for more details.
Set up mutual trust between the nodes on the internal network (e.g. passwordless SSH).
1.1.5
Swap disabled. You must disable swap for the kubelet to work properly.
Disable it permanently. A quick way to verify several of these prerequisites is sketched below.
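
A small verification sketch using standard Linux tools (run on each node; the placeholder IP is yours to fill in):

# MAC address and product_uuid must be unique across nodes
ip link show
sudo cat /sys/class/dmi/id/product_uuid

# At least 2 GB RAM and 2 CPU cores
free -h
nproc

# Nodes must be able to reach each other (replace the placeholder IP)
ping -c 3 <other-node-ip>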

2.1 Base Environment

Run on all nodes

# Set a hostname on each machine
hostnamectl set-hostname xxxx

# Update the corresponding /etc/hosts entries

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
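
To confirm these settings took effect before moving on, a quick verification sketch:

# br_netfilter should be loaded and both sysctls should print 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

# SELinux should report Permissive and swap usage should be 0
getenforce
free -h | grep -i swap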

2.2 Installing kubelet, kubeadm, and kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

systemctl enable --now kubelet
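
After the install you can confirm that all three components are at v1.20.9 and that the kubelet service is enabled (verification sketch):

kubeadm version -o short          # expect v1.20.9
kubectl version --client --short  # expect v1.20.9
kubelet --version
systemctl is-enabled kubelet      # the service will restart in a loop until kubeadm init/join runs; that is expected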

2.3 Bootstrapping the Cluster with kubeadm

1.0 Pull the required images

Run on all nodes

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
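
When the script finishes, the images should be present in the local Docker cache (a quick check; the grep pattern simply matches the mirror repository used above):

# Expect the seven images and tags listed in images.sh
docker images | grep lfy_k8s_images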

2.0 Initialize the control-plane (master) node

# On all machines, add a hosts entry for the control-plane endpoint; replace the IP with your own
echo "172.31.0.4  cluster-endpoint" >> /etc/hosts

# Initialize the control-plane node
# --apiserver-advertise-address: replace with this node's own IP
# --control-plane-endpoint: replace with the domain name you mapped above
# Note: the service and pod CIDRs must not overlap with each other or with the node network
kubeadm init \
--apiserver-advertise-address=172.31.0.4 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

Then continue with:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Simply follow the prompts that kubeadm prints, and keep the output from your own host (in particular the join commands and token).

3.0 Install the network plugin

Calico official site

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
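
Calico's default pool (192.168.0.0/16) matches the --pod-network-cidr used during kubeadm init above; if you chose a different pod CIDR you would need to edit CALICO_IPV4POOL_CIDR in calico.yaml before applying it. A quick check that the plugin comes up (the label below assumes the stock v3.20 manifest):

# calico-node should reach Running on every node; press Ctrl-C to stop watching
kubectl get pods -n kube-system -l k8s-app=calico-node -w

# Nodes switch from NotReady to Ready once the CNI is working
kubectl get nodes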

4.0 Join the worker nodes

[root@node1 ~]# kubeadm join master:6443 --token uhvk0u.0hhkg53emomlw6sk \
>     --discovery-token-ca-cert-hash sha256:8daa2863ff5dc8c5d2053af8ea54591ce24f75f7497b8cf4c0482f538737aed6
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.14. Latest validated version: 19.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]#

# Verify on the control-plane node
[root@master kubernets]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   98m     v1.20.9
node1    Ready    <none>                 4m9s    v1.20.9
node2    Ready    <none>                 3m48s   v1.20.9

Run the join command that your own kubeadm init printed.
Note that the join token is only valid for 24 hours.
To generate a fresh join command:
kubeadm token create --print-join-command
For a high-availability deployment, this is also the step where you would add additional control-plane nodes, using the join command that includes --control-plane.
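
If you are unsure whether the original token is still valid, you can inspect the existing tokens on the control-plane node before creating a new one (a small sketch using a standard kubeadm subcommand):

# List existing bootstrap tokens and their remaining TTL
kubeadm token list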

5.0 Verify the cluster

[root@master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6fcb5c5bcf-2pvr8   1/1     Running   0          39m
kube-system   calico-node-qmm52                          1/1     Running   1          39m
kube-system   coredns-5897cd56c4-b62q8                   1/1     Running   0          81m
kube-system   coredns-5897cd56c4-wd5hm                   1/1     Running   0          81m
kube-system   etcd-master                                1/1     Running   1          81m
kube-system   kube-apiserver-master                      1/1     Running   1          81m
kube-system   kube-controller-manager-master             1/1     Running   1          81m
kube-system   kube-proxy-ch67f                           1/1     Running   1          81m
kube-system   kube-scheduler-master                      1/1     Running   1          81m
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   81m   v1.20.9
[root@master ~]#

3.0 Install the Dashboard

1.0 Deploy the official web UI

Dashboard official site

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Run the command above to install the Dashboard. If you cannot download the manifest, create a file yourself and copy the following content into it:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
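
Whether you applied the remote manifest or a local copy of the content above, the Dashboard components land in the kubernetes-dashboard namespace; a quick verification sketch (the local filename recommended.yaml is just an assumption):

# Only needed if you saved the manifest locally, e.g. as recommended.yaml
kubectl apply -f recommended.yaml

# Both the dashboard and the metrics-scraper pods should reach Running
kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard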

2.0 Expose the port

[root@master kubernets]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# Change type: ClusterIP to type: NodePort
[root@master kubernets]# kubectl get svc -A | grep kubernetes-dashboard
# Find the assigned NodePort and allow it through your security group / firewall
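
If you prefer not to edit the Service interactively, the same change can be made with kubectl patch (a sketch; the JSON merge patch below simply sets spec.type to NodePort):

# Non-interactive alternative to 'kubectl edit': switch the Service to NodePort
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'

# Show the NodePort that was assigned
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard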

Access: https://<any node IP>:<NodePort>, for example https://139.198.165.238:32759

3.0 Create an access account

# Create an access account: prepare a yaml file, e.g. vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@master kubernets]# kubectl apply -f dash.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

4.0 Log in with a token

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

[root@master kubernets]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6ImNrZXI2VEo5NUVFX2ItelpfLVY0YUhhWWh3MmJNUlJ5c2ZSYmRhbWdyUUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWY5Y3RmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlOGY0ZmZhZi01YmQwLTQ0YjItYjI3NC0wYzZlNTUyYzgxMzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.slAyBwm-T5te-w6wC29TaI0O96hwyeloBp1JOHHVXbgycqSfHMZgk-LjUxjTxulqHpvHBe5ch7aRAL61jwNjZLdhf6qPFL7v4Qeu0BIZ_VMTKxYOl4OML-vG0afrO9pr7zjlkhIGgRjvw2HULc03rL7ByLI5rhiDDmWopSvJYjiI7MlfHWWhNB3yp4eMpi0RdorqR58xrV_wceFFbKYLG8O0OvnR0ylvF1Je_8BWbM9T64oxzu_x4-mwLHxM0yori9hfhgfWDZszgPUC1OZUd4gp17Rw6AfOQ4tw1oIwbdIa4SEb5-AghYwvzu3qby4K04UnD03NwfZCPVQIFw4Y9Q
[root@master kubernets]#

Copy the token and use it to log in to the Dashboard.
