
by Chris Cooney


Security as standard in the land of Kubernetes

Security in Kubernetes needs some work, but there are some clear steps a team can take to make sure their information is safe.


I have been writing about Kubernetes for a while, and I use it every day at work. Our mission over the past several months has been to create the Fort Knox of Kubernetes clusters. Should an attacker get through the layers of networking, the firewalls and the WAFs, they will find themselves with very few options. The effort was spawned from a single piece of advice I got from the gentlemen at LearnK8s.


Kubernetes will automatically handle high availability, deployments, configuration and secrets for you, but it won’t enforce security on its own. That takes some work.


That’s not to say that Kubernetes is insecure; it is simply a different way of approaching software, one that favours continuous delivery, zero-downtime deployments and high availability. Kubernetes will do these things for you with minimal effort. Security, on the other hand, requires voluntary effort. This has been my job for the past few months, and I thought I’d share the lessons I’ve learned. First, the overarching ideal we had in mind.


Security as Standard

This is simple. It should not fall to every single engineer to make sure every layer of their application, deployment and configuration is secure. Think about the amount of rework involved in that approach:


  • The sheer potential for drift, and the complexity this introduces for support.
  • The silos this creates and the disjointed architecture you end up with, because everyone has a slightly different method based on their experience.

The aim is to require minimum developer effort and maximum automatic security.


So How Do We Achieve This?

This will form the meat of this article. I have compiled a list of tools and practices that one can use to help foster a technical ecosystem that creates a safe, secure environment for engineers to operate in.


This is by no means an exhaustive list. It’s more of a starter kit. Some of these ideas are fundamental, some more experimental. The trick is to find what works for you, but keep in mind the two metrics: minimum developer effort, maximum automatic security.


The Kubernetes NetworkPolicy

I have no doubt that the majority of people reading this article will know all about the NetworkPolicy in Kubernetes. When you’re starting out with security in a Kubernetes cluster, this is the gold. In short, it allows the engineer to lock down which other services can talk to this pod. For example, you can create a NetworkPolicy like this:


kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: my-other-app

We’ve stated something very specific here. We’ve said that there is only one thing that should be able to talk to our application: a pod labelled my-other-app. Should a rogue container be created inside your cluster, any attempts to communicate with my-app will be thwarted. However, we can take this further.


Network Policies are not only linked to apps; they can be hooked into entire namespaces, creating basic rules that govern all of the applications in there. So one can create a default-deny rule, which will essentially blacklist all traffic, and another NetworkPolicy will need to be made to whitelist communication.

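As a sketch of that pattern, a default-deny policy selects every pod in its namespace and allows nothing in, so each permitted route then needs its own explicit NetworkPolicy like the one above:

```yaml
# Deny all ingress traffic to every pod in this namespace.
# An empty podSelector matches all pods; with no ingress
# rules listed, no traffic is allowed in.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```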

We have a problem though. Not everyone is going to create and use a NetworkPolicy. They are inconvenient and tricky at times, and people will err on the side of ease and go for an open container. So far, we’ve got high developer effort and high security. This poses a question: how do we make life easier for the engineer while maintaining a high degree of security? The answer is Helm.


Utilise Helm for Consistency

Helm allows you to bundle up lots of Kubernetes resources into a single chart. Its most common use case is to make the deployment of third-party software trivial: you can peruse the huge collection of stable Helm charts at your leisure. Everyone who’s anyone has got a chart on there, and it should be your first port of call when you want to make some open-source tooling available in your Kubernetes cluster.


Light bulbs should be flicking on right about now. What if we create our own Helm chart and include a NetworkPolicy as standard? We can throw all sorts in there: a PodDisruptionBudget or HorizontalPodAutoscaler, for example. If you can ensure your engineers are using your Helm chart for their deployments, you can also ensure that the correct resources are in place with minimal effort on their part. You’ve just driven down the developer effort and kept the same level of security. Score!

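As a sketch, a shared chart can template the NetworkPolicy from earlier so every service deployed with it gets locked-down ingress for free. The chart layout and value names here are hypothetical; engineers would only supply appName and allowedApp in their values.yaml:

```yaml
# templates/networkpolicy.yaml in a hypothetical shared chart
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ .Release.Name }}-allow
spec:
  podSelector:
    matchLabels:
      app: {{ .Values.appName }}
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: {{ .Values.allowedApp }}
```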

So now your apps are limiting who they can and can’t talk to, but how do you ensure your application can’t run a bunch of destructive commands against the other applications around it?


Role Based Access Control (RBAC) for Services

RBAC is a very common feature in Kubernetes clusters. I would strongly advise turning it on. We typically use it for third-party applications and developer access, but we can also use it for our own software, in the form of ServiceAccount resources. We can link these to a specific Role or ClusterRole and, hey presto, no more risk of applications going rogue inside your cluster.

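A minimal sketch of that wiring, with illustrative names: a ServiceAccount bound to a Role that can only read pods. A pod then opts in by setting serviceAccountName: my-app in its spec, and anything it tries beyond get/list on pods is denied:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
---
# A Role that grants read-only access to pods in its namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
# Bind the ServiceAccount to the Role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-pod-reader
subjects:
- kind: ServiceAccount
  name: my-app
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```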

The Kubernetes documentation does a great job of taking you through the mechanics of this. Read it and be enlightened. My one bit of advice would be, once you’ve got your head around the essence of it, bake this into your helm chart. Don’t force engineers to do this because sooner or later, they’re going to avoid it. Or hate you. Or both. Remember, minimum developer effort.


Validate Your Yaml Before Deploying Using kube-score

So you’ve got a badass Helm chart for your applications, but not everything is an application. There are plenty of things, like Nexus servers and CI tools, that won’t be deployed through the typical means. Often, it’ll be a cheeky kubectl command from an engineer’s machine that opens up the widest vulnerability.


One trick is to use a tool named kube-score to provide an easy, consistent and objective measurement of the quality of a service. An element of trust is needed here: you could cook something up to prevent any yaml that fails the checks from being applied, but in the first instance, simply making your engineers aware of this tool is enough to get you going.

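Usage is a single command. The file and chart names below are illustrative, and since kube-score reads from stdin as well as files, it slots neatly into a CI pipeline:

```shell
# Score a rendered manifest (file name is an example)
kube-score score my-app.yaml

# Or score everything a chart would deploy
helm template ./my-chart | kube-score score -
```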

Test Your Cluster Configuration with kube-bench

There is a brilliant tool called kube-bench that will analyse the configuration of your nodes. It will test things like whether you have disabled privileged containers and whether your kubelet is communicating with the master nodes over HTTPS.

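The simplest way to run it is as a container on the node itself, roughly along these lines. Treat this as a sketch: the exact flags and mounts vary by kube-bench version and by how your cluster was installed, so check the project's README for your setup:

```shell
# Run the CIS Kubernetes benchmark checks against this node's config,
# mounting the host's config directories read-only so the tool can inspect them
docker run --rm --pid=host \
  -v /etc:/etc:ro \
  -v /var:/var:ro \
  aquasec/kube-bench:latest
```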

As with everything, it is unlikely that you’ll pass every metric — this thing is rigorous. Focus on the big ones, anything the tool highlights as critical. This is work you can do behind the scenes, without disrupting engineering, that will benefit you in the long run, should your cluster fall foul of a vulnerability elsewhere.


Create a Standard Set of Base Docker Images

One gaping hole in all of this is the application containers themselves. Vulnerabilities creep into those all the time, and this unknown quantity creates a wonderful little hiding place for difficult-to-diagnose bugs and vulnerabilities.


However, all is not lost. The good folks at Google have been working on Distroless, a set of docker images that makes docker alpine look like Windows 10. These things are very, very limited and do not give an intruder much wiggle room.

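A sketch of the idea for a Go service (the image tags and binary names are illustrative): build in a full image, then copy only the static binary into a distroless base, leaving no shell or package manager for an intruder to poke around with:

```dockerfile
# Build stage: full toolchain available
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: distroless base with no shell and no package manager
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```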

That said, you don’t need to use Distroless images. You can package up your own if you’re confident in your container-fu. The aim here is to limit the scope and scale of custom docker images and bring it to a manageable set of known tooling, and making those available to everyone who needs them.


And look out for the latest tag

Ensure all docker images that are deployed into the cluster are pinned to a version. The latest tag is a dangerous game indeed — you have absolutely no idea what software is running after each pod recycle.

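In practice that means every image reference in a pod spec carries an explicit version, or, stricter still, an immutable digest. The registry and app names below are examples:

```yaml
containers:
- name: my-app
  # Good: pinned to a version you can reason about
  image: registry.example.com/my-app:1.4.2
  # Bad: image: registry.example.com/my-app:latest
  # Strictest: pin by digest, e.g.
  # image: registry.example.com/my-app@sha256:<digest>
```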

Istio Mutual TLS

If you’ve been looking into Istio, you’ll know that it very recently announced v1.0.0. That’s it folks, it’s ready for production. Istio offers an absolute plethora of monitoring, tracing and resilience features. It also has an awesome feature for enabling mutual TLS between Istio-managed applications. If your application has an Envoy proxy, you’re in the money.


But what is Mutual TLS?

Mutual TLS is a little like HTTPS. It involves one party verifying the identity of the other and, subsequently, establishing an encrypted communications channel. However, HTTPS is predicated on only the client verifying the server. In mutual TLS, both parties verify each other.


Does it impact the applications?

This is the cool bit. Istio mutual TLS happens outside of the application, inside the Envoy proxy that is coupled with your applications. Your apps can send a request over HTTP and Istio will silently verify the source and encrypt the traffic for you. Developer effort? Absolutely nothing. Security benefits? Strong cryptographic encryption in transit within the cluster, and a massive blocker to man-in-the-middle (MITM) attacks. Not bad at all.

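The exact resource for switching this on has changed across Istio releases (v1.0 used a MeshPolicy for the same effect), so treat this as a sketch: in recent versions, a mesh-wide PeerAuthentication in the istio-system namespace requires mutual TLS for all sidecar-to-sidecar traffic:

```yaml
# Require mutual TLS for all workloads in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```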

Controversial: Ban the LoadBalancer and NodePort Service Types, Stick to ClusterIP

This one tends to split the room a little. We see the LoadBalancer service type crop up everywhere. It is often the default configuration for Helm charts that are seeking to allow ingress traffic from outside the cluster. There are some serious drawbacks to this:


  • Load Balancers cost money, both in transfer costs and running time. Engineers will use whatever is easiest and you can be damn sure, at any reasonable scale, those charges are going to mount. This wouldn’t be so bad, but it’s completely avoidable.
  • Each Load Balancer is going to talk to your nodes on a different port. The more ports you’ve got open, the greater the attack surface.
  • Who the hell knows what configuration is going on between those load balancers and their target nodes? It is totally at the discretion of whatever controller you’ve installed.

So what can we do instead?

Create a single way in: one Load Balancer. For us, that’s an AWS ALB, but you can use an NLB or a classic ELB if the mood takes you. The aim of the game is to create only one route and one set of ports that traffic flows in on. This minimises your attack surface. With all of your applications running as ClusterIP services, no additional port exposure is needed and the routing is done for you by Kubernetes. The only thing left to do is to pick a single ingress controller to run as a NodePort, perhaps NGINX or Traefik, and hey presto! You’ve got simple networking with a tiny attack surface.

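A sketch with illustrative names: each app stays a ClusterIP Service, and a single Ingress resource routes external traffic through the one ingress controller. (The Ingress API group has moved between versions; older clusters used extensions/v1beta1.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP   # the default; opens no node ports
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
# One route in: the ingress controller terminates external
# traffic and forwards it to the ClusterIP service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```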

Very Controversial: Take Control of the Build and Deployment Process

I’ve included this one as an opinion piece. It is much more controversial than the others that I have discussed. Teams often enjoy owning their own Continuous Delivery (CD) pipelines. This is something they will need to work with on a daily basis. There are classic disagreements amongst engineers about which CI/CD tool to use.


And this opens an attack vector, a big one.

Until you know, not think or guess, but know that your engineers are making use of the automation that has been put in place, all of this effort could be cancelled out by a single dodgy config. Your helm chart might wrap services in steel wool and spray it with holy water, but it’s no good if it isn’t being used.


By creating a unified CI/CD process, you can ensure the proper checks are being used. This has some serious benefits. If you wish to introduce a new tool, you build it in one place and everyone gets it immediately. Minimal rework, minimal effort, maximum security benefit. Topped off by consistency across teams.


The option we went for was Global Shared Libraries in Jenkins. This allowed us to create a single code base that defined the pipelines which all services use. I have written another article about this topic, if you’d like a more in depth look. (Warning, strong language.)


But a word of warning…

Think long and hard about this one. You’re taking on a huge maintenance and reliability burden. If the CI/CD tooling you’ve built is flaky, it impacts every developer, immediately. It is a single point of failure, and your life will be spent fighting fires. Think hard before shouldering this.


Often trust and education is the best solution…

Short of taking over the CI/CD process, the best thing you can do is be involved in the creation of each team’s set of tools. Answer questions, talk about capabilities. Make yourself available to give advice to teams. Give people autonomy and responsibility over their own solution by letting them know what good looks like. This approach doesn’t offer guarantees, but it presents a much more manageable operational overhead.


Okay, I’m Fresh Out.

I’ve avoided discussing specific networking techniques that we have employed and I’ve ducked around how we can write our apps in a more secure way. The trick here has been to get some quick wins that will seriously narrow the attack surface of any Kubernetes cluster while requiring very little effort from your software engineers.


Have fun on your journey. Kubernetes is a brilliant tool and with a little love and attention, it can form an amazing platform for your engineering squads.


I’m fully on the Kubernetes train and I shout about it on my Twitter all the time.


Translated from: https://www.freecodecamp.org/news/security-as-standard-in-the-land-of-kubernetes-50bfad74ca16/
