Why are banks paying me big bucks for something as simple as Kubernetes, when anyone can learn it in under three hours?

If you doubt me, I challenge you to give it a try! By the end of this article, you will be able to run a microservice-based application on a Kubernetes cluster. And I can guarantee this, because it's how I introduce our clients to Kubernetes.

What does this guide do differently from the other resources, Rinor?

Quite a lot! Most guides start with the simple stuff: Kubernetes concepts and kubectl commands. These guides assume the reader already knows about application development, microservices, and Docker containers.

In this article, we will go through:

  1. Running a Microservice based application on your computer.
  2. Building container images for each service of the Microservice application.
  3. An introduction to Kubernetes, and deploying a Microservice based application into a Kubernetes-managed cluster.

The gradual build-up provides the depth a mere mortal needs to grasp the simplicity of Kubernetes. Yes, Kubernetes is simple once you know the context it is used in. Without further ado, let's see what we will build.

Application Demo

The application has one functionality: it takes one sentence as input and, using text analysis, calculates the sentiment of the sentence.

From the technical perspective, the application consists of three microservices. Each has one specific functionality:

  • SA-Frontend: an Nginx web server that serves our ReactJS static files.

  • SA-WebApp: a Java web application that handles requests from the frontend.

  • SA-Logic: a Python application that performs sentiment analysis.

It's important to know that microservices don't live in isolation. They enable "separation of concerns", but they still have to interact with each other.

This interaction is best illustrated by showing how the data flows between them:

  1. A client application requests the index.html (which in turn requests the bundled scripts of the ReactJS application).
  2. The user interacting with the application triggers requests to the Spring WebApp.
  3. The Spring WebApp forwards the requests for sentiment analysis to the Python app.
  4. The Python application calculates the sentiment and returns the result as a response.
  5. The Spring WebApp returns the response to the React app, which then presents the information to the user.
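
To make the flow tangible, here is a toy model of the three services as plain Python functions. This is purely an illustration: the real services communicate over HTTP, and the polarity logic below is a made-up stand-in for TextBlob, not the real algorithm.

```python
def sa_logic(sentence):
    # Stand-in for the Python service: a fake polarity score,
    # NOT the real TextBlob logic.
    positive = {"like", "love", "great"}
    negative = {"hate", "awful"}
    words = [w.strip("!.,").lower() for w in sentence.split()]
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return {"sentence": sentence, "polarity": float(score)}

def sa_webapp(request_body):
    # Stand-in for the Spring WebApp: forwards the sentence to SA-Logic.
    return sa_logic(request_body["sentence"])

def sa_frontend(sentence):
    # Stand-in for the React app: wraps the typed sentence into a request body.
    return sa_webapp({"sentence": sentence})

print(sa_frontend("I like yogobella!"))
# {'sentence': 'I like yogobella!', 'polarity': 1.0}
```

The chain of calls mirrors steps 2 through 5 above: frontend to WebApp to Logic and back.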

The code for all of these applications can be found in this repository. I recommend cloning it immediately because we are going to build amazing things together.

1. Running a Microservice based application on your computer

We need to start up all three services. Let’s get started with the most attractive one, the front-end application.

Setting up React for Local Development

To start the React application, you need to have NodeJS and NPM installed on your computer. After installing those, navigate in your terminal to the directory sa-frontend and type the following command:

npm install

This downloads all the JavaScript dependencies of the React application and places them in the folder node_modules. (The dependencies are defined in the package.json file.) After all dependencies are resolved, execute the next command:

npm start

That's it! We started our React application, and by default you can access it on localhost:3000. Feel free to modify the code and see the effects immediately in the browser. That is made possible by Hot Module Replacement, which makes front-end development a breeze!

Making Our React App Production Ready

For production we need to build our application into static files and serve them using a web server.

To build the React application navigate in your terminal to the sa-frontend directory. Then execute the following command:

npm run build

This generates a folder named build in your project tree. This folder contains all the static files needed for our ReactJS application.

Serving static files with Nginx

Install and start the Nginx WebServer (how to). Then move the contents of the sa-frontend/build folder to [your_nginx_installation_dir]/html.

This way the generated index.html file will be accessible in [your_nginx_installation_dir]/html/index.html. This is the default file that Nginx serves.

By default the Nginx WebServer listens on port 80. You can specify a different port by updating server.listen property in the file [your_nginx_installation_dir]/conf/nginx.conf.

Open your browser and hit the endpoint localhost:80; you will see the ReactJS application appear.

Typing into the field ("Type your sentence.") and pressing the Send button will fail with a 404 error (you can check it in your browser console). But why? Let's inspect the code.

Inspecting the Code

In the file App.js we can see that pressing the Send button triggers the analyzeSentence method. The code for this method is shown below. (Each line that is commented with #Number will be explained below the script):

analyzeSentence() {
    fetch('http://localhost:8080/sentiment', {  // #1
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({sentence: this.textField.getValue()})  // #2
    })
        .then(response => response.json())
        .then(data => this.setState(data));  // #3
}

#1: URL at which a POST call is made. (An application should be listening for calls at that URL).

#2: The Request body sent to that application as displayed below:

{
    sentence: "I like yogobella!"
}
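
For reference, the same body can be produced with Python's standard json module; a tiny sketch that mirrors what JSON.stringify does on the frontend:

```python
import json

# Mirrors JSON.stringify({sentence: this.textField.getValue()})
body = json.dumps({"sentence": "I like yogobella!"})
print(body)  # {"sentence": "I like yogobella!"}
```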

#3: The response updates the component state, which triggers a re-render of the component. If we received the data (i.e. the JSON object containing the typed sentence and the polarity), we display the polarityComponent, because the condition is fulfilled and we define the component:

const polarityComponent = this.state.polarity !== undefined ?
    <Polarity sentence={this.state.sentence} polarity={this.state.polarity}/> :
    null;

Everything seems correct. But what are we missing? If you guessed that we didn’t set up anything to listen on localhost:8080, then you are correct! We must start our Spring Web application to listen on that port!

Setting up the Spring Web Application

To start up the Spring application, you need to have JDK 8 and Maven installed. (Their environment variables need to be set up as well.) After installing those you can continue to the next part.

Packaging the Application into a Jar

Navigate in your Terminal to the directory sa-webapp and type the following command:

mvn install

This will generate a folder named target in the directory sa-webapp. In the folder target we have our Java application packaged as a jar: 'sentiment-analysis-web-0.0.1-SNAPSHOT.jar'.

Starting our Java Application

Navigate to the target directory and start the application with the command:

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar

Darn... We got an error. Our application fails on startup, and our only lead is the exception in the stack trace:

Error creating bean with name 'sentimentController': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'sa.logic.api.url' in value "${sa.logic.api.url}"

The important information here is the placeholder sa.logic.api.url in the SentimentController. Let’s check that out!

Inspecting the Code

@CrossOrigin(origins = "*")
@RestController
public class SentimentController {

    @Value("${sa.logic.api.url}")    // #1
    private String saLogicApiUrl;

    @PostMapping("/sentiment")
    public SentimentDto sentimentAnalysis(@RequestBody SentenceDto sentenceDto) {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.postForEntity(
                saLogicApiUrl + "/analyse/sentiment",    // #2
                sentenceDto, SentimentDto.class).getBody();
    }
}
  1. The SentimentController has a field named saLogicApiUrl. The field gets defined by the property sa.logic.api.url.

  2. The String saLogicApiUrl is concatenated with the value "/analyse/sentiment". Together they form the URL for the sentiment analysis request.

Defining the Property

In Spring, the default property source is application.properties (located in sa-webapp/src/main/resources). But that's not the only way to define a property; it can also be done on the command line:


java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=WHAT.IS.THE.SA.LOGIC.API.URL
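
Alternatively, declaring the property in application.properties would look like the line below. This is a sketch; the value is the localhost:5000 address we settle on shortly, so adjust it to wherever your Python application actually runs:

```properties
# sa-webapp/src/main/resources/application.properties
sa.logic.api.url=http://localhost:5000
```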

The property should be initialized with the value that defines where our Python application is running. This way, our Spring web application knows where to forward messages at run time.

To make things simpler, let's decide that we will run the Python application on localhost:5000. Let's just not forget it!

Run the command below, and we are ready to move to the last service, the Python application.

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=http://localhost:5000

Setting up the Python Application

To start the Python application, we need to have Python3 and Pip installed. (Their environment variables need to be set up as well).

Installing Dependencies

Navigate in the Terminal to the directory sa-logic/sa (repo) and type the following command:

python -m pip install -r requirements.txt
python -m textblob.download_corpora
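
As an aside, requirements.txt presumably lists at least the two libraries the application imports, Flask and TextBlob. This is a sketch; check the repo for the exact contents and pinned versions:

```text
flask
textblob
```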

Starting the app

After using Pip to install the dependencies we are ready to start the application by executing the following command:

python sentiment_analysis.py
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

This means that our application is running and listening for HTTP Requests on port 5000 on localhost.

Inspecting the Code

Let’s investigate the code to understand what is happening in the SA Logic python application.

from textblob import TextBlob
from flask import Flask, request, jsonify

app = Flask(__name__)                                   #1

@app.route("/analyse/sentiment", methods=['POST'])      #2
def analyse_sentiment():
    sentence = request.get_json()['sentence']           #3
    polarity = TextBlob(sentence).sentences[0].polarity #4
    return jsonify(                                     #5
        sentence=sentence,
        polarity=polarity
    )

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)                  #6
  1. Instantiate a Flask object.
  2. Define the path at which a POST request can be made.
  3. Extract the "sentence" property from the request body.
  4. Instantiate an anonymous TextBlob object and get the polarity from the first sentence. (We have only one.)
  5. Return the response with the body containing the sentence and the polarity to the caller.
  6. Run the Flask object app to listen for requests on 0.0.0.0:5000 (calls to localhost:5000 will reach this app as well).

The services are set up to communicate with each other. Re-open the frontend on localhost:80 and give them a try before continuing!

In the next section, we will go over how to start the services in Docker Containers, as it is a prerequisite to being able to run them in a Kubernetes Cluster.

2. Building container images for each service

Kubernetes is a container orchestrator. Understandably, we need containers in order to orchestrate them. But what are containers? This is best answered by the Docker documentation:

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment.

It means that containers can run on any computer, even on the production server, with no differences.

For illustration purposes let’s compare how our React Application would be served using a Virtual Machine vs. a Container.

Serving React static files from a VM

The cons of using a Virtual Machine:

  1. Resource inefficient: each VM has the overhead of a fully-fledged OS.
  2. Platform dependent: what worked on your computer might not work on the production server.
  3. Heavyweight and slow to scale compared to containers.

Serving React static files from a Container

The pros of using a container:

  1. Resource efficient: it uses the host OS with the help of Docker.
  2. Platform independent: the container that you run on your computer will work anywhere.
  3. Lightweight, using image layers.

Those are the most prominent features and benefits of using containers. For more information continue reading on the Docker documentation.

Building the container image for the React App (Docker intro)

The basic building block for a Docker container is the Dockerfile. A Dockerfile starts with a base container image and follows with a sequence of instructions on how to build a new container image that meets the needs of your application.

Before we get started defining the Dockerfile, let's remember the steps we took to serve the React static files using Nginx:

  1. Build the static files (npm run build).
  2. Start up the Nginx server.
  3. Copy the contents of the build folder from your sa-frontend project to nginx/html.

In the next section, you will notice how building a container parallels what we did during the local React setup.

Defining the Dockerfile for SA-Frontend

The Dockerfile for SA-Frontend requires only two instructions. That is because the Nginx team provided us with a base image for Nginx, which we will build on top of. The two steps are:

  1. Start from the base Nginx image.

  2. Copy the sa-frontend/build directory to the container's nginx/html directory.

Converted into a Dockerfile, it looks like this:

FROM nginx
COPY build /usr/share/nginx/html

Isn't it amazing? It's even human-readable. Let's recapitulate:

Start from the nginx image (whatever the folks over there did). Copy the build directory to the nginx/html directory in the image. That's it!

You may be wondering how I knew where to copy the build files, i.e. /usr/share/nginx/html. Quite simple: it is documented for the nginx image on Docker Hub.

Building and Pushing the container

Before we can push our image, we need a Container Registry to host our images. Docker Hub is a free cloud container service that we will use for this demonstration. You have three tasks before continuing:

  1. Install Docker CE.
  2. Register on Docker Hub.
  3. Log in by executing the below command in your terminal:

docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"

After completing the above tasks, navigate to the directory sa-frontend. Then execute the command below (replace $DOCKER_USER_ID with your Docker Hub username, e.g. rinormaloku/sentiment-analysis-frontend):

docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend .

We can drop -f Dockerfile because we are already in the directory containing the Dockerfile.

To push the image, use the docker push command:

docker push $DOCKER_USER_ID/sentiment-analysis-frontend

Verify in your Docker Hub repository that the image was pushed successfully.

Running the container

Now the image $DOCKER_USER_ID/sentiment-analysis-frontend can be pulled and run by anyone:

docker pull $DOCKER_USER_ID/sentiment-analysis-frontend
docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend

Our Docker container is running!

Before we continue, let's elaborate on the 80:80 mapping, which I find confusing:

  • The first 80 is the port of the host (i.e. my computer).
  • The second 80 is the container port to which the calls should be forwarded.

It maps <hostPort> to <containerPort>, meaning that calls to host port 80 are forwarded to port 80 of the container, as shown in figure 9.

Because the container was mapped to host port 80, the application should be accessible on localhost:80. If you do not have native Docker support, you can open the application at <docker-machine ip>:80. To find your docker-machine IP, execute docker-machine ip.

Give it a try! You should be able to access the react application in that endpoint.

The Dockerignore

We saw earlier that building the image for SA-Frontend was slow; pardon me, extremely slow. That was because of the build context that had to be sent to the Docker daemon. The build context directory is defined by the last argument of the docker build command (the trailing dot). In our case, it included the following folders:

sa-frontend:
|   .dockerignore
|   Dockerfile
|   package.json
|   README.md
+---build
+---node_modules
+---public
\---src

But the only data we need is in the build folder. Uploading anything else is a waste of time. We can improve our build time by dropping the other directories. That's where .dockerignore comes into play. It will feel familiar because it's like .gitignore: add all the directories that you want to ignore to the .dockerignore file, as shown below:

node_modules
src
public

The .dockerignore file should be in the same folder as the Dockerfile. Now building the image takes only seconds.

Let’s continue with the Java Application.

Building the container image for the Java Application

Guess what! You learned almost everything about creating container images! That’s why this part is extremely short.

Open the Dockerfile in sa-webapp, and you will find only two new keywords:

sa-webapp中打开Dockerfile,您将仅找到两个新关键字:

ENV SA_LOGIC_API_URL http://localhost:5000
…
EXPOSE 8080

The keyword ENV declares an environment variable inside the Docker container. This will enable us to provide the URL for the Sentiment Analysis API when starting the container.

Additionally, the keyword EXPOSE exposes a port that we want to access later on. But hey, we didn't do that in the SA-Frontend Dockerfile. Good catch! EXPOSE is for documentation purposes only; in other words, it serves as information for the person reading the Dockerfile.

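
Putting the two keywords into context, the sa-webapp Dockerfile plausibly looks something like the sketch below. The base image is an assumption on my part; the jar name matches the one we built earlier, but consult the actual file in the repo:

```dockerfile
FROM openjdk:8-jdk-alpine
ENV SA_LOGIC_API_URL http://localhost:5000
ADD target/sentiment-analysis-web-0.0.1-SNAPSHOT.jar /
EXPOSE 8080
CMD ["java", "-jar", "sentiment-analysis-web-0.0.1-SNAPSHOT.jar"]
```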

You should be familiar with building and pushing the container image. If any difficulties arise, read the README.md file in the sa-webapp directory.

Building the container image for the Python Application

In the Dockerfile in sa-logic there are no new keywords. Now you can call yourself a Docker master!

sa-logic中的Dockerfile中,没有新的关键字。 现在您可以称自己为Docker-Master?。

For building and pushing the container image, read the README.md in the sa-logic directory.

Testing the Containerized Application

Can you trust something that you didn’t test? Neither can I. Let’s give these containers a test.

  1. Run the sa-logic container and configure it to listen on port 5050:

docker run -d -p 5050:5000 $DOCKER_USER_ID/sentiment-analysis-logic

2. Run the sa-webapp container and configure it to listen on port 8080. Additionally, we need to point it at the new address of the Python app by overriding the environment variable SA_LOGIC_API_URL:

$ docker run -d -p 8080:8080 -e SA_LOGIC_API_URL='http://<container_ip or docker machine ip>:5000' $DOCKER_USER_ID/sentiment-analysis-web-app

Check out the README on how to get the container IP or docker-machine IP.

3. Run the sa-frontend container:

docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend

We are done. Open your browser on localhost:80.

Attention: if you changed the port for sa-webapp, or if you are using a docker-machine IP, you need to update the App.js file in sa-frontend (in the method analyzeSentence) to fetch from the new IP or port. Afterwards, you need to rebuild and use the updated image.

Brain Teaser — Why Kubernetes?

In this section, we learned about the Dockerfile, how to use it to build an image, and the commands to push it to the Docker registry. Additionally, we investigated how to reduce the number of files sent to the build context by ignoring useless files. And in the end, we got the application running from containers. So why Kubernetes? We will investigate deeper into that in the next article, but I want to leave you a brainteaser.

  • Our Sentiment Analysis web app became a world hit, and we suddenly have a million requests per minute to analyze. The load on sa-webapp and sa-logic is huge. How can we scale the containers?

Introduction to Kubernetes

I promise, and I am not exaggerating, that by the end of the article you will ask yourself, "Why don't we call it Supernetes?"

If you followed this article from the beginning, we have covered a lot of ground and a lot of knowledge. You might worry that this will be the hardest part, but it is the simplest. The only reason learning Kubernetes seems daunting is the "everything else", and we covered that well.

What is Kubernetes

After we started our microservices from containers, we had a few questions. Let's elaborate in a Q&A format:

Q: How do we scale containers?
A: We spin up another one.

Q: How do we share the load between them? What if a server is already used to the maximum and we need another server for our container? How do we calculate the best hardware utilization?
A: Ahm... Ermm... (Let me google.)

Q: How do we roll out updates without breaking anything? And if we do break something, how can we go back to the working version?

Kubernetes solves all these questions (and more!). My attempt to reduce Kubernetes to one sentence would be: "Kubernetes is a container orchestrator that abstracts away the underlying infrastructure (where the containers are run)."

We have a faint idea about container orchestration, and we will see it in practice in the continuation of this article. But it's the first time we are reading about "abstracts the underlying infrastructure", so let's take a close-up look at this one.

Abstracting the underlying infrastructure

Kubernetes abstracts the underlying infrastructure by providing us with a simple API to which we can send requests. Kubernetes meets those requests to the best of its capabilities. For example, it can be as simple as requesting: "Kubernetes, spin up 4 containers of the image x." Kubernetes will then find under-utilized nodes on which to spin up the new containers (see Fig. 12).

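
Such a request can be expressed declaratively. A hedged sketch of what "spin up 4 containers of the image x" could look like as a Kubernetes Deployment manifest (all names and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-x-deployment
spec:
  replicas: 4                # "spin up 4 containers"
  selector:
    matchLabels:
      app: image-x
  template:
    metadata:
      labels:
        app: image-x
    spec:
      containers:
        - name: image-x
          image: x           # placeholder image name
```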

What does this mean for developers? That they don't have to care about the number of nodes, where containers are started, or how they communicate. They don't deal with hardware optimization or worry about nodes going down (and they will go down, per Murphy's Law), because new nodes can be added to the Kubernetes cluster, and in the meantime Kubernetes will spin up the containers on the nodes that are still running. It does this to the best of its capabilities.

In figure 12 we can see a couple of new things:

  • API Server: our only way to interact with the cluster, be it starting or stopping another container (err, *pods) or checking the current state, logs, etc.

  • Kubelet: monitors the containers (err, *pods) inside a node and communicates with the master node.

  • *Pods: initially, just think of pods as containers.

And we will stop here, as diving deeper would just loosen our focus, and we can always do that later. There are useful resources to learn from, like the official documentation (the hard way) or the amazing book Kubernetes in Action, by Marko Lukša.

Standardizing the Cloud Service Providers

Another strong point that Kubernetes drives home is that it standardizes Cloud Service Providers (CSPs). This is a bold statement, but let's elaborate with an example:

– An expert in Azure, Google Cloud Platform, or some other CSP ends up working on a project on an entirely new CSP, one they have no experience with. This can have many consequences, to name a few: they can miss the deadline; the company might need to hire more resources; and so on.

In contrast, with Kubernetes this isn't a problem at all, because you execute the same commands against the API Server no matter the CSP. You request what you want from the API Server in a declarative manner, and Kubernetes abstracts away and implements the how for the CSP in question.

Give it a second to sink in — this is an extremely powerful feature. For the company it means that they are not tied to a CSP. They can calculate their expenses on another CSP and move on. They will still have the expertise and the resources, and they can do it for cheaper!

All that said, in the next section we will put Kubernetes in Practice.

Kubernetes in Practice — Pods

We set up the Microservices to run in containers and it was a cumbersome process, but it worked. We also mentioned that this solution is not scalable or resilient and that Kubernetes resolves these issues. In continuation of this article, we will migrate our services toward the end result as shown in figure 13, where the Containers are orchestrated by Kubernetes.

In this article, we will use Minikube for debugging locally, though everything that will be presented works as well in Azure and in Google Cloud Platform.

Installing and Starting Minikube

Follow the official documentation for installing Minikube. During the Minikube installation you will also install kubectl, a client used to make requests to the Kubernetes API Server.

To start Minikube, execute the command minikube start, and after it completes, execute kubectl get nodes. You should get the following output:

kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    11m       v1.9.0

Minikube provides us with a Kubernetes cluster that has only one node, but remember: we do not care how many nodes there are. Kubernetes abstracts that away, and the number of nodes is of no importance for learning Kubernetes. In the next section, we will start with our first Kubernetes resource: [DRUM ROLLS] the Pod.

Pods

I love containers, and by now you love containers too. So why did Kubernetes decide to give us Pods as the smallest deployable compute unit? What does a pod do? A pod can be composed of one container, or even a group of containers, that share the same execution environment.

But do we really need to run two containers in one pod? Erm.. usually you run only one container, and that's what we will do in our examples. But for cases when, e.g., two containers need to share volumes, communicate with each other using inter-process communication, or are otherwise tightly coupled, Pods make that possible. Another thing Pods make possible is that we are not tied to Docker containers; if desired, we can use other container technologies, e.g. rkt.

To summarize, the main properties of Pods are (also shown in figure 14):

  1. Each pod has a unique IP address in the Kubernetes cluster.
  2. A pod can have multiple containers. The containers share the same port space, so they can communicate via localhost (understandably, they cannot use the same port), and communicating with containers of other pods has to be done through the pod IP.
  3. Containers in a pod share the same volume*, the same IP, port space, and IPC namespace.

*Containers have their own isolated filesystems, though they are able to share data using the Kubernetes resource Volumes.
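To make the volume-sharing concrete, here is a hypothetical sketch (not part of our application; the pod name, container names, and images are illustrative only) of a pod with two containers exchanging data through an emptyDir volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-demo        # hypothetical pod, for illustration only
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                 # scratch volume that lives as long as the pod
  containers:
    - name: writer                 # writes a file into the shared volume
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader                 # sees the same file through the same volume
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers can also reach each other via localhost, since they share the pod's network namespace.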

This is more than enough information for us to continue, but to satisfy your curiosity check out the official documentation.

Pod definition

Below we have the manifest file for our first pod sa-frontend, and then below we explain all the points.

apiVersion: v1
kind: Pod                                            # 1
metadata:
  name: sa-frontend                                  # 2
spec:                                                # 3
  containers:
    - image: rinormaloku/sentiment-analysis-frontend # 4
      name: sa-frontend                              # 5
      ports:
        - containerPort: 80                          # 6
  1. Kind: specifies the kind of Kubernetes resource that we want to create. In our case, a Pod.

  2. Name: defines the name of the resource. We named it sa-frontend.

  3. Spec: the object that defines the desired state for the resource. The most important property of a Pod's Spec is the array of containers.

  4. Image: the container image we want to start in this pod.

  5. Name: the unique name for a container within a pod.

  6. ContainerPort: the port on which the container is listening. This is only informative for the reader (omitting it doesn't restrict access).

Creating the SA Frontend pod

You can find the file for the above pod definition in resource-manifests/sa-frontend-pod.yaml. Either navigate in your terminal to that folder, or provide the full path on the command line. Then execute the command:

kubectl create -f sa-frontend-pod.yaml
pod "sa-frontend" created

To check if the Pod is running execute the following command:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
sa-frontend                   1/1       Running   0          7s

If it is still in ContainerCreating you can execute the above command with the argument --watch to update the information when the Pod is in Running State.

Accessing the application externally

To access the application externally we would create a Kubernetes resource of type Service, which is the proper implementation and will be covered shortly. For quick debugging, however, we have another option: port forwarding:

kubectl port-forward sa-frontend 88:80
Forwarding from 127.0.0.1:88 -> 80

Open your browser at 127.0.0.1:88 and you will get to the React application.

The wrong way to scale up

We said that one of Kubernetes' main features is scalability. To prove this, let's get another pod running. To do so, create another pod resource with the following definition:

apiVersion: v1
kind: Pod
metadata:
  name: sa-frontend2      # The only change
spec:
  containers:
    - image: rinormaloku/sentiment-analysis-frontend
      name: sa-frontend
      ports:
        - containerPort: 80

Create the new pod by executing the following command:

kubectl create -f sa-frontend-pod2.yaml
pod "sa-frontend2" created

Verify that the second pod is running by executing:

kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
sa-frontend                   1/1       Running   0          7s
sa-frontend2                  1/1       Running   0          7s

Now we have two pods running!

Attention: this is not the final solution, and it has many flaws. We will improve it in the section about another Kubernetes resource: Deployments.

Pod Summary

The Nginx web server with the static files is running inside two different pods. Now we have two questions:

  • How do we expose it externally to make it accessible via URL, and
  • How do we load balance between the pods?

Kubernetes provides us with the Service resource. Let's jump right into it, in the next section.

Kubernetes in Practice — Services

The Kubernetes Service resource acts as the entry point to a set of pods that provide the same functional service. This resource does the heavy lifting, of discovering services and load balancing between them as shown in figure 16.

In our Kubernetes cluster, we will have pods with different functional services (the frontend, the Spring WebApp, and the Flask Python application). So the question arises: how does a service know which pods to target? I.e., how does it generate the list of endpoints for the pods?

This is done using Labels, and it is a two-step process:

  1. Applying a label to all the pods that we want our Service to target, and
  2. Applying a “selector” to our Service that defines which labeled pods to target.

This is much simpler visually:

We can see that pods are labeled with “app: sa-frontend” and the service is targeting pods with that label.

Labels

Labels provide a simple method for organizing your Kubernetes Resources. They represent a key-value pair and can be applied to every resource. Modify the manifests for the pods to match the example shown earlier in figure 17.
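Concretely, matching figure 17 means adding a labels block to the pod's metadata. A sketch of what sa-frontend-pod.yaml would then look like (sa-frontend-pod2.yaml gets the same labels block, keeping its own name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-frontend
  labels:
    app: sa-frontend      # the label our Service will later select on
spec:
  containers:
    - image: rinormaloku/sentiment-analysis-frontend
      name: sa-frontend
      ports:
        - containerPort: 80
```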

Save the files after completing the changes, and apply them with the following command:

kubectl apply -f sa-frontend-pod.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend" configured
kubectl apply -f sa-frontend-pod2.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod "sa-frontend2" configured

We got a warning (apply instead of create, roger that). In the second line, we see that the pods “sa-frontend” and “sa-frontend2” are configured. We can verify that the pods were labeled by filtering the pods that we want to display:

kubectl get pod -l app=sa-frontend
NAME           READY     STATUS    RESTARTS   AGE
sa-frontend    1/1       Running   0          2h
sa-frontend2   1/1       Running   0          2h

Another way to verify that our pods are labeled is by appending the flag --show-labels to the above command, which will display all the labels for each pod.

Great! Our pods are labeled and we are ready to target them with our Service. Let's get started defining the Service of type LoadBalancer shown in Fig. 18.

Service definition

The YAML definition of the LoadBalancer Service is shown below:

apiVersion: v1
kind: Service              # 1
metadata:
  name: sa-frontend-lb
spec:
  type: LoadBalancer       # 2
  ports:
    - port: 80             # 3
      protocol: TCP        # 4
      targetPort: 80       # 5
  selector:                # 6
    app: sa-frontend       # 7
  1. Kind: a Service.

  2. Type: specification type; we choose LoadBalancer because we want to balance the load between the pods.

  3. Port: the port on which the service receives requests.

  4. Protocol: defines the communication protocol.

  5. TargetPort: the port to which incoming requests are forwarded.

  6. Selector: object that contains properties for selecting pods.

  7. app: sa-frontend: defines which pods to target; only pods labeled with “app: sa-frontend” are targeted.

To create the service execute the following command:

kubectl create -f service-sa-frontend-lb.yaml
service "sa-frontend-lb" created

You can check out the state of the service by executing the following command:

kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
sa-frontend-lb   LoadBalancer   10.101.244.40   <pending>     80:30708/TCP   7m

The External-IP is in pending state (and don’t wait, as it’s not going to change). This is only because we are using Minikube. If we would have executed this in a cloud provider like Azure or GCP, we would get a Public IP, which makes our services worldwide accessible.

Despite that, Minikube doesn't leave us hanging; it provides a useful command for local debugging. Execute the following command:

minikube service sa-frontend-lb
Opening kubernetes service default/sa-frontend-lb in default browser...

This opens your browser pointing to the service's IP. After the Service receives the request, it will forward the call to one of the pods (which one doesn't matter). This abstraction lets us see and act upon the numerous pods as one unit, using the Service as an entry point.

Service Summary

In this section, we covered labeling resources, using those as selectors in Services, and we defined and created a LoadBalancer service. This fulfills our requirements to scale the application (just add new labeled pods) and to Load balance between the pods, using the service as an entry point.

Kubernetes in Practice — Deployments

Kubernetes Deployments help us with the one constant in the life of every application: change. Moreover, the only applications that do not change are the ones that are already dead; while yours is alive, new requirements will come in, more code will be shipped, and it will be packaged and deployed. On each step of this process, mistakes can be made.

The Deployment resource automates the process of moving from one version of the application to the next, with zero downtime and in case of failures, it enables us to quickly roll back to the previous version.

Deployments in Practice

Currently, we have two pods and a Service exposing them and load balancing between them (see Fig. 19). We mentioned that deploying the pods separately is far from perfect: it requires managing each one separately (creating, updating, deleting, and monitoring its health). Quick updates and fast rollbacks are out of the question! This is not acceptable, and the Deployment Kubernetes resource solves each of these issues.

Before we continue let’s state what we want to achieve, as it will provide us with the overview that enables us to understand the manifest definition for the deployment resource. What we want is:

  1. Two pods of the image rinormaloku/sentiment-analysis-frontend,
  2. Zero-downtime deployments,
  3. Pods labeled with app: sa-frontend, so that they get discovered by the Service sa-frontend-lb.

In the next section, we will translate the requirements into a Deployment definition.

Deployment definition

The YAML resource definition that achieves all the above-mentioned points:

apiVersion: apps/v1
kind: Deployment                                          # 1
metadata:
  name: sa-frontend
spec:
  selector:                                               # 2
    matchLabels:
      app: sa-frontend
  replicas: 2                                             # 3
  minReadySeconds: 15
  strategy:
    type: RollingUpdate                                   # 4
    rollingUpdate:
      maxUnavailable: 1                                   # 5
      maxSurge: 1                                         # 6
  template:                                               # 7
    metadata:
      labels:
        app: sa-frontend                                  # 8
    spec:
      containers:
        - image: rinormaloku/sentiment-analysis-frontend
          imagePullPolicy: Always                         # 9
          name: sa-frontend
          ports:
            - containerPort: 80
  1. Kind: a Deployment.

  2. Selector: pods matching the selector will be taken under the management of this deployment.

  3. Replicas: a property of the deployment's Spec object that defines how many pods we want to run. So, only 2.

  4. Type: specifies the strategy used in this deployment when moving from the current version to the next. The strategy RollingUpdate ensures zero-downtime deployments.

  5. MaxUnavailable: a property of the RollingUpdate object that specifies the maximum number of unavailable pods allowed (compared to the desired state) when doing a rolling update. For our deployment, which has 2 replicas, this means that after terminating one pod we would still have one pod running, keeping our application accessible.

  6. MaxSurge: another property of the RollingUpdate object that defines the maximum number of pods added to a deployment (compared to the desired state). For our deployment, this means that when moving to a new version we can add one pod, for a total of 3 pods at the same time.

  7. Template: specifies the pod template that the Deployment will use to create new pods. Most likely the resemblance with Pods struck you immediately.

  8. app: sa-frontend: the label to use for the pods created by this template.

  9. ImagePullPolicy: when set to Always, it will pull the container image on each redeployment.

Honestly, that wall of text got even me confused, let’s just get started with the example:

kubectl apply -f sa-frontend-deployment.yaml
deployment "sa-frontend" created

As always let’s verify that everything went as planned:

kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
sa-frontend                    1/1       Running   0          2d
sa-frontend-5d5987746c-ml6m4   1/1       Running   0          1m
sa-frontend-5d5987746c-mzsgg   1/1       Running   0          1m
sa-frontend2                   1/1       Running   0          2d

We got 4 running pods, two pods created by the Deployment and the other two are the ones we created manually. Delete the ones we created manually using the command kubectl delete pod <pod-name>.

Exercise: Delete one of the pods of the deployment as well and see what happens. Think for the reason before reading the explanation below.

Explanation: Deleting one pod made the Deployment notice that the current state (1 pod running) is different from the desired state (2 pods running) so it started another pod.

So what was so good about Deployments, besides keeping the desired state? Let’s get started with the benefits.

Benefit 1: Rolling a Zero-Downtime deployment

Our product manager came to us with a new requirement: our clients want to have a green button in the frontend. The developers shipped their code and provided us with the only thing we need, the container image rinormaloku/sentiment-analysis-frontend:green. Now it's our turn: we, the DevOps, have to roll out a zero-downtime deployment. Will the hard work pay off? Let's see!

Edit the file sa-frontend-deployment.yaml by changing the container image to refer to the new image: rinormaloku/sentiment-analysis-frontend:green. Save the changes as sa-frontend-deployment-green.yaml and execute the following command:

kubectl apply -f sa-frontend-deployment-green.yaml --record
deployment "sa-frontend" configured

We can check the status of the rollout using the following command:

kubectl rollout status deployment sa-frontend
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 of 2 updated replicas are available...
deployment "sa-frontend" successfully rolled out

According to the output the deployment was rolled out. It was done in such a fashion that the replicas were replaced one by one. Meaning that our application was always on. Before we move on let’s verify that the update is live.

Verifying the deployment

Let's see the update live in our browsers. Execute the same command that we used before, minikube service sa-frontend-lb, which opens up the browser. We can see that the button was updated.

Behind the scenes of “The RollingUpdate”

After we applied the new deployment, Kubernetes compares the new state with the old one. In our case, the new state requests two pods with the image rinormaloku/sentiment-analysis-frontend:green. This is different from the currently running state, so the RollingUpdate kicks in.

The RollingUpdate acts according to the rules we specified, those being “maxUnavailable: 1” and “maxSurge: 1”. This means that the deployment can terminate only one pod at a time, and can start only one new pod at a time. This process is repeated until all pods are replaced (see Fig. 21).

Let’s continue with the benefit number 2.

Disclaimer: For entertainment purposes, the next part is written as a novella.

Benefit 2: Rolling back to a previous state

The Product Manager runs into your office and he is having a crisis!

“The application has a critical bug, in PRODUCTION!! Revert back to the previous version immediately” — yells the product manager.

He sees the coolness in you, not twitching one eye. You turn to your beloved terminal and type:

kubectl rollout history deployment sa-frontend
deployments "sa-frontend"
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true

You take a short look at the previous deployments. “The last version is buggy, meanwhile the previous version worked perfectly?” — you ask the Product Manager.

“Yes, are you even listening to me!” — screams the product manager.

You ignore him, you know what you have to do, you start typing:

kubectl rollout undo deployment sa-frontend --to-revision=1
deployment "sa-frontend" rolled back

You refresh the page and the change is undone!

The product manager's jaw drops open.

You saved the day!

The end!

Yeah… it was a boring novella. Before Kubernetes existed it was so much better, we had more drama, higher intensity, and that for a longer period of time. Ohh good old times!

Most of the commands are self-explanatory, besides one detail that you had to work out yourself: why does the first revision have a CHANGE-CAUSE of <none>, while the second revision has a CHANGE-CAUSE of “kubectl.exe apply --filename=sa-frontend-deployment-green.yaml --record=true”?

If you concluded that it’s because of the --record flag that we used when we applied the new image then you are totally correct!

In the next section, we will use the concepts learned thus far to complete the whole architecture.

Kubernetes and everything else in Practice

We have learned all the resources that we need to complete the architecture, which is why this part is going to be quick. In figure 22 we have greyed out everything that we still have to do. Let's start from the bottom: deploying the sa-logic deployment.

Deployment SA-Logic

Navigate in your terminal to the folder resource-manifests and execute the following command:

kubectl apply -f sa-logic-deployment.yaml --record
deployment "sa-logic" created

The deployment SA-Logic created two pods (running the container of our Python application) and labeled them with app: sa-logic. This labeling enables us to target them using a selector from the SA-Logic service. Please take time to open the file sa-logic-deployment.yaml and check out the contents.

It’s the same concepts used all over again, let’s jump right into the next resource: the service SA-Logic.

Service SA-Logic

Lets elaborate why we need this Service. Our Java application (running in the pods of SA — WebApp deployment) depends on the sentiment analysis done by the Python Application. But now, in contrast to when we were running everything locally, we don’t have one single python application listening to one port, we have 2 pods and if needed we could have more.


That’s why we need a Service that “acts as the entry point to a set of pods that provide the same functional service”. This means that we can use the Service SA-Logic as the entry point to all the SA-Logic pods.


Let’s do that:


kubectl apply -f service-sa-logic.yaml
service "sa-logic" created
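The manifest we just applied might look roughly like this (a sketch; the ports are assumptions — since the web app will reach this service at http://sa-logic with no explicit port, the service must listen on the default HTTP port 80 and forward to the container's port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sa-logic        # this name becomes the DNS record other pods use
spec:
  selector:
    app: sa-logic       # targets every pod labeled app: sa-logic
  ports:
    - port: 80          # port the service listens on
      targetPort: 5000  # assumed container port of the Python app
```

Requests to the service are load-balanced across all pods matching the selector.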

Updated Application State: We have 2 pods (containing the Python Application) running and we have the SA-Logic service acting as an entry point that we will use in the SA-WebApp pods.


Now we need to deploy the SA-WebApp pods, using a deployment resource.


Deployment SA-WebApp

We are getting the hang of deployments, though this one has one more feature. If you open the file sa-web-app-deployment.yaml you will find this part new:


- image: rinormaloku/sentiment-analysis-web-app
  imagePullPolicy: Always
  name: sa-web-app
  env:
    - name: SA_LOGIC_API_URL
      value: "http://sa-logic"
  ports:
    - containerPort: 8080

The first thing that interests us is: what does the env property do? We surmise that it declares the environment variable SA_LOGIC_API_URL with the value “http://sa-logic” inside our pods. But why do we initialize it to http://sa-logic, and what is sa-logic?


Let's get introduced to kube-dns.


KUBE-DNS

Kubernetes has a special pod, kube-dns, and by default all pods use it as their DNS server. One important property of kube-dns is that it creates a DNS record for each created service.


This means that when we created the service sa-logic it got an IP address, and its name was added as a record (in conjunction with that IP) in kube-dns. This enables all the pods to translate sa-logic to the SA-Logic service's IP address.
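To make this concrete, here is a minimal Python sketch of the client side of this mechanism (in our app the actual consumer of the variable is the Java web app): the pod only deals with the service name read from its environment, and leaves the name-to-IP translation to kube-dns. The helper name and the "/analyse/sentiment" path are assumptions for illustration, not taken from the project's source.

```python
import os
from urllib.parse import urljoin

def sentiment_endpoint():
    # Read the service URL injected through the Deployment's `env` section;
    # fall back to localhost for local development outside the cluster.
    base = os.environ.get("SA_LOGIC_API_URL", "http://localhost:5000")
    # The "/analyse/sentiment" path is an assumption for illustration.
    return urljoin(base + "/", "analyse/sentiment")

# Inside the cluster the Deployment sets the variable; we simulate that here.
os.environ["SA_LOGIC_API_URL"] = "http://sa-logic"
print(sentiment_endpoint())  # http://sa-logic/analyse/sentiment
```

When the request is made, kube-dns resolves the hostname sa-logic to the service's cluster IP, exactly like any other DNS lookup.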


Good, now we can continue with:


Deployment SA-WebApp (continued)

Execute the command:


kubectl apply -f sa-web-app-deployment.yaml --record
deployment "sa-web-app" created

Done. We are left with exposing the SA-WebApp pods externally using a LoadBalancer Service. This enables our React application to make HTTP requests to the service, which acts as an entry point to the SA-WebApp pods.


Service SA-WebApp

Open the file service-sa-web-app-lb.yaml; as you can see, everything in it is familiar to you. So without further ado, execute the command:


kubectl apply -f service-sa-web-app-lb.yaml
service "sa-web-app-lb" created
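For reference, the manifest might look roughly like this (a sketch; the port numbers are assumptions — the containerPort: 8080 we saw in the SA-WebApp deployment suggests the target port, but check the actual file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sa-web-app-lb
spec:
  type: LoadBalancer    # exposes the service outside the cluster
  selector:
    app: sa-web-app     # assumed label on the SA-WebApp pods
  ports:
    - port: 80
      targetPort: 8080  # the port the Java app listens on
```

The only new piece compared to the earlier services is type: LoadBalancer, which is what makes the service reachable from outside the cluster.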

The architecture is complete, but we have one single dissonance: when we deployed the SA-Frontend pods, our container image was pointing to our SA-WebApp at http://localhost:8080/sentiment. Now we need to update it to point to the IP address of the SA-WebApp LoadBalancer (which acts as an entry point to the SA-WebApp pods).


Fixing this dissonance gives us the opportunity to succinctly recap once more everything from code to deployment. (It's even more effective if you do this alone instead of following the guide below.) Let's get started:


  1. Get the SA-WebApp LoadBalancer IP by executing the following command:
minikube service list
|-------------|----------------------|-----------------------------|
|  NAMESPACE  |         NAME         |             URL             |
|-------------|----------------------|-----------------------------|
| default     | kubernetes           | No node port                |
| default     | sa-frontend-lb       | http://192.168.99.100:30708 |
| default     | sa-logic             | No node port                |
| default     | sa-web-app-lb        | http://192.168.99.100:31691 |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|

2. Use the SA-WebApp LoadBalancer IP in the file sa-frontend/src/App.js, as shown below:


analyzeSentence() {
    fetch('http://192.168.99.100:31691/sentiment', { /* shortened for brevity */ })
        .then(response => response.json())
        .then(data => this.setState(data));
}

3. Build the static files with npm run build (you need to navigate to the folder sa-frontend first).


4. Build the container image:


docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend:minikube .

5. Push the image to Docker Hub:


docker push $DOCKER_USER_ID/sentiment-analysis-frontend:minikube

6. Edit the sa-frontend-deployment.yaml to use the new image and


7. Execute the command kubectl apply -f sa-frontend-deployment.yaml


Refresh the browser, or if you closed the window, execute minikube service sa-frontend-lb. Give it a try by typing a sentence!


Article summary

Kubernetes is beneficial for the team and the project: it simplifies deployments, scalability, and resilience, and it enables us to consume any underlying infrastructure. And you know what? From now on, let's call it Supernetes!


What we covered in this series:


  • Building / Packaging / Running ReactJS, Java, and Python applications
  • Docker containers: how to define and build them using Dockerfiles
  • Container registries: we used Docker Hub as a repository for our containers
  • The most important parts of Kubernetes:
  • Pods
  • Services
  • Deployments
  • New concepts like zero-downtime deployments
  • Creating scalable apps
  • And in the process, we migrated the whole microservice application to a Kubernetes Cluster.

I am Rinor Maloku and I want to thank you for joining me on this voyage. Since you read this far I know that you loved this article and would be interested in more. I write articles that go into this depth of detail for new technologies every 3 months. You can always expect an example application, hands-on practice, and a guide that provides you with the right tools and knowledge to tackle any real-world project.


To stay in touch and not miss any of my articles subscribe to my newsletter, follow me on Twitter, and check out my page rinormaloku.com.


Translated from: https://www.freecodecamp.org/news/learn-kubernetes-in-under-3-hours-a-detailed-guide-to-orchestrating-containers-114ff420e882/
