The Docker Handbook

The concept of containerization itself is pretty old, but the emergence of the Docker Engine in 2013 has made it much easier to containerize your applications.

According to the Stack Overflow Developer Survey - 2020, Docker is the #1 most wanted platform, #2 most loved platform, and also the #3 most popular platform.

As in-demand as it may be, getting started can seem a bit intimidating at first. So in this article, we'll be learning everything from basic to intermediate level of containerization. After going through the entire article, you should be able to:

  • Containerize (almost) any application
  • Upload custom Docker Images in Docker Hub
  • Work with multiple containers using Docker Compose

Prerequisites

  • Familiarity with the Linux Terminal
  • Familiarity with JavaScript (some of the later projects use JavaScript)

Project Code

Code for the example projects can be found in the following repository:

You can find the complete code in the containerized branch.

Table of Contents

  • Introduction to Containerization and Docker
    • Virtual Machines vs Containers
  • Installing Docker
  • Hello World in Docker
    • Docker Architecture
    • Images and Containers
    • Registries
    • The Full Picture
  • Manipulating Containers
    • Running Containers
    • Listing Containers
    • Restarting Containers
    • Cleaning Up Dangling Containers
    • Running Containers in Interactive Mode
    • Creating Containers Using Executable Images
    • Running Containers in Detached Mode
    • Executing Commands Inside a Running Container
    • Starting Shell Inside a Running Container
    • Accessing Logs From a Running Container
    • Stopping or Killing a Running Container
    • Mapping Ports
    • Demonstration of Container Isolation
  • Creating Custom Images
    • Image Creation Basics
    • Creating an Executable Image
    • Containerizing an Express Application
    • Working with Volumes
    • Multi-staged Builds
    • Uploading Built Images to Docker Hub
  • Working with Multi-container Applications using Docker Compose
    • Compose Basics
    • Listing Services
    • Executing Commands Inside a Running Service
    • Starting Shell Inside a Running Service
    • Accessing Logs From a Running Service
    • Stopping Running Services
    • Composing a Full-stack Application
  • Conclusion

Introduction to Containerization and Docker

Containerization is the process of encapsulating software code along with all of its dependencies inside a single package so that it can be run consistently anywhere.

Docker is an open source containerization platform. It provides the ability to run applications in an isolated environment known as a container.

Containers are like very lightweight virtual machines that can run directly on our host operating system's kernel without the need of a hypervisor. As a result we can run multiple containers simultaneously.

Each container contains an application along with all of its dependencies and is isolated from the other ones. Developers can exchange these containers as image(s) through a registry and can also deploy directly on servers.

Virtual Machines vs Containers

A virtual machine is the emulated equivalent of a physical computer system with their virtual CPU, memory, storage, and operating system.

A program known as a hypervisor creates and runs virtual machines. The physical computer running a hypervisor is called the host system, while the virtual machines are called guest systems.

The hypervisor treats resources — like the CPU, memory, and storage — as a pool that can be easily reallocated between the existing guest virtual machines.

Hypervisors are of two types:

  • Type 1 Hypervisor (VMware vSphere, KVM, Microsoft Hyper-V).
  • Type 2 Hypervisor (Oracle VM VirtualBox, VMware Workstation Pro/VMware Fusion).

A container is an abstraction at the application layer that packages code and dependencies together. Instead of virtualizing the entire physical machine, containers virtualize the host operating system only.

Containers sit on top of the physical machine and its operating system. Each container shares the host operating system kernel and, usually, the binaries and libraries, as well.

Installing Docker

Navigate to the download page for Docker Desktop and choose your operating system from the drop-down:

I'll be showing the installation process for the Mac version but I believe installation for other operating systems should be just as straightforward.

The Mac installation process has two steps:

  1. Mounting the downloaded Docker.dmg file.

  2. Dragging and dropping Docker into your Application directory.

Now go to your Application directory and open Docker by double-clicking. The daemon should run and an icon should appear on your menu bar (taskbar in windows):

You can use this icon to access the Docker Dashboard:

It may look a bit boring at the moment, but once you've run a few containers, this will become much more interesting.

Hello World in Docker

Now that we have Docker ready to go on our machines, it's time for us to run our first container. Open up terminal (command prompt in windows) and run the following command:

docker run hello-world

If everything goes fine you should see some output like the following:

The hello-world image is an example of minimal containerization with Docker. It has a single hello.c file responsible for printing out the message you're seeing on your terminal.

Almost every image contains a default command. In case of the hello-world image, the default command is to execute the hello binary compiled from the previously mentioned C code.

If you open up the dashboard again, you should find the hello-world container there:

The status is EXITED(0) which indicates that the container has run and exited successfully. You can view the Logs, Stats (CPU/memory/disk/network usage) or Inspect (environment/port mappings).

To understand what just happened, you need to get familiar with the Docker Architecture, Images and Containers, and Registries.

Docker Architecture

Docker uses a client-server architecture. The engine consists of three major components:

  1. Docker Daemon: The daemon is a long running application that keeps on going in the background, listening to the commands issued by the client. It can manage Docker objects such as images, containers, networks, and volumes.

  2. Docker Client: The client is a command-line interface program accessible by the docker command. This client tells the server what to do. When we execute a command like docker run hello-world, the client tells the daemon to carry out the task.

  3. REST API: Communication between the daemon and the client happens using a REST API over UNIX sockets or network interfaces.

There is a nice graphical representation of the architecture on Docker's official documentation:

Don't worry if it looks confusing at the moment. Everything will become much clearer in the upcoming sub-sections.

Images and Containers

Images are multi-layered self-contained files with necessary instructions to create containers. Images can be exchanged through registries. We can use any image built by others or can also modify them by adding new instructions.

Images can be created from scratch as well. The base layer of an image is read-only. When we edit a Dockerfile and rebuild it, only the modified part is rebuilt in the top layer.

Containers are runnable instances of images. When we pull an image like hello-world and run them, they create an isolated environment suitable for running the program included in the image. This isolated environment is a container. If we compare images with classes from OOP then containers are the objects.

Registries

Registries are storage for Docker images. Docker Hub is the default public registry for storing images.

Whenever we execute commands like docker run or docker pull the daemon usually fetches images from the hub. Anyone can upload images to the hub using docker push command. You can go to the hub and search for images like any other website.

If you create an account, you'll be able to upload custom images as well. Images that I've uploaded are available for everyone at https://hub.docker.com/u/fhsinchy page.

The Full Picture

Now that you're familiar with the architecture, images, containers, and registries, you're ready to understand what happened when we executed the docker run hello-world command. A graphical representation of the process is as follows:

The entire process happens in five steps:

  1. We execute the docker run hello-world command.

  2. The Docker client tells the daemon that we want to run a container using the hello-world image.
  3. The Docker daemon pulls the latest version of the image from the registry.
  4. The daemon creates a container from the image.
  5. The daemon runs the newly created container.

It's the default behavior of the Docker daemon to look in the hub for images that are not present locally. But once an image has been fetched, it'll stay in the local cache. So if you execute the command again, you won't see the following lines in the output:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9
Status: Downloaded newer image for hello-world:latest

If there is a newer version of the image available, the daemon will fetch the image again. That :latest is a tag. Images usually have meaningful tags to indicate versions or builds. You'll learn about this in more detail in a later section.
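
For instance, if you want a specific version instead of whatever latest points to, you can reference the image by name and tag. As a rough example (ubuntu:20.04 is just one of the tagged images available on Docker Hub):

docker pull ubuntu:20.04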

Manipulating Containers

In the previous section, we've had a brief encounter with the Docker client. It is the command-line interface program that takes our commands to the Docker daemon. In this section, you'll learn about more advanced ways of manipulating containers in Docker.

Running Containers

In the previous section, we've used docker run to create and run a container using the hello-world image. The generic syntax for this command is:

docker run <image name>

Here the image name can be any image from Docker Hub or our local machine. I hope that you've noticed that I've been saying create and run and not just run. The reason behind that is that the docker run command actually does the job of two separate docker commands. They are:

  1. docker create <image name> - creates a container from the given image and returns the container id.

  2. docker start <container id> - starts an already created container using the given id.

To create a container from the hello-world image execute the following command:

docker create hello-world

The command should output a long string like cb2d384726da40545d5a203bdb25db1a8c6e6722e5ae03a573d717cd93342f61 – this is the container id. This id can be used to start the built container.

The first 12 characters of the container id are enough for identifying the container. Instead of using the whole string, using cb2d384726da should be fine.

To start this container execute the following command:

docker start cb2d384726da

You should get the container id back as output and nothing else. You may think that the container hasn't run properly. But if you check the dashboard, you'll see that the container has run and exited successfully.

What happened here is we didn't attach our terminal to the output stream of the container. UNIX and LINUX commands usually open three I/O streams when run, namely STDIN, STDOUT, and STDERR.

If you want to learn more, there is an amazing article out there on the topic.

To attach your terminal to the output stream of the container you have to use the -a or --attach option:

docker start -a cb2d384726da

If everything goes right, then you should see the following output:

We can use the start command to run any container that is not already running. Using the run command will create a new container every time.

Listing Containers

You may remember from the previous section, that the dashboard can be used for inspecting containers with ease.

It's a pretty useful tool for inspecting individual containers, but it's too much for viewing a plain list of the containers. That's why there is a simpler way to do that. Execute the following command in your terminal:

docker ps -a

And you should see a list of all the containers on your terminal.

The -a or --all option indicates that we want to see not only the running containers but also the stopped ones. Executing ps without the -a option will list out the running containers only.

-a--all选项表明我们不仅要查看正在运行的容器,还要看到停止的容器。 不带-a选项执行ps只会列出正在运行的容器。

Restarting Containers

We've already used the start command to run a container. There is another command for starting containers called restart. Though the commands seem to serve the same purpose on the surface, they have a slight difference.

The start command starts containers that are not running. The restart command, however, kills a running container and starts that again. If we use restart with a stopped container then it'll function just as same as the start command.
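
For example, assuming the container id used earlier in this article (yours will differ), restarting it should look like this:

docker restart cb2d384726da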

Cleaning Up Dangling Containers

Containers that have exited already remain in the system. These dangling or unnecessary containers take up space and can even create issues at later times.

There are a few ways of cleaning up containers. If we want to remove a container specifically, we can use the rm command. Generic syntax for this command is as follows:

docker rm <container id>

To remove a container with id e210d4695c51, execute following command:

docker rm e210d4695c51

And you should get the id of the removed container as output. If we want to clean up all Docker objects (images, containers, networks, build cache) we can use the following command:

docker system prune

Docker will ask for confirmation. We can use the -f or --force option to skip this confirmation step. The command will show the amount of reclaimed space at the end of its successful execution.
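
If you only want to get rid of stopped containers and leave images, networks, and the build cache alone, the narrower container prune sub-command should do the job:

docker container prune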

Running Containers in Interactive Mode

So far we've only run containers built from the hello-world image. The default command for hello-world image is to execute the single hello.c program that comes with the image.

Not all images are that simple. Images can encapsulate an entire operating system inside them. Linux distributions such as Ubuntu, Fedora, and Debian all have official Docker images available in the hub.

We can run Ubuntu inside a container using the official ubuntu image. If we try to run an Ubuntu container by executing the docker run ubuntu command, it'll seem like nothing happens. But if we execute the command with the -it option as follows:

docker run -it ubuntu

We should land directly on bash inside the Ubuntu container. In this bash window, we'll be able to do the tasks that we usually do in a regular Ubuntu terminal. I have printed out the OS details by executing the standard cat /etc/os-release command:

The reason behind the necessity of this -it option is that the Ubuntu image is configured to start bash upon startup. Bash is an interactive program – that means if we do not type in any commands, bash won't do anything.

To interact with a program that is inside a container, we have to let the container know explicitly that we want an interactive session.

The -it option sets the stage for us to interact with any interactive program inside a container. This option is actually two separate options mashed together.

  • The -i option connects us to the input stream of the container, so that we can send inputs to bash.

  • The -t option makes sure that we get some good formatting and a native terminal like experience.

We need to use the -it option whenever we want to run a container in interactive mode. Executing docker run -it node or docker run -it python should land us directly on the node or python REPL program.

We can't run just any container in interactive mode. To be eligible for running in interactive mode, the container has to be configured to start an interactive program on startup. Shells, REPLs, CLIs, and so on are examples of interactive programs.

Creating Containers Using Executable Images

Up until now I've been saying that Docker images have a default command that they execute automatically. That's not true for every image. Some images are configured with an entry-point (ENTRYPOINT) instead of a command (CMD).

An entry-point allows us to configure a container that will run as an executable. Like any other regular executable, we can pass arguments to these containers. The generic syntax for passing arguments to an executable container is as follows:

docker run <image name> <arguments>

The Ubuntu image is an executable image, and the entry-point for the image is bash. Arguments passed to an executable container will be passed directly to the entry-point program. That means any argument that we pass to the Ubuntu image will be passed directly to bash.

To see a list of all directories inside the Ubuntu container, you can pass the ls command as an argument.

docker run ubuntu ls

You should get a list of directories like the following:

Notice that we're not using the -it option, because we don't want to interact with bash; we just want the output. We can pass any valid bash command as arguments. For example, passing the pwd command as an argument will return the present working directory.
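
As a quick sketch, the following should print / because that's the default working directory of the ubuntu image:

docker run ubuntu pwd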

The list of valid arguments usually depends on the entry-point program itself. If the container uses the shell as entry-point, any valid shell command can be passed as arguments. If the container uses some other program as the entry-point then the arguments valid for that particular program can be passed to the container.

Running Containers in Detached Mode

Assume that you want to run a Redis server on your computer. Redis is a very fast in-memory database system, often used as a cache in various applications. We can run a Redis server using the official redis image. To do that, execute the following command:

docker run redis

It may take a few moments to fetch the image from the hub and then you should see a wall of text appear on your terminal.

As you can see, the Redis server is running and is ready to accept connections. To keep the server running, you have to keep this terminal window open (which is a hassle in my opinion).

You can run these kinds of containers in detached mode. Containers running in detached mode run in the background like a service. To detach a container, we can use the -d or --detach option. To run the container in detached mode, execute the following command:

docker run -d redis

You should get the container id as output.

The Redis server is now running in the background. You can inspect it using the dashboard or by using the ps command.

Executing Commands Inside a Running Container

Now that you have a Redis server running in the background, assume that you want to perform some operations using the redis-cli tool. You can't just go ahead and execute docker run redis redis-cli, because that would create a whole new container. The container you want to work with is already running.

For situations like this, there is the exec command for executing other commands inside a running container. The generic syntax for this command is as follows:

docker exec <container id> <command>

If the id for the Redis container is 5531133af6a1 then the command should be as follows:

docker exec -it 5531133af6a1 redis-cli

And you should land right into the redis-cli program:

Notice we're using the -it option as this is going to be an interactive session. Now you can run any valid Redis command in this window and the data will be persisted in the server.
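
For example, you could store a value and read it back (the key and value here are arbitrary examples):

set name farhan
get name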

You can exit simply by pressing the ctrl + c key combination or closing the terminal window. Keep in mind, however, that the server will keep running in the background even if you exit out of the CLI program.

Starting Shell Inside a Running Container

Assume that you want to use the shell inside a running container for some reason. You can do that by using the exec command with sh as the executable, like the following command:

docker exec -it <container id> sh

If the id of the redis container is 5531133af6a1, execute the following command to start a shell inside the container:

docker exec -it 5531133af6a1 sh

You should land directly on a shell inside the container.

You can execute any valid shell command here.

Accessing Logs From a Running Container

If we want to view logs from a container, the dashboard can be really helpful.

We can also use the logs command to retrieve logs from a running container. The generic syntax for the command is as follows:

docker logs <container id>

If the id for the Redis container is 5531133af6a1, then execute the following command to access the logs from the container:

docker logs 5531133af6a1

You should see a wall of text appear on your terminal window.

This is just a portion of the log output. You can hook into the output stream of the container and get the logs in real time by using the -f or --follow option.
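
For example, the following should stream the logs of the Redis container from earlier as they are written:

docker logs -f 5531133af6a1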

Any later log will show up instantly in the terminal as long as you don't exit by pressing ctrl + c key combination or closing the window. The container will keep running even if you exit out of the log window.

Stopping or Killing a Running Container

Containers running in the foreground can be stopped by simply closing the terminal window or hitting ctrl + c key combination. Containers running in the background, however, can not be stopped in the same way.

There are two commands for stopping a running container:

  • docker stop <container id> - attempts to stop the container gracefully by sending a SIGTERM signal to the container. If the container doesn't stop within a grace period, a SIGKILL signal is sent.

  • docker kill <container id> - stops the container immediately by sending a SIGKILL signal. A SIGKILL signal can not be ignored by a recipient.

To stop a container with id bb7fadc33178, execute the docker stop bb7fadc33178 command. Using docker kill bb7fadc33178 will terminate the container immediately, without giving it a chance to clean up.

Mapping Ports

Assume that you want to run an instance of the popular Nginx web server. You can do that by using the official nginx image. Execute the following command to run a container:

docker run nginx

Nginx is meant to be kept running, so you may as well use the -d or --detach option. By default Nginx runs on port 80. But if you try to access http://localhost:80 you should see something like the following:

That's because Nginx is running on port 80 inside the container. Containers are isolated environments and your host system knows nothing about what's going on inside a container.

To access a port that is inside a container, you need to map that port to a port on the host system. You can do that by using the -p or --publish option with the docker run command. The generic syntax for this option is as follows:

docker run -p <host port>:<container port> nginx

Executing docker run -p 80:80 nginx will map port 80 on the host machine to port 80 of the container. Now try accessing http://localhost:80 address:

If you execute docker run -p 8080:80 nginx instead of 80:80 the Nginx server will be available on port 8080 of the host machine. If you forget the port number after a while you can use the dashboard to have a look at it:
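
You can also combine this with the options you've already seen. For instance, the following should run the server in the background and publish it on port 8080 of the host:

docker run -d -p 8080:80 nginx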

The Inspect tab contains information regarding the port mappings. As you can see, I've mapped port 80 from the container to port 8080 of the host system.

Demonstration of Container Isolation

From the moment that I introduced you to the concept of a container, I've been saying that containers are isolated environments. When I say isolated, I not only mean from the host system but also from other containers.

In this section, we'll do a little experiment to understand this isolation stuff. Open up two terminal windows and run an Ubuntu container instance in each using the following command:

docker run -it ubuntu

If you open up the dashboard you should see two Ubuntu containers running:

Now in the upper window, execute the following command:

mkdir hello-world

The mkdir command creates a new directory. Now to see the list of directories in both containers execute the ls command inside both of them:

As you can see, the hello-world directory exists inside the container opened in the upper terminal window and not in the lower one. This goes to show that containers, although created from the same image, are isolated from each other.

This is something important to understand. Assume a scenario where you've been working inside a container for a while. Then you stop the container, and on the next day you execute docker run -it ubuntu once again. You'll see that all your work has been lost.

I hope you remember from a previous sub-section, the run command creates and starts a new container every time. So remember to start previously created containers using the start command and not the run command.

Creating Custom Images

Now that you have a solid understanding of the many ways you can manipulate a container using the Docker client, it's time to learn how to make custom images.

In this section, you'll learn many important concepts regarding building images, creating containers from them, and sharing them with others.

I suggest that you install Visual Studio Code with the official Docker Extension before going into the subsequent sub-sections.

Image Creation Basics

In this sub-section we'll focus on the structure of a Dockerfile and the common instructions. A Dockerfile is a text document, containing a set of instructions for the Docker daemon to follow and build an image.

To understand the basics of building images we'll create a very simple custom Node image. Before we begin, I would like to show you how the official node image works. Execute the following command to run a container:

docker run -it node

The Node image is configured to start the Node REPL on startup. The REPL is an interactive program hence the usage of -it flag.

You can execute any valid JavaScript code here. We'll create a custom node image that functions just like that.

To start, create a new directory anywhere on your computer and create a new file named Dockerfile inside it. Open up the project folder inside a code editor and put the following code in the Dockerfile:

FROM ubuntu

RUN apt-get update
RUN apt-get install nodejs -y

CMD [ "node" ]

I hope you remember from a previous sub-section that images have multiple layers. Each line in a Dockerfile is an instruction and each instruction creates a new layer.

Let me break down the Dockerfile line by line for you:

FROM ubuntu

Every valid Dockerfile must start with a FROM instruction. This instruction starts a new build stage and sets the base image. By setting ubuntu as the base image, we say that we want all the functionalities from the Ubuntu image to be available inside our image.

Now that we have the Ubuntu functionalities available in our image, we can use the Ubuntu package manager (apt-get) to install Node.

RUN apt-get update
RUN apt-get install nodejs -y

The RUN instruction will execute any commands in a new layer on top of the current image and persist the results. So in the upcoming instructions, we can refer to Node, because we've installed that in this step.

CMD [ "node" ]

The purpose of a CMD instruction is to provide defaults for an executing container. These defaults can include an executable, or you can omit the executable, in which case you must specify an ENTRYPOINT instruction. There can be only one CMD instruction in a Dockerfile. Also, single quotes are not valid.

Now to build an image from this Dockerfile, we'll use the build command. The generic syntax for the command is as follows:

docker build <build context>

The build command requires a Dockerfile and the build's context. The context is the set of files and directories located in the specified location. Docker will look for a Dockerfile in the context and use that to build the image.

Open up a terminal window inside that directory and execute the following command:

docker build .

We're passing . as the build context which means the current directory. If you put the Dockerfile inside another directory like /src/Dockerfile, then the context will be ./src.

The build process may take some time to finish. Once done, you should see a wall of text in your terminal:

If everything goes fine, you should see something like Successfully built d901e4d15263 at the end. This random string is the image id and not container id. You can execute the run command with this image id to create and start a new container.

docker run -it d901e4d15263

Remember, the Node REPL is an interactive program, so the -it option is necessary. Once you've run the command you should land on the Node REPL:

You can execute any valid JavaScript code here, just like the official Node image.
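
Instead of memorizing a random image id, you can also give the image a readable name at build time using the -t or --tag option (my-node here is just an example name):

docker build -t my-node .
docker run -it my-node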

Creating an Executable Image

I hope you remember the concept of an executable image from a previous sub-section: images that can take additional arguments, just like a regular executable. In this sub-section, you'll learn how to make one.

We'll create a custom bash image and will pass arguments to it like we did with the Ubuntu image in a previous sub-section. Start by creating a Dockerfile inside an empty directory and put the following code in it:

FROM alpine

RUN apk add --update bash

ENTRYPOINT [ "bash" ]

We're using the alpine image as the base. Alpine Linux is a security-oriented, lightweight Linux distribution.

Alpine doesn't come with bash by default. So on the second line we install bash using the Alpine package manager, apk. apk for Alpine is what apt-get is for Ubuntu. The last instruction sets bash as the entry-point for this image. As you can see, the syntax of the ENTRYPOINT instruction is identical to that of the CMD instruction.

To build the image, execute the following command:

docker build .

The build process may take some time. Once done, you should get the newly created image id:

You can run a container from the resultant image with the run command. This image has an interactive entry-point, so make sure you use the -it option.
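
Using the image id from the build output (yours will be different), that would look something like this:

docker run -it 66e867a1504d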

Now you can pass any argument to this container just like you did with the Ubuntu container. To see a list of all files and directories, you can execute the following command:

docker run 66e867a1504d -c ls

The -c ls option will be passed directly to bash and should return a list of directories inside the container:

The -c option has nothing to do with Docker client. It's a bash command line option. It reads commands from subsequent strings.

Containerizing an Express Application

So far we've only created images that contain no additional files. In this sub-section you'll learn how to containerize a project with source files in it.

If you've cloned the project code repository, then go inside the express-api directory. This is a REST API that runs on port 3000 and returns a simple JSON payload when hit.

To run this application, you need to go through the following steps:

  1. Install necessary dependencies by executing npm install.

  2. Start the application by executing npm run start.

To replicate the above mentioned process using Dockerfile instructions, you need to go through the following steps:

  1. Use a base image that allows you to run Node applications.
  2. Copy the package.json file and install the dependencies by executing npm install.

  3. Copy all necessary project files.
  4. Start the application by executing npm run start.

Now, create a new Dockerfile inside the project directory and put the following content in it:

FROM node

WORKDIR /usr/app

COPY ./package.json ./
RUN npm install

COPY . .

CMD [ "npm", "run", "start" ]

We're using Node as our base image. The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. It's kind of like cd'ing into the directory.

The COPY instruction will copy the ./package.json to the working directory. As we have set the working directory on the previous line, . will refer to /usr/app inside the container. Once the package.json has been copied, we then install all the necessary dependencies using the RUN instruction.

In the CMD instruction, we set npm as the executable and pass run and start as arguments. The instruction will be interpreted as npm run start inside the container.

CMD指令中,我们将npm设置为可执行文件,并将runstart作为参数传递。 该指令将被解释为容器内的npm run start

Now build the image with docker build . and use the resultant image id to run a new container. The application runs on port 3000 inside the container, so don't forget to map that.
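
Putting those steps together, the commands should look roughly like this (replace the image id with the one from your own build output):

docker build .
docker run -p 3000:3000 <image id>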

Once you've successfully run the container, visit http://127.0.0.1:3000 and you should see a simple JSON response. Replace the 3000 if you've used some other port from the host system.

Working with Volumes

In this sub-section I'll be presenting a very common scenario. Assume that you're working on a fancy front-end application with React or Vue. If you've cloned the project code repository, then go inside the vite-counter directory. This is a simple Vue application initialized with npm init vite-app command.

To run this application in development mode, we need to go through the following steps:

  1. Install necessary dependencies by executing npm install.

  2. Start the application in development mode by executing npm run dev.

To replicate the above mentioned process using Dockerfile instructions, we need to go through the following steps:

  1. Use a base image that allows you to run Node applications.
  2. Copy the package.json file and install the dependencies by executing npm install.

  3. Copy all necessary project files.
  4. Start the application in development mode by executing npm run dev.

In there, create a new Dockerfile.dev file and put the following content in it:

FROM node

WORKDIR /usr/app

COPY ./package.json ./
RUN npm install

COPY . .

CMD [ "npm", "run", "dev" ]

Nothing fancy here. We're copying the package.json file, installing the dependencies, copying the project files and starting the development server by executing npm run dev.

Build the image by executing the following command:

docker build -f Dockerfile.dev .

Docker is programmed to look for a Dockerfile within the build's context. But we've named our file Dockerfile.dev, thus we have to use the -f or --file option and let Docker know the filename. The . at the end indicates the context, just like before.

The development server runs on port 3000 inside the container, so make sure you map the port while creating and starting a container. I can access the application by visiting http://127.0.0.1:3000 on my system.

This is the default component that comes with any new Vite application. You can press the button to increase the count.

All the major front-end frameworks come with a hot reload feature. If you make any changes to the code while running in the development server, the changes should reflect immediately in the browser. But if you go ahead and make any changes to the code in this project, you'll see no changes in the browser.

Well, the reason is pretty straightforward. When you're making changes in the code, you are changing the code in your host system, not the copy inside the container.

There is a solution to this problem. Instead of making a copy of the source code inside the container, we can just let the container access the files on our host directly.

To do that, Docker has an option called -v or --volume for the run command. Generic syntax for the volume option is as follows:

docker run -v <absolute path to host directory>:<absolute path to container working directory> <image id>

You can use the pwd shell command to get the absolute path of the current directory. My host directory path is /Users/farhan/repos/docker/docker-handbook-projects/vite-counter, container application working directory path is /usr/app and the image id is 8b632faffb17. So my command will be as follows:

docker run -p 3000:3000 -v /Users/farhan/repos/docker/docker-handbook-projects/vite-counter:/usr/app 8b632faffb17

If you execute the above command, you'll be presented with an error saying sh: 1: vite: not found, which means that the dependencies are not present inside the container.

If you do not get such an error, that means you've installed the dependencies in your host system. Delete the node_modules folder in your local system and try again.
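
On Linux or macOS, deleting the locally installed dependencies could be done with something like:

rm -rf node_modules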

But if you look into the Dockerfile.dev, at the fourth line, we've clearly written the RUN npm install instruction.

Let me explain why this is happening. When using volumes, the container accesses the source code directly from our host system, and as you know, we haven't installed any dependencies in the host system.

Installing the dependencies on the host can solve the problem, but isn't ideal at all, because some dependencies get compiled from source every time you install them. And if you're using Windows or Mac as your host operating system, then the binaries built for your host operating system will not work inside a container running Linux.

To solve this problem, you have to know about the two types of volumes Docker has.

  • Named Volumes: These volumes have a specific source from outside the container, for example -v $(pwd):/usr/app.

  • Anonymous Volumes: These volumes have no specific source, for example -v /usr/app/node_modules. When the container is deleted, anonymous volumes remain until you clean them up manually.

To prevent the node_modules directory from getting overwritten, we'll have to put it inside an anonymous volume. To do that, modify the previous command as follows:

docker run -p 3000:3000 -v /usr/app/node_modules -v /Users/farhan/repos/docker/docker-handbook-projects/vite-counter:/usr/app 8b632faffb17

The only change we've made is the addition of a new anonymous volume. Now run the command and you'll see the application running. You can even change anything and see the change immediately in the browser. I've changed the default header a bit.

The command is a bit too long for repeated execution. You can use shell command substitution instead of the long source directory absolute path.

docker run -p 3000:3000 -v /usr/app/node_modules -v $(pwd):/usr/app 8b632faffb17

The $(pwd) bit will be replaced with the absolute path to the present directory you're in. So make sure you've opened your terminal window inside the project folder.

Multi-staged Builds

Introduced in Docker v17.05, multi-staged build is an amazing feature. In this sub-section, you'll again work with the vite-counter application.

In the previous sub-section, you created the Dockerfile.dev file, which is clearly meant for running the development server. Creating a production build of a Vue or React application is a perfect example of a multi-stage build process.

First let me show you how the production build will work in the following diagram:

As you can see from the diagram, the build process has two steps or stages. They are as follows:

  1. Executing npm run build will compile our application into a bunch of JavaScript, CSS and an index.html file. The production build will be available inside the /dist directory on the project root. Unlike the development version though, the production build doesn't come with a fancy server.

  2. We'll have to use Nginx for serving the production files. We'll copy the files built in stage 1 to the default document root of Nginx and make them available.

Now if we want to see the steps like we did with our previous two projects, it should go as follows:

  1. Use a base image (node) that allows us to run Node applications.
  2. Copy the package.json file and install the dependencies by executing npm install.
  3. Copy all necessary project files.
  4. Make the production build by executing npm run build.
  5. Use another base image (nginx) that allows us to serve the production files.
  6. Copy the production files from the /dist directory to the default document root (/usr/share/nginx/html).

Let's get to work now. Create a new Dockerfile inside the vite-counter project directory. Content for the Dockerfile is as follows:

FROM node as builder
WORKDIR /usr/app
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
COPY --from=builder /usr/app/dist /usr/share/nginx/html

The first thing that you might have noticed is the multiple FROM instructions. A multi-staged build process allows the usage of multiple FROM instructions. The first FROM instruction sets node as the base image; in this stage we install the dependencies, copy all project files and execute npm run build. We're calling the first stage builder.

Then in the second stage we're using nginx as the base image and copying everything from the /usr/app/dist directory built during stage one to the /usr/share/nginx/html directory. The --from option in the COPY instruction allows us to copy files between stages.

To build the image, execute the following command:

docker build .

We're using a file named Dockerfile this time, so declaring the filename explicitly is unnecessary. Once the build process is finished, use the image id to run a new container. Nginx runs on port 80 by default, so don't forget to map that.
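For example, to map port 80 of the host system to port 80 inside the container, the command would look like this (substitute the image id from your own build output):

docker run -p 80:80 <image id>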

Once you've successfully started the container, visit http://127.0.0.1:80 and you should see the counter application running. Replace the 80 if you've mapped some other port from the host system.

The output image from this multi-staged build process is an Nginx-based image containing just the built files and no extra data. It's optimized and lightweight as a result.
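If you're curious, you can verify this yourself by listing the images and comparing sizes. The Nginx-based production image should come out much smaller than the node image the first stage was built from:

docker image ls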

Uploading Built Images to Docker Hub

You've already built quite a lot of images. In this sub-section, you'll learn about tagging and uploading images to Docker Hub. Go ahead and sign up for a free account on Docker Hub.

Once you've created the account, you can log in using the Docker menu.

Or you can log in using a command from the terminal. Generic syntax for the command is as follows:

docker login -u <your docker id>  --password <your docker password>

If the login succeeds, you should see something like Login Succeeded on your terminal.
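Passing the password as an option leaves it in your shell history. The docker login command can also read the password from standard input via the --password-stdin option, which is a bit safer. A quick sketch, assuming the password is stored in a file named my_password.txt:

cat my_password.txt | docker login -u <your docker id> --password-stdin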

Now you're ready to upload images. In order to upload images, you first have to tag them. If you've cloned the project code repository, open up a terminal window inside the vite-counter project folder.

You can tag an image by using the -t or --tag option with the build command. The generic syntax for this option is as follows:

docker build -t <tag> <context of the build>

The general convention of tags is as follows:

<your docker id>/<image name>:<image version>

My Docker id is fhsinchy, so if I want to name the image vite-counter then the command should be as follows:

docker build -t fhsinchy/vite-counter:1.0 .

If you do not define the version after the colon, latest will be used automatically. If everything goes right, you should see something like Successfully tagged fhsinchy/vite-counter:1.0 in your terminal. I am not defining the version in my case.
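If you've already built an image without a tag, you don't have to rebuild it. The docker tag command can tag an existing image by its id, for example (using <image id> as a placeholder for the id from your earlier build):

docker tag <image id> fhsinchy/vite-counter:1.0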

To upload this image to the hub you can use the push command. Generic syntax for the command is as follows:

docker push <your docker id>/<image tag with version>

To upload the fhsinchy/vite-counter image the command should be as follows:

docker push fhsinchy/vite-counter

You should see output in your terminal as the image layers are pushed to the hub.

Anyone can view the image on the hub now.

Generic syntax for running a container from this image is as follows:

docker run <your docker id>/<image tag with version>

To run the vite-counter application using this uploaded image, you can execute the following command:

docker run -p 80:80 fhsinchy/vite-counter

And you should see the vite-counter application running just like before.
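If the image isn't present locally, the run command pulls it from the hub automatically. You can also download it explicitly beforehand with the pull command:

docker pull fhsinchy/vite-counter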

You can containerize almost any application and distribute it through Docker Hub or any other registry, making it much easier to run or deploy.

Working with Multi-container Applications using Docker Compose

So far we've only worked with applications that consist of a single container.

Now consider an application with multiple containers. Maybe an API that requires a database service to work properly, or maybe a full-stack application where you have to work with a back-end API and a front-end application together.

In this section, you'll learn about working with such applications using a tool called Docker Compose.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Although Compose works in all environments, it's more focused on development and testing. Using Compose in a production environment is not recommended at all.

Compose Basics

If you've cloned the project code repository, then go inside the notes-api directory. This is a simple CRUD API where you can create, read, update, and delete notes. The application uses PostgreSQL as its database system.

The project already comes with a Dockerfile.dev file. Content of the file is as follows:

FROM node:lts
WORKDIR /usr/app
COPY ./package.json .
RUN npm install
COPY . .
CMD [ "npm", "run", "dev" ]

This is just like the ones we've written in the previous section. We're copying the package.json file, installing the dependencies, copying the project files and starting the development server by executing npm run dev.

Using Compose is basically a three-step process:

  1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

Services are basically containers with some additional configuration. Before we write your first YAML file together, let's list out the services needed to run this application. There are only two:

  1. api - an Express application container run using the Dockerfile.dev file in the project root.
  2. db - a PostgreSQL instance, run using the official postgres image.

Create a new docker-compose.yml file in the project root and let's define your first service together. You can use .yml or .yaml extension. Both work just fine. We'll write the code first and then I'll break down the code line-by-line. Code for the db service is as follows:

version: "3.8"services: db:image: postgres:12volumes: - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.denvironment:POSTGRES_PASSWORD: 63eaQB9wtLqmNBpgPOSTGRES_DB: notesdb

Every valid docker-compose.yml file starts by defining the file version. At the time of writing, 3.8 is the latest version. You can look up the latest version here.

Blocks in a YAML file are defined by indentation. I will go through each of the blocks and explain what they do.

The services block holds the definitions for each of the services or containers in the application. db is a service inside the services block.

The db block defines a new service in the application and holds necessary information to start the container. Every service requires either a pre-built image or a Dockerfile to run a container. For the db service we're using the official PostgreSQL image.

The docker-entrypoint-initdb.d directory in the project root contains a SQL file for setting up the database tables. This directory is meant for initialization scripts. There isn't a way to copy directories inside a docker-compose.yml file, which is why we have to use a volume.

The environment block holds environment variables. A list of the valid environment variables can be found on the postgres image page on Docker Hub. The POSTGRES_PASSWORD variable sets the default password for the server and POSTGRES_DB creates a new database with the given name.

Now let's add the api service. Append the following code to the file. Be very careful to match the indentation with the db service:

    ##
    ## make sure to align the indentation properly
    ##
    api:
        build:
            context: .
            dockerfile: Dockerfile.dev
        volumes:
            - /usr/app/node_modules
            - ./:/usr/app
        ports:
            - 3000:3000
        environment:
            DB_CONNECTION: pg
            DB_HOST: db ## same as the database service name
            DB_PORT: 5432
            DB_USER: postgres
            DB_DATABASE: notesdb
            DB_PASSWORD: 63eaQB9wtLqmNBpg

We don't have a pre-built image for the api service, but we have a Dockerfile.dev file. The build block defines the build's context and the filename of the Dockerfile to use. If the file is named just Dockerfile then the filename is unnecessary.

Mapping of the volumes is identical to what you've seen in the previous section: one anonymous volume for the node_modules directory and one mapping the project root to /usr/app.

Port mapping also works in the same way as the previous section. The syntax is <host system port>:<container port>. We're mapping the port 3000 from the container to port 3000 of the host system.

In the environment block, we're defining the information necessary to set up the database connection. The application uses Knex.js as an ORM, which requires this information to connect to the database.

DB_PORT: 5432 and DB_USER: postgres are the defaults for any PostgreSQL server. DB_DATABASE: notesdb and DB_PASSWORD: 63eaQB9wtLqmNBpg need to match the values from the db service. DB_CONNECTION: pg indicates to the ORM that we're using PostgreSQL.

Any service defined in the docker-compose.yml file can be reached by other services using its service name as the hostname. So the api service can connect to the db service by using db as the host instead of a value like 127.0.0.1. That's why we're setting the value of DB_HOST to db.

Now that the docker-compose.yml file is complete, it's time for us to start the application. Compose applications are managed with a CLI tool called docker-compose. The docker-compose CLI is to Compose what the docker CLI is to Docker. To start the services, execute the following command:

docker-compose up

Executing the command will go through the docker-compose.yml file, create containers for each of the services and start them. Go ahead and execute the command. The startup process may take some time depending on the number of services.

Once done, you should see the logs coming in from all the services in your terminal window.

The application should be running at the http://127.0.0.1:3000 address, and upon visiting it, you should see a JSON response.

The API has full CRUD functionality implemented. If you want to know about the end-points, have a look at the /tests/e2e/api/routes/notes.test.js file.

The up command builds the images for the services automatically if they don't exist. If you want to force a rebuild of the images, you can use the --build option with the up command. You can stop the services by closing the terminal window or by hitting the ctrl + c key combination.
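So after changing the Dockerfile.dev or the dependencies, a forced rebuild and restart would look like this:

docker-compose up --build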

Running Services in Detached Mode

As I've already mentioned, services are containers, and like any other container, services can be run in the background. To run services in detached mode you can use the -d or --detach option with the up command.

To start the current application in detached mode, execute the following command:

docker-compose up -d

This time you shouldn't see the long wall of text that you saw in the previous sub-section.

You should still be able to access the API at the http://127.0.0.1:3000 address.

Listing Services

Just like the docker ps command, Compose has a ps command of its own. The main difference is that docker-compose ps only lists the containers that are part of a certain application. To list all the containers running as part of the notes-api application, run the following command in the project root:

docker-compose ps

Running the command inside the project directory is important. Otherwise it won't execute. Output from the command should be as follows:

The ps command for Compose shows services in any state by default. Usage of an option like -a or --all is unnecessary.

Executing Commands Inside a Running Service

Assume that our notes-api application is running and you want to access the psql CLI application inside the db service. There is a command called exec to do that. Generic syntax for the command is as follows:

docker-compose exec <service name> <command>

Service names can be found in the docker-compose.yml file. The generic syntax for starting the psql CLI application is as follows:

psql <database> <username>

Now to start the psql application inside the db service, where the database name is notesdb and the user is postgres, the following command should be executed:

docker-compose exec db psql notesdb postgres

You should land directly in the psql application.

You can run any valid SQL statement or psql command here. To exit out of the program, write \q and hit enter.
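For example, \dt lists the tables in the current database, and assuming the init script inside the docker-entrypoint-initdb.d directory creates a notes table (check the SQL file there if in doubt), you can query it directly:

\dt
SELECT * FROM notes;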

Starting Shell Inside a Running Service

You can also start a shell inside a running container using the exec command. Generic syntax of the command should be as follows:

docker-compose exec <service name> sh

You can use bash in place of sh if the container comes with that. To start a shell inside the api service, the command should be as follows:

docker-compose exec api sh

This should land you directly on the shell inside the api service.

In there, you can execute any valid shell command. You can exit by executing the exit command.
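One handy thing to try from this shell is checking that the db hostname from the compose file actually resolves. A small sketch using Node, which is already present in this image (the application itself doesn't depend on this):

node -e "require('dns').lookup('db', (err, address) => console.log(address))"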

Accessing Logs From a Running Service

If you want to view logs from a container, the dashboard can be really helpful.

You can also use the logs command to retrieve logs from a running service. The generic syntax for the command is as follows:

docker-compose logs <service name>

To access the logs from the api service execute the following command:

docker-compose logs api

You should see a wall of text appear on your terminal window.

This is just a portion of the log output. You can hook into the output stream of the service and get the logs in real-time by using the -f or --follow option. Any later log will show up instantly in the terminal as long as you don't exit by pressing the ctrl + c key combination or closing the window. The container will keep running even if you exit out of the log stream.
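So to follow the logs of the api service in real-time, the command would be:

docker-compose logs -f api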

Stopping Running Services

Services running in the foreground can be stopped by closing the terminal window or hitting the ctrl + c key combination. For stopping services running in the background, there are a number of commands available. I'll explain each of them one by one.

  • docker-compose stop - attempts to stop the running services gracefully by sending a SIGTERM signal to them. If the services don't stop within a grace period, a SIGKILL signal is sent.
  • docker-compose kill - stops the running services immediately by sending a SIGKILL signal. A SIGKILL signal can not be ignored by a recipient.
  • docker-compose down - attempts to stop the running services gracefully by sending a SIGTERM signal and removes the containers afterwards.

If you want to keep the containers for the services, use the stop command. If you want to remove the containers as well, use the down command.
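For example, docker-compose down on its own removes the containers, and adding the -v (or --volumes) option also removes the volumes Compose created, which is handy if you want a completely clean slate:

docker-compose down -v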

Composing a Full-stack Application

In this sub-section, we'll be adding a front-end application to our notes API, turning it into a complete application. I won't be explaining any of the Dockerfile.dev files in this sub-section (except the one for the nginx service) as they are identical to others you've already seen in previous sub-sections.

If you've cloned the project code repository, then go inside the fullstack-notes-application directory. Each directory inside the project root contains the code for one service along with the corresponding Dockerfile.

Before we start with the docker-compose.yml file let's look at a diagram of how the application is going to work:

Instead of accepting requests directly like we did previously, in this application all the requests will first be received by an Nginx server. Nginx will then check whether the requested end-point contains /api. If it does, Nginx will route the request to the back-end; if not, Nginx will route the request to the front-end.

The reason behind doing this is that a front-end application doesn't run inside a container. It runs in the browser, served from a container. As a result, Compose networking doesn't work as expected and the front-end application fails to find the api service.

Nginx on the other hand runs inside a container and can communicate with the different services across the entire application.

I will not get into the configuration of Nginx here. That topic is a bit out of the scope of this article. But if you want to have a look at it, go ahead and check out the /nginx/default.conf file. Code for the /nginx/Dockerfile.dev file is as follows:

FROM nginx:stable
COPY ./default.conf /etc/nginx/conf.d/default.conf

All it does is copy the configuration file to /etc/nginx/conf.d/default.conf inside the container.

Let's start writing the docker-compose.yml file by defining the services you're already familiar with: the db and api services. Create the docker-compose.yml file in the project root and put the following code in there:

version: "3.8"services: db:image: postgres:12volumes: - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.denvironment:POSTGRES_PASSWORD: 63eaQB9wtLqmNBpgPOSTGRES_DB: notesdbapi:build: context: ./apidockerfile: Dockerfile.devvolumes: - /usr/app/node_modules- ./api:/usr/appenvironment: DB_CONNECTION: pgDB_HOST: db ## same as the database service nameDB_PORT: 5432DB_USER: postgresDB_DATABASE: notesdbDB_PASSWORD: 63eaQB9wtLqmNBpg

As you can see, these two services are almost identical to the ones in the previous sub-section. The only difference is the context of the api service. That's because the code for that application now resides inside a dedicated directory named api. Also, there is no port mapping, as we don't want to expose the service directly.

The next service we're going to define is the client service. Append the following bit of code to the compose file:

    ##
    ## make sure to align the indentation properly
    ##
    client:
        build:
            context: ./client
            dockerfile: Dockerfile.dev
        volumes:
            - /usr/app/node_modules
            - ./client:/usr/app
        environment:
            VUE_APP_API_URL: /api

We're naming the service client. Inside the build block, we're setting the /client directory as the context and giving it the Dockerfile name.

Mapping of the volumes is identical to what you've seen in the previous section: one anonymous volume for the node_modules directory and one mapping the client directory to /usr/app.

The value of the VUE_APP_API_URL variable inside the environment block will be prefixed to each request that goes from the client to the api service. This way, Nginx will be able to differentiate between requests and re-route them properly.

Just like the api service, there is no port mapping here, because we don't want to expose this service either.

The last service in the application is the nginx service. To define it, append the following code to the compose file:

    ##
    ## make sure to align the indentation properly
    ##
    nginx:
        build:
            context: ./nginx
            dockerfile: Dockerfile.dev
        ports:
            - 80:80

The content of the Dockerfile.dev file has already been discussed above. We're naming the service nginx. Inside the build block, we're setting the /nginx directory as the context and giving it the Dockerfile name.

As I've already shown in the diagram, this nginx service is going to handle all the requests. So we have to expose it. Nginx runs on port 80 by default. So I'm mapping port 80 inside the container to port 80 of the host system.

We're done with the full docker-compose.yml file and now it's time to run the application. Start all the services by executing the following command:

docker-compose up

Now visit http://localhost:80 and voilà!

Try adding and deleting notes to see if the application works properly or not. Multi-container applications can be a lot more complicated than this, but for this article, this is enough.
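You can also hit the API directly through Nginx from a terminal. A rough sketch, assuming the back-end exposes its notes routes under /api/notes (check the route definitions inside the api directory for the exact paths):

curl http://localhost/api/notes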

Conclusion

I would like to thank you from the bottom of my heart for the time you've spent on reading this article. I hope you've enjoyed your time and have learned all the essentials of Docker.

To stay updated with my upcoming works, follow me @frhnhsin ✌️

Translated from: https://www.freecodecamp.org/news/the-docker-handbook/
