Table of Contents

  • Docker Security and Encryption
    • Using SSL-encrypted connections to a docker registry
    • Configuring user authentication
  • Harbor registry
    • Installing Harbor
    • Using Harbor
    • docker-compose commands
  • Docker networking
    • Bridge mode
    • Host mode
    • None mode
    • Join (container) mode
    • Some useful docker commands
    • Using cAdvisor
  • Docker custom networks
    • bridge
      • Creating a custom bridge
      • Custom subnet and gateway
      • Setting the container IP
      • Communication between different bridges
    • Cross-host Docker communication
      • Cross-host networking options
      • CNM (Container Network Model)
      • Cross-host container networking with macvlan
      • Problems and solutions

Docker Security and Encryption

Using SSL-encrypted connections to a docker registry

  • Create a certs directory
  • Generate a self-signed certificate
[root@server1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ./westos.org.key -x509 -days 365 -out ./westos.org.crt
Generating a 4096 bit RSA private key
.......................................................................++
...............++
writing new private key to './westos.org.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:cn
State or Province Name (full name) []:shannxi
Locality Name (eg, city) [Default City]:Xi'an
Organization Name (eg, company) [Default Company Ltd]:westos
Organizational Unit Name (eg, section) []:linux
Common Name (eg, your name or your server's hostname) []:westos.org
Email Address []:root@westos.org
[root@server1 certs]# ls
westos.org.crt  westos.org.key
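For repeatable setups, the interactive prompts can be skipped by passing the Distinguished Name on the command line with -subj; a sketch of the same command in non-interactive form (field values taken from the session above):

```shell
# Generate the same self-signed certificate non-interactively:
# -subj supplies the DN fields, so no prompts appear.
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout ./westos.org.key -x509 -days 365 -out ./westos.org.crt \
    -subj "/C=CN/ST=shannxi/L=Xi'an/O=westos/OU=linux/CN=westos.org"

# Quick sanity check: print the subject of the new certificate.
openssl x509 -in ./westos.org.crt -noout -subject
```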
  • Create the registry container
[root@server1 ~]# ls
anaconda-ks.cfg  certs  docker  game2048.tar  init_vm.sh
[root@server1 ~]# docker run -d -p 443:443 --restart=always --name registry -v /opt/registry:/var/lib/registry -v /root/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/westos.org.crt -e REGISTRY_HTTP_TLS_KEY=/certs/westos.org.key  registry
f4bf979077994df0e60c8f8a96261e6e41ce560c66e10a51e9b1a0a7bdf519eb
[root@server1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
f4bf97907799        registry            "/entrypoint.sh /etc…"   8 seconds ago       Up 6 seconds        0.0.0.0:443->443/tcp, 5000/tcp   registry
[root@server1 ~]# docker port registry
443/tcp -> 0.0.0.0:443
  • Connect to the registry using the certificate:
    copy xxx.crt to /etc/docker/certs.d/westos.org/ca.cert
[root@server1 ~]# cp certs/westos.org.crt /etc/docker/certs.d/westos.org/ca.cert
[root@server1 ~]# ll /etc/docker/certs.d/westos.org/ca.cert
-rw-r--r-- 1 root root 2098 Sep  8 20:23 /etc/docker/certs.d/westos.org/ca.cert

Locally:

  • Push:
[root@server1 ~]# docker push westos.org/busybox
The push refers to repository [westos.org/busybox]
c632c18da752: Pushed
latest: digest: sha256:c2d41d2ba6d8b7b4a3ffec621578eb4d9a0909df29dfa2f6fd8a2e5fd0836aed size: 527

Remote:

[root@server2 ~]# docker pull westos.org/busybox
Using default tag: latest
latest: Pulling from busybox
9c075fe2c773: Pull complete
Digest: sha256:c2d41d2ba6d8b7b4a3ffec621578eb4d9a0909df29dfa2f6fd8a2e5fd0836aed
Status: Downloaded newer image for westos.org/busybox:latest
westos.org/busybox:latest

Configuring access for clients other than docker

[root@server2 ~]# cp westos.org.crt /etc/pki/ca-trust/source/anchors/
[root@server2 ~]# update-ca-trust
[root@server2 ~]# curl https://westos.org/v2/_catalog
{"repositories":["busybox"]}

Configuring user authentication

  • Generate credentials with htpasswd (-B selects bcrypt; username and password are passed on the command line)
htpasswd -Bb auth/htpasswd admin <password>
  • Create the registry container
[root@server1 ~]# docker run -d -p 443:443 --restart=always --name registry -v /opt/registry:/var/lib/registry -v /root/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/westos.org.crt -e REGISTRY_HTTP_TLS_KEY=/certs/westos.org.key -v /root/auth:/auth -e REGISTRY_AUTH=htpasswd -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry
bfe3305451d8feffdf0fa94b81fcb80394f11a4cdeea2d39a4628df026df6efc
[root@server1 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
bfe3305451d8        registry            "/entrypoint.sh /etc…"   9 seconds ago       Up 7 seconds        0.0.0.0:443->443/tcp, 5000/tcp   registry
[root@server1 ~]#
  • Log in

Harbor registry

Installing Harbor

  • Install the components
    Make sure Docker is installed and the version is greater than 17
    Download harbor-offline-installer-v1.10.1.tgz
    Install docker-compose
  • Configure harbor.yml
  • Install
[root@server1 harbor]# ls
common  common.sh  docker-compose.yml  harbor.v1.10.1.tar.gz  harbor.yml  install.sh  LICENSE  prepare
[root@server1 harbor]# ./install.sh
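Before running install.sh, harbor.yml needs at least a hostname and the HTTPS certificate paths. A minimal sketch (the values below are examples matching the certificates created earlier; Harbor12345 is the default admin password shipped with Harbor):

```yaml
# harbor.yml (excerpt) -- example values, adjust to your environment
hostname: westos.org

https:
  port: 443
  certificate: /root/certs/westos.org.crt
  private_key: /root/certs/westos.org.key

harbor_admin_password: Harbor12345
```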


Using Harbor

You must log in before use.

[root@server1 ~]# docker tag westos.org/busybox:latest westos.org/library/busybox
[root@server1 ~]# docker push westos.org/library/busybox
The push refers to repository [westos.org/library/busybox]
c632c18da752: Pushed
latest: digest: sha256:c2d41d2ba6d8b7b4a3ffec621578eb4d9a0909df29dfa2f6fd8a2e5fd0836aed size: 527

Create projects and users

Add members to the project

Log in

[root@server2 ~]# docker login westos.org
Username: zzzhq
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Push

[root@server2 ~]# docker push westos.org/test/busybox
The push refers to repository [westos.org/test/busybox]
be8b8b42328a: Pushed
latest: digest: sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002 size: 527

Pull

[root@server2 ~]# docker pull westos.org/test/busybox
Using default tag: latest
latest: Pulling from test/busybox
df8698476c65: Pull complete
Digest: sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002
Status: Downloaded newer image for westos.org/test/busybox:latest
westos.org/test/busybox:latest

Configure the registry mirror

[root@server1 ~]# cat /etc/docker/daemon.json
{
    "insecure-registries": ["westos.org"]
}
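Note that insecure-registries only tells the client to tolerate the registry's TLS setup; what actually redirects a plain `docker pull library/busybox` to the private registry is a registry-mirrors entry. A sketch of the combined daemon.json (westos.org as in this setup):

```json
{
    "registry-mirrors": ["https://westos.org"],
    "insecure-registries": ["westos.org"]
}
```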

Restart docker:

[root@server1 ~]# docker pull library/busybox
Using default tag: latest
latest: Pulling from library/busybox
df8698476c65: Pull complete
Digest: sha256:d366a4665ab44f0648d7a00ae3fae139d55e32f9712c67accd604bb55df9d05a
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest

docker-compose commands

  • docker-compose stop/start: stop / start Harbor
[root@server1 harbor]# docker-compose stop
Stopping nginx         ... done
Stopping harbor-core   ... done
Stopping registry      ... done
Stopping harbor-portal ... done
Stopping harbor-log    ... done
[root@server1 harbor]# docker-compose start
Starting log         ... done
Starting registry    ... done
Starting registryctl ... done
Starting postgresql  ... done
Starting portal      ... done
Starting redis       ... done
Starting core        ... done
Starting jobservice  ... done
Starting proxy       ... done
  • docker-compose down/up: stop and remove / recreate and start the Harbor stack
[root@server1 harbor]# docker-compose down
Stopping harbor-jobservice ... done
Stopping nginx             ... done
Stopping harbor-core       ... done
Stopping harbor-db         ... done
Stopping registryctl       ... done
Stopping redis             ... done
Stopping registry          ... done
Stopping harbor-portal     ... done
Stopping harbor-log        ... done
Removing harbor-jobservice ... done
Removing nginx             ... done
Removing harbor-core       ... done
Removing harbor-db         ... done
Removing registryctl       ... done
Removing redis             ... done
Removing registry          ... done
Removing harbor-portal     ... done
Removing harbor-log        ... done
Removing network harbor_harbor
[root@server1 harbor]# docker-compose up
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating registry      ... done
Creating registryctl   ... done
Creating redis         ... done
Creating harbor-db     ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done

Docker networking

Bridge mode

When the docker engine starts, it creates a bridge named docker0. All containers attach to this bridge by default; its address is 172.17.0.1.

Start an nginx container:

docker inspect nginx

"Gateway": "172.17.0.1","GlobalIPv6Address": "","GlobalIPv6PrefixLen": 0,"IPAddress": "172.17.0.2",

Inside the container:

[root@server1 ~]# docker run -it --rm busybox
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=45 time=81.037 ms
64 bytes from 39.156.69.79: seq=1 ttl=45 time=247.098 ms
^C
--- baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 81.037/164.067/247.098 ms
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
/ # cat /etc/resolv.conf
nameserver 114.114.114.114

As long as the host has network access, its containers do too, and they share the host's DNS configuration.
The drawback of bridge mode is that the outside world cannot reach a container directly; it can only be accessed through port mappings.

Host mode

A network namespace provides an independent network environment: NICs, routing tables, iptables rules and so on are all isolated from other network namespaces. A Docker container is normally given its own network namespace. If a container is started in host mode, however, it does not get its own network namespace but shares the host's: the container gets no virtual NIC and no IP of its own, and uses the host's IP and ports directly.
Containers are created in bridge mode by default; pass --network host to select host mode.

[root@server1 ~]# docker run -d --name nginx --network host nginx
1a877cf68b3fd792f187397ff1fe022afa337e51ea71f9b1d73e099f3eae5f8c
[root@server1 ~]# netstat -antlup
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      18802/nginx: master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3170/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3283/master
tcp        0      0 172.25.254.101:22       172.25.254.1:46798      ESTABLISHED 3296/sshd: root@pts
tcp6       0      0 :::80                   :::*                    LISTEN      18802/nginx: master
tcp6       0      0 :::22                   :::*                    LISTEN      3170/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      3283/master

The container now listens on the host's port 80 directly and shares the host's network stack.

The advantage is that the container communicates with the outside world directly, without going through a bridge; the disadvantage is that the container loses network isolation.

None mode

Use none mode when the container must be isolated from the network entirely, for example a container used only to store important secrets such as passwords.

Join (container) mode

The new container does not create its own NIC or configure its own IP; it shares the IP and port range of a specified existing container.

Some useful docker commands

  • docker logs: view a container's logs
[root@server1 ~]# docker logs test
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
  • docker stats: view container CPU and memory usage
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
056bd91195ec        test                0.00%               1.391MiB / 991MiB   0.14%               0B / 0B             0B / 0B             2
  • docker network ls: list container networks
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b36a79642ad5        bridge              bridge              local
989dfe7c1853        host                host                local
667109ee3035        none                null                local

Using cAdvisor

cAdvisor is an open-source container monitoring tool from Google.
cAdvisor (Container Advisor) gives container users insight into the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers: for each container it keeps resource isolation parameters, historical resource usage, histograms of the complete resource-usage history, and network statistics. This data is exported per container and machine-wide.

  • Run
[root@server1 ~]# docker run \
--volume=/:/rootfs:ro   \
--volume=/var/run:/var/run:ro   \
--volume=/sys:/sys:ro   \
--volume=/var/lib/docker/:/var/lib/docker:ro   \
--volume=/dev/disk/:/dev/disk:ro   \
--publish=8080:8080   \
--detach=true   \
--name=cadvisor   \
--privileged   \
--device=/dev/kmsg \
google/cadvisor
4ab54628f413b772dc3358859f0b38253bd94829db91329e325f15e87386b716
  • Access the web UI on the published port 8080

Docker custom networks

Docker provides three custom network drivers:

  • bridge
  • overlay
  • macvlan
    The bridge driver is similar to the default bridge network mode, but adds new features.
    overlay and macvlan are used to create cross-host networks.

bridge

Creating a custom bridge
[root@server1 ~]# docker network create -d bridge mynet1
2cb008aca3872d04229eedfd6ca7d11ac9036ff01dc86f2ff3e0ee9a98e17cc4
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b36a79642ad5        bridge              bridge              local
989dfe7c1853        host                host                local
2cb008aca387        mynet1              bridge              local
667109ee3035        none                null                local
[root@server1 ~]# docker run -it --rm --network mynet1 busyboxplus
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
119: eth0@if120: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Custom subnet and gateway
[root@server1 ~]# docker network create -d bridge --subnet 172.22.0.0/24 --gateway 172.22.0.1 mynet2
aba28c93549378f6e66b977449c3b8e476e62e96999aa93f2fb28602fae14cc3
[root@server1 ~]# docker network inspect mynet2
[{"Name": "mynet2","Id": "aba28c93549378f6e66b977449c3b8e476e62e96999aa93f2fb28602fae14cc3","Created": "2020-09-09T20:57:49.225747592+08:00","Scope": "local","Driver": "bridge","EnableIPv6": false,"IPAM": {"Driver": "default","Options": {},"Config": [{"Subnet": "172.22.0.0/24","Gateway": "172.22.0.1"}]},"Internal": false,"Attachable": false,"Ingress": false,"ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {},"Options": {},"Labels": {}}
]
Setting the container IP
[root@server1 ~]# docker run -it --rm --network mynet2 --ip 172.22.0.33  busyboxplus
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
124: eth0@if125: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:16:00:21 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.33/24 brd 172.22.0.255 scope global eth0
       valid_lft forever preferred_lft forever

Containers bridged to the same network can communicate with each other.

Suppose vm2 is stopped and vm3 takes over vm2's old IP; after vm2 restarts (with a new IP), vm1 can still reach vm2 by name, because user-defined networks provide automatic DNS resolution of container names.

[root@server1 ~]# docker exec -it vm1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # [root@server1 ~]# docker run -it -d --name vm3 --network mynet1   busyboxplus
7a2c387315ff67a26e48b58707efd21eff1ba6ee33db91a9afc753503b44df91
[root@server1 ~]# docker exec -it vm3 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
146: eth0@if147: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # [root@server1 ~]# docker run -it -d --name vm2 --network mynet1   busyboxplus
30568bb84a8e007bc53a49b4d76e6f80615a5a76b6ab18e8916640db2278c450
[root@server1 ~]# docker exec -it vm2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
148: eth0@if149: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.4/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # [root@server1 ~]# docker exec -it vm1 sh
/ # ping vm2
PING vm2 (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: seq=0 ttl=64 time=0.322 ms
64 bytes from 172.19.0.4: seq=1 ttl=64 time=0.231 ms
64 bytes from 172.19.0.4: seq=2 ttl=64 time=0.237 ms
^C
--- vm2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.231/0.263/0.322 ms
/ #
Communication between different bridges
  • Connect with docker network connect
[root@server1 ~]# docker run -it -d --name vm2 --network mynet2   busyboxplus
197f38c6893985252a04182277d2d10093fbf1abb830c0aca8af4d56e5700f80
[root@server1 ~]# docker exec -it vm1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # [root@server1 ~]# docker exec -it vm2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
152: eth0@if153: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.2/24 brd 172.22.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # [root@server1 ~]# docker network connect mynet2 vm1
[root@server1 ~]# docker exec -it vm1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
154: eth1@if155: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:16:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.3/24 brd 172.22.0.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping vm2
PING vm2 (172.22.0.2): 56 data bytes
64 bytes from 172.22.0.2: seq=0 ttl=64 time=0.358 ms
64 bytes from 172.22.0.2: seq=1 ttl=64 time=0.278 ms
^C
--- vm2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.278/0.318/0.358 ms
  • Connecting with container mode
    In container mode, a new container shares a network namespace with an existing container instead of with the host. It does not create its own NIC or configure its own IP; it shares the specified container's IP and port range. Apart from networking, the two containers stay isolated (filesystem, process list, and so on), and their processes can communicate through the lo device.
[root@server1 ~]# docker run -d -it --name vm2 --network container:vm1 busyboxplus
c2d3af703289f364e3c7c32c8c933bd4c24a9c481c0a0626bd9a92d660ba5808
[root@server1 ~]# docker exec -it vm2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping vm1
PING vm1 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.533 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.238 ms
^C
--- vm1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.238/0.385/0.533 ms
/ #
/ # [root@server1 ~]#
[root@server1 ~]# docker exec -it vm1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever

This mode suits containers that need to talk to each other frequently over localhost, such as a web container and its application container.

[root@server1 ~]# docker run -d -it --name vm2 --network container:vm1 nginx
0de7918c769d3c2da36d22d4cb9b4cff471a47ad725f4fc5c8755a17ea24c1c6
/ # netstat -antlu
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.11:35025        0.0.0.0:*               LISTEN
tcp        0      0 :::80                   :::*                    LISTEN
udp        0      0 127.0.0.11:34067        0.0.0.0:*
/ # curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • Connecting containers with --link
[root@server1 ~]# docker run -d --name test nginx
a402506fa470430cc46671c652e0c16a55a38771463e89d950d94e9e49dd3ac6
[root@server1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
a402506fa470        nginx               "/docker-entrypoint.…"   14 seconds ago      Up 12 seconds       80/tcp              test
[root@server1 ~]# docker run -it --name vm1 --link test:web busyboxplus
/ # ping web
PING web (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.505 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.249 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.243 ms
^C
--- web ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.243/0.332/0.505 ms
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
158: eth0@if159: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      web a402506fa470 test
172.17.0.3      6ad45d325508
/ # env
HOSTNAME=6ad45d325508
SHLVL=1
HOME=/
WEB_PORT=tcp://172.17.0.2:80
WEB_NAME=/vm1/web
WEB_PORT_80_TCP_ADDR=172.17.0.2
WEB_PORT_80_TCP_PORT=80
WEB_PORT_80_TCP_PROTO=tcp
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WEB_ENV_PKG_RELEASE=1~buster
WEB_PORT_80_TCP=tcp://172.17.0.2:80
WEB_ENV_NGINX_VERSION=1.19.2
WEB_ENV_NJS_VERSION=0.4.3
PWD=/
/ #

Docker writes a hosts entry for the linked alias and sets the corresponding environment variables in the container.
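Since the injected WEB_PORT variable always has the form tcp://ADDR:PORT, it can be split with plain shell parameter expansion; a small sketch using the value from the listing above:

```shell
# Value as injected by --link in the session above
WEB_PORT="tcp://172.17.0.2:80"

hostport="${WEB_PORT#tcp://}"   # strip the scheme      -> 172.17.0.2:80
WEB_HOST="${hostport%%:*}"      # keep up to the colon  -> 172.17.0.2
WEB_PORTNUM="${hostport##*:}"   # keep after the colon  -> 80

echo "$WEB_HOST $WEB_PORTNUM"
```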

[root@server1 ~]# docker stop test
test
[root@server1 ~]# docker run -d  nginx
fe6c24fe2eeb13a144921d9d7abef7979543d6fd487b101164240d7cb90fbca0
[root@server1 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS               NAMES
fe6c24fe2eeb        nginx               "/docker-entrypoint.…"   7 seconds ago       Up 4 seconds                    80/tcp              elegant_morse
6ad45d325508        busyboxplus         "/bin/sh"                2 minutes ago       Exited (0) About a minute ago                       vm1
a402506fa470        nginx               "/docker-entrypoint.…"   3 minutes ago       Exited (0) 32 seconds ago                           test
[root@server1 ~]# docker inspect elegant_morse
        "IPAddress": "172.17.0.2",
[root@server1 ~]# docker start test
test
[root@server1 ~]# docker start vm1
vm1
[root@server1 ~]# docker attach vm1
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      web a402506fa470 test
172.17.0.4      6ad45d325508
/ # env
HOSTNAME=6ad45d325508
SHLVL=1
HOME=/
WEB_PORT=tcp://172.17.0.3:80
WEB_NAME=/vm1/web
WEB_PORT_80_TCP_ADDR=172.17.0.3
WEB_PORT_80_TCP_PORT=80
WEB_PORT_80_TCP_PROTO=tcp
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WEB_ENV_PKG_RELEASE=1~buster
WEB_PORT_80_TCP=tcp://172.17.0.3:80
WEB_ENV_NGINX_VERSION=1.19.2
WEB_ENV_NJS_VERSION=0.4.3
PWD=/
/ #

When the linked container's IP changes, the hosts entry and the environment variables are updated accordingly.

Cross-host Docker communication

Cross-host networking options

Docker-native:

  • overlay
  • macvlan

Third-party:

  • flannel
  • weave
  • calico

CNM (Container Network Model)

Sandbox: the container's network stack (a network namespace)
Endpoint: attaches a sandbox to a network (a veth pair)
Network: a set of endpoints; endpoints on the same network can communicate with each other

Cross-host container networking with macvlan

macvlan characteristics:

  • a NIC virtualization technique provided by the Linux kernel
  • good performance: no bridging involved; traffic uses the host's physical interface directly

Steps:

  • add an extra NIC to both server1 and server2
  • bring the NIC up and enable promiscuous mode
[root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
NAME=eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
[root@server1 ~]# ip link set eth1 up
[root@server1 ~]# ip link set eth1 promisc on

  • Create the macvlan network on server1 and server2:
### on server1
[root@server1 ~]# docker network create -d macvlan --subnet 172.10.0.0/24 --gateway 172.10.0.1 -o parent=eth1 vlan1
b044216355b37ebaf8628fe1694e73cc242c86c17a010c17bfbcb8683defa883
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b36a79642ad5        bridge              bridge              local
989dfe7c1853        host                host                local
667109ee3035        none                null                local
b044216355b3        vlan1               macvlan             local
### same operations on server2
  • Test
### on server1
[root@server1 ~]# docker run -d --name web --network vlan1 --ip 172.10.0.10 nginx
b8b320373bb8f8b19295ff7221e62648050ef5087a5240951d6354a79554c67f
### on server2
[root@server2 ~]# docker run -it --rm --network vlan1 --ip 172.10.0.20 busyboxplus
/ # curl 172.10.0.10
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
Problems and solutions

Neither host's traffic goes through docker0; the containers use the physical NIC directly, so there is no NAT and no port mapping, and efficiency improves greatly.
But dedicating one physical NIC to every network is not realistic.

  • Solution:
    macvlan takes exclusive use of its parent interface, but VLAN subinterfaces make multiple macvlan networks possible on a single NIC: 802.1Q VLANs divide a physical layer-2 network into up to 4094 mutually isolated logical networks (VLAN IDs 1-4094), which greatly improves NIC reuse.
### same operations on server1 and server2
[root@server1 ~]# docker network create -d macvlan --subnet 172.20.0.0/24 --gateway 172.20.0.1 -o parent=eth1.1 vlan2
71294916fa0f1975361194350860e7b4f7b6828ffd255129dae0277ed126e64e
[root@server1 ~]# docker run -d --name web2 --network vlan2 --ip 172.20.0.10 nginx
178e08410adc5f300fffa9ac04d59a1850dad50993e428ffa86a8b00ff50fdc4
[root@server2 ~]# docker run -it --rm --network vlan2 --ip 172.20.0.20 busyboxplus
/ # curl 172.20.0.10
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # curl 172.10.0.10
curl: (7) Failed to connect to 172.10.0.10 port 80: No route to host

macvlan networks are isolated from each other at layer 2 and cannot communicate directly, but they can be connected at layer 3 by routing through a gateway.
Using VLAN subinterfaces thus makes multiple macvlan networks possible on a single NIC.
