Preparing the sandbox and container launch files

sandbox (sandbox_config.json):

{
    "metadata": {
        "name": "busybox-pod",
        "uid": "busybox-pod",
        "namespace": "test.kata"
    },
    "hostname": "busybox_host",
    "log_directory": "",
    "dns_config": {},
    "port_mappings": [],
    "resources": {},
    "labels": {},
    "annotations": {},
    "linux": {}
}

container (container_config.json):

{
    "metadata": {
        "name": "busybox-container",
        "namespace": "test.kata"
    },
    "image": {
        "image": "docker.io/library/busybox:latest"
    },
    "command": ["sleep", "9999"],
    "args": [],
    "working_dir": "/",
    "log_path": "",
    "stdin": false,
    "stdin_once": false,
    "tty": false
}

Launching Kata v2 with the ctr command

ctr is containerd's command-line client.

Pull the busybox image

image="docker.io/library/busybox:latest"
ctr image pull "$image"

Run a container with the kata v2 runtime via ctr and print the sandbox kernel version

ctr run --runtime "io.containerd.kata.v2" --rm -t "$image" test-kata uname -r

Seeing a kernel version in the output means the run completed successfully.
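
To make the comparison concrete: under containerd's default runc runtime the same command prints the host kernel, while the kata run above prints the guest VM's kernel (a quick sketch, assuming runc is still the default runtime):

ctr run --rm -t "$image" test-runc uname -r   # host kernel via runc
uname -r                                      # host kernel directly, for comparison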

Deploying Kata v2 with the crictl command

Change containerd's default runtime to kata v2

/etc/containerd/config.toml:

[plugins]
  [plugins.cri]
    [plugins.cri.containerd]
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.kata.v2"
    [plugins.cri.cni]
      # conf_dir is the directory in which the admin places a CNI conf.
      conf_dir = "/etc/cni/net.d"
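
After editing config.toml, restart containerd so the new default runtime takes effect; a minimal check, assuming containerd runs under systemd:

systemctl restart containerd
# the merged configuration should now reference the kata shim
containerd config dump | grep "io.containerd.kata.v2"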

Create /etc/cni/net.d/10-mynet.conf and add the CNI configuration

cat /etc/cni/net.d/10-mynet.conf
{"cniVersion": "0.2.0","name": "mynet","type": "bridge","bridge": "cni0","isGateway": true,"ipMasq": true,"ipam": {"type": "host-local","subnet": "172.19.0.0/24","routes": [{ "dst": "0.0.0.0/0" }]}
}

Create the sandbox environment: crictl runp -r kata sandbox_config.json

This throws the following errors:

WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0002] connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
FATA[0025] run pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.2": failed to pull image "k8s.gcr.io/pause:3.2": failed to pull and unpack image "k8s.gcr.io/pause:3.2": failed to resolve reference "k8s.gcr.io/pause:3.2": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.2": dial tcp 108.177.125.82:443: connect: connection refused

The first error shows that crictl is connecting to the Docker runtime endpoint (dockershim) and the connection fails.

Check containerd's startup logs (journalctl -exu containerd): once containerd has finished starting, it listens on the unix socket /run/containerd/containerd.sock.

Create the crictl configuration file /etc/crictl.yaml

Change crictl's defaults so that runtime-endpoint and image-endpoint point to containerd:

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: true
pull-image-on-create: false
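
Once /etc/crictl.yaml is in place, a quick sanity check that crictl now talks to containerd rather than dockershim (a sketch; output fields vary by version):

crictl version    # RuntimeName should report containerd
crictl info       # dumps the CRI runtime status and configuration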

The second error says that pulling the k8s.gcr.io/pause:3.2 image failed.

With image-endpoint configured, crictl manages images through containerd and uses the k8s.io namespace by default. Use crictl pull to fetch the pause image from a reachable mirror (Aliyun here), then retag it.

crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2

After the pull completes, crictl images shows the image. ctr ns list now also shows an extra k8s.io namespace in containerd; containerd isolates images by namespace, so to see the images pulled through crictl you must specify that namespace, as in ctr -n k8s.io images ls (see the commands below).
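
The namespace checks mentioned above, restated as commands:

ctr ns list                  # a k8s.io namespace now exists alongside default
ctr -n k8s.io images ls      # images pulled through crictl live in the k8s.io namespace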

Because crictl cannot retag images, use ctr to retag it:

ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 k8s.gcr.io/pause:3.2

When crictl runp sandbox_config.json succeeds it returns an ID, and crictl pods shows the started pod. At this point the pod is only the Kata sandbox; no container is running in it yet.

POD ID              CREATED             STATE               NAME                NAMESPACE           ATTEMPT             RUNTIME
613962e9064fb       46 seconds ago      Ready               busybox-pod         test.kata           0                   (default)

Pull the busybox image with crictl. Once it succeeds, crictl images shows both the pause and busybox images; ctr -n k8s.io images ls shows them as well.

Pull the image:
crictl pull docker.io/library/busybox:latest
List the images:
crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/library/busybox                                         latest              a9d583973f65a       769kB
k8s.gcr.io/pause                                                  3.2                 80d28bedfe5de       298kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64   3.2                 80d28bedfe5de       298kB

Create the container in the Kata v2 pod

crictl create 613962e9064fb container_config.json sandbox_config.json

This returns an ID. crictl ps -a now shows the busybox container has been created, with state Created:

CONTAINER           IMAGE                              CREATED             STATE               NAME                ATTEMPT             POD ID
30917721dccfa       docker.io/library/busybox:latest   54 seconds ago      Created             busybox-container   0                   613962e9064fb

Start the container in the pod by its container ID:

crictl start 30917721dccfa

The container state changes to Running:

CONTAINER           IMAGE                              CREATED             STATE               NAME                ATTEMPT             POD ID
30917721dccfa       docker.io/library/busybox:latest   2 minutes ago       Running             busybox-container   0                   613962e9064fb

Command to exec into the container:

crictl exec -it 30917721dccfa sh

Entering the Kata v2 sandbox runtime

In a Kata v2 environment, for security reasons you cannot simply use kata-runtime exec to enter the sandbox VM, and the images shipped with Kata releases do not include a login component, so enabling login requires rebuilding the sandbox root filesystem. Logging in to the sandbox also goes through kata-monitor, and the sandbox has to be started after kata-monitor is already running for debugging to work.

Start kata-monitor. When a new pod is created it logs "add sandbox to cache"; if a sandbox was created before kata-monitor started, kata-runtime will report that the sandbox cannot be found in the cache when you try to enter it.

[root@localhost ~]# kata-monitor -listen-address 0.0.0.0:8090
INFO[0047] add sandbox to cache                          container=72a318251595d1ca8271258e5cc60050b8b163195ef35600eab14c9a3c4a2087 name=kata-monitor pid=6194 source=kata-monitor

Edit the kata-runtime configuration file /usr/share/defaults/kata-containers/configuration.toml: enable debug_console_enabled under [agent.kata], and adjust the kernel boot parameters under [hypervisor.qemu]:

[agent.kata]
debug_console_enabled = true

[hypervisor.qemu]
# append agent.debug_console to the kernel boot parameters:
sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_console"/g' "${kata_configuration_file}"
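
To confirm both edits landed, a quick grep of the configuration file (${kata_configuration_file} above is assumed to point at /usr/share/defaults/kata-containers/configuration.toml):

grep -E '^debug_console_enabled|^kernel_params' /usr/share/defaults/kata-containers/configuration.toml
# expect: debug_console_enabled = true, and kernel_params ending in "agent.debug_console"

With the configuration in place, create the sandbox and container again and enter the sandbox with kata-runtime exec:
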
[root@localhost ~]# crictl runp sandbox_config.json
f2a0ddb835b9094c719abd74678875db3c6c3ee0bfa39b07734e1258498c157c
[root@localhost ~]#
[root@localhost ~]# crictl create f2a0ddb835b9094c719abd74678875db3c6c3ee0bfa39b07734e1258498c157c container_config.json sandbox_config.json
b265a772d6ab7d31961c27767b96948a639c4fc56e7b7835b2d2153fce757625
[root@localhost ~]#
[root@localhost ~]# crictl start b265a772d6ab7d31961c27767b96948a639c4fc56e7b7835b2d2153fce757625
b265a772d6ab7d31961c27767b96948a639c4fc56e7b7835b2d2153fce757625
[root@localhost ~]#
[root@localhost ~]# kata-runtime exec f2a0ddb835b9094c719abd74678875db3c6c3ee0bfa39b07734e1258498c157c
bash-4.2# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
bash-4.2# id
uid=0(root) gid=0(root) groups=0(root)
bash-4.2# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 11:20 ?        00:00:00 /sbin/init
root         2     0  0 11:20 ?        00:00:00 [kthreadd]
root         3     2  0 11:20 ?        00:00:00 [rcu_gp]
root         4     2  0 11:20 ?        00:00:00 [rcu_par_gp]
root         5     2  0 11:20 ?        00:00:00 [kworker/0:0-vir]
root         6     2  0 11:20 ?        00:00:00 [kworker/0:0H]
root         7     2  0 11:20 ?        00:00:00 [kworker/u4:0-ev]
root         8     2  0 11:20 ?        00:00:00 [mm_percpu_wq]
root         9     2  0 11:20 ?        00:00:00 [ksoftirqd/0]
root        10     2  0 11:20 ?        00:00:00 [rcu_sched]
root        11     2  0 11:20 ?        00:00:00 [migration/0]
root        12     2  0 11:20 ?        00:00:00 [cpuhp/0]
root        13     2  0 11:20 ?        00:00:00 [kdevtmpfs]
root        14     2  0 11:20 ?        00:00:00 [netns]
root        15     2  0 11:20 ?        00:00:00 [oom_reaper]
root        16     2  0 11:20 ?        00:00:00 [writeback]
root        17     2  0 11:20 ?        00:00:00 [kcompactd0]
root        18     2  0 11:20 ?        00:00:00 [kblockd]
root        19     2  0 11:20 ?        00:00:00 [blkcg_punt_bio]
root        20     2  0 11:20 ?        00:00:00 [kworker/0:1-vir]
root        21     2  0 11:20 ?        00:00:00 [kswapd0]
root        22     2  0 11:20 ?        00:00:00 [xfsalloc]
root        23     2  0 11:20 ?        00:00:00 [xfs_mru_cache]
root        24     2  0 11:20 ?        00:00:00 [kthrotld]
root        25     2  0 11:20 ?        00:00:00 [nfit]
root        26     2  0 11:20 ?        00:00:00 [kworker/u4:1-ev]
root        27     2  0 11:20 ?        00:00:00 [khvcd]
root        28     2  0 11:20 ?        00:00:00 [kworker/0:2-cgr]
root        29     2  0 11:20 ?        00:00:00 [hwrng]
root        30     2  0 11:20 ?        00:00:00 [scsi_eh_0]
root        31     2  0 11:20 ?        00:00:00 [scsi_tmf_0]
root        32     2  0 11:20 ?        00:00:00 [ipv6_addrconf]
root        33     2  0 11:20 ?        00:00:00 [jbd2/pmem0p1-8]
root        34     2  0 11:20 ?        00:00:00 [ext4-rsv-conver]
root        50     2  0 11:20 ?        00:00:00 [kworker/0:3-vir]
root        56     2  0 11:20 ?        00:00:00 [kworker/0:4]
root        59     1  0 11:20 ?        00:00:00 /usr/bin/kata-agent
chrony      65     1  0 11:20 ?        00:00:00 /usr/sbin/chronyd
root        82    59  0 11:20 ?        00:00:00 /pause
root        84    59  0 11:21 ?        00:00:00 sleep 9999
root        87    59  0 11:21 pts/0    00:00:00 [bash]
root        92    87  0 11:21 pts/0    00:00:00 ps -ef
bash-4.2#

Entering the Kata v2 container

Create the pod and container again, check them with crictl ps, and exec into the container:

[root@localhost ~]# crictl runp sandbox_config.json
e5181d052e28b193f5bae7ea68fb7af7f8ed02a3b0672f30f93f669445b57f34
[root@localhost ~]# crictl create e5181d052e28b193f5bae7ea68fb7af7f8ed02a3b0672f30f93f669445b57f34 container_config.json sandbox_config.json
53c7d1a6d80dada9a44558c14b0b0e3358078094133f523abe118c8b9164e633
[root@localhost ~]# crictl start 53
53
[root@localhost ~]# crictl ps
CONTAINER           IMAGE                              CREATED             STATE               NAME                ATTEMPT             POD ID
53c7d1a6d80da       docker.io/library/busybox:latest   12 seconds ago      Running             busybox-container   0                   e5181d052e28b
[root@localhost ~]# crictl exec -it 53 sh
/ # ps -ef
PID   USER     TIME  COMMAND
    1 root      0:00 /pause
    2 root      0:00 sleep 9999
    3 root      0:00 sh
    4 root      0:00 ps -ef
/ #

Kata network analysis

Kata Containers networking is wired together with network namespaces, a tap device, and tc rules. Before the sandbox is created, a network namespace is created first; it contains two kinds of interfaces, a veth pair and a tap device. eth0 is the veth-pair interface, with one end inside the namespace created by CNI and the other end on the host. tap0_kata is the tap interface, with one end inside the CNI-created namespace and the other end attached to the QEMU hypervisor. Inside the CNI-created namespace, tc rules cross-connect the eth0 and tap0_kata interfaces, effectively joining the two into a single link.
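
The cross-connect can be inspected on a running pod by listing the tc filters inside the CNI namespace; on a working setup each ingress filter carries a mirred "egress redirect" action pointing at the other interface (a sketch, reusing the namespace and interface names from the output further below):

NS=cni-eeefa566-f128-03d7-0d4f-dec535aeaedd   # namespace name taken from the example below
ip netns exec "$NS" tc filter show dev eth0 ingress
ip netns exec "$NS" tc filter show dev tap0_kata ingress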

The sandbox has only an eth0 interface. It is emulated by QEMU on top of the tap device, and its MAC address, IP address, and netmask are identical to those of eth0 in the CNI-created network namespace on the host.

The container runs inside the sandbox and is created sharing its host's network namespace (here, the sandbox VM), so the network configuration seen inside the container is the same as in the sandbox.

Network traffic flow:

After traffic enters the host, it first goes from the physical network through a bridge or route into the network namespace. Inside the namespace, tc rules steer the traffic to the tap interface, which delivers it into the virtualized environment. Finally, because the container shares the VM's network namespace, the traffic is visible inside the container.

hw:              NIC --->
host:            bridge or router ---> veth peer interface --->
ns:              ns veth peer interface ---> tc redirect ---> ns tap interface --->
hypervisor:      qemu tap char device --->
guest host:      tap interface --->
container:       guest host interface

Host Network Namespaces:
[root@localhost opt]# ip netns
cni-eeefa566-f128-03d7-0d4f-dec535aeaedd (id: 0)
[root@localhost opt]# ip netns exec cni-eeefa566-f128-03d7-0d4f-dec535aeaedd ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:09:b4:1f:d6:2f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.19.0.52/24 brd 172.19.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c09:b4ff:fe1f:d62f/64 scope link
       valid_lft forever preferred_lft forever
4: tap0_kata: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UNKNOWN group default qlen 1000
    link/ether ba:fd:97:4f:5a:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b8fd:97ff:fe4f:5a12/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost opt]# ip netns exec cni-eeefa566-f128-03d7-0d4f-dec535aeaedd tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
 Sent 3458 bytes 37 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev tap0_kata root
 Sent 4982 bytes 50 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev tap0_kata parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 4982 bytes 50 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
 new_flows_len 0 old_flows_len 0
qdisc ingress ffff: dev tap0_kata parent ffff:fff1 ----------------
 Sent 2644 bytes 38 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
[root@localhost opt]#

Sandbox:
bash-4.2# ./ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 0e:09:b4:1f:d6:2f brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.52/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c09:b4ff:fe1f:d62f/64 scope link
       valid_lft forever preferred_lft forever
bash-4.2#

Container:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 0e:09:b4:1f:d6:2f brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.52/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c09:b4ff:fe1f:d62f/64 scope link
       valid_lft forever preferred_lft forever
/ #

Traffic leaving the Kata container carries the same MAC address as the veth interface in the host-side network namespace, because that veth interface and the interface inside the container share the same MAC address.
container:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=127 time=65.917 ms
64 bytes from 8.8.8.8: seq=1 ttl=127 time=65.852 ms
64 bytes from 8.8.8.8: seq=2 ttl=127 time=97.151 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 65.852/76.306/97.151 ms
/ #

host:
[root@localhost opt]# tcpdump -i vethbf44c9e2 -e
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vethbf44c9e2, link-type EN10MB (Ethernet), capture size 262144 bytes
10:43:22.839544 0e:09:b4:1f:d6:2f (oui Unknown) > ca:16:9b:4d:99:92 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.19.0.52 > dns.google: ICMP echo request, id 3840, seq 0, length 64
10:43:22.905210 ca:16:9b:4d:99:92 (oui Unknown) > 0e:09:b4:1f:d6:2f (oui Unknown), ethertype IPv4 (0x0800), length 98: dns.google > 172.19.0.52: ICMP echo reply, id 3840, seq 0, length 64
10:43:23.840455 0e:09:b4:1f:d6:2f (oui Unknown) > ca:16:9b:4d:99:92 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.19.0.52 > dns.google: ICMP echo request, id 3840, seq 1, length 64
10:43:23.905728 ca:16:9b:4d:99:92 (oui Unknown) > 0e:09:b4:1f:d6:2f (oui Unknown), ethertype IPv4 (0x0800), length 98: dns.google > 172.19.0.52: ICMP echo reply, id 3840, seq 1, length 64
10:43:24.840769 0e:09:b4:1f:d6:2f (oui Unknown) > ca:16:9b:4d:99:92 (oui Unknown), ethertype IPv4 (0x0800), length 98: 172.19.0.52 > dns.google: ICMP echo request, id 3840, seq 2, length 64
10:43:24.907717 ca:16:9b:4d:99:92 (oui Unknown) > 0e:09:b4:1f:d6:2f (oui Unknown), ethertype IPv4 (0x0800), length 98: dns.google > 172.19.0.52: ICMP echo reply, id 3840, seq 2, length 64
10:43:28.033051 0e:09:b4:1f:d6:2f (oui Unknown) > ca:16:9b:4d:99:92 (oui Unknown), ethertype ARP (0x0806), length 42: Request who-has localhost.localdomain tell 172.19.0.52, length 28
10:43:28.033105 ca:16:9b:4d:99:92 (oui Unknown) > 0e:09:b4:1f:d6:2f (oui Unknown), ethertype ARP (0x0806), length 42: Reply localhost.localdomain is-at ca:16:9b:4d:99:92 (oui Unknown), length 28
10:43:28.153723 ca:16:9b:4d:99:92 (oui Unknown) > 0e:09:b4:1f:d6:2f (oui Unknown), ethertype ARP (0x0806), length 42: Request who-has 172.19.0.52 tell localhost.localdomain, length 28
10:43:28.154306 0e:09:b4:1f:d6:2f (oui Unknown) > ca:16:9b:4d:99:92 (oui Unknown), ethertype ARP (0x0806), length 42: Reply 172.19.0.52 is-at 0e:09:b4:1f:d6:2f (oui Unknown), length 28
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel
[root@localhost opt]#

Kata filesystem analysis

At the filesystem level, Kata Containers can be understood as layered: the sandbox filesystem is the first layer, and the container image is stored inside the sandbox filesystem. But the container's image has to stay in sync with the image on the host, which is where host/VM file-sharing technology comes in.

Kata Containers uses virtio-fs to share host directories into the VM. virtio-fs uses the FUSE protocol to communicate between host and guest: a FUSE server on the host operates on the host's files, the guest kernel acts as the FUSE client and mounts the filesystem inside the guest, and virtio serves as the transport layer carrying the FUSE protocol between server and client.

The FUSE protocol: the kernel intercepts filesystem system calls and forwards them to user space, which makes it possible to plug in different filesystem implementations.
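
Inside the guest this amounts to mounting the shared directory by its virtiofs tag; a minimal sketch of what the kata-agent effectively does (the tag kataShared and the mount point match the mount output shown later; requires a guest kernel with virtiofs support):

mount -t virtiofs kataShared /run/kata-containers/shared/containers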

virtio-fs: https://kernel.taobao.org/2019/11/virtio-fs-intro-and-perf-optimize/

The host starts two virtiofsd processes:

[root@localhost ~]# ps -ef | grep 3591
root        3591    3570  0 10:58 ?        00:00:00 /opt/kata/libexec/kata-qemu/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
root        3604    3591  0 10:58 ?        00:00:00 /opt/kata/libexec/kata-qemu/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
root        5201    3980  0 11:43 pts/4    00:00:00 grep --color=auto 3591
[root@localhost ~]# ls /run/kata-containers/shared/sandboxes/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/shared
bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9                               dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246-216db849e4c85789-hostname
bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9-38821f6d7ff9a0a2-resolv.conf  dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246-3d94e11ad98423b6-resolv.conf
dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246                               dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246-ef69b23937321637-hosts
[root@localhost ~]#

The QEMU process launched by Kata includes a vhost-user-fs-pci device, which establishes the vhost-user connection between the guest kernel and virtiofsd.

The vhost-user socket chardev added to the QEMU command line:

-chardev socket,id=char-d2eb304b58025a80,path=/run/vc/vm/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/vhost-fs.sock

The virtiofs device added to the QEMU command line:

-device vhost-user-fs-pci,chardev=char-d2eb304b58025a80,tag=kataShared

[root@localhost /]# ps -ef | grep qemu-system-x86_64
root        3597       1  0 10:58 ?        00:00:15 /opt/kata/bin/qemu-system-x86_64 -name sandbox-bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9 -uuid cc99a26f-8b24-406b-af3e-7601642fcd3f -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host,pmu=off -qmp unix:/run/vc/vm/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=4735M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=true,id=serial0,romfile= -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/test-kata-containers.img,size=536870912 -device virtio-scsi-pci,id=scsi0,disable-modern=true,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0,romfile= -device vhost-vsock-pci,disable-modern=true,vhostfd=3,id=vsock-112570929,guest-cid=112570929,romfile= -chardev socket,id=char-d2eb304b58025a80,path=/run/vc/vm/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-d2eb304b58025a80,tag=kataShared,romfile= -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=26:9c:58:9e:61:55,disable-modern=true,mq=on,vectors=4,romfile= -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/test-vmlinux.bin -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 quiet systemd.show_status=false panic=1 nr_cpus=2 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none agent.debug_console agent.debug_console_vport=1026 agent.debug_console -pidfile /run/vc/vm/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/pid -smp 1,cores=1,threads=1,sockets=2,maxcpus=2

Enter the Kata Containers sandbox and check the mounts: the virtiofs filesystem with tag kataShared is mounted at the following directories.

bash-4.2# mount | grep kataShared
kataShared on /run/kata-containers/shared/containers type virtiofs (rw,relatime)
kataShared on /run/kata-containers/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/rootfs type virtiofs (rw,relatime)
kataShared on /run/kata-containers/dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246/rootfs type virtiofs (rw,relatime)
bash-4.2#

Host-to-guest mount mapping:

/run/kata-containers/shared/sandboxes/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/shared
    ---> /run/kata-containers/shared/containers

/run/kata-containers/shared/sandboxes/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/shared/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9
    ---> /run/kata-containers/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/rootfs

/run/kata-containers/shared/sandboxes/bd570ad1fc1d531d7227425e598d71dd17c6b57209991b733df0083ffc38e2a9/shared/dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246
    ---> /run/kata-containers/dafed18b274865392308b232916608b50c07f8e3da65bc93920201f8a54c8246/rootfs
