Contents

Prerequisites

Using DPDK interfaces

Configuring Vagrant and starting the VM

Checking the host-only network

Attaching a DPDK interface to VPP

Configuring the interface in VPP

Host to VPP communication

Layer-2 cross-connection

Set up the tap interface

Cross-connect DPDK and tap

Host to guest tap communication

Packet tracing

Cleaning DPDK interfaces

MacSwap plugin

Compilation

Loading the plugin

Using the plugin

Cleanup


This tutorial will cover several basic aspects of VPP, namely: using DPDK interfaces, connecting interfaces at layer 2, experimenting with packet traces and demonstrating VPP modularity by compiling/using a sample plugin.

It is intended for people with little experience with VPP. However, having followed e.g. the tutorial on routing and switching beforehand can be helpful for a first insight into VPP's internals.

Prerequisites

You need a working Linux VPP environment. The easiest way to get one is to install Vagrant and build a test image.
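For reference, a minimal sequence might look like the following (a sketch assuming you clone from the fd.io Gerrit mirror; adapt the URL and paths to your setup):

host:~$ git clone https://gerrit.fd.io/r/vpp
host:~$ cd vpp/build-root/vagrant
host:~/vpp/build-root/vagrant$ vagrant up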

Using DPDK interfaces

Configuring Vagrant and starting the VM

If you have not changed your Vagrantfile, Vagrant should be configured to use two NICs on your VPP virtual machine, belonging to a private network on your host OS. Make sure your file looks like the following.

host:~/vpp/build-root/vagrant$ cat Vagrantfile
... snip ...
    # Define some physical ports for your VMs to be used by DPDK
    nics = 2
    if ENV.key?('VPP_VAGRANT_NICS')
      nics = ENV['VPP_VAGRANT_NICS'].to_i(10)
    end
    for i in 1..nics
      config.vm.network "private_network", type: "dhcp"
    end
... snip ...

Also, make sure promiscuous mode is enabled for your VM NICs. With Virtualbox, this can be achieved with [Your VM] -> Settings -> Network -> Adapter X -> Promiscuous Mode -> Allow All

Another option is to set the VM NICs in promiscuous mode when updating your Vagrantfile:

--- a/build-root/vagrant/Vagrantfile
+++ b/build-root/vagrant/Vagrantfile
@@ -61,6 +61,8 @@ Vagrant.configure(2) do |config|
     config.vm.synced_folder "../../", "/vpp", disabled: false
     config.vm.provider "virtualbox" do |vb|
       vb.customize ["modifyvm", :id, "--ioapic", "on"]
+      vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
+      vb.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
       vb.memory = 4096
       vb.cpus = 2
     end

You can now boot up and access your virtual machine. If your machine was already running, please restart it so as to be in a clean state.

host:~/vpp/build-root/vagrant$ vagrant halt
host:~/vpp/build-root/vagrant$ vagrant up
host:~/vpp/build-root/vagrant$ vagrant ssh

Checking the host-only network

The private network is accessible from your host OS via the interface vboxnet0. On your guest, you should see three Intel e1000 NICs: the first one is the management interface (NATed to your host), and the two other ones belong to the host-only private network.

vagrant@localhost:~$ sudo lshw -class network -businfo
Bus info          Device     Class      Description
===================================================
pci@0000:00:03.0  eth0       network    82540EM Gigabit Ethernet Controller
pci@0000:00:08.0  eth1       network    82540EM Gigabit Ethernet Controller
pci@0000:00:09.0  eth2       network    82540EM Gigabit Ethernet Controller

Make sure your host can communicate with your VPP virtual machine. For this, get the address of the first private-network NIC of your VM (eth1) and ping it from your host.

vagrant@localhost:~$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:69:dc:cc
          inet addr:172.28.128.5  Bcast:172.28.128.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe69:dccc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4284 (4.2 KB)  TX bytes:2076 (2.0 KB)

host:~/vpp/build-root/vagrant$ ping 172.28.128.5
PING 172.28.128.5 (172.28.128.5): 56 data bytes
64 bytes from 172.28.128.5: icmp_seq=0 ttl=64 time=0.955 ms
64 bytes from 172.28.128.5: icmp_seq=1 ttl=64 time=0.289 ms
64 bytes from 172.28.128.5: icmp_seq=2 ttl=64 time=0.248 ms
64 bytes from 172.28.128.5: icmp_seq=3 ttl=64 time=0.194 ms
^C
--- 172.28.128.5 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.194/0.421/0.955/0.310 ms

Attaching a DPDK interface to VPP

By default, VPP will not attempt to use interfaces that are in use by the kernel. In order for VPP to bind an interface to DPDK, you need to set it down, then whitelist it in the configuration file by supplying the corresponding PCI address.

Do this with eth1 (which in this case has PCI address 0000:00:08.0):

vagrant@localhost:~$ sudo ifconfig eth1 down
vagrant@localhost:~$ cat /etc/vpp/startup.conf
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
}
api-trace {
  on
}
dpdk {
  socket-mem 1024
  dev 0000:00:08.0
}

You can now start VPP:

vagrant@localhost:~$ sudo start vpp

From this point on, you can execute CLI commands with sudo vppctl <command>. Alternatively, sudo vppctl opens a VPP prompt in which multiple commands can be typed.
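For instance, the following two ways of listing interfaces are equivalent (show interface and quit are standard VPP CLI commands):

vagrant@localhost:~$ sudo vppctl show interface
vagrant@localhost:~$ sudo vppctl
vpp# show interface
vpp# quit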

Configuring the interface in VPP

Your NIC should be identified as GigabitEthernet0/8/0 by VPP, the name being directly derived from the NIC PCI address: 0000:00:08.0 corresponds to bus 0, slot 8, function 0.

vpp# show hardware
              Name                Idx   Link  Hardware
GigabitEthernet0/8/0               5    down  GigabitEthernet0/8/0
  Ethernet address 08:00:27:69:dc:cc
  Intel 82540EM (e1000)
    carrier up full duplex speed 1000 mtu 9216
local0                             0    down  local0
  local
pg/stream-0                        1    down  pg/stream-0
  Packet generator
pg/stream-1                        2    down  pg/stream-1
  Packet generator
pg/stream-2                        3    down  pg/stream-2
  Packet generator
pg/stream-3                        4    down  pg/stream-3
  Packet generator

Set it up and assign it an IP address in the private network subnet:

vpp# set interface state GigabitEthernet0/8/0 up
vpp# set interface ip address GigabitEthernet0/8/0 172.28.128.5/24
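Before pinging, you can check that the address was applied (assuming your VPP build provides the show interface address command; plain show interface also displays the admin state):

vpp# show interface address GigabitEthernet0/8/0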

Host to VPP communication

You should now be able to ping the VPP interface from your host OS:

host:~/vpp/build-root/vagrant$ ping 172.28.128.5
PING 172.28.128.5 (172.28.128.5): 56 data bytes
64 bytes from 172.28.128.5: icmp_seq=0 ttl=64 time=0.191 ms
64 bytes from 172.28.128.5: icmp_seq=1 ttl=64 time=0.211 ms
^C
--- 172.28.128.5 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.191/0.201/0.211/0.010 ms

From VPP, you can see that pings have indeed reached the interface and been replied to:

vpp# show interface GigabitEthernet0/8/0
              Name               Idx       State          Counter          Count
GigabitEthernet0/8/0              5         up       rx packets                     3
                                                     rx bytes                     256
                                                     tx packets                     3
                                                     tx bytes                     256
                                                     ip4                            2
vpp# show error
   Count                    Node                  Reason
         2             ip4-icmp-input             echo replies sent
         1                arp-input               ARP replies sent

Layer-2 cross-connection

In this section, we will see how to cross-connect two interfaces, such that traffic coming to one is redirected to the other. We will create a tap interface on VPP and connect it to our DPDK interface, enabling the host OS to communicate with the guest OS while going through VPP.

Set up the tap interface

First, restart VPP to clean up your previous work.

vagrant@localhost:~$ sudo restart vpp

Create a tap interface in VPP. This will spawn an interface named tap-0 inside VPP and tap0 in the guest OS.

vpp# tap connect tap0
tap-0
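On the guest side, you can verify that the corresponding kernel interface has appeared (the ip tool is part of the standard iproute2 suite):

vagrant@localhost:~$ ip link show tap0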

In the guest OS, configure the corresponding interface.

vagrant@localhost:~$ sudo ifconfig tap0 172.28.128.42/24
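If you prefer iproute2 over the legacy ifconfig, the equivalent configuration would be:

vagrant@localhost:~$ sudo ip addr add 172.28.128.42/24 dev tap0
vagrant@localhost:~$ sudo ip link set tap0 up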

If you have an eth2 interface attached to vboxnet0 that has an address in the same subnet, disable it with sudo ifconfig eth2 down, otherwise packets might go through it.

Cross-connect DPDK and tap

You can now cross-connect the DPDK interface and the newly created tap interface. Any traffic arriving on one interface will be redirected to the other.

vpp# set interface l2 xconnect tap-0 GigabitEthernet0/8/0
vpp# set interface l2 xconnect GigabitEthernet0/8/0 tap-0
vpp# set interface state GigabitEthernet0/8/0 up
vpp# set interface state tap-0 up

In this setup, you can now send traffic from your host OS vboxnet0 interface to your guest OS tap0 interface, going through the following path: host vboxnet0 -> VPP GigabitEthernet0/8/0 -> VPP tap-0 -> guest tap0.
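Before testing, you can double-check that both interfaces are now in cross-connect mode rather than the default L3 mode (assuming the show mode CLI command is available in your VPP build):

vpp# show mode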

Host to guest tap communication

You should now be able to ping your guest tap interface from your host.

host:~/vpp/build-root/vagrant$ ping 172.28.128.42
PING 172.28.128.42 (172.28.128.42): 56 data bytes
64 bytes from 172.28.128.42: icmp_seq=0 ttl=64 time=0.241 ms
64 bytes from 172.28.128.42: icmp_seq=1 ttl=64 time=0.223 ms
^C
--- 172.28.128.42 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.223/0.232/0.241/0.009 ms

This time, you can see that packets have flowed through the L2 module of VPP:

vpp# show error
   Count                    Node                  Reason
         8                tapcli-rx               no error
         8                l2-output               L2 output packets
         8                l2-input                L2 input packets

Packet tracing

An interesting debug feature of VPP is the packet tracer, which records a given number of future packets coming from an input node. Let us trace the packets coming from the DPDK module in our previous setup:

vpp# trace add dpdk-input 10
host:~/vpp/build-root/vagrant$ ping 172.28.128.42
vpp# show trace
------------------- Start of thread 0 vpp_main -------------------
Packet 1

00:07:52:290879: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0x1039a: current data 0, length 98, free-list 0, totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0, packet_type 0x0
  IP4: 0a:00:27:00:00:00 -> 08:00:27:ce:15:49
  ICMP: 172.28.128.1 -> 172.28.128.42
    tos 0x00, ttl 64, length 84, checksum 0x4c09
    fragment id 0xd63b
  ICMP echo_request checksum 0xf6af
00:07:52:290910: ethernet-input
  IP4: 0a:00:27:00:00:00 -> 08:00:27:ce:15:49
00:07:52:290915: l2-input
  l2-input: sw_if_index 5 dst 08:00:27:ce:15:49 src 0a:00:27:00:00:00
00:07:52:290917: l2-output
  l2-output: sw_if_index 6 dst 08:00:27:ce:15:49 src 0a:00:27:00:00:00
00:07:52:290918: tap-0-output
  tap-0
  IP4: 0a:00:27:00:00:00 -> 08:00:27:ce:15:49
  ICMP: 172.28.128.1 -> 172.28.128.42
    tos 0x00, ttl 64, length 84, checksum 0x4c09
    fragment id 0xd63b
  ICMP echo_request checksum 0xf6af

You can see that the path taken by the first packet is dpdk-input -> ethernet-input -> l2-input -> l2-output -> tap-0-output: the two interfaces are indeed directly connected.

To trace packets coming from the tap interface, use trace add tapcli-rx 10. To clear the trace, use clear trace.

You can trace packets coming from different nodes by using several trace add commands; the packets will be placed in the trace buffer in their order of arrival.
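For example, to capture packets from both input nodes used in this section in a single trace buffer:

vpp# trace add dpdk-input 10
vpp# trace add tapcli-rx 10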

Cleaning DPDK interfaces

If you use VPP with a DPDK interface and later decide to stop VPP and use the NIC normally through the Linux stack, you will need to bind it back to its standard kernel driver. For that purpose, you can use the dpdk_nic_bind.py Python script, which requires the driver name and the PCI address of the interface. If you use the default VirtualBox setup, the driver will be e1000 (the standard Intel Gigabit Ethernet driver).

vagrant@localhost:$ sudo stop vpp
vagrant@localhost:$ ifconfig eth1
eth1: error fetching interface information: Device not found
vagrant@localhost:$ sudo /vpp/build-root/build-vpp-native/dpdk/dpdk-16.04/tools/dpdk_nic_bind.py -b e1000 0000:00:08.0
vagrant@localhost:$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:69:dc:cc
          inet addr:172.28.128.5  Bcast:172.28.128.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe69:dccc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:590 (590.0 B)  TX bytes:850 (850.0 B)
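If you are unsure which driver currently owns a NIC, the same script can list the binding status of all network devices (the --status option is part of the DPDK 16.04 tools):

vagrant@localhost:$ sudo /vpp/build-root/build-vpp-native/dpdk/dpdk-16.04/tools/dpdk_nic_bind.py --status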

MacSwap plugin

In this part, we are going to see how to compile and use a plugin for VPP. We are going to work with the sample macswap plugin, whose role is simply to swap the source and destination hardware addresses of packets arriving on an interface before retransmitting them on the same interface. A simplified sketch of this logic is shown below.
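The following is a minimal, self-contained C sketch of the swap itself. It is not the actual plugin code (the real node.c operates on VPP buffers inside a graph-node dispatch function), but it illustrates the core operation:

/* macswap_sketch.c: swap source and destination MAC addresses in place. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct {
  uint8_t dst[6];   /* destination MAC */
  uint8_t src[6];   /* source MAC */
  uint16_t type;    /* ethertype */
} ethernet_header_t;

static void
mac_swap (ethernet_header_t *eth)
{
  uint8_t tmp[6];
  memcpy (tmp, eth->dst, sizeof (tmp));      /* save old destination */
  memcpy (eth->dst, eth->src, sizeof (tmp)); /* destination <- source */
  memcpy (eth->src, tmp, sizeof (tmp));      /* source <- old destination */
}

int
main (void)
{
  ethernet_header_t eth = {
    .dst = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
    .src = { 0x0a, 0x00, 0x27, 0x00, 0x00, 0x00 },
    .type = 0x0806, /* ARP */
  };
  mac_swap (&eth);
  printf ("new dst starts with %02x, new src starts with %02x\n",
          eth.dst[0], eth.src[0]); /* prints 0a and ff */
  return 0;
}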

Compilation

Compile the plugin:

vagrant@localhost:~$ cd /vpp/
vagrant@localhost:/vpp$ cp -r plugins/sample-plugin ./
vagrant@localhost:/vpp$ cd build-root
vagrant@localhost:/vpp/build-root$ make V=0 PLATFORM=vpp TAG=vpp sample-plugin-install

The plugin is now located at /vpp/build-root/install-vpp-native/sample-plugin/lib64/sample_plugin.so.

Loading the plugin

To load the plugin, you can use the plugin_path directive in the VPP startup file, specifying the directory where the plugin lies.

vagrant@localhost:/vpp/build-root$ mkdir ~/plugins
vagrant@localhost:/vpp/build-root$ cp install-vpp-native/sample-plugin/lib64/sample_plugin.so ~/plugins/
vagrant@localhost:/vpp/build-root$ echo "plugin_path /home/vagrant/plugins/" | sudo tee -a /etc/vpp/startup.conf

Alternatively, you can copy the .so file to /usr/lib/vpp_plugins, which is the default path where VPP searches for plugins.

vagrant@localhost:/vpp/build-root$ sudo mkdir /usr/lib/vpp_plugins
vagrant@localhost:/vpp/build-root$ sudo cp install-vpp-native/sample-plugin/lib64/sample_plugin.so /usr/lib/vpp_plugins/

You can now start VPP:

vagrant@localhost:~$ sudo ifconfig eth1 down
vagrant@localhost:~$ sudo start vpp

Alternatively, if you choose to start VPP manually, you will see in the output that the first thing done by VPP is to load the plugin:

vagrant@localhost:~$ sudo vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:201: plugin path /home/vagrant/plugins/
load_one_plugin:87: Loaded plugin: /home/vagrant/plugins//sample_plugin.so
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Support maximum 256 logical core(s) by configuration.
...

Using the plugin

Check that the plugin is correctly loaded:

vpp# sample ?
  sample macswap                           sample macswap <interface-name> [disable]

As you can see, this plugin adds a CLI command which enables/disables the MAC swapping feature on a specified interface. Let's try this on our DPDK interface:

vpp# sample macswap GigabitEthernet0/8/0
vpp# set interface state GigabitEthernet0/8/0 up

Now, let's trace what's arriving on our DPDK interface, while generating packets from the host. We will simply use ping to generate traffic from the host to VPP.

vpp# trace add dpdk-input 10
host:~/vpp/build-root/vagrant$ ping 172.28.128.5
vpp# show trace
------------------- Start of thread 0 vpp_main -------------------
Packet 1

00:05:07:349249: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0x10f07: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2176, data_len 60, ol_flags 0x0, packet_type 0x0
  ARP: 0a:00:27:00:00:00 -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  0a:00:27:00:00:00/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5
00:05:07:349703: sample
  SAMPLE: sw_if_index 5, next index 0
00:05:07:349724: GigabitEthernet0/8/0-output
  GigabitEthernet0/8/0
  ARP: ff:ff:ff:ff:ff:ff -> 0a:00:27:00:00:00
  request, type ethernet/IP4, address size 6/4
  0a:00:27:00:00:00/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5
00:05:07:349980: GigabitEthernet0/8/0-tx
  GigabitEthernet0/8/0 tx queue 0
  buffer 0x10f07: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
  ARP: ff:ff:ff:ff:ff:ff -> 0a:00:27:00:00:00
  request, type ethernet/IP4, address size 6/4
  0a:00:27:00:00:00/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5

In this example, you can see that the path taken by the packet is dpdk-input -> sample -> GigabitEthernet0/8/0-output. This shows that the plugin has created a sample node, which traps all packets received on the GigabitEthernet0/8/0 interface. If you run Wireshark on your host OS, you will see that the packet has been sent back on the vboxnet0 interface with hardware addresses reversed:

1   0.000000    0a:00:27:00:00:00   Broadcast           ARP 42  Who has 172.28.128.5? Tell 172.28.128.1
2   0.000097    Broadcast           0a:00:27:00:00:00   ARP 42  Who has 172.28.128.5? Tell 172.28.128.1
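If you prefer a command-line capture, the same frames can be observed with tcpdump (assuming it is installed on your host; -e prints the link-level headers, so the swapped addresses are visible):

host:~$ sudo tcpdump -e -i vboxnet0 arp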

In addition to creating a CLI command and a graph node, the plugin also creates an error counter in order to keep track of the number of packets that it processed:

vpp# show error
   Count                    Node                  Reason
         1                 sample                 Mac swap packets processed

Cleanup

Finally, you can disable the packet interception with:

vpp# sample macswap GigabitEthernet0/8/0 disable

Note however that the plugin will remain loaded until you restart VPP.

Source: https://wiki.fd.io/view/VPP/Tutorial_DPDK_and_MacSwap
