Contents

Key Steps

Detailed Steps and Notes

Building and Installation

Confirm RDMA Functionality

iWARP / RoCEv2 Selection

Flow Control Settings

ECN Configuration

Memory Requirements

Resource Profile Limits

RDMA Statistics

perftest

Dynamic Tracing

Dynamic Debug

Capturing RDMA Traffic with tcpdump

Intel official download site

Troubleshooting

IP Configuration

Mapping RDMA devices to network interfaces


Key Steps

(Install order: ice --> irdma --> rdma-core)

1. Install the Intel RDMA NIC.

2. Download the driver from the Intel website: https://downloadcenter.intel.com/zh-cn/download/30368/-E810-X722-Ethernet-Linux-RDMA-

3. Install the corresponding LAN driver. (Before installing irdma, the LAN driver for E810 or X722 (ice or i40e) must be built from the source included in this release and installed on your system.)

Download the ice package, extract it, follow its README, then enter the src directory and run "make install".

4. Install irdma.

(The irdma Linux* driver enables RDMA functionality on RDMA-capable Intel network devices.)

5. Install rdma-core (the user-space libibverbs libraries that provide the programming interface for applications).

Note: when running "patch -p2 < /path/to/irdma-<version>/libirdma-27.0.patch", do not omit the "<" symbol.

6. Set the driver load mode to iWARP or RoCEv2.

Use the "ibv_devinfo" command to check the current transport mode:

transport:                      iWARP (1)
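
As a quick reference, the whole sequence can be scripted roughly as below. This is a minimal sketch assuming an E810 adapter on RHEL; the <version> placeholders and paths must match the packages you actually downloaded, and the rdma-core build itself follows the distribution-specific steps in the detailed section.

    # Sketch only: <version> placeholders are not real version numbers.
    # 1) LAN driver (ice): build and install from source
    tar zxf ice-<version>.tar.gz
    cd ice-<version>/src && make install && cd -

    # 2) RDMA driver (irdma)
    tar zxf irdma-<version>.tgz
    cd irdma-<version> && ./build.sh && cd -
    modprobe irdma

    # 3) User-space libraries (rdma-core 27.0), patched for irdma
    tar xzvf rdma-core-27.0.tar.gz
    cd rdma-core-27.0
    patch -p2 < /path/to/irdma-<version>/libirdma-27.0.patch
    # ...then build and install rdma-core as described in step 5 of the detailed section.

    # 4) Verify the transport mode
    ibv_devinfo | grep transport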

Detailed Steps and Notes

Original: readme.txt: https://downloadmirror.intel.com/30368/eng/README_irdma_1.4.22.txt

==============================================================================
irdma - Linux* RDMA Driver for the E810 and X722 Intel(R) Ethernet Controllers
==============================================================================

--------
Contents
--------
- Overview
- Prerequisites
- Supported OS List
- Building and Installation
- Confirm RDMA Functionality
- iWARP/RoCEv2 Selection
- iWARP Port Mapper (iwpmd)
- Flow Control Settings
- ECN Configuration
- Devlink Configuration
- Memory Requirements
- Resource Profile Limits
- Resource Limits Selector
- RDMA Statistics
- perftest
- MPI
- Performance
- Interoperability
- Dynamic Tracing
- Dynamic Debug
- Capturing RDMA Traffic with tcpdump
- Known Issues/Notes

--------
Overview
--------

The irdma Linux* driver enables RDMA functionality on RDMA-capable Intel
network devices. Devices supported by this driver:
   - Intel(R) Ethernet Controller E810
   - Intel(R) Ethernet Network Adapter X722

The E810 and X722 devices each support a different set of RDMA features.
    - E810 supports both iWARP and RoCEv2 RDMA transports, and also supports
      congestion management features like priority flow control (PFC) and
      explicit congestion notification (ECN).
    - X722 supports only iWARP and a more limited set of configuration
      parameters.

Differences between adapters are described in each section of this document.

For both E810 and X722, the corresponding LAN driver (ice or i40e) must be
built from source included in this release and installed on your system prior
to installing irdma.

-------------
Prerequisites
-------------

- Compile and install the E810 or X722 LAN PF driver from source included in
  this release. Refer to the ice or i40e driver README for installation
  instructions.
    * For E810 adapters, use the ice driver.
    * For X722 adapters, use the i40e driver.
- For best results, use a fully supported OS from the Supported OS List below.
- For server memory requirements, see the "Memory Requirements" section of this
  document.
- Install required packages. Refer to the "Building" section of the rdma-core
  README for required packages for your OS:
        https://github.com/linux-rdma/rdma-core/blob/v27.0/README.md
    * RHEL 7 and SLES:
        Install all required packages listed in the rdma-core README.
    * RHEL 8:
        Install the required packages for RHEL 7, then install the following
        additional packages:
            dnf install python3-docutils perl-generators
    * Ubuntu:
        Install the required packages listed in the rdma-core README, then
        install the following additional package:
            apt-get install python3-docutils libsystemd-dev

* Note:
The following are sample repo files that can be used to get the dependent packages
for rdma-core. However, these may not be all that is required.

- For SLES
    http://download.opensuse.org/distribution/leap/42.3/repo/oss

- For RHEL 8.1
    http://vault.centos.org/8.1.1911/PowerTools/x86_64/os/

-----------------
Supported OS List
-----------------

Supported:
        * RHEL 8.3
        * RHEL 7.9
        * SLES 15 SP2
        * SLES 12 SP5
        * Ubuntu 18.04
        * Ubuntu 20.04

Supported Not Validated:
        * RHEL 8.2
        * RHEL 8.1
        * RHEL 8
        * RHEL 7.8
        * RHEL 7.7
        * RHEL 7.6 + OFED 4.17-1
        * RHEL 7.5 + OFED 4.17-1
        * RHEL 7.4 + OFED 4.17-1
        * SLES 15 SP1
        * SLES 15 + OFED 4.17-1
        * SLES 12 SP 4 + OFED 4.17-1
        * SLES 12 SP 3 + OFED 4.17-1
        * Linux kernel stable 5.10.*
        * Linux kernel longterm 5.4.*, 4.19.*, 4.14.*

-------------------------
Building and Installation
-------------------------

To build and install the irdma driver and supporting rdma-core libraries:

1. Extract the irdma driver tarball:
        tar zxf irdma-<version>.tgz

2. Build and install the RDMA driver:
        cd irdma-<version>
        ./build.sh

By default, the irdma driver is built using in-distro RDMA libraries and
   modules. Optionally, irdma may also be built using OFED modules. See the
   Supported OS List above for a list of OSes that support this option.
   * Note: Intel products are not validated on other vendors' proprietary
           software packages.
   To install irdma using OFED modules:
        - Download OFED-4.17-1.tgz from the OpenFabrics Alliance:
             wget http://openfabrics.org/downloads/OFED/ofed-4.17-1/OFED-4.17-1.tgz
        - Decompress the archive:
             tar xzvf OFED-4.17-1.tgz
        - Install OFED:
             cd OFED-4.17-1
             ./install --all
        - Reboot after installation is complete.
        - Build the irdma driver with the "ofed" option:
             cd /path/to/irdma-<version>
            ./build.sh ofed
        - Continue with the installation steps below.

3. Load the driver:
    RHEL and Ubuntu:
        modprobe irdma

SLES:
        modprobe irdma --allow-unsupported

Notes:
        - This modprobe step is required only during installation. Normally,
          irdma is autoloaded via a udev rule when ice or i40e is loaded:
             /usr/lib/udev/rules.d/90-rdma-hw-modules.rules
        - For SLES, to automatically allow loading unsupported modules, add the
          following to /etc/modprobe.d/10-unsupported-modules.conf:
              allow_unsupported_modules 1

4. Uninstall any previously installed rdma-core user-space libraries.
   For example, in RHEL:
        yum erase rdma-core

Note: "yum erase rdma-core" will also remove any packages that depend on
          rdma-core, such as perftest or fio. Please re-install them after
          installing rdma-core.

5. Patch, build, and install the rdma-core user-space libraries:

RHEL:
        # 1. Download rdma-core-27.0.tar.gz from GitHub
        wget https://github.com/linux-rdma/rdma-core/releases/download/v27.0/rdma-core-27.0.tar.gz
        # 2. Apply patch libirdma-27.0.patch to rdma-core
        tar -xzvf rdma-core-27.0.tar.gz
        cd rdma-core-27.0
        patch -p2 < /path/to/irdma-<version>/libirdma-27.0.patch   # do not omit the "<" symbol
        cd ..
        # 3. Make sure the directory rdma-core/redhat and its contents are under the "root" group
        chgrp -R root rdma-core-27.0/redhat
        # 4. Repackage with the proper name for building ("tgz" extension instead of "tar.gz")
        tar -zcvf rdma-core-27.0.tgz rdma-core-27.0
        # 5. Build rdma-core
        mkdir -p ~/rpmbuild/SOURCES
        mkdir -p ~/rpmbuild/SPECS
        cp rdma-core-27.0.tgz ~/rpmbuild/SOURCES/
        cd ~/rpmbuild/SOURCES
        tar -xzvf rdma-core-27.0.tgz
        cp ~/rpmbuild/SOURCES/rdma-core-27.0/redhat/rdma-core.spec ~/rpmbuild/SPECS/
        cd ~/rpmbuild/SPECS/
        rpmbuild -ba rdma-core.spec
        # 6. Install the RPMs
        cd ~/rpmbuild/RPMS/x86_64
        yum install *27.0*.rpm

SLES:
        # Download rdma-core-27.0.tar.gz from GitHub
        wget https://github.com/linux-rdma/rdma-core/releases/download/v27.0/rdma-core-27.0.tar.gz
        # Apply patch libirdma-27.0.patch to rdma-core
        tar -xzvf rdma-core-27.0.tar.gz
        cd rdma-core-27.0
        patch -p2 < /path/to/irdma-<version>/libirdma-27.0.patch
        cd ..
        # Zip the rdma-core directory into a tar.gz archive
        tar -zcvf rdma-core-27.0.tar.gz rdma-core-27.0
        # Create an empty placeholder baselibs.conf file
        touch /usr/src/packages/SOURCES/baselibs.conf
        # Build rdma-core
        cp rdma-core-27.0.tar.gz /usr/src/packages/SOURCES
        cp rdma-core-27.0/suse/rdma-core.spec /usr/src/packages/SPECS/
        cd /usr/src/packages/SPECS/
        rpmbuild -ba rdma-core.spec --without=curlmini
        cd /usr/src/packages/RPMS/x86_64
        rpm -ivh --force *27.0*.rpm

Ubuntu:
        To create Debian packages from rdma-core:
        # Download rdma-core-27.0.tar.gz from GitHub
        wget https://github.com/linux-rdma/rdma-core/releases/download/v27.0/rdma-core-27.0.tar.gz
        # Apply patch libirdma-27.0.patch to rdma-core
        tar -xzvf rdma-core-27.0.tar.gz
        cd rdma-core-27.0
        patch -p2 < /path/to/irdma-<version>/libirdma-27.0.patch
        # Build rdma-core
        dh clean --with python3,systemd --builddirectory=build-deb
        dh build --with systemd --builddirectory=build-deb
        sudo dh binary --with python3,systemd --builddirectory=build-deb
        # This creates .deb packages in the parent directory
        # To install the .deb packages
        sudo dpkg -i ../*.deb

6. Add the following to /etc/security/limits.conf:
        * soft memlock unlimited
        * hard memlock unlimited
        * soft nofile 1048000
        * hard nofile 1048000
   This removes limits on pinned memory and on the number of open files for user-mode applications.
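
   To confirm the new limits are in effect (a quick check, not part of the original README), open a fresh login session and inspect the shell limits:

        ulimit -l   # max locked memory; expect "unlimited"
        ulimit -n   # max open files; expect 1048000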

7. After installing the irdma driver and rdma-core packages, reboot the server.

--------------------------
Confirm RDMA Functionality
--------------------------

After successful installation, RDMA devices are listed in the output of
"ibv_devices". For example:
    # ibv_devices
    device                 node GUID
    ------              ----------------
    rdmap175s0f0        40a6b70b6f300000
    rdmap175s0f1        40a6b70b6f310000

Notes:
    - Device names may differ depending on OS or kernel.
    - Node GUID is different for the same device in iWARP vs. RoCEv2 mode.

Each RDMA device is associated with a network interface. The sysfs filesystem
can help show the relationships between these devices. For example:

- To display the RDMA device associated with the "ens801f0" network interface:
         # ls /sys/class/net/ens801f0/device/infiniband/
         rdmap175s0f0

- To display the network interface associated with the "rdmap175s0f0" RDMA device:
         # ls /sys/class/infiniband/rdmap175s0f0/device/net/
         ens801f0

Before running RDMA applications, ensure that all hosts have IP addresses
assigned to the network interface associated with the RDMA device. The RDMA
device uses the IP configuration from the corresponding network interface.
There is no additional configuration required for the RDMA device.


To confirm RDMA functionality, run rping:

1) Start the rping server:
          rping -sdvVa [server IP address]

2) Start the rping client:
          rping -cdvVa [server IP address] -C 10

3) rping will run for 10 iterations (-C 10) and print the payload data on the console.

Notes:
        - Confirm rping functionality both from each host to itself and between
          hosts. For example:
            * Run rping server and client both on Host A.
            * Run rping server and client both on Host B.
            * Run rping server on Host A and rping client on Host B.
            * Run rping server on Host B and rping client on Host A.
        - When connecting multiple rping clients to a persistent rping server,
          older kernels may experience a crash related to the handling of cm_id
          values in the kernel stack. With E810, this problem typically appears
          in the system log as a kernel oops and stack trace pointing to
          irdma_accept. The issue has been fixed in kernels 5.4.61 and later.
          For patch details, see:
          https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/infiniband/core/ucma.c?h=v5.9-rc2&id=7c11910783a1ea17e88777552ef146cace607b3c

----------------------
iWARP / RoCEv2 Selection
----------------------

X722:
The X722 adapter supports only the iWARP transport.

E810:
The E810 controller supports both iWARP and RoCEv2 transports. By default, the
irdma driver is loaded in iWARP mode. RoCEv2 may be selected either globally
(for all ports) using the module parameter "roce_ena=1" or for individual ports
using the devlink interface.

--- Global selection

To load the driver in RoCEv2 mode automatically, add the following line to /etc/modprobe.d/irdma.conf:
        options irdma roce_ena=1

To switch manually:
  - If the irdma driver is currently loaded, unload it first:
        rmmod irdma
  - Reload the driver in RoCEv2 mode:
        modprobe irdma roce_ena=1
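
Either way, you can confirm which transport is active afterwards. A quick check (in iWARP mode ibv_devinfo reports "transport: iWARP (1)"; in RoCEv2 mode it typically reports "transport: InfiniBand (0)" with an Ethernet link layer):

        ibv_devinfo | grep -E 'hca_id|transport|link_layer'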

--- Per-port selection
E810 interfaces may be configured per interface for iWARP mode (default) or
RoCEv2 via devlink parameter configuration. See the "Devlink Configuration"
section below for instructions on per-port iWARP/RoCEv2 selection.

-------------------------
iWARP Port Mapper (iwpmd)
-------------------------
The iWARP port mapper service (iwpmd) coordinates with the host network stack
and manages TCP port space for iWARP applications.

iwpmd is automatically loaded when ice or i40e is loaded via udev rules in
/usr/lib/udev/rules.d/90-iwpmd.rules.

To verify iWARP port mapper status:
    systemctl status iwpmd

---------------------
Flow Control Settings
---------------------

X722:
The X722 adapter supports only link-level flow control (LFC).

E810:
The E810 controller supports both link-level flow control (LFC) and priority
flow control (PFC). Enabling flow control is strongly recommended when using
E810 in RoCEv2 mode.

--- Link Level Flow Control (LFC) (E810 and X722)

To enable link-level flow control on E810 or X722, use "ethtool -A".
For example, to enable LFC in both directions (rx and tx):
    ethtool -A DEVNAME rx on tx on

Confirm the setting with "ethtool -a":
    ethtool -a DEVNAME

Sample output:
    Pause parameters for interface:
    Autonegotiate: on
    RX: on
    TX: on
    RX negotiated:  on
    TX negotiated:  on

Full enablement of LFC requires the switch or link partner be configured for
rx and tx pause frames. Refer to switch vendor documentation for more details.

--- Priority Level Flow Control (PFC) (E810 only)

Priority flow control (PFC) is supported on E810 in both willing and
non-willing modes. E810 also has two Data Center Bridging (DCB) modes: software
and firmware. For more background on software and firmware modes, refer to the
E810 ice driver README.
- For PFC willing mode, firmware DCB is recommended.
- For PFC non-willing mode, software DCB must be used.

Note: E810 supports a maximum of 4 traffic classes (TCs), one of which may
      have PFC enabled.

*** PFC willing mode

In willing mode, E810 is "willing" to accept DCB settings from its link
partner. DCB is configured on the link partner (typically a switch), and the
E810 will automatically discover and apply the DCB settings to its own port.
This simplifies DCB configuration in a larger cluster and eliminates the need
to independently configure DCB on both sides of the link.

To enable PFC in willing mode on E810, use ethtool to enable firmware DCB.
Enabling firmware DCB automatically places the NIC in willing mode:
    ethtool --set-priv-flags DEVNAME fw-lldp-agent on

To confirm settings, use following command:
    ethtool --show-priv-flags DEVNAME

Expected output:
    fw-lldp-agent     :on

Note: When firmware DCB is enabled, the E810 NIC may experience an adapter-wide
      reset when the DCBX willing configuration change propagated from the link
      partner removes an RDMA-enabled traffic class (TC). This typically occurs
      when removing a TC associated with priority 0 (the default priority for
      RDMA). The reset results in a temporary loss of connectivity as the
      adapter re-initializes.

Switch DCB and PFC configuration syntax varies by vendor. Consult your switch
manual for details. Sample Arista switch configuration commands:
-  Example: Enable PFC for priority 0 on switch port 21
     * Enter configuration mode for switch port 21:
         switch#configure
         switch(config)#interface ethernet 21/1
     * Turn PFC on:
         switch(config-if-Et21/1)#priority-flow-control mode on
     * Set priority 0 for "no-drop" (i.e., PFC enabled):
         switch(config-if-Et21/1)#priority-flow-control priority 0 no-drop
     * Verify switch port PFC configuration:
         switch(config-if-Et21/1)#show priority-flow-control
- Example: Enable DCBX on switch port 21
     * Enable DCBX in IEEE mode:
         switch(config-if-Et21/1)#dcbx mode ieee
     * Show DCBX settings (including neighbor port settings):
         switch(config-if-Et21/1)#show dcbx

*** PFC non-willing mode

In non-willing mode, DCB settings must be configured on both E810 and its link
partner. Non-willing mode is software-based. OpenLLDP (lldpad and lldptool) is
recommended.

To enable non-willing PFC on E810:
  1. Disable firmware DCB. Firmware DCB is always willing. If enabled, it
     will override any software settings.
         ethtool --set-priv-flags DEVNAME fw-lldp-agent off
  2. Install OpenLLDP
         yum install lldpad
  3. Start the Open LLDP daemon:
        lldpad -d
  4. Verify functionality by showing current DCB settings on the NIC:
        lldptool -ti <ifname>
  5. Configure your desired DCB settings, including traffic classes,
     bandwidth allocations, and PFC.
     The following example enables PFC on priority 0, maps all priorities to
     traffic class (TC) 0, and allocates all bandwidth to TC0.
     This simple configuration is suitable for enabling PFC for all traffic,
     which may be useful for back-to-back benchmarking. Datacenters will
     typically use a more complex configuration to ensure quality-of-service
     (QoS).
     a. Enable PFC for priority 0:
           lldptool -Ti <interface> -V PFC willing=no enabled=0
     b. Map all priorities to TC0 and allocate all bandwidth to TC0:
           lldptool -Ti <interface> -V ETS-CFG willing=no \
           up2tc=0:0,1:0,2:0,3:0,4:0,5:0,6:0,7:0 \
           tsa=0:ets,1:strict,2:strict,3:strict,4:strict,5:strict,6:strict,7:strict \
           tcbw=100,0,0,0,0,0,0,0
  6. Verify output of "lldptool -ti <interface>":
        Chassis ID TLV
            MAC: 68:05:ca:a3:89:78
        Port ID TLV
            MAC: 68:05:ca:a3:89:78
        Time to Live TLV
            120
        IEEE 8021QAZ ETS Configuration TLV
            Willing: no
            CBS: not supported
            MAX_TCS: 8
            PRIO_MAP: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0
            TC Bandwidth: 100% 0% 0% 0% 0% 0% 0% 0%
            TSA_MAP: 0:ets 1:strict 2:strict 3:strict 4:strict 5:strict 6:strict 7:strict
        IEEE 8021QAZ PFC TLV
            Willing: no
            MACsec Bypass Capable: no
            PFC capable traffic classes: 8
            PFC enabled: 0
        End of LLDPDU LTV
  7. Configure the same settings on the link partner.

Full enablement of PFC requires the switch or link partner be configured for
PFC pause frames. Refer to switch vendor documentation for more details.

--- Directing RDMA traffic to a traffic class

When using PFC, traffic may be directed to one or more traffic classes (TCs).
Because RDMA traffic bypasses the kernel, Linux traffic control methods like
tc, cgroups, or egress-qos-map can't be used. Instead, set the Type of Service
(ToS) field on your application command line. ToS-to-priority mappings are
hardcoded in Linux as follows:
  ToS   Priority
  ---   --------
   0       0
   8       2
  24       4
  16       6
Priorities are then mapped to traffic classes using ETS using lldptool or switch
utilities.

Examples of setting ToS 16 in an application:
  ucmatose -t 16
  ib_write_bw -t 16

Alternatively, for RoCEv2, ToS may be set for all RoCEv2 traffic using
configfs. For example, to set ToS 16 on device rdma<interface>, port 1:
  mkdir /sys/kernel/config/rdma_cm/rdma<interface>
  echo 16 > /sys/kernel/config/rdma_cm/rdma<interface>/ports/1/default_roce_tos

-----------------
ECN Configuration
-----------------
X722:
Congestion control settings are not supported on X722 adapters.

E810:
The E810 controller supports the following congestion control algorithms:
    - iWARP DCTCP
    - iWARP TCP New Reno plus ECN
    - iWARP TIMELY
    - RoCEv2 DCQCN
    - RoCEv2 DCTCP
    - RoCEv2 TIMELY

Congestion control settings are accessed through configfs. Additional DCQCN
tunings are available through the devlink interface. See the "Devlink
Configuration" section for details.

--- Configuration in configfs

To access congestion control settings:

1. After driver load, change to the irdma configfs directory:
        cd /sys/kernel/config/irdma

2. Create a new directory for each RDMA device you want to configure.
   Note: Use "ibv_devices" for a list of RDMA devices.
   For example, to create configfs entries for the rdmap<interface> device:
        mkdir rdmap<interface>

3. List the new directory to get its dynamic congestion control knobs and
   values:
        cd rdmap<interface>
        for f in *; do echo -n "$f: "; cat "$f"; done;

If the interface is in iWARP mode, the files have an "iw_" prefix:
        - iw_dctcp_enable
        - iw_ecn_enable
        - iw_timely_enable

If the interface is in RoCEv2 mode, the files have a "roce_" prefix:
        - roce_dcqcn_enable
        - roce_dctcp_enable
        - roce_timely_enable

4. Enable or disable the desired algorithms.

To enable an algorithm: echo 1 > <attribute>
   For example, to add ECN marker processing to the default TCP New Reno iWARP
   congestion control algorithm:
        echo 1 > /sys/kernel/config/irdma/rdmap<interface>/iw_ecn_enable

To disable an algorithm: echo 0 > <attribute>
    For example:
        echo 0 > /sys/kernel/config/irdma/rdmap<interface>/iw_ecn_enable

To read the current status: cat <attribute>

Default values:
        iwarp_dctcp_en: off
        iwarp_timely_en: off
        iwarp_ecn_en: ON

        roce_timely_en: off
        roce_dctcp_en: off
        roce_dcqcn_en: off

5. Remove the configfs directory created above. Without removing these
   directories, the driver will not unload.
          rmdir /sys/kernel/config/irdma/rdmap<interface>
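
Putting steps 1-5 together, a session that toggles ECN on a single iWARP-mode device might look like the sketch below; rdmapXsYfZ stands in for the device name reported by ibv_devices.

        # Sketch: enable, check, and clean up the iw_ecn_enable knob for one device.
        cd /sys/kernel/config/irdma
        mkdir rdmapXsYfZ                            # placeholder device name
        echo 1 > rdmapXsYfZ/iw_ecn_enable           # enable ECN marker processing
        cat rdmapXsYfZ/iw_ecn_enable                # confirm the setting
        # ... run the workload ...
        rmdir /sys/kernel/config/irdma/rdmapXsYfZ   # required before the driver can unload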

---------------------
Devlink Configuration
---------------------
X722:
Devlink parameter configuration is not supported on the X722 adapters.

E810:
The E810 controller supports devlink configuration for the following controls:
  - iWARP/RoCEv2 per-port selection
  - DCQCN congestion control tunings
  - Fragment count limit

--- Devlink OS support

Devlink dev parameter configuration is a recent Linux capability that requires
both iproute2 tool support as well as kernel support.

The following OS/Kernel versions support devlink dev parameters:
    - RHEL 8 or later
    - SLES 15 SP1 or later
    - Ubuntu 18.04 or later
    - Linux kernel 4.19 or later

iproute2 may need to be updated to add parameter capability to the devlink
configuration. The latest released version can be downloaded and installed
from: https://github.com/shemminger/iproute2/releases

--- Devlink parameter configuration (E810 only)

1.  Get PCIe bus-info of the desired interface using "ethtool -i":
        ethtool -i DEVNAME

Example:
        # ethtool -i enp175s0f0
        driver: ice
        version: 0.11.7
        firmware-version: 0.50 0x800019de 1.2233.0
        expansion-rom-version:
        bus-info: 0000:af:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes
        supports-priv-flags: yes

bus-info is 0000:af:00.0

2.  Find the devlink name 'ice_rdma.x' in the /sys/devices folder:
        ls /sys/devices/*/*/<bus-info>/ | grep ice_rdma

Example:
        ls /sys/devices/*/*/0000:af:00.0/ | grep ice_rdma
        ice_rdma.16

3.  To display available parameters:
        devlink dev param show

RDMA devlink parameters for E810:
        roce_enable
            Selects RDMA transport: RoCEv2 (true) or iWARP (false)
        resource_limits_selector
            Limits available queue pairs (QPs). See "Resource Limits Selector"
            section for details and values.
        dcqcn_enable
            Enables the DCQCN algorithm for RoCEv2.
            Note: "roce_enable" must also be set to "true".
        dcqcn_cc_cfg_valid
            Indicates that all DCQCN parameters are valid and should be updated
            in registers or QP context.
        dcqcn_min_dec_factor
            The minimum factor by which the current transmit rate can be
            changed when processing a CNP. Value is given as a percentage
            (1-100).
        dcqcn_min_rate
            The minimum value, in Mbits per second, for rate to limit.
        dcqcn_F
            The number of times to stay in each stage of bandwidth recovery.
        dcqcn_T
            The number of microseconds that should elapse before increasing the
            CWND in DCQCN mode.
        dcqcn_B
            The number of bytes to transmit before updating CWND in DCQCN mode.
        dcqcn_rai_factor
            The number of MSS to add to the congestion window in additive
            increase mode.
        dcqcn_hai_factor
            The number of MSS to add to the congestion window in hyperactive
            increase mode.
        dcqcn_rreduce_mperiod
            The minimum time between 2 consecutive rate reductions for a single
            flow. Rate reduction will occur only if a CNP is received during
            the relevant time interval.
        fragment_count_limit
            Set fragment count limit to adjust maximum values for queue depth
            and inline data size.

4.  To set a parameter:
        devlink dev param set platform/<ice_rdma.X> name <param name> value <param value> cmode driverinit

Example: Enable RoCEv2, enable DCQCN, and set min_dec_factor=5 on ice_rdma.17:
        devlink dev param set platform/ice_rdma.17 name roce_enable value true cmode driverinit
        devlink dev param set platform/ice_rdma.17 name dcqcn_enable value true cmode driverinit
        devlink dev param set platform/ice_rdma.17 name dcqcn_min_dec_factor value 5 cmode driverinit

5.  Reload the device port with new mode:
        devlink dev reload platform/<ice_rdma.X>

Example:
        devlink dev reload platform/ice_rdma.16

Note: This does not reload the driver, so other ports are unaffected.
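
As a worked example of the per-port iWARP/RoCEv2 selection referenced earlier, the steps above can be chained as follows. The names enp175s0f0, 0000:af:00.0, and ice_rdma.16 are the samples used in this section and will differ on your system.

        # 1. Find the PCIe bus-info of the interface
        ethtool -i enp175s0f0 | grep bus-info               # e.g. 0000:af:00.0

        # 2. Find the matching devlink name
        ls /sys/devices/*/*/0000:af:00.0/ | grep ice_rdma   # e.g. ice_rdma.16

        # 3. Switch that port to RoCEv2 and reload only that port
        devlink dev param set platform/ice_rdma.16 name roce_enable value true cmode driverinit
        devlink dev reload platform/ice_rdma.16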

-------------------
Memory Requirements
-------------------
Default irdma initialization requires a minimum of ~210 MB (for E810) or
~160 MB (for X722) of memory per port.

For servers where the amount of memory is constrained, you can decrease the
required memory by lowering the resources available to E810 or X722 by loading
the driver with the following resource profile setting:

modprobe irdma rsrc_profile=2

To automatically apply the setting when the driver is loaded, add the following
to /etc/modprobe.d/irdma.conf:
    options irdma rsrc_profile=2

Note: This can have performance and scaling impacts as the number of queue
pairs and other RDMA resources are decreased in order to lower memory usage to
approximately 55 MB (for E810) or 51 MB (for X722) per port.

-----------------------
Resource Profile Limits
-----------------------
In the default resource profile, the RDMA resources configured for each
adapter are as follows:

E810 (2 ports):
        Queue Pairs: 4092
        Completion Queues: 8189
        Memory Regions: 4194302

X722 (4 ports):
        Queue Pairs: 1020
        Completion Queues: 2045
        Memory Regions: 2097150

For resource profile 2, the configuration is:

E810 (2 ports):
        Queue Pairs: 508
        Completion Queues: 1021
        Memory Regions: 524286

X722 (4 ports):
        Queue Pairs: 252
        Completion Queues: 509
        Memory Regions: 524286

------------------------
Resource Limits Selector
------------------------
In addition to resource profile, you can further limit resources via the
"limits_sel" module parameter:

E810:
    modprobe irdma limits_sel=<0-6>
X722:
    modprobe irdma gen1_limits_sel=<0-5>

To automatically apply this setting when the driver is loaded, add the
following to /etc/modprobe.d/irdma.conf:
    options irdma limits_sel=<value>

The values below apply to a 2-port E810 NIC.
        0 - Default, up to 4092 QPs
        1 - Minimum, up to 124 QPs
        2 - Up to 1020 QPs
        3 - Up to 2044 QPs
        4 - Up to 16380 QPs
        5 - Up to 65532 QPs
        6 - Maximum, up to 131068 QPs

For X722, the resource limit selector defaults to a value of 2. A single port
supports a maximum of 64k QPs, and a 4-port X722 supports up to 16k QPs per
port.

---------------
RDMA Statistics
---------------
RDMA protocol statistics for E810 or X722 are found in sysfs. To display all
counters and values:
    cd /sys/class/infiniband/rdmap<interface>/hw_counters;
    for f in *; do echo -n "$f: "; cat "$f"; done;

The following counters will increment when RDMA applications are transferring
data over the network in iWARP mode:
    - tcpInSegs
    - tcpOutSegs
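
To see which counters a particular workload moves, it can help to snapshot the hw_counters directory before and after a test and diff the results. A minimal sketch (bash; rdmapXsYfZ is a placeholder device name):

    # Snapshot counters, run a workload, then show only the counters that changed.
    DIR=/sys/class/infiniband/rdmapXsYfZ/hw_counters
    grep -H . "$DIR"/* > /tmp/counters.before
    # ... run the RDMA workload here (e.g. rping or ib_write_bw) ...
    grep -H . "$DIR"/* > /tmp/counters.after
    diff /tmp/counters.before /tmp/counters.after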

Available counters:
    ip4InDiscards       IPv4 packets received and discarded.
    ip4InReasmRqd       IPv4 fragments received by Protocol Engine.
    ip4InMcastOctets    IPv4 multicast octets received.
    ip4InMcastPkts      IPv4 multicast packets received.
    ip4InOctets         IPv4 octets received.
    ip4InPkts           IPv4 packets received.
    ip4InTruncatedPkts  IPv4 packets received and truncated due to insufficient
                          buffering space in UDA RQ.
    ip4OutSegRqd        IPv4 fragments supplied by Protocol Engine to the lower
                          layers for transmission
    ip4OutMcastOctets   IPv4 multicast octets transmitted.
    ip4OutMcastPkts     IPv4 multicast packets transmitted.
    ip4OutNoRoutes      IPv4 datagrams discarded due to routing problem (no hit
                          in ARP table).
    ip4OutOctets        IPv4 octets supplied by the PE to the lower layers for
                           transmission.
    ip4OutPkts          IPv4 packets supplied by the PE to the lower layers for
                          transmission.
    ip6InDiscards       IPv6 packets received and discarded.
    ip6InReasmRqd       IPv6 fragments received by Protocol Engine.
    ip6InMcastOctets    IPv6 multicast octets received.
    ip6InMcastPkts      IPv6 multicast packets received.
    ip6InOctets         IPv6 octets received.
    ip6InPkts           IPv6 packets received.
    ip6InTruncatedPkts  IPv6 packets received and truncated due to insufficient
                          buffering space in UDA RQ.
    ip6OutSegRqd        IPv6 fragments received by Protocol Engine
    ip6OutMcastOctets   IPv6 multicast octets transmitted.
    ip6OutMcastPkts     IPv6 multicast packets transmitted.
    ip6OutNoRoutes      IPv6 datagrams discarded due to routing problem (no hit
                           in ARP table).
    ip6OutOctets        IPv6 octets supplied by the PE to the lower layers for
                           transmission.
    ip6OutPkts          IPv6 packets supplied by the PE to the lower layers for
                           transmission.
    iwInRdmaReads       RDMAP total RDMA read request messages received.
    iwInRdmaSends       RDMAP total RDMA send-type messages received.
    iwInRdmaWrites      RDMAP total RDMA write messages received.
    iwOutRdmaReads      RDMAP total RDMA read request messages sent.
    iwOutRdmaSends      RDMAP total RDMA send-type messages sent.
    iwOutRdmaWrites     RDMAP total RDMA write messages sent.
    iwRdmaBnd           RDMA verbs total bind operations carried out.
    iwRdmaInv           RDMA verbs total invalidate operations carried out.
    RxECNMrkd           Number of packets that have the ECN bits set to
                           indicate congestion
    cnpHandled          Number of Congestion Notification Packets that have
                           been handled by the reaction point.
    cnpIgnored          Number of Congestion Notification Packets that have
                           been ignored by the reaction point.
    rxVlanErrors        Ethernet received packets with incorrect VLAN_ID.
    tcpRetransSegs      Total number of TCP segments retransmitted.
    tcpInOptErrors      TCP segments received with unsupported TCP options or
                           TCP option length errors.
    tcpInProtoErrors    TCP segments received that are dropped by TRX due to
                           TCP protocol errors.
    tcpInSegs           TCP segments received.
    tcpOutSegs          TCP segments transmitted.
    cnpSent             Number of Congestion Notification Packets that have
                           been sent by the reaction point.
    RxUDP               UDP segments received without errors
    TxUDP               UDP segments transmitted without errors

--------
perftest
--------
The perftest package is a set of RDMA microbenchmarks designed to test
bandwidth and latency using RDMA verbs. The package is maintained upstream
here: https://github.com/linux-rdma/perftest

perftest-4.4-0.29 is recommended.

Earlier versions of perftest had known issues with iWARP that have since been
fixed. Versions 4.4-0.4 through 4.4-0.18 are therefore NOT recommended.

To run a basic ib_write_bw test:
    1. Start server
           ib_write_bw -R
    2. Start client:
           ib_write_bw -R <IP address of server>
    3. Benchmark will run to completion and print performance data on both
       client and server consoles.

Notes:
    - The "-R" option is required for iWARP and optional for RoCEv2.
    - Use "-d <device>" on the perftest command lines to use a specific RDMA
      device.
    - For ib_read_bw, use "-o 1" for testing with 3rd-party link partners.
    - For ib_send_lat and ib_write_lat, use "-I 96" to limit inline data size
      to the supported value.
    - iWARP supports only RC connections.
      RoCEv2 supports RC and UD.
      Connection types XRC, UC, and DC are not supported.
    - Atomic operations are not supported on E810 or X722.
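
For example, combining the options above, a bandwidth run pinned to a specific device with multiple queue pairs might look like this (the device name and server IP are placeholders):

    # Server side
    ib_write_bw -R -d rdmapXsYfZ -q 8
    # Client side
    ib_write_bw -R -d rdmapXsYfZ -q 8 <IP address of server>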

-----------
MPI
-----------
--- Intel MPI
Intel MPI uses the OpenFabrics Interfaces (OFI) framework and libfabric user
space libraries to communicate with network hardware.

* Recommended Intel MPI versions:
    Single-rail: Intel MPI 2019u8
    Multi-rail:  Intel MPI 2019u3

Note: Intel MPI 2019u4 is not recommended due to known incompatibilities with
        iWARP.

* Recommended libfabric version: libfabric-1.11.0

The Intel MPI package includes a version of libfabric. This "internal"
  version is automatically installed along with Intel MPI and used by default.
  To use a different ("external") version of libfabric with Intel MPI:
      1. Download libfabric from https://github.com/ofiwg/libfabric.
      2. Build and install it according to the libfabric documentation.
      3. Configure Intel MPI to use a non-internal version of libfabric:
             export I_MPI_OFI_LIBRARY_INTERNAL=0
         or  source <installdir>/intel64/bin/mpivars.sh -ofi_internal=0
      4. Verify your libfabric version by using the I_MPI_DEBUG environment
         variable on the mpirun command line:
             -genv I_MPI_DEBUG=1
         The libfabric version will appear in the mpirun output.

* Sample command line for a 2-process pingpong test:

mpirun -l -n 2 -ppn 1 -host myhost1,myhost2 -genv I_MPI_DEBUG=5 \
     -genv FI_VERBS_MR_CACHE_ENABLE=1 -genv FI_VERBS_IFACE=<interface> \
     -genv FI_OFI_RXM_USE_SRX=0 -genv FI_PROVIDER='verbs;ofi_rxm' \
     /path/to/IMB-MPI1 Pingpong

Notes:
   - Example is for libfabric 1.8 or greater. For earlier versions, use
     "-genv FI_PROVIDER='verbs'"
   - For Intel MPI 2019u6, use "-genv MPIR_CVAR_CH4_OFI_ENABLE_DATA=0".
   - When using Intel MPI, it's recommended to enable only one interface on
     your networking device to avoid MPI application connectivity issues or
     hangs. This issue affects all Intel MPI transports, including TCP and
     RDMA. To avoid the issue, use "ifdown <interface>" or "ip link set down
     <interface>" to disable all network interfaces on your adapter except for
     the one used for MPI.

--- OpenMPI

* OpenMPI version 4.0.3 is recommended.

-----------
Performance
-----------
RDMA performance may be optimized by adjusting system, application, or driver
settings.

- Flow control is required for best performance in RoCEv2 mode and is optional
  in iWARP mode. Both link-level flow control (LFC) and priority flow control
  (PFC) are supported, but PFC is recommended. See the "Flow Control Settings"
  section of this document for configuration details.

- For bandwidth applications, multiple queue pairs (QPs) are required for best
  performance. For example, in the perftest suite, use "-q 8" on the command
  line to run with 8 QP.

- For best results, configure your application to use CPUs on the same NUMA
  node as your adapter. For example:
    * To list CPUs local to your NIC:
        cat /sys/class/infiniband/<interface>/device/local_cpulist
    * To specify CPUs (e.g., 24-47) when running a perftest application:
        taskset -c 24-47 ib_write_bw <test options>
    * To specify CPUs when running an Intel MPI application:
        mpirun <options> -genv I_MPI_PIN_PROCESSOR_LIST=24-47 ./my_prog

- System and BIOS tunings may also improve performance. Settings vary by
  platform - consult your OS and BIOS documentation for details.
  In general:
    * Disable power-saving features such as P-states and C-states
    * Set BIOS CPU power policies to "Performance" or similar
    * Set BIOS CPU workload configuration to "I/O Sensitive" or similar
    * On RHEL 7.*/8.*, use the "latency-performance" tuning profile:
         tuned-adm profile latency-performance

----------------
Interoperability
----------------

--- Mellanox

E810 and X722 support interop with Mellanox RoCEv2-capable adapters.

In tests like ib_send_bw, use the -R option to select rdma_cm for connection
establishment. Alternatively, you can specify a GID index with the -x option instead of -R:

Example:
    On E810 or X722:  ib_send_bw -F -n 5 -x 0
    On Mellanox:      ib_send_bw -F -n 5 -x <gid-index for RoCEv2> <ip>

...where x specifies the gid index value for RoCEv2.

Look in the /sys/class/infiniband/mlx5_0/ports/1/gid_attrs/types directory to
find the GID index for RoCEv2 on port 1.

Note: Using RDMA reads with Mellanox may result in poor performance if there is
      packet loss.

--- Chelsio

X722 supports interop with Chelsio iWARP devices.

Load Chelsio T4/T5 RDMA driver (iw_cxgb4) with parameter "dack_mode" set to 0.

modprobe iw_cxgb4 dack_mode=0

To automatically apply this setting when the iw_cxgb4 driver is loaded, add the
following to /etc/modprobe.d/iw_cxgb4.conf:
    options iw_cxgb4 dack_mode=0

---------------
Dynamic Tracing
---------------
Dynamic tracing is available for irdma's connection manager.
Turn on tracing with the following command:
    echo 1 > /sys/kernel/debug/tracing/events/irdma_cm/enable

To retrieve the trace:
    cat /sys/kernel/debug/tracing/trace
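
Tracing uses the standard kernel ftrace event interface, so it can be turned off and the trace buffer cleared the same way (not spelled out in the original README):

    echo 0 > /sys/kernel/debug/tracing/events/irdma_cm/enable
    echo > /sys/kernel/debug/tracing/trace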

-------------
Dynamic Debug
-------------
irdma supports Linux dynamic debug.

To enable all dynamic debug messages upon irdma driver load, use the "dyndbg"
module parameter:
    modprobe irdma dyndbg='+p'

Debug messages will then appear in the system log or dmesg.

Enabling dynamic debug can be extremely verbose and is not recommended for
normal operation. For more info on dynamic debug, including tips on how to
refine the debug output, see:
   https://www.kernel.org/doc/html/v4.11/admin-guide/dynamic-debug-howto.html
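
Dynamic debug can also be toggled at runtime through debugfs, without reloading the module. This is the generic kernel dynamic-debug control interface (assuming debugfs is mounted), not an irdma-specific facility:

    # Enable all irdma debug messages on a running system...
    echo 'module irdma +p' > /sys/kernel/debug/dynamic_debug/control
    # ...and disable them again.
    echo 'module irdma -p' > /sys/kernel/debug/dynamic_debug/control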

-----------------------------------
Capturing RDMA Traffic with tcpdump
-----------------------------------
RDMA traffic bypasses the kernel and is not normally available to the Linux
tcpdump utility. You may capture RDMA traffic with tcpdump by using port
mirroring on a switch.

1. Connect 3 hosts to a switch:
   - 2 compute nodes to run RDMA traffic
   - 1 host to monitor traffic

2. Configure the switch to mirror traffic from one compute node's switch port
   to the monitoring host's switch port. Consult your switch documentation
   for syntax.

3. Unload the irdma driver on the monitoring host:
      # rmmod irdma
   Traffic may not be captured correctly if the irdma driver is loaded.

4. Start tcpdump on the monitoring host. For example:
      # tcpdump -nXX -i <interface>

5. Run RDMA traffic between the 2 compute nodes. RDMA packets will appear in
   tcpdump on the monitoring host.
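
If you only want RoCEv2 traffic on the mirror port, a capture filter can cut down the noise; RoCEv2 runs over UDP destination port 4791, while iWARP traffic is ordinary TCP and is easiest to isolate by host address. These filters are suggestions, not part of the original README:

      # Capture only RoCEv2 packets on the mirror port
      tcpdump -nXX -i <interface> 'udp dst port 4791'
      # Capture iWARP (TCP) traffic between the two compute nodes
      tcpdump -nXX -i <interface> 'tcp and host <compute node IP>'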

-------------------
Known Issues/Notes
-------------------

X722:
* Support for the Intel(R) Ethernet Connection X722 iWARP RDMA VF driver
(i40iwvf) has been discontinued.

* There may be incompatible drivers in the initramfs image. You can either
update the image or remove the drivers from initramfs.

Specifically, look for i40e, ib_addr, ib_cm, ib_core, ib_mad, ib_sa, ib_ucm,
ib_uverbs, iw_cm, rdma_cm, rdma_ucm in the output of the following command:
  lsinitrd |less
If you see any of those modules, rebuild initramfs with the following command
and include the name of the module in the "" list. For example:
  dracut --force --omit-drivers "i40e ib_addr ib_cm ib_core ib_mad ib_sa
  ib_ucm ib_uverbs iw_cm rdma_cm rdma_ucm"

E810:
* Linux SRIOV for RDMA on E810 is currently not supported.

* RDMA is not supported when E810 is configured for more than 4 ports.

* E810 is limited to 4 traffic classes (TCs), one of which may be enabled for
  priority flow control (PFC).

* When using RoCEv2 on Linux kernel version 5.9 or earlier, some iSER operations
may experience errors related to iSER's handling of work requests. To work
around this issue, set the E810 fragment_count_limit devlink parameter to 13.
Refer to the "Devlink Configuration" section for details on setting devlink
parameters.

X722 and E810:
* Some commands (such as 'tc qdisc add' and 'ethtool -L') will cause the ice
driver to close the associated RDMA interface and reopen it. This will disrupt
RDMA traffic for a few seconds until the RDMA interface is available again.

* NOTE: On RHEL, installing the ice driver currently installs ice into initrd,
which means the ice driver will be loaded on boot. The installation process
also installs any currently installed version of irdma into initrd, which can
result in an unintended version of irdma being used. Depending on the desired
boot behavior of ice and irdma, follow the instructions below to ensure the
intended drivers are installed correctly.

A. Desired that both ice and irdma are loaded on boot (default)
        1. Follow installation procedure for the ice driver
        2. Follow installation procedure for the irdma driver

B. Desired that only ice driver is loaded on boot
        1. Untar ice driver
        2. Follow installation procedure for ice driver
        3. Untar irdma driver
        4. Follow installation procedure for irdma driver
        5. % dracut --force --omit-drivers "irdma"

C. Desired that neither ice nor irdma is loaded on boot
        1. Perform all steps in B
        2. % dracut --force --omit-drivers "ice irdma"

-------
Support
-------
For general information, go to the Intel support website at:
http://www.intel.com/support/ or the Intel Wired Networking project
hosted by Sourceforge at: http://sourceforge.net/projects/e1000

If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to e1000-rdma@lists.sourceforge.net

-------
License
-------
This software is available to you under a choice of one of two
licenses. You may choose to be licensed under the terms of the GNU
General Public License (GPL) Version 2, available from the file
COPYING in the main directory of this source tree, or the
OpenFabrics.org BSD license below:

Redistribution and use in source and binary forms, with or
  without modification, are permitted provided that the following
  conditions are met:

- Redistributions of source code must retain the above
    copyright notice, this list of conditions and the following
    disclaimer.

- Redistributions in binary form must reproduce the above
    copyright notice, this list of conditions and the following
    disclaimer in the documentation and/or other materials
    provided with the distribution.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

----------
Trademarks
----------
Intel is a trademark or registered trademark of Intel Corporation
or its subsidiaries in the United States and/or other countries.

* Other names and brands may be claimed as the property of others

Intel official download site

https://downloadcenter.intel.com/zh-cn/product/36773/-

https://downloadcenter.intel.com/zh-cn/download/30368/-E810-X722-Ethernet-Linux-RDMA-

Troubleshooting

  • "rpmbuild: command not found"

yum install rpm-build

  • "xxxx is needed by rdma-core-27.0-1.el7.x86_64" dependency errors

 "cmake >= 2.8.11 is needed by rdma-core-27.0-1.el7.x86_64"

Run "cmake --version"; if cmake is not installed, install it with:

yum install cmake

        "libudev-devel is needed by rdma-core-27.0-1.el7.x86_64"

yum install libudev-devel

        "pkgconfig(libnl-3.0) is needed by rdma-core-27.0-1.el7.x86_64"
        "pkgconfig(libnl-route-3.0) is needed by rdma-core-27.0-1.el7.x86_64"

yum install libnl3-devel

        "/usr/bin/rst2man is needed by rdma-core-27.0-1.el7.x86_64"

yum install python-docutils

        "valgrind-devel is needed by rdma-core-27.0-1.el7.x86_64"

yum install valgrind-devel

        "systemd-devel is needed by rdma-core-27.0-1.el7.x86_64"

yum install systemd-devel

/PROCGB/Linux/ice-1.3.2

/25_6/RDMA/Linux

IP Configuration

https://blog.csdn.net/bandaoyu/article/details/116308950

Reference: https://blog.csdn.net/qq_36783142/article/details/75353944?locationNum=3&fps=1

Mapping RDMA devices to network interfaces

Mellanox:

ibdev2netdev

Intel:

ibv_devices|awk '{system("echo "$1"\"-->\"`ls /sys/class/infiniband/"$1"/device/net`")}' |& grep -v '/device/net'

ibv_devices|awk '{system("echo "$1"\"-->\"`ls /sys/class/infiniband/"$1"/device/net`")}'

rocep24s0f3-->ens2f3
rocep24s0f1-->ens2f1
rocep24s0f0-->ens2f0
rocep24s0f2-->ens2f2
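
An equivalent but easier-to-read loop over sysfs (the same paths used in the "Confirm RDMA Functionality" section) is sketched below:

    # Print "<RDMA device> --> <network interface>" for every RDMA device.
    for d in /sys/class/infiniband/*; do
        echo "$(basename "$d") --> $(ls "$d"/device/net 2>/dev/null)"
    done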
