一、Three ways for Docker to use host hardware devices

  1. Start the container in privileged mode with --privileged=true

  2. Use the --device option to pass through individual device nodes

  3. Bind-mount the device files into the container with the -v option
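As a sketch, the three options look like this on the command line (the device path /dev/snd and the ubuntu image are illustrative, not from the original text):

```shell
# 1. Privileged mode: the container sees all host devices (broadest access, least safe)
docker run --rm --privileged=true ubuntu ls /dev

# 2. --device: pass through a single device node, with matching cgroup device permissions
docker run --rm --device=/dev/snd ubuntu ls /dev/snd

# 3. -v bind mount: mounts the device file path into the container's filesystem.
#    Note that -v alone does not grant device cgroup access, so it is usually
#    combined with --privileged or --device for the device to actually be usable.
docker run --rm -v /dev/snd:/dev/snd ubuntu ls /dev/snd
```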

二、Evolution of Docker GPU support

When Docker uses the host's GPU, it is essentially mounting into the container all the device files (and user-mode driver components) that the host uses to drive the GPU. NVIDIA's tooling for this has gone through three generations; the following introduction is from the official blog:

From <Enabling GPUs in the Container Runtime Ecosystem | NVIDIA Technical Blog>:

NVIDIA designed NVIDIA-Docker in 2016 to enable portability in Docker images that leverage NVIDIA GPUs. It allowed driver agnostic CUDA images and provided a Docker command line wrapper that mounted the user mode components of the driver and the GPU device files into the container at launch.

Over the lifecycle of NVIDIA-Docker, we realized the architecture lacked flexibility for a few reasons:

  - Tight integration with Docker did not allow support of other container technologies such as LXC, CRI-O, and other runtimes in the future
  - We wanted to leverage other tools in the Docker ecosystem, e.g. Compose (for managing applications that are composed of multiple containers)
  - Support GPUs as a first-class resource in orchestrators such as Kubernetes and Swarm
  - Improve container runtime support for GPUs, esp. automatic detection of user-level NVIDIA driver libraries, NVIDIA kernel modules, device ordering, compatibility checks and GPU features such as graphics and video acceleration

As a result, the redesigned NVIDIA-Docker moved the core runtime support for GPUs into a library called libnvidia-container. The library relies on Linux kernel primitives and is agnostic relative to the higher container runtime layers. This allows easy extension of GPU support into different container runtimes such as Docker, LXC and CRI-O. The library includes a command-line utility and also provides an API for integration into other runtimes in the future. The library, tools, and the layers we built to integrate into various runtimes are collectively called the NVIDIA Container Runtime.

Since 2015, Docker has been donating key components of its container platform, starting with the Open Containers Initiative (OCI) specification and an implementation of the specification of a lightweight container runtime called runc. In late 2016, Docker also donated containerd, a daemon which manages the container lifecycle and wraps OCI/runc. The containerd daemon handles transfer of images, execution of containers (with runc), storage, and network management. It is designed to be embedded into larger systems such as Docker. More information on the project is available on the official site.

Figure 1 shows how libnvidia-container integrates into Docker, specifically at the runc layer. We use a custom OCI prestart hook called nvidia-container-runtime-hook to runc in order to enable GPU containers in Docker (more information about hooks can be found in the OCI runtime spec). The addition of the prestart hook to runc requires us to register a new OCI compatible runtime with Docker (using the --runtime option). At container creation time, the prestart hook checks whether the container is GPU-enabled (using environment variables) and uses the container runtime library to expose the NVIDIA GPUs to the container.

Figure 1. Integration of NVIDIA Container Runtime with Docker
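Registering the NVIDIA runtime with Docker (the --runtime registration mentioned above) is commonly done through /etc/docker/daemon.json rather than dockerd flags. A minimal sketch, assuming nvidia-container-runtime is installed at the standard path:

```shell
# Register nvidia-container-runtime as an OCI-compatible runtime named "nvidia"
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

# Restart the daemon so the new runtime is picked up
sudo systemctl restart docker
```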


1、nvidia-docker

nvidia-docker is a wrapper layer on top of docker: nvidia-docker-plugin discovers the GPU device files and driver volumes on the host and adds the necessary device/volume parameters to the docker launch command.

Ubuntu distributions

```shell
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc/nvidia-docker_1.0.0.rc-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker_1.0.0.rc-1_amd64.deb && rm /tmp/nvidia-docker*.deb

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

Other distributions

```shell
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc/nvidia-docker_1.0.0.rc_amd64.tar.xz
sudo tar --strip-components=1 -C /usr/bin -xvf /tmp/nvidia-docker_1.0.0.rc_amd64.tar.xz && rm /tmp/nvidia-docker*.tar.xz

# Run nvidia-docker-plugin
sudo -b nohup nvidia-docker-plugin > /tmp/nvidia-docker.log

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

Standalone install

```shell
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.0-rc/nvidia-docker_1.0.0.rc_amd64.tar.xz
sudo tar --strip-components=1 -C /usr/bin -xvf /tmp/nvidia-docker_1.0.0.rc_amd64.tar.xz && rm /tmp/nvidia-docker*.tar.xz

# One-time setup
sudo nvidia-docker volume setup

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
```

2、nvidia-docker2

```shell
sudo apt-get install nvidia-docker2
sudo apt-get install nvidia-container-runtime
sudo dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime [...]
```
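With the nvidia runtime registered, a GPU container is started by selecting that runtime explicitly; a sketch, assuming the runtime was registered under the name nvidia and a CUDA base image is available locally:

```shell
# Select the nvidia runtime; the prestart hook reads NVIDIA_VISIBLE_DEVICES
# to decide which GPUs to expose to the container.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  nvidia/cuda nvidia-smi
```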

3、nvidia-container-toolkit

Starting with Docker 19.03, nvidia-container-toolkit adds a further layer of integration: GPU access can be requested directly with the built-in docker flag, e.g. --gpus "device=0", without registering or naming a custom runtime on the command line.
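A few common forms of the --gpus flag (the CUDA image tag is illustrative):

```shell
# Expose all GPUs to the container
docker run --rm --gpus all nvidia/cuda nvidia-smi

# Expose only GPU 0 (quote so the shell passes device=0 through intact)
docker run --rm --gpus "device=0" nvidia/cuda nvidia-smi

# Expose any 2 GPUs
docker run --rm --gpus 2 nvidia/cuda nvidia-smi
```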

