1 Course Plan

1.1 Part 1: PyTorch Deep Learning Fundamentals

  • Introduction to PyTorch and installation
  • PyTorch basics
  • Main PyTorch modules
  • Hands-on basics: Fashion-MNIST clothing classification

1.2 Part 2: Advanced PyTorch Operations

  • Defining your own PyTorch model
  • Saving and loading PyTorch models
  • Using PyTorch models flexibly
  • PyTorch visualization
  • The PyTorch ecosystem

1.3 Part 3: PyTorch Case Studies

  • CV: a quick implementation of semantic segmentation
  • NLP: sentiment analysis
  • Graph neural networks
  • Medical imaging (competition)

2 PyTorch Deep Learning Fundamentals

2.1 Tensors

Definition: the tensor is the basic unit of computation in PyTorch.

In [3]:

import torch
?torch.tensor
Docstring:
tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor

Constructs a tensor with :attr:`data`.

.. warning::

    :func:`torch.tensor` always copies :attr:`data`. If you have a Tensor
    ``data`` and want to avoid a copy, use :func:`torch.Tensor.requires_grad_`
    or :func:`torch.Tensor.detach`.
    If you have a NumPy ``ndarray`` and want to avoid a copy, use
    :func:`torch.as_tensor`.

.. warning::

    When data is a tensor `x`, :func:`torch.tensor` reads out 'the data' from
    whatever it is passed, and constructs a leaf variable. Therefore
    ``torch.tensor(x)`` is equivalent to ``x.clone().detach()`` and
    ``torch.tensor(x, requires_grad=True)`` is equivalent to
    ``x.clone().detach().requires_grad_(True)``.
    The equivalents using ``clone()`` and ``detach()`` are recommended.

Args:
    data (array_like): Initial data for the tensor. Can be a list, tuple,
        NumPy ``ndarray``, scalar, and other types.

Keyword args:
    dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
        Default: if ``None``, infers data type from :attr:`data`.
    device (:class:`torch.device`, optional): the desired device of returned tensor.
        Default: if ``None``, uses the current device for the default tensor type
        (see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
        for CPU tensor types and the current CUDA device for CUDA tensor types.
    requires_grad (bool, optional): If autograd should record operations on the
        returned tensor. Default: ``False``.
    pin_memory (bool, optional): If set, returned tensor would be allocated in
        the pinned memory. Works only for CPU tensors. Default: ``False``.

Example::

    >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
    tensor([[ 0.1000,  1.2000],
            [ 2.2000,  3.1000],
            [ 4.9000,  5.2000]])

    >>> torch.tensor([0, 1])  # Type inference on data
    tensor([ 0,  1])

    >>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
    ...              dtype=torch.float64,
    ...              device=torch.device('cuda:0'))  # creates a torch.cuda.DoubleTensor
    tensor([[ 0.1111,  0.2222,  0.3333]], dtype=torch.float64, device='cuda:0')

    >>> torch.tensor(3.14159)  # Create a scalar (zero-dimensional tensor)
    tensor(3.1416)

    >>> torch.tensor([])  # Create an empty tensor (of size (0,))
    tensor([])
Type:      builtin_function_or_method

tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False)

  • data: the input data (list, tuple, NumPy ndarray, scalar, or another tensor)
  • dtype: the desired data type (inferred from data if None)
  • device: where the tensor lives (CPU or GPU)
  • requires_grad: whether autograd should record operations on the tensor
  • pin_memory: whether to allocate the tensor in pinned (page-locked) memory; CPU tensors only
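A minimal sketch of these keyword arguments in use (the CUDA branch only runs when a GPU is present; pinning memory also requires CUDA):

import torch

w = torch.tensor([1.0, 2.0], dtype=torch.float32, requires_grad=True)
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.tensor([1.0, 2.0], device=dev)  # place the tensor on GPU or CPU
if torch.cuda.is_available():
    # pinned (page-locked) memory speeds up asynchronous CPU->GPU copies
    y = torch.tensor([1.0, 2.0], pin_memory=True)
    print(y.is_pinned())
print(w.requires_grad, x.device)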

In [4]:

a = torch.tensor(1.0,dtype=torch.float)
b = torch.tensor(1,dtype=torch.long)
c = torch.tensor(1.0,dtype=torch.int8)  # forced type conversion: the float 1.0 is cast to int8
print(a,b,c)
tensor(1.) tensor(1) tensor(1, dtype=torch.int8)

In [5]:

# construct tensors by shape (values are uninitialized) or from data
d = torch.FloatTensor(2,3)
e = torch.IntTensor(2)
f = torch.IntTensor([1,2,3,4])

In [7]:

import numpy as np
g = np.array([[1,2,3],[4,5,6]])
h = torch.tensor(g)      # copies the numpy data into a new tensor
print(h)
i = torch.from_numpy(g)  # shares memory with g; no copy
print(i)
j = h.numpy()            # tensor -> numpy array
print(j)
tensor([[1, 2, 3],
        [4, 5, 6]], dtype=torch.int32)
tensor([[1, 2, 3],
        [4, 5, 6]], dtype=torch.int32)
[[1 2 3]
 [4 5 6]]
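This distinction matters in practice: torch.tensor copies the data, while torch.from_numpy shares the underlying buffer with the array. A quick check, continuing with g, h, i from above:

g[0, 0] = 100
print(h[0, 0])   # tensor(1, dtype=torch.int32)   -- h owns a copy, unaffected
print(i[0, 0])   # tensor(100, dtype=torch.int32) -- i shares g's memory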

In [14]:

# common tensor creation functions
k = torch.rand(2,3)
l = torch.ones(2,3)
m = torch.zeros(2,3)
n = torch.arange(0,10,2)
print(k)
tensor([[0.2229, 0.6146, 0.3432],
        [0.3783, 0.5949, 0.4072]])

In [12]:

print(k.shape)
print(k.size())
torch.Size([2, 3])
torch.Size([2, 3])

In [15]:

# operation
o = torch.add(k,1)
print(o)
tensor([[1.2229, 1.6146, 1.3432],
        [1.3783, 1.5949, 1.4072]])

In [16]:

# index
print(o[:,1])
print(o[0,:])
tensor([1.6146, 1.5949])
tensor([1.2229, 1.6146, 1.3432])

In [17]:

# reshape with view; passing -1 lets PyTorch infer that dimension
print(o.view(3,2))
print(o.view(-1,2))
tensor([[1.2229, 1.6146],
        [1.3432, 1.3783],
        [1.5949, 1.4072]])
tensor([[1.2229, 1.6146],
        [1.3432, 1.3783],
        [1.5949, 1.4072]])
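Note that view does not copy: it reinterprets the same underlying storage with a new shape, so writes through the view show up in the original tensor. A small check with a fresh tensor:

u = torch.ones(2, 3)
v = u.view(3, 2)   # same storage, different shape
v[0, 0] = 99.0
print(u[0, 0])     # tensor(99.) -- u sees the write made through v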

In [18]:

# broadcast
p = torch.arange(1,3).view(1,2)
print(p)
q = torch.arange(1,4).view(3,1)
print(q)
print(p+q)
tensor([[1, 2]])
tensor([[1],
        [2],
        [3]])
tensor([[2, 3],
        [3, 4],
        [4, 5]])

In [19]:

# unsqueeze: insert a size-1 dimension at index 1 (the second dimension)
r = o.unsqueeze(1)
print(r)
tensor([[[1.2229, 1.6146, 1.3432]],

        [[1.3783, 1.5949, 1.4072]]])

In [20]:

# squeeze(0) has no effect here: dimension 0 has size 2, not 1
s = r.squeeze(0)
print(s)
tensor([[[1.2229, 1.6146, 1.3432]],

        [[1.3783, 1.5949, 1.4072]]])

In [21]:

# squeeze(1) works: dimension 1 has size 1 and is removed
t = r.squeeze(1)
print(t)
tensor([[1.2229, 1.6146, 1.3432],
        [1.3783, 1.5949, 1.4072]])

2.2 Automatic Differentiation

In [23]:

import torch
x1 = torch.tensor(1.0,requires_grad=True)
x2 = torch.tensor(2.0,requires_grad=True)
y = x1 + 2*x2
print(y)
tensor(5., grad_fn=<AddBackward0>)

In [24]:

# check whether each variable requires gradients
print(x1.requires_grad)
print(x2.requires_grad)
print(y.requires_grad)
True
True
True

In [25]:

# inspect each variable's gradient
print(x1.grad.data)
print(x2.grad.data)
print(y.grad.data)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_17348\4001318552.py in <module>
      1 # inspect each variable's gradient
----> 2 print(x1.grad.data)
      3 print(x2.grad.data)
      4 print(y.grad.data)

AttributeError: 'NoneType' object has no attribute 'data'

The gradients are still None here because backward() has not been called yet.

In [26]:

x1

Out[26]:

tensor(1., requires_grad=True)

In [27]:

y = x1+2*x2
y.backward()
print(x1.grad.data)
print(x2.grad.data)
tensor(1.)
tensor(2.)
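One caveat: .grad accumulates across backward() calls rather than being overwritten, which is why training loops zero the gradients each iteration (e.g. via optimizer.zero_grad()). Continuing from the cell above:

y = x1 + 2*x2
y.backward()
print(x1.grad)    # tensor(2.): 1. from the previous backward plus 1. from this one
x1.grad.zero_()   # reset the accumulated gradient in place
print(x1.grad)    # tensor(0.)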

In [28]:

x1 = torch.tensor(1.0,requires_grad=False)
x2 = torch.tensor(2.0,requires_grad=False)
y = x1 + 2*x2
y.backward()
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_17348\1754898500.py in <module>
      2 x2 = torch.tensor(2.0,requires_grad=False)
      3 y = x1 + 2*x2
----> 4 y.backward()

d:\Anaconda3\envs\my_env\lib\site-packages\torch\_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    305                 create_graph=create_graph,
    306                 inputs=inputs)
--> 307         torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    308
    309     def register_hook(self, hook):

d:\Anaconda3\envs\my_env\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    154     Variable._execution_engine.run_backward(
    155         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 156         allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    157
    158

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

backward() fails here because no input requires gradients, so no computation graph was built.

2.3 Parallel Computing

Why: GPUs offer fast computation and make large batches feasible. How: CUDA (NVIDIA GPUs). Strategies: Network Partitioning, Layer-wise Partitioning, Data Parallelism.
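Of the three strategies, Data Parallelism is the most common. A minimal sketch, assuming a multi-GPU machine, using torch.nn.DataParallel (the simple single-machine API; DistributedDataParallel is the heavier-duty alternative):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)          # stand-in for any nn.Module
if torch.cuda.device_count() > 1:
    # replicates the model on each GPU and splits each batch across them
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")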

3 Main Modules and Practice

3.1 Workflow

Data preprocessing -> model design -> loss function and optimizer design -> forward pass -> backward pass -> parameter update
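A skeletal training loop showing how these steps map to code; a hedged sketch where the toy model, loss, optimizer, and random data are placeholders, not part of the course material:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# toy setup so the loop below actually runs
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)
train_loader = DataLoader(TensorDataset(torch.randn(32, 4),
                                        torch.randint(0, 2, (32,))),
                          batch_size=8)

for data, label in train_loader:        # data preprocessing lives in Dataset/DataLoader
    optimizer.zero_grad()               # clear accumulated gradients
    output = model(data)                # forward pass
    loss = criterion(output, label)     # loss function
    loss.backward()                     # backward pass
    optimizer.step()                    # parameter update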

3.2 What Makes Deep Learning Different

  • Large sample sizes, so data must be loaded in batches
  • Models are built layer by layer, module by module
  • Diverse loss function and optimizer designs
  • GPU acceleration

3.3 Practice

In [29]:

import os
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset,DataLoader
torch.cuda.is_available()  # check whether a GPU is available

Out[29]:

False

In [31]:

# GPU/CPU
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
# move tensors/models later with variable.to(device)
device

Out[31]:

device(type='cpu')

In [32]:

# hyperparameters
batch_size = 256
num_workers = 4   # number of DataLoader worker processes
lr = 1e-4
epochs = 20

In [37]:

from torchvision import transforms
image_size = 28
data_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(image_size),
    transforms.ToTensor()
])

In [38]:

# loading approach 1: the built-in torchvision dataset
from torchvision import datasets
train_data = datasets.FashionMNIST(root='./',train=True,download=True,transform=data_transform)
test_data = datasets.FashionMNIST(root='./',train=False,download=True,transform=data_transform)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to ./FashionMNIST\raw\train-images-idx3-ubyte.gz
Extracting ./FashionMNIST\raw\train-images-idx3-ubyte.gz to ./FashionMNIST\raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to ./FashionMNIST\raw\train-labels-idx1-ubyte.gz
Extracting ./FashionMNIST\raw\train-labels-idx1-ubyte.gz to ./FashionMNIST\raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to ./FashionMNIST\raw\t10k-images-idx3-ubyte.gz
Extracting ./FashionMNIST\raw\t10k-images-idx3-ubyte.gz to ./FashionMNIST\raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to ./FashionMNIST\raw\t10k-labels-idx1-ubyte.gz
Extracting ./FashionMNIST\raw\t10k-labels-idx1-ubyte.gz to ./FashionMNIST\raw

In [ ]:

# loading approach 2: custom Dataset built from a csv file
class FMDataset(Dataset):
    def __init__(self, df, transform=None):
        self.df = df
        self.transform = transform
        self.images = df.iloc[:, 1:].values.astype(np.uint8)
        self.labels = df.iloc[:, 0].values
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        image = self.images[idx].reshape(28, 28, 1)
        label = int(self.labels[idx])
        if self.transform is not None:
            image = self.transform(image)
        else:
            image = torch.tensor(image / 255., dtype=torch.float)
        return image, torch.tensor(label, dtype=torch.long)
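A hedged usage sketch for this Dataset; the csv path and its layout (label in column 0 followed by 784 pixel columns, matching the Kaggle Fashion-MNIST csv format) are assumptions:

train_df = pd.read_csv("./fashion-mnist_train.csv")  # hypothetical path
train_data = FMDataset(train_df, data_transform)
train_loader = DataLoader(train_data, batch_size=batch_size,
                          shuffle=True, num_workers=num_workers)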
