PyTorch Tensor: A Detailed Overview

In this PyTorch tutorial, we'll discuss PyTorch Tensors, which are the building blocks of this deep learning framework.

Let’s get started!

PyTorch Tensor

Have you worked with Python numpy before? If yes, then this section is going to be very simple for you! Even if you don’t have experience with numpy, you can seamlessly transition between PyTorch and NumPy!

A Tensor in PyTorch is similar to a numpy array, with the added flexibility of using a GPU for calculations.
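
For instance, creating a Tensor looks almost exactly like creating a numpy array. A minimal sketch (the values here are just for illustration):

import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])    # a numpy array
t = torch.tensor([1.0, 2.0, 3.0])  # the PyTorch equivalent
print(arr)
print(t)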

1. 2D PyTorch Tensor

Imagine a tensor as an array of numbers with a potentially arbitrary number of dimensions. The only difference between a Tensor and a multidimensional array in C/C++/Java is that in a Tensor, all slices along a given dimension must have the same length, so a Tensor cannot be jagged.

For example, the following is a valid representation of a 2-dimensional Tensor.


[[1 2 3 4],[5 6 7 8]]

Note, however, that the example below is NOT valid, since Tensors are not jagged arrays.


[[1 2 3 4],[5 6 7]]
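
If you try to build a Tensor from such a jagged list, PyTorch will refuse. A quick sketch (the exact error message may differ between PyTorch versions):

import torch

try:
    t = torch.tensor([[1, 2, 3, 4], [5, 6, 7]])
except ValueError as err:
    print('Not a valid Tensor:', err)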

PyTorch Tensors are really convenient for programmers, since they are almost the same as numpy arrays.

There are a couple of differences from numpy methods, though, so it is advised that you also refer to the official documentation for further information.
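
One well-known difference, as a quick illustration: reductions that take an axis argument in numpy take a dim argument in PyTorch. A minimal sketch:

import numpy as np
import torch

a = np.array([[1, 2], [3, 4]])
t = torch.tensor([[1, 2], [3, 4]])

print(a.sum(axis=0))  # numpy uses axis: [4 6]
print(t.sum(dim=0))   # PyTorch uses dim: tensor([4, 6])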

2. Initializing an Empty PyTorch Tensor

Let’s consider the below example, which initializes an empty Tensor.


import torch
# Creates a 3 x 2 matrix which is empty
a = torch.empty(3, 2)

An empty tensor does NOT mean that it contains nothing; it simply means that memory has been allocated for it but not initialized.


import torch
# Creates a 3 x 2 matrix which is empty
a = torch.empty(3, 2)
print(a)

# Create a zero-initialized float tensor
b = torch.zeros(3, 2, dtype=torch.float32)
print(b)

Output


tensor([[3.4655e-37, 0.0000e+00],
        [4.4842e-44, 0.0000e+00],
        [       nan, 6.1657e-44]])
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])

The first tensor is the result of PyTorch simply allocating memory for the tensor. Whatever was previously in that memory is not erased.

The second tensor is filled with zeros, since PyTorch allocates memory and zero-initializes the tensor elements.

Notice the similarity to numpy.empty() and numpy.zeros(). This is deliberate: PyTorch is designed to mirror numpy's API, while adding the ability to use a GPU.

3. Finding PyTorch Tensor Size

Let’s create a basic tensor and determine its size.


import torch
# Create a tensor from data
c = torch.tensor([[3.2 , 1.6, 2], [1.3, 2.5 , 6.9]])
print(c)

Output


tensor([[3.2000, 1.6000, 2.0000],
        [1.3000, 2.5000, 6.9000]])

To get the size of the tensor, we can use tensor.size().


print(c.size())

Output


torch.Size([2, 3])
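
Equivalently, we can read the tensor's shape attribute, which in PyTorch is an alias for size():

print(c.shape)
# torch.Size([2, 3])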


PyTorch Tensor Operations

PyTorch supports tensor operations similar to numpy's.

numpy一样,PyTorch支持类似的张量操作。

A summary is given in the code block below.

1. Basic Mathematical Operations on Tensors


import torch
# Tensor Operations
x = torch.tensor([[2, 3, 4], [5, 6, 7]])
y = torch.tensor([[2, 3, 4], [1.3, 2.6, 3.9]])

# Addition
print(x + y)
# We can also use torch.add()
print(x + y == torch.add(x, y))

# Subtraction
print(x - y)
# We can also use torch.sub()
print(x - y == torch.sub(x, y))

Output


tensor([[ 4.0000,  6.0000,  8.0000],
        [ 6.3000,  8.6000, 10.9000]])
tensor([[True, True, True],
        [True, True, True]])
tensor([[0.0000, 0.0000, 0.0000],
        [3.7000, 3.4000, 3.1000]])
tensor([[True, True, True],
        [True, True, True]])

We can also assign the result to a tensor. Add the following code snippet to the code above.


# We can assign the output to a tensor
z = torch.zeros(x.shape)
torch.add(x, y, out=z)
print(z)

Output


tensor([[ 4.0000,  6.0000,  8.0000],
        [ 6.3000,  8.6000, 10.9000]])

2. In-Place Addition and Subtraction with PyTorch Tensor

PyTorch also supports in-place operations like addition and subtraction, when suffixed with an underscore (_). Let’s continue on with the same variables from the operations summary code above.


# In-place addition
print('Before In-Place Addition:', y)
y.add_(x)
print('After addition:', y)

Output


Before In-Place Addition: tensor([[2.0000, 3.0000, 4.0000],
        [1.3000, 2.6000, 3.9000]])
After addition: tensor([[ 4.0000,  6.0000,  8.0000],
        [ 6.3000,  8.6000, 10.9000]])
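
In-place subtraction works the same way. As a quick sketch, we can undo the addition above with sub_():

# In-place subtraction
y.sub_(x)
print('After subtraction:', y)
# Expected to print the original y: tensor([[2.0000, 3.0000, 4.0000],
#         [1.3000, 2.6000, 3.9000]])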

3. Accessing Tensor Index

We can also use numpy-based indexing in PyTorch.


# Use numpy slices for indexing
print(y[:, 1])

Output


tensor([6.0000, 8.6000])
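
A few more numpy-style accesses work the same way; a short sketch, continuing with y from above:

print(y[1, :])         # second row
print(y[0, 2])         # a single element, returned as a 0-d tensor
print(y[0, 2].item())  # extract it as a plain Python number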


Reshape a PyTorch Tensor

Similar to numpy, we can use torch.reshape() to reshape a tensor. We can also use tensor.view() to achieve the same result; note that view() requires the tensor's memory to be contiguous, while reshape() will copy the data if needed.

numpy相似,我们可以使用torch.reshape()重塑张量。 我们还可以使用tensor.view()实现相同的功能。


import torch

x = torch.randn(5, 3)
# Return a view of x, but with only one dimension
y = x.view(5 * 3)

print('Size of x:', x.size())
print('Size of y:', y.size())

print(x)
print(y)

# Get back the original tensor with reshape()
z = y.reshape(5, 3)
print(z)

Output


Size of x: torch.Size([5, 3])
Size of y: torch.Size([15])
tensor([[ 0.3224,  0.1021, -1.4290],
        [-0.3559,  0.2912, -0.1044],
        [ 0.3652,  2.3112,  1.4784],
        [-0.9630, -0.2499, -1.3288],
        [-0.0667, -0.2910, -0.6420]])
tensor([ 0.3224,  0.1021, -1.4290, -0.3559,  0.2912, -0.1044,  0.3652,  2.3112,
         1.4784, -0.9630, -0.2499, -1.3288, -0.0667, -0.2910, -0.6420])
tensor([[ 0.3224,  0.1021, -1.4290],
        [-0.3559,  0.2912, -0.1044],
        [ 0.3652,  2.3112,  1.4784],
        [-0.9630, -0.2499, -1.3288],
        [-0.0667, -0.2910, -0.6420]])
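
When reshaping, one dimension can also be given as -1, and PyTorch will infer it from the number of elements; a minimal sketch:

y = x.view(-1)        # same as x.view(15) for a 5 x 3 tensor
z = y.reshape(-1, 3)  # -1 is inferred to be 5
print(y.size(), z.size())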

The list of all Tensor Operations is available in PyTorch’s Documentation.

PyTorch – NumPy Bridge

We can convert PyTorch tensors to numpy arrays and vice-versa pretty easily.

PyTorch is designed in such a way that a Torch Tensor on the CPU and the corresponding numpy array share the same memory location. So if you change one of them, the other changes automatically.

To prove this, let's test it using the Tensor.numpy() and torch.from_numpy() methods.

Tensor.numpy() is used to convert a Tensor to a numpy array, and torch.from_numpy() does the reverse.


import torch
# We also need to import numpy to declare numpy arrays
import numpy as np

a = torch.tensor([[1, 2, 3], [4, 5, 6]])
print('Original Tensor:', a)

b = a.numpy()
print('Tensor to a numpy array:', b)

# In-place addition (add 2 to every element)
a.add_(2)
print('Tensor after addition:', a)
print('Numpy Array after addition:', b)

Output


Original Tensor: tensor([[1, 2, 3],
        [4, 5, 6]])
Tensor to a numpy array: [[1 2 3]
 [4 5 6]]
Tensor after addition: tensor([[3, 4, 5],
        [6, 7, 8]])
Numpy Array after addition: [[3 4 5]
 [6 7 8]]

Indeed, the numpy array has also changed its value!

Let's do the reverse as well.


import torch
import numpy as np

c = np.array([[4, 5, 6], [7, 8, 9]])
print('Numpy array:', c)

# Convert to a tensor
d = torch.from_numpy(c)
print('Tensor from the array:', d)

# Add 3 to each element in the numpy array
np.add(c, 3, out=c)
print('Numpy array after addition:', c)
print('Tensor after addition:', d)

Output


Numpy array: [[4 5 6]
 [7 8 9]]
Tensor from the array: tensor([[4, 5, 6],
        [7, 8, 9]])
Numpy array after addition: [[ 7  8  9]
 [10 11 12]]
Tensor after addition: tensor([[ 7,  8,  9],
        [10, 11, 12]])

NOTE: If you do not use in-place numpy addition, such as c += 3 or np.add(c, 3, out=c), then the Tensor will not reflect the changes made to the numpy array.

For example, if you try this:


c = np.add(c, 3)

Since you're using =, Python creates a new array object and binds the name c to that new object, so the original memory location, which the tensor shares, remains unchanged.
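
A short sketch of the difference, continuing from the example above (the printed values assume c and d start as in that example):

# Rebinding: c now names a brand-new array; d still points at the old memory
c = np.add(c, 3)
print('Numpy array after rebinding:', c)  # elements increased by 3 again
print('Tensor is unchanged:', d)          # still the previous values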

Use the CUDA GPU with a PyTorch Tensor

We can have an NVIDIA CUDA GPU perform the computations, and gain a speedup, by moving the tensor to the GPU.

NOTE: This applies only if you have an NVIDIA GPU with CUDA enabled. If you’re not sure of what these terms are, I would advise you to search online.

We can check whether a GPU is available for PyTorch using torch.cuda.is_available().


import torch

if torch.cuda.is_available():
    print('Your device is supported. We can use the GPU for PyTorch!')
else:
    print("Your GPU is either not supported by PyTorch or you haven't installed the GPU version")

For me, it is available, so just make sure you install CUDA before proceeding further if your laptop supports it.

We can move a tensor from the CPU to the GPU using tensor.to(device), where device is a device object.

This can be torch.device("cuda"), or simply torch.device("cpu").


import torch

x = torch.tensor([1, 2, 3], dtype=torch.long)

if torch.cuda.is_available():
    print('CUDA is available')
    # Create a CUDA device object
    device = torch.device("cuda")
    # Create a tensor from x and store it on the GPU
    y = torch.ones_like(x, device=device)
    # Move the tensor from CPU to GPU
    x = x.to(device)
    # This addition is performed on the GPU
    z = x + y
    print(z)
    # Move back to the CPU and also change the dtype
    print(z.to("cpu", torch.double))
    print(z)
else:
    print('CUDA is not available')

Output


CUDA is available
tensor([2, 3, 4], device='cuda:0')
tensor([2., 3., 4.], dtype=torch.float64)
tensor([2, 3, 4], device='cuda:0')

As you can see, the output does show that our program is now being run on the GPU instead!


Conclusion

In this article, we learned about using Tensors in PyTorch. Feel free to post any doubts, suggestions, or corrections in the comment section below!

We’ll be covering more in our upcoming PyTorch tutorials. Stay tuned!


References

  • The PyTorch official Documentation
  • The PyTorch Official Tutorial (Really good resource. Recommended)


Translated from: https://www.journaldev.com/37948/pytorch-tensor
