Abstract: Tensor is a core concept in the Paddle deep learning framework. This post studies and tests the concepts from the "Introduction to Tensor" section of the PaddlePaddle documentation, and compares Paddle's Tensor with numpy's ndarray.

Keywords: Tensor, numpy, paddle


Tensor Basics
Contents
Basic characteristics
Tensor shape
Other Tensor attributes
Operating on Tensors
Indexing and slicing
Tensor operations
Some tests
Matrix operations
Summary

§01 Tensor Basics


1.1 Basic characteristics

1.1.1 The Tensor concept

  PaddlePaddle (hereafter Paddle), like other deep learning frameworks, uses the Tensor to represent data; all data passed through a neural network is a Tensor.

  A Tensor can be understood as a multidimensional array. It can have any number of dimensions, and different Tensors can have different data types (dtype) and shapes (shape).

  All elements of a given Tensor share the same dtype. If you are familiar with Numpy, a Tensor is analogous to a Numpy array.
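The analogy can be made concrete with numpy itself; this short sketch shows the same two defining properties (one shared dtype, one fixed shape) on an ndarray:

```python
import numpy as np

# A numpy ndarray, like a Tensor, is a homogeneous multidimensional array:
# every element shares one dtype, and the array has a fixed shape.
arr = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(arr.dtype)  # float64
print(arr.shape)  # (2, 3)
print(arr.ndim)   # 2
```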

(1)All Tensor attributes

  A Tensor carries many attributes; the listing below shows everything exposed by an ordinary Tensor.

T            __add__      __and__      __array__    __array_ufunc__ __bool__
__class__    __deepcopy__ __delattr__  __dir__      __div__      __doc__
__eq__       __float__    __floordiv__ __format__   __ge__       __getattribute__
__getitem__  __gt__       __hash__     __index__    __init__     __init_subclass__
__int__      __invert__   __le__       __len__      __long__     __lt__
__matmul__   __mod__      __module__   __mul__      __ne__       __neg__
__new__      __nonzero__  __or__       __pow__      __radd__     __rdiv__
__reduce__   __reduce_ex__ __repr__     __rmul__     __rpow__     __rsub__
__rtruediv__ __setattr__  __setitem__  __setitem_varbase__ __sizeof__   __str__
__sub__      __subclasshook__ __truediv__  __xor__      _alive_vars  _allreduce
_bump_inplace_version _copy_to     _getitem_from_offset _getitem_index_not_tensor _grad_ivar   _grad_name
_grad_value  _inplace_version _is_sparse   _place_str   _register_backward_hook _register_grad_hook
_remove_grad_hook _set_grad_ivar _set_grad_type _share_memory _to_static_var abs
acos         add          add_         add_n        addmm        all
allclose     any          argmax       argmin       argsort      asin
astype       atan         backward     bincount     bitwise_and  bitwise_not
bitwise_or   bitwise_xor  block        bmm          broadcast_shape broadcast_tensors
broadcast_to cast         ceil         ceil_        cholesky     chunk
clear_grad   clear_gradient clip         clip_        clone        concat
cond         conj         copy_        cos          cosh         cpu
cross        cuda         cumprod      cumsum       detach       diagonal
digamma      dim          dist         divide       dot          dtype
eig          eigvals      eigvalsh     equal        equal_all    erf
exp          exp_         expand       expand_as    fill_        fill_diagonal_
fill_diagonal_tensor fill_diagonal_tensor_ flatten      flatten_     flip         floor
floor_       floor_divide floor_mod    gather       gather_nd    grad
gradient     greater_equal greater_than histogram    imag         increment
index_sample index_select inplace_version inverse      is_empty     is_leaf
is_tensor    isfinite     isinf        isnan        item         kron
less_equal   less_than    lgamma       log          log10        log1p
log2         logical_and  logical_not  logical_or   logical_xor  logsumexp
masked_select matmul       matrix_power max          maximum      mean
median       min          minimum      mm           mod          multi_dot
multiplex    multiply     mv           name         ndim         ndimension
neg          nonzero      norm         not_equal    numel        numpy
persistable  pin_memory   place        pow          prod         qr
rank         real         reciprocal   reciprocal_  register_hook remainder
reshape      reshape_     reverse      roll         round        round_
rsqrt        rsqrt_       scale        scale_       scatter      scatter_
scatter_nd   scatter_nd_add set_value    shape        shard_index  sign
sin          sinh         size         slice        solve        sort
split        sqrt         sqrt_        square       squeeze      squeeze_
stack        stanh        std          stop_gradient strided_slice subtract
subtract_    sum          t            tanh         tanh_        tensordot
tile         tolist       topk         trace        transpose    trunc
type         unbind       uniform_     unique       unique_consecutive unsqueeze
unsqueeze_   unstack      value        var          where        zero_

(2)Printing the attribute list

import matplotlib.pyplot as plt
from numpy import *
import math, time

# NOTE: tspgetdopstring and clipboard are helpers from the author's local
# environment; they grab the dir() listing from the clipboard so it can be
# pretty-printed six entries per row.
strid = 5
tspgetdopstring(-strid)

itemall = clipboard.paste().replace('[','').replace(']','').replace("'","").replace(' ', '').split(',\r\n')
maxlen = max([len(s) for s in itemall])//2
forms = '{:%d}'%maxlen
itemall = [forms.format(l) for l in itemall]
itemall = list(zip(*([iter(itemall)]*6)))
for i in itemall:
    print(" ".join(i))

1.1.2 Creating Tensors

(1)Creating a Tensor from a list

import paddle

ndim_1_tensor = paddle.to_tensor([2.0, 3.0, 4.0], dtype='float64')
print(ndim_1_tensor)
Tensor(shape=[3], dtype=float64, place=CPUPlace, stop_gradient=True,
       [2., 3., 4.])

  As the listing above shows, a Tensor has many attributes, but print reports only the main ones:

  • shape
  • dtype
  • place
  • stop_gradient

(2)Creating a Tensor from a single value

print(paddle.to_tensor(100, dtype='float64'))
print(paddle.to_tensor([100]))
Tensor(shape=[1], dtype=float64, place=CPUPlace, stop_gradient=True,
       [100.])
Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
       [100])

(3)Creating a Tensor from a 2-D array

  A 2-D Tensor can be built from a 2-D ndarray.

 Ⅰ.First example
a = random.randn(5,5)
print(a)
print(paddle.to_tensor(a,dtype='float64'))
[[-0.38842275  0.60203552  0.75242934 -0.63579722 -0.98777417]
 [ 0.13979701 -0.54047714 -0.05141568  0.40233249  0.91852057]
 [ 1.42899054 -0.94539537  0.37745157  0.94664502  2.17836026]
 [ 0.07094955  0.60793962 -0.080198   -0.71505243 -0.23229649]
 [-2.60935282  0.66411124 -1.90732575 -0.22735439  1.40916696]]
Tensor(shape=[5, 5], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[-0.38842275,  0.60203552,  0.75242934, -0.63579722, -0.98777417],
        [ 0.13979701, -0.54047714, -0.05141568,  0.40233249,  0.91852057],
        [ 1.42899054, -0.94539537,  0.37745157,  0.94664502,  2.17836026],
        [ 0.07094955,  0.60793962, -0.08019800, -0.71505243, -0.23229649],
        [-2.60935282,  0.66411124, -1.90732575, -0.22735439,  1.40916696]])
 Ⅱ.Second example
ndim_3_tensor = paddle.to_tensor([[[1, 2, 3, 4, 5],
                                   [6, 7, 8, 9, 10]],
                                  [[11, 12, 13, 14, 15],
                                   [16, 17, 18, 19, 20]]])
print(ndim_3_tensor)
Tensor(shape=[2, 2, 5], dtype=int64, place=CPUPlace, stop_gradient=True,
       [[[1 , 2 , 3 , 4 , 5 ],
         [6 , 7 , 8 , 9 , 10]],
        [[11, 12, 13, 14, 15],
         [16, 17, 18, 19, 20]]])

▲ Figure 1.1.1 Tensors with different ndim

1.1.3 Converting back to an ndarray

print(ndim_3_tensor.numpy())
[[[ 1  2  3  4  5]
  [ 6  7  8  9 10]]

 [[11 12 13 14 15]
  [16 17 18 19 20]]]

1.1.4 Conversion pitfalls

  A Tensor must be shape-regular, in the sense of a "rectangle": along any given axis (also called a dimension), every entry must hold the same number of elements. The following input violates this:

ndim_3_tensor = paddle.to_tensor([[[1, 2, 3, 4],[6, 7, 8, 9, 10]],[[11, 12, 13, 14, 15],[16, 17, 18, 19, 20]]])
ValueError: Faild to convert input data to a regular ndarray :
 - Usually this means the input data contains nested lists with different lengths.

1.1.5 Creation APIs

paddle.zeros([m, n])             # Tensor of shape [m, n] filled with 0
paddle.ones([m, n])              # Tensor of shape [m, n] filled with 1
paddle.full([m, n], 10)          # Tensor of shape [m, n] filled with 10
paddle.arange(start, end, step)  # 1-D Tensor from start to end with step `step`
paddle.linspace(start, end, num) # 1-D Tensor of `num` evenly spaced points from start to end
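For comparison with numpy, each of these creation APIs has a direct ndarray counterpart; a quick sketch (shapes chosen for illustration):

```python
import numpy as np

# numpy counterparts of the Paddle creation APIs above
m, n = 3, 4
print(np.zeros((m, n)).shape)         # (3, 4), all zeros
print(np.ones((m, n)).shape)          # (3, 4), all ones
print(np.full((m, n), 10)[0, 0])      # 10
print(np.arange(0, 10, 2).tolist())   # [0, 2, 4, 6, 8]
print(np.linspace(0, 10, 5).tolist()) # [0.0, 2.5, 5.0, 7.5, 10.0]
```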
 Ⅰ.zeros
a = paddle.zeros([5, 6])
print(a)
Tensor(shape=[5, 6], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0.]])
 Ⅱ.ones
a = paddle.ones([5, 6])
print(a)
Tensor(shape=[5, 6], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1.]])
 Ⅲ.full
a = paddle.full([3,4], 10)
print(a)
Tensor(shape=[3, 4], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[10., 10., 10., 10.],
        [10., 10., 10., 10.],
        [10., 10., 10., 10.]])
 Ⅳ.arange
a = paddle.arange(0, 10, 2)
print(a)
Tensor(shape=[5], dtype=int64, place=CPUPlace, stop_gradient=True,[0, 2, 4, 6, 8])

  Note: the arguments to arange here are all integers.

  If you want a sequence with non-integer spacing, linspace is the natural choice.

 Ⅴ.linspace
a = paddle.linspace(0, 10, 20)
print(a)
Tensor(shape=[20], dtype=float32, place=CPUPlace, stop_gradient=True,
       [0.        , 0.52631581, 1.05263162, 1.57894742, 2.10526323, 2.63157892,
        3.15789485, 3.68421054, 4.21052647, 4.73684216, 5.26315784, 5.78947353,
        6.31578970, 6.84210539, 7.36842108, 7.89473677, 8.42105293, 8.94736862,
        9.47368431, 10.       ])

  Note: unlike numpy's linspace, the endpoint parameter is not available.

  In numpy:

a = linspace(0, 10, 20, endpoint=False)
print(a)

  this produces:

[0.  0.5 1.  1.5 2.  2.5 3.  3.5 4.  4.5 5.  5.5 6.  6.5 7.  7.5 8.  8.5
 9.  9.5]

  But paddle.linspace rejects the endpoint keyword:

a = paddle.linspace(0, 10, 20, endpoint=False)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_119/2239065584.py in <module>
----> 1 a = paddle.linspace(0, 10, 20, endpoint=False)
      2 print(a)

TypeError: linspace() got an unexpected keyword argument 'endpoint'
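A simple workaround: endpoint=False is equivalent to asking for one extra sample and dropping the last one. The identity is easy to verify in numpy, and the same slicing trick should carry over to paddle.linspace (not tested here):

```python
import numpy as np

# linspace with endpoint=False equals linspace with one extra point, minus the last
a = np.linspace(0, 10, 20, endpoint=False)
b = np.linspace(0, 10, 21)[:-1]
print(np.allclose(a, b))  # True
print(a[0], a[-1])        # 0.0 9.5
```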

1.2 Tensor shape

1.2.1 Basic concepts

  The shape of a Tensor can be inspected through Tensor.shape. shape is one of a Tensor's key attributes; the related concepts are:

  • shape: the number of elements along each dimension of the tensor
  • ndim: the number of dimensions of the tensor; a vector has ndim 1, a matrix has ndim 2.
  • axis (or dimension): one particular dimension of the tensor
  • size: the total number of elements in the tensor

(1)Example

  Create a 4-D Tensor and use a figure to illustrate the relationship between these concepts:

ndim_4_tensor = paddle.ones([2, 3, 4, 5])
Tensor(shape=[2, 3, 4, 5], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[[[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],
         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],
         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]]],
        [[[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],
         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]],
         [[1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.],
          [1., 1., 1., 1., 1.]]]])

▲ Figure 1.2.1 Relationship between a Tensor's shape, axis, dimension, and ndim

ndim_4_tensor = paddle.ones([2, 3, 4, 5])
print("Data Type of every element:", ndim_4_tensor.dtype)
print("Number of dimensions:", ndim_4_tensor.ndim)
print("Shape of tensor:", ndim_4_tensor.shape)
print("Elements number along axis 0 of tensor:", ndim_4_tensor.shape[0])
print("Elements number along the last axis of tensor:", ndim_4_tensor.shape[-1])
Data Type of every element: paddle.float32
Number of dimensions: 4
Shape of tensor: [2, 3, 4, 5]
Elements number along axis 0 of tensor: 2
Elements number along the last axis of tensor: 5
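The same attributes exist on an ndarray, with one default-dtype difference worth noting; a minimal numpy comparison:

```python
import numpy as np

arr = np.ones((2, 3, 4, 5))
print(arr.dtype)      # float64 -- numpy defaults to 64-bit floats, Paddle to float32
print(arr.ndim)       # 4
print(arr.shape)      # (2, 3, 4, 5)
print(arr.shape[0])   # 2
print(arr.shape[-1])  # 5
print(arr.size)       # 120
```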

1.2.2 Manipulating the shape

  Redefining a Tensor's shape is an important operation in practice.

ndim_3_tensor = paddle.to_tensor([[[1, 2, 3, 4, 5],
                                   [6, 7, 8, 9, 10]],
                                  [[11, 12, 13, 14, 15],
                                   [16, 17, 18, 19, 20]],
                                  [[21, 22, 23, 24, 25],
                                   [26, 27, 28, 29, 30]]])
print("the shape of ndim_3_tensor:", ndim_3_tensor.shape)
the shape of ndim_3_tensor: [3, 2, 5]

  Use Paddle's reshape API to change the Tensor's shape:

ndim_3_tensor = paddle.reshape(ndim_3_tensor, [2, 5, 3])
print("After reshape:", ndim_3_tensor.shape)
print(ndim_3_tensor)
After reshape: [2, 5, 3]
Tensor(shape=[2, 5, 3], dtype=int64, place=CPUPlace, stop_gradient=True,
       [[[1 , 2 , 3 ],
         [4 , 5 , 6 ],
         [7 , 8 , 9 ],
         [10, 11, 12],
         [13, 14, 15]],
        [[16, 17, 18],
         [19, 20, 21],
         [22, 23, 24],
         [25, 26, 27],
         [28, 29, 30]]])

  As this shows, reshape flattens the original data into one long 1-D sequence and then refills it according to the new shape.

  Note that -1 can be used to let Paddle infer one dimension automatically; the three statements below all produce the same result.

ndim_3_tensor = paddle.reshape(ndim_3_tensor, [-1,5,3])
ndim_3_tensor = paddle.reshape(ndim_3_tensor, [2,5,-1])
ndim_3_tensor = paddle.reshape(ndim_3_tensor, [2,-1,3])

  There are some conventions when specifying the new shape:

  1. -1 means the size of this dimension is inferred from the total number of elements and the remaining dimensions; at most one dimension may be set to -1.
  2. 0 means the actual size is copied from the corresponding dimension of the input, so an index holding 0 in shape must not exceed the number of dimensions of x.

  A few examples illustrate these conventions:

origin:[3, 2, 5] reshape:[3, 10]      actual: [3, 10]
origin:[3, 2, 5] reshape:[-1]         actual: [30]
origin:[3, 2, 5] reshape:[0, 5, -1]   actual: [3, 5, 2]

  Note that reshaping to [-1] flattens the tensor into a 1-D Tensor, following its layout in memory.

data = paddle.reshape(ndim_3_tensor, [-1]).numpy()
print(data)
[ 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
 25 26 27 28 29 30]
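numpy's reshape follows the same -1 inference rule; the one difference is that numpy has no 0 placeholder, so the copied dimension must be spelled out. A sketch on the same 3x2x5 data:

```python
import numpy as np

t = np.arange(1, 31).reshape(3, 2, 5)      # same data as ndim_3_tensor above
print(t.reshape(3, 10).shape)              # (3, 10)
print(t.reshape(-1).shape)                 # (30,)
# numpy has no '0' placeholder; copy the dimension explicitly instead:
print(t.reshape(t.shape[0], 5, -1).shape)  # (3, 5, 2)
print(t.reshape(-1)[:5].tolist())          # [1, 2, 3, 4, 5] -- memory order
```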

1.3 Other Tensor attributes

1.3.1 dtype

  A Tensor's data type can be inspected through Tensor.dtype. Supported dtypes are: 'bool', 'float16', 'float32', 'float64', 'uint8', 'int8', 'int16', 'int32', 'int64'.

  • For a Tensor created from Python scalars, the dtype can be specified explicitly; if it is not:

    • a Python int produces an int64 Tensor
    • a Python float produces a float32 Tensor by default; the default floating-point type can be changed with set_default_dtype.
  • A Tensor created from a Numpy array keeps the same dtype as the array.
print("Tensor dtype from Python integers:", paddle.to_tensor(1).dtype)
print("Tensor dtype from Python floating point:", paddle.to_tensor(1.0).dtype)
Tensor dtype from Python integers: VarType.INT64
Tensor dtype from Python floating point: VarType.FP32

(1)Changing the dtype

  Paddle provides the cast API to change a Tensor's dtype:

float32_tensor = paddle.to_tensor(1.0)
float64_tensor = paddle.cast(float32_tensor, dtype='float64')
print("Tensor after cast to float64:", float64_tensor.dtype)
int64_tensor = paddle.cast(float32_tensor, dtype='int64')
print("Tensor after cast to int64:", int64_tensor.dtype)
Tensor after cast to float64: VarType.FP64
Tensor after cast to int64: VarType.INT64
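The numpy equivalent of paddle.cast is the ndarray's astype method; a minimal comparison:

```python
import numpy as np

# numpy's astype plays the role of paddle.cast
float32_value = np.array(1.0, dtype=np.float32)
print(float32_value.astype(np.float64).dtype)  # float64
print(float32_value.astype(np.int64).dtype)    # int64
```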

1.3.2 place

  When a Tensor is initialized, place selects the device on which it is allocated. Three locations are supported: CPU, GPU, and pinned memory. Pinned memory (also called page-locked or non-pageable memory) offers faster transfers to and from the GPU and supports asynchronous copies, which can further improve overall network performance. Its drawback is that allocating too much of it can degrade host performance, since it reduces the pageable memory available for virtual-memory paging.

(1)Creating a Tensor on the CPU

cpu_tensor = paddle.to_tensor(1, place=paddle.CPUPlace())
print(cpu_tensor)
Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,[1])

(2)Creating a Tensor on the GPU

gpu_tensor = paddle.to_tensor(1, place=paddle.CUDAPlace(0))
print(gpu_tensor)
 Ⅰ.Error on AI Studio

  Running this in the basic (CPU-only) AI Studio environment fails:

▲ Figure 1.3.1 AI Studio notebook error

  Switching to the advanced (GPU) environment, it runs:

Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,[1])

(3)Creating a Tensor in pinned memory

pin_memory_tensor = paddle.to_tensor(1, place=paddle.CUDAPinnedPlace())
print(pin_memory_tensor)
Tensor(shape=[1], dtype=int64, place=CUDAPinnedPlace, stop_gradient=True,[1])
 Ⅰ.Error on AI Studio

  Neither the GPU Tensor above nor a pinned-memory Tensor can be created in a CPU-only installation:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_120/1555899038.py in <module>
      6
      7 #------------------------------------------------------------
----> 8 pin_memory_tensor = paddle.to_tensor(1, place=paddle.CUDAPinnedPlace())
      9 print(pin_memory_tensor)

RuntimeError: (PermissionDenied) Cannot use CUDAPinnedPlace in CPU only version, Please recompile or reinstall Paddle with CUDA support. (at /paddle/paddle/fluid/pybind/pybind.cc:1759)

  Only the advanced (GPU-equipped) AI Studio environments can perform this initialization.

1.3.3 name

  A Tensor's name is its unique identifier, a Python string; it can be read through the Tensor.name attribute. By default, Paddle assigns each Tensor a unique name when it is created.

print("Tensor name:", paddle.to_tensor(1).name)
Tensor name: generated_tensor_0

  In fact, Paddle names generated tensors in sequence.

for _ in range(10):
    print("Tensor name:", paddle.to_tensor(1).name)
Tensor name: generated_tensor_13
Tensor name: generated_tensor_14
Tensor name: generated_tensor_15
Tensor name: generated_tensor_16
Tensor name: generated_tensor_17
Tensor name: generated_tensor_18
Tensor name: generated_tensor_19
Tensor name: generated_tensor_20
Tensor name: generated_tensor_21
Tensor name: generated_tensor_22
 Ⅰ.Assigning a name
a = paddle.to_tensor(1)
print(a.name)
a.name = 'window'
print(a.name)

b = paddle.to_tensor(1)
print(b.name)
b.name = 'window'
print(b.name)
generated_tensor_25
window
generated_tensor_26
window

§02 Operating on Tensors


2.1 Indexing and slicing

  You can conveniently access or modify a Tensor through indexing and slicing. Paddle follows the standard Python and Numpy indexing rules, similar to indexing a list or a string in Python:

  • indices run from 0 upward; a negative index counts from the end
  • slices use colon-separated start:stop:step, where each of start, stop, and step may be omitted

2.1.1 Accessing a Tensor

  • For a 1-D Tensor, there is only a single axis to index or slice:
ndim_1_tensor = paddle.to_tensor([0, 1, 2, 3, 4, 5, 6, 7, 8])
print("Origin Tensor:", ndim_1_tensor.numpy())
print("First element:", ndim_1_tensor[0].numpy())
print("Last element:", ndim_1_tensor[-1].numpy())
print("All element:", ndim_1_tensor[:].numpy())
print("Before 3:", ndim_1_tensor[:3].numpy())
print("From 6 to the end:", ndim_1_tensor[6:].numpy())
print("From 3 to 6:", ndim_1_tensor[3:6].numpy())
print("Interval of 3:", ndim_1_tensor[::3].numpy())
print("Reverse:", ndim_1_tensor[::-1].numpy())
Origin Tensor: [0 1 2 3 4 5 6 7 8]
First element: [0]
Last element: [8]
All element: [0 1 2 3 4 5 6 7 8]
Before 3: [0 1 2]
From 6 to the end: [6 7 8]
From 3 to 6: [3 4 5]
Interval of 3: [0 3 6]
Reverse: [8 7 6 5 4 3 2 1 0]
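numpy follows the same 1-D indexing rules; one visible difference is that indexing a single element in numpy yields a scalar, while the Paddle output above shows a shape-[1] Tensor. A sketch on the same data:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])
print(a[0])             # 0 -- numpy gives a scalar here, Paddle a shape-[1] Tensor
print(a[-1])            # 8
print(a[3:6].tolist())  # [3, 4, 5]
print(a[::3].tolist())  # [0, 3, 6]
print(a[::-1].tolist()) # [8, 7, 6, 5, 4, 3, 2, 1, 0]
```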
  • For 2-D and higher-dimensional Tensors, there are multiple axes to index or slice:
ndim_2_tensor = paddle.to_tensor([[0, 1, 2, 3],[4, 5, 6, 7],[8, 9, 10, 11]])
print("Origin Tensor:", ndim_2_tensor.numpy())
print("First row:", ndim_2_tensor[0].numpy())
print("First row:", ndim_2_tensor[0, :].numpy())
print("First column:", ndim_2_tensor[:, 0].numpy())
print("Last column:", ndim_2_tensor[:, -1].numpy())
print("All element:", ndim_2_tensor[:].numpy())
print("First row and second column:", ndim_2_tensor[0, 1].numpy())
Origin Tensor: [[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
First row: [0 1 2 3]
First row: [0 1 2 3]
First column: [0 4 8]
Last column: [ 3  7 11]
All element: [[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
First row and second column: [1]
  • The first index or slice applies to axis 0, the second to axis 1, and so on; any axis without an explicit index defaults to :. For example, the following are equivalent:
ndim_2_tensor[1]
ndim_2_tensor[1, :]
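The multi-axis rules match numpy exactly, including the omitted-axis default; again the single-element case differs in that numpy returns a scalar where Paddle prints a shape-[1] Tensor:

```python
import numpy as np

m = np.arange(12).reshape(3, 4)       # [[0..3], [4..7], [8..11]]
print(m[0, 1])                        # 1 -- a scalar in numpy; Paddle prints [1]
print(m[:, 0].tolist())               # [0, 4, 8]
print(np.array_equal(m[1], m[1, :]))  # True: an omitted axis defaults to ':'
```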

2.1.2 Modifying a Tensor

Note:
Be careful when modifying a Tensor through indexing or slicing: the operation changes the Tensor's values in place, and the original values are not preserved. If the modified Tensor participates in gradient computation, only the modified values are used, which can make gradients unreliable; Paddle will eventually detect and report such risky operations.

  As with access, a Tensor can be modified on one or more axes through indexing or slicing. Several data types can be assigned to it; currently supported are: int, float, numpy.ndarray, and Tensor.

import paddle
import numpy as np

x = paddle.to_tensor(np.ones((2, 3)).astype(np.float32)) # [[1., 1., 1.], [1., 1., 1.]]
print(x)

x[0] = 0                      # x : [[0., 0., 0.], [1., 1., 1.]]        id(x) = 4433705584
x[0:1] = 2.1                  # x : [[2.1, 2.1, 2.1], [1., 1., 1.]]     id(x) = 4433705584
x[...] = 3                    # x : [[3., 3., 3.], [3., 3., 3.]]        id(x) = 4433705584
x[0:1] = np.array([1,2,3])    # x : [[1., 2., 3.], [3., 3., 3.]]        id(x) = 4433705584
x[1] = paddle.ones([3])       # x : [[1., 2., 3.], [1., 1., 1.]]        id(x) = 4433705584
Tensor(shape=[2, 3], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[1., 1., 1.],
        [1., 1., 1.]])
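The same in-place semantics can be verified with numpy alone (the Paddle-specific step of assigning a paddle.ones([3]) row is omitted here); the object's identity never changes:

```python
import numpy as np

x = np.ones((2, 3), dtype=np.float32)
before = id(x)
x[0] = 0                      # x: [[0., 0., 0.], [1., 1., 1.]]
x[0:1] = 2.1                  # x: [[2.1, 2.1, 2.1], [1., 1., 1.]] (approx. in float32)
x[...] = 3                    # x: [[3., 3., 3.], [3., 3., 3.]]
x[0:1] = np.array([1, 2, 3])  # x: [[1., 2., 3.], [3., 3., 3.]]
print(x)
print(id(x) == before)        # True: every assignment modified x in place
```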

  Paddle also provides a rich set of Tensor APIs, more than 100 in all, covering mathematical operators, logical operators, linear algebra, and more. Each API can be invoked in two ways:

x = paddle.to_tensor([[1.1, 2.2], [3.3, 4.4]], dtype="float64")
y = paddle.to_tensor([[5.5, 6.6], [7.7, 8.8]], dtype="float64")

print(paddle.add(x, y), "\n")
print(x.add(y), "\n")
Tensor(shape=[2, 2], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[6.60000000 , 8.80000000 ],
        [11.        , 13.20000000]])

Tensor(shape=[2, 2], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[6.60000000 , 8.80000000 ],
        [11.        , 13.20000000]])
x = paddle.to_tensor([[1.1, 2.2], [3.3, 4.4]], dtype="float64")
y = paddle.to_tensor([[5.5, 6.6], [7.7, 8.8]], dtype="float64")

z = paddle.add(x, y)
print(z, '\n\n', x.add(z))
Tensor(shape=[2, 2], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[6.60000000 , 8.80000000 ],
        [11.        , 13.20000000]])

Tensor(shape=[2, 2], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[7.70000000 , 11.        ],
        [14.30000000, 17.60000000]])

  As can be seen, the Tensor member functions and the Paddle APIs produce identical results. Since the member functions are more convenient, the common Tensor operations below are introduced from the member-function perspective.

2.2 Tensor Operations

2.2.1 Mathematical Operations

x.abs()                       # elementwise absolute value
x.ceil()                      # elementwise ceiling
x.floor()                     # elementwise floor
x.round()                     # elementwise rounding
x.exp()                       # elementwise exponential, base e
x.log()                       # elementwise natural logarithm of x
x.reciprocal()                # elementwise reciprocal
x.square()                    # elementwise square
x.sqrt()                      # elementwise square root
x.sin()                       # elementwise sine
x.cos()                       # elementwise cosine
x.add(y)                      # elementwise addition
x.subtract(y)                 # elementwise subtraction
x.multiply(y)                 # elementwise multiplication
x.divide(y)                   # elementwise division
x.mod(y)                      # elementwise remainder of division
x.pow(y)                      # elementwise power
x.max()                       # maximum over the given dims; defaults to all dims
x.min()                       # minimum over the given dims; defaults to all dims
x.prod()                      # product over the given dims; defaults to all dims
x.sum()                       # sum over the given dims; defaults to all dims
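The reductions above accept an optional dimension argument; numpy's ndarray reductions follow the same convention via axis, which makes for a quick self-contained check of the semantics (the values here are arbitrary):

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])

# with no axis argument, the reduction covers all dimensions
print(x.sum())           # 10.0
print(x.max())           # 4.0

# an axis argument restricts the reduction to that dimension
print(x.sum(axis=0))     # [4. 6.]   -> column sums
print(x.prod(axis=1))    # [ 2. 12.] -> row products
```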

  Paddle overrides the Python magic methods for mathematical operators; the following expressions are equivalent to the calls above.

x + y  -> x.add(y)            # elementwise addition
x - y  -> x.subtract(y)       # elementwise subtraction
x * y  -> x.multiply(y)       # elementwise multiplication
x / y  -> x.divide(y)         # elementwise division
x % y  -> x.mod(y)            # elementwise remainder of division
x ** y -> x.pow(y)            # elementwise power
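The same operator/function correspondence holds for numpy ndarrays, which can serve as a self-contained sanity check of the elementwise semantics (the values here are arbitrary):

```python
import numpy as np

x = np.array([7.0, 8.0])
y = np.array([2.0, 3.0])

# each operator form matches its functional counterpart
assert (x + y == np.add(x, y)).all()
assert (x - y == np.subtract(x, y)).all()
assert (x * y == np.multiply(x, y)).all()
assert (x / y == np.divide(x, y)).all()
assert (x % y == np.mod(x, y)).all()
assert (x ** y == np.power(x, y)).all()
print("all operator forms match")
```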

2.2.2 Logical Operations

x.isfinite()                  # whether each element is a finite number, i.e. neither inf nor nan
x.equal_all(y)                # whether all elements of the two tensors are equal; returns a bool Tensor of shape [1]
x.equal(y)                    # elementwise equality; returns a bool Tensor of the same shape
x.not_equal(y)                # elementwise inequality
x.less_than(y)                # whether each element of x is less than the corresponding element of y
x.less_equal(y)               # whether each element of x is less than or equal to the corresponding element of y
x.greater_than(y)             # whether each element of x is greater than the corresponding element of y
x.greater_equal(y)            # whether each element of x is greater than or equal to the corresponding element of y
x.allclose(y)                 # whether all elements of x are close to those of y; returns a bool Tensor of shape [1]

  Likewise, Paddle overrides the Python magic methods for comparison operators; the following expressions are equivalent to the calls above.

x == y  -> x.equal(y)         # elementwise equality
x != y  -> x.not_equal(y)     # elementwise inequality
x < y   -> x.less_than(y)     # whether each element of x is less than that of y
x <= y  -> x.less_equal(y)    # whether each element of x is less than or equal to that of y
x > y   -> x.greater_than(y)  # whether each element of x is greater than that of y
x >= y  -> x.greater_equal(y) # whether each element of x is greater than or equal to that of y
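As a numpy-side illustration of the same elementwise comparison semantics, the operators return a bool array with the shape of the operands:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([3, 2, 1])

eq = (x == y)        # elementwise equality, bool array
lt = (x < y)         # elementwise less-than
print(eq)            # [False  True False]
print(lt)            # [ True False False]
```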

  The following operations apply only to bool Tensors:

x.logical_and(y)              # elementwise logical AND of two bool tensors
x.logical_or(y)               # elementwise logical OR of two bool tensors
x.logical_xor(y)              # elementwise logical XOR of two bool tensors
x.logical_not()               # elementwise logical NOT (unary; takes no second operand)
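numpy provides the same four logical functions with matching elementwise semantics; a minimal numpy-only sketch:

```python
import numpy as np

x = np.array([True, True, False, False])
y = np.array([True, False, True, False])

print(np.logical_and(x, y))   # [ True False False False]
print(np.logical_or(x, y))    # [ True  True  True False]
print(np.logical_xor(x, y))   # [False  True  True False]
print(np.logical_not(x))      # [False False  True  True]  <- unary, like Tensor.logical_not()
```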

  Linear algebra:

x.t()                         # matrix transpose
x.transpose([1, 0])           # swap axis 0 and axis 1
x.norm('fro')                 # Frobenius norm of the matrix
x.dist(y, p=2)                # 2-norm of the matrix (x - y)
x.matmul(y)                   # matrix multiplication
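A numpy-only sketch of the analogous linear-algebra calls, where np.linalg.norm with 'fro' plays the role of x.norm('fro') and the @ operator the role of matmul:

```python
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.eye(2)                         # identity matrix

p = x @ y                             # matrix product with identity -> x itself
f = np.linalg.norm(x, 'fro')          # Frobenius norm = sqrt(1 + 4 + 9 + 16)
print(p)
print(f)                              # ~5.4772
```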

  Note that Paddle's Tensor operators are all non-inplace: x.add(y) does not operate on tensor x directly, but returns a new Tensor holding the result.
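This non-inplace behavior mirrors numpy's functional API: np.add likewise returns a fresh array and leaves its operands untouched. A minimal check:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([10.0, 20.0])

z = np.add(x, y)                  # result lands in a new array
assert z is not x                 # x itself was not reused
assert (x == [1.0, 2.0]).all()    # operands are left untouched
print(z)                          # [11. 22.]
```

(The numpy in-place counterpart would be `np.add(x, y, out=x)`; Paddle's indexing assignment, shown earlier, is the in-place path there.)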

§03 Some Tests


  These tests mainly exercise the Tensor operations introduced above, with particular attention to how they differ from the corresponding ndarray operations in numpy.

3.1 Matrix Operations

3.1.1 Conditional Operations

(1) Comparison

import paddle
from numpy import random

a = random.randn(4, 4)
ta = paddle.to_tensor(a)
print(a > 1)
print(ta > 1)
[[False  True False False]
 [ True False False False]
 [False False  True  True]
 [False  True False False]]
Tensor(shape=[4, 4], dtype=bool, place=CPUPlace, stop_gradient=True,
       [[False, True , False, False],
        [True , False, False, False],
        [False, False, True , True ],
        [False, True , False, False]])

(2) Assignment

a[a < 0] = 0
ta[ta < 0] = 0
print(a, ta)
[[0.         0.         0.         0.70413158]
 [0.         0.         0.70739233 0.68697794]
 [0.         0.         0.         1.67288839]
 [0.         0.         0.         0.        ]]
Tensor(shape=[4, 4], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[0.        , 0.        , 0.        , 0.70413158],
        [0.        , 0.        , 0.70739233, 0.68697794],
        [0.        , 0.        , 0.        , 1.67288839],
        [0.        , 0.        , 0.        , 0.        ]])

3.1.2 Transpose

a = random.randn(4, 4)
ta = paddle.to_tensor(a)
tta = ta.transpose([1, 0])
print(ta, tta)
Tensor(shape=[4, 4], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[-1.06489243, -1.29275098, -0.17876530, -0.24057844],
        [ 2.01985970, -1.59158322, -0.87389787, -0.56134716],
        [ 0.27620963,  0.76918046, -0.59972490,  1.06446822],
        [ 0.84431932, -0.05805651,  0.39430261,  0.40576678]])
Tensor(shape=[4, 4], dtype=float64, place=CPUPlace, stop_gradient=True,
       [[-1.06489243,  2.01985970,  0.27620963,  0.84431932],
        [-1.29275098, -1.59158322,  0.76918046, -0.05805651],
        [-0.17876530, -0.87389787, -0.59972490,  0.39430261],
        [-0.24057844, -0.56134716,  1.06446822,  0.40576678]])
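The axis-swap semantics of transpose([1, 0]) match numpy's np.transpose(a, (1, 0)), equivalently a.T; a numpy-only sketch:

```python
import numpy as np

a = np.arange(16, dtype=np.float64).reshape(4, 4)

t = np.transpose(a, (1, 0))   # same axis swap as Tensor.transpose([1, 0])
assert (t == a.T).all()       # equivalent to the .T shorthand
assert t[0, 1] == a[1, 0]     # rows and columns are exchanged
print(t.shape)                # (4, 4)
```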

※ Summary ※


  Tensor is an important concept in the Paddle deep learning framework. This post studies and tests the concepts from the document "Introduction to the Tensor Concept - Documentation - PaddlePaddle Deep Learning Platform", and compares the differences between Tensor and numpy's ndarray.


■ Related links:

  • Introduction to the Tensor Concept - Documentation - PaddlePaddle Deep Learning Platform

● Related figure links:

  • Figure 1.1.1 Tensors with different ndim
  • Figure 1.2.1 Relationship among a Tensor's Shape, Axis, Dimension, and Ndim
  • Figure 1.3.1 AI Studio Notebook error
