This post explains what tensor.contiguous() does in PyTorch and why it is sometimes required.
A tensor is contiguous when its elements are stored in memory in the same order as a row-major traversal of its indices. Two methods make this easy to inspect: stride() and is_contiguous(). Start with a freshly created tensor:
import torch

aaa = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(aaa.stride())          # (3, 1)
print(aaa.is_contiguous())   # True
stride() returns (3, 1): moving one step along dimension 0 (down a row) jumps 3 elements forward in the underlying storage, while moving one step along dimension 1 (across a column) jumps 1 element. This matches the row-major layout of the data, so is_contiguous() returns True.
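To make the stride-to-storage relationship concrete, here is a minimal check (written just for illustration) that element aaa[i, j] lives at offset i * 3 + j * 1 in the flat storage:

# For a contiguous tensor, element [i, j] sits at offset i * stride(0) + j * stride(1)
# in the underlying storage; flatten() returns that storage order for a contiguous tensor.
flat = aaa.flatten()
for i in range(2):
    for j in range(3):
        assert aaa[i, j].item() == flat[i * 3 + j * 1].item()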
Now apply a few common operations and see how they affect the stride and contiguity:
bbb = aaa.transpose(0, 1)
print(bbb.stride())          # (1, 3)
print(bbb.is_contiguous())   # False

ccc = aaa.narrow(1, 1, 2)
print(ccc.stride())          # (3, 1)
print(ccc.is_contiguous())   # False

ddd = aaa.repeat(2, 1)
print(ddd.stride())          # (3, 1)
print(ddd.is_contiguous())   # True

eee = aaa.unsqueeze(2).expand(2, 3, 3)
print(eee.stride())          # (3, 1, 0)
print(eee.is_contiguous())   # False

fff = aaa.unsqueeze(2).repeat(1, 1, 8).view(2, -1, 2)
print(fff.stride())          # (24, 2, 1)
print(fff.is_contiguous())   # True
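One way to see why transpose() and friends break contiguity is that they are views: they keep the original storage and only change the shape/stride metadata. A minimal check (a sketch using data_ptr(), which returns the address of a tensor's first element) might look like this:

# bbb shares storage with aaa: transpose only rewrites shape and stride metadata,
# so the elements stay in aaa's row-major order, which no longer matches bbb's indexing order.
print(bbb.data_ptr() == aaa.data_ptr())   # True

# repeat() allocates and fills new storage, so ddd gets its own contiguous buffer.
print(ddd.data_ptr() == aaa.data_ptr())   # False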
From these results we can see that transpose(), narrow(), and expand() return non-contiguous tensors, whereas repeat() and view() return contiguous ones. This raises a natural question: why does it matter whether a tensor is contiguous or not?
The answer is that view() cannot be applied to a non-contiguous tensor. This is because view() only changes the tensor's shape metadata; it requires the underlying data to be stored contiguously so the new shape can be mapped onto memory without copying. For example:
bbb.view(-1,3)
We get the error:
RuntimeError Traceback (most recent call last)
<ipython-input-63-eec5319b0ac5> in <module>()
RuntimeError: invalid argument 2: view size is not compatible with input tensor size and stride (at least one dimension spans across two contiguous subspaces). Call .contiguous() before .view(). at /pytorch/aten/src/TH/generic/THTensor.cpp:203
To solve this, simply call contiguous() on the non-contiguous tensor; this creates a contiguous copy of the data, after which view() can be applied:
bbb.contiguous().view(-1, 3)
# tensor([[1., 4., 2.],
#         [5., 3., 6.]])
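As a quick sanity check (a sketch continuing the aaa/bbb naming above, with ggg as an illustrative name), the copy returned by contiguous() has the stride expected for its shape, which is why the subsequent view() succeeds:

ggg = bbb.contiguous()
print(ggg.stride())                        # (2, 1), the row-major stride for shape (3, 2)
print(ggg.is_contiguous())                 # True
print(ggg.data_ptr() == bbb.data_ptr())    # False - the data was copied into new storage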