UpsamplingNearest2d

class torch.nn.UpsamplingNearest2d(size: Optional[Union[T, Tuple[T, T]]] = None, scale_factor: Optional[Union[T, Tuple[T, T]]] = None)[source]

Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.

To specify the scale, it takes either size or scale_factor as its constructor argument; see the sketch after the parameter list.

When size is given, it is the output size of the image (h, w).

Parameters
  • size (int or Tuple[int, int], optional) – output spatial sizes

  • scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
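
For illustration, the two ways of specifying the output resolution might look like this (a brief sketch; the tensor x and the 8×8 spatial size are arbitrary choices, not from the original docs):

>>> import torch
>>> from torch import nn
>>> x = torch.zeros(1, 3, 8, 8)                       # (N, C, H_in, W_in)
>>> nn.UpsamplingNearest2d(size=(16, 16))(x).shape    # explicit output size (h, w)
torch.Size([1, 3, 16, 16])
>>> nn.UpsamplingNearest2d(scale_factor=2)(x).shape   # multiplier on H_in and W_in
torch.Size([1, 3, 16, 16])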

Warning

This class is deprecated in favor of torch.nn.functional.interpolate().
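
In new code the same operation can be expressed with torch.nn.functional.interpolate using mode='nearest'; a minimal, self-contained sketch of the replacement (the names x and up are illustrative):

>>> import torch
>>> from torch import nn
>>> import torch.nn.functional as F
>>> x = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> up = F.interpolate(x, scale_factor=2, mode='nearest')  # functional equivalent
>>> torch.equal(up, nn.UpsamplingNearest2d(scale_factor=2)(x))
True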

Shape:
  • Input: (N, C, H_{in}, W_{in})

  • Output: (N, C, H_{out}, W_{out}) where

H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
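
The floor only matters for non-integer scale factors; for example, a 2×2 input with scale_factor=1.5 yields a 3×3 output, since ⌊2 × 1.5⌋ = 3 (a quick check, using the same imports as the examples below):

>>> import torch
>>> from torch import nn
>>> nn.UpsamplingNearest2d(scale_factor=1.5)(torch.zeros(1, 1, 2, 2)).shape
torch.Size([1, 1, 3, 3])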

Examples:

>>> import torch
>>> from torch import nn
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])
