upsample

paddle.nn.functional.upsample(x, size=None, scale_factor=None, mode='nearest', align_corners=False, align_mode=0, data_format='NCHW', name=None)
This operator resizes the images in a batch.
The input is a 4-D Tensor with shape (num_batches, channels, in_h, in_w) or (num_batches, in_h, in_w, channels), or a 5-D Tensor with shape (num_batches, channels, in_d, in_h, in_w) or (num_batches, in_d, in_h, in_w, channels), and resizing only applies to the dimensions corresponding to depth, height, and width.
Supported interpolation methods:
NEAREST: nearest-neighbor interpolation
BILINEAR: bilinear interpolation
TRILINEAR: trilinear interpolation
BICUBIC: bicubic interpolation
LINEAR: linear interpolation
AREA: area interpolation
Nearest-neighbor interpolation performs nearest-neighbor interpolation along the height and width of the input tensor.
Bilinear interpolation is an extension of linear interpolation for interpolating a function of two variables (e.g. the H and W directions in this operator) on a rectilinear 2-D grid. The key idea is to first perform linear interpolation in one direction, and then perform it again in the other direction.
Trilinear interpolation is an extension of linear interpolation to functions of three variables (e.g. the D, H, and W directions in this operator); linear interpolation is performed along all three directions.
Bicubic interpolation is an extension of cubic interpolation for interpolating data points on a two-dimensional grid. It produces smoother image edges than bilinear or nearest-neighbor interpolation.
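As a concrete illustration of the "interpolate in one direction, then the other" idea behind bilinear interpolation, here is a minimal Python sketch (bilinear_point is a hypothetical standalone helper, not part of the Paddle implementation):

def bilinear_point(v00, v01, v10, v11, dy, dx):
    # v00, v01, v10, v11 are the four neighbouring pixel values;
    # dy and dx (in [0, 1]) are the fractional offsets of the sample
    # point inside the 2x2 cell.
    top = v00 * (1 - dx) + v01 * dx        # linear interpolation along W (top row)
    bottom = v10 * (1 - dx) + v11 * dx     # linear interpolation along W (bottom row)
    return top * (1 - dy) + bottom * dy    # then linear interpolation along H

print(bilinear_point(0.0, 1.0, 2.0, 3.0, dy=0.5, dx=0.5))  # 1.5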
align_corners and align_mode are optional parameters; they select how the interpolation coordinates are computed.
Example:
How scale_factor is computed:
if align_corners = True && out_size > 1:
    scale_factor = (in_size - 1.0) / (out_size - 1.0)
else:
    scale_factor = float(in_size / out_size)
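A minimal Python sketch of the rule above (infer_scale is a hypothetical helper name, not a Paddle API):

def infer_scale(in_size, out_size, align_corners):
    # With align_corners the corner pixels of input and output are mapped
    # onto each other exactly, so the ratio uses (size - 1) on both sides.
    if align_corners and out_size > 1:
        return (in_size - 1.0) / (out_size - 1.0)
    return float(in_size) / out_size

print(infer_scale(6, 12, align_corners=False))  # 0.5
print(infer_scale(6, 12, align_corners=True))   # 0.4545...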
Output size calculation rules for the different interpolation modes:
Nearest neighbor interpolation:

    input : (N, C, H_in, W_in)
    output: (N, C, H_out, W_out) where:
        H_out = \left\lfloor H_{in} * scale_{factor} \right\rfloor
        W_out = \left\lfloor W_{in} * scale_{factor} \right\rfloor

Bilinear interpolation:

    if align_corners = False, align_mode = 0:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = (H_{in} + 0.5) * scale_{factor} - 0.5
            W_out = (W_{in} + 0.5) * scale_{factor} - 0.5
    else:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = H_{in} * scale_{factor}
            W_out = W_{in} * scale_{factor}

Bicubic interpolation:

    if align_corners = False:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = (H_{in} + 0.5) * scale_{factor} - 0.5
            W_out = (W_{in} + 0.5) * scale_{factor} - 0.5
    else:
        input : (N, C, H_in, W_in)
        output: (N, C, H_out, W_out) where:
            H_out = H_{in} * scale_{factor}
            W_out = W_{in} * scale_{factor}

Trilinear interpolation:

    if align_corners = False, align_mode = 0:
        input : (N, C, D_in, H_in, W_in)
        output: (N, C, D_out, H_out, W_out) where:
            D_out = (D_{in} + 0.5) * scale_{factor} - 0.5
            H_out = (H_{in} + 0.5) * scale_{factor} - 0.5
            W_out = (W_{in} + 0.5) * scale_{factor} - 0.5
    else:
        input : (N, C, D_in, H_in, W_in)
        output: (N, C, D_out, H_out, W_out) where:
            D_out = D_{in} * scale_{factor}
            H_out = H_{in} * scale_{factor}
            W_out = W_{in} * scale_{factor}
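The formulas above describe how an output coordinate is mapped back to a (possibly fractional) source coordinate. A minimal Python sketch of that mapping for the linear-family modes, assuming the align_corners / align_mode rules described above (src_index is a hypothetical helper, not a Paddle API):

def src_index(dst_idx, in_size, out_size, align_corners, align_mode=0):
    # Map an output coordinate back to the fractional input coordinate.
    if align_corners:
        # Corner pixels of input and output are aligned.
        return dst_idx * (in_size - 1.0) / (out_size - 1.0) if out_size > 1 else 0.0
    scale = float(in_size) / out_size
    if align_mode == 0:
        return scale * (dst_idx + 0.5) - 0.5   # half-pixel (pixel-center) mapping
    return scale * dst_idx                     # align_mode = 1, asymmetric mapping

# Doubling a width of 6 to 12 with align_corners=False, align_mode=0:
print([src_index(i, 6, 12, align_corners=False) for i in range(4)])
# [-0.25, 0.25, 0.75, 1.25]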
For details of nearest-neighbor interpolation, see Wikipedia: https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
For details of bilinear interpolation, see Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation
For details of trilinear interpolation, see Wikipedia: https://en.wikipedia.org/wiki/Trilinear_interpolation
For details of bicubic interpolation, see Wikipedia: https://en.wikipedia.org/wiki/Bicubic_interpolation
- Parameters:
  - x (Tensor) - A 4-D or 5-D Tensor with data type float32, float64, or uint8; its data format is specified by data_format.
  - size (list|tuple|Tensor|None) - The target output shape. For a 4-D input it has the form (out_h, out_w); for a 5-D input it has the form (out_d, out_h, out_w). If size is a list, each element can be an integer or a 1-D Tensor of shape [1]; if size is a Tensor, it must be 1-D. Default: None.
  - scale_factor (float|Tensor|list|tuple|None) - The multiplier for the height or width of the input. At least one of size and scale_factor must be set, and size has higher priority than scale_factor. If scale_factor is a list or tuple, it must match the shape of the input. Default: None.
  - mode (str, optional) - The interpolation method. Supported values are "bilinear", "trilinear", "nearest", "bicubic", "linear", and "area". Default: "nearest".
  - align_mode (int, optional) - An option for bilinear interpolation. 0 means src_idx = scale * (dst_idx + 0.5) - 0.5; 1 means src_idx = scale * dst_idx. Default: 0.
  - align_corners (bool, optional) - If True, the centers of the 4 corner pixels of the input and output tensors are aligned and the corner pixel values are preserved. Default: False.
  - data_format (str, optional) - The data format of the input; the output uses the same format. For a 4-D Tensor, 'NCHW' (num_batches, channels, height, width) and 'NHWC' (num_batches, height, width, channels) are supported; for a 5-D Tensor, 'NCDHW' (num_batches, channels, depth, height, width) and 'NDHWC' (num_batches, depth, height, width, channels) are supported. Default: 'NCHW'.
  - name (str|None, optional) - The operation name, used by developers when printing debugging information; for details, please refer to Name. Default: None.
- Returns: A 4-D Tensor with shape (num_batches, channels, out_h, out_w) or (num_batches, out_h, out_w, channels), or a 5-D Tensor with shape (num_batches, channels, out_d, out_h, out_w) or (num_batches, out_d, out_h, out_w, channels).
- Raises:
  - TypeError - size must be a list, tuple, or Tensor.
  - TypeError - actual_shape must be a Variable or None.
  - ValueError - The interpolation mode of upsample can only be "bilinear", "trilinear", "nearest", "bicubic", "linear", or "area".
  - ValueError - 'linear' only supports 3-D tensors.
  - ValueError - 'bilinear', 'bicubic', and 'nearest' only support 4-D tensors.
  - ValueError - 'trilinear' only supports 5-D tensors.
  - ValueError - size and scale_factor cannot both be None.
  - ValueError - The length of size must be 1 when the input is a 3-D tensor.
  - ValueError - The length of size must be 2 when the input is a 4-D tensor.
  - ValueError - The length of size must be 3 when the input is a 5-D tensor.
  - ValueError - scale_factor must be greater than 0.
  - TypeError - align_corners must be a bool.
  - ValueError - align_mode can only be 0 or 1.
  - ValueError - data_format can only be 'NCW', 'NCHW', 'NHWC', 'NCDHW', or 'NDHWC'.
Code example
import paddle
import paddle.nn.functional as F
input = paddle.rand(shape=(2,3,6,10))
output = F.upsample(x=input, size=[12,12])
print(output.shape)
# [2, 3, 12, 12]
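A second usage sketch, based on the same API, that resizes with scale_factor and an explicit interpolation mode instead of a fixed output size (the expected output shape follows from the rules above):

import paddle
import paddle.nn.functional as F

x = paddle.rand(shape=[2, 3, 6, 10])   # NCHW input
# Double both spatial dimensions with bilinear interpolation.
y = F.upsample(x=x, scale_factor=2.0, mode='bilinear', align_corners=False)
print(y.shape)
# [2, 3, 12, 20]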