paddle.static / nn / conv2d_transpose

conv2d_transpose

paddle.static.nn.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCHW')
2-D transposed convolution layer (Convolution2D transpose layer).

This layer computes the output feature-map size from the input, the filter, the dilations, the stride and the padding, or uses the size specified via output_size. The input and output are in NCHW or NHWC format, where N is the batch size, C the number of channels, H the feature-map height and W the feature-map width. The filter is in MCHW format, where M is the number of output channels, C the number of input channels, H the filter height and W the filter width. If the number of groups is greater than 1, C equals the number of input channels divided by the number of groups. The computation of a transposed convolution is equivalent to the backward pass of a convolution. Transposed convolution is also called deconvolution, although it is not a true deconvolution. For details, see the explanation below and the references. If bias_attr is not False, a bias term is added to the result of the transposed convolution; if act is not None, the corresponding activation function is applied afterwards.
The relationship between input \(X\) and output \(Out\) is:

\(Out = \sigma (W * X + b)\)

- where:
-
\(X\): input, a 4-D Tensor in NCHW or NHWC format
\(W\): filter, a 4-D Tensor in MCHW format
\(*\): convolution (note: a transposed convolution is still computed as a convolution)
\(b\): bias, a 2-D Tensor of shape [M, 1]
\(\sigma\): activation function
\(Out\): output, a 4-D Tensor in NCHW or NHWC format, whose shape may differ from that of \(X\)
Example

Input:

- Input tensor shape: \((N, C_{in}, H_{in}, W_{in})\)
- Filter shape: \((C_{in}, C_{out}, H_f, W_f)\)

Output:

- Output tensor shape: \((N, C_{out}, H_{out}, W_{out})\)

where

\(H^\prime_{out} = (H_{in} - 1) * strides[0] - pad\_height\_top - pad\_height\_bottom + dilations[0] * (H_f - 1) + 1\)

\(W^\prime_{out} = (W_{in} - 1) * strides[1] - pad\_width\_left - pad\_width\_right + dilations[1] * (W_f - 1) + 1\)

If padding = "SAME":

\(H^\prime_{out} = H_{in} * strides[0]\), \(W^\prime_{out} = W_{in} * strides[1]\)

If padding = "VALID":

\(H^\prime_{out} = (H_{in} - 1) * strides[0] + dilations[0] * (H_f - 1) + 1\)

\(W^\prime_{out} = (W_{in} - 1) * strides[1] + dilations[1] * (W_f - 1) + 1\)
Note:

If output_size is None, then \(H_{out}\) = \(H^\prime_{out}\) and \(W_{out}\) = \(W^\prime_{out}\); otherwise, the specified output height \(H_{out}\) must lie between \(H^\prime_{out}\) and \(H^\prime_{out} + strides[0]\) (exclusive of \(H^\prime_{out} + strides[0]\)), and the specified output width \(W_{out}\) must lie between \(W^\prime_{out}\) and \(W^\prime_{out} + strides[1]\) (exclusive of \(W^\prime_{out} + strides[1]\)).

Since a transposed convolution can be viewed as the backward pass of a convolution, and the convolution size formula maps inputs of different sizes to outputs of the same size, a fixed input feature-map size does not correspond to a unique output size for a transposed convolution.

If output_size is specified, conv2d_transpose can compute the filter size automatically.
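The note above can be checked numerically. A minimal sketch in plain Python (`deconv_out_size` is a hypothetical helper, not part of the Paddle API; symmetric padding is assumed for simplicity) that computes \(H^\prime_{out}\) and the range of admissible output_size values:

```python
def deconv_out_size(in_size, filter_size, stride=1, pad=0, dilation=1):
    # H'_out = (H_in - 1)*stride - 2*pad + dilation*(filter_size - 1) + 1
    # (symmetric padding assumed for this sketch)
    return (in_size - 1) * stride - 2 * pad + dilation * (filter_size - 1) + 1

# With a 32x32 input, filter_size=3, stride=2 and no padding:
h = deconv_out_size(32, 3, stride=2)   # (32-1)*2 + (3-1) + 1 = 65
# output_size_height may be any value in [h, h + stride):
admissible = list(range(h, h + 2))     # [65, 66]
print(h, admissible)
```

With stride=1 the admissible range collapses to a single value, which is why output_size is mainly useful for strided layers, where several output sizes map back to the same input size.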
- Parameters:
-
input (Tensor) - a 4-D Tensor of shape \([N, C, H, W]\) or \([N, H, W, C]\), where N is the batch size, C the number of channels, H the feature height and W the feature width. Data type: float32 or float64.

num_filters (int) - the number of filters (convolution kernels), equal to the number of output channels.

output_size (int|tuple, optional) - the output image size. If output_size is a tuple, it must contain two integers, (output_size_height, output_size_width). If output_size=None, it is computed internally from filter_size, padding and stride. If both output_size and filter_size are specified, they must satisfy the formula above. Default: None. output_size and filter_size must not both be None.

filter_size (int|tuple, optional) - the filter size. If filter_size is a tuple, it must contain two integers, (filter_size_height, filter_size_width); otherwise filter_size_height = filter_size_width = filter_size. If filter_size=None, output_size must be specified, and conv2d_transpose computes the filter size from output_size, padding and stride. Default: None. output_size and filter_size must not both be None.

padding (int|list|tuple|str, optional) - the padding size. The padding argument effectively adds dilation * (kernel_size - 1) - padding zeros on each side of the input feature map. If it is a string, it can be "VALID" or "SAME", denoting the padding algorithm; see the formulas for padding = "SAME" and padding = "VALID" above. If it is a tuple or list, three formats are accepted: (1) four pairs: [[0,0], [0,0], [padding_height_top, padding_height_bottom], [padding_width_left, padding_width_right]] when data_format is "NCHW", or [[0,0], [padding_height_top, padding_height_bottom], [padding_width_left, padding_width_right], [0,0]] when data_format is "NHWC"; (2) four integers: [padding_height_top, padding_height_bottom, padding_width_left, padding_width_right]; (3) two integers: [padding_height, padding_width], in which case padding_height_top = padding_height_bottom = padding_height and padding_width_left = padding_width_right = padding_width. If it is a single integer, padding_height = padding_width = padding. Default: 0.

stride (int|tuple, optional) - the stride by which the filter slides over the input during the convolution. If stride is a tuple, it must contain two integers, (stride_height, stride_width); otherwise stride_height = stride_width = stride. Default: stride = 1.

dilation (int|tuple, optional) - the dilation size, used for dilated (atrous) convolution: the spacing between adjacent sampled points within the receptive field (see the visualization for intuition). If dilation is a tuple, it must contain two integers, (dilation_height, dilation_width); otherwise dilation_height = dilation_width = dilation. Default: dilation = 1.

groups (int, optional) - the number of groups of the 2-D transposed convolution layer, inspired by the grouped convolution in Alex Krizhevsky's Deep CNN paper: with groups=2, the input and the filters are each split evenly into two groups by channel; the first group of filters is convolved with the first group of inputs, and the second group with the second. Default: groups = 1.

param_attr (ParamAttr, optional) - the parameter attribute object for the weights. Default: None, meaning the default weight parameter attribute is used; see ParamAttr for usage. The default weight initializer of the conv2d_transpose operator is Xavier.

bias_attr (ParamAttr|False, optional) - the parameter attribute object for the bias. Default: None, meaning the default bias parameter attribute is used; see ParamAttr for usage. The default bias initializer of the conv2d_transpose operator is 0.0.

use_cudnn (bool, optional) - whether to use the cuDNN kernel; effective only when the cuDNN library is installed. Default: True.

act (str, optional) - the activation function type; if set to None, no activation is applied. Default: None.

name (str, optional) - see cn_api_guide_Name for usage; usually there is no need to set this. Default: None.

data_format (str, optional) - the data format of the input; the output uses the same format. Can be "NCHW" or "NHWC", where N is the batch size, C the number of channels, H the feature height and W the feature width. Default: "NCHW".
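The three list/tuple formats accepted by padding all describe the same four quantities. A rough sketch of the normalization to a canonical [top, bottom, left, right] form (plain Python; `normalize_padding` is a hypothetical helper, not the Paddle implementation), assuming data_format="NCHW":

```python
def normalize_padding(padding):
    # Hypothetical helper: reduce the accepted `padding` formats to
    # [top, bottom, left, right], assuming data_format="NCHW".
    if isinstance(padding, int):
        return [padding] * 4
    padding = list(padding)
    if len(padding) == 2:                                  # [pad_h, pad_w]
        ph, pw = padding
        return [ph, ph, pw, pw]
    if len(padding) == 4 and all(isinstance(p, (list, tuple)) for p in padding):
        # [[0,0], [0,0], [top, bottom], [left, right]] for NCHW
        return list(padding[2]) + list(padding[3])
    if len(padding) == 4:                                  # [top, bottom, left, right]
        return padding
    raise ValueError("unsupported padding format")

print(normalize_padding(1))                                 # [1, 1, 1, 1]
print(normalize_padding([1, 2]))                            # [1, 1, 2, 2]
print(normalize_padding([[0, 0], [0, 0], [1, 2], [3, 4]]))  # [1, 2, 3, 4]
```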
Returns: a 4-D Tensor with the same data type as input. If no activation is specified, the result of the transposed convolution is returned; otherwise, the final result after the transposed convolution and the activation.
- Raises:
-
ValueError - if the input shape, filter_size, stride, padding and groups do not match.

ValueError - if data_format is neither "NCHW" nor "NHWC".

ValueError - if padding is a string that is neither "SAME" nor "VALID".

ValueError - if padding contains four pairs and the value for the batch dimension or the channel dimension is nonzero.

ValueError - if output_size and filter_size are both None.

ShapeError - if the input is not a 4-D Tensor.

ShapeError - if the input and the filter do not have the same number of dimensions.

ShapeError - if the number of input dimensions minus the number of stride entries is not 2.
Code example
import paddle
paddle.enable_static()
data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
print(conv2d_transpose.shape) # [-1, 2, 34, 34]
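The printed shape can be verified against the size formula: with no padding, stride 1 and a 3x3 filter, each spatial dimension grows from 32 to (32 - 1)*1 + (3 - 1) + 1 = 34. The two string padding modes behave differently; a quick arithmetic check in plain Python, independent of Paddle (the "SAME" formula here assumes the convention that a SAME transposed convolution scales the size by exactly the stride):

```python
def valid_out(in_size, filter_size, stride=1, dilation=1):
    # padding = "VALID": no implicit padding
    return (in_size - 1) * stride + dilation * (filter_size - 1) + 1

def same_out(in_size, stride):
    # padding = "SAME": output size is exactly in_size * stride (assumed convention)
    return in_size * stride

print(valid_out(32, 3))   # 34 -> matches the shape printed in the example above
print(same_out(16, 2))    # 32 -> a stride-2 layer used as a x2 upsampler
```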