MXNet
-
- ndarray
- ndarray.CachedOp
- ndarray.NDArray
- ndarray.Activation
- ndarray.BatchNorm
- ndarray.BatchNorm_v1
- ndarray.BilinearSampler
- ndarray.BlockGrad
- ndarray.CTCLoss
- ndarray.Cast
- ndarray.Concat
- ndarray.Convolution
- ndarray.Convolution_v1
- ndarray.Correlation
- ndarray.Crop
- ndarray.Custom
- ndarray.Deconvolution
- ndarray.Dropout
- ndarray.ElementWiseSum
- ndarray.Embedding
- ndarray.Flatten
- ndarray.FullyConnected
- ndarray.GridGenerator
- ndarray.GroupNorm
- ndarray.IdentityAttachKLSparseReg
- ndarray.InstanceNorm
- ndarray.L2Normalization
- ndarray.LRN
- ndarray.LayerNorm
- ndarray.LeakyReLU
- ndarray.LinearRegressionOutput
- ndarray.LogisticRegressionOutput
- ndarray.MAERegressionOutput
- ndarray.MakeLoss
- ndarray.Pad
- ndarray.Pooling
- ndarray.Pooling_v1
- ndarray.RNN
- ndarray.ROIPooling
- ndarray.Reshape
- ndarray.SVMOutput
- ndarray.SequenceLast
- ndarray.SequenceMask
- ndarray.SequenceReverse
- ndarray.SliceChannel
- ndarray.Softmax
- ndarray.SoftmaxActivation
- ndarray.SoftmaxOutput
- ndarray.SpatialTransformer
- ndarray.SwapAxis
- ndarray.UpSampling
- ndarray.abs
- ndarray.adam_update
- ndarray.add_n
- ndarray.all_finite
- ndarray.amp_cast
- ndarray.amp_multicast
- ndarray.arccos
- ndarray.arccosh
- ndarray.arcsin
- ndarray.arcsinh
- ndarray.arctan
- ndarray.arctanh
- ndarray.argmax
- ndarray.argmax_channel
- ndarray.argmin
- ndarray.argsort
- ndarray.batch_dot
- ndarray.batch_take
- ndarray.broadcast_add
- ndarray.broadcast_axes
- ndarray.broadcast_axis
- ndarray.broadcast_div
- ndarray.broadcast_equal
- ndarray.broadcast_greater
- ndarray.broadcast_greater_equal
- ndarray.broadcast_hypot
- ndarray.broadcast_lesser
- ndarray.broadcast_lesser_equal
- ndarray.broadcast_like
- ndarray.broadcast_logical_and
- ndarray.broadcast_logical_or
- ndarray.broadcast_logical_xor
- ndarray.broadcast_maximum
- ndarray.broadcast_minimum
- ndarray.broadcast_minus
- ndarray.broadcast_mod
- ndarray.broadcast_mul
- ndarray.broadcast_not_equal
- ndarray.broadcast_plus
- ndarray.broadcast_power
- ndarray.broadcast_sub
- ndarray.broadcast_to
- ndarray.cast
- ndarray.cast_storage
- ndarray.cbrt
- ndarray.ceil
- ndarray.choose_element_0index
- ndarray.clip
- ndarray.col2im
- ndarray.concat
- ndarray.cos
- ndarray.cosh
- ndarray.crop
- ndarray.ctc_loss
- ndarray.cumsum
- ndarray.degrees
- ndarray.depth_to_space
- ndarray.diag
- ndarray.dot
- ndarray.elemwise_add
- ndarray.elemwise_div
- ndarray.elemwise_mul
- ndarray.elemwise_sub
- ndarray.erf
- ndarray.erfinv
- ndarray.exp
- ndarray.expand_dims
- ndarray.expm1
- ndarray.fill_element_0index
- ndarray.fix
- ndarray.flatten
- ndarray.flip
- ndarray.floor
- ndarray.ftml_update
- ndarray.ftrl_update
- ndarray.gamma
- ndarray.gammaln
- ndarray.gather_nd
- ndarray.hard_sigmoid
- ndarray.identity
- ndarray.im2col
- ndarray.khatri_rao
- ndarray.lamb_update_phase1
- ndarray.lamb_update_phase2
- ndarray.linalg_det
- ndarray.linalg_extractdiag
- ndarray.linalg_extracttrian
- ndarray.linalg_gelqf
- ndarray.linalg_gemm
- ndarray.linalg_gemm2
- ndarray.linalg_inverse
- ndarray.linalg_makediag
- ndarray.linalg_maketrian
- ndarray.linalg_potrf
- ndarray.linalg_potri
- ndarray.linalg_slogdet
- ndarray.linalg_sumlogdiag
- ndarray.linalg_syrk
- ndarray.linalg_trmm
- ndarray.linalg_trsm
- ndarray.log
- ndarray.log10
- ndarray.log1p
- ndarray.log2
- ndarray.log_softmax
- ndarray.logical_not
- ndarray.make_loss
- ndarray.max
- ndarray.max_axis
- ndarray.mean
- ndarray.min
- ndarray.min_axis
- ndarray.moments
- ndarray.mp_lamb_update_phase1
- ndarray.mp_lamb_update_phase2
- ndarray.mp_nag_mom_update
- ndarray.mp_sgd_mom_update
- ndarray.mp_sgd_update
- ndarray.multi_all_finite
- ndarray.multi_lars
- ndarray.multi_mp_sgd_mom_update
- ndarray.multi_mp_sgd_update
- ndarray.multi_sgd_mom_update
- ndarray.multi_sgd_update
- ndarray.multi_sum_sq
- ndarray.nag_mom_update
- ndarray.nanprod
- ndarray.nansum
- ndarray.negative
- ndarray.norm
- ndarray.normal
- ndarray.one_hot
- ndarray.ones_like
- ndarray.pad
- ndarray.pick
- ndarray.preloaded_multi_mp_sgd_mom_update
- ndarray.preloaded_multi_mp_sgd_update
- ndarray.preloaded_multi_sgd_mom_update
- ndarray.preloaded_multi_sgd_update
- ndarray.prod
- ndarray.radians
- ndarray.random_exponential
- ndarray.random_gamma
- ndarray.random_generalized_negative_binomial
- ndarray.random_negative_binomial
- ndarray.random_normal
- ndarray.random_pdf_dirichlet
- ndarray.random_pdf_exponential
- ndarray.random_pdf_gamma
- ndarray.random_pdf_generalized_negative_binomial
- ndarray.random_pdf_negative_binomial
- ndarray.random_pdf_normal
- ndarray.random_pdf_poisson
- ndarray.random_pdf_uniform
- ndarray.random_poisson
- ndarray.random_randint
- ndarray.random_uniform
- ndarray.ravel_multi_index
- ndarray.rcbrt
- ndarray.reciprocal
- ndarray.relu
- ndarray.repeat
- ndarray.reset_arrays
- ndarray.reshape
- ndarray.reshape_like
- ndarray.reverse
- ndarray.rint
- ndarray.rmsprop_update
- ndarray.rmspropalex_update
- ndarray.round
- ndarray.rsqrt
- ndarray.sample_exponential
- ndarray.sample_gamma
- ndarray.sample_generalized_negative_binomial
- ndarray.sample_multinomial
- ndarray.sample_negative_binomial
- ndarray.sample_normal
- ndarray.sample_poisson
- ndarray.sample_uniform
- ndarray.scatter_nd
- ndarray.sgd_mom_update
- ndarray.sgd_update
- ndarray.shape_array
- ndarray.shuffle
- ndarray.sigmoid
- ndarray.sign
- ndarray.signsgd_update
- ndarray.signum_update
- ndarray.sin
- ndarray.sinh
- ndarray.size_array
- ndarray.slice
- ndarray.slice_axis
- ndarray.slice_like
- ndarray.smooth_l1
- ndarray.softmax
- ndarray.softmax_cross_entropy
- ndarray.softmin
- ndarray.softsign
- ndarray.sort
- ndarray.space_to_depth
- ndarray.split
- ndarray.sqrt
- ndarray.square
- ndarray.squeeze
- ndarray.stack
- ndarray.stop_gradient
- ndarray.sum
- ndarray.sum_axis
- ndarray.swapaxes
- ndarray.take
- ndarray.tan
- ndarray.tanh
- ndarray.tile
- ndarray.topk
- ndarray.transpose
- ndarray.trunc
- ndarray.uniform
- ndarray.unravel_index
- ndarray.where
- ndarray.zeros_like
- ndarray.concatenate
- ndarray.ones
- ndarray.add
- ndarray.arange
- ndarray.linspace
- ndarray.eye
- ndarray.divide
- ndarray.equal
- ndarray.full
- ndarray.greater
- ndarray.greater_equal
- ndarray.imdecode
- ndarray.lesser
- ndarray.lesser_equal
- ndarray.logical_and
- ndarray.logical_or
- ndarray.logical_xor
- ndarray.maximum
- ndarray.minimum
- ndarray.moveaxis
- ndarray.modulo
- ndarray.multiply
- ndarray.not_equal
- ndarray.onehot_encode
- ndarray.power
- ndarray.subtract
- ndarray.true_divide
- ndarray.waitall
- ndarray.histogram
- ndarray.split_v2
- ndarray.to_dlpack_for_read
- ndarray.to_dlpack_for_write
- ndarray.from_dlpack
- ndarray.from_numpy
- ndarray.zeros
- ndarray.indexing_key_expand_implicit_axes
- ndarray.get_indexing_dispatch_code
- ndarray.get_oshape_of_gather_nd_op
- ndarray.empty
- ndarray.array
- ndarray.load
- ndarray.load_frombuffer
- ndarray.save
-
- ndarray.contrib
- ndarray.contrib.rand_zipfian
- ndarray.contrib.foreach
- ndarray.contrib.while_loop
- ndarray.contrib.cond
- ndarray.contrib.isinf
- ndarray.contrib.isfinite
- ndarray.contrib.isnan
- ndarray.contrib.AdaptiveAvgPooling2D
- ndarray.contrib.BilinearResize2D
- ndarray.contrib.CTCLoss
- ndarray.contrib.DeformableConvolution
- ndarray.contrib.DeformablePSROIPooling
- ndarray.contrib.ModulatedDeformableConvolution
- ndarray.contrib.MultiBoxDetection
- ndarray.contrib.MultiBoxPrior
- ndarray.contrib.MultiBoxTarget
- ndarray.contrib.MultiProposal
- ndarray.contrib.PSROIPooling
- ndarray.contrib.Proposal
- ndarray.contrib.ROIAlign
- ndarray.contrib.RROIAlign
- ndarray.contrib.SparseEmbedding
- ndarray.contrib.SyncBatchNorm
- ndarray.contrib.allclose
- ndarray.contrib.arange_like
- ndarray.contrib.backward_gradientmultiplier
- ndarray.contrib.backward_hawkesll
- ndarray.contrib.backward_index_copy
- ndarray.contrib.backward_quadratic
- ndarray.contrib.bipartite_matching
- ndarray.contrib.boolean_mask
- ndarray.contrib.box_decode
- ndarray.contrib.box_encode
- ndarray.contrib.box_iou
- ndarray.contrib.box_nms
- ndarray.contrib.box_non_maximum_suppression
- ndarray.contrib.calibrate_entropy
- ndarray.contrib.count_sketch
- ndarray.contrib.ctc_loss
- ndarray.contrib.dequantize
- ndarray.contrib.dgl_adjacency
- ndarray.contrib.dgl_csr_neighbor_non_uniform_sample
- ndarray.contrib.dgl_csr_neighbor_uniform_sample
- ndarray.contrib.dgl_graph_compact
- ndarray.contrib.dgl_subgraph
- ndarray.contrib.div_sqrt_dim
- ndarray.contrib.edge_id
- ndarray.contrib.fft
- ndarray.contrib.getnnz
- ndarray.contrib.gradientmultiplier
- ndarray.contrib.group_adagrad_update
- ndarray.contrib.hawkesll
- ndarray.contrib.ifft
- ndarray.contrib.index_array
- ndarray.contrib.index_copy
- ndarray.contrib.interleaved_matmul_encdec_qk
- ndarray.contrib.interleaved_matmul_encdec_valatt
- ndarray.contrib.interleaved_matmul_selfatt_qk
- ndarray.contrib.interleaved_matmul_selfatt_valatt
- ndarray.contrib.quadratic
- ndarray.contrib.quantize
- ndarray.contrib.quantize_v2
- ndarray.contrib.quantized_act
- ndarray.contrib.quantized_batch_norm
- ndarray.contrib.quantized_concat
- ndarray.contrib.quantized_conv
- ndarray.contrib.quantized_elemwise_add
- ndarray.contrib.quantized_elemwise_mul
- ndarray.contrib.quantized_embedding
- ndarray.contrib.quantized_flatten
- ndarray.contrib.quantized_fully_connected
- ndarray.contrib.quantized_pooling
- ndarray.contrib.requantize
- ndarray.contrib.round_ste
- ndarray.contrib.sign_ste
-
- ndarray.image
- ndarray.image.adjust_lighting
- ndarray.image.crop
- ndarray.image.flip_left_right
- ndarray.image.flip_top_bottom
- ndarray.image.normalize
- ndarray.image.random_brightness
- ndarray.image.random_color_jitter
- ndarray.image.random_contrast
- ndarray.image.random_flip_left_right
- ndarray.image.random_flip_top_bottom
- ndarray.image.random_hue
- ndarray.image.random_lighting
- ndarray.image.random_saturation
- ndarray.image.resize
- ndarray.image.to_tensor
-
- ndarray.linalg
- ndarray.linalg.det
- ndarray.linalg.extractdiag
- ndarray.linalg.extracttrian
- ndarray.linalg.gelqf
- ndarray.linalg.gemm
- ndarray.linalg.gemm2
- ndarray.linalg.inverse
- ndarray.linalg.makediag
- ndarray.linalg.maketrian
- ndarray.linalg.potrf
- ndarray.linalg.potri
- ndarray.linalg.slogdet
- ndarray.linalg.sumlogdiag
- ndarray.linalg.syevd
- ndarray.linalg.syrk
- ndarray.linalg.trmm
- ndarray.linalg.trsm
-
- ndarray.op
- ndarray.op.CachedOp
- ndarray.op.Activation
- ndarray.op.BatchNorm
- ndarray.op.BatchNorm_v1
- ndarray.op.BilinearSampler
- ndarray.op.BlockGrad
- ndarray.op.CTCLoss
- ndarray.op.Cast
- ndarray.op.Concat
- ndarray.op.Convolution
- ndarray.op.Convolution_v1
- ndarray.op.Correlation
- ndarray.op.Crop
- ndarray.op.Custom
- ndarray.op.Deconvolution
- ndarray.op.Dropout
- ndarray.op.ElementWiseSum
- ndarray.op.Embedding
- ndarray.op.Flatten
- ndarray.op.FullyConnected
- ndarray.op.GridGenerator
- ndarray.op.GroupNorm
- ndarray.op.IdentityAttachKLSparseReg
- ndarray.op.InstanceNorm
- ndarray.op.L2Normalization
- ndarray.op.LRN
- ndarray.op.LayerNorm
- ndarray.op.LeakyReLU
- ndarray.op.LinearRegressionOutput
- ndarray.op.LogisticRegressionOutput
- ndarray.op.MAERegressionOutput
- ndarray.op.MakeLoss
- ndarray.op.Pad
- ndarray.op.Pooling
- ndarray.op.Pooling_v1
- ndarray.op.RNN
- ndarray.op.ROIPooling
- ndarray.op.Reshape
- ndarray.op.SVMOutput
- ndarray.op.SequenceLast
- ndarray.op.SequenceMask
- ndarray.op.SequenceReverse
- ndarray.op.SliceChannel
- ndarray.op.Softmax
- ndarray.op.SoftmaxActivation
- ndarray.op.SoftmaxOutput
- ndarray.op.SpatialTransformer
- ndarray.op.SwapAxis
- ndarray.op.UpSampling
- ndarray.op.abs
- ndarray.op.adam_update
- ndarray.op.add_n
- ndarray.op.all_finite
- ndarray.op.amp_cast
- ndarray.op.amp_multicast
- ndarray.op.arccos
- ndarray.op.arccosh
- ndarray.op.arcsin
- ndarray.op.arcsinh
- ndarray.op.arctan
- ndarray.op.arctanh
- ndarray.op.argmax
- ndarray.op.argmax_channel
- ndarray.op.argmin
- ndarray.op.argsort
- ndarray.op.batch_dot
- ndarray.op.batch_take
- ndarray.op.broadcast_add
- ndarray.op.broadcast_axes
- ndarray.op.broadcast_axis
- ndarray.op.broadcast_div
- ndarray.op.broadcast_equal
- ndarray.op.broadcast_greater
- ndarray.op.broadcast_greater_equal
- ndarray.op.broadcast_hypot
- ndarray.op.broadcast_lesser
- ndarray.op.broadcast_lesser_equal
- ndarray.op.broadcast_like
- ndarray.op.broadcast_logical_and
- ndarray.op.broadcast_logical_or
- ndarray.op.broadcast_logical_xor
- ndarray.op.broadcast_maximum
- ndarray.op.broadcast_minimum
- ndarray.op.broadcast_minus
- ndarray.op.broadcast_mod
- ndarray.op.broadcast_mul
- ndarray.op.broadcast_not_equal
- ndarray.op.broadcast_plus
- ndarray.op.broadcast_power
- ndarray.op.broadcast_sub
- ndarray.op.broadcast_to
- ndarray.op.cast
- ndarray.op.cast_storage
- ndarray.op.cbrt
- ndarray.op.ceil
- ndarray.op.choose_element_0index
- ndarray.op.clip
- ndarray.op.col2im
- ndarray.op.concat
- ndarray.op.cos
- ndarray.op.cosh
- ndarray.op.crop
- ndarray.op.ctc_loss
- ndarray.op.cumsum
- ndarray.op.degrees
- ndarray.op.depth_to_space
- ndarray.op.diag
- ndarray.op.dot
- ndarray.op.elemwise_add
- ndarray.op.elemwise_div
- ndarray.op.elemwise_mul
- ndarray.op.elemwise_sub
- ndarray.op.erf
- ndarray.op.erfinv
- ndarray.op.exp
- ndarray.op.expand_dims
- ndarray.op.expm1
- ndarray.op.fill_element_0index
- ndarray.op.fix
- ndarray.op.flatten
- ndarray.op.flip
- ndarray.op.floor
- ndarray.op.ftml_update
- ndarray.op.ftrl_update
- ndarray.op.gamma
- ndarray.op.gammaln
- ndarray.op.gather_nd
- ndarray.op.hard_sigmoid
- ndarray.op.identity
- ndarray.op.im2col
- ndarray.op.khatri_rao
- ndarray.op.lamb_update_phase1
- ndarray.op.lamb_update_phase2
- ndarray.op.linalg_det
- ndarray.op.linalg_extractdiag
- ndarray.op.linalg_extracttrian
- ndarray.op.linalg_gelqf
- ndarray.op.linalg_gemm
- ndarray.op.linalg_gemm2
- ndarray.op.linalg_inverse
- ndarray.op.linalg_makediag
- ndarray.op.linalg_maketrian
- ndarray.op.linalg_potrf
- ndarray.op.linalg_potri
- ndarray.op.linalg_slogdet
- ndarray.op.linalg_sumlogdiag
- ndarray.op.linalg_syrk
- ndarray.op.linalg_trmm
- ndarray.op.linalg_trsm
- ndarray.op.log
- ndarray.op.log10
- ndarray.op.log1p
- ndarray.op.log2
- ndarray.op.log_softmax
- ndarray.op.logical_not
- ndarray.op.make_loss
- ndarray.op.max
- ndarray.op.max_axis
- ndarray.op.mean
- ndarray.op.min
- ndarray.op.min_axis
- ndarray.op.moments
- ndarray.op.mp_lamb_update_phase1
- ndarray.op.mp_lamb_update_phase2
- ndarray.op.mp_nag_mom_update
- ndarray.op.mp_sgd_mom_update
- ndarray.op.mp_sgd_update
- ndarray.op.multi_all_finite
- ndarray.op.multi_lars
- ndarray.op.multi_mp_sgd_mom_update
- ndarray.op.multi_mp_sgd_update
- ndarray.op.multi_sgd_mom_update
- ndarray.op.multi_sgd_update
- ndarray.op.multi_sum_sq
- ndarray.op.nag_mom_update
- ndarray.op.nanprod
- ndarray.op.nansum
- ndarray.op.negative
- ndarray.op.norm
- ndarray.op.normal
- ndarray.op.one_hot
- ndarray.op.ones_like
- ndarray.op.pad
- ndarray.op.pick
- ndarray.op.preloaded_multi_mp_sgd_mom_update
- ndarray.op.preloaded_multi_mp_sgd_update
- ndarray.op.preloaded_multi_sgd_mom_update
- ndarray.op.preloaded_multi_sgd_update
- ndarray.op.prod
- ndarray.op.radians
- ndarray.op.random_exponential
- ndarray.op.random_gamma
- ndarray.op.random_generalized_negative_binomial
- ndarray.op.random_negative_binomial
- ndarray.op.random_normal
- ndarray.op.random_pdf_dirichlet
- ndarray.op.random_pdf_exponential
- ndarray.op.random_pdf_gamma
- ndarray.op.random_pdf_generalized_negative_binomial
- ndarray.op.random_pdf_negative_binomial
- ndarray.op.random_pdf_normal
- ndarray.op.random_pdf_poisson
- ndarray.op.random_pdf_uniform
- ndarray.op.random_poisson
- ndarray.op.random_randint
- ndarray.op.random_uniform
- ndarray.op.ravel_multi_index
- ndarray.op.rcbrt
- ndarray.op.reciprocal
- ndarray.op.relu
- ndarray.op.repeat
- ndarray.op.reset_arrays
- ndarray.op.reshape
- ndarray.op.reshape_like
- ndarray.op.reverse
- ndarray.op.rint
- ndarray.op.rmsprop_update
- ndarray.op.rmspropalex_update
- ndarray.op.round
- ndarray.op.rsqrt
- ndarray.op.sample_exponential
- ndarray.op.sample_gamma
- ndarray.op.sample_generalized_negative_binomial
- ndarray.op.sample_multinomial
- ndarray.op.sample_negative_binomial
- ndarray.op.sample_normal
- ndarray.op.sample_poisson
- ndarray.op.sample_uniform
- ndarray.op.scatter_nd
- ndarray.op.sgd_mom_update
- ndarray.op.sgd_update
- ndarray.op.shape_array
- ndarray.op.shuffle
- ndarray.op.sigmoid
- ndarray.op.sign
- ndarray.op.signsgd_update
- ndarray.op.signum_update
- ndarray.op.sin
- ndarray.op.sinh
- ndarray.op.size_array
- ndarray.op.slice
- ndarray.op.slice_axis
- ndarray.op.slice_like
- ndarray.op.smooth_l1
- ndarray.op.softmax
- ndarray.op.softmax_cross_entropy
- ndarray.op.softmin
- ndarray.op.softsign
- ndarray.op.sort
- ndarray.op.space_to_depth
- ndarray.op.split
- ndarray.op.sqrt
- ndarray.op.square
- ndarray.op.squeeze
- ndarray.op.stack
- ndarray.op.stop_gradient
- ndarray.op.sum
- ndarray.op.sum_axis
- ndarray.op.swapaxes
- ndarray.op.take
- ndarray.op.tan
- ndarray.op.tanh
- ndarray.op.tile
- ndarray.op.topk
- ndarray.op.transpose
- ndarray.op.trunc
- ndarray.op.uniform
- ndarray.op.unravel_index
- ndarray.op.where
- ndarray.op.zeros_like
-
- ndarray.random
- ndarray.random.uniform
- ndarray.random.normal
- ndarray.random.randn
- ndarray.random.poisson
- ndarray.random.exponential
- ndarray.random.gamma
- ndarray.random.multinomial
- ndarray.random.negative_binomial
- ndarray.random.generalized_negative_binomial
- ndarray.random.shuffle
- ndarray.random.randint
- ndarray.random.exponential_like
- ndarray.random.gamma_like
- ndarray.random.generalized_negative_binomial_like
- ndarray.random.negative_binomial_like
- ndarray.random.normal_like
- ndarray.random.poisson_like
- ndarray.random.uniform_like
- ndarray.register
-
- ndarray.sparse
- ndarray.sparse.csr_matrix
- ndarray.sparse.row_sparse_array
- ndarray.sparse.add
- ndarray.sparse.subtract
- ndarray.sparse.multiply
- ndarray.sparse.divide
- ndarray.sparse.ElementWiseSum
- ndarray.sparse.Embedding
- ndarray.sparse.FullyConnected
- ndarray.sparse.LinearRegressionOutput
- ndarray.sparse.LogisticRegressionOutput
- ndarray.sparse.MAERegressionOutput
- ndarray.sparse.abs
- ndarray.sparse.adagrad_update
- ndarray.sparse.adam_update
- ndarray.sparse.add_n
- ndarray.sparse.arccos
- ndarray.sparse.arccosh
- ndarray.sparse.arcsin
- ndarray.sparse.arcsinh
- ndarray.sparse.arctan
- ndarray.sparse.arctanh
- ndarray.sparse.broadcast_add
- ndarray.sparse.broadcast_div
- ndarray.sparse.broadcast_minus
- ndarray.sparse.broadcast_mul
- ndarray.sparse.broadcast_plus
- ndarray.sparse.broadcast_sub
- ndarray.sparse.cast_storage
- ndarray.sparse.cbrt
- ndarray.sparse.ceil
- ndarray.sparse.clip
- ndarray.sparse.concat
- ndarray.sparse.cos
- ndarray.sparse.cosh
- ndarray.sparse.degrees
- ndarray.sparse.dot
- ndarray.sparse.elemwise_add
- ndarray.sparse.elemwise_div
- ndarray.sparse.elemwise_mul
- ndarray.sparse.elemwise_sub
- ndarray.sparse.exp
- ndarray.sparse.expm1
- ndarray.sparse.fix
- ndarray.sparse.floor
- ndarray.sparse.ftrl_update
- ndarray.sparse.gamma
- ndarray.sparse.gammaln
- ndarray.sparse.log
- ndarray.sparse.log10
- ndarray.sparse.log1p
- ndarray.sparse.log2
- ndarray.sparse.make_loss
- ndarray.sparse.mean
- ndarray.sparse.negative
- ndarray.sparse.norm
- ndarray.sparse.radians
- ndarray.sparse.relu
- ndarray.sparse.retain
- ndarray.sparse.rint
- ndarray.sparse.round
- ndarray.sparse.rsqrt
- ndarray.sparse.sgd_mom_update
- ndarray.sparse.sgd_update
- ndarray.sparse.sigmoid
- ndarray.sparse.sign
- ndarray.sparse.sin
- ndarray.sparse.sinh
- ndarray.sparse.slice
- ndarray.sparse.sqrt
- ndarray.sparse.square
- ndarray.sparse.stop_gradient
- ndarray.sparse.sum
- ndarray.sparse.tan
- ndarray.sparse.tanh
- ndarray.sparse.trunc
- ndarray.sparse.where
- ndarray.sparse.zeros_like
- ndarray.sparse.BaseSparseNDArray
- ndarray.sparse.CSRNDArray
- ndarray.sparse.RowSparseNDArray
-
- gluon.Block
- gluon.Block.apply
- gluon.Block.cast
- gluon.Block.collect_params
- gluon.Block.forward
- gluon.Block.hybridize
- gluon.Block.initialize
- gluon.Block.load_parameters
- gluon.Block.load_params
- gluon.Block.name_scope
- gluon.Block.register_child
- gluon.Block.register_forward_hook
- gluon.Block.register_forward_pre_hook
- gluon.Block.register_op_hook
- gluon.Block.save_parameters
- gluon.Block.save_params
- gluon.Block.summary
-
- gluon.HybridBlock
- gluon.HybridBlock.apply
- gluon.HybridBlock.cast
- gluon.HybridBlock.collect_params
- gluon.HybridBlock.export
- gluon.HybridBlock.forward
- gluon.HybridBlock.hybrid_forward
- gluon.HybridBlock.hybridize
- gluon.HybridBlock.infer_shape
- gluon.HybridBlock.infer_type
- gluon.HybridBlock.initialize
- gluon.HybridBlock.load_parameters
- gluon.HybridBlock.load_params
- gluon.HybridBlock.name_scope
- gluon.HybridBlock.optimize_for
- gluon.HybridBlock.register_child
- gluon.HybridBlock.register_forward_hook
- gluon.HybridBlock.register_forward_pre_hook
- gluon.HybridBlock.register_op_hook
- gluon.HybridBlock.save_parameters
- gluon.HybridBlock.save_params
- gluon.HybridBlock.summary
-
- gluon.SymbolBlock
- gluon.SymbolBlock.apply
- gluon.SymbolBlock.cast
- gluon.SymbolBlock.collect_params
- gluon.SymbolBlock.export
- gluon.SymbolBlock.forward
- gluon.SymbolBlock.hybrid_forward
- gluon.SymbolBlock.hybridize
- gluon.SymbolBlock.imports
- gluon.SymbolBlock.infer_shape
- gluon.SymbolBlock.infer_type
- gluon.SymbolBlock.initialize
- gluon.SymbolBlock.load_parameters
- gluon.SymbolBlock.load_params
- gluon.SymbolBlock.name_scope
- gluon.SymbolBlock.optimize_for
- gluon.SymbolBlock.register_child
- gluon.SymbolBlock.register_forward_hook
- gluon.SymbolBlock.register_forward_pre_hook
- gluon.SymbolBlock.register_op_hook
- gluon.SymbolBlock.save_parameters
- gluon.SymbolBlock.save_params
- gluon.SymbolBlock.summary
-
- gluon.Constant
- gluon.Constant.cast
- gluon.Constant.data
- gluon.Constant.grad
- gluon.Constant.initialize
- gluon.Constant.list_ctx
- gluon.Constant.list_data
- gluon.Constant.list_grad
- gluon.Constant.list_row_sparse_data
- gluon.Constant.reset_ctx
- gluon.Constant.row_sparse_data
- gluon.Constant.set_data
- gluon.Constant.var
- gluon.Constant.zero_grad
-
- gluon.Parameter
- gluon.Parameter.cast
- gluon.Parameter.data
- gluon.Parameter.grad
- gluon.Parameter.initialize
- gluon.Parameter.list_ctx
- gluon.Parameter.list_data
- gluon.Parameter.list_grad
- gluon.Parameter.list_row_sparse_data
- gluon.Parameter.reset_ctx
- gluon.Parameter.row_sparse_data
- gluon.Parameter.set_data
- gluon.Parameter.var
- gluon.Parameter.zero_grad
-
- gluon.ParameterDict
- gluon.ParameterDict.get
- gluon.ParameterDict.get_constant
- gluon.ParameterDict.initialize
- gluon.ParameterDict.list_ctx
- gluon.ParameterDict.load
- gluon.ParameterDict.load_dict
- gluon.ParameterDict.reset_ctx
- gluon.ParameterDict.save
- gluon.ParameterDict.setattr
- gluon.ParameterDict.update
- gluon.ParameterDict.zero_grad
- gluon.contrib
-
- gluon.data
- gluon.data.vision.datasets
- gluon.data.vision.transforms
- gluon.data.Dataset
- gluon.data.ArrayDataset
- gluon.data.RecordFileDataset
- gluon.data.SimpleDataset
- gluon.data.BatchSampler
- gluon.data.DataLoader
- gluon.data.FilterSampler
- gluon.data.RandomSampler
- gluon.data.Sampler
- gluon.data.SequentialSampler
-
- gluon.loss
- gluon.loss.Loss
- gluon.loss.L2Loss
- gluon.loss.L1Loss
- gluon.loss.SigmoidBinaryCrossEntropyLoss
- gluon.loss.SigmoidBCELoss
- gluon.loss.SoftmaxCrossEntropyLoss
- gluon.loss.SoftmaxCELoss
- gluon.loss.KLDivLoss
- gluon.loss.CTCLoss
- gluon.loss.HuberLoss
- gluon.loss.HingeLoss
- gluon.loss.SquaredHingeLoss
- gluon.loss.LogisticLoss
- gluon.loss.TripletLoss
- gluon.loss.PoissonNLLLoss
- gluon.loss.CosineEmbeddingLoss
- gluon.loss.SDMLLoss
- gluon.nn
- gluon.rnn
- initializer
- initializer.Bilinear
- initializer.Constant
- initializer.FusedRNN
- initializer.InitDesc
- initializer.Initializer
- initializer.LSTMBias
- initializer.Load
- initializer.MSRAPrelu
- initializer.Mixed
- initializer.Normal
- initializer.One
- initializer.Orthogonal
- initializer.Uniform
- initializer.Xavier
- initializer.Zero
- optimizer
- optimizer.AdaDelta
- optimizer.AdaGrad
- optimizer.Adam
- optimizer.Adamax
- optimizer.DCASGD
- optimizer.FTML
- optimizer.Ftrl
- optimizer.LARS
- optimizer.LBSGD
- optimizer.NAG
- optimizer.Nadam
- optimizer.Optimizer
- optimizer.RMSProp
- optimizer.SGD
- optimizer.SGLD
- optimizer.Signum
- optimizer.LAMB
- optimizer.Test
- optimizer.Updater
- optimizer.ccSGD
- metric
- metric.Accuracy
- metric.Caffe
- metric.CompositeEvalMetric
- metric.CrossEntropy
- metric.CustomMetric
- metric.EvalMetric
- metric.F1
- metric.Loss
- metric.MAE
- metric.MCC
- metric.MSE
- metric.NegativeLogLikelihood
- metric.PCC
- metric.PearsonCorrelation
- metric.Perplexity
- metric.RMSE
- metric.TopKAccuracy
- metric.Torch
- symbol
-
- symbol.contrib
- symbol.contrib.rand_zipfian
- symbol.contrib.foreach
- symbol.contrib.while_loop
- symbol.contrib.cond
- symbol.contrib.AdaptiveAvgPooling2D
- symbol.contrib.BilinearResize2D
- symbol.contrib.CTCLoss
- symbol.contrib.DeformableConvolution
- symbol.contrib.DeformablePSROIPooling
- symbol.contrib.ModulatedDeformableConvolution
- symbol.contrib.MultiBoxDetection
- symbol.contrib.MultiBoxPrior
- symbol.contrib.MultiBoxTarget
- symbol.contrib.MultiProposal
- symbol.contrib.PSROIPooling
- symbol.contrib.Proposal
- symbol.contrib.ROIAlign
- symbol.contrib.RROIAlign
- symbol.contrib.SparseEmbedding
- symbol.contrib.SyncBatchNorm
- symbol.contrib.allclose
- symbol.contrib.arange_like
- symbol.contrib.backward_gradientmultiplier
- symbol.contrib.backward_hawkesll
- symbol.contrib.backward_index_copy
- symbol.contrib.backward_quadratic
- symbol.contrib.bipartite_matching
- symbol.contrib.boolean_mask
- symbol.contrib.box_decode
- symbol.contrib.box_encode
- symbol.contrib.box_iou
- symbol.contrib.box_nms
- symbol.contrib.box_non_maximum_suppression
- symbol.contrib.calibrate_entropy
- symbol.contrib.count_sketch
- symbol.contrib.ctc_loss
- symbol.contrib.dequantize
- symbol.contrib.dgl_adjacency
- symbol.contrib.dgl_csr_neighbor_non_uniform_sample
- symbol.contrib.dgl_csr_neighbor_uniform_sample
- symbol.contrib.dgl_graph_compact
- symbol.contrib.dgl_subgraph
- symbol.contrib.div_sqrt_dim
- symbol.contrib.edge_id
- symbol.contrib.fft
- symbol.contrib.getnnz
- symbol.contrib.gradientmultiplier
- symbol.contrib.group_adagrad_update
- symbol.contrib.hawkesll
- symbol.contrib.ifft
- symbol.contrib.index_array
- symbol.contrib.index_copy
- symbol.contrib.interleaved_matmul_encdec_qk
- symbol.contrib.interleaved_matmul_encdec_valatt
- symbol.contrib.interleaved_matmul_selfatt_qk
- symbol.contrib.interleaved_matmul_selfatt_valatt
- symbol.contrib.quadratic
- symbol.contrib.quantize
- symbol.contrib.quantize_v2
- symbol.contrib.quantized_act
- symbol.contrib.quantized_batch_norm
- symbol.contrib.quantized_concat
- symbol.contrib.quantized_conv
- symbol.contrib.quantized_elemwise_add
- symbol.contrib.quantized_elemwise_mul
- symbol.contrib.quantized_embedding
- symbol.contrib.quantized_flatten
- symbol.contrib.quantized_fully_connected
- symbol.contrib.quantized_pooling
- symbol.contrib.requantize
- symbol.contrib.round_ste
- symbol.contrib.sign_ste
-
- symbol.image
- symbol.image.adjust_lighting
- symbol.image.crop
- symbol.image.flip_left_right
- symbol.image.flip_top_bottom
- symbol.image.normalize
- symbol.image.random_brightness
- symbol.image.random_color_jitter
- symbol.image.random_contrast
- symbol.image.random_flip_left_right
- symbol.image.random_flip_top_bottom
- symbol.image.random_hue
- symbol.image.random_lighting
- symbol.image.random_saturation
- symbol.image.resize
- symbol.image.to_tensor
-
- symbol.linalg
- symbol.linalg.det
- symbol.linalg.extractdiag
- symbol.linalg.extracttrian
- symbol.linalg.gelqf
- symbol.linalg.gemm
- symbol.linalg.gemm2
- symbol.linalg.inverse
- symbol.linalg.makediag
- symbol.linalg.maketrian
- symbol.linalg.potrf
- symbol.linalg.potri
- symbol.linalg.slogdet
- symbol.linalg.sumlogdiag
- symbol.linalg.syevd
- symbol.linalg.syrk
- symbol.linalg.trmm
- symbol.linalg.trsm
-
- symbol.op
- symbol.op.Activation
- symbol.op.BatchNorm
- symbol.op.BatchNorm_v1
- symbol.op.BilinearSampler
- symbol.op.BlockGrad
- symbol.op.CTCLoss
- symbol.op.Cast
- symbol.op.Concat
- symbol.op.Convolution
- symbol.op.Convolution_v1
- symbol.op.Correlation
- symbol.op.Crop
- symbol.op.Custom
- symbol.op.Deconvolution
- symbol.op.Dropout
- symbol.op.ElementWiseSum
- symbol.op.Embedding
- symbol.op.Flatten
- symbol.op.FullyConnected
- symbol.op.GridGenerator
- symbol.op.GroupNorm
- symbol.op.IdentityAttachKLSparseReg
- symbol.op.InstanceNorm
- symbol.op.L2Normalization
- symbol.op.LRN
- symbol.op.LayerNorm
- symbol.op.LeakyReLU
- symbol.op.LinearRegressionOutput
- symbol.op.LogisticRegressionOutput
- symbol.op.MAERegressionOutput
- symbol.op.MakeLoss
- symbol.op.Pad
- symbol.op.Pooling
- symbol.op.Pooling_v1
- symbol.op.RNN
- symbol.op.ROIPooling
- symbol.op.Reshape
- symbol.op.SVMOutput
- symbol.op.SequenceLast
- symbol.op.SequenceMask
- symbol.op.SequenceReverse
- symbol.op.SliceChannel
- symbol.op.Softmax
- symbol.op.SoftmaxActivation
- symbol.op.SoftmaxOutput
- symbol.op.SpatialTransformer
- symbol.op.SwapAxis
- symbol.op.UpSampling
- symbol.op.abs
- symbol.op.adam_update
- symbol.op.add_n
- symbol.op.all_finite
- symbol.op.amp_cast
- symbol.op.amp_multicast
- symbol.op.arccos
- symbol.op.arccosh
- symbol.op.arcsin
- symbol.op.arcsinh
- symbol.op.arctan
- symbol.op.arctanh
- symbol.op.argmax
- symbol.op.argmax_channel
- symbol.op.argmin
- symbol.op.argsort
- symbol.op.batch_dot
- symbol.op.batch_take
- symbol.op.broadcast_add
- symbol.op.broadcast_axes
- symbol.op.broadcast_axis
- symbol.op.broadcast_div
- symbol.op.broadcast_equal
- symbol.op.broadcast_greater
- symbol.op.broadcast_greater_equal
- symbol.op.broadcast_hypot
- symbol.op.broadcast_lesser
- symbol.op.broadcast_lesser_equal
- symbol.op.broadcast_like
- symbol.op.broadcast_logical_and
- symbol.op.broadcast_logical_or
- symbol.op.broadcast_logical_xor
- symbol.op.broadcast_maximum
- symbol.op.broadcast_minimum
- symbol.op.broadcast_minus
- symbol.op.broadcast_mod
- symbol.op.broadcast_mul
- symbol.op.broadcast_not_equal
- symbol.op.broadcast_plus
- symbol.op.broadcast_power
- symbol.op.broadcast_sub
- symbol.op.broadcast_to
- symbol.op.cast_storage
- symbol.op.cbrt
- symbol.op.ceil
- symbol.op.choose_element_0index
- symbol.op.clip
- symbol.op.col2im
- symbol.op.cos
- symbol.op.cosh
- symbol.op.ctc_loss
- symbol.op.cumsum
- symbol.op.degrees
- symbol.op.depth_to_space
- symbol.op.diag
- symbol.op.dot
- symbol.op.elemwise_add
- symbol.op.elemwise_div
- symbol.op.elemwise_mul
- symbol.op.elemwise_sub
- symbol.op.erf
- symbol.op.erfinv
- symbol.op.exp
- symbol.op.expand_dims
- symbol.op.expm1
- symbol.op.fill_element_0index
- symbol.op.fix
- symbol.op.flip
- symbol.op.floor
- symbol.op.ftml_update
- symbol.op.ftrl_update
- symbol.op.gamma
- symbol.op.gammaln
- symbol.op.gather_nd
- symbol.op.hard_sigmoid
- symbol.op.identity
- symbol.op.im2col
- symbol.op.khatri_rao
- symbol.op.lamb_update_phase1
- symbol.op.lamb_update_phase2
- symbol.op.linalg_det
- symbol.op.linalg_extractdiag
- symbol.op.linalg_extracttrian
- symbol.op.linalg_gelqf
- symbol.op.linalg_gemm
- symbol.op.linalg_gemm2
- symbol.op.linalg_inverse
- symbol.op.linalg_makediag
- symbol.op.linalg_maketrian
- symbol.op.linalg_potrf
- symbol.op.linalg_potri
- symbol.op.linalg_slogdet
- symbol.op.linalg_sumlogdiag
- symbol.op.linalg_syrk
- symbol.op.linalg_trmm
- symbol.op.linalg_trsm
- symbol.op.log
- symbol.op.log10
- symbol.op.log1p
- symbol.op.log2
- symbol.op.log_softmax
- symbol.op.logical_not
- symbol.op.make_loss
- symbol.op.max
- symbol.op.max_axis
- symbol.op.mean
- symbol.op.min
- symbol.op.min_axis
- symbol.op.moments
- symbol.op.mp_lamb_update_phase1
- symbol.op.mp_lamb_update_phase2
- symbol.op.mp_nag_mom_update
- symbol.op.mp_sgd_mom_update
- symbol.op.mp_sgd_update
- symbol.op.multi_all_finite
- symbol.op.multi_lars
- symbol.op.multi_mp_sgd_mom_update
- symbol.op.multi_mp_sgd_update
- symbol.op.multi_sgd_mom_update
- symbol.op.multi_sgd_update
- symbol.op.multi_sum_sq
- symbol.op.nag_mom_update
- symbol.op.nanprod
- symbol.op.nansum
- symbol.op.negative
- symbol.op.norm
- symbol.op.normal
- symbol.op.one_hot
- symbol.op.ones_like
- symbol.op.pick
- symbol.op.preloaded_multi_mp_sgd_mom_update
- symbol.op.preloaded_multi_mp_sgd_update
- symbol.op.preloaded_multi_sgd_mom_update
- symbol.op.preloaded_multi_sgd_update
- symbol.op.prod
- symbol.op.radians
- symbol.op.random_exponential
- symbol.op.random_gamma
- symbol.op.random_generalized_negative_binomial
- symbol.op.random_negative_binomial
- symbol.op.random_normal
- symbol.op.random_pdf_dirichlet
- symbol.op.random_pdf_exponential
- symbol.op.random_pdf_gamma
- symbol.op.random_pdf_generalized_negative_binomial
- symbol.op.random_pdf_negative_binomial
- symbol.op.random_pdf_normal
- symbol.op.random_pdf_poisson
- symbol.op.random_pdf_uniform
- symbol.op.random_poisson
- symbol.op.random_randint
- symbol.op.random_uniform
- symbol.op.ravel_multi_index
- symbol.op.rcbrt
- symbol.op.reciprocal
- symbol.op.relu
- symbol.op.repeat
- symbol.op.reset_arrays
- symbol.op.reshape_like
- symbol.op.reverse
- symbol.op.rint
- symbol.op.rmsprop_update
- symbol.op.rmspropalex_update
- symbol.op.round
- symbol.op.rsqrt
- symbol.op.sample_exponential
- symbol.op.sample_gamma
- symbol.op.sample_generalized_negative_binomial
- symbol.op.sample_multinomial
- symbol.op.sample_negative_binomial
- symbol.op.sample_normal
- symbol.op.sample_poisson
- symbol.op.sample_uniform
- symbol.op.scatter_nd
- symbol.op.sgd_mom_update
- symbol.op.sgd_update
- symbol.op.shape_array
- symbol.op.shuffle
- symbol.op.sigmoid
- symbol.op.sign
- symbol.op.signsgd_update
- symbol.op.signum_update
- symbol.op.sin
- symbol.op.sinh
- symbol.op.size_array
- symbol.op.slice
- symbol.op.slice_axis
- symbol.op.slice_like
- symbol.op.smooth_l1
- symbol.op.softmax_cross_entropy
- symbol.op.softmin
- symbol.op.softsign
- symbol.op.sort
- symbol.op.space_to_depth
- symbol.op.split
- symbol.op.sqrt
- symbol.op.square
- symbol.op.squeeze
- symbol.op.stack
- symbol.op.stop_gradient
- symbol.op.sum
- symbol.op.sum_axis
- symbol.op.swapaxes
- symbol.op.take
- symbol.op.tan
- symbol.op.tanh
- symbol.op.tile
- symbol.op.topk
- symbol.op.transpose
- symbol.op.trunc
- symbol.op.uniform
- symbol.op.unravel_index
- symbol.op.where
- symbol.op.zeros_like
-
- symbol.random
- symbol.random.uniform
- symbol.random.normal
- symbol.random.randn
- symbol.random.poisson
- symbol.random.exponential
- symbol.random.gamma
- symbol.random.multinomial
- symbol.random.negative_binomial
- symbol.random.generalized_negative_binomial
- symbol.random.shuffle
- symbol.random.randint
- symbol.random.exponential_like
- symbol.random.gamma_like
- symbol.random.generalized_negative_binomial_like
- symbol.random.negative_binomial_like
- symbol.random.normal_like
- symbol.random.poisson_like
- symbol.random.uniform_like
- symbol.register
-
- symbol.sparse
- symbol.sparse.ElementWiseSum
- symbol.sparse.Embedding
- symbol.sparse.FullyConnected
- symbol.sparse.LinearRegressionOutput
- symbol.sparse.LogisticRegressionOutput
- symbol.sparse.MAERegressionOutput
- symbol.sparse.abs
- symbol.sparse.adagrad_update
- symbol.sparse.adam_update
- symbol.sparse.add_n
- symbol.sparse.arccos
- symbol.sparse.arccosh
- symbol.sparse.arcsin
- symbol.sparse.arcsinh
- symbol.sparse.arctan
- symbol.sparse.arctanh
- symbol.sparse.broadcast_add
- symbol.sparse.broadcast_div
- symbol.sparse.broadcast_minus
- symbol.sparse.broadcast_mul
- symbol.sparse.broadcast_plus
- symbol.sparse.broadcast_sub
- symbol.sparse.cast_storage
- symbol.sparse.cbrt
- symbol.sparse.ceil
- symbol.sparse.clip
- symbol.sparse.concat
- symbol.sparse.cos
- symbol.sparse.cosh
- symbol.sparse.degrees
- symbol.sparse.dot
- symbol.sparse.elemwise_add
- symbol.sparse.elemwise_div
- symbol.sparse.elemwise_mul
- symbol.sparse.elemwise_sub
- symbol.sparse.exp
- symbol.sparse.expm1
- symbol.sparse.fix
- symbol.sparse.floor
- symbol.sparse.ftrl_update
- symbol.sparse.gamma
- symbol.sparse.gammaln
- symbol.sparse.log
- symbol.sparse.log10
- symbol.sparse.log1p
- symbol.sparse.log2
- symbol.sparse.make_loss
- symbol.sparse.mean
- symbol.sparse.negative
- symbol.sparse.norm
- symbol.sparse.radians
- symbol.sparse.relu
- symbol.sparse.retain
- symbol.sparse.rint
- symbol.sparse.round
- symbol.sparse.rsqrt
- symbol.sparse.sgd_mom_update
- symbol.sparse.sgd_update
- symbol.sparse.sigmoid
- symbol.sparse.sign
- symbol.sparse.sin
- symbol.sparse.sinh
- symbol.sparse.slice
- symbol.sparse.sqrt
- symbol.sparse.square
- symbol.sparse.stop_gradient
- symbol.sparse.sum
- symbol.sparse.tan
- symbol.sparse.tanh
- symbol.sparse.trunc
- symbol.sparse.where
- symbol.sparse.zeros_like
- symbol.Activation
- symbol.BatchNorm
- symbol.BatchNorm_v1
- symbol.BilinearSampler
- symbol.BlockGrad
- symbol.CTCLoss
- symbol.Cast
- symbol.Concat
- symbol.Convolution
- symbol.Convolution_v1
- symbol.Correlation
- symbol.Crop
- symbol.Custom
- symbol.Deconvolution
- symbol.Dropout
- symbol.ElementWiseSum
- symbol.Embedding
- symbol.Flatten
- symbol.FullyConnected
- symbol.GridGenerator
- symbol.GroupNorm
- symbol.IdentityAttachKLSparseReg
- symbol.InstanceNorm
- symbol.L2Normalization
- symbol.LRN
- symbol.LayerNorm
- symbol.LeakyReLU
- symbol.LinearRegressionOutput
- symbol.LogisticRegressionOutput
- symbol.MAERegressionOutput
- symbol.MakeLoss
- symbol.Pad
- symbol.Pooling
- symbol.Pooling_v1
- symbol.RNN
- symbol.ROIPooling
- symbol.Reshape
- symbol.SVMOutput
- symbol.SequenceLast
- symbol.SequenceMask
- symbol.SequenceReverse
- symbol.SliceChannel
- symbol.Softmax
- symbol.SoftmaxActivation
- symbol.SoftmaxOutput
- symbol.SpatialTransformer
- symbol.SwapAxis
- symbol.UpSampling
- symbol.abs
- symbol.adam_update
- symbol.add_n
- symbol.all_finite
- symbol.amp_cast
- symbol.amp_multicast
- symbol.arccos
- symbol.arccosh
- symbol.arcsin
- symbol.arcsinh
- symbol.arctan
- symbol.arctanh
- symbol.argmax
- symbol.argmax_channel
- symbol.argmin
- symbol.argsort
- symbol.batch_dot
- symbol.batch_take
- symbol.broadcast_add
- symbol.broadcast_axes
- symbol.broadcast_axis
- symbol.broadcast_div
- symbol.broadcast_equal
- symbol.broadcast_greater
- symbol.broadcast_greater_equal
- symbol.broadcast_hypot
- symbol.broadcast_lesser
- symbol.broadcast_lesser_equal
- symbol.broadcast_like
- symbol.broadcast_logical_and
- symbol.broadcast_logical_or
- symbol.broadcast_logical_xor
- symbol.broadcast_maximum
- symbol.broadcast_minimum
- symbol.broadcast_minus
- symbol.broadcast_mod
- symbol.broadcast_mul
- symbol.broadcast_not_equal
- symbol.broadcast_plus
- symbol.broadcast_power
- symbol.broadcast_sub
- symbol.broadcast_to
- symbol.cast_storage
- symbol.cbrt
- symbol.ceil
- symbol.choose_element_0index
- symbol.clip
- symbol.col2im
- symbol.cos
- symbol.cosh
- symbol.ctc_loss
- symbol.cumsum
- symbol.degrees
- symbol.depth_to_space
- symbol.diag
- symbol.dot
- symbol.elemwise_add
- symbol.elemwise_div
- symbol.elemwise_mul
- symbol.elemwise_sub
- symbol.erf
- symbol.erfinv
- symbol.exp
- symbol.expand_dims
- symbol.expm1
- symbol.fill_element_0index
- symbol.fix
- symbol.flip
- symbol.floor
- symbol.ftml_update
- symbol.ftrl_update
- symbol.gamma
- symbol.gammaln
- symbol.gather_nd
- symbol.hard_sigmoid
- symbol.identity
- symbol.im2col
- symbol.khatri_rao
- symbol.lamb_update_phase1
- symbol.lamb_update_phase2
- symbol.linalg_det
- symbol.linalg_extractdiag
- symbol.linalg_extracttrian
- symbol.linalg_gelqf
- symbol.linalg_gemm
- symbol.linalg_gemm2
- symbol.linalg_inverse
- symbol.linalg_makediag
- symbol.linalg_maketrian
- symbol.linalg_potrf
- symbol.linalg_potri
- symbol.linalg_slogdet
- symbol.linalg_sumlogdiag
- symbol.linalg_syrk
- symbol.linalg_trmm
- symbol.linalg_trsm
- symbol.log
- symbol.log10
- symbol.log1p
- symbol.log2
- symbol.log_softmax
- symbol.logical_not
- symbol.make_loss
- symbol.max
- symbol.max_axis
- symbol.mean
- symbol.min
- symbol.min_axis
- symbol.moments
- symbol.mp_lamb_update_phase1
- symbol.mp_lamb_update_phase2
- symbol.mp_nag_mom_update
- symbol.mp_sgd_mom_update
- symbol.mp_sgd_update
- symbol.multi_all_finite
- symbol.multi_lars
- symbol.multi_mp_sgd_mom_update
- symbol.multi_mp_sgd_update
- symbol.multi_sgd_mom_update
- symbol.multi_sgd_update
- symbol.multi_sum_sq
- symbol.nag_mom_update
- symbol.nanprod
- symbol.nansum
- symbol.negative
- symbol.norm
- symbol.normal
- symbol.one_hot
- symbol.ones_like
- symbol.pick
- symbol.preloaded_multi_mp_sgd_mom_update
- symbol.preloaded_multi_mp_sgd_update
- symbol.preloaded_multi_sgd_mom_update
- symbol.preloaded_multi_sgd_update
- symbol.prod
- symbol.radians
- symbol.random_exponential
- symbol.random_gamma
- symbol.random_generalized_negative_binomial
- symbol.random_negative_binomial
- symbol.random_normal
- symbol.random_pdf_dirichlet
- symbol.random_pdf_exponential
- symbol.random_pdf_gamma
- symbol.random_pdf_generalized_negative_binomial
- symbol.random_pdf_negative_binomial
- symbol.random_pdf_normal
- symbol.random_pdf_poisson
- symbol.random_pdf_uniform
- symbol.random_poisson
- symbol.random_randint
- symbol.random_uniform
- symbol.ravel_multi_index
- symbol.rcbrt
- symbol.reciprocal
- symbol.relu
- symbol.repeat
- symbol.reset_arrays
- symbol.reshape_like
- symbol.reverse
- symbol.rint
- symbol.rmsprop_update
- symbol.rmspropalex_update
- symbol.round
- symbol.rsqrt
- symbol.sample_exponential
- symbol.sample_gamma
- symbol.sample_generalized_negative_binomial
- symbol.sample_multinomial
- symbol.sample_negative_binomial
- symbol.sample_normal
- symbol.sample_poisson
- symbol.sample_uniform
- symbol.scatter_nd
- symbol.sgd_mom_update
- symbol.sgd_update
- symbol.shape_array
- symbol.shuffle
- symbol.sigmoid
- symbol.sign
- symbol.signsgd_update
- symbol.signum_update
- symbol.sin
- symbol.sinh
- symbol.size_array
- symbol.slice
- symbol.slice_axis
- symbol.slice_like
- symbol.smooth_l1
- symbol.softmax_cross_entropy
- symbol.softmin
- symbol.softsign
- symbol.sort
- symbol.space_to_depth
- symbol.split
- symbol.sqrt
- symbol.square
- symbol.squeeze
- symbol.stack
- symbol.stop_gradient
- symbol.sum
- symbol.sum_axis
- symbol.swapaxes
- symbol.take
- symbol.tan
- symbol.tanh
- symbol.tile
- symbol.topk
- symbol.transpose
- symbol.trunc
- symbol.uniform
- symbol.unravel_index
- symbol.where
- symbol.zeros_like
- symbol.var
- symbol.Variable
- symbol.Group
- symbol.load
- symbol.load_json
- symbol.pow
- symbol.power
- symbol.maximum
- symbol.minimum
- symbol.hypot
- symbol.eye
- symbol.zeros
- symbol.ones
- symbol.full
- symbol.arange
- symbol.linspace
- symbol.histogram
- symbol.split_v2
-
- contrib.ndarray
- contrib.ndarray.AdaptiveAvgPooling2D
- contrib.ndarray.BilinearResize2D
- contrib.ndarray.CTCLoss
- contrib.ndarray.DeformableConvolution
- contrib.ndarray.DeformablePSROIPooling
- contrib.ndarray.ModulatedDeformableConvolution
- contrib.ndarray.MultiBoxDetection
- contrib.ndarray.MultiBoxPrior
- contrib.ndarray.MultiBoxTarget
- contrib.ndarray.MultiProposal
- contrib.ndarray.PSROIPooling
- contrib.ndarray.Proposal
- contrib.ndarray.ROIAlign
- contrib.ndarray.RROIAlign
- contrib.ndarray.SparseEmbedding
- contrib.ndarray.SyncBatchNorm
- contrib.ndarray.allclose
- contrib.ndarray.arange_like
- contrib.ndarray.backward_gradientmultiplier
- contrib.ndarray.backward_hawkesll
- contrib.ndarray.backward_index_copy
- contrib.ndarray.backward_quadratic
- contrib.ndarray.bipartite_matching
- contrib.ndarray.boolean_mask
- contrib.ndarray.box_decode
- contrib.ndarray.box_encode
- contrib.ndarray.box_iou
- contrib.ndarray.box_nms
- contrib.ndarray.box_non_maximum_suppression
- contrib.ndarray.calibrate_entropy
- contrib.ndarray.count_sketch
- contrib.ndarray.ctc_loss
- contrib.ndarray.dequantize
- contrib.ndarray.dgl_adjacency
- contrib.ndarray.dgl_csr_neighbor_non_uniform_sample
- contrib.ndarray.dgl_csr_neighbor_uniform_sample
- contrib.ndarray.dgl_graph_compact
- contrib.ndarray.dgl_subgraph
- contrib.ndarray.div_sqrt_dim
- contrib.ndarray.edge_id
- contrib.ndarray.fft
- contrib.ndarray.getnnz
- contrib.ndarray.gradientmultiplier
- contrib.ndarray.group_adagrad_update
- contrib.ndarray.hawkesll
- contrib.ndarray.ifft
- contrib.ndarray.index_array
- contrib.ndarray.index_copy
- contrib.ndarray.interleaved_matmul_encdec_qk
- contrib.ndarray.interleaved_matmul_encdec_valatt
- contrib.ndarray.interleaved_matmul_selfatt_qk
- contrib.ndarray.interleaved_matmul_selfatt_valatt
- contrib.ndarray.quadratic
- contrib.ndarray.quantize
- contrib.ndarray.quantize_v2
- contrib.ndarray.quantized_act
- contrib.ndarray.quantized_batch_norm
- contrib.ndarray.quantized_concat
- contrib.ndarray.quantized_conv
- contrib.ndarray.quantized_elemwise_add
- contrib.ndarray.quantized_elemwise_mul
- contrib.ndarray.quantized_embedding
- contrib.ndarray.quantized_flatten
- contrib.ndarray.quantized_fully_connected
- contrib.ndarray.quantized_pooling
- contrib.ndarray.requantize
- contrib.ndarray.round_ste
- contrib.ndarray.sign_ste
-
- contrib.symbol
- contrib.symbol.AdaptiveAvgPooling2D
- contrib.symbol.BilinearResize2D
- contrib.symbol.CTCLoss
- contrib.symbol.DeformableConvolution
- contrib.symbol.DeformablePSROIPooling
- contrib.symbol.ModulatedDeformableConvolution
- contrib.symbol.MultiBoxDetection
- contrib.symbol.MultiBoxPrior
- contrib.symbol.MultiBoxTarget
- contrib.symbol.MultiProposal
- contrib.symbol.PSROIPooling
- contrib.symbol.Proposal
- contrib.symbol.ROIAlign
- contrib.symbol.RROIAlign
- contrib.symbol.SparseEmbedding
- contrib.symbol.SyncBatchNorm
- contrib.symbol.allclose
- contrib.symbol.arange_like
- contrib.symbol.backward_gradientmultiplier
- contrib.symbol.backward_hawkesll
- contrib.symbol.backward_index_copy
- contrib.symbol.backward_quadratic
- contrib.symbol.bipartite_matching
- contrib.symbol.boolean_mask
- contrib.symbol.box_decode
- contrib.symbol.box_encode
- contrib.symbol.box_iou
- contrib.symbol.box_nms
- contrib.symbol.box_non_maximum_suppression
- contrib.symbol.calibrate_entropy
- contrib.symbol.count_sketch
- contrib.symbol.ctc_loss
- contrib.symbol.dequantize
- contrib.symbol.dgl_adjacency
- contrib.symbol.dgl_csr_neighbor_non_uniform_sample
- contrib.symbol.dgl_csr_neighbor_uniform_sample
- contrib.symbol.dgl_graph_compact
- contrib.symbol.dgl_subgraph
- contrib.symbol.div_sqrt_dim
- contrib.symbol.edge_id
- contrib.symbol.fft
- contrib.symbol.getnnz
- contrib.symbol.gradientmultiplier
- contrib.symbol.group_adagrad_update
- contrib.symbol.hawkesll
- contrib.symbol.ifft
- contrib.symbol.index_array
- contrib.symbol.index_copy
- contrib.symbol.interleaved_matmul_encdec_qk
- contrib.symbol.interleaved_matmul_encdec_valatt
- contrib.symbol.interleaved_matmul_selfatt_qk
- contrib.symbol.interleaved_matmul_selfatt_valatt
- contrib.symbol.quadratic
- contrib.symbol.quantize
- contrib.symbol.quantize_v2
- contrib.symbol.quantized_act
- contrib.symbol.quantized_batch_norm
- contrib.symbol.quantized_concat
- contrib.symbol.quantized_conv
- contrib.symbol.quantized_elemwise_add
- contrib.symbol.quantized_elemwise_mul
- contrib.symbol.quantized_embedding
- contrib.symbol.quantized_flatten
- contrib.symbol.quantized_fully_connected
- contrib.symbol.quantized_pooling
- contrib.symbol.requantize
- contrib.symbol.round_ste
- contrib.symbol.sign_ste
- contrib.text
- mxnet.attribute
- mxnet.base
- mxnet.callback
- mxnet.context
- mxnet.engine
- mxnet.executor
- mxnet.executor_manager
- mxnet.image
- mxnet.io
- mxnet.kvstore_server
- mxnet.libinfo
- mxnet.log
- mxnet.model
- mxnet.monitor
- mxnet.name
- mxnet.notebook
- mxnet.operator
- mxnet.profiler
- mxnet.random
- mxnet.recordio
- mxnet.registry
- mxnet.rtc
- mxnet.runtime
- mxnet.test_utils
- mxnet.torch
- mxnet.util
- mxnet.visualization
gluon.nn¶
Gluon provides a large number of built-in neural network layers in the following two modules:
- gluon.nn: Neural network layers.
- gluon.contrib.nn: Contributed neural network modules.
The layers in these two modules are grouped below by category.
Sequential containers¶
- Sequential: Stacks Blocks sequentially.
- HybridSequential: Stacks HybridBlocks sequentially.
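As a minimal sketch (the layer widths and input shape below are arbitrary choices for illustration), a small network can be assembled by adding layers to a HybridSequential container:

import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
with net.name_scope():
    net.add(nn.Dense(64, activation='relu'))
    net.add(nn.Dense(10))
net.initialize()   # allocate parameters; shapes are inferred on the first call
net.hybridize()    # compile to a symbolic graph for faster execution
out = net(mx.nd.random.uniform(shape=(4, 20)))  # batch of 4 samples, 20 features
print(out.shape)   # (4, 10)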
Basic Layers¶
- Dense: Just your regular densely-connected NN layer.
- Activation: Applies an activation function to input.
- Dropout: Applies Dropout to the input.
- Flatten: Flattens the input to two dimensional.
- Lambda: Wraps an operator or an expression as a Block object.
- HybridLambda: Wraps an operator or an expression as a HybridBlock object.
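For example, Dense defers weight allocation until the first forward pass, so only the number of output units needs to be specified (the shapes below are arbitrary):

import mxnet as mx
from mxnet.gluon import nn

layer = nn.Dense(5, activation='relu')  # 5 output units; in_units is inferred
layer.initialize()
x = mx.nd.random.uniform(shape=(3, 8))
y = layer(x)
print(y.shape)  # (3, 5); the (5, 8) weight was inferred from x on the first call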
Convolutional Layers¶
- Conv1D: 1D convolution layer (e.g. temporal convolution).
- Conv2D: 2D convolution layer (e.g. spatial convolution over images).
- Conv3D: 3D convolution layer (e.g. spatial convolution over volumes).
- Conv1DTranspose: Transposed 1D convolution layer (sometimes called Deconvolution).
- Conv2DTranspose: Transposed 2D convolution layer (sometimes called Deconvolution).
- Conv3DTranspose: Transposed 3D convolution layer (sometimes called Deconvolution).
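A short sketch (channel counts and sizes arbitrary): a Conv2D with a 3x3 kernel and padding=1 preserves the spatial dimensions of its input:

import mxnet as mx
from mxnet.gluon import nn

conv = nn.Conv2D(channels=16, kernel_size=3, padding=1)  # layout 'NCHW' by default
conv.initialize()
x = mx.nd.random.uniform(shape=(1, 3, 32, 32))  # (batch, channels, height, width)
print(conv(x).shape)  # (1, 16, 32, 32): padding=1 keeps height and width unchanged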
Pooling Layers¶
- MaxPool1D: Max pooling operation for one dimensional data.
- MaxPool2D: Max pooling operation for two dimensional (spatial) data.
- MaxPool3D: Max pooling operation for 3D data (spatial or spatio-temporal).
- AvgPool1D: Average pooling operation for temporal data.
- AvgPool2D: Average pooling operation for spatial data.
- AvgPool3D: Average pooling operation for 3D data (spatial or spatio-temporal).
- GlobalMaxPool1D: Global max pooling operation for one dimensional (temporal) data.
- GlobalMaxPool2D: Global max pooling operation for two dimensional (spatial) data.
- GlobalMaxPool3D: Global max pooling operation for 3D data (spatial or spatio-temporal).
- GlobalAvgPool1D: Global average pooling operation for temporal data.
- GlobalAvgPool2D: Global average pooling operation for spatial data.
- GlobalAvgPool3D: Global average pooling operation for 3D data (spatial or spatio-temporal).
- ReflectionPad2D: Pads the input tensor using the reflection of the input boundary.
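A brief sketch contrasting windowed and global pooling (shapes arbitrary); pooling layers have no parameters, so no initialization is required:

import mxnet as mx
from mxnet.gluon import nn

x = mx.nd.random.uniform(shape=(1, 8, 32, 32))
pool = nn.MaxPool2D(pool_size=2, strides=2)
print(pool(x).shape)  # (1, 8, 16, 16): each 2x2 window reduced to its maximum
gap = nn.GlobalAvgPool2D()
print(gap(x).shape)   # (1, 8, 1, 1): the whole spatial plane averaged per channel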
Normalization Layers¶
- BatchNorm: Batch normalization layer (Ioffe and Szegedy, 2015).
- InstanceNorm: Applies instance normalization to the n-dimensional input array.
- LayerNorm: Applies layer normalization to the n-dimensional input array.
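For instance, LayerNorm normalizes over the last axis by default, so each position ends up with roughly zero mean and unit variance (shapes arbitrary):

import mxnet as mx
from mxnet.gluon import nn

ln = nn.LayerNorm()   # axis=-1 by default
ln.initialize()
x = mx.nd.random.uniform(shape=(2, 4, 10))
y = ln(x)
print(y.mean(axis=-1))  # values close to 0 for every (batch, position) pair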
Embedding Layers¶
- Embedding: Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.
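A minimal sketch (vocabulary size and vector dimension are arbitrary):

import mxnet as mx
from mxnet.gluon import nn

embed = nn.Embedding(input_dim=1000, output_dim=16)  # 1000-token vocabulary, 16-d vectors
embed.initialize()
tokens = mx.nd.array([[4, 25, 7]])   # one sequence of three token ids
print(embed(tokens).shape)           # (1, 3, 16)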
Advanced Activation Layers¶
- LeakyReLU: Leaky version of a Rectified Linear Unit.
- PReLU: Parametric leaky version of a Rectified Linear Unit.
- ELU: Exponential Linear Unit (ELU).
- SELU: Scaled Exponential Linear Unit (SELU).
- Swish: Swish activation function.
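For example, LeakyReLU passes negative inputs through with a small fixed slope instead of clamping them to zero:

import mxnet as mx
from mxnet.gluon import nn

act = nn.LeakyReLU(alpha=0.1)        # slope 0.1 on the negative side
x = mx.nd.array([-2.0, 0.0, 3.0])
print(act(x))                        # [-0.2, 0.0, 3.0]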
API Reference¶
Neural network layers.
Classes

- Activation: Applies an activation function to input.
- AvgPool1D: Average pooling operation for temporal data.
- AvgPool2D: Average pooling operation for spatial data.
- AvgPool3D: Average pooling operation for 3D data (spatial or spatio-temporal).
- BatchNorm: Batch normalization layer (Ioffe and Szegedy, 2015).
- Block: Base class for all neural network layers and models.
- Conv1D: 1D convolution layer (e.g. temporal convolution).
- Conv1DTranspose: Transposed 1D convolution layer (sometimes called Deconvolution).
- Conv2D: 2D convolution layer (e.g. spatial convolution over images).
- Conv2DTranspose: Transposed 2D convolution layer (sometimes called Deconvolution).
- Conv3D: 3D convolution layer (e.g. spatial convolution over volumes).
- Conv3DTranspose: Transposed 3D convolution layer (sometimes called Deconvolution).
- Dense: Just your regular densely-connected NN layer.
- Dropout: Applies Dropout to the input.
- ELU: Exponential Linear Unit (ELU).
- Embedding: Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.
- Flatten: Flattens the input to two dimensional.
- GELU: Gaussian Exponential Linear Unit (GELU).
- GlobalAvgPool1D: Global average pooling operation for temporal data.
- GlobalAvgPool2D: Global average pooling operation for spatial data.
- GlobalAvgPool3D: Global average pooling operation for 3D data (spatial or spatio-temporal).
- GlobalMaxPool1D: Global max pooling operation for one dimensional (temporal) data.
- GlobalMaxPool2D: Global max pooling operation for two dimensional (spatial) data.
- GlobalMaxPool3D: Global max pooling operation for 3D data (spatial or spatio-temporal).
- GroupNorm: Applies group normalization to the n-dimensional input array.
- HybridBlock: HybridBlock supports forwarding with both Symbol and NDArray.
- HybridLambda: Wraps an operator or an expression as a HybridBlock object.
- HybridSequential: Stacks HybridBlocks sequentially.
- InstanceNorm: Applies instance normalization to the n-dimensional input array.
- Lambda: Wraps an operator or an expression as a Block object.
- LayerNorm: Applies layer normalization to the n-dimensional input array.
- LeakyReLU: Leaky version of a Rectified Linear Unit.
- MaxPool1D: Max pooling operation for one dimensional data.
- MaxPool2D: Max pooling operation for two dimensional (spatial) data.
- MaxPool3D: Max pooling operation for 3D data (spatial or spatio-temporal).
- PReLU: Parametric leaky version of a Rectified Linear Unit.
- ReflectionPad2D: Pads the input tensor using the reflection of the input boundary.
- SELU: Scaled Exponential Linear Unit (SELU).
- Sequential: Stacks Blocks sequentially.
- Swish: Swish activation function.
- SymbolBlock: Construct block from symbol.
class mxnet.gluon.nn.Activation(activation, **kwargs)[source]¶
Bases: mxnet.gluon.block.HybridBlock
Applies an activation function to input.
- Parameters
activation (str) – Name of activation function to use. See Activation() for available choices.
Methods

hybrid_forward(F, x): Overrides to construct symbolic graph for this Block.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
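A minimal usage sketch (input values arbitrary):

import mxnet as mx
from mxnet.gluon import nn

act = nn.Activation('tanh')   # other choices include 'relu', 'sigmoid', 'softrelu'
x = mx.nd.array([-1.0, 0.0, 1.0])
print(act(x))                 # tanh applied elementwise; output shape equals input shape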
class mxnet.gluon.nn.AvgPool1D(pool_size=2, strides=None, padding=0, layout='NCW', ceil_mode=False, count_include_pad=True, **kwargs)[source]¶
Bases: mxnet.gluon.nn.conv_layers._Pooling
Average pooling operation for temporal data.
- Parameters
pool_size (int) – Size of the average pooling windows.
strides (int or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
padding (int) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
layout (str, default 'NCW') – Dimension ordering of data and out ('NCW' or 'NWC'). 'N', 'C', 'W' stand for batch, channel, and width (time) dimensions respectively. padding is applied on the 'W' dimension.
ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
count_include_pad (bool, default True) – When False, padding elements are excluded when computing the average value.
- Inputs:
data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
- Outputs:
out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = floor((width+2*padding-pool_size)/strides)+1
When ceil_mode is True, ceil will be used instead of floor in this equation.
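As a worked instance of this formula (values chosen arbitrarily): with width=10, padding=0, pool_size=2, and strides=2, out_width = floor((10+0-2)/2)+1 = 5.

import mxnet as mx
from mxnet.gluon import nn

pool = nn.AvgPool1D(pool_size=2, strides=2)   # layout 'NCW'
x = mx.nd.random.uniform(shape=(1, 4, 10))    # width = 10
print(pool(x).shape)                          # (1, 4, 5), matching the formula above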
class mxnet.gluon.nn.AvgPool2D(pool_size=(2, 2), strides=None, padding=0, ceil_mode=False, layout='NCHW', count_include_pad=True, **kwargs)[source]¶
Bases: mxnet.gluon.nn.conv_layers._Pooling
Average pooling operation for spatial data.
- Parameters
pool_size (int or list/tuple of 2 ints) – Size of the average pooling windows.
strides (int, list/tuple of 2 ints, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
padding (int or list/tuple of 2 ints) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
layout (str, default 'NCHW') – Dimension ordering of data and out ('NCHW' or 'NHWC'). 'N', 'C', 'H', 'W' stand for batch, channel, height, and width dimensions respectively. padding is applied on the 'H' and 'W' dimensions.
ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
count_include_pad (bool, default True) – When False, padding elements are excluded when computing the average value.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1
out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1
When ceil_mode is True, ceil will be used instead of floor in these equations.
class mxnet.gluon.nn.AvgPool3D(pool_size=(2, 2, 2), strides=None, padding=0, ceil_mode=False, layout='NCDHW', count_include_pad=True, **kwargs)[source]¶
Bases: mxnet.gluon.nn.conv_layers._Pooling
Average pooling operation for 3D data (spatial or spatio-temporal).
- Parameters
pool_size (int or list/tuple of 3 ints) – Size of the average pooling windows.
strides (int, list/tuple of 3 ints, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
padding (int or list/tuple of 3 ints) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
layout (str, default 'NCDHW') – Dimension ordering of data and out ('NCDHW' or 'NDHWC'). 'N', 'C', 'D', 'H', 'W' stand for batch, channel, depth, height, and width dimensions respectively. padding is applied on the 'D', 'H' and 'W' dimensions.
ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
count_include_pad (bool, default True) – When False, padding elements are excluded when computing the average value.
- Inputs:
data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1
out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1
out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1
When ceil_mode is True, ceil will be used instead of floor in this equation.
-
class
mxnet.gluon.nn.
BatchNorm
(axis=1, momentum=0.9, epsilon=1e-05, center=True, scale=True, use_global_stats=False, beta_initializer='zeros', gamma_initializer='ones', running_mean_initializer='zeros', running_variance_initializer='ones', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Batch normalization layer (Ioffe and Szegedy, 2014). Normalizes the input at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
- Parameters
axis (int, default 1) – The axis that should be normalized. This is typically the channels (C) axis. For instance, after a Conv2D layer with layout=’NCHW’, set axis=1 in BatchNorm. If layout=’NHWC’, then set axis=3.
momentum (float, default 0.9) – Momentum for the moving average.
epsilon (float, default 1e-5) – Small float added to variance to avoid dividing by zero.
center (bool, default True) – If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale (bool, default True) – If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
use_global_stats (bool, default False) – If True, use global moving statistics instead of local batch-norm. This effectively changes batch-norm into a scale-shift operator. If False, use local batch-norm.
beta_initializer (str or Initializer, default ‘zeros’) – Initializer for the beta weight.
gamma_initializer (str or Initializer, default ‘ones’) – Initializer for the gamma weight.
running_mean_initializer (str or Initializer, default ‘zeros’) – Initializer for the running mean.
running_variance_initializer (str or Initializer, default ‘ones’) – Initializer for the running variance.
in_channels (int, default 0) – Number of channels (feature maps) in input data. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Methods
cast(dtype) – Cast this Block to use another data type.
hybrid_forward(F, x, gamma, beta, …) – Overrides to construct symbolic graph for this Block.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
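A minimal sketch of matching axis to the data layout; the shapes and channel count below are assumptions for illustration:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> bn = nn.BatchNorm(axis=1, in_channels=3)   # axis=1 for NCHW data
>>> bn.initialize(ctx=mx.cpu(0))
>>> x = mx.nd.random.uniform(shape=(2, 3, 4, 4))
>>> bn(x).shape                                # same shape as the input
(2, 3, 4, 4)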
-
class
mxnet.gluon.nn.
Block
(prefix=None, params=None)[source]¶ Bases:
object
Base class for all neural network layers and models. Your models should subclass this class.
Block can be nested recursively in a tree structure. You can create and assign child Blocks as regular attributes:
import mxnet as mx
from mxnet.gluon import Block, nn
from mxnet import ndarray as F

class Model(Block):
    def __init__(self, **kwargs):
        super(Model, self).__init__(**kwargs)
        # use name_scope to give child Blocks appropriate names.
        with self.name_scope():
            self.dense0 = nn.Dense(20)
            self.dense1 = nn.Dense(20)

    def forward(self, x):
        x = F.relu(self.dense0(x))
        return F.relu(self.dense1(x))

model = Model()
model.initialize(ctx=mx.cpu(0))
model(F.zeros((10, 10), ctx=mx.cpu(0)))
Methods
apply(fn) – Applies fn recursively to every child block as well as self.
cast(dtype) – Cast this Block to use another data type.
collect_params([select]) – Returns a ParameterDict containing this Block's and all of its children's Parameters (default), or the subset of Parameters whose names match the given regular expressions.
forward(*args) – Overrides to implement forward computation using NDArray.
hybridize([active]) – Please refer to the description of HybridBlock.hybridize().
initialize([init, ctx, verbose, force_reinit]) – Initializes Parameters of this Block and its children.
load_parameters(filename[, ctx, …]) – Load parameters from file previously saved by save_parameters.
load_params(filename[, ctx, allow_missing, …]) – [Deprecated] Please use load_parameters.
name_scope() – Returns a name space object managing a child Block and parameter names.
register_child(block[, name]) – Registers block as a child of self.
register_forward_hook(hook) – Registers a forward hook on the block.
register_forward_pre_hook(hook) – Registers a forward pre-hook on the block.
register_op_hook(callback[, monitor_all]) – Install callback monitor.
save_parameters(filename[, deduplicate]) – Save parameters to file.
save_params(filename) – [Deprecated] Please use save_parameters.
summary(*inputs) – Print the summary of the model's output and parameters.
Attributes
name – Name of this Block, without '_' in the end.
params – Returns this Block's parameter dictionary (does not include its children's parameters).
prefix – Prefix of this Block.
Child Blocks assigned this way will be registered and collect_params() will collect their Parameters recursively. You can also manually register child blocks with register_child().
- Parameters
prefix (str) – Prefix acts like a name space. All children blocks created in parent block's name_scope() will have parent block's prefix in their name. Please refer to the naming tutorial for more info on prefix and naming.
params (ParameterDict or None) – ParameterDict for sharing weights with the new Block. For example, if you want dense1 to share dense0's weights, you can do:
dense0 = nn.Dense(20)
dense1 = nn.Dense(20, params=dense0.collect_params())
-
apply
(fn)[source]¶ Applies fn recursively to every child block as well as self.
- Parameters
fn (callable) – Function to be applied to each submodule, of form fn(block).
- Returns
- Return type
this block
-
cast
(dtype)[source]¶ Cast this Block to use another data type.
- Parameters
dtype (str or numpy.dtype) – The new data type.
-
collect_params
(select=None)[source]¶ Returns a ParameterDict containing this Block's and all of its children's Parameters (default), or a ParameterDict with the subset of Parameters whose names match the given regular expressions.
For example, collect the specified parameters in ['conv1_weight', 'conv1_bias', 'fc_weight', 'fc_bias']:
model.collect_params('conv1_weight|conv1_bias|fc_weight|fc_bias')
or collect all parameters whose names end with 'weight' or 'bias'; this can be done using regular expressions:
model.collect_params('.*weight|.*bias')
- Parameters
select (str) – regular expressions
- Returns
- Return type
The selected
ParameterDict
-
forward
(*args)[source]¶ Overrides to implement forward computation using NDArray. Only accepts positional arguments.
- Parameters
*args (list of NDArray) – Input tensors.
-
initialize
(init=<mxnet.initializer.Uniform object>, ctx=None, verbose=False, force_reinit=False)[source]¶ Initializes Parameters of this Block and its children. Equivalent to block.collect_params().initialize(...)
- Parameters
init (Initializer) – Global default Initializer to be used when Parameter.init() is None. Otherwise, Parameter.init() takes precedence.
ctx (Context or list of Context) – Keeps a copy of Parameters on one or many context(s).
verbose (bool, default False) – Whether to verbosely print out details on initialization.
force_reinit (bool, default False) – Whether to force re-initialization if parameter is already initialized.
-
load_parameters
(filename, ctx=None, allow_missing=False, ignore_extra=False, cast_dtype=False, dtype_source='current')[source]¶ Load parameters from file previously saved by save_parameters.
- Parameters
filename (str) – Path to parameter file.
ctx (Context or list of Context, default cpu()) – Context(s) to initialize loaded parameters on.
allow_missing (bool, default False) – Whether to silently skip loading parameters not represented in the file.
ignore_extra (bool, default False) – Whether to silently ignore parameters from the file that are not present in this Block.
cast_dtype (bool, default False) – Cast the data type of the NDArray loaded from the checkpoint to the dtype provided by the Parameter if any.
dtype_source (str, default 'current') – Must be in {'current', 'saved'}. Only valid if cast_dtype=True; specifies the source of the dtype for casting the parameters.
References
Saving and Loading Gluon Models
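A minimal save/load round trip for illustration; the file name dense.params and the shapes below are assumptions:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> net = nn.Dense(10)
>>> net.initialize(ctx=mx.cpu(0))
>>> _ = net(mx.nd.ones((2, 4)))                    # first forward infers in_units
>>> net.save_parameters('dense.params')            # illustrative file name
>>> net2 = nn.Dense(10)
>>> net2.load_parameters('dense.params', ctx=mx.cpu(0))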
-
load_params
(filename, ctx=None, allow_missing=False, ignore_extra=False)[source]¶ [Deprecated] Please use load_parameters.
Load parameters from file.
- filename : str
Path to parameter file.
- ctx : Context or list of Context, default cpu()
Context(s) to initialize loaded parameters on.
- allow_missing : bool, default False
Whether to silently skip loading parameters not represented in the file.
- ignore_extra : bool, default False
Whether to silently ignore parameters from the file that are not present in this Block.
-
name_scope
()[source]¶ Returns a name space object managing a child Block and parameter names. Should be used within a with statement:
with self.name_scope():
    self.dense = nn.Dense(20)
Please refer to the naming tutorial for more info on prefix and naming.
-
property
params
¶ Returns this Block's parameter dictionary (does not include its children's parameters).
-
register_child
(block, name=None)[source]¶ Registers block as a child of self.
Blocks assigned to self as attributes will be registered automatically.
-
register_forward_hook
(hook)[source]¶ Registers a forward hook on the block.
The hook function is called immediately after forward(). It should not modify the input or output.
- Parameters
hook (callable) – The forward hook function of form hook(block, input, output) -> None.
- Returns
- Return type
mxnet.gluon.utils.HookHandle
-
register_forward_pre_hook
(hook)[source]¶ Registers a forward pre-hook on the block.
The hook function is called immediately before forward(). It should not modify the input or output.
- Parameters
hook (callable) – The forward hook function of form hook(block, input) -> None.
- Returns
- Return type
mxnet.gluon.utils.HookHandle
-
register_op_hook
(callback, monitor_all=False)[source]¶ Install callback monitor.
- Parameters
callback (function) – Takes a string and a NDArrayHandle.
monitor_all (bool, default False) – If true, monitor both input and output, otherwise monitor output only.
-
save_parameters
(filename, deduplicate=False)[source]¶ Save parameters to file.
Saved parameters can only be loaded with load_parameters. Note that this method only saves parameters, not model structure. If you want to save model structures, please use HybridBlock.export().
- Parameters
filename (str) – Path to file.
deduplicate (bool, default False) – If True, save shared parameters only once. Otherwise, if a Block contains multiple sub-blocks that share parameters, each of the shared parameters will be separately saved for every sub-block.
References
Saving and Loading Gluon Models
-
save_params
(filename)[source]¶ [Deprecated] Please use save_parameters. Note that if you want to load from SymbolBlock later, please use export instead.
Save parameters to file.
- filename : str
Path to file.
-
summary
(*inputs)[source]¶ Print the summary of the model’s output and parameters.
The network must have been initialized, and must not have been hybridized.
- Parameters
inputs (object) – Any input that the model supports. For any tensor in the input, only mxnet.ndarray.NDArray is supported.
-
class
mxnet.gluon.nn.
Conv1D
(channels, kernel_size, strides=1, padding=0, dilation=1, groups=1, layout='NCW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Conv
1D convolution layer (e.g. temporal convolution).
This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Parameters
channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution.
kernel_size (int or tuple/list of 1 int) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 1 int,) – Specify the strides of the convolution.
padding (int or a tuple/list of 1 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
dilation (int or tuple/list of 1 int) – Specifies the dilation rate to use for dilated convolution.
groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout (str, default 'NCW') – Dimension ordering of data and weight. Only supports ‘NCW’ layout for now. ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Convolution is applied on the ‘W’ dimension.
in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
activation (str) – Activation function to use. See Activation(). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias (bool) – Whether the layer uses a bias vector.
weight_initializer (str or Initializer) – Initializer for the weight matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
- Inputs:
data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
- Outputs:
out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = floor((width+2*padding-dilation*(kernel_size-1)-1)/stride)+1
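For illustration, a sketch that checks the out_width formula; channels, kernel settings, and the input shape are assumptions (kernel_size=3 with padding=1 preserves the width at stride 1):
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> conv = nn.Conv1D(channels=16, kernel_size=3, padding=1)   # illustrative settings
>>> conv.initialize(ctx=mx.cpu(0))
>>> x = mx.nd.random.uniform(shape=(1, 4, 10))
>>> conv(x).shape   # out_width = floor((10 + 2*1 - 1*(3-1) - 1)/1) + 1 = 10
(1, 16, 10)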
-
class
mxnet.gluon.nn.
Conv1DTranspose
(channels, kernel_size, strides=1, padding=0, output_padding=0, dilation=1, groups=1, layout='NCW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Conv
Transposed 1D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Parameters
channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution.
kernel_size (int or tuple/list of 1 int) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 1 int) – Specify the strides of the convolution.
padding (int or a tuple/list of 1 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
output_padding (int or a tuple/list of 1 int) – Controls the amount of implicit zero-paddings on both sides of the output for output_padding number of points for each dimension.
dilation (int or tuple/list of 1 int) – Controls the spacing between the kernel points; also known as the à trous algorithm.
groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout (str, default 'NCW') – Dimension ordering of data and weight. Only supports ‘NCW’ layout for now. ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Convolution is applied on the ‘W’ dimension.
in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
activation (str) – Activation function to use. See Activation(). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias (bool) – Whether the layer uses a bias vector.
weight_initializer (str or Initializer) – Initializer for the weight matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
- Inputs:
data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
- Outputs:
out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = (width-1)*strides-2*padding+kernel_size+output_padding
-
class
mxnet.gluon.nn.
Conv2D
(channels, kernel_size, strides=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1, layout='NCHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Conv
2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Parameters
channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution.
kernel_size (int or tuple/list of 2 int) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 2 int,) – Specify the strides of the convolution.
padding (int or a tuple/list of 2 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
dilation (int or tuple/list of 2 int) – Specifies the dilation rate to use for dilated convolution.
groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout (str, default 'NCHW') – Dimension ordering of data and weight. Only supports ‘NCHW’ and ‘NHWC’ layout for now. ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the ‘H’ and ‘W’ dimensions.
in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
activation (str) – Activation function to use. See Activation(). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias (bool) – Whether the layer uses a bias vector.
weight_initializer (str or Initializer) – Initializer for the weight matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
-
class
mxnet.gluon.nn.
Conv2DTranspose
(channels, kernel_size, strides=(1, 1), padding=(0, 0), output_padding=(0, 0), dilation=(1, 1), groups=1, layout='NCHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Conv
Transposed 2D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Parameters
channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution.
kernel_size (int or tuple/list of 2 int) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 2 int) – Specify the strides of the convolution.
padding (int or a tuple/list of 2 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
output_padding (int or a tuple/list of 2 int) – Controls the amount of implicit zero-paddings on both sides of the output for output_padding number of points for each dimension.
dilation (int or tuple/list of 2 int) – Controls the spacing between the kernel points; also known as the à trous algorithm.
groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout (str, default 'NCHW') – Dimension ordering of data and weight. Only supports ‘NCHW’ and ‘NHWC’ layout for now. ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the ‘H’ and ‘W’ dimensions.
in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
activation (str) – Activation function to use. See Activation(). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias (bool) – Whether the layer uses a bias vector.
weight_initializer (str or Initializer) – Initializer for the weight matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = (height-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]
out_width = (width-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]
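A common use is 2x upsampling. The following sketch (channels and input shape are assumptions) checks the formula above for kernel_size=4, strides=2, padding=1:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> deconv = nn.Conv2DTranspose(channels=8, kernel_size=4, strides=2, padding=1)
>>> deconv.initialize(ctx=mx.cpu(0))
>>> x = mx.nd.random.uniform(shape=(1, 16, 7, 7))
>>> deconv(x).shape   # (7-1)*2 - 2*1 + 4 + 0 = 14 along each spatial axis
(1, 8, 14, 14)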
-
class
mxnet.gluon.nn.
Conv3D
(channels, kernel_size, strides=(1, 1, 1), padding=(0, 0, 0), dilation=(1, 1, 1), groups=1, layout='NCDHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Conv
3D convolution layer (e.g. spatial convolution over volumes).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Parameters
channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution.
kernel_size (int or tuple/list of 3 int) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 3 int,) – Specify the strides of the convolution.
padding (int or a tuple/list of 3 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
dilation (int or tuple/list of 3 int) – Specifies the dilation rate to use for dilated convolution.
groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout (str, default 'NCDHW') – Dimension ordering of data and weight. Only supports ‘NCDHW’ and ‘NDHWC’ layout for now. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. Convolution is applied on the ‘D’, ‘H’ and ‘W’ dimensions.
in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
activation (str) – Activation function to use. See Activation(). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias (bool) – Whether the layer uses a bias vector.
weight_initializer (str or Initializer) – Initializer for the weight matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
- Inputs:
data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = floor((depth+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
out_height = floor((height+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
out_width = floor((width+2*padding[2]-dilation[2]*(kernel_size[2]-1)-1)/stride[2])+1
-
class
mxnet.gluon.nn.
Conv3DTranspose
(channels, kernel_size, strides=(1, 1, 1), padding=(0, 0, 0), output_padding=(0, 0, 0), dilation=(1, 1, 1), groups=1, layout='NCDHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Conv
Transposed 3D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Parameters
channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution.
kernel_size (int or tuple/list of 3 int) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 3 int) – Specify the strides of the convolution.
padding (int or a tuple/list of 3 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
output_padding (int or a tuple/list of 3 int) – Controls the amount of implicit zero-paddings on both sides of the output for output_padding number of points for each dimension.
dilation (int or tuple/list of 3 int) – Controls the spacing between the kernel points; also known as the à trous algorithm.
groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout (str, default 'NCDHW') – Dimension ordering of data and weight. Only supports ‘NCDHW’ and ‘NDHWC’ layout for now. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. Convolution is applied on the ‘D’, ‘H’ and ‘W’ dimensions.
in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
activation (str) – Activation function to use. See Activation(). If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias (bool) – Whether the layer uses a bias vector.
weight_initializer (str or Initializer) – Initializer for the weight matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
- Inputs:
data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = (depth-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]
out_height = (height-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]
out_width = (width-1)*strides[2]-2*padding[2]+kernel_size[2]+output_padding[2]
-
class
mxnet.gluon.nn.
Dense
(units, activation=None, use_bias=True, flatten=True, dtype='float32', weight_initializer=None, bias_initializer='zeros', in_units=0, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Just your regular densely-connected NN layer.
Dense implements the operation: output = activation(dot(input, weight) + bias) where activation is the element-wise activation function passed as the activation argument, weight is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
Note
the input must be a tensor with rank 2. Use flatten to convert it to rank 2 manually if necessary.
Methods
hybrid_forward(F, x, weight[, bias]) – Overrides to construct symbolic graph for this Block.
- Parameters
units (int) – Dimensionality of the output space.
activation (str) – Activation function to use. See help on Activation layer. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x).
use_bias (bool, default True) – Whether the layer uses a bias vector.
flatten (bool, default True) – Whether the input tensor should be flattened. If true, all but the first axis of input data are collapsed together. If false, all but the last axis of input data are kept the same, and the transformation applies on the last axis.
dtype (str or np.dtype, default 'float32') – Data type of output embeddings.
weight_initializer (str or Initializer) – Initializer for the kernel weights matrix.
bias_initializer (str or Initializer) – Initializer for the bias vector.
in_units (int, optional) – Size of the input data. If not specified, initialization will be deferred to the first time forward is called and in_units will be inferred from the shape of input data.
prefix (str or None) – See document of Block.
params (ParameterDict or None) – See document of Block.
- Inputs:
data: if flatten is True, data should be a tensor with shape (batch_size, x1, x2, …, xn), where x1 * x2 * … * xn is equal to in_units. If flatten is False, data should have shape (x1, x2, …, xn, in_units).
- Outputs:
out: if flatten is True, out will be a tensor with shape (batch_size, units). If flatten is False, out will have shape (x1, x2, …, xn, units).
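For illustration, a sketch contrasting flatten=True and flatten=False; units and input shapes are assumptions:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> dense = nn.Dense(5)                        # flatten=True by default
>>> dense.initialize(ctx=mx.cpu(0))
>>> dense(mx.nd.ones((2, 3, 4))).shape         # input collapsed to (2, 12)
(2, 5)
>>> dense_nf = nn.Dense(5, flatten=False)
>>> dense_nf.initialize(ctx=mx.cpu(0))
>>> dense_nf(mx.nd.ones((2, 3, 4))).shape      # transformation on the last axis only
(2, 3, 5)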
-
class
mxnet.gluon.nn.
Dropout
(rate, axes=(), **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Applies Dropout to the input.
Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.
- Parameters
rate (float) – Fraction of the input units to drop. Must be a number between 0 and 1.
axes (tuple of int, default ()) – The axes on which dropout mask is shared. If empty, regular dropout is applied.
Methods
hybrid_forward(F, x) – Overrides to construct symbolic graph for this Block.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
References
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
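Dropout is only active in training mode; a minimal sketch (rate 0.5 and an all-ones input are assumptions):
>>> import mxnet as mx
>>> from mxnet import autograd
>>> from mxnet.gluon import nn
>>> drop = nn.Dropout(0.5)
>>> x = mx.nd.ones((2, 4))
>>> y_test = drop(x)                  # identity at inference time
>>> with autograd.record(train_mode=True):
...     y_train = drop(x)             # units randomly zeroed, survivors scaled by 1/(1-rate)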
-
class
mxnet.gluon.nn.
ELU
(alpha=1.0, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
- Exponential Linear Unit (ELU)
“Fast and Accurate Deep Network Learning by Exponential Linear Units”, Clevert et al, 2016 https://arxiv.org/abs/1511.07289 Published as a conference paper at ICLR 2016
Methods
hybrid_forward(F, x) – Overrides to construct symbolic graph for this Block.
- Parameters
alpha (float) – The alpha parameter as described by Clevert et al, 2016
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
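A minimal sketch of the element-wise effect (the input values are assumptions): negative inputs map to alpha*(exp(x)-1), non-negative inputs pass through:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> elu = nn.ELU(alpha=1.0)
>>> x = mx.nd.array([-2.0, -0.5, 0.0, 1.0])
>>> y = elu(x)   # approximately [-0.8647, -0.3935, 0., 1.]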
-
class
mxnet.gluon.nn.
Embedding
(input_dim, output_dim, dtype='float32', weight_initializer=None, sparse_grad=False, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Turns non-negative integers (indexes/tokens) into dense vectors of fixed size, e.g. [4, 20] -> [[0.25, 0.1], [0.6, -0.2]]
Note
if sparse_grad is set to True, the gradient w.r.t. weight will be sparse. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. By default, lazy updates are turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
Methods
hybrid_forward(F, x, weight) – Overrides to construct symbolic graph for this Block.
- Parameters
input_dim (int) – Size of the vocabulary, i.e. maximum integer index + 1.
output_dim (int) – Dimension of the dense embedding.
dtype (str or np.dtype, default 'float32') – Data type of output embeddings.
weight_initializer (Initializer) – Initializer for the embeddings matrix.
sparse_grad (bool) – If True, gradient w.r.t. weight will be a ‘row_sparse’ NDArray.
- Inputs:
data: (N-1)-D tensor with shape: (x1, x2, …, xN-1).
- Outputs:
out: N-D tensor with shape: (x1, x2, …, xN-1, output_dim).
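For illustration, a lookup sketch; the vocabulary size of 100 and the 16-dimensional embedding are assumptions:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> embed = nn.Embedding(input_dim=100, output_dim=16)
>>> embed.initialize(ctx=mx.cpu(0))
>>> tokens = mx.nd.array([4, 20, 7])   # integer indices into the vocabulary
>>> embed(tokens).shape                # one 16-dim vector per index
(3, 16)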
-
class
mxnet.gluon.nn.
Flatten
(**kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Flattens the input to two dimensional.
- Inputs:
data: input tensor with arbitrary shape (N, x1, x2, …, xn)
- Output:
out: 2D tensor with shape: (N, x1 · x2 · … · xn)
Methods
hybrid_forward(F, x) – Overrides to construct symbolic graph for this Block.
-
class
mxnet.gluon.nn.
GELU
(**kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
- Gaussian Error Linear Unit (GELU)
“Gaussian Error Linear Units (GELUs)”, Hendrycks et al, 2016 https://arxiv.org/abs/1606.08415
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
Methods
hybrid_forward(F, x) – Overrides to construct symbolic graph for this Block.
-
class
mxnet.gluon.nn.
GlobalAvgPool1D
(layout='NCW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Global average pooling operation for temporal data.
- Parameters
layout (str, default 'NCW') – Dimension ordering of data and out (‘NCW’ or ‘NWC’). ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. padding is applied on ‘W’ dimension.
- Inputs:
data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
- Outputs:
out: 3D output tensor with shape (batch_size, channels, 1).
-
class
mxnet.gluon.nn.
GlobalAvgPool2D
(layout='NCHW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Global average pooling operation for spatial data.
- Parameters
layout (str, default 'NCHW') – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’). ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, 1, 1) when layout is NCHW.
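A short sketch (shapes assumed), e.g. for replacing a flatten-plus-dense head by averaging each channel map to a single value:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> gap = nn.GlobalAvgPool2D()
>>> x = mx.nd.random.uniform(shape=(2, 32, 7, 7))
>>> gap(x).shape   # every 7x7 feature map averaged to 1x1
(2, 32, 1, 1)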
-
class
mxnet.gluon.nn.
GlobalAvgPool3D
(layout='NCDHW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Global average pooling operation for 3D data (spatial or spatio-temporal).
- Parameters
layout (str, default 'NCDHW') – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’). ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. padding is applied on ‘D’, ‘H’ and ‘W’ dimension.
- Inputs:
data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 5D output tensor with shape (batch_size, channels, 1, 1, 1) when layout is NCDHW.
-
class
mxnet.gluon.nn.
GlobalMaxPool1D
(layout='NCW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Global max pooling operation for one dimensional (temporal) data.
- Parameters
layout (str, default 'NCW') – Dimension ordering of data and out (‘NCW’ or ‘NWC’). ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Pooling is applied on the W dimension.
- Inputs:
data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
- Outputs:
out: 3D output tensor with shape (batch_size, channels, 1) when layout is NCW.
-
class
mxnet.gluon.nn.
GlobalMaxPool2D
(layout='NCHW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Global max pooling operation for two dimensional (spatial) data.
- Parameters
layout (str, default 'NCHW') – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’). ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. padding is applied on ‘H’ and ‘W’ dimension.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, 1, 1) when layout is NCHW.
-
class
mxnet.gluon.nn.
GlobalMaxPool3D
(layout='NCDHW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Global max pooling operation for 3D data (spatial or spatio-temporal).
- Parameters
layout (str, default 'NCDHW') – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’). ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. padding is applied on ‘D’, ‘H’ and ‘W’ dimension.
- Inputs:
data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 5D output tensor with shape (batch_size, channels, 1, 1, 1) when layout is NCDHW.
-
class
mxnet.gluon.nn.
GroupNorm
(num_groups=1, epsilon=1e-05, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', prefix=None, params=None)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Applies group normalization to the n-dimensional input array. This operator takes an n-dimensional input array where the leftmost 2 axes are batch and channel respectively:
\[x = x.reshape((N, num\_groups, C // num\_groups, ...))\]
\[axis = (2, ...)\]
\[out = \frac{x - mean[x, axis]}{\sqrt{Var[x, axis] + \epsilon}} * gamma + beta\]
Methods
hybrid_forward(F, data, gamma, beta) – Overrides to construct symbolic graph for this Block.
- Parameters
num_groups (int, default 1) – Number of groups to separate the channel axis into.
epsilon (float, default 1e-5) – Small float added to variance to avoid dividing by zero.
center (bool, default True) – If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale (bool, default True) – If True, multiply by gamma. If False, gamma is not used.
beta_initializer (str or Initializer, default ‘zeros’) – Initializer for the beta weight.
gamma_initializer (str or Initializer, default ‘ones’) – Initializer for the gamma weight.
- Inputs:
data: input tensor with shape (N, C, …).
- Outputs:
out: output tensor with the same shape as data.
References
Group Normalization
Examples
>>> # Input of shape (2, 3, 4)
>>> x = mx.nd.array([[[ 0,  1,  2,  3], [ 4,  5,  6,  7], [ 8,  9, 10, 11]],
...                  [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]])
>>> # Group normalization is calculated with the above formula
>>> layer = GroupNorm()
>>> layer.initialize(ctx=mx.cpu(0))
>>> layer(x)
[[[-1.5932543 -1.3035717 -1.0138891 -0.7242065]
  [-0.4345239 -0.1448413  0.1448413  0.4345239]
  [ 0.7242065  1.0138891  1.3035717  1.5932543]]
 [[-1.5932543 -1.3035717 -1.0138891 -0.7242065]
  [-0.4345239 -0.1448413  0.1448413  0.4345239]
  [ 0.7242065  1.0138891  1.3035717  1.5932543]]]
<NDArray 2x3x4 @cpu(0)>
-
class
mxnet.gluon.nn.
HybridBlock
(prefix=None, params=None)[source]¶ Bases:
mxnet.gluon.block.Block
HybridBlock supports forwarding with both Symbol and NDArray.
HybridBlock is similar to Block, with a few differences:
import mxnet as mx
from mxnet.gluon import HybridBlock, nn

class Model(HybridBlock):
    def __init__(self, **kwargs):
        super(Model, self).__init__(**kwargs)
        # use name_scope to give child Blocks appropriate names.
        with self.name_scope():
            self.dense0 = nn.Dense(20)
            self.dense1 = nn.Dense(20)

    def hybrid_forward(self, F, x):
        x = F.relu(self.dense0(x))
        return F.relu(self.dense1(x))

model = Model()
model.initialize(ctx=mx.cpu(0))
model.hybridize()
model(mx.nd.zeros((10, 10), ctx=mx.cpu(0)))
Methods
cast(dtype) – Cast this Block to use another data type.
export(path[, epoch, remove_amp_cast]) – Export HybridBlock to json format that can be loaded by gluon.SymbolBlock.imports, mxnet.mod.Module or the C++ interface.
forward(x, *args) – Defines the forward computation.
hybrid_forward(F, x, *args, **kwargs) – Overrides to construct symbolic graph for this Block.
hybridize([active, backend, backend_opts]) – Activates or deactivates HybridBlocks recursively.
infer_shape(*args) – Infers shape of Parameters from inputs.
infer_type(*args) – Infers data type of Parameters from inputs.
optimize_for(x, *args[, backend, backend_opts]) – Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.
register_child(block[, name]) – Registers block as a child of self.
register_op_hook(callback[, monitor_all]) – Install op hook for block recursively.
Forward computation in HybridBlock must be static to work with Symbols, i.e. you cannot call NDArray.asnumpy(), NDArray.shape, NDArray.dtype, NDArray indexing (x[i]) etc. on tensors. Also, you cannot use branching or loop logic that bases on non-constant expressions like random numbers or intermediate results, since they change the graph structure for each iteration.
Before activating with hybridize(), HybridBlock works just like a normal Block. After activation, HybridBlock will create a symbolic graph representing the forward computation and cache it. On subsequent forwards, the cached graph will be used instead of hybrid_forward().
Please see the references for a detailed tutorial.
References
Hybrid - Faster training and easy deployment
-
cast
(dtype)[source]¶ Cast this Block to use another data type.
- Parameters
dtype (str or numpy.dtype) – The new data type.
-
export
(path, epoch=0, remove_amp_cast=True)[source]¶ Export HybridBlock to json format that can be loaded by gluon.SymbolBlock.imports, mxnet.mod.Module or the C++ interface.
Note
When there is only one input, it will be named data. When there is more than one input, they will be named data0, data1, etc.
- Parameters
path (str) – Path to save model. Two files path-symbol.json and path-xxxx.params will be created, where xxxx is the 4 digits epoch number.
epoch (int) – Epoch number of saved model.
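A minimal export sketch; the network definition, input shape, and the file prefix my_model are assumptions for illustration. Note that a forward pass is needed after hybridize() so the cached graph exists before exporting:
>>> import mxnet as mx
>>> from mxnet.gluon import nn
>>> net = nn.HybridSequential()
>>> with net.name_scope():
...     net.add(nn.Dense(10))
>>> net.initialize(ctx=mx.cpu(0))
>>> net.hybridize()
>>> _ = net(mx.nd.ones((1, 4)))
>>> net.export('my_model', epoch=0)   # writes my_model-symbol.json and my_model-0000.params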
-
forward
(x, *args)[source]¶ Defines the forward computation. Arguments can be either NDArray or Symbol.
-
hybrid_forward
(F, x, *args, **kwargs)[source]¶ Overrides to construct symbolic graph for this Block.
-
hybridize
(active=True, backend=None, backend_opts=None, **kwargs)[source]¶ Activates or deactivates HybridBlocks recursively. Has no effect on non-hybrid children.
- Parameters
active (bool, default True) – Whether to turn hybrid on or off.
backend (str) – The name of backend, as registered in SubgraphBackendRegistry, default None
backend_opts (dict of user-specified options to pass to the backend for partitioning, optional) – Passed on to PrePartition and PostPartition functions of SubgraphProperty
static_alloc (bool, default False) – Statically allocate memory to improve speed. Memory usage may increase.
static_shape (bool, default False) – Optimize for invariant input shapes between iterations. Must also set static_alloc to True. Change of input shapes is still allowed but slower.
-
optimize_for
(x, *args, backend=None, backend_opts=None, **kwargs)[source]¶ Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass. Modifies the HybridBlock in-place.
Immediately partitions a HybridBlock using the specified backend. Combines the work done in the hybridize API with part of the work done in the forward pass without calling the CachedOp. Can be used in place of hybridize; afterwards export can be called or inference can be run. See example/extensions/lib_subgraph/README.md for more details.
Examples
# partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
- Parameters
x (NDArray) – first input to model
*args (NDArray) – other inputs to model
backend (str) – The name of backend, as registered in SubgraphBackendRegistry, default None
backend_opts (dict of user-specified options to pass to the backend for partitioning, optional) – Passed on to PrePartition and PostPartition functions of SubgraphProperty
static_alloc (bool, default False) – Statically allocate memory to improve speed. Memory usage may increase.
static_shape (bool, default False) – Optimize for invariant input shapes between iterations. Must also set static_alloc to True. Change of input shapes is still allowed but slower.
-
class
mxnet.gluon.nn.
HybridLambda
(function, prefix=None)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Wraps an operator or an expression as a HybridBlock object.
- Parameters
function (str or function) –
Function used in lambda must be one of the following:
1) The name of an operator that is available in both symbol and ndarray. For example:
block = HybridLambda('tanh')
2) A function that conforms to def function(F, data, *args). For example:
block = HybridLambda(lambda F, x: F.LeakyReLU(x, slope=0.1))
- Inputs:
**args: one or more input data. First argument must be symbol or ndarray. Their shapes depend on the function.
- Outputs:
**outputs: one or more output data. Their shapes depend on the function.
Methods
hybrid_forward(F, x, *args) – Overrides to construct symbolic graph for this Block.
-
class
mxnet.gluon.nn.
HybridSequential
(prefix=None, params=None)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Stacks HybridBlocks sequentially.
Example:
net = nn.HybridSequential()
# use net's name_scope to give child Blocks appropriate names.
with net.name_scope():
    net.add(nn.Dense(10, activation='relu'))
    net.add(nn.Dense(20))
net.hybridize()
Methods
add(*blocks) – Adds block on top of the stack.
hybrid_forward(F, x) – Overrides to construct symbolic graph for this Block.
-
class
mxnet.gluon.nn.
InstanceNorm
(axis=1, epsilon=1e-05, center=True, scale=False, beta_initializer='zeros', gamma_initializer='ones', in_channels=0, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Applies instance normalization to the n-dimensional input array. This operator takes an n-dimensional input array (n > 2) and normalizes the input using the following formula:
\[ \begin{align}\begin{aligned}\bar{C} = \{i \mid i \neq 0, i \neq axis\}\\out = \frac{x - mean[data, \bar{C}]}{\sqrt{Var[data, \bar{C}]} + \epsilon} * gamma + beta\end{aligned}\end{align} \]
Methods
hybrid_forward(F, x, gamma, beta) – Overrides to construct symbolic graph for this Block.
- Parameters
axis (int, default 1) – The axis that will be excluded in the normalization process. This is typically the channels (C) axis. For instance, after a Conv2D layer with layout=’NCHW’, set axis=1 in InstanceNorm. If layout=’NHWC’, then set axis=3. Data will be normalized along axes excluding the first axis and the axis given.
epsilon (float, default 1e-5) – Small float added to variance to avoid dividing by zero.
center (bool, default True) – If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale (bool, default False) – If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
beta_initializer (str or Initializer, default ‘zeros’) – Initializer for the beta weight.
gamma_initializer (str or Initializer, default ‘ones’) – Initializer for the gamma weight.
in_channels (int, default 0) – Number of channels (feature maps) in input data. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
References
Instance Normalization: The Missing Ingredient for Fast Stylization
Examples
>>> # Input of shape (2, 1, 2)
>>> x = mx.nd.array([[[ 1.1, 2.2]],
...                  [[ 3.3, 4.4]]])
>>> # Instance normalization is calculated with the above formula
>>> layer = InstanceNorm()
>>> layer.initialize(ctx=mx.cpu(0))
>>> layer(x)
[[[-0.99998355  0.99998331]]
 [[-0.99998319  0.99998361]]]
<NDArray 2x1x2 @cpu(0)>
-
class
mxnet.gluon.nn.
Lambda
(function, prefix=None)[source]¶ Bases:
mxnet.gluon.block.Block
Wraps an operator or an expression as a Block object.
- Parameters
function (str or function) –
Function used in lambda must be one of the following:
1) The name of an operator that is available in ndarray. For example:
block = Lambda('tanh')
2) A function that conforms to def function(*args). For example:
block = Lambda(lambda x: nd.LeakyReLU(x, slope=0.1))
- Inputs:
**args: one or more input data. Their shapes depend on the function.
- Outputs:
**outputs: one or more output data. Their shapes depend on the function.
Methods
forward(*args) – Overrides to implement forward computation using NDArray.
-
class
mxnet.gluon.nn.
LayerNorm
(axis=-1, epsilon=1e-05, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', in_channels=0, prefix=None, params=None)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Applies layer normalization to the n-dimensional input array. This operator takes an n-dimensional input array and normalizes the input using the given axis:
\[out = \frac{x - mean[data, axis]}{\sqrt{Var[data, axis] + \epsilon}} * gamma + beta\]
Methods
hybrid_forward(F, data, gamma, beta) – Overrides to construct symbolic graph for this Block.
- Parameters
axis (int, default -1) – The axis that should be normalized. This is typically the axis of the channels.
epsilon (float, default 1e-5) – Small float added to variance to avoid dividing by zero.
center (bool, default True) – If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale (bool, default True) – If True, multiply by gamma. If False, gamma is not used.
beta_initializer (str or Initializer, default ‘zeros’) – Initializer for the beta weight.
gamma_initializer (str or Initializer, default ‘ones’) – Initializer for the gamma weight.
in_channels (int, default 0) – Number of channels (feature maps) in input data. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
References
Layer Normalization
Examples
>>> # Input of shape (2, 5)
>>> x = mx.nd.array([[1, 2, 3, 4, 5], [1, 1, 2, 2, 2]])
>>> # Layer normalization is calculated with the above formula
>>> layer = LayerNorm()
>>> layer.initialize(ctx=mx.cpu(0))
>>> layer(x)
[[-1.41421   -0.707105   0.         0.707105   1.41421  ]
 [-1.2247195 -1.2247195  0.81647956 0.81647956 0.81647956]]
<NDArray 2x5 @cpu(0)>
-
class
mxnet.gluon.nn.
LeakyReLU
(alpha, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Leaky version of a Rectified Linear Unit.
It allows a small gradient when the unit is not active
\[\begin{split}f\left(x\right) = \left\{ \begin{array}{lr} \alpha x & : x \lt 0 \\ x & : x \geq 0 \\ \end{array} \right.\end{split}\]
Methods
hybrid_forward(F, x) – Overrides to construct symbolic graph for this Block.
- Parameters
alpha (float) – slope coefficient for the negative half axis. Must be >= 0.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
-
class
mxnet.gluon.nn.
MaxPool1D
(pool_size=2, strides=None, padding=0, layout='NCW', ceil_mode=False, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Max pooling operation for one dimensional data.
- Parameters
pool_size (int) – Size of the max pooling windows.
strides (int, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
padding (int) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
layout (str, default 'NCW') – Dimension ordering of data and out (‘NCW’ or ‘NWC’). ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Pooling is applied on the W dimension.
ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
- Inputs:
data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
- Outputs:
out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = floor((width+2*padding-pool_size)/strides)+1
When ceil_mode is True, ceil will be used instead of floor in this equation.
-
class
mxnet.gluon.nn.
MaxPool2D
(pool_size=(2, 2), strides=None, padding=0, layout='NCHW', ceil_mode=False, **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Max pooling operation for two dimensional (spatial) data.
- Parameters
pool_size (int or list/tuple of 2 ints,) – Size of the max pooling windows.
strides (int, list/tuple of 2 ints, or None.) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
padding (int or list/tuple of 2 ints,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
layout (str, default 'NCHW') – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’). ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. padding is applied on ‘H’ and ‘W’ dimension.
ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1
out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1
When ceil_mode is True, ceil will be used instead of floor in this equation.
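A brief sketch of how ceil_mode changes the output shape (illustrative only; assumes import mxnet as mx and from mxnet.gluon import nn):
>>> x = mx.nd.random.uniform(shape=(1, 3, 7, 7))
>>> nn.MaxPool2D(pool_size=2, strides=2)(x).shape                  # floor((7 - 2)/2) + 1 = 3
(1, 3, 3, 3)
>>> nn.MaxPool2D(pool_size=2, strides=2, ceil_mode=True)(x).shape  # ceil((7 - 2)/2) + 1 = 4
(1, 3, 4, 4)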
-
class
mxnet.gluon.nn.
MaxPool3D
(pool_size=(2, 2, 2), strides=None, padding=0, ceil_mode=False, layout='NCDHW', **kwargs)[source]¶ Bases:
mxnet.gluon.nn.conv_layers._Pooling
Max pooling operation for 3D data (spatial or spatio-temporal).
- Parameters
pool_size (int or list/tuple of 3 ints) – Size of the max pooling windows.
strides (int, list/tuple of 3 ints, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
padding (int or list/tuple of 3 ints) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
layout (str, default 'NCDHW') – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’). ‘N’, ‘C’, ‘D’, ‘H’, ‘W’ stand for batch, channel, depth, height, and width dimensions respectively. padding is applied on the ‘D’, ‘H’ and ‘W’ dimensions.
ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
- Inputs:
data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
- Outputs:
out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1
out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1
out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1
When ceil_mode is True, ceil will be used instead of floor in these equations.
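A minimal shape check for the 3D case (illustrative sketch, not from the original docs; assumes import mxnet as mx and from mxnet.gluon import nn):
>>> x = mx.nd.random.uniform(shape=(1, 2, 8, 8, 8))  # (N, C, D, H, W) for the default NCDHW layout
>>> nn.MaxPool3D(pool_size=2, strides=2)(x).shape    # each of D, H, W: floor((8 - 2)/2) + 1 = 4
(1, 2, 4, 4, 4)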
-
class
mxnet.gluon.nn.
PReLU
(alpha_initializer=<mxnet.initializer.Constant object>, in_channels=1, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Parametric leaky version of a Rectified Linear Unit, from the “Delving Deep into Rectifiers” paper: https://arxiv.org/abs/1502.01852
It applies a learned slope to inputs when the unit is not active:
\[\begin{split}f\left(x\right) = \left\{ \begin{array}{lr} \alpha x & : x \lt 0 \\ x & : x \geq 0 \\ \end{array} \right.\end{split}\]
where alpha is a learned parameter.
Methods
hybrid_forward
(F, x, alpha)Overrides to construct symbolic graph for this Block.
- Parameters
alpha_initializer (Initializer) – Initializer for the alpha (slope) parameter.
in_channels (int, default 1) – Number of channels (alpha parameters) to learn. Can either be 1 or n where n is the size of the second dimension of the input tensor.
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
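A minimal sketch (illustrative, not from the original reference; assumes import mxnet as mx and from mxnet.gluon import nn, and that alpha_initializer defaults to a constant 0.25 with the parameter exposed as layer.alpha, as in recent MXNet versions):
>>> x = mx.nd.array([[-1.0, 2.0], [-4.0, 3.0]])
>>> layer = nn.PReLU()
>>> layer.initialize()   # required: alpha is a learnable parameter
>>> y = layer(x)         # with alpha = 0.25: [[-0.25, 2.0], [-1.0, 3.0]]
>>> layer.alpha.data()   # alpha can be inspected, and is updated by the trainer like any weight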
-
class
mxnet.gluon.nn.
ReflectionPad2D
(padding=0, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Pads the input tensor using the reflection of the input boundary.
- Parameters
padding (int) – An integer padding size
Methods
hybrid_forward
(F, x)Overrides to construct symbolic graph for this Block.
- Inputs:
data: input tensor with the shape \((N, C, H_{in}, W_{in})\).
- Outputs:
out: output tensor with the shape \((N, C, H_{out}, W_{out})\), where
\[\begin{aligned}H_{out} &= H_{in} + 2 \cdot padding\\W_{out} &= W_{in} + 2 \cdot padding\end{aligned}\]
Examples
>>> m = nn.ReflectionPad2D(3)
>>> input = mx.nd.random.normal(shape=(16, 3, 224, 224))
>>> output = m(input)
-
class
mxnet.gluon.nn.
SELU
(**kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Scaled Exponential Linear Unit (SELU), from “Self-Normalizing Neural Networks”, Klambauer et al., 2017: https://arxiv.org/abs/1706.02515
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
Methods
hybrid_forward
(F, x)Overrides to construct symbolic graph for this Block.
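For illustration, a minimal sketch (assumes import mxnet as mx and from mxnet.gluon import nn). SELU computes scale * x for x > 0 and scale * alpha * (exp(x) - 1) otherwise, with the fixed constants scale ≈ 1.0507 and alpha ≈ 1.6733 from the paper:
>>> x = mx.nd.array([-1.0, 0.0, 1.0])
>>> layer = nn.SELU()
>>> layer.initialize()   # SELU has no learnable parameters
>>> y = layer(x)         # approximately [-1.1113, 0.0, 1.0507]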
-
class
mxnet.gluon.nn.
Sequential
(prefix=None, params=None)[source]¶ Bases:
mxnet.gluon.block.Block
Stacks Blocks sequentially.
Example:
net = nn.Sequential()
# use net's name_scope to give child Blocks appropriate names.
with net.name_scope():
    net.add(nn.Dense(10, activation='relu'))
    net.add(nn.Dense(20))
Methods
add
(*blocks)Adds block on top of the stack.
forward
(x)Overrides to implement forward computation using NDArray.
hybridize
([active])Activates or deactivates HybridBlocks recursively.
-
class
mxnet.gluon.nn.
Swish
(beta=1.0, **kwargs)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Swish activation function.
Methods
hybrid_forward
(F, x)Overrides to construct symbolic graph for this Block.
- Parameters
beta (float, default 1.0) – scaling coefficient in swish(x) = x * sigmoid(beta*x).
- Inputs:
data: input tensor with arbitrary shape.
- Outputs:
out: output tensor with the same shape as data.
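A minimal sketch (illustrative only; assumes import mxnet as mx and from mxnet.gluon import nn):
>>> x = mx.nd.array([-1.0, 0.0, 1.0])
>>> layer = nn.Swish()   # beta defaults to 1.0
>>> layer.initialize()   # no learnable parameters
>>> y = layer(x)         # x * sigmoid(x): approximately [-0.2689, 0.0, 0.7311]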
-
class
mxnet.gluon.nn.
SymbolBlock
(outputs, inputs, params=None)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Construct block from symbol. This is useful for using pre-trained models as feature extractors. For example, you may want to extract the output from the fc2 layer in AlexNet.
- Parameters
outputs (Symbol or list of Symbol) – The desired output for SymbolBlock.
inputs (Symbol or list of Symbol) – The Variables in output’s argument that should be used as inputs.
params (ParameterDict) – Parameter dictionary for arguments and auxiliary states of outputs that are not inputs.
Methods
cast
(dtype)Cast this Block to use another data type.
forward
(x, *args)Defines the forward computation.
hybrid_forward
(F, x, *args, **kwargs)Overrides to construct symbolic graph for this Block.
imports
(symbol_file, input_names[, …])Import model previously saved by gluon.HybridBlock.export or Module.save_checkpoint as a gluon.SymbolBlock for use in Gluon.
Examples
>>> # To extract the feature from fc1 and fc2 layers of AlexNet:
>>> alexnet = gluon.model_zoo.vision.alexnet(pretrained=True, ctx=mx.cpu(), prefix='model_')
>>> inputs = mx.sym.var('data')
>>> out = alexnet(inputs)
>>> internals = out.get_internals()
>>> print(internals.list_outputs())
['data', ..., 'model_dense0_relu_fwd_output', ..., 'model_dense1_relu_fwd_output', ...]
>>> outputs = [internals['model_dense0_relu_fwd_output'], internals['model_dense1_relu_fwd_output']]
>>> # Create SymbolBlock that shares parameters with alexnet
>>> feat_model = gluon.SymbolBlock(outputs, inputs, params=alexnet.collect_params())
>>> x = mx.nd.random.normal(shape=(16, 3, 224, 224))
>>> print(feat_model(x))
-
cast
(dtype)[source]¶ Cast this Block to use another data type.
- Parameters
dtype (str or numpy.dtype) – The new data type.
-
forward
(x, *args)[source]¶ Defines the forward computation. Arguments can be either NDArray or Symbol.
-
hybrid_forward
(F, x, *args, **kwargs)[source]¶ Overrides to construct symbolic graph for this Block.
-
static
imports
(symbol_file, input_names, param_file=None, ctx=None)[source]¶ Import model previously saved by gluon.HybridBlock.export or Module.save_checkpoint as a gluon.SymbolBlock for use in Gluon.
- Parameters
symbol_file (str) – Path to symbol file.
input_names (list of str) – List of input variable names
param_file (str, optional) – Path to parameter file.
ctx (Context, default None) – The context to initialize gluon.SymbolBlock on.
- Returns
gluon.SymbolBlock loaded from symbol and parameter files.
- Return type
SymbolBlock
Examples
>>> net1 = gluon.model_zoo.vision.resnet18_v1(
...     prefix='resnet', pretrained=True)
>>> net1.hybridize()
>>> x = mx.nd.random.normal(shape=(1, 3, 32, 32))
>>> out1 = net1(x)
>>> net1.export('net1', epoch=1)
>>>
>>> net2 = gluon.SymbolBlock.imports(
...     'net1-symbol.json', ['data'], 'net1-0001.params')
>>> out2 = net2(x)