gluon.contrib
This document lists the contrib APIs in Gluon:

mxnet.gluon.contrib – Contrib neural network module.

The Gluon Contrib API, defined in the gluon.contrib package, provides many useful experimental APIs for new features. This is a place for the community to try out new features, so that feature contributors can receive feedback.

Warning: this package contains experimental APIs and may change in the near future.

In the rest of this document, we list the routines provided by the gluon.contrib package.
Neural Network

Concurrent – Lays Blocks concurrently.
HybridConcurrent – Lays HybridBlocks concurrently.
Identity – Block that passes through the input directly.
SparseEmbedding – Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.
SyncBatchNorm – Cross-GPU Synchronized Batch Normalization (SyncBN).
PixelShuffle1D – Pixel-shuffle layer for upsampling in 1 dimension.
PixelShuffle2D – Pixel-shuffle layer for upsampling in 2 dimensions.
PixelShuffle3D – Pixel-shuffle layer for upsampling in 3 dimensions.
Convolutional Neural Network

DeformableConvolution – 2-D Deformable Convolution v1 (Dai, 2017).
Recurrent Neural Network

VariationalDropoutCell – Applies Variational Dropout on the base cell.
Conv1DRNNCell – 1D Convolutional RNN cell.
Conv2DRNNCell – 2D Convolutional RNN cell.
Conv3DRNNCell – 3D Convolutional RNN cell.
Conv1DLSTMCell – 1D Convolutional LSTM network cell.
Conv2DLSTMCell – 2D Convolutional LSTM network cell (see the sketch after this list).
Conv3DLSTMCell – 3D Convolutional LSTM network cell.
Conv1DGRUCell – 1D Convolutional Gated Recurrent Unit (GRU) network cell.
Conv2DGRUCell – 2D Convolutional Gated Recurrent Unit (GRU) network cell.
Conv3DGRUCell – 3D Convolutional Gated Recurrent Unit (GRU) network cell.
LSTMPCell – Long-Short Term Memory Projected (LSTMP) network cell.
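These convolutional recurrent cells follow the usual Gluon RNN-cell interface (begin_state, per-step call). A minimal single-step sketch for Conv2DLSTMCell, assuming the MXNet 1.x conv_rnn_cell signature and a hypothetical 3-channel 16x16 input:

import mxnet as mx
from mxnet.gluon.contrib.rnn import Conv2DLSTMCell

# i2h_pad=(1, 1) makes the 3x3 input-to-hidden convolution
# shape-preserving, so the recurrent state stays 16x16.
cell = Conv2DLSTMCell(input_shape=(3, 16, 16), hidden_channels=8,
                      i2h_kernel=(3, 3), h2h_kernel=(3, 3), i2h_pad=(1, 1))
cell.initialize()

frame = mx.nd.random.uniform(shape=(2, 3, 16, 16))  # (batch, C, H, W)
states = cell.begin_state(batch_size=2)
output, states = cell(frame, states)
print(output.shape)  # (2, 8, 16, 16)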
Data

IntervalSampler – Samples elements from [0, length) at fixed intervals; a short sketch follows.
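A quick sketch of the sampling order (with the default rollover=True, every index in [0, length) is visited exactly once):

from mxnet.gluon.contrib.data import IntervalSampler

# Step through indices by `interval`, wrapping to the next start offset.
sampler = IntervalSampler(10, interval=3)
print(list(sampler))  # [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]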
Text Dataset

WikiText2 – WikiText-2 word-level dataset for language modeling, from Salesforce research.
WikiText103 – WikiText-103 word-level dataset for language modeling, from Salesforce research.
Event Handler

StoppingHandler – Stop conditions to stop training. Stops training if the maximum number of batches or epochs is reached.
MetricHandler – Metric handler that updates metric values at batch end.
ValidationHandler – Validation handler that evaluates the model on a validation dataset.
LoggingHandler – Basic logging handler that applies to every Gluon estimator by default.
CheckpointHandler – Saves the model after a user-defined period.
EarlyStoppingHandler – Stops training early if the monitored value is not improving.

API Reference

Contributed neural network modules.
class mxnet.gluon.contrib.nn.Concurrent(axis=-1, prefix=None, params=None)
Bases: mxnet.gluon.nn.basic_layers.Sequential

Lays Blocks concurrently.

This block feeds its input to all children blocks, and produces the output by concatenating all the children blocks' outputs on the specified axis.

Example:

net = Concurrent()
# use net's name_scope to give children blocks appropriate names
with net.name_scope():
    net.add(nn.Dense(10, activation='relu'))
    net.add(nn.Dense(20))
    net.add(Identity())

Parameters

axis (int, default -1) – The axis on which to concatenate the outputs.
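To make the concatenation concrete, here is a minimal shape-check sketch (assuming a hypothetical 2-D input, so the Identity branch contributes the input width to the concatenated axis):

import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.contrib.nn import Concurrent, Identity

net = Concurrent()
with net.name_scope():
    net.add(nn.Dense(10, activation='relu'))
    net.add(nn.Dense(20))
    net.add(Identity())
net.initialize()

x = mx.nd.ones((4, 5))
print(net(x).shape)  # (4, 35): widths 10 + 20 + 5 concatenated on axis -1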
class mxnet.gluon.contrib.nn.HybridConcurrent(axis=-1, prefix=None, params=None)
Bases: mxnet.gluon.nn.basic_layers.HybridSequential

Lays HybridBlocks concurrently.

This block feeds its input to all children blocks, and produces the output by concatenating all the children blocks' outputs on the specified axis.

Example:

net = HybridConcurrent()
# use net's name_scope to give children blocks appropriate names
with net.name_scope():
    net.add(nn.Dense(10, activation='relu'))
    net.add(nn.Dense(20))
    net.add(Identity())

Parameters

axis (int, default -1) – The axis on which to concatenate the outputs.
class mxnet.gluon.contrib.nn.Identity(prefix=None, params=None)
Bases: mxnet.gluon.block.HybridBlock

Block that passes through the input directly.

This block can be used in conjunction with the HybridConcurrent block for residual connections.

Example:

net = HybridConcurrent()
# use net's name_scope to give child Blocks appropriate names
with net.name_scope():
    net.add(nn.Dense(10, activation='relu'))
    net.add(nn.Dense(20))
    net.add(Identity())
class mxnet.gluon.contrib.nn.SparseEmbedding(input_dim, output_dim, dtype='float32', weight_initializer=None, **kwargs)
Bases: mxnet.gluon.block.Block

Turns non-negative integers (indexes/tokens) into dense vectors of fixed size, e.g. [4, 20] -> [[0.25, 0.1], [0.6, -0.2]].

This SparseBlock is designed for distributed training with extremely large input dimensions. Both the weight and the gradient w.r.t. the weight are RowSparseNDArray.

Note: if sparse_grad is set to True, the gradient w.r.t. the weight will be sparse. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. By default lazy updates are turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: https://mxnet.incubator.apache.org/api/python/optimization/optimization.html

Parameters

input_dim (int) – Size of the vocabulary, i.e. maximum integer index + 1.
output_dim (int) – Dimension of the dense embedding.
dtype (str or np.dtype, default 'float32') – Data type of output embeddings.
weight_initializer (Initializer) – Initializer for the embeddings matrix.

Inputs:
data: (N-1)-D tensor with shape: (x1, x2, …, xN-1).

Outputs:
out: N-D tensor with shape: (x1, x2, …, xN-1, output_dim).
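A minimal forward-pass sketch (vocabulary size and embedding width are hypothetical; training would additionally need a sparse-gradient-capable optimizer such as 'adam'):

import mxnet as mx
from mxnet.gluon.contrib.nn import SparseEmbedding

embed = SparseEmbedding(input_dim=10000, output_dim=8)
embed.initialize()

tokens = mx.nd.array([4, 20])  # integer indices into the vocabulary
vectors = embed(tokens)        # gradient w.r.t. weight is row-sparse
print(vectors.shape)           # (2, 8)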
class mxnet.gluon.contrib.nn.SyncBatchNorm(in_channels=0, num_devices=None, momentum=0.9, epsilon=1e-05, center=True, scale=True, use_global_stats=False, beta_initializer='zeros', gamma_initializer='ones', running_mean_initializer='zeros', running_variance_initializer='ones', **kwargs)
Bases: mxnet.gluon.nn.basic_layers.BatchNorm

Cross-GPU Synchronized Batch Normalization (SyncBN).

The standard BN implementation [1] only normalizes the data within each device. SyncBN normalizes the input within the whole mini-batch. We follow the implementation described in the paper [2].

Note: the current implementation of SyncBN does not support FP16 training. For FP16 inference, use standard nn.BatchNorm instead of SyncBN.

Parameters

in_channels (int, default 0) – Number of channels (feature maps) in input data. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
num_devices (int, default: number of visible GPUs).
momentum (float, default 0.9) – Momentum for the moving average.
epsilon (float, default 1e-5) – Small float added to the variance to avoid dividing by zero.
center (bool, default True) – If True, add offset of beta to the normalized tensor. If False, beta is ignored.
scale (bool, default True) – If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
use_global_stats (bool, default False) – If True, use global moving statistics instead of local batch-norm. This will force batch-norm into a scale-shift operator. If False, use local batch-norm.
beta_initializer (str or Initializer, default 'zeros') – Initializer for the beta weight.
gamma_initializer (str or Initializer, default 'ones') – Initializer for the gamma weight.
running_mean_initializer (str or Initializer, default 'zeros') – Initializer for the running mean.
running_variance_initializer (str or Initializer, default 'ones') – Initializer for the running variance.

Inputs:
data: input tensor with arbitrary shape.

Outputs:
out: output tensor with the same shape as data.

References

[1] Ioffe, Sergey, and Christian Szegedy. "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift." ICML 2015.
[2] Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, and Amit Agrawal. "Context Encoding for Semantic Segmentation." CVPR 2018.
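A minimal construction sketch (a hypothetical single-device setup; in real multi-GPU training, num_devices should match the number of devices sharing the batch statistics):

import mxnet as mx
from mxnet.gluon.contrib.nn import SyncBatchNorm

# Pin num_devices explicitly instead of relying on GPU auto-detection.
bn = SyncBatchNorm(in_channels=16, num_devices=1)
bn.initialize()

x = mx.nd.random.uniform(shape=(2, 16, 8, 8))
print(bn(x).shape)  # output has the same shape as the input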
class mxnet.gluon.contrib.nn.PixelShuffle1D(factor)
Bases: mxnet.gluon.block.HybridBlock

Pixel-shuffle layer for upsampling in 1 dimension.

Pixel-shuffling is the operation of taking groups of values along the channel dimension and regrouping them into blocks of pixels along the W dimension, thereby effectively multiplying that dimension by a constant factor in size. For example, a feature map of shape \((fC, W)\) is reshaped into \((C, fW)\) by forming little value groups of size \(f\) and arranging them in a grid of size \(W\).

Parameters

factor (int or 1-tuple of int) – Upsampling factor, applied to the W dimension.

Inputs:
data: Tensor of shape (N, f*C, W).

Outputs:
out: Tensor of shape (N, C, W*f).

Examples

>>> pxshuf = PixelShuffle1D(2)
>>> x = mx.nd.zeros((1, 8, 3))
>>> pxshuf(x).shape
(1, 4, 6)
-
class
mxnet.gluon.contrib.nn.
PixelShuffle2D
(factor)[source]¶ Bases:
mxnet.gluon.block.HybridBlock
Pixel-shuffle layer for upsampling in 2 dimensions.
Pixel-shuffling is the operation of taking groups of values along the channel dimension and regrouping them into blocks of pixels along the
H
andW
dimensions, thereby effectively multiplying those dimensions by a constant factor in size.For example, a feature map of shape \((f^2 C, H, W)\) is reshaped into \((C, fH, fW)\) by forming little \(f \times f\) blocks of pixels and arranging them in an \(H \times W\) grid.
Pixel-shuffling together with regular convolution is an alternative, learnable way of upsampling an image by arbitrary factors. It is reported to help overcome checkerboard artifacts that are common in upsampling with transposed convolutions (also called deconvolutions). See the paper Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network for further details.
- Parameters
factor (int or 2-tuple of int) – Upsampling factors, applied to the H and W dimensions, in that order.
- Inputs:
data: Tensor of shape (N, f1*f2*C, H, W).
- Outputs:
out: Tensor of shape (N, C, H*f1, W*f2).
Examples
>>> pxshuf = PixelShuffle2D((2, 3))
>>> x = mx.nd.zeros((1, 12, 3, 5))
>>> pxshuf(x).shape
(1, 2, 6, 15)
-
class mxnet.gluon.contrib.nn.PixelShuffle3D(factor)[source]¶
Bases: mxnet.gluon.block.HybridBlock
Pixel-shuffle layer for upsampling in 3 dimensions.
Pixel-shuffling (or voxel-shuffling in 3D) is the operation of taking groups of values along the channel dimension and regrouping them into blocks of voxels along the D, H and W dimensions, thereby effectively multiplying those dimensions by a constant factor in size. For example, a feature map of shape \((f^3 C, D, H, W)\) is reshaped into \((C, fD, fH, fW)\) by forming little \(f \times f \times f\) blocks of voxels and arranging them in a \(D \times H \times W\) grid.
Pixel-shuffling together with regular convolution is an alternative, learnable way of upsampling an image by arbitrary factors. It is reported to help overcome checkerboard artifacts that are common in upsampling with transposed convolutions (also called deconvolutions). See the paper Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network for further details.
- Parameters
factor (int or 3-tuple of int) – Upsampling factors, applied to the D, H and W dimensions, in that order.
- Inputs:
data: Tensor of shape (N, f1*f2*f3*C, D, H, W).
- Outputs:
out: Tensor of shape (N, C, D*f1, H*f2, W*f3).
Examples
>>> pxshuf = PixelShuffle3D((2, 3, 4))
>>> x = mx.nd.zeros((1, 48, 3, 5, 7))
>>> pxshuf(x).shape
(1, 2, 6, 15, 28)
Contrib convolutional neural network module.
-
class mxnet.gluon.contrib.cnn.DeformableConvolution(channels, kernel_size=(1, 1), strides=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1, num_deformable_group=1, layout='NCHW', use_bias=True, in_channels=0, activation=None, weight_initializer=None, bias_initializer='zeros', offset_weight_initializer='zeros', offset_bias_initializer='zeros', offset_use_bias=True, op_name='DeformableConvolution', adj=None, prefix=None, params=None)[source]¶
Bases: mxnet.gluon.block.HybridBlock
2-D Deformable Convolution v1 (Dai, 2017). Normal convolution uses sampling points in a regular grid, while the sampling points of deformable convolution can be offset. The offsets are learned with a separate convolution layer during training. Both the convolution layer for generating the output features and the one for generating the offsets are included in this gluon layer.
- Parameters
channels (int,) – The dimensionality of the output space i.e. the number of output channels in the convolution.
kernel_size (int or tuple/list of 2 ints, (Default value = (1,1))) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 2 ints, (Default value = (1,1))) – Specifies the strides of the convolution.
padding (int or tuple/list of 2 ints, (Default value = (0,0))) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
dilation (int or tuple/list of 2 ints, (Default value = (1,1))) – Specifies the dilation rate to use for dilated convolution.
groups (int, (Default value = 1)) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two convolution layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
num_deformable_group (int, (Default value = 1)) – Number of deformable group partitions.
layout (str, (Default value = NCHW)) – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, ‘NCHW’, ‘NHWC’, ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stand for batch, channel, height, width and depth dimensions respectively. Convolution is performed over ‘D’, ‘H’, and ‘W’ dimensions.
use_bias (bool, (Default value = True)) – Whether the layer for generating the output features uses a bias vector.
in_channels (int, (Default value = 0)) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and input channels will be inferred from the shape of input data.
activation (str, (Default value = None)) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
weight_initializer (str or Initializer, (Default value = None)) – Initializer for the weights matrix of the convolution layer for generating the output features.
bias_initializer (str or Initializer, (Default value = zeros)) – Initializer for the bias vector of the convolution layer for generating the output features.
offset_weight_initializer (str or Initializer, (Default value = zeros)) – Initializer for the weights matrix of the convolution layer for generating the offsets.
offset_bias_initializer (str or Initializer, (Default value = zeros)) – Initializer for the bias vector of the convolution layer for generating the offsets.
offset_use_bias (bool, (Default value = True)) – Whether the layer for generating the offset uses a bias vector.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts, the shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
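As a worked instance of the formula: with height = 16, padding[0] = 1, dilation[0] = 1, kernel_size[0] = 3 and stride[0] = 1, out_height = floor((16 + 2 - 2 - 1)/1) + 1 = 16, i.e. a 3x3 kernel with padding 1 preserves the spatial size. A minimal usage sketch (channel counts are illustrative; in_channels is inferred on the first call):
>>> dcn = mx.gluon.contrib.cnn.DeformableConvolution(channels=8, kernel_size=(3, 3), padding=(1, 1))
>>> dcn.initialize()
>>> x = mx.nd.random.uniform(shape=(1, 4, 16, 16))
>>> dcn(x).shape
(1, 8, 16, 16)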
-
class mxnet.gluon.contrib.cnn.ModulatedDeformableConvolution(channels, kernel_size=(1, 1), strides=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1, num_deformable_group=1, layout='NCHW', use_bias=True, in_channels=0, activation=None, weight_initializer=None, bias_initializer='zeros', offset_weight_initializer='zeros', offset_bias_initializer='zeros', offset_use_bias=True, op_name='ModulatedDeformableConvolution', adj=None, prefix=None, params=None)[source]¶
Bases: mxnet.gluon.block.HybridBlock
2-D Deformable Convolution v2 (Dai, 2018).
The modulated deformable convolution operation is described in https://arxiv.org/abs/1811.11168
- Parameters
channels (int,) – The dimensionality of the output space i.e. the number of output channels in the convolution.
kernel_size (int or tuple/list of 2 ints, (Default value = (1,1))) – Specifies the dimensions of the convolution window.
strides (int or tuple/list of 2 ints, (Default value = (1,1))) – Specifies the strides of the convolution.
padding (int or tuple/list of 2 ints, (Default value = (0,0))) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
dilation (int or tuple/list of 2 ints, (Default value = (1,1))) – Specifies the dilation rate to use for dilated convolution.
groups (int, (Default value = 1)) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two convolution layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
num_deformable_group (int, (Default value = 1)) – Number of deformable group partitions.
layout (str, (Default value = NCHW)) – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, ‘NCHW’, ‘NHWC’, ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stand for batch, channel, height, width and depth dimensions respectively. Convolution is performed over ‘D’, ‘H’, and ‘W’ dimensions.
use_bias (bool, (Default value = True)) – Whether the layer for generating the output features uses a bias vector.
in_channels (int, (Default value = 0)) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and input channels will be inferred from the shape of input data.
activation (str, (Default value = None)) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
weight_initializer (str or Initializer, (Default value = None)) – Initializer for the weights matrix of the convolution layer for generating the output features.
bias_initializer (str or Initializer, (Default value = zeros)) – Initializer for the bias vector of the convolution layer for generating the output features.
offset_weight_initializer (str or Initializer, (Default value = zeros)) – Initializer for the weights matrix of the convolution layer for generating the offsets.
offset_bias_initializer (str or Initializer, (Default value = zeros)) – Initializer for the bias vector of the convolution layer for generating the offsets.
offset_use_bias (bool, (Default value = True)) – Whether the layer for generating the offset uses a bias vector.
- Inputs:
data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts, the shape is permuted accordingly.
- Outputs:
out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
Contrib recurrent neural network module.
-
class mxnet.gluon.contrib.rnn.Conv1DRNNCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, ), i2h_dilate=(1, ), h2h_dilate=(1, ), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvRNNCell
1D Convolutional RNN cell.
\[h_t = \tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCW’ the shape should be (C, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0,)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1,)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1,)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function. If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_rnn_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
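A minimal single-step sketch, assuming a (C, W) = (16, 64) input in ‘NCW’ layout with same-padding on the input convolution so the hidden state keeps the input width (all sizes illustrative):
>>> cell = mx.gluon.contrib.rnn.Conv1DRNNCell((16, 64), 10, (3,), (3,), i2h_pad=(1,))
>>> cell.initialize()
>>> x = mx.nd.random.uniform(shape=(4, 16, 64))   # one time step, batch of 4
>>> out, states = cell(x, cell.begin_state(batch_size=4))
>>> out.shape
(4, 10, 64)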
-
class mxnet.gluon.contrib.rnn.Conv2DRNNCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, 0), i2h_dilate=(1, 1), h2h_dilate=(1, 1), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCHW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvRNNCell
2D Convolutional RNN cell.
\[h_t = \tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCHW’ the shape should be (C, H, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCHW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function. If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_rnn_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.Conv3DRNNCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, 0, 0), i2h_dilate=(1, 1, 1), h2h_dilate=(1, 1, 1), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCDHW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvRNNCell
3D Convolutional RNN cell.
\[h_t = \tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0, 0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1, 1)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCDHW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function. If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_rnn_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.Conv1DLSTMCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, ), i2h_dilate=(1, ), h2h_dilate=(1, ), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvLSTMCell
1D Convolutional LSTM network cell.
Based on the paper “Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting” (Xingjian et al., NIPS 2015).
\[\begin{split}\begin{array}{ll} i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ c^\prime_t = \tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ h_t = o_t \circ \tanh(c_t) \\ \end{array}\end{split}\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCW’ the shape should be (C, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0,)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1,)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1,)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function used in \(c^\prime_t\). If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_lstm_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
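A minimal single-step sketch; the state is a pair [h, c], here both of shape (batch, hidden_channels, W) assuming same-padding on the input convolution (all sizes illustrative):
>>> cell = mx.gluon.contrib.rnn.Conv1DLSTMCell((16, 64), 10, (3,), (3,), i2h_pad=(1,))
>>> cell.initialize()
>>> x = mx.nd.random.uniform(shape=(4, 16, 64))
>>> out, [h, c] = cell(x, cell.begin_state(batch_size=4))
>>> out.shape, c.shape
((4, 10, 64), (4, 10, 64))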
-
class mxnet.gluon.contrib.rnn.Conv2DLSTMCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, 0), i2h_dilate=(1, 1), h2h_dilate=(1, 1), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCHW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvLSTMCell
2D Convolutional LSTM network cell.
Based on the paper “Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting” (Xingjian et al., NIPS 2015).
\[\begin{split}\begin{array}{ll} i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ c^\prime_t = \tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ h_t = o_t \circ \tanh(c_t) \\ \end{array}\end{split}\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCHW’ the shape should be (C, H, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCHW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function used in \(c^\prime_t\). If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_lstm_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.Conv3DLSTMCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, 0, 0), i2h_dilate=(1, 1, 1), h2h_dilate=(1, 1, 1), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCDHW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvLSTMCell
3D Convolutional LSTM network cell.
Based on the paper “Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting” (Xingjian et al., NIPS 2015).
\[\begin{split}\begin{array}{ll} i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ c^\prime_t = \tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ h_t = o_t \circ \tanh(c_t) \\ \end{array}\end{split}\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0, 0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1, 1)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCDHW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function used in \(c^\prime_t\). If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_lstm_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.Conv1DGRUCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, ), i2h_dilate=(1, ), h2h_dilate=(1, ), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvGRUCell
1D Convolutional Gated Recurrent Unit (GRU) network cell.
\[\begin{split}\begin{array}{ll} r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ n_t = \tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\ \end{array}\end{split}\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCW’ the shape should be (C, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0,)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1,)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1,)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function used in \(n_t\). If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_gru_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.Conv2DGRUCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, 0), i2h_dilate=(1, 1), h2h_dilate=(1, 1), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCHW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvGRUCell
2D Convolutional Gated Recurrent Unit (GRU) network cell.
\[\begin{split}\begin{array}{ll} r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ n_t = \tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\ \end{array}\end{split}\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCHW’ the shape should be (C, H, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCHW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function used in \(n_t\). If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_gru_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.Conv3DGRUCell(input_shape, hidden_channels, i2h_kernel, h2h_kernel, i2h_pad=(0, 0, 0), i2h_dilate=(1, 1, 1), h2h_dilate=(1, 1, 1), i2h_weight_initializer=None, h2h_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', conv_layout='NCDHW', activation='tanh', prefix=None, params=None)[source]¶
Bases: mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvGRUCell
3D Convolutional Gated Recurrent Unit (GRU) network cell.
\[\begin{split}\begin{array}{ll} r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ n_t = \tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\ \end{array}\end{split}\]
- Parameters
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout. For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).
hidden_channels (int) – Number of output channels.
i2h_kernel (int or tuple of int) – Input convolution kernel sizes.
h2h_kernel (int or tuple of int) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.
i2h_pad (int or tuple of int, default (0, 0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1, 1)) – Recurrent convolution dilate.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the input convolutions.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.
i2h_bias_initializer (str or Initializer, default zeros) – Initializer for the input convolution bias vectors.
h2h_bias_initializer (str or Initializer, default zeros) – Initializer for the recurrent convolution bias vectors.
conv_layout (str, default 'NCDHW') – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.
activation (str or gluon.Block, default 'tanh') – Type of activation function used in \(n_t\). If argument type is string, it’s equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_gru_') – Prefix for name of layers (and name of weight if params is None).
params (RNNParams, default None) – Container for weight sharing between cells. Created if None.
-
class mxnet.gluon.contrib.rnn.VariationalDropoutCell(base_cell, drop_inputs=0.0, drop_states=0.0, drop_outputs=0.0)[source]¶
Bases: mxnet.gluon.rnn.rnn_cell.ModifierCell
Applies variational dropout to a base cell (https://arxiv.org/pdf/1512.05287.pdf).
Variational dropout uses the same dropout mask across time steps. It can be applied to RNN inputs, outputs, and states; the masks for these are not shared.
The dropout mask is initialized when stepping forward for the first time and remains the same until .reset() is called. Thus, when using the cell and stepping manually without calling .unroll(), .reset() should be called after each sequence.
- Parameters
base_cell (RecurrentCell) – The cell on which to perform variational dropout.
drop_inputs (float, default 0.) – The dropout rate for inputs. Won’t apply dropout if it equals 0.
drop_states (float, default 0.) – The dropout rate for state inputs on the first state channel. Won’t apply dropout if it equals 0.
drop_outputs (float, default 0.) – The dropout rate for outputs. Won’t apply dropout if it equals 0.
-
unroll(length, inputs, begin_state=None, layout='NTC', merge_outputs=None, valid_length=None)[source]¶
Unrolls an RNN cell across time steps.
- Parameters
length (int) – Number of steps to unroll.
inputs (Symbol, list of Symbol, or None) –
If inputs is a single Symbol (usually the output of Embedding symbol), it should have shape (batch_size, length, …) if layout is ‘NTC’, or (length, batch_size, …) if layout is ‘TNC’.
If inputs is a list of symbols (usually output of previous unroll), they should all have shape (batch_size, …).
begin_state (nested list of Symbol, optional) – Input states created by begin_state() or output state of another cell. Created from begin_state() if None.
layout (str, optional) – layout of input symbol. Only used if inputs is a single Symbol.
merge_outputs (bool, optional) – If False, returns outputs as a list of Symbols. If True, concatenates output across time steps and returns a single symbol with shape (batch_size, length, …) if layout is ‘NTC’, or (length, batch_size, …) if layout is ‘TNC’. If None, output whatever is faster.
valid_length (Symbol, NDArray or None) – valid_length specifies the length of the sequences in the batch without padding. This option is especially useful for building sequence-to-sequence models where the input and output sequences would potentially be padded. If valid_length is None, all sequences are assumed to have the same length. If valid_length is a Symbol or NDArray, it should have shape (batch_size,). The ith element will be the length of the ith sequence in the batch. The last valid state will be returned and the padded outputs will be masked with 0. Note that valid_length must be smaller than or equal to length.
- Returns
outputs (list of Symbol or Symbol) – Symbol (if merge_outputs is True) or list of Symbols (if merge_outputs is False) corresponding to the output from the RNN from this unrolling.
states (list of Symbol) – The new state of this RNN after this unrolling. The type of this symbol is same as the output of begin_state().
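A minimal unrolling sketch, wrapping a plain LSTMCell (all sizes illustrative; dropout is only active in training mode):
>>> base = mx.gluon.rnn.LSTMCell(20)
>>> cell = mx.gluon.contrib.rnn.VariationalDropoutCell(base, drop_inputs=0.5, drop_states=0.5)
>>> cell.initialize()
>>> x = mx.nd.random.uniform(shape=(2, 7, 10))   # (batch, time, feature) for layout 'NTC'
>>> outputs, states = cell.unroll(7, x, layout='NTC', merge_outputs=True)
>>> outputs.shape
(2, 7, 20)
>>> cell.reset()   # draw a fresh dropout mask before the next sequence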
-
class mxnet.gluon.contrib.rnn.LSTMPCell(hidden_size, projection_size, i2h_weight_initializer=None, h2h_weight_initializer=None, h2r_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', input_size=0, prefix=None, params=None)[source]¶
Bases: mxnet.gluon.rnn.rnn_cell.HybridRecurrentCell
Long Short-Term Memory Projected (LSTMP) network cell (https://arxiv.org/abs/1402.1128).
Each call computes the following function:
\[\begin{split}\begin{array}{ll} i_t = \sigma(W_{ii} x_t + b_{ii} + W_{ri} r_{(t-1)} + b_{ri}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{rf} r_{(t-1)} + b_{rf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{rg} r_{(t-1)} + b_{rg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ro} r_{(t-1)} + b_{ro}) \\ c_t = f_t * c_{(t-1)} + i_t * g_t \\ h_t = o_t * \tanh(c_t) \\ r_t = W_{hr} h_t \end{array}\end{split}\]
where \(r_t\) is the projected recurrent activation at time t, \(h_t\) is the hidden state at time t, \(c_t\) is the cell state at time t, \(x_t\) is the input at time t, and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and out gates, respectively.
- Parameters
hidden_size (int) – Number of units in cell state symbol.
projection_size (int) – Number of units in output symbol.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the linear transformation of the inputs.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the linear transformation of the hidden state.
h2r_weight_initializer (str or Initializer) – Initializer for the projection weights matrix, used for the linear transformation of the recurrent state.
i2h_bias_initializer (str or Initializer, default 'zeros') – Initializer for the input bias vector.
h2h_bias_initializer (str or Initializer) – Initializer for the bias vector.
prefix (str, default 'lstmp_') – Prefix for names of Blocks (and name of weight if params is None).
params (Parameter or None) – Container for weight sharing between cells. Created if None.
- Inputs:
data: input tensor with shape (batch_size, input_size).
states: a list of two initial recurrent state tensors, with shape (batch_size, projection_size) and (batch_size, hidden_size) respectively.
- Outputs:
out: output tensor with shape (batch_size, projection_size).
next_states: a list of two output recurrent state tensors, with the same shapes as states.
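A minimal single-step sketch (sizes illustrative; input_size is inferred on the first call):
>>> cell = mx.gluon.contrib.rnn.LSTMPCell(hidden_size=20, projection_size=5)
>>> cell.initialize()
>>> x = mx.nd.random.uniform(shape=(4, 10))
>>> out, [r, c] = cell(x, cell.begin_state(batch_size=4))
>>> out.shape, c.shape
((4, 5), (4, 20))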
Dataset sampler.
-
class mxnet.gluon.contrib.data.sampler.IntervalSampler(length, interval, rollover=True)[source]¶
Bases: mxnet.gluon.data.sampler.Sampler
Samples elements from [0, length) at fixed intervals.
- Parameters
length (int) – Length of the sequence.
interval (int) – The number of items to skip between two samples.
rollover (bool, default True) – Whether to start again from the first skipped item after reaching the end. If True, this sampler starts again from the first skipped item until all items are visited. Otherwise, iteration stops when the end is reached and skipped items are ignored.
Examples
>>> sampler = contrib.data.IntervalSampler(13, interval=3)
>>> list(sampler)
[0, 3, 6, 9, 12, 1, 4, 7, 10, 2, 5, 8, 11]
>>> sampler = contrib.data.IntervalSampler(13, interval=3, rollover=False)
>>> list(sampler)
[0, 3, 6, 9, 12]
Text datasets.
-
class mxnet.gluon.contrib.data.text.WikiText2(root='$MXNET_HOME/datasets/wikitext-2', segment='train', vocab=None, seq_len=35)[source]¶
Bases: mxnet.gluon.contrib.data.text._WikiText
WikiText-2 word-level dataset for language modeling, from Salesforce research.
From https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/
License: Creative Commons Attribution-ShareAlike
Each sample is a vector of length equal to the specified sequence length. At the end of each sentence, an end-of-sentence token ‘<eos>’ is added.
- Parameters
root (str, default $MXNET_HOME/datasets/wikitext-2) – Path to temp folder for storing data.
segment (str, default 'train') – Dataset segment. Options are ‘train’, ‘validation’, ‘test’.
vocab (Vocabulary, default None) – The vocabulary to use for indexing the text dataset. If None, a default vocabulary is created.
seq_len (int, default 35) – The sequence length of each sample, regardless of the sentence boundary.
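A brief usage sketch (the dataset is downloaded to root on the first use; the length depends on the segment and seq_len):
>>> data = mx.gluon.contrib.data.text.WikiText2(segment='validation', seq_len=35)
>>> loader = mx.gluon.data.DataLoader(data, batch_size=16, last_batch='discard')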
-
class mxnet.gluon.contrib.data.text.WikiText103(root='$MXNET_HOME/datasets/wikitext-103', segment='train', vocab=None, seq_len=35)[source]¶
Bases: mxnet.gluon.contrib.data.text._WikiText
WikiText-103 word-level dataset for language modeling, from Salesforce research.
From https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/
License: Creative Commons Attribution-ShareAlike
Each sample is a vector of length equal to the specified sequence length. At the end of each sentence, an end-of-sentence token ‘<eos>’ is added.
- Parameters
root (str, default $MXNET_HOME/datasets/wikitext-103) – Path to temp folder for storing data.
segment (str, default 'train') – Dataset segment. Options are ‘train’, ‘validation’, ‘test’.
vocab (Vocabulary, default None) – The vocabulary to use for indexing the text dataset. If None, a default vocabulary is created.
seq_len (int, default 35) – The sequence length of each sample, regardless of the sentence boundary.
Gluon Estimator Module
-
class mxnet.gluon.contrib.estimator.BatchProcessor[source]¶
Bases: object
BatchProcessor class for plug-and-play fit_batch and evaluate_batch.
During training or validation, data are divided into minibatches for processing. This class provides hooks for training or validating on a minibatch of data. Users may provide customized fit_batch() and evaluate_batch() methods by inheriting from this class and overriding them. A BatchProcessor can be used to replace fit_batch() and evaluate_batch() in the base estimator class.
-
evaluate_batch(estimator, val_batch, batch_axis=0)[source]¶
Evaluate the estimator model on a batch of validation data.
- Parameters
estimator (Estimator) – Reference to the estimator
val_batch (tuple) – Data and label of a batch from the validation data loader.
batch_axis (int, default 0) – Batch axis to split the validation data into devices.
-
fit_batch(estimator, train_batch, batch_axis=0)[source]¶
Trains the estimator model on a batch of training data.
- Parameters
estimator (Estimator) – Reference to the estimator
train_batch (tuple) – Data and label of a batch from the training data loader.
batch_axis (int, default 0) – Batch axis to split the training data into devices.
- Returns
data (List of NDArray) – Sharded data from the batch. Data is sharded with gluon.split_and_load.
label (List of NDArray) – Sharded label from the batch. Labels are sharded with gluon.split_and_load.
pred (List of NDArray) – Prediction on each of the sharded inputs.
loss (List of NDArray) – Loss on each of the sharded inputs.
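A sketch of the override pattern; net and loss_fn are assumed to be defined elsewhere, and the subclass name is illustrative:
>>> from mxnet.gluon.contrib.estimator import BatchProcessor, Estimator
>>> class MyBatchProcessor(BatchProcessor):
...     def fit_batch(self, estimator, train_batch, batch_axis=0):
...         # delegate to the default behaviour, then e.g. inspect the sharded losses
...         data, label, pred, loss = super().fit_batch(estimator, train_batch, batch_axis)
...         return data, label, pred, loss
>>> est = Estimator(net, loss_fn, batch_processor=MyBatchProcessor())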
-
-
class mxnet.gluon.contrib.estimator.CheckpointHandler(model_dir, model_prefix='model', monitor=None, verbose=0, save_best=False, mode='auto', epoch_period=1, batch_period=None, max_checkpoints=5, resume_from_checkpoint=False)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.TrainBegin, mxnet.gluon.contrib.estimator.event_handler.BatchEnd, mxnet.gluon.contrib.estimator.event_handler.EpochEnd
Save the model after a user-defined period.
CheckpointHandler saves the network architecture after the first batch if the model can be fully hybridized, and saves model parameters and trainer states after a user-defined period; by default it saves every epoch.
- Parameters
model_dir (str) – File directory to save all the model related files including model architecture, model parameters, and trainer states.
model_prefix (str default 'model') – Prefix to add for all checkpoint file names.
monitor (EvalMetric, default None) – The metric to monitor to determine whether the model has improved.
verbose (int, default 0) – Verbosity mode; 1 means the user is informed every time a checkpoint is saved.
save_best (bool, default False) – If True, monitor must not be None, and CheckpointHandler will save the model parameters and trainer states with the best monitored value.
mode (str, default 'auto') – One of {auto, min, max}. If save_best=True, this is the comparison to make to determine whether the monitored value has improved. In 'auto' mode, CheckpointHandler will try to use min or max based on the monitored metric name.
epoch_period (int, default 1) – Epoch intervals between saving the network. By default, checkpoints are saved every epoch.
batch_period (int, default None) – Batch intervals between saving the network. By default, checkpoints are not saved based on the number of batches.
max_checkpoints (int, default 5) – Maximum number of checkpoint files to keep in the model_dir, older checkpoints will be removed. Best checkpoint file is not counted.
resume_from_checkpoint (bool, default False) – Whether to resume training from a checkpoint in model_dir. If True and checkpoints are found, CheckpointHandler will load the network parameters and trainer states, and train the remaining epochs and batches.
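A usage sketch; est and train_loader are assumed to exist, and the monitored metric is assumed to be one the estimator already tracks (here, the first training metric):
>>> ckpt = mx.gluon.contrib.estimator.CheckpointHandler(model_dir='./checkpoints', monitor=est.train_metrics[0], save_best=True, mode='max')
>>> est.fit(train_data=train_loader, epochs=5, event_handlers=[ckpt])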
-
class mxnet.gluon.contrib.estimator.EarlyStoppingHandler(monitor, min_delta=0, patience=0, mode='auto', baseline=None)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.TrainBegin, mxnet.gluon.contrib.estimator.event_handler.EpochEnd, mxnet.gluon.contrib.estimator.event_handler.TrainEnd
Stop training early if the monitored value stops improving.
- Parameters
monitor (EvalMetric) – The metric to monitor, and stop training if this metric does not improve.
min_delta (float, default 0) – Minimal change in monitored value to be considered as an improvement.
patience (int, default 0) – Number of epochs to wait for improvement before terminating training.
mode (str, default 'auto') – One of {auto, min, max}; the comparison to make to determine whether the monitored value has improved. In 'auto' mode, the handler will try to use min or max based on the monitored metric name.
baseline (float) – Baseline value to compare the monitored value with.
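A usage sketch; est, the loaders and the monitored validation metric are assumed to exist:
>>> stop = mx.gluon.contrib.estimator.EarlyStoppingHandler(monitor=est.val_metrics[0], patience=3, mode='max')
>>> est.fit(train_data=train_loader, val_data=val_loader, epochs=50, event_handlers=[stop])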
-
class mxnet.gluon.contrib.estimator.Estimator(net, loss, train_metrics=None, val_metrics=None, initializer=None, trainer=None, context=None, val_net=None, val_loss=None, batch_processor=None)[source]¶
Bases: object
Estimator class for easy model training.
Estimator can be used to facilitate the training and validation process.
- Parameters
net (gluon.Block) – The model used for training.
loss (gluon.loss.Loss) – Loss (objective) function to calculate during training.
train_metrics (EvalMetric or list of EvalMetric) – Training metrics for evaluating models on training dataset.
val_metrics (EvalMetric or list of EvalMetric) – Validation metrics for evaluating models on validation dataset.
initializer (Initializer) – Initializer to initialize the network.
trainer (Trainer) – Trainer to apply optimizer on network parameters.
context (Context or list of Context) – Device(s) to run the training on.
val_net (gluon.Block) – The model used for validation. The validation model does not necessarily belong to the same model class as the training model, but the two models typically share the same architecture, so the validation model can reuse the parameters of the training model.
A code example of constructing a val_net that shares its network parameters with the training net is given below:
>>> net = _get_train_network()
>>> val_net = _get_test_network(params=net.collect_params())
>>> net.initialize(ctx=ctx)
>>> est = Estimator(net, loss, val_net=val_net)
Proper namespace matching is required for weight sharing between the two networks. Most networks inheriting from Block can share their parameters correctly. An exception is Sequential networks, for which the Block scope must be specified for correct weight sharing. For naming in the MXNet Gluon API, please refer to https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/naming.html for further information.
val_loss (gluon.loss.Loss) – Loss (objective) function to calculate during validation. If val_loss is None, the same loss function as self.loss is used.
batch_processor (BatchProcessor) – BatchProcessor provides customized fit_batch() and evaluate_batch() methods.
-
evaluate(val_data, batch_axis=0, event_handlers=None)[source]¶
Evaluate model on validation data.
This function calls evaluate_batch() on each of the batches from the validation data loader. Thus, for custom use cases, it’s possible to inherit the estimator class and override evaluate_batch().
- Parameters
val_data (DataLoader) – Validation data loader with data and labels.
batch_axis (int, default 0) – Batch axis to split the validation data into devices.
event_handlers (EventHandler or list of EventHandler) – List of EventHandlers to apply during validation. Besides the event handlers specified here, a default MetricHandler and a LoggingHandler will be added if not specified explicitly.
-
fit(train_data, val_data=None, epochs=None, event_handlers=None, batches=None, batch_axis=0)[source]¶
Trains the model with a given DataLoader for a specified number of epochs or batches. The batch size is inferred from the data loader’s batch_size.
This function calls fit_batch() on each of the batches from the training data loader. Thus, for custom use cases, it’s possible to inherit the estimator class and override fit_batch().
- Parameters
train_data (DataLoader) – Training data loader with data and labels.
val_data (DataLoader, default None) – Validation data loader with data and labels.
epochs (int, default None) – Number of epochs to iterate on the training data. You can specify one and only one type of iteration (epochs or batches).
event_handlers (EventHandler or list of EventHandler) – List of EventHandlers to apply during training. Besides the event handlers specified here, a StoppingHandler, LoggingHandler and MetricHandler will be added by default if not yet specified manually. If validation data is provided, a ValidationHandler is also added if not already specified.
batches (int, default None) – Number of batches to iterate on the training data. You can specify one and only one type of iteration (epochs or batches).
batch_axis (int, default 0) – Batch axis to split the training data into devices.
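A minimal end-to-end sketch; net, loss_fn and the data loaders are assumed to be defined elsewhere:
>>> est = mx.gluon.contrib.estimator.Estimator(net=net, loss=loss_fn, train_metrics=mx.metric.Accuracy())
>>> est.fit(train_data=train_loader, val_data=val_loader, epochs=2)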
-
logger = None¶
logging.Logger object associated with the Estimator.
The logger is used for all logs generated by this estimator and its handlers. A new logging.Logger is created during Estimator construction and configured to write all logs with level logging.INFO or higher to sys.stdout.
You can modify the logging settings using the standard Python methods. For example, to save logs to a file in addition to printing them to stdout, you can attach a logging.FileHandler to the logger.
>>> est = Estimator(net, loss)
>>> import logging
>>> est.logger.addHandler(logging.FileHandler(filename))
-
class mxnet.gluon.contrib.estimator.GradientUpdateHandler(priority=-2000)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.BatchEnd
Gradient update handler that applies gradients to the network weights.
GradientUpdateHandler takes a priority level and updates the weight parameters at the end of each batch.
- Parameters
priority (scalar, default -2000) – Priority level of the gradient update handler. Priority levels are sorted in ascending order: the lower the number, the higher the priority of the handler.
-
class mxnet.gluon.contrib.estimator.LoggingHandler(log_interval='epoch', metrics=None, priority=inf)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.TrainBegin, mxnet.gluon.contrib.estimator.event_handler.TrainEnd, mxnet.gluon.contrib.estimator.event_handler.EpochBegin, mxnet.gluon.contrib.estimator.event_handler.EpochEnd, mxnet.gluon.contrib.estimator.event_handler.BatchBegin, mxnet.gluon.contrib.estimator.event_handler.BatchEnd
Basic logging handler that applies to every Gluon estimator by default.
LoggingHandler logs hyper-parameters, training statistics, and other useful information during training.
- Parameters
log_interval (int or str, default 'epoch') – Logging interval during training. If 'epoch', metrics are displayed every epoch; if an integer k, metrics are displayed every k batches.
metrics (list of EvalMetrics) – Metrics to be logged, logged at batch end, epoch end, train end.
priority (scalar, default np.Inf) – Priority level of the LoggingHandler. Priority levels are sorted in ascending order: the lower the number, the higher the priority of the handler.
-
class mxnet.gluon.contrib.estimator.MetricHandler(metrics, priority=-1000)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.EpochBegin, mxnet.gluon.contrib.estimator.event_handler.BatchEnd
Metric handler that updates metric values at batch end.
MetricHandler takes model predictions and true labels and updates the metrics; it also updates the metric wrapper for the loss with loss values. Validation loss and metrics are handled by ValidationHandler.
- Parameters
- Parameters
metrics (List of EvalMetrics) – Metrics to be updated at batch end.
priority (scalar) – Priority level of the MetricHandler. Priority levels are sorted in ascending order: the lower the number, the higher the priority of the handler.
-
class mxnet.gluon.contrib.estimator.StoppingHandler(max_epoch=None, max_batch=None)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.TrainBegin, mxnet.gluon.contrib.estimator.event_handler.BatchEnd, mxnet.gluon.contrib.estimator.event_handler.EpochEnd
Stop training when the maximum number of batches or epochs is reached.
- Parameters
max_epoch (int, default None) – Number of maximum epochs to train.
max_batch (int, default None) – Number of maximum batches to train.
-
class mxnet.gluon.contrib.estimator.ValidationHandler(val_data, eval_fn, epoch_period=1, batch_period=None, priority=-1000, event_handlers=None)[source]¶
Bases: mxnet.gluon.contrib.estimator.event_handler.TrainBegin, mxnet.gluon.contrib.estimator.event_handler.BatchEnd, mxnet.gluon.contrib.estimator.event_handler.EpochEnd
Validation handler that evaluates the model on a validation dataset.
ValidationHandler takes a validation dataset, an evaluation function, metrics to be evaluated, and how often to run the validation. You can provide a custom evaluation function or use the one provided by Estimator.
- Parameters
- Parameters
val_data (DataLoader) – Validation data set to run evaluation.
eval_fn (function) – A function defines how to run evaluation and calculate loss and metrics.
epoch_period (int, default 1) – How often to run validation at epoch end; by default, ValidationHandler validates every epoch.
batch_period (int, default None) – How often to run validation at batch end; by default, ValidationHandler does not validate at batch end.
priority (scalar, default -1000) – Priority level of the ValidationHandler. Priority levels are sorted in ascending order: the lower the number, the higher the priority of the handler.
event_handlers (EventHandler or list of EventHandler) – List of EventHandlers to apply during validation. This argument is used by the self.eval_fn function in order to process customized event handlers.
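A usage sketch; est and the loaders are assumed to exist, and est.evaluate is used as the evaluation function:
>>> vh = mx.gluon.contrib.estimator.ValidationHandler(val_data=val_loader, eval_fn=est.evaluate, epoch_period=1)
>>> est.fit(train_data=train_loader, epochs=3, event_handlers=[vh])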