symbol.sparse
Sparse Symbol API of MXNet.
Functions
| Function | Description |
| --- | --- |
| ElementWiseSum | Adds all input arguments element-wise. |
| Embedding | Maps integer indices to vector representations (embeddings). |
| FullyConnected | Applies a linear transformation: \(Y = XW^T + b\). |
| LinearRegressionOutput | Computes and optimizes for squared loss during backward propagation. |
| LogisticRegressionOutput | Applies a logistic function to the input. |
| MAERegressionOutput | Computes mean absolute error of the input. |
| abs | Returns element-wise absolute value of the input. |
| adagrad_update | Update function for AdaGrad optimizer. |
| adam_update | Update function for Adam optimizer. |
| add_n | Adds all input arguments element-wise. |
| arccos | Returns element-wise inverse cosine of the input array. |
| arccosh | Returns the element-wise inverse hyperbolic cosine of the input array. |
| arcsin | Returns element-wise inverse sine of the input array. |
| arcsinh | Returns the element-wise inverse hyperbolic sine of the input array. |
| arctan | Returns element-wise inverse tangent of the input array. |
| arctanh | Returns the element-wise inverse hyperbolic tangent of the input array. |
| broadcast_add | Returns element-wise sum of the input arrays with broadcasting. |
| broadcast_div | Returns element-wise division of the input arrays with broadcasting. |
| broadcast_minus | Returns element-wise difference of the input arrays with broadcasting. |
| broadcast_mul | Returns element-wise product of the input arrays with broadcasting. |
| broadcast_plus | Returns element-wise sum of the input arrays with broadcasting. |
| broadcast_sub | Returns element-wise difference of the input arrays with broadcasting. |
| cast_storage | Casts tensor storage type to the new type. |
| cbrt | Returns element-wise cube-root value of the input. |
| ceil | Returns element-wise ceiling of the input. |
| clip | Clips (limits) the values in an array. |
| concat | Joins input arrays along a given axis. |
| cos | Computes the element-wise cosine of the input array. |
| cosh | Returns the hyperbolic cosine of the input array, computed element-wise. |
| degrees | Converts each element of the input array from radians to degrees. |
| dot | Dot product of two arrays. |
| elemwise_add | Adds arguments element-wise. |
| elemwise_div | Divides arguments element-wise. |
| elemwise_mul | Multiplies arguments element-wise. |
| elemwise_sub | Subtracts arguments element-wise. |
| exp | Returns element-wise exponential value of the input. |
| expm1 | Returns exp(x) - 1 computed element-wise on the input. |
| fix | Returns element-wise rounded value to the nearest integer towards zero of the input. |
| floor | Returns element-wise floor of the input. |
| ftrl_update | Update function for Ftrl optimizer. |
| gamma | Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array. |
| gammaln | Returns element-wise log of the absolute value of the gamma function of the input. |
| log | Returns element-wise natural logarithmic value of the input. |
| log10 | Returns element-wise base-10 logarithmic value of the input. |
| log1p | Returns element-wise log(1 + x) value of the input. |
| log2 | Returns element-wise base-2 logarithmic value of the input. |
| make_loss | Make your own loss function in network construction. |
| mean | Computes the mean of array elements over given axes. |
| negative | Numerical negative of the argument, element-wise. |
| norm | Computes the norm on an NDArray. |
| radians | Converts each element of the input array from degrees to radians. |
| relu | Computes rectified linear activation. |
| retain | Picks rows specified by a user-provided index array from a row_sparse matrix and saves them in the output sparse matrix. |
| rint | Returns element-wise rounded value to the nearest integer of the input. |
| round | Returns element-wise rounded value to the nearest integer of the input. |
| rsqrt | Returns element-wise inverse square-root value of the input. |
| sgd_mom_update | Momentum update function for Stochastic Gradient Descent (SGD) optimizer. |
| sgd_update | Update function for Stochastic Gradient Descent (SGD) optimizer. |
| sigmoid | Computes sigmoid of x element-wise. |
| sign | Returns element-wise sign of the input. |
| sin | Computes the element-wise sine of the input array. |
| sinh | Returns the hyperbolic sine of the input array, computed element-wise. |
| slice | Slices a region of the array. |
| sqrt | Returns element-wise square-root value of the input. |
| square | Returns element-wise squared value of the input. |
| stop_gradient | Stops gradient computation. |
| sum | Computes the sum of array elements over given axes. |
| tan | Computes the element-wise tangent of the input array. |
| tanh | Returns the hyperbolic tangent of the input array, computed element-wise. |
| trunc | Returns the element-wise truncated value of the input. |
| where | Returns the elements, either from x or y, depending on the condition. |
| zeros_like | Returns an array of zeros with the same shape, type and storage type as the input array. |
mxnet.symbol.sparse.ElementWiseSum(*args, **kwargs)
Adds all input arguments element-wise.
\[\text{add\_n}(a_1, a_2, \ldots, a_n) = a_1 + a_2 + \cdots + a_n\]

add_n is potentially more efficient than calling add n times.

The storage type of the add_n output depends on the storage types of the inputs:
- add_n(row_sparse, row_sparse, ..) = row_sparse
- add_n(default, csr, default) = default
- add_n(any input combination longer than 4 (>4) with at least one default type) = default
- otherwise, add_n falls back to default storage for all inputs and produces an output with default storage

Defined in src/operator/tensor/elemwise_sum.cc:L156
This function supports a variable number of positional inputs.
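A quick sketch of these storage-type rules, assuming an MXNet 1.x install; it uses the imperative ndarray.sparse counterpart of add_n so the resulting storage types can be printed directly, and the toy arrays are illustrative only:

```python
import mxnet as mx

# Two row_sparse inputs -> row_sparse output (first rule above).
a = mx.nd.array([[0, 0], [1, 2]]).tostype('row_sparse')
b = mx.nd.array([[3, 0], [0, 4]]).tostype('row_sparse')
print(mx.nd.sparse.add_n(a, b).stype)     # row_sparse

# Mixing in a default-storage input falls back to default storage.
c = mx.nd.ones((2, 2))
print(mx.nd.sparse.add_n(a, b, c).stype)  # default
```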
mxnet.symbol.sparse.Embedding(data=None, weight=None, input_dim=_Null, output_dim=_Null, dtype=_Null, sparse_grad=_Null, name=None, attr=None, out=None, **kwargs)
Maps integer indices to vector representations (embeddings).
This operator maps words to real-valued vectors in a high-dimensional space, called word embeddings. These embeddings can capture semantic and syntactic properties of the words. For example, it has been noted that in the learned embedding spaces, similar words tend to be close to each other and dissimilar words far apart.
For an input array of shape (d1, …, dK), the shape of an output array is (d1, …, dK, output_dim). All the input values should be integers in the range [0, input_dim).
If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be (ip0, op0).
When “sparse_grad” is False, if any index mentioned is too large, it is replaced by the index that addresses the last vector in an embedding matrix. When “sparse_grad” is True, an error will be raised if invalid indices are found.
Examples:
```
input_dim = 4
output_dim = 5

// Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
y = [[  0.,   1.,   2.,   3.,   4.],
     [  5.,   6.,   7.,   8.,   9.],
     [ 10.,  11.,  12.,  13.,  14.],
     [ 15.,  16.,  17.,  18.,  19.]]

// Input array x represents n-grams(2-gram). So, x = [(w1,w3), (w0,w2)]
x = [[ 1., 3.],
     [ 0., 2.]]

// Mapped input x to its vector representation y.
Embedding(x, y, 4, 5) = [[[  5.,   6.,   7.,   8.,   9.],
                          [ 15.,  16.,  17.,  18.,  19.]],
                         [[  0.,   1.,   2.,   3.,   4.],
                          [ 10.,  11.,  12.,  13.,  14.]]]
```
The storage type of weight can be either row_sparse or default.
Note: If "sparse_grad" is set to True, the storage type of the gradient w.r.t. weights will be "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that lazy updates are turned on by default, which may perform differently from standard updates. For more details, please check the Optimization API at: https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
Defined in src/operator/tensor/indexing_op.cc:L598
- Parameters
  - data (Symbol) – The input array to the embedding operator.
  - weight (Symbol) – The embedding weight matrix.
  - input_dim (int, required) – Vocabulary size of the input indices.
  - output_dim (int, required) – Dimension of the embedding vectors.
  - dtype ({'bfloat16', 'float16', 'float32', 'float64', 'int32', 'int64', 'int8', 'uint8'}, optional, default='float32') – Data type of weight.
  - sparse_grad (boolean, optional, default=0) – Compute row sparse gradient in the backward calculation. If set to True, the grad's storage type is row_sparse.
  - name (string, optional.) – Name of the resulting symbol.
- Returns
  The result symbol.
- Return type
  Symbol
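A brief shape-level sketch of the operator, assuming MXNet 1.x and the conventional import mxnet as mx; the variable names are illustrative only:

```python
import mxnet as mx

# Build the symbolic graph: integer indices -> output_dim-sized vectors.
data = mx.sym.Variable('data')
weight = mx.sym.Variable('weight')
embed = mx.sym.sparse.Embedding(data=data, weight=weight,
                                input_dim=4, output_dim=5,
                                sparse_grad=True)

# For a (2, 2) index array the output is (2, 2, 5), and the weight
# matrix is inferred as (input_dim, output_dim) = (4, 5).
arg_shapes, out_shapes, _ = embed.infer_shape(data=(2, 2))
print(dict(zip(embed.list_arguments(), arg_shapes)))  # {'data': (2, 2), 'weight': (4, 5)}
print(out_shapes)                                     # [(2, 2, 5)]
```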
mxnet.symbol.sparse.FullyConnected(data=None, weight=None, bias=None, num_hidden=_Null, no_bias=_Null, flatten=_Null, name=None, attr=None, out=None, **kwargs)
Applies a linear transformation: \(Y = XW^T + b\).
If flatten is set to true, then the shapes are:
- data: (batch_size, x1, x2, …, xn)
- weight: (num_hidden, x1 * x2 * … * xn)
- bias: (num_hidden,)
- out: (batch_size, num_hidden)

If flatten is set to false, then the shapes are:
- data: (x1, x2, …, xn, input_dim)
- weight: (num_hidden, input_dim)
- bias: (num_hidden,)
- out: (x1, x2, …, xn, num_hidden)

The learnable parameters include both weight and bias. If no_bias is set to true, the bias term is ignored.

Note: The sparse support for FullyConnected is limited to forward evaluation with row_sparse weight and bias, where the length of weight.indices and bias.indices must be equal to num_hidden. This could be useful for model inference with row_sparse weights trained with importance sampling or noise contrastive estimation. To compute a linear transformation with 'csr' sparse data, sparse.dot is recommended instead of sparse.FullyConnected.

Defined in src/operator/nn/fully_connected.cc:L287
- Parameters
  - data (Symbol) – Input data.
  - weight (Symbol) – Weight matrix.
  - bias (Symbol) – Bias parameter.
  - num_hidden (int, required) – Number of hidden nodes of the output.
  - no_bias (boolean, optional, default=0) – Whether to disable bias parameter.
  - flatten (boolean, optional, default=1) – Whether to collapse all but the first axis of the input data tensor.
  - name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.LinearRegressionOutput(data=None, label=None, grad_scale=_Null, name=None, attr=None, out=None, **kwargs)
Computes and optimizes for squared loss during backward propagation. Just outputs data during forward propagation.
If \(\hat{y}_i\) is the predicted value of the i-th sample, and \(y_i\) is the corresponding target value, then the squared loss estimated over \(n\) samples is defined as
\(\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2^2\)
Note
Use the LinearRegressionOutput as the final output layer of a net.
The storage type of label can be default or csr:
LinearRegressionOutput(default, default) = default
LinearRegressionOutput(default, csr) = default
By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.
Defined in src/operator/regression_output.cc:L92
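A minimal sketch of wiring this operator as the output layer with a csr label, as the storage-type table above allows; the variable names are assumptions:

import mxnet as mx

pred = mx.sym.var('pred')                  # dense predictions
label = mx.sym.var('label', stype='csr')   # sparse regression targets
loss = mx.sym.sparse.LinearRegressionOutput(data=pred, label=label)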
- mxnet.symbol.sparse.LogisticRegressionOutput(data=None, label=None, grad_scale=_Null, name=None, attr=None, out=None, **kwargs)
Applies a logistic function to the input.
The logistic function, also known as the sigmoid function, is computed as \(\frac{1}{1+exp(-\textbf{x})}\).
Commonly, the sigmoid is used to squash the real-valued output of a linear model \(w^{T}x+b\) into the [0,1] range so that it can be interpreted as a probability. It is suitable for binary classification or probability prediction tasks.
Note
Use the LogisticRegressionOutput as the final output layer of a net.
The storage type of label can be default or csr:
LogisticRegressionOutput(default, default) = default
LogisticRegressionOutput(default, csr) = default
The loss function used is the Binary Cross Entropy Loss:
\(-{(y\log(p) + (1 - y)\log(1 - p))}\)
Where y is the ground truth probability of positive outcome for a given example, and p the probability predicted by the model. By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.
Defined in src/operator/regression_output.cc:L152
- mxnet.symbol.sparse.MAERegressionOutput(data=None, label=None, grad_scale=_Null, name=None, attr=None, out=None, **kwargs)
Computes mean absolute error of the input.
MAE is a risk metric corresponding to the expected value of the absolute error.
If \(\hat{y}_i\) is the predicted value of the i-th sample, and \(y_i\) is the corresponding target value, then the mean absolute error (MAE) estimated over \(n\) samples is defined as
\(\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1\)
Note
Use the MAERegressionOutput as the final output layer of a net.
The storage type of label can be default or csr:
MAERegressionOutput(default, default) = default
MAERegressionOutput(default, csr) = default
By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.
Defined in src/operator/regression_output.cc:L120
- mxnet.symbol.sparse.abs(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise absolute value of the input.
Example:
abs([-2, 0, 3]) = [2, 0, 3]
The storage type of abs output depends upon the input storage type:
abs(default) = default
abs(row_sparse) = row_sparse
abs(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L720
- mxnet.symbol.sparse.adagrad_update(weight=None, grad=None, history=None, lr=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)
Update function for AdaGrad optimizer.
Referenced from Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, and available at http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf.
Updates are applied by:
rescaled_grad = clip(grad * rescale_grad, clip_gradient)
history = history + square(rescaled_grad)
w = w - learning_rate * rescaled_grad / sqrt(history + epsilon)
Note that non-zero values for the weight decay option are not supported.
Defined in src/operator/optimizer_op.cc:L909
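A worked instance of the update above in plain NumPy (the learning rate and values are assumptions), useful for checking the arithmetic:

import numpy as np

lr, eps = 0.1, 1e-7
w = np.array([1.0, 2.0])
grad = np.array([0.5, -1.0])
history = np.zeros(2)

history += grad ** 2                     # accumulate squared gradient
w -= lr * grad / np.sqrt(history + eps)  # per-coordinate scaled step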
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
history (Symbol) – History
lr (float, required) – Learning rate
epsilon (float, optional, default=1.00000001e-07) – epsilon
wd (float, optional, default=0) – weight decay
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.adam_update(weight=None, grad=None, mean=None, var=None, lr=_Null, beta1=_Null, beta2=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)
Update function for Adam optimizer. Adam is seen as a generalization of AdaGrad.
Adam update consists of the following steps, where g represents gradient and m, v are 1st and 2nd order moment estimates (mean and variance).
\[\begin{split}g_t = \nabla J(W_{t-1})\\ m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\ v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\ W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }\end{split}\]
It updates the weights using:
m = beta1*m + (1-beta1)*grad
v = beta2*v + (1-beta2)*(grad**2)
w += - learning_rate * m / (sqrt(v) + epsilon)
However, if grad’s storage type is row_sparse, lazy_update is True and the storage type of weight is the same as those of m and v, only the row slices whose indices appear in grad.indices are updated (for w, m and v):

for row in grad.indices:
    m[row] = beta1*m[row] + (1-beta1)*grad[row]
    v[row] = beta2*v[row] + (1-beta2)*(grad[row]**2)
    w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)
Defined in src/operator/optimizer_op.cc:L688
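A sketch of a lazy update with a row_sparse gradient, using the imperative counterpart mx.nd.adam_update; the shapes and values are assumptions:

import mxnet as mx

w = mx.nd.ones((4, 2)).tostype('row_sparse')
m = mx.nd.zeros((4, 2)).tostype('row_sparse')
v = mx.nd.zeros((4, 2)).tostype('row_sparse')

g = mx.nd.zeros((4, 2))
g[1] = 0.5                      # gradient only on row 1
g = g.tostype('row_sparse')

# With lazy_update=True, only row 1 of w, m and v is touched.
mx.nd.adam_update(w, g, m, v, lr=0.01, lazy_update=True, out=w)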
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mean (Symbol) – Moving mean
var (Symbol) – Moving variance
lr (float, required) – Learning rate
beta1 (float, optional, default=0.899999976) – The decay rate for the 1st moment estimates.
beta2 (float, optional, default=0.999000013) – The decay rate for the 2nd moment estimates.
epsilon (float, optional, default=9.99999994e-09) – A small constant for numerical stability.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and all of w, m and v have the same stype
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.add_n(*args, **kwargs)
Adds all input arguments element-wise.
\[add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n\]
add_n is potentially more efficient than calling add n times.
The storage type of add_n output depends on storage types of inputs:
add_n(row_sparse, row_sparse, ..) = row_sparse
add_n(default, csr, default) = default
add_n(any input combination with more than 4 inputs and at least one default type) = default
otherwise, add_n falls all inputs back to default storage and generates default-storage output
Defined in src/operator/tensor/elemwise_sum.cc:L156
This function supports a variable number of positional inputs.
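A short storage-type check with the imperative counterpart (values assumed):

import mxnet as mx

a = mx.nd.zeros((4, 2)); a[0] = 1
b = mx.nd.zeros((4, 2)); b[2] = 3
out = mx.nd.add_n(a.tostype('row_sparse'), b.tostype('row_sparse'))
print(out.stype)  # row_sparse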
- mxnet.symbol.sparse.arccos(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise inverse cosine of the input array.
The input should be in range [-1, 1]. The output is in the closed interval \([0, \pi]\).
\[arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]\]
The storage type of arccos output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L233
- mxnet.symbol.sparse.arccosh(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise inverse hyperbolic cosine of the input array.
The storage type of arccosh output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L535
- mxnet.symbol.sparse.arcsin(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise inverse sine of the input array.
The input should be in the range [-1, 1]. The output is in the closed interval of [\(-\pi/2\), \(\pi/2\)].
\[arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]\]
The storage type of arcsin output depends upon the input storage type:
arcsin(default) = default
arcsin(row_sparse) = row_sparse
arcsin(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L187
- mxnet.symbol.sparse.arcsinh(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise inverse hyperbolic sine of the input array.
The storage type of arcsinh output depends upon the input storage type:
arcsinh(default) = default
arcsinh(row_sparse) = row_sparse
arcsinh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L494
- mxnet.symbol.sparse.arctan(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise inverse tangent of the input array.
The output is in the closed interval \([-\pi/2, \pi/2]\).
\[arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]\]
The storage type of arctan output depends upon the input storage type:
arctan(default) = default
arctan(row_sparse) = row_sparse
arctan(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L282
- mxnet.symbol.sparse.arctanh(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise inverse hyperbolic tangent of the input array.
The storage type of arctanh output depends upon the input storage type:
arctanh(default) = default
arctanh(row_sparse) = row_sparse
arctanh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L579
- mxnet.symbol.sparse.broadcast_add(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise sum of the input arrays with broadcasting.
broadcast_plus is an alias to the function broadcast_add.
Example:
x = [[ 1., 1., 1.],
     [ 1., 1., 1.]]

y = [[ 0.],
     [ 1.]]

broadcast_add(x, y) = [[ 1., 1., 1.],
                       [ 2., 2., 2.]]

broadcast_plus(x, y) = [[ 1., 1., 1.],
                        [ 2., 2., 2.]]
Supported sparse operations:
broadcast_add(csr, dense(1D)) = dense
broadcast_add(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58
- mxnet.symbol.sparse.broadcast_div(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise division of the input arrays with broadcasting.
Example:
x = [[ 6., 6., 6.],
     [ 6., 6., 6.]]

y = [[ 2.],
     [ 3.]]

broadcast_div(x, y) = [[ 3., 3., 3.],
                       [ 2., 2., 2.]]
Supported sparse operations:
broadcast_div(csr, dense(1D)) = csr
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L187
- mxnet.symbol.sparse.broadcast_minus(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise difference of the input arrays with broadcasting.
broadcast_minus is an alias to the function broadcast_sub.
Example:
x = [[ 1., 1., 1.],
     [ 1., 1., 1.]]

y = [[ 0.],
     [ 1.]]

broadcast_sub(x, y) = [[ 1., 1., 1.],
                       [ 0., 0., 0.]]

broadcast_minus(x, y) = [[ 1., 1., 1.],
                         [ 0., 0., 0.]]
Supported sparse operations:
broadcast_sub/minus(csr, dense(1D)) = dense
broadcast_sub/minus(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106
- mxnet.symbol.sparse.broadcast_mul(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise product of the input arrays with broadcasting.
Example:
x = [[ 1., 1., 1.],
     [ 1., 1., 1.]]

y = [[ 0.],
     [ 1.]]

broadcast_mul(x, y) = [[ 0., 0., 0.],
                       [ 1., 1., 1.]]
Supported sparse operations:
broadcast_mul(csr, dense(1D)) = csr
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L146
- mxnet.symbol.sparse.broadcast_plus(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise sum of the input arrays with broadcasting.
broadcast_plus is an alias to the function broadcast_add.
Example:
x = [[ 1., 1., 1.],
     [ 1., 1., 1.]]

y = [[ 0.],
     [ 1.]]

broadcast_add(x, y) = [[ 1., 1., 1.],
                       [ 2., 2., 2.]]

broadcast_plus(x, y) = [[ 1., 1., 1.],
                        [ 2., 2., 2.]]
Supported sparse operations:
broadcast_add(csr, dense(1D)) = dense
broadcast_add(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58
- mxnet.symbol.sparse.broadcast_sub(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise difference of the input arrays with broadcasting.
broadcast_minus is an alias to the function broadcast_sub.
Example:
x = [[ 1., 1., 1.],
     [ 1., 1., 1.]]

y = [[ 0.],
     [ 1.]]

broadcast_sub(x, y) = [[ 1., 1., 1.],
                       [ 0., 0., 0.]]

broadcast_minus(x, y) = [[ 1., 1., 1.],
                         [ 0., 0., 0.]]
Supported sparse operations:
broadcast_sub/minus(csr, dense(1D)) = dense
broadcast_sub/minus(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106
- mxnet.symbol.sparse.cast_storage(data=None, stype=_Null, name=None, attr=None, out=None, **kwargs)
Casts tensor storage type to the new type.
When an NDArray with default storage type is cast to csr or row_sparse storage, the result is compact, which means:
for csr, zero values will not be retained
for row_sparse, row slices of all zeros will not be retained
The storage type of cast_storage output depends on stype parameter:
cast_storage(csr, ‘default’) = default
cast_storage(row_sparse, ‘default’) = default
cast_storage(default, ‘csr’) = csr
cast_storage(default, ‘row_sparse’) = row_sparse
cast_storage(csr, ‘csr’) = csr
cast_storage(row_sparse, ‘row_sparse’) = row_sparse
Example:
dense = [[ 0.,  1.,  0.],
         [ 2.,  0.,  3.],
         [ 0.,  0.,  0.],
         [ 0.,  0.,  0.]]

# cast to row_sparse storage type
rsp = cast_storage(dense, 'row_sparse')
rsp.indices = [0, 1]
rsp.values = [[ 0.,  1.,  0.],
              [ 2.,  0.,  3.]]

# cast to csr storage type
csr = cast_storage(dense, 'csr')
csr.indices = [1, 0, 2]
csr.values = [ 1.,  2.,  3.]
csr.indptr = [0, 1, 3, 3, 3]
Defined in src/operator/tensor/cast_storage.cc:L71
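The example round-trips with the imperative counterpart; a minimal sketch:

import mxnet as mx

dense = mx.nd.array([[0, 1, 0], [2, 0, 3], [0, 0, 0], [0, 0, 0]])
csr = mx.nd.cast_storage(dense, 'csr')
print(csr.indptr.asnumpy())                # [0 1 3 3 3]
back = mx.nd.cast_storage(csr, 'default')  # densify again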
- mxnet.symbol.sparse.cbrt(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise cube-root value of the input.
\[cbrt(x) = \sqrt[3]{x}\]
Example:
cbrt([1, 8, -125]) = [1, 2, -5]
The storage type of cbrt output depends upon the input storage type:
cbrt(default) = default
cbrt(row_sparse) = row_sparse
cbrt(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L270
- mxnet.symbol.sparse.ceil(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise ceiling of the input.
The ceil of the scalar x is the smallest integer i, such that i >= x.
Example:
ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 2., 2., 3.]
The storage type of ceil output depends upon the input storage type:
ceil(default) = default
ceil(row_sparse) = row_sparse
ceil(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L817
- mxnet.symbol.sparse.clip(data=None, a_min=_Null, a_max=_Null, name=None, attr=None, out=None, **kwargs)
Clips (limits) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. Clipping x between a_min and a_max would be:
\[clip(x, a_min, a_max) = \max(\min(x, a_max), a_min)\]
Example:
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
clip(x,1,8) = [ 1., 1., 2., 3., 4., 5., 6., 7., 8., 8.]
The storage type of clip output depends on storage types of inputs and the a_min, a_max parameter values:
clip(default) = default
clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
clip(csr, a_min <= 0, a_max >= 0) = csr
clip(row_sparse, a_min < 0, a_max < 0) = default
clip(row_sparse, a_min > 0, a_max > 0) = default
clip(csr, a_min < 0, a_max < 0) = csr
clip(csr, a_min > 0, a_max > 0) = csr
Defined in src/operator/tensor/matrix_op.cc:L677
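A quick check of the storage-type rule with the imperative counterpart (values assumed): a clipping range that contains zero preserves row_sparse storage.

import mxnet as mx

x = mx.nd.array([[0, 9], [0, 0], [-7, 0]]).tostype('row_sparse')
print(mx.nd.clip(x, -1, 8).stype)  # row_sparse, since a_min <= 0 <= a_max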
- mxnet.symbol.sparse.concat(*data, **kwargs)
Joins input arrays along a given axis.
Note
Concat is deprecated. Use concat instead.
The dimensions of the input arrays should be the same except the axis along which they will be concatenated. The dimension of the output array along the concatenated axis will be equal to the sum of the corresponding dimensions of the input arrays.
The storage type of concat output depends on storage types of inputs:
concat(csr, csr, …, csr, dim=0) = csr
otherwise, concat generates output with default storage
Example:
x = [[1,1],[2,2]]
y = [[3,3],[4,4],[5,5]]
z = [[6,6],[7,7],[8,8]]

concat(x,y,z,dim=0) = [[ 1.,  1.],
                       [ 2.,  2.],
                       [ 3.,  3.],
                       [ 4.,  4.],
                       [ 5.,  5.],
                       [ 6.,  6.],
                       [ 7.,  7.],
                       [ 8.,  8.]]

Note that you cannot concat x,y,z along dimension 1 since dimension 0 is not the same for all the input arrays.

concat(y,z,dim=1) = [[ 3.,  3.,  6.,  6.],
                     [ 4.,  4.,  7.,  7.],
                     [ 5.,  5.,  8.,  8.]]
Defined in src/operator/nn/concat.cc:L385
This function supports a variable number of positional inputs.
- mxnet.symbol.sparse.cos(data=None, name=None, attr=None, out=None, **kwargs)
Computes the element-wise cosine of the input array.
The input should be in radians (\(2\pi\) rad equals 360 degrees).
\[cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]\]
The storage type of cos output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L90
- mxnet.symbol.sparse.cosh(data=None, name=None, attr=None, out=None, **kwargs)
Returns the hyperbolic cosine of the input array, computed element-wise.
\[cosh(x) = 0.5\times(exp(x) + exp(-x))\]
The storage type of cosh output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L409
- mxnet.symbol.sparse.degrees(data=None, name=None, attr=None, out=None, **kwargs)
Converts each element of the input array from radians to degrees.
\[degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]\]
The storage type of degrees output depends upon the input storage type:
degrees(default) = default
degrees(row_sparse) = row_sparse
degrees(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L332
- mxnet.symbol.sparse.dot(lhs=None, rhs=None, transpose_a=_Null, transpose_b=_Null, forward_stype=_Null, name=None, attr=None, out=None, **kwargs)
Dot product of two arrays.
dot’s behavior depends on the input array dimensions:
1-D arrays: inner product of vectors
2-D arrays: matrix multiplication
N-D arrays: a sum product over the last axis of the first input and the first axis of the second input
For example, given 3-D x with shape (n,m,k) and y with shape (k,r,s), the result array will have shape (n,m,r,s). It is computed by:
dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])
Example:
x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))

dot(x,y)[0,0,1,1] = 0
sum(x[0,0,:]*y[:,1,1]) = 0
The storage type of dot output depends on storage types of inputs, transpose option and forward_stype option for output storage type. Implemented sparse operations include:
dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
dot(csr, default, transpose_a=True) = default
dot(csr, default, transpose_a=True) = row_sparse
dot(csr, default) = default
dot(csr, row_sparse) = default
dot(default, csr) = csr (CPU only)
dot(default, csr, forward_stype=’default’) = default
dot(default, csr, transpose_b=True, forward_stype=’default’) = default
If the combination of input storage types and forward_stype does not match any of the above patterns, dot will fall back and generate output with default storage.
Note
If the storage type of the lhs is “csr”, the storage type of the gradient w.r.t. rhs will be “row_sparse”. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that lazy updates are turned on by default, which may perform differently from standard updates. For more details, please check the Optimization API at: https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
Defined in src/operator/tensor/dot.cc:L77
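A minimal csr-times-dense sketch with the imperative counterpart (values assumed); per the table above, the forward output is dense:

import mxnet as mx

lhs = mx.nd.array([[1, 0, 2], [0, 0, 3]]).tostype('csr')
rhs = mx.nd.ones((3, 4))
out = mx.nd.sparse.dot(lhs, rhs)
print(out.stype, out.shape)  # default (2, 4)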
- Parameters
lhs (Symbol) – The first input
rhs (Symbol) – The second input
transpose_a (boolean, optional, default=0) – If true then transpose the first input before dot.
transpose_b (boolean, optional, default=0) – If true then transpose the second input before dot.
forward_stype ({None, 'csr', 'default', 'row_sparse'}, optional, default='None') – The desired storage type of the forward output given by user; if the combination of input storage types and this hint does not match any implemented ones, the dot operator will perform fallback operation and still produce an output of the desired storage type.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.elemwise_add(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Adds arguments element-wise.
The storage type of elemwise_add output depends on storage types of inputs:
elemwise_add(row_sparse, row_sparse) = row_sparse
elemwise_add(csr, csr) = csr
elemwise_add(default, csr) = default
elemwise_add(csr, default) = default
elemwise_add(default, rsp) = default
elemwise_add(rsp, default) = default
otherwise, elemwise_add generates output with default storage
- mxnet.symbol.sparse.elemwise_div(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Divides arguments element-wise.
The storage type of elemwise_div output is always dense.
- mxnet.symbol.sparse.elemwise_mul(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Multiplies arguments element-wise.
The storage type of elemwise_mul output depends on storage types of inputs:
elemwise_mul(default, default) = default
elemwise_mul(row_sparse, row_sparse) = row_sparse
elemwise_mul(default, row_sparse) = row_sparse
elemwise_mul(row_sparse, default) = row_sparse
elemwise_mul(csr, csr) = csr
otherwise, elemwise_mul generates output with default storage
- mxnet.symbol.sparse.elemwise_sub(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)
Subtracts arguments element-wise.
The storage type of elemwise_sub output depends on storage types of inputs:
elemwise_sub(row_sparse, row_sparse) = row_sparse
elemwise_sub(csr, csr) = csr
elemwise_sub(default, csr) = default
elemwise_sub(csr, default) = default
elemwise_sub(default, rsp) = default
elemwise_sub(rsp, default) = default
otherwise, elemwise_sub generates output with default storage
- mxnet.symbol.sparse.exp(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise exponential value of the input.
\[exp(x) = e^x \approx 2.718^x\]
Example:
exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]
The storage type of exp output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L64
- mxnet.symbol.sparse.expm1(data=None, name=None, attr=None, out=None, **kwargs)
Returns exp(x) - 1 computed element-wise on the input.
This function provides greater precision than exp(x) - 1 for small values of x.
The storage type of expm1 output depends upon the input storage type:
expm1(default) = default
expm1(row_sparse) = row_sparse
expm1(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L244
- mxnet.symbol.sparse.fix(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise rounded value to the nearest integer towards zero of the input.
Example:
fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1., 1., 2.]
The storage type of fix output depends upon the input storage type:
fix(default) = default
fix(row_sparse) = row_sparse
fix(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L874
- mxnet.symbol.sparse.floor(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise floor of the input.
The floor of the scalar x is the largest integer i, such that i <= x.
Example:
floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2., 1., 1., 2.]
The storage type of floor output depends upon the input storage type:
floor(default) = default
floor(row_sparse) = row_sparse
floor(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L836
- mxnet.symbol.sparse.ftrl_update(weight=None, grad=None, z=None, n=None, lr=_Null, lamda1=_Null, beta=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)
Update function for Ftrl optimizer. Referenced from Ad Click Prediction: a View from the Trenches, available at http://dl.acm.org/citation.cfm?id=2488200.
It updates the weights using:
rescaled_grad = clip(grad * rescale_grad, clip_gradient)
z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
n += rescaled_grad**2
w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)
If w, z and n are all of row_sparse storage type, only the row slices whose indices appear in grad.indices are updated (for w, z and n):

for row in grad.indices:
    rescaled_grad[row] = clip(grad[row] * rescale_grad, clip_gradient)
    z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
    n[row] += rescaled_grad[row]**2
    w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)
Defined in src/operator/optimizer_op.cc:L876
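A worked single-coordinate FTRL step in plain NumPy (hyperparameters and values assumed), mirroring the update above:

import numpy as np

lr, lamda1, beta, wd = 0.1, 0.01, 1.0, 0.0
w = np.array([1.0]); z = np.zeros(1); n = np.zeros(1)
g = np.array([0.5])   # gradient, already rescaled/clipped

z += g - (np.sqrt(n + g**2) - np.sqrt(n)) * w / lr
n += g**2
w = (np.sign(z) * lamda1 - z) / ((beta + np.sqrt(n)) / lr + wd) * (np.abs(z) > lamda1)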
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
z (Symbol) – z
n (Symbol) – Square of grad
lr (float, required) – Learning rate
lamda1 (float, optional, default=0.00999999978) – The L1 regularization coefficient.
beta (float, optional, default=1) – Per-Coordinate Learning Rate beta.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.gamma(data=None, name=None, attr=None, out=None, **kwargs)
Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.
The storage type of gamma output is always dense.
- mxnet.symbol.sparse.gammaln(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise log of the absolute value of the gamma function of the input.
The storage type of gammaln output is always dense.
- mxnet.symbol.sparse.log(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise natural logarithmic value of the input.
The natural logarithm is the logarithm in base e, so that log(exp(x)) = x.
The storage type of log output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L77
- mxnet.symbol.sparse.log10(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise base-10 logarithmic value of the input.
10**log10(x) = x
The storage type of log10 output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L94
- mxnet.symbol.sparse.log1p(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise log(1 + x) value of the input.
This function is more accurate than log(1 + x) for small x so that \(1+x\approx 1\).
The storage type of log1p output depends upon the input storage type:
log1p(default) = default
log1p(row_sparse) = row_sparse
log1p(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L199
- mxnet.symbol.sparse.log2(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise base-2 logarithmic value of the input.
2**log2(x) = x
The storage type of log2 output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L106
- mxnet.symbol.sparse.make_loss(data=None, name=None, attr=None, out=None, **kwargs)
Make your own loss function in network construction.
This operator accepts a customized loss function symbol as a terminal loss and the symbol should be an operator with no backward dependency. The output of this function is the gradient of loss with respect to the input data.
For example, if you are making a cross entropy loss function, assume out is the predicted output and label is the true label, then the cross entropy can be defined as:

cross_entropy = label * log(out) + (1 - label) * log(1 - out)
loss = make_loss(cross_entropy)
We will need to use make_loss when we are creating our own loss function or we want to combine multiple loss functions. Also we may want to stop some variables’ gradients from backpropagation. See more detail in BlockGrad or stop_gradient.
The storage type of make_loss output depends upon the input storage type:
make_loss(default) = default
make_loss(row_sparse) = row_sparse
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L358
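A minimal symbol-level sketch of the cross-entropy example above (the variable names are assumptions):

import mxnet as mx

out = mx.sym.var('out')      # predicted probabilities
label = mx.sym.var('label')  # ground-truth labels
cross_entropy = label * mx.sym.log(out) + (1 - label) * mx.sym.log(1 - out)
loss = mx.sym.make_loss(cross_entropy)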
- mxnet.symbol.sparse.mean(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the mean of array elements over given axes.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L84
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.negative(data=None, name=None, attr=None, out=None, **kwargs)
Numerical negative of the argument, element-wise.
The storage type of negative output depends upon the input storage type:
negative(default) = default
negative(row_sparse) = row_sparse
negative(csr) = csr
- mxnet.symbol.sparse.norm(data=None, ord=_Null, axis=_Null, out_dtype=_Null, keepdims=_Null, name=None, attr=None, out=None, **kwargs)
Computes the norm on an NDArray.
This operator computes the norm on an NDArray with the specified axis, depending on the value of the ord parameter. By default, it computes the L2 norm on the entire array. Currently only ord=2 supports sparse ndarrays.
Examples:
x = [[[1, 2],
      [3, 4]],
     [[2, 2],
      [5, 6]]]

norm(x, ord=2, axis=1) = [[3.1622777 4.472136 ]
                          [5.3851647 6.3245554]]

norm(x, ord=1, axis=1) = [[4., 6.],
                          [7., 8.]]

rsp = x.cast_storage('row_sparse')
norm(rsp) = [5.47722578]

csr = x.cast_storage('csr')
norm(csr) = [5.47722578]
Defined in src/operator/tensor/broadcast_reduce_norm_value.cc:L89
- Parameters
data (Symbol) – The input
ord (int, optional, default='2') – Order of the norm. Currently ord=1 and ord=2 are supported.
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed.
out_dtype ({None, 'float16', 'float32', 'float64', 'int32', 'int64', 'int8'},optional, default='None') – The data type of the output.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axis is left in the result as dimension with size one.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.radians(data=None, name=None, attr=None, out=None, **kwargs)
Converts each element of the input array from degrees to radians.
\[radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]\]
The storage type of radians output depends upon the input storage type:
radians(default) = default
radians(row_sparse) = row_sparse
radians(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L351
- mxnet.symbol.sparse.relu(data=None, name=None, attr=None, out=None, **kwargs)
Computes rectified linear activation.
\[max(features, 0)\]
The storage type of relu output depends upon the input storage type:
relu(default) = default
relu(row_sparse) = row_sparse
relu(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85
- mxnet.symbol.sparse.retain(data=None, indices=None, name=None, attr=None, out=None, **kwargs)
Pick rows specified by user input index array from a row sparse matrix and save them in the output sparse matrix.
Example:
data = [[1, 2], [3, 4], [5, 6]]
indices = [0, 1, 3]
shape = (4, 2)
rsp_in = row_sparse_array(data, indices)
to_retain = [0, 3]
rsp_out = retain(rsp_in, to_retain)
rsp_out.data = [[1, 2], [5, 6]]
rsp_out.indices = [0, 3]
The storage type of retain output depends on storage types of inputs:
retain(row_sparse, default) = row_sparse
otherwise, retain is not supported
Defined in src/operator/tensor/sparse_retain.cc:L53
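The example above, run through the imperative counterpart mx.nd.sparse.retain; a minimal sketch:

import mxnet as mx

rsp_in = mx.nd.sparse.row_sparse_array(
    (mx.nd.array([[1, 2], [3, 4], [5, 6]]), mx.nd.array([0, 1, 3])),
    shape=(4, 2))
rsp_out = mx.nd.sparse.retain(rsp_in, mx.nd.array([0, 3]))
print(rsp_out.indices.asnumpy())  # [0 3]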
- mxnet.symbol.sparse.rint(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise rounded value to the nearest integer of the input.
Note
For input n.5, rint returns n while round returns n+1.
For input -n.5, both rint and round return -n-1.
Example:
rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 1., -2., 2., 2.]
The storage type of rint output depends upon the input storage type:
rint(default) = default
rint(row_sparse) = row_sparse
rint(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L798
- mxnet.symbol.sparse.round(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise rounded value to the nearest integer of the input.
Example:
round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 2., -2., 2., 2.]
The storage type of round output depends upon the input storage type:
round(default) = default
round(row_sparse) = row_sparse
round(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L777
- mxnet.symbol.sparse.rsqrt(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise inverse square-root value of the input.
\[rsqrt(x) = 1/\sqrt{x}\]
Example:
rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]
The storage type of rsqrt output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L221
- mxnet.symbol.sparse.sgd_mom_update(weight=None, grad=None, mom=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)
Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. Mathematically it looks like below:
\[\begin{split}v_1 = \alpha * \nabla J(W_0)\\ v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\ W_t = W_{t-1} + v_t\end{split}\]
It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
Where the parameter momentum is the decay rate of momentum estimates at each epoch.
However, if grad’s storage type is row_sparse, lazy_update is True and weight’s storage type is the same as momentum’s storage type, only the row slices whose indices appear in grad.indices are updated (for both weight and momentum):

for row in gradient.indices:
    v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
    weight[row] += v[row]
Defined in src/operator/optimizer_op.cc:L565
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mom (Symbol) – Momentum
lr (float, required) – Learning rate
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and both weight and momentum have the same stype
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.sgd_update(weight=None, grad=None, lr=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)
Update function for Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using:
weight = weight - learning_rate * (gradient + wd * weight)
However, if gradient is of row_sparse storage type and lazy_update is True, only the row slices whose indices appear in grad.indices are updated:

for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])
Defined in src/operator/optimizer_op.cc:L524
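A lazy-update sketch with the imperative counterpart mx.nd.sgd_update (shapes and values assumed); only the row present in grad.indices changes:

import mxnet as mx

w = mx.nd.ones((4, 2)).tostype('row_sparse')
g = mx.nd.zeros((4, 2))
g[1] = 1.0                  # gradient only on row 1
g = g.tostype('row_sparse')

mx.nd.sgd_update(w, g, lr=0.1, lazy_update=True, out=w)
print(w.asnumpy())          # row 1 becomes 0.9, other rows stay 1.0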
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
lr (float, required) – Learning rate
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.sigmoid(data=None, name=None, attr=None, out=None, **kwargs)
Computes sigmoid of x element-wise.
\[y = 1 / (1 + exp(-x))\]
The storage type of sigmoid output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L119
- mxnet.symbol.sparse.sign(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise sign of the input.
Example:
sign([-2, 0, 3]) = [-1, 0, 1]
The storage type of sign output depends upon the input storage type:
sign(default) = default
sign(row_sparse) = row_sparse
sign(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L758
- mxnet.symbol.sparse.sin(data=None, name=None, attr=None, out=None, **kwargs)
Computes the element-wise sine of the input array.
The input should be in radians (\(2\pi\) rad equals 360 degrees).
\[sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]\]
The storage type of sin output depends upon the input storage type:
sin(default) = default
sin(row_sparse) = row_sparse
sin(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L47
- mxnet.symbol.sparse.sinh(data=None, name=None, attr=None, out=None, **kwargs)
Returns the hyperbolic sine of the input array, computed element-wise.
\[sinh(x) = 0.5\times(exp(x) - exp(-x))\]
The storage type of sinh output depends upon the input storage type:
sinh(default) = default
sinh(row_sparse) = row_sparse
sinh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L371
- mxnet.symbol.sparse.slice(data=None, begin=_Null, end=_Null, step=_Null, name=None, attr=None, out=None, **kwargs)
Slices a region of the array.
Note
crop is deprecated. Use slice instead.
This function returns a sliced array between the indices given by begin and end with the corresponding step. For an input array of shape=(d_0, d_1, ..., d_n-1), slice operation with begin=(b_0, b_1...b_m-1), end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1), where m <= n, results in an array with the shape (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).
The resulting array’s k-th dimension contains elements from the k-th dimension of the input array starting from index b_k (inclusive) with step s_k until reaching e_k (exclusive). If the k-th elements are None in the sequence of begin, end, and step, the following rule will be used to set default values: if s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k; else, set b_k=d_k-1, e_k=-1.
The storage type of slice output depends on storage types of inputs:
slice(csr) = csr
otherwise, slice generates output with default storage
Note
When input data storage type is csr, it only supports step=(), or step=(None,), or step=(1,) to generate a csr output. For other step parameter values, it falls back to slicing a dense tensor.
Example:
x = [[ 1.,  2.,  3.,  4.],
     [ 5.,  6.,  7.,  8.],
     [ 9., 10., 11., 12.]]

slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                    [ 6.,  7.,  8.]]

slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.],
                                                          [5.,  7.],
                                                          [1.,  3.]]
Defined in src/operator/tensor/matrix_op.cc:L482
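A quick storage-type check with the imperative counterpart (values assumed): with the default step, a csr input yields a csr output.

import mxnet as mx

x = mx.nd.array([[1, 2, 0], [0, 0, 3], [4, 0, 0]]).tostype('csr')
sliced = mx.nd.slice(x, begin=(0,), end=(2,))
print(sliced.stype, sliced.shape)  # csr (2, 3)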
- Parameters
data (Symbol) – Source input
begin (Shape(tuple), required) – starting indices for the slice operation, supports negative indices.
end (Shape(tuple), required) – ending indices for the slice operation, supports negative indices.
step (Shape(tuple), optional, default=[]) – step for the slice operation, supports negative values.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.sqrt(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise square-root value of the input.
\[\textrm{sqrt}(x) = \sqrt{x}\]
Example:
sqrt([4, 9, 16]) = [2, 3, 4]
The storage type of sqrt output depends upon the input storage type:
sqrt(default) = default
sqrt(row_sparse) = row_sparse
sqrt(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L170
- mxnet.symbol.sparse.square(data=None, name=None, attr=None, out=None, **kwargs)
Returns element-wise squared value of the input.
\[square(x) = x^2\]
Example:
square([2, 3, 4]) = [4, 9, 16]
The storage type of square output depends upon the input storage type:
square(default) = default
square(row_sparse) = row_sparse
square(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L119
- mxnet.symbol.sparse.stop_gradient(data=None, name=None, attr=None, out=None, **kwargs)
Stops gradient computation.
Stops the accumulated gradient of the inputs from flowing through this operator in the backward direction. In other words, this operator prevents the contribution of its inputs to be taken into account for computing gradients.
Example:
v1 = [1, 2]
v2 = [0, 1]
a = Variable('a')
b = Variable('b')
b_stop_grad = stop_gradient(3 * b)
loss = MakeLoss(b_stop_grad + a)

executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
executor.forward(is_train=True, a=v1, b=v2)
executor.outputs
[ 1.  5.]

executor.backward()
executor.grad_arrays
[ 0.  0.]
[ 1.  1.]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L325
- mxnet.symbol.sparse.sum(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the sum of array elements over given axes.
Note
sum and sum_axis are equivalent. For ndarray of csr storage type summation along axis 0 and axis 1 is supported. Setting keepdims or exclude to True will cause a fallback to dense operator.
Example:
data = [[[1, 2], [2, 3], [1, 3]],
        [[1, 4], [4, 3], [5, 2]],
        [[7, 1], [7, 2], [7, 3]]]

sum(data, axis=1)
[[  4.   8.]
 [ 10.   9.]
 [ 21.   6.]]

sum(data, axis=[1,2])
[ 12.  19.  27.]

data = [[1, 2, 0],
        [3, 0, 1],
        [4, 1, 0]]

csr = cast_storage(data, 'csr')

sum(csr, axis=0)
[ 8.  3.  1.]

sum(csr, axis=1)
[ 3.  4.  5.]
Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L67
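The csr part of the example, checked with the imperative counterpart; a minimal sketch:

import mxnet as mx

data = mx.nd.array([[1, 2, 0], [3, 0, 1], [4, 1, 0]])
csr = mx.nd.cast_storage(data, 'csr')
print(mx.nd.sum(csr, axis=0).asnumpy())  # [8. 3. 1.]
print(mx.nd.sum(csr, axis=1).asnumpy())  # [3. 4. 5.]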
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.sparse.tan(data=None, name=None, attr=None, out=None, **kwargs)
Computes the element-wise tangent of the input array.
The input should be in radians (\(2\pi\) rad equals 360 degrees).
\[tan([0, \pi/4, \pi/2]) = [0, 1, -inf]\]
The storage type of tan output depends upon the input storage type:
tan(default) = default
tan(row_sparse) = row_sparse
tan(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L140
- mxnet.symbol.sparse.tanh(data=None, name=None, attr=None, out=None, **kwargs)
Returns the hyperbolic tangent of the input array, computed element-wise.
\[tanh(x) = sinh(x) / cosh(x)\]
The storage type of tanh output depends upon the input storage type:
tanh(default) = default
tanh(row_sparse) = row_sparse
tanh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L451
- mxnet.symbol.sparse.trunc(data=None, name=None, attr=None, out=None, **kwargs)
Return the element-wise truncated value of the input.
The truncated value of the scalar x is the nearest integer i which is closer to zero than x is. In short, the fractional part of the signed number x is discarded.
Example:
trunc([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 1., 1., 2.]
The storage type of trunc output depends upon the input storage type:
trunc(default) = default
trunc(row_sparse) = row_sparse
trunc(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L856
- mxnet.symbol.sparse.where(condition=None, x=None, y=None, name=None, attr=None, out=None, **kwargs)
Return the elements, either from x or y, depending on the condition.
Given three ndarrays, condition, x, and y, return an ndarray with the elements from x or y, depending on whether the corresponding elements from condition are true or false. x and y must have the same shape. If condition has the same shape as x, each element in the output array is from x if the corresponding element in the condition is true, and from y if false.
If condition does not have the same shape as x, it must be a 1D array whose size is the same as x’s first dimension size. Each row of the output array is from x’s row if the corresponding element from condition is true, and from y’s row if false.
Note that all non-zero values are interpreted as True in condition.
Examples:
x = [[1, 2], [3, 4]]
y = [[5, 6], [7, 8]]
cond = [[0, 1], [-1, 0]]

where(cond, x, y) = [[5, 2], [3, 8]]

csr_cond = cast_storage(cond, 'csr')
where(csr_cond, x, y) = [[5, 2], [3, 8]]
Defined in src/operator/tensor/control_flow_op.cc:L57
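The csr-condition case from the example, via the imperative counterpart; a minimal sketch:

import mxnet as mx

x = mx.nd.array([[1, 2], [3, 4]])
y = mx.nd.array([[5, 6], [7, 8]])
cond = mx.nd.array([[0, 1], [-1, 0]]).tostype('csr')
print(mx.nd.where(cond, x, y).asnumpy())  # [[5. 2.] [3. 8.]]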
- mxnet.symbol.sparse.zeros_like(data=None, name=None, attr=None, out=None, **kwargs)
Return an array of zeros with the same shape, type and storage type as the input array.
The storage type of zeros_like output depends on the storage type of the input:
zeros_like(row_sparse) = row_sparse
zeros_like(csr) = csr
zeros_like(default) = default
Examples:
x = [[ 1., 1., 1.],
     [ 1., 1., 1.]]

zeros_like(x) = [[ 0., 0., 0.],
                 [ 0., 0., 0.]]