InMemoryDataset

paddle.distributed.InMemoryDataset loads the training data into memory and shuffles it before training.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
init(**kwargs)

Note:
1. This API only takes effect in non-dygraph (static graph) mode.

Initializes the configuration of an InMemoryDataset instance.
Parameters:
- kwargs - optional keyword arguments supplied by the caller. The following keys are currently supported:
  - batch_size (int) - the batch size. Default: 1.
  - thread_num (int) - the number of threads used for training. Default: 1.
  - use_var (list) - the list of variables used as input. Default: [].
  - input_type (int) - the type of sample fed to the model: 0 for a single sample, 1 for a batch. Default: 0.
  - fs_name (str) - the HDFS name. Default: "".
  - fs_ugi (str) - the HDFS UGI (user credentials). Default: "".
  - pipe_command (str) - the pipe command this dataset uses to preprocess data; only UNIX pipe commands are allowed. Default: "cat".
  - download_cmd (str) - the pipe command used to download data; only UNIX pipe commands are allowed. Default: "cat".

Returns: None.
Code example:

import paddle
import os

paddle.enable_static()

# Prepare two small sample files.
with open("test_queue_dataset_run_a.txt", "w") as f:
    data = "2 1 2 2 5 4 2 2 7 2 1 3\n"
    data += "2 6 2 2 1 4 2 2 4 2 2 3\n"
    data += "2 5 2 2 9 9 2 2 7 2 1 3\n"
    data += "2 7 2 2 1 9 2 3 7 2 5 3\n"
    f.write(data)
with open("test_queue_dataset_run_b.txt", "w") as f:
    data = "2 1 2 2 5 4 2 2 7 2 1 3\n"
    data += "2 6 2 2 1 4 2 2 4 2 2 3\n"
    data += "2 5 2 2 9 9 2 2 7 2 1 3\n"
    data += "2 7 2 2 1 9 2 3 7 2 5 3\n"
    f.write(data)

# Declare one input variable per slot.
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)

dataset = paddle.distributed.InMemoryDataset()
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
dataset.set_filelist(
    ["test_queue_dataset_run_a.txt", "test_queue_dataset_run_b.txt"])
dataset.load_into_memory()

place = paddle.CPUPlace()
exe = paddle.static.Executor(place)
startup_program = paddle.static.Program()
main_program = paddle.static.Program()
exe.run(startup_program)
exe.train_from_dataset(main_program, dataset)

os.remove("./test_queue_dataset_run_a.txt")
os.remove("./test_queue_dataset_run_b.txt")
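Because pipe_command accepts any UNIX pipe command, lightweight preprocessing can be pushed into the data pipeline itself. A minimal sketch under that assumption (the sed expression is purely illustrative, not part of the API):

# Illustrative only: normalize tabs to spaces while streaming the data in.
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command=r"sed 's/\t/ /g'",
    use_var=slots_vars)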
_init_distributed_settings(**kwargs)

Note:
1. This API only takes effect in non-dygraph (static graph) mode.
2. This API only takes effect in large-scale parameter-server training; detailed usage documentation is forthcoming.

Initializes the distributed-training settings of an InMemoryDataset instance.
Parameters:
- kwargs - optional keyword arguments supplied by the caller. The following keys are currently supported:
  - merge_size (int) - merge samples by instance id: samples sharing the same id are merged after shuffle, and the instance id should be parsed in the data generator. merge_size is the minimum number of samples required for a merge. Default: -1, meaning no merging.
  - parse_ins_id (bool) - whether to parse the instance id of each sample. Default: False.
  - parse_content (bool) - whether to parse the content of each sample. Default: False.
  - fleet_send_batch_size (int) - the batch size used when sending data. Default: 1024.
  - fleet_send_sleep_seconds (int) - the number of seconds to sleep after sending a batch. Default: 0.
  - fea_eval (bool) - whether to enable the feature-shuffle evaluation mode used to measure feature-level importance; slots_shuffle requires fea_eval to be True. Default: False.
  - candidate_size (int) - in feature-shuffle evaluation mode, the size of the candidate pool used to randomize features. Default: 10000.

Returns: None.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=[])
dataset._init_distributed_settings(
    parse_ins_id=True,
    parse_content=True,
    fea_eval=True,
    candidate_size=10000)
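As a further sketch of the merge-by-instance-id settings above (the values are chosen only for illustration; a data generator that actually emits instance ids is assumed to exist elsewhere):

# Illustrative values only: parse instance ids and, after shuffle, merge
# groups of at least 2 samples that share the same id.
dataset._init_distributed_settings(
    parse_ins_id=True,
    merge_size=2)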
update_settings(**kwargs)

Note:
1. This API only takes effect in non-dygraph (static graph) mode.

Updates the configuration of an InMemoryDataset instance previously set via init and _init_distributed_settings.
Parameters:
- kwargs - optional keyword arguments supplied by the caller. The following keys are currently supported:
  - batch_size (int) - the batch size. Default: 1.
  - thread_num (int) - the number of threads used for training. Default: 1.
  - use_var (list) - the list of variables used as input. Default: [].
  - input_type (int) - the type of sample fed to the model: 0 for a single sample, 1 for a batch. Default: 0.
  - fs_name (str) - the HDFS name. Default: "".
  - fs_ugi (str) - the HDFS UGI (user credentials). Default: "".
  - pipe_command (str) - the pipe command this dataset uses to preprocess data; only UNIX pipe commands are allowed. Default: "cat".
  - download_cmd (str) - the pipe command used to download data; only UNIX pipe commands are allowed. Default: "cat".
  - merge_size (int) - merge samples by instance id: samples sharing the same id are merged after shuffle, and the instance id should be parsed in the data generator. merge_size is the minimum number of samples required for a merge. Default: -1, meaning no merging.
  - parse_ins_id (bool) - whether to parse the instance id of each sample. Default: False.
  - parse_content (bool) - whether to parse the content of each sample. Default: False.
  - fleet_send_batch_size (int) - the batch size used when sending data. Default: 1024.
  - fleet_send_sleep_seconds (int) - the number of seconds to sleep after sending a batch. Default: 0.
  - fea_eval (bool) - whether to enable the feature-shuffle evaluation mode used to measure feature-level importance; slots_shuffle requires fea_eval to be True. Default: False.
  - candidate_size (int) - in feature-shuffle evaluation mode, the size of the candidate pool used to randomize features. Default: 10000.

Returns: None.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=[])
dataset._init_distributed_settings(
    parse_ins_id=True,
    parse_content=True,
    fea_eval=True,
    candidate_size=10000)
dataset.update_settings(batch_size=2)
load_into_memory()

Note:
1. This API only takes effect in non-dygraph (static graph) mode.

Loads the data from the configured file list into memory.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
preload_into_memory(thread_num=None)

Loads data into memory asynchronously.

Parameters:
- thread_num (int) - the number of threads used for asynchronous loading.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.preload_into_memory()
dataset.wait_preload_done()
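The point of the asynchronous variant is overlap: loading proceeds in the background while the process does other setup work. A minimal sketch of that pattern (build_programs is a hypothetical placeholder for whatever work happens in between, not a Paddle API):

dataset.preload_into_memory(thread_num=4)  # assumption: 4 loader threads
build_programs()   # hypothetical placeholder: do other setup while data loads
dataset.wait_preload_done()                # block until loading has finished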
wait_preload_done()

Waits for preload_into_memory to finish.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.preload_into_memory()
dataset.wait_preload_done()
local_shuffle()

Local shuffle: shuffles the training samples loaded into memory within the local node.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.local_shuffle()
global_shuffle(fleet=None, thread_num=12)

Global shuffle: shuffles the samples across the whole job. It can only be used in distributed mode (a single node with multiple processes, or multiple nodes with multiple processes). When running in distributed mode, pass the fleet instance rather than None.

Parameters:
- fleet (Fleet) - the fleet singleton. Default: None.
- thread_num (int) - the number of threads used for global shuffle. Default: 12.

Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.global_shuffle()
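In an actual distributed job the fleet singleton is passed explicitly; a minimal sketch, assuming a fleet instance that has been initialized elsewhere in the training script:

# Sketch only: `fleet` is assumed to be an initialized fleet instance
# (e.g. paddle.distributed.fleet after fleet.init()); not runnable standalone.
dataset.global_shuffle(fleet, thread_num=12)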
release_memory()

Releases the in-memory data of the InMemoryDataset once the data is no longer needed.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.global_shuffle()
exe = paddle.static.Executor(paddle.CPUPlace())
startup_program = paddle.static.Program()
main_program = paddle.static.Program()
exe.run(startup_program)
exe.train_from_dataset(main_program, dataset)
dataset.release_memory()
get_memory_data_size(fleet=None)

Users can call this function to learn the number of samples held in memory across all workers after loading.

Note:
This function may hurt performance because it involves a barrier.

Parameters:
- fleet (Fleet) - the fleet object. Default: None.

Returns: the size of the in-memory data.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
print(dataset.get_memory_data_size())
get_shuffle_data_size(fleet=None)

Gets the shuffle data size: users can call this function to learn the number of samples across all workers after a local/global shuffle.

Note:
This function may hurt the performance of local shuffle because it involves a barrier; it does not affect global shuffle.

Parameters:
- fleet (Fleet) - the fleet object. Default: None.

Returns: the size of the shuffled data.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.global_shuffle()
print(dataset.get_shuffle_data_size())
slots_shuffle(slots)

slots_shuffle is a shuffle method at the feature (slot) level, usually applied to sparse features with a large number of instances. By shuffling one or several slots and comparing a metric such as AUC against a baseline, it helps measure the importance of those slots.

Parameters:
- slots (list[string]) - the set of slots (features) to shuffle.
Code example:

import paddle

paddle.enable_static()
dataset = paddle.distributed.InMemoryDataset()
dataset._init_distributed_settings(fea_eval=True)
slots = ["slot1", "slot2", "slot3", "slot4"]
slots_vars = []
for slot in slots:
    var = paddle.static.data(
        name=slot, shape=[None, 1], dtype="int64", lod_level=1)
    slots_vars.append(var)
dataset.init(
    batch_size=1,
    thread_num=2,
    input_type=1,
    pipe_command="cat",
    use_var=slots_vars)
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.slots_shuffle(['slot1'])
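To make the evaluation workflow concrete, a hypothetical sketch of how slots_shuffle is typically used (evaluate_auc is a placeholder for the user's own evaluation routine, not a Paddle API; exe and main_program are assumed to be set up as in the init example):

# Hypothetical workflow: measure the metric drop caused by shuffling one slot.
baseline_auc = evaluate_auc(exe, main_program, dataset)  # placeholder evaluation
dataset.slots_shuffle(['slot1'])                         # destroy slot1's signal
shuffled_auc = evaluate_auc(exe, main_program, dataset)  # placeholder evaluation
# The larger the drop, the more important slot1 is to the model.
print(baseline_auc - shuffled_auc)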