LanguageModelInputter
- class opennmt.models.LanguageModelInputter(*args, **kwargs)[source]
A special inputter for language modeling.
Inherits from:
opennmt.inputters.WordEmbedder
- property inference
Inference mode.
- initialize(data_config)[source]
Initializes the inputter.
- Parameters
data_config – A dictionary containing the data configuration set by the user.
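A minimal sketch of this step, assuming a vocabulary file named vocab.txt, a "vocabulary" key in the data configuration, and that the constructor forwards WordEmbedder keyword arguments such as embedding_size (all illustrative, not specified on this page):

```python
import opennmt

# Build the inputter; keyword arguments are forwarded to the parent
# WordEmbedder (embedding_size here is an assumed value).
inputter = opennmt.models.LanguageModelInputter(embedding_size=512)

# Initialize from a user-style data configuration; the "vocabulary" key
# and the vocab.txt path are assumptions for this sketch.
inputter.initialize({"vocabulary": "vocab.txt"})
```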
- make_inference_dataset(*args, **kwargs)[source]
Builds a dataset to be used for inference.
For evaluation and training datasets, see opennmt.inputters.ExampleInputter.
- Parameters
features_file – The test file.
batch_size – The batch size to use.
batch_type – The batching strategy to use: can be “examples” or “tokens”.
length_bucket_width – The width of the length buckets to select batch candidates from (for efficiency). Set None to not constrain batch formation.
num_threads – The number of elements processed in parallel.
prefetch_buffer_size – The number of batches to prefetch asynchronously. If None, use an automatically tuned value.
- Returns
A tf.data.Dataset.
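A minimal sketch of building an inference dataset with the parameters above, assuming the inputter was initialized as in the previous snippet and that test.txt is a plain-text test file (both are assumptions):

```python
# Token-based batching with length bucketing; the file name and sizes are
# illustrative values, not defaults documented on this page.
dataset = inputter.make_inference_dataset(
    features_file="test.txt",
    batch_size=1024,
    batch_type="tokens",
    length_bucket_width=1,
    prefetch_buffer_size=None,  # let the prefetch size be tuned automatically
)

# Inspect the first batch of features produced by the inputter.
for features in dataset.take(1):
    print(features)
```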