SequenceRecordInputter
- class opennmt.inputters.SequenceRecordInputter(*args, **kwargs)[source]
Inputter that reads tf.train.SequenceExample.
See also opennmt.inputters.create_sequence_records() to generate a compatible dataset.
Inherits from: opennmt.inputters.Inputter
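For illustration, a compatible record file might be produced as follows; the array shapes, the file name, and the exact create_sequence_records() arguments are assumptions, not taken from this page:

    import numpy as np
    import opennmt

    # Each element is assumed to be a [time, depth] float array,
    # e.g. precomputed acoustic or visual feature vectors.
    vectors = [
        np.random.rand(17, 40).astype(np.float32),
        np.random.rand(23, 40).astype(np.float32),
    ]

    # Serialize the vectors as tf.train.SequenceExample records
    # (the output path is illustrative).
    opennmt.inputters.create_sequence_records(vectors, "features.records")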
- __init__(input_depth, **kwargs)[source]
Initializes the parameters of the record inputter.
- Parameters
input_depth – The depth dimension of the input vectors.
**kwargs – Additional layer keyword arguments.
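For example, an inputter reading the 40-dimensional vectors from the sketch above could be created as follows (the depth value is illustrative):

    import opennmt

    # input_depth must match the depth dimension of the stored vectors.
    inputter = opennmt.inputters.SequenceRecordInputter(input_depth=40)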
- make_dataset(data_file, training=None)[source]
Creates the base dataset required by this inputter.
- Parameters
data_file – The data file.
training – Run in training mode.
- Returns
A tf.data.Dataset instance or a list of tf.data.Dataset instances.
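A minimal usage sketch, assuming the record file and inputter from the examples above:

    # Returns a dataset of serialized tf.train.SequenceExample protos.
    dataset = inputter.make_dataset("features.records", training=True)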
- make_features(element=None, features=None, training=None)[source]
Creates features from data.
This is typically called in a data pipeline (such as Dataset.map). Common transformations include tokenization, parsing, vocabulary lookup, etc.
This method accepts either a single element from the dataset or a partially built dictionary of features.
- Parameters
element – An element from the dataset returned by opennmt.inputters.Inputter.make_dataset().
features – An optional and possibly partial dictionary of features to augment.
training – Run in training mode.
- Returns
A dictionary of tf.Tensor.
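In a pipeline, this method is typically mapped over the base dataset. A sketch, assuming the dataset from the make_dataset() example above; the exact feature keys are an assumption:

    # Parse each serialized record into a dictionary of tf.Tensor.
    dataset = dataset.map(
        lambda element: inputter.make_features(element=element, training=True))

    for features in dataset.take(1):
        # Expected to contain the dense vectors and their length
        # (keys such as "tensor" and "length" are assumed, not documented here).
        print(features)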
- call(features, training=None)[source]
Creates the model input from the features (e.g. word embeddings).
- Parameters
features – A dictionary of tf.Tensor, the output of opennmt.inputters.Inputter.make_features().
training – Run in training mode.
- Returns
The model input.
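A sketch continuing the pipeline above; the batch size and padding behavior are illustrative assumptions:

    # Batch the parsed features with padding before calling the inputter.
    dataset = dataset.padded_batch(8)

    for features in dataset.take(1):
        # Returns the model input, e.g. a [batch, time, depth] float tensor.
        inputs = inputter(features, training=True)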