SequenceToSequence

class opennmt.models.SequenceToSequence(*args, **kwargs)[source]

A sequence-to-sequence model.

Inherits from: opennmt.models.SequenceGenerator

Extended by:

  • opennmt.models.Transformer

__init__(source_inputter, target_inputter, encoder, decoder, share_embeddings=0)[source]

Initializes a sequence-to-sequence model.

Parameters
  • source_inputter – An opennmt.inputters.Inputter to process the source data.

  • target_inputter – An opennmt.inputters.Inputter to process the target data. Currently, only opennmt.inputters.WordEmbedder is supported.

  • encoder – An opennmt.encoders.Encoder to encode the source.

  • decoder – An opennmt.decoders.Decoder to decode the target.

  • share_embeddings – Level of embeddings sharing between the source and target (see opennmt.models.EmbeddingsSharingLevel).

Raises

TypeError – if target_inputter is not an opennmt.inputters.WordEmbedder.
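
For example, a Transformer-style sequence-to-sequence model can be assembled from the standard inputters, encoders, and decoders. A minimal sketch; the embedding size and number of layers are illustrative choices, not library defaults:

    import opennmt

    # Sketch: assemble a sequence-to-sequence model from standard components.
    # The sizes below are illustrative values.
    model = opennmt.models.SequenceToSequence(
        source_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
        target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
        encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),
        decoder=opennmt.decoders.SelfAttentionDecoder(num_layers=6),
    )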

auto_config(num_replicas=1)[source]

Returns automatic configuration values specific to this model.

Parameters

num_replicas – The number of synchronous model replicas used for training.

Returns

A partial training configuration.
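
The returned dictionary is partial and is typically merged with the user-provided configuration. A hedged sketch (the exact keys depend on the model; models that do not define automatic values may return an empty or partial dictionary):

    # Sketch: inspect the model's suggested training values, if any.
    config = model.auto_config(num_replicas=8)
    print(config.get("train", {}).get("batch_size"))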

map_v1_weights(weights)[source]

Maps current weights to V1 weights.

Parameters

weights – A nested dictionary following the scope names used in V1. The leaves are tuples with the variable value and optionally the optimizer slots.

Returns

A list of tuples associating variables and their V1 equivalents.

initialize(data_config, params=None)[source]

Initializes the model from the data configuration.

Parameters
  • data_config – A dictionary containing the data configuration set by the user (e.g. vocabularies, tokenization, pretrained embeddings, etc.).

  • params – A dictionary of hyperparameters.
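
For this model, the data configuration typically provides the source and target vocabulary files. A sketch; the file names are placeholders and the params dictionary is optional:

    # Sketch: vocabulary paths are placeholders for real files.
    model.initialize(
        {
            "source_vocabulary": "src-vocab.txt",
            "target_vocabulary": "tgt-vocab.txt",
        },
        params={"beam_width": 4},
    )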

build(input_shape)[source]

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(features, labels=None, training=None, step=None)[source]

Runs the model.

Parameters
  • features – A nested structure of feature tf.Tensor objects.

  • labels – A nested structure of label tf.Tensor objects.

  • training – If True, run in training mode.

  • step – The current training step.

Returns

A tuple containing,

  • The model outputs (usually unscaled probabilities).

  • The model predictions.
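
As a tf.keras model, the instance itself is callable. A sketch of one forward pass, assuming the model was initialized as above, that the training files exist (paths are placeholders), and that the examples_inputter attribute with its make_training_dataset() method is available:

    # Sketch: build one batch with the model's own input pipeline,
    # then run the forward pass. File names are placeholders.
    dataset = model.examples_inputter.make_training_dataset(
        "src-train.txt", "tgt-train.txt", batch_size=32
    )
    features, labels = next(iter(dataset))
    outputs, predictions = model(features, labels=labels, training=True)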

serve_function()[source]

Returns a function for serving this model.

Returns

A tf.function.
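
A sketch of exporting the model with this function (the model must be initialized and its variables created beforehand; the export directory is a placeholder):

    import tensorflow as tf

    # Sketch: the serve function carries an input signature, so a concrete
    # function can be traced directly for SavedModel export.
    tf.saved_model.save(
        model,
        "export_dir",
        signatures=model.serve_function().get_concrete_function(),
    )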

compute_loss(outputs, labels, training=True)[source]

Computes the loss.

Parameters
  • outputs – The model outputs (usually unscaled probabilities).

  • labels – The dict of label tf.Tensor objects.

  • training – If True, compute the loss for training.

Returns

The loss or a tuple (numerator, train_denominator, stats_denominator) to use a different normalization for training compared to reporting (e.g. batch-normalized for training vs. token-normalized for reporting).
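
A sketch of computing the training loss for one batch, with features and labels as in the call() example above:

    # Sketch: forward pass followed by the loss computation.
    outputs, _ = model(features, labels=labels, training=True)
    loss = model.compute_loss(outputs, labels, training=True)
    if isinstance(loss, tuple):
        # (numerator, train_denominator, stats_denominator)
        loss = loss[0] / loss[1]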

format_prediction(prediction, params=None)[source]

Formats the model prediction for file saving.

Parameters
  • prediction – The model prediction (same structure as the second output of opennmt.models.Model.call()).

  • params – (optional) Dictionary of formatting parameters.

Returns

A string or list of strings.
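
A sketch of formatting a batch of predictions for writing to a file; it assumes the opennmt.utils.misc.extract_batches helper to split the batched prediction dictionary into per-example dictionaries:

    import tensorflow as tf

    from opennmt.utils import misc

    # Sketch: run inference, then format each prediction of the batch.
    _, predictions = model(features)
    predictions = tf.nest.map_structure(lambda t: t.numpy(), predictions)
    for prediction in misc.extract_batches(predictions):
        print(model.format_prediction(prediction))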

transfer_weights(new_model, new_optimizer=None, optimizer=None, ignore_weights=None)[source]

Transfers weights (and optionally optimizer slots) from this model to another.

This default implementation assumes that self and new_model have exactly the same variables. Subclasses can override this method to transfer weights to another model type or architecture. For example, opennmt.models.SequenceToSequence can transfer weights to a model with a different vocabulary.

All model and optimizer variables are expected to be initialized.

Parameters
  • new_model – The new model to transfer weights to.

  • new_optimizer – The new optimizer.

  • optimizer – The optimizer used for the current model.

  • ignore_weights – Optional list of weights to not transfer.
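
A sketch of transferring trained weights into a structurally identical model initialized with updated vocabularies (file names are placeholders; it assumes create_variables() to build the new model's weights before the transfer):

    # Sketch: build a second model with the same architecture but new
    # vocabularies, create its variables, then copy the weights over.
    new_model = opennmt.models.SequenceToSequence(
        source_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
        target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
        encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),
        decoder=opennmt.decoders.SelfAttentionDecoder(num_layers=6),
    )
    new_model.initialize(
        {
            "source_vocabulary": "src-vocab-updated.txt",
            "target_vocabulary": "tgt-vocab-updated.txt",
        }
    )
    new_model.create_variables()
    model.transfer_weights(new_model)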