# ParallelEncoder

class opennmt.encoders.ParallelEncoder(*args, **kwargs)[source]

An encoder that encodes its input with several encoders and reduces the outputs and states together. Additional layers can be applied on each encoder output and on the combined output (e.g. to layer normalize each encoder output).

If the input is a single tf.Tensor, the same input is encoded by every encoder. Otherwise, when the input is a Python sequence (e.g. the non-reduced output of an opennmt.inputters.ParallelInputter), each encoder encodes its corresponding input in the sequence.

See for example the “Multi-Column Encoder” in https://arxiv.org/abs/1804.09849.

Inherits from: opennmt.encoders.Encoder

__init__(encoders, outputs_reducer=None, states_reducer=None, outputs_layer_fn=None, combined_output_layer_fn=None)[source]

Initializes the parameters of the encoder.

Parameters
• encoders – A list of opennmt.encoders.Encoder instances to run in parallel.

• outputs_reducer – A reducer applied to merge all encoder outputs.

• states_reducer – A reducer applied to merge all encoder states.

• outputs_layer_fn – A callable, or a list of callables with one element per encoder, applied to each encoder output.

• combined_output_layer_fn – A callable applied to the combined output.

Raises

ValueError – if outputs_layer_fn is a list with a size not equal to the number of encoders.
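The length check behind this ValueError can be sketched in plain Python. This is an illustrative stand-in, not the actual OpenNMT-tf implementation, and the helper name is hypothetical:

```python
def resolve_outputs_layer_fns(encoders, outputs_layer_fn=None):
    """Hypothetical helper mirroring the documented check: a list of
    output layer functions must have exactly one entry per encoder."""
    if outputs_layer_fn is None:
        return [None] * len(encoders)
    if isinstance(outputs_layer_fn, (list, tuple)):
        if len(outputs_layer_fn) != len(encoders):
            raise ValueError(
                "The number of output layer functions (%d) must match "
                "the number of encoders (%d)"
                % (len(outputs_layer_fn), len(encoders)))
        return list(outputs_layer_fn)
    # A single callable is shared by every encoder.
    return [outputs_layer_fn] * len(encoders)
```

With two encoders, passing a one-element list raises the error, while a single callable is replicated for each encoder.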

call(inputs, sequence_length=None, training=None)[source]

Encodes an input sequence.

Parameters
• inputs – The inputs to encode of shape $$[B, T, ...]$$.

• sequence_length – The length of each input with shape $$[B]$$.

• training – Run in training mode.

Returns

A tuple (outputs, state, sequence_length).
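The input dispatch described above (a single input is broadcast to every encoder, while a sequence is matched element-wise) can be sketched with toy callables standing in for encoders and numbers standing in for tensors. Identifiers here are illustrative, not part of the library:

```python
def parallel_encode(encoders, inputs, reduce_outputs=sum):
    """Toy stand-in for ParallelEncoder.call: each "encoder" is a plain
    callable so the dispatch and reduction logic is easy to see."""
    if isinstance(inputs, (list, tuple)):
        # A sequence input: one element per encoder, matched positionally.
        outputs = [encoder(x) for encoder, x in zip(encoders, inputs)]
    else:
        # A single input: encoded by every encoder.
        outputs = [encoder(inputs) for encoder in encoders]
    # The reducer merges the per-encoder outputs into one result.
    return reduce_outputs(outputs)

encoders = [lambda x: x + 1, lambda x: x * 2]
single = parallel_encode(encoders, 3)       # (3 + 1) + (3 * 2) = 10
paired = parallel_encode(encoders, [3, 4])  # (3 + 1) + (4 * 2) = 12
```

In the real encoder the reduction is performed by the configured outputs and states reducers over tensors of shape $$[B, T, ...]$$, but the broadcast-vs-zip dispatch follows the same shape.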