Training

Monitoring

OpenNMT-tf uses TensorBoard to log information during training. Simply start TensorBoard with the active log directory, e.g.:

tensorboard --logdir="."

then open the URL displayed in the shell to monitor and visualize training data, including:

  • training and evaluation loss
  • training speed
  • learning rate
  • gradients norm
  • computation graphs
  • word embeddings
  • decoder sampling probability
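
If the training runs on a remote machine, you can also bind TensorBoard to a specific address and port so it is reachable from your local browser; run/baseline below is a hypothetical model directory:

tensorboard --logdir=run/baseline --host=0.0.0.0 --port=6006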

Replicated training

OpenNMT-tf training can make use of multiple GPUs with in-graph replication. In this mode, the main section of the graph is replicated over multiple devices and batches are processed in parallel. The resulting graph is equivalent to training with batches N times larger, where N is the number of GPUs used.

For example, if your machine has 4 GPUs, simply add the --num_gpus option:

onmt-main train [...] --num_gpus 4

Note that evaluation and inference will run on a single device.
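
Because each device processes its own batch, the effective batch size is the configured batch size multiplied by the number of GPUs. The snippet below is a minimal sketch of the relevant training options (the values are illustrative); with --num_gpus 4 it results in about 4 × 3072 = 12288 tokens per training step:

train:
  # Per-device batch size in tokens; the effective batch is N times larger with N GPUs.
  batch_size: 3072
  batch_type: tokens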

Distributed training

OpenNMT-tf also supports asynchronous distributed training with between-graph replication. In this mode, each graph replica processes a batch independently, computes the gradients, and asynchronously updates a shared set of parameters.

To enable distributed training, the user should use the train_and_eval run type and set the following on the command line:

  • a chief worker host that runs a training loop and manages checkpoints, summaries, etc.
  • a list of worker hosts that run a training loop
  • a list of parameter server hosts that synchronize the parameters

Then a training instance should be started on each host with the selected task. For example, the command:

CUDA_VISIBLE_DEVICES=0 onmt-main train_and_eval [...] \
    --ps_hosts localhost:2222 \
    --chief_host localhost:2223 \
    --worker_hosts localhost:2224,localhost:2225 \
    --task_type worker \
    --task_index 1

will start worker 1 on the current machine using its first GPU. By setting CUDA_VISIBLE_DEVICES correctly, asynchronous distributed training can be run on a single multi-GPU machine.
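
For completeness, the sketch below shows possible commands for the other tasks on the same machine; the host lists must be identical on every instance, the GPU assignments are illustrative, and the task type names chief and ps are assumed to match the host options above:

# Parameter server (no GPU needed).
CUDA_VISIBLE_DEVICES= onmt-main train_and_eval [...] \
    --ps_hosts localhost:2222 \
    --chief_host localhost:2223 \
    --worker_hosts localhost:2224,localhost:2225 \
    --task_type ps \
    --task_index 0

# Chief worker.
CUDA_VISIBLE_DEVICES=1 onmt-main train_and_eval [...] \
    --ps_hosts localhost:2222 \
    --chief_host localhost:2223 \
    --worker_hosts localhost:2224,localhost:2225 \
    --task_type chief \
    --task_index 0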

For more details, see the documentation of tf.estimator.train_and_evaluate. Also see tensorflow/ecosystem to integrate distributed training with open-source tools like Docker or Kubernetes.

Note: distributed training will also split the training directory model_dir across the instances. This could impact features that restore checkpoints, such as inference, manual export, or checkpoint averaging. The recommended approach to properly support these features while running distributed training is to store model_dir on a shared filesystem, e.g. by using HDFS.

Mixed precision training

Thanks to work from NVIDIA, OpenNMT-tf supports training models using FP16 computation. Mixed precision training is automatically enabled when the data type of the inputters is defined to be tf.float16. See for example the predefined model TransformerFP16:

onmt-main train [...] --model_type TransformerFP16

Additional training configurations are available to tune the loss scaling algorithm:

params:
  # (optional) For mixed precision training, the loss scaling to apply (a constant value or
  # an automatic scaling algorithm: "backoff", "logmax", default: "backoff")
  loss_scale: backoff
  # (optional) For mixed precision training, the additional parameters to pass to the
  # loss scale (see the source file opennmt/optimizers/mixed_precision_wrapper.py).
  loss_scale_params:
    scale_min: 1.0
    step_factor: 2.0

For more information about the implementation and expert recommendations on how to maximize performance, see the OpenSeq2Seq documentation. Currently, mixed precision training requires Volta GPUs and NVIDIA's TensorFlow Docker image.
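
One possible way to start a shell in such an image is sketched below; the image tag (18.07-py3) and the mount path are assumptions to adapt to your setup:

nvidia-docker run -it --rm -v $PWD:/workspace nvcr.io/nvidia/tensorflow:18.07-py3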

If you want to convert an existing checkpoint between FP32 and FP16, see the script onmt-convert-checkpoint. This is typically useful when you want to train using FP16 but still release a model in FP32, e.g.:

onmt-convert-checkpoint --model_dir ende-fp16/ --output_dir ende-fp32/ --target_dtype float32

The checkpoint generated in ende-fp32/ can then be used in infer or export run types.
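
For example, assuming the other inference options are configured as usual, you could point --checkpoint_path at the converted directory (or at a specific checkpoint inside it):

onmt-main infer [...] --checkpoint_path ende-fp32/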

Retraining

Continuing from a stopped training

This is the most common retraining case: the training was interrupted but should run longer. In that case, simply relaunch the same command that you used for the initial training, e.g.:

# Start the training.
onmt-main train_and_eval --model_type NMTSmall --auto_config --config data.yml

# ... the training is interrupted or stopped ...

# Continue from the latest checkpoint.
onmt-main train_and_eval --model_type NMTSmall --auto_config --config data.yml

Note: if the training stopped because train_steps was reached, you should first increase this value before continuing.
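
For example, a minimal sketch of the relevant configuration change, assuming the previous limit was 500000 steps (the values are illustrative):

train:
  # Raise the step limit so the continued training has additional steps to run.
  train_steps: 1000000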

Fine-tune an existing model

Retraining can also be used to fine-tune an existing model. For example, in machine translation it is faster to adapt a generic model to a specific domain than to train a new model from scratch.

OpenNMT-tf offers some features to make this process easier:

  • The script onmt-update-vocab can be used to change the word vocabularies contained in a checkpoint (e.g. to add domain terminology)
  • The command line argument --checkpoint_path can be used to load the weights of an existing checkpoint while starting from a fresh training state (i.e. with a new learning rate schedule and new optimizer variables); see the example below
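
As an illustration of the second point, the sketch below resumes training from a generic checkpoint directory while reading a domain-specific data configuration; generic-ende/ and data_domain.yml are hypothetical names:

onmt-main train_and_eval --model_type NMTSmall --auto_config \
    --config data_domain.yml --checkpoint_path generic-ende/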