Server

Models

class onmt.translate.translation_server.ServerModel(opt, model_id, preprocess_opt=None, tokenizer_opt=None, postprocess_opt=None, custom_opt=None, load=False, timeout=-1, on_timeout='to_cpu', model_root='./', ct2_model=None, ct2_translator_args=None, ct2_translate_batch_args=None, features_opt=None)[source]

Bases: object

Wrap a model with server functionality.

Parameters:
  • opt (dict) – Options for the Translator

  • model_id (int) – Model ID

  • preprocess_opt (list) – Options for the preprocessing processes, or None

  • tokenizer_opt (dict) – Options for the tokenizer or None

  • postprocess_opt (list) – Options for the postprocessing processes, or None

  • custom_opt (dict) – Custom options that can be used within preprocess or postprocess; default None

  • load (bool) – Whether to load the model during __init__()

  • timeout (int) – Seconds before running do_timeout(); negative values mean no timeout

  • on_timeout (str) – What to do on timeout; options are [to_cpu, unload] (see do_timeout())

  • model_root (str) – Path to the model directory; it must contain the model and tokenizer files
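
A minimal construction sketch, assuming the option and tokenizer keys used in the OpenNMT-py example server configurations; the model path, id, and option values below are hypothetical:

    from onmt.translate.translation_server import ServerModel

    # Hypothetical translate options; "models" paths are resolved against model_root.
    opt = {"models": ["model_a.pt"], "beam_size": 5, "gpu": -1}

    model = ServerModel(
        opt,
        model_id=100,
        tokenizer_opt={"type": "sentencepiece", "model": "sp.model"},
        load=True,               # load the weights now rather than on first use
        timeout=600,             # after 600 idle seconds ...
        on_timeout="to_cpu",     # ... move the model to CPU (or "unload" it)
        model_root="./available_models",
    )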

build_tokenizer(tokenizer_opt)[source]

Build tokenizer described by tokenizer_opt.
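
The tokenizer options are passed as a dict. Two shapes commonly seen in the example server configurations are sketched below; model paths and parameter values are hypothetical:

    # SentencePiece: "model" is a path relative to model_root.
    sp_opt = {"type": "sentencepiece", "model": "sp.model"}

    # OpenNMT Tokenizer (pyonmttok): a tokenization "mode" plus a "params" dict.
    onmttok_opt = {
        "type": "pyonmttok",
        "mode": "aggressive",
        "params": {"joiner_annotate": True},
    }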

detokenize(sequence, side='tgt')[source]

Detokenize a single sequence

Same args/returns as tokenize()

do_timeout()[source]

Timeout function that frees GPU memory.

Moves the model to CPU or unloads it, depending on the value of self.on_timeout

maybe_convert_align(src, tgt, align, align_scores)[source]

Convert alignment to match detokenized src/tgt (or not).

Parameters:
  • src (str) – The tokenized source sequence.

  • tgt (str) – The tokenized target sequence.

  • align (str) – The alignment corresponding to the src/tgt pair.

Returns:

The alignment corresponding to the detokenized src/tgt.

Return type:

align (str)

maybe_detokenize(sequence, side='tgt')[source]

De-tokenize the sequence (or not). Same args/returns as tokenize()

maybe_detokenize_with_align(sequence, src, side='tgt')[source]

De-tokenize (or not) the sequence (with alignment).

Parameters:
  • sequence (str) – The sequence to detokenize, possibly with an alignment separated by '|||'

Returns:

The detokenized sequence. align (str): The alignment corresponding to the detokenized src/tgt, or None if there is no alignment in the output.

Return type:

sequence (str)
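
A hedged illustration of the '|||'-separated format this method consumes when alignments are requested (e.g. with --report_align); the token strings and alignment pairs are made up, and some versions append a further '|||' field with alignment scores:

    # Raw hypothesis as produced by the translator with alignment reporting on.
    raw = "▁Hallo ▁Welt ||| 0-0 1-1"
    hyp, align = (part.strip() for part in raw.split("|||"))
    # hyp   -> "▁Hallo ▁Welt"  (then detokenized via maybe_detokenize())
    # align -> "0-0 1-1"       (then remapped via maybe_convert_align())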

maybe_postprocess(sequence)[source]

Postprocess the sequence (or not)

maybe_preprocess(sequence)[source]

Preprocess the sequence (or not)

maybe_tokenize(sequence, side='src')[source]

Tokenize the sequence (or not).

Same args/returns as tokenize

maybe_transform_feats(raw_src, tok_src, feats)[source]

Apply InferFeatsTransform to features

parse_opt(opt)[source]

Parse the option set passed by the user using onmt.opts

Parameters:

opt (dict) – Options passed by the user

Returns:

full set of options for the Translator

Return type:

opt (argparse.Namespace)
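
A hedged sketch of the option dict parse_opt() consumes, assuming an existing ServerModel instance named server_model; apart from models (resolved against model_root), the keys below are illustrative onmt.opts translate flags:

    user_opt = {
        "models": ["model_a.pt"],
        "beam_size": 5,
        "n_best": 1,
        "gpu": -1,          # -1 means CPU
    }
    namespace = server_model.parse_opt(user_opt)
    assert namespace.beam_size == 5   # plain argparse.Namespace attributes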

postprocess(sequence)[source]

Postprocess a single sequence.

Parameters:

sequence (str) – The sequence to process.

Returns:

The postprocessed sequence.

Return type:

sequence (str)

preprocess(sequence)[source]

Preprocess a single sequence.

Parameters:

sequence (str) – The sequence to preprocess.

Returns:

The preprocessed sequence.

Return type:

sequence (str)

rebuild_seg_packages(all_preprocessed, results, scores, aligns, align_scores, n_best)[source]

Rebuild proper segment packages based on initial n_seg.

to_gpu()[source]

Move the model to GPU.

tokenize(sequence, side='src')[source]

Tokenize a single sequence.

Parameters:

sequence (str) – The sequence to tokenize.

Returns:

The tokenized sequence.

Return type:

tok (str)

tokenizer_marker(side='src')[source]

Return the marker used by the side tokenizer.

Core Server

exception onmt.translate.translation_server.ServerModelError[source]

Bases: Exception

class onmt.translate.translation_server.Timer(start=False)[source]

Bases: object

class onmt.translate.translation_server.TranslationServer[source]

Bases: object

clone_model(model_id, opt, timeout=-1)[source]

Clone the model model_id.

Different options may be passed. If opt is None, it will use the same set of options.

list_models()[source]

Return the list of available models

load_model(opt, model_id=None, **model_kwargs)[source]

Load a model given a set of options

preload_model(opt, model_id=None, **model_kwargs)[source]

Preload the model: update the internal data structure.

It will effectively load the model if load is set
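
A hedged sketch of loading a model programmatically; the option dict mirrors the ServerModel example above, the extra keyword arguments are forwarded to the ServerModel constructor, and the id, paths, and returned values should be treated as illustrative:

    from onmt.translate.translation_server import TranslationServer

    server = TranslationServer()
    model_id, load_time = server.load_model(
        {"models": ["model_a.pt"], "beam_size": 5},
        model_id=100,
        model_root="./available_models",
        load=True,
    )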

run(inputs)[source]

Translate inputs

We keep the same format as the Lua version, i.e. [{"id": model_id, "src": "sequence to translate"}, {...}]

We use inputs[0]["id"] as the model id
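
A hedged example of that request format, reusing the server from the sketch above; the model id and source sentences are made up:

    inputs = [
        {"id": 100, "src": "Hello world."},
        {"id": 100, "src": "How are you?"},
    ]
    results = server.run(inputs)
    # `results` bundles the translations with scores and timing/alignment
    # metadata; the exact layout depends on the OpenNMT-py version.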

start(config_file)[source]

Read the config file and pre-/load the models.
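
A minimal configuration sketch, modeled on the example conf.json shipped with OpenNMT-py; all ids, paths, and option values are hypothetical:

    import json
    from onmt.translate.translation_server import TranslationServer

    conf = {
        "models_root": "./available_models",
        "models": [
            {
                "id": 100,
                "model": "model_a.pt",
                "timeout": 600,
                "on_timeout": "to_cpu",
                "load": True,
                "opt": {"gpu": -1, "beam_size": 5},
                "tokenizer": {"type": "sentencepiece", "model": "sp.model"},
            }
        ],
    }
    with open("conf.json", "w") as f:
        json.dump(conf, f)

    server = TranslationServer()
    server.start("conf.json")   # preloads every model; those with "load": true are loaded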

unload_model(model_id)[source]

Manually unload a model.

It will free the memory and cancel the timer