tensorpack.predict package

class tensorpack.predict.PredictorBase[source]

Bases: object

Base class for all predictors.

return_input

whether the call will also return (inputs, outputs) or just outputs

Type

bool

__call__(*dp)[source]

Call the predictor on some inputs.

Example

When you have a predictor defined with two inputs, call it with:

predictor(e1, e2)
Returns

list[array] – list of outputs

class tensorpack.predict.OnlinePredictor(input_tensors, output_tensors, return_input=False, sess=None)[source]

Bases: tensorpack.predict.base.PredictorBase

A predictor which directly uses an existing session and the given tensors.

sess

The tf.Session object associated with this predictor.

ACCEPT_OPTIONS = False

See tf.Session.make_callable.

__init__(input_tensors, output_tensors, return_input=False, sess=None)[source]
Parameters
  • input_tensors (list) – list of names of the input tensors.

  • output_tensors (list) – list of names of the output tensors.

  • return_input (bool) – same as PredictorBase.return_input.

  • sess (tf.Session) – the session this predictor runs in. If None, will use the default session at the first call. Note that in TensorFlow, the default session is thread-local.
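
Example (a minimal sketch; the graph, tensor names, and input shape below are hypothetical assumptions, not part of the API):

import numpy as np
import tensorflow as tf

sess = tf.Session()
# 'input:0' and 'logits:0' are assumed to already exist in the default graph
predictor = OnlinePredictor(['input:0'], ['logits:0'], sess=sess)
outputs = predictor(np.random.rand(1, 100))  # a list of output arrays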

class tensorpack.predict.OfflinePredictor(config)[source]

Bases: tensorpack.predict.base.OnlinePredictor

A predictor built from a given config. A single-tower model will be built without any prefix.

Example:

config = PredictConfig(model=my_model,
                       input_names=['image'],
                       # use names of tensors defined in the model
                       output_names=['linear/output', 'prediction'])
predictor = OfflinePredictor(config)
image = np.random.rand(1, 100, 100, 3)  # the shape of "image" defined in the model
linear_output, prediction = predictor(image)
__init__(config)[source]
Parameters

config (PredictConfig) – the config to use.

class tensorpack.predict.MultiThreadAsyncPredictor(predictors, batch_size=5)[source]

Bases: tensorpack.predict.base.AsyncPredictorBase

A multithreaded online async predictor which runs a list of OnlinePredictor. It performs an extra round of batching internally.

__init__(predictors, batch_size=5)[source]
Parameters
  • predictors (list) – a list of OnlinePredictor available to use.

  • batch_size (int) – the maximum size of an internal batch.

put_task(dp, callback=None)[source]
Parameters
  • dp (list) – A datapoint as inputs. It could be either batched or not batched, depending on the predictor implementation.

  • callback – a thread-safe callback. When the results are ready, it will be called with the “future” object.

Returns

concurrent.futures.Future – a Future of results.

start()[source]

Start the worker threads.
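
Example (a sketch; predictors is an assumed list of OnlinePredictor and image an assumed input array):

async_pred = MultiThreadAsyncPredictor(predictors, batch_size=16)
async_pred.start()

def on_done(future):
    # called from a worker thread once the results are ready
    print(future.result())

future = async_pred.put_task([image], callback=on_done)
outputs = future.result()  # or block on the Future directly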

class tensorpack.predict.PredictConfig(model=None, tower_func=None, input_signature=None, input_names=None, output_names=None, session_creator=None, session_init=None, return_input=False, create_graph=True)[source]

Bases: object

__init__(model=None, tower_func=None, input_signature=None, input_names=None, output_names=None, session_creator=None, session_init=None, return_input=False, create_graph=True)[source]

Users need to provide enough arguments to create a tower function, which will be used to construct the graph. This can be provided in the following ways:

  1. model: a ModelDesc instance. It will contain a tower function by itself.

  2. tower_func: a tfutils.TowerFunc instance.

    Provide a tower function instance directly.

  3. tower_func: a symbolic function and input_signature: the signature of the function.

    Provide both a function and its signature.

Example:

config = PredictConfig(model=my_model,
                       input_names=['image'],
                       output_names=['linear/output', 'prediction'])
Parameters
  • model (ModelDescBase) – to be used to construct a tower function.

  • tower_func – a callable which takes input tensors (by positional args) and constructs a tower, or a tfutils.TowerFunc instance.

  • input_signature ([tf.TensorSpec]) – if tower_func is a plain function (instead of a TowerFunc), this describes the list of inputs it takes.

  • input_names (list) – a list of input tensor names. Defaults to match input_signature. The name can be either the name of a tensor, or the name of one input of the tower.

  • output_names (list) – a list of names of the output tensors to predict. These can be any tensors in the graph that are computable from the tensors corresponding to input_names.

  • session_creator (tf.train.SessionCreator) – how to create the session. Defaults to NewSessionCreator().

  • session_init (SessionInit) – how to initialize variables of the session. Defaults to doing nothing.

  • return_input (bool) – same as in PredictorBase.return_input.

  • create_graph (bool) – create a new graph, or use the default graph when the predictor is first initialized.
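
Example using a plain tower function plus its signature (a sketch; the layer names, shapes, and TF1-style ops are assumptions):

import tensorflow as tf

def tower_func(image):
    logits = tf.layers.dense(tf.layers.flatten(image), 10, name='linear')
    tf.nn.softmax(logits, name='prediction')

config = PredictConfig(
    tower_func=tower_func,
    input_signature=[tf.TensorSpec((None, 28, 28), tf.float32, 'image')],
    output_names=['prediction'])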

class tensorpack.predict.DatasetPredictorBase(config, dataset)[source]

Bases: object

Base class for dataset predictors. These are predictors which run over a DataFlow.

__init__(config, dataset)[source]
Parameters
  • config (PredictConfig) – the config to use.

  • dataset (DataFlow) – the DataFlow to run on.

get_all_result()[source]
Returns

list – all outputs for all datapoints in the DataFlow.

abstract get_result()[source]
Yields

output for each datapoint in the DataFlow.

class tensorpack.predict.SimpleDatasetPredictor(config, dataset)[source]

Bases: tensorpack.predict.dataset.DatasetPredictorBase

Simply create one predictor and run it on the DataFlow.
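
Example (a sketch; config and my_dataflow are assumed to exist):

pred = SimpleDatasetPredictor(config, my_dataflow)
for outputs in pred.get_result():
    pass  # outputs for one datapoint, in order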

class tensorpack.predict.MultiProcessDatasetPredictor(config, dataset, nr_proc, use_gpu=True, ordered=True)[source]

Bases: tensorpack.predict.dataset.DatasetPredictorBase

Run prediction in multiple processes, on either CPU or GPU. Each process fetches datapoints as tasks and runs predictions independently.

__init__(config, dataset, nr_proc, use_gpu=True, ordered=True)[source]
Parameters
  • config – same as in DatasetPredictorBase.

  • dataset – same as in DatasetPredictorBase.

  • nr_proc (int) – number of processes to use.

  • use_gpu (bool) – use GPU or CPU. If GPU, nr_proc cannot exceed the number of devices in CUDA_VISIBLE_DEVICES.

  • ordered (bool) – produce outputs in the original order of the datapoints. This will be a bit slower. Otherwise, get_result() will produce outputs in any order.
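
Example (a sketch; config and my_dataflow are assumed to exist):

# run prediction with 2 worker processes, one GPU each
pred = MultiProcessDatasetPredictor(config, my_dataflow, nr_proc=2, use_gpu=True)
results = pred.get_all_result()  # outputs for every datapoint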

class tensorpack.predict.FeedfreePredictor(config, input_source)[source]

Bases: tensorpack.predict.base.PredictorBase

Create a predictor that takes inputs from an InputSource instead of from feeds. An instance pred of FeedfreePredictor can only be called as pred(), which returns a list of output values as defined in config.output_names.

__init__(config, input_source)[source]
Parameters
  • config (PredictConfig) – the config to use.

  • input_source (InputSource) – the feedfree InputSource to use. Must match the signature of the tower function in config.
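
Example (a sketch; my_dataflow is an assumed DataFlow, wrapped in a QueueInput):

from tensorpack import QueueInput

pred = FeedfreePredictor(config, QueueInput(my_dataflow))
outputs = pred()  # one list of values for config.output_names per call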

class tensorpack.predict.MultiTowerOfflinePredictor(config, towers)[source]

Bases: tensorpack.predict.base.OnlinePredictor

A multi-tower multi-GPU predictor. It builds one predictor for each tower.

__init__(config, towers)[source]
Parameters
  • config (PredictConfig) – the config to use.

  • towers – a list of relative GPU ids.

get_predictor(n)[source]
Returns

OnlinePredictor – the nth predictor on the nth tower.

get_predictors()[source]
Returns

list[OnlinePredictor] – a list of predictors.
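
Example (a sketch; config and image are assumed to exist):

# build one predictor on each of the first two visible GPUs
pred = MultiTowerOfflinePredictor(config, towers=[0, 1])
outputs = pred.get_predictor(1)(image)  # run on the second tower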

class tensorpack.predict.DataParallelOfflinePredictor(config, towers)[source]

Bases: tensorpack.predict.base.OnlinePredictor

A data-parallel predictor. It builds one predictor that utilizes all GPUs.

Note that it doesn’t split/concat inputs/outputs automatically. Instead, its inputs are: [input[0] in tower[0], input[1] in tower[0], ..., input[0] in tower[1], input[1] in tower[1], ...]. Similarly for the outputs.
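
Example (a sketch with two towers and a single 'image' input per tower; the names are assumptions):

pred = DataParallelOfflinePredictor(config, towers=[0, 1])
# inputs for tower 0 come first, then inputs for tower 1
out_tower0, out_tower1 = pred(images_part0, images_part1)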

__init__(config, towers)[source]
Parameters
  • config (PredictConfig) – the config to use.

  • towers – a list of relative GPU ids.