Input Pipeline

This tutorial contains some general discussion of how to read data efficiently for TensorFlow, and how tensorpack supports these methods. You don't have to read it, since these are details underneath the tensorpack interface, but knowing them helps you understand where the efficiency comes from and choose the best input pipeline for your task.

Prepare Data in Parallel

[Figure: prefetch]

Common sense, no matter what framework you use:

Prepare data in parallel with the training!

The reasons are:

  1. Data preparation often consumes non-trivial time (depending on the actual problem).

  2. Data preparation often uses completely different resources from training (see figure above) -- doing them together doesn't slow you down. In fact you can further parallelize different stages in the preparation since they also use different resources.

  3. Data preparation often doesn't depend on the result of the previous training step.

Let's do some simple math: according to tensorflow/benchmarks, 4 P100 GPUs can train ResNet50 at 852 images/sec, and the total size of those images is 852*224*224*3*4 bytes ≈ 489MB per second. Assuming you have 5GB/s memcpy bandwidth (roughly what a single-threaded copy achieves), simply copying the data once would take about 0.1s -- slowing down your training by 10%. Think about how many more copies are made during your preprocessing.
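
For a quick sanity check of these numbers, here is the same arithmetic in a few lines of Python (the throughput and bandwidth figures are the assumptions stated above, not measurements):

```python
imgs_per_sec = 852                      # ResNet50 throughput on 4 P100s (tensorflow/benchmarks)
bytes_per_img = 224 * 224 * 3 * 4       # one float32 224x224x3 image
mb_per_sec = imgs_per_sec * bytes_per_img / 1024 ** 2   # ~489 MB of input per second
memcpy_bw = 5 * 1024                    # assumed single-thread memcpy bandwidth, in MB/s
copy_time = mb_per_sec / memcpy_bw      # time spent copying, per second of training
print(round(mb_per_sec), round(copy_time, 3))           # -> 489, 0.096 (about 10% overhead per copy)
```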

Failure to hide the data preparation latency is the main reason why people cannot see good GPU utilization. Always choose a framework that allows latency hiding. However, most other TensorFlow wrappers are designed around feed_dict. This is a major reason why tensorpack is faster.

Python Reader or TF Reader?

The above discussion is valid regardless of what you use to load/preprocess data, whether Python code or TensorFlow operators. Both are supported in tensorpack, but we recommend using Python.

TensorFlow Reader: Pros

  • Faster read/preprocessing.

    • Often true, but not necessarily. With Python you have access to many other fast libraries, which might be unsupported in TF.

    • Python may be just fast enough.

      As long as data preparation keeps up with training, and the latency of all four blocks in the above figure is hidden, running faster brings no further gain in overall throughput. For most problems, up to the scale of multi-GPU ImageNet training, Python can offer enough speed if you use a fast library (e.g. tensorpack.dataflow). See the Efficient DataFlow tutorial on how to build a fast Python reader with DataFlow; a minimal sketch also follows this list.

  • No "Copy to TF" (i.e. feed_dict) stage.

    • True. But as mentioned above, the latency can usually be hidden.

      In tensorpack, TF queues are usually used to hide the "Copy to TF" latency, and TF StagingArea can help hide the "Copy to GPU" latency. They are used by most examples in tensorpack.
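
For concreteness, here is a minimal sketch of the kind of pure-Python reader referred to above, built with tensorpack.dataflow. The file list and the load_and_augment function are placeholders you would replace with real logic; note that PrefetchDataZMQ is named MultiProcessRunnerZMQ in recent tensorpack versions:

```python
import numpy as np
from tensorpack.dataflow import DataFromList, MapData, PrefetchDataZMQ, BatchData

filenames = ['%06d.jpg' % i for i in range(1000)]   # stand-in for your real file list

def load_and_augment(dp):
    # stand-in for real decoding/augmentation with cv2, PIL, lmdb, ... (any Python library)
    fname = dp[0]
    img = np.zeros((224, 224, 3), dtype='uint8')
    label = 0
    return [img, label]

df = DataFromList([[f] for f in filenames], shuffle=True)
df = MapData(df, load_and_augment)      # pure-Python preprocessing
df = PrefetchDataZMQ(df, nr_proc=16)    # run the above in 16 processes over ZMQ pipes
df = BatchData(df, 64)                  # produce batches of 64
```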

TensorFlow Reader: Cons

The disadvantage of a TF reader is obvious, and it's huge: it's too complicated.

Unlike running a mathematical model, reading data is a complicated and poorly-structured task. You need to handle different formats, corner cases, and noisy data, all of which require conditionals, loops, and sometimes even exception handling. These operations are naturally unsuitable for a symbolic graph.

Let's take a look at what users are asking for from tf.data:

To support all these features, which could've been done with three lines of code in Python, you either need a new TF API, or you call Dataset.from_generator (i.e. Python again) to the rescue.
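
To illustrate, a Dataset.from_generator pipeline looks roughly like the sketch below; the generator here just yields dummy arrays as a stand-in for whatever messy Python reading logic you actually need:

```python
import numpy as np
import tensorflow as tf

def read_my_messy_data():
    # arbitrary Python: branching, retries, exception handling, any third-party library
    for _ in range(100):
        yield np.random.rand(224, 224, 3).astype('float32'), np.random.randint(10)

dataset = tf.data.Dataset.from_generator(
    read_my_messy_data,
    output_types=(tf.float32, tf.int64),
    output_shapes=((224, 224, 3), ()))
dataset = dataset.batch(32).prefetch(1)
```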

It only makes sense to use TF to read data if your data is originally very clean and well-formatted. If not, you may feel like writing a script to format your data, but then you're almost writing a Python loader already!

Think about it: it's a waste of time to write a Python script to transform raw data into a TF-friendly format, and then a TF script to transform this format into tensors. The intermediate format doesn't have to exist. You just need the right interface to connect Python to the graph directly and efficiently. tensorpack.InputSource is such an interface.

InputSource

InputSource is an abstract interface in tensorpack that describes where the inputs come from and how they enter the graph. For example,

  1. FeedInput: Come from a DataFlow and get fed to the graph (slow).

  2. QueueInput: Come from a DataFlow and get buffered on CPU by a TF queue.

  3. StagingInput: Come from some InputSource, then prefetched on GPU by a TF StagingArea.

  4. TFDatasetInput: Come from a tf.data.Dataset.

  5. dataflow_to_dataset: Come from a DataFlow, and get further processed by tf.data.Dataset.

  6. TensorInput: Come from some tensors you define (can be reading ops, for example).

  7. ZMQInput: Come from some ZeroMQ pipe, where the reading/preprocessing may happen in a different process or even a different machine.

Typically, we recommend QueueInput + StagingInput, as it's good for most use cases. If your data has to come from a separate process for whatever reason, use ZMQInput. If you'd still like to use TF reading ops, define a tf.data.Dataset and use TFDatasetInput.
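
As a minimal sketch of that recommended setup (QueueInput wrapped by StagingInput): the tensorpack class and function names below are real, but MyModel is a placeholder for your own ModelDesc subclass and FakeData merely stands in for a real DataFlow such as the one sketched earlier:

```python
from tensorpack import QueueInput, StagingInput, TrainConfig, SimpleTrainer, launch_train_with_config
from tensorpack.dataflow import FakeData

df = FakeData([[64, 224, 224, 3], [64]], 1000)   # stand-in for your real DataFlow
data = StagingInput(QueueInput(df))              # CPU-side TF queue + GPU-side StagingArea

config = TrainConfig(
    model=MyModel(),                             # placeholder: your ModelDesc subclass
    data=data,
    max_epoch=100,
)
launch_train_with_config(config, SimpleTrainer())
```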