Save and Load models
Inspect a TF Checkpoint
The ModelSaver callback saves the model to the directory defined by logger.get_logger_dir(), in TensorFlow checkpoint format. A TF checkpoint typically includes a .data-xxxxx file and a .index file. Both are necessary.
tf.train.NewCheckpointReader is the official tool to parse a TensorFlow checkpoint. Read the TF docs for details.
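As a minimal sketch (the checkpoint prefix is a placeholder, and the compat fallback is only needed on TF 2.x), the reader can be used like this:

```python
# A minimal sketch of parsing a TF checkpoint with tf.train.NewCheckpointReader.
# Pass the checkpoint prefix (path without the .data/.index suffix).
import tensorflow as tf

try:
    NewCheckpointReader = tf.train.NewCheckpointReader  # TF 1.x
except AttributeError:
    # TF 2.x keeps the same reader under the compat namespace
    NewCheckpointReader = tf.compat.v1.train.NewCheckpointReader

def list_checkpoint_vars(ckpt_prefix):
    """Return a {variable name: shape} dict for every variable in the checkpoint."""
    reader = NewCheckpointReader(ckpt_prefix)
    # reader.get_tensor(name) would return one variable's value as a numpy array
    return reader.get_variable_to_shape_map()
```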
Tensorpack also provides some small tools to work with checkpoints:

- scripts/ls-checkpoint.py demonstrates how to print all variables and their shapes in a checkpoint.
- scripts/dump-model-params.py can be used to remove unnecessary variables in a checkpoint. It takes a metagraph file (which is also saved by ModelSaver) and only saves variables that the model needs at inference time. It can dump the model to a var-name: value dict saved in npz format.
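The npz format mentioned above is just a name-to-array mapping. A minimal sketch (numpy only; the variable names and shapes are hypothetical) of writing and reading such a dict:

```python
# A "var-name: value" dict: each variable name maps to a numpy array,
# stored with np.savez. Names and shapes here are hypothetical.
import os
import tempfile
import numpy as np

params = {
    "conv0/W": np.zeros((3, 3, 3, 64), dtype=np.float32),
    "conv0/b": np.zeros((64,), dtype=np.float32),
}

path = os.path.join(tempfile.mkdtemp(), "params.npz")
np.savez(path, **params)

# np.load recovers the same name -> array mapping:
loaded = dict(np.load(path))
print(sorted(loaded))  # -> ['conv0/W', 'conv0/b']
```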
Load a Model to a Session
Model loading (in either training or inference) is done through the session_init interface. Currently there are two ways a session can be restored: session_init=SaverRestore(...), which restores a TF checkpoint, or session_init=DictRestore(...), which restores a dict. get_model_loader is a small helper that decides which one to use from a file name. To load multiple models, use ChainInit.
Variable restoring is completely based on name match between variables in the current graph and variables in the checkpoint or dict being restored. Variables that appear on only one side will be printed as a warning. Therefore, transfer learning is trivial: to load a pre-trained model, just use the same variable names; to re-train a layer from scratch, just rename it.
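The name-matching rule above can be sketched in plain Python (variable names and values here are hypothetical, chosen only to show the three possible outcomes):

```python
# A plain-Python sketch of name-based restoring: variables present on both
# sides are restored; the rest are reported. Names/values are hypothetical.
graph_vars = {"conv0/W", "fc-retrain/W"}   # variables in the current graph
ckpt_vars = {"conv0/W": 0.5, "fc/W": 1.0}  # variables in the checkpoint

restored = {name: ckpt_vars[name] for name in graph_vars & ckpt_vars.keys()}
only_in_graph = graph_vars - ckpt_vars.keys()  # stay at initialization; warned
only_in_ckpt = ckpt_vars.keys() - graph_vars   # ignored; warned

print(restored)  # -> {'conv0/W': 0.5}
```

Renaming a layer (here "fc" vs. "fc-retrain") drops it out of the match, so it is re-initialized and re-trained from scratch.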