Getting started with TensorFlow

TensorFlow is an open source library to help you develop and train machine learning models.

TensorFlow conda packages

WML CE includes several packages from the TensorFlow ecosystem. The following table shows which packages are installed by the powerai and powerai-cpu meta packages, as well as those that are pulled in when one of the TensorFlow variants is installed directly. You can install any individual package with conda install <package name>.

Table 1. An X indicates that the package is installed with the corresponding meta package (powerai, powerai-cpu) or TensorFlow variant (tensorflow-gpu, tensorflow).

Name                     Description                                                          powerai  tensorflow-gpu  powerai-cpu  tensorflow
tensorflow               TensorFlow Meta package                                                                           X           X
tensorflow-gpu           TensorFlow GPU Meta package                                             X           X
tensorflow-base          Contains the core TensorFlow Logic                                      X           X              X           X
tensorflow-estimator     Required TensorFlow Estimator package                                   X           X              X           X
tensorboard              Visualization Dashboard for TensorFlow                                  X           X              X           X
tensorflow-probability   Optional TensorFlow Probability package                                 X                          X
ddl-tensorflow           Distributed Deep Learning custom operator for TensorFlow               X
bazel                    Fast, scalable, multi-language and extensible build system
tensorflow-serving-api   Serving system for machine learning models                              X                          X
tensorflow-serving       Serving system for machine learning models
tensorrt                 C++ library running pre-trained networks quickly and efficiently        X           X
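
For example, to install the optional TensorFlow Probability package listed in the table above:

conda install --strict-channel-priority tensorflow-probability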

More information

The TensorFlow home page provides a variety of information, including tutorials, how-to documents, and a getting started guide.

Additional tutorials and examples are available from the community.

This release of WML CE includes TensorFlow 2.1.0, the first release of TensorFlow 2 supported in WML CE. TensorFlow 2 contains many changes that make users more productive, including eager execution, standardization on the Keras API, and the removal of redundant APIs. For more information about the changes in TensorFlow 2, see the TensorFlow website.
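
For example, here is a minimal sketch of eager execution, where operations run immediately and return concrete values rather than building a graph to execute later:

import tensorflow as tf

# With eager execution (the TensorFlow 2 default), this runs immediately
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))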

tensorflow and tensorflow-gpu conda packages

WML CE includes a version of TensorFlow built without GPU support. This allows training and inference to run on POWER8 and POWER9 systems that do not have GPUs, or on systems where you want to train and run inference without using the GPUs.
  • To install TensorFlow built for CPU support run the following command:
    conda install --strict-channel-priority tensorflow
    This command installs the TensorFlow package, with no packages for GPU support.
  • To install TensorFlow built for GPU support run the following command:
    conda install --strict-channel-priority tensorflow-gpu
    This command installs TensorFlow along with the CUDA, cuDNN, and NCCL conda packages used with the GPUs.
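
Either installation can be verified from Python; for example, this minimal check prints the TensorFlow version and any GPUs that TensorFlow can see (an empty list is expected with the CPU-only variant):

import tensorflow as tf

# Print the installed version and the GPUs visible to TensorFlow
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))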

tf.keras TensorFlow high-level API

tf.keras version 2.3.0 is included with TensorFlow 2.1.0.
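
As a small illustration of the tf.keras API (the layer sizes and optimizer here are arbitrary choices):

import tensorflow as tf

# Define a minimal tf.keras model and compile it for training
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])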

TensorFlow Large Model Support (TFLMS)

Large Model Support provides an approach to training large models, and with large batch sizes, that cannot fit in GPU memory. It does this by using a graph-editing library that takes the user model's computational graph and automatically adds swap-in and swap-out nodes, so that tensors can be transferred from GPU memory to system memory, and back, during training.

For more information about TensorFlow LMS, see Getting started with TensorFlow large model support.
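
As a minimal sketch, assuming the WML CE build of TensorFlow (the LMS enablement call below is specific to WML CE and is not part of stock TensorFlow), LMS can be enabled before training begins:

import tensorflow as tf

# WML CE-specific call (assumption): enable Large Model Support tensor swapping
tf.config.experimental.set_lms_enabled(True)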

Distributed Deep Learning (DDL) custom operator for TensorFlow

The DDL custom operator uses IBM Spectrum™ MPI and NCCL to provide high-speed communications for distributed TensorFlow.

The DDL custom operator can be found in the ddl-tensorflow package. For more information about DDL and about the TensorFlow operator, see Integration with deep learning frameworks.
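
As an illustrative sketch, distributed training jobs are typically launched with the ddlrun utility that ships with the DDL packages; the host names and script here are placeholders:

ddlrun -H host1,host2 python train.py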

TensorFlow with NVIDIA TensorRT (TF-TRT)

NVIDIA TensorRT is a platform for high-performance deep learning inference. Trained models can be optimized with TensorRT by replacing TensorRT-compatible subgraphs with a single TRTEngineOp node that is used to build a TensorRT engine. TensorRT can also calibrate for lower precision (FP16 and INT8) with a minimal loss of accuracy. After a model is optimized with TensorRT, the usual TensorFlow workflow, including TensorFlow Serving, is still used for inferencing.

A SavedModel can be optimized for TensorRT with the following Python snippet; the directory paths are placeholders:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Placeholder paths for the input SavedModel and the optimized output
input_saved_model_dir = '/path/to/saved_model'
output_saved_model_dir = '/path/to/tensorrt_saved_model'

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
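
The optimized SavedModel can then be loaded back into the normal TensorFlow workflow for inference; a minimal sketch, assuming the model was exported with the default serving signature:

import tensorflow as tf

# Load the TensorRT-optimized SavedModel and look up its inference function
loaded = tf.saved_model.load(output_saved_model_dir)
infer = loaded.signatures['serving_default']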

TensorRT is enabled in the tensorflow-gpu and tensorflow-serving packages.

For additional information about TF-TRT, see the official NVIDIA documentation.

Additional TensorFlow features

The powerai TensorFlow packages include TensorBoard. For more information, see Getting started with TensorBoard.

The TensorFlow package includes support for additional features:

TensorFlow Estimator

The tensorflow-estimator package is installed with TensorFlow in both the GPU and CPU variants. TensorFlow Estimator is an alternative high-level API for TensorFlow and provides the tf.estimator API. Several premade estimators for different model types are included. More information about these estimator models, and about TensorFlow Estimator in general, can be found on the TensorFlow Estimators page.
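
As a brief sketch of the tf.estimator API with one of the premade estimators (the feature column, class count, and model directory are illustrative):

import tensorflow as tf

# A premade linear classifier built from a single numeric feature column
feature_columns = [tf.feature_column.numeric_column('x', shape=(4,))]
classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=3,
    model_dir='/tmp/linear_model')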

Automatic mixed precision support

TensorFlow includes a feature called Automatic Mixed Precision (AMP) that automatically takes advantage of lower-precision hardware, such as the Tensor Cores included in NVIDIA's V100 GPUs. AMP can speed up training for certain models. To enable AMP, add the following lines of Python to the model code:

from tensorflow.keras.mixed_precision import experimental as mixed_precision

# Run layer computations in float16 where safe, keeping variables in float32
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

For more information see the TensorFlow guide on Mixed precision.