Getting started with TensorFlow
TensorFlow is an open source library to help you develop and train machine learning models.
- TensorFlow conda packages
- More information
- tensorflow and tensorflow-gpu conda packages
- tf.keras TensorFlow high-level API
- TensorFlow Large Model Support (TFLMS)
- Distributed Deep Learning (DDL) custom operator for TensorFlow
- TensorFlow with NVIDIA TensorRT (TF-TRT)
- Additional TensorFlow features
- TensorFlow Estimator
- Automatic mixed precision support
TensorFlow conda packages
WML CE includes several packages from the TensorFlow ecosystem. The following table shows which packages are installed with the powerai and powerai-cpu meta packages, as well as the packages that are pulled in when installing one of the TensorFlow variants directly. You can install individual packages with conda install <package name>:
| Name | Description | Installed with powerai | Installed with tensorflow-gpu | Installed with powerai-cpu | Installed with tensorflow |
|------|-------------|------------------------|-------------------------------|----------------------------|---------------------------|
| tensorflow | TensorFlow meta package | | | X | X |
| tensorflow-gpu | TensorFlow GPU meta package | X | X | | |
| tensorflow-base | Contains the core TensorFlow logic | X | X | X | X |
| tensorflow-estimator | Required TensorFlow Estimator package | X | X | X | X |
| tensorboard | Visualization dashboard for TensorFlow | X | X | X | X |
| tensorflow-probability | Optional TensorFlow Probability package | X | | X | |
| ddl-tensorflow | Distributed Deep Learning custom operator for TensorFlow | X | | | |
| bazel | Fast, scalable, multi-language and extensible build system | | | | |
| tensorflow-serving-api | Serving system for machine learning models | X | | X | |
| tensorflow-serving | Serving system for machine learning models | | | | |
| tensorrt | C++ library for running pre-trained networks quickly and efficiently | X | X | | |
The TensorFlow home page has a variety of information, including tutorials, how-to documents, and a getting started guide. Additional tutorials and examples are available from the community.
tensorflow and tensorflow-gpu conda packages
- To install TensorFlow built for CPU support, run the following command:

  conda install --strict-channel-priority tensorflow

  This command installs the TensorFlow package with no packages for GPU support.
- To install TensorFlow built for GPU support, run the following command:

  conda install --strict-channel-priority tensorflow-gpu

  This command installs TensorFlow along with the CUDA, cuDNN, and NCCL conda packages used with the GPUs.
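After installation, you can confirm which devices TensorFlow can see. This is a minimal sketch, not part of the original documentation; on a CPU-only install the GPU list is simply empty:

```python
import tensorflow as tf

# Print the installed TensorFlow version.
print(tf.__version__)

# List the GPUs TensorFlow can access; empty on a CPU-only install.
gpus = tf.config.experimental.list_physical_devices('GPU')
print(gpus)
```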
tf.keras TensorFlow high-level API
tf.keras version 2.3.0 is included with TensorFlow 2.1.0.
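As an illustration of the included tf.keras API, a small model can be defined and compiled as follows. This is a generic sketch with arbitrary layer sizes, not an example taken from the WML CE documentation:

```python
import tensorflow as tf

# A small classifier built with the tf.keras functional API.
# The input width (16) and layer sizes are arbitrary illustration values.
inputs = tf.keras.Input(shape=(16,))
hidden = tf.keras.layers.Dense(64, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(10, activation='softmax')(hidden)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```

The model is now ready for `model.fit` on data shaped `(batch, 16)` with integer class labels.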
TensorFlow Large Model Support (TFLMS)
Large Model Support provides an approach to training large models and batch sizes that cannot otherwise fit in GPU memory. It uses a graph-editing library that takes the user model's computational graph and automatically adds swap-in and swap-out nodes for transferring tensors between GPU memory and system memory during training.
For more information about TensorFlow LMS, see Getting started with TensorFlow large model support.
Distributed Deep Learning (DDL) custom operator for TensorFlow
The DDL custom operator uses IBM Spectrum™ MPI and NCCL to provide high-speed communications for distributed TensorFlow.
The DDL custom operator can be found in the ddl-tensorflow package. For more information about DDL and about the TensorFlow operator, see Integration with deep learning frameworks.
TensorFlow with NVIDIA TensorRT (TF-TRT)
NVIDIA TensorRT is a platform for high-performance deep learning inference. Trained models can be optimized with TensorRT; this is done by replacing TensorRT-compatible subgraphs with a single TRTEngineOp that is used to build a TensorRT engine. TensorRT can also calibrate for lower precision (FP16 and INT8) with a minimal loss of accuracy. After a model is optimized with TensorRT, the usual TensorFlow workflow is still used for inferencing, including TensorFlow-Serving.
A saved model can be optimized for TensorRT with the following Python snippet:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
```
TensorRT is enabled in the GPU variants of the TensorFlow packages.
For additional information on TF-TRT, see the official NVIDIA documentation.
Additional TensorFlow features
The powerai TensorFlow packages include TensorBoard. For more information, see Getting started with TensorBoard.
The TensorFlow package includes support for additional features:
- XLA (Accelerated Linear Algebra) Compiler
- Hadoop Distributed File System (HDFS) support
- Amazon Web Services (AWS) S3 support
- Google Compute Platform (GCP) support
- NVIDIA Collective Communications Library (NCCL) 2
- CUDA compute capabilities 3.7, 6.0, 7.0, and 7.5, for NVIDIA Tesla K80, P100, V100, and T4 GPUs
- CUDA 10.2 support
- Automatic Mixed Precision (AMP) support
TensorFlow Estimator
The tensorflow-estimator package is installed with TensorFlow in both the GPU and CPU variants. TensorFlow Estimator is an alternative high-level API for TensorFlow and provides the tf.estimator API. Several premade estimators for different model types are included. More information about these estimator models and TensorFlow Estimator in general can be found on the TensorFlow Estimators page.
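For illustration, one of the premade estimators can be instantiated as follows. This is a generic sketch with made-up feature names and layer sizes, not an example from the WML CE documentation; note that the tf.estimator API was deprecated in later TensorFlow releases:

```python
import tensorflow as tf

# Instantiate a premade DNN classifier.
# The feature name 'x', hidden layer sizes, and class count are
# arbitrary illustration values.
feature_columns = [tf.feature_column.numeric_column('x', shape=(4,))]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[16, 8],
    n_classes=3)
```

Training then proceeds by passing an input function to `classifier.train`.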
Automatic mixed precision support
TensorFlow includes a feature called Automatic Mixed Precision (AMP) that automatically takes advantage of lower-precision hardware, such as the Tensor Cores included in NVIDIA's V100 GPUs. AMP can speed up training in certain models. To enable AMP, add the following lines of Python to the model code:
```python
from tensorflow.keras.mixed_precision import experimental as mixed_precision

policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
```
For more information, see the TensorFlow guide on mixed precision.