PyTorch was originally developed in late 2016 by researchers at Meta (then Facebook). It is a Python-based successor to the older Torch library, whose core data structure was the tensor. By 2022, when PyTorch moved to the Linux Foundation, over 2,400 contributors had reportedly built more than 150,000 projects with it. (Open source is the dominant paradigm in machine learning, a field that flourishes through extensive collaboration.) Like TensorFlow, PyTorch allows developers to perform NumPy-like operations on tensors and to accelerate them on GPUs rather than CPUs, a capability at the heart of deep learning.
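As a quick sketch of what that looks like in practice, the snippet below (the tensor names and shapes are arbitrary) creates two tensors, places them on a GPU when one is available, and runs NumPy-style operations on them:

```python
import torch

# Use a GPU if one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Tensors support the same elementwise math and broadcasting as NumPy arrays.
a = torch.randn(3, 4, device=device)
b = torch.randn(3, 4, device=device)

c = a + b    # elementwise addition
d = a @ b.T  # matrix multiplication: (3x4) @ (4x3) -> (3x3)

print(c.shape, d.shape, c.device)
```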
“PyTorch or TensorFlow?” is often one of the first questions for those embarking on a machine learning effort. (Formerly, a library called Theano was also in the mix; it was deprecated in 2017.) While there is no wrong answer, PyTorch has emerged as a favorite with many developers for its flexible, forgiving (“Pythonic”) design and ease of use. Long favored among academics and researchers, PyTorch is increasingly used in industry for ambitious, scalable use cases as well. Tesla’s Autopilot, for instance, was built using PyTorch, and Microsoft’s cloud computing platform Azure supports it. PyTorch has become so popular that an ecosystem of supporting tools (like Torchvision and TorchText) has grown around it. Both TensorFlow and PyTorch use a computational graph, a data structure that represents the flow of operations and variables during model training.
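To make the computational graph concrete, here is a minimal sketch (the tensor values are illustrative) of how PyTorch records operations into a graph as they execute and then walks that graph backward to compute gradients:

```python
import torch

# requires_grad=True tells PyTorch to record every operation on x
# in a dynamic computational graph as the code runs.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2; each operation becomes a graph node

# Backpropagation traverses the recorded graph to compute dy/dx.
y.backward()
print(x.grad)  # tensor([4., 6.]), i.e., 2 * x
```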
IBM is a member of the PyTorch Foundation and uses PyTorch in its watsonx portfolio.