This guide presents the main distinctions between PyTorch and TensorFlow. It is meant for anyone starting a fresh project or considering a move from one deep learning framework to the other. The focus is on programmability and flexibility when setting up the components of the deep learning stack; I won't go into performance (speed / memory usage) trade-offs.

Summary

PyTorch is better suited to rapid prototyping in research, for hobbyists, and for small-scale projects. TensorFlow is better for large-scale deployments, especially when cross-platform and embedded deployment are considerations.

Ramp-up Time

Winner: PyTorch

PyTorch is essentially a GPU-enabled drop-in replacement for NumPy, equipped with higher-level functionality for building and training deep neural networks. If you know NumPy, Python, and the usual deep learning abstractions (convolutional layers, recurrent layers, SGD, etc.), PyTorch is especially easy to learn.

TensorFlow, on the other hand, is better thought of as a programming language embedded within Python. The TensorFlow code you write is "compiled" into a graph by Python and then run by the TensorFlow execution engine. I've seen newcomers to TensorFlow struggle to wrap their heads around this extra layer of indirection. It also means TensorFlow comes with several additional concepts to learn, such as the session, the graph, variable scoping, and placeholders. More boilerplate code is needed to get a basic model running as well. Ramp-up time for TensorFlow is definitely longer than for PyTorch.
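As an illustration, here is a minimal sketch (with made-up shapes) of what a single matrix multiply looks like with TensorFlow 1.x's placeholder, variable, and session machinery; the PyTorch equivalent is just a call to torch.matmul on two tensors:

```python
import tensorflow as tf

# Graph construction: nothing is computed yet.
x = tf.placeholder(tf.float32, shape=(None, 4))
w = tf.Variable(tf.random_normal((4, 1)))
y = tf.matmul(x, w)

# Execution happens separately, inside a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]})
```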

Graph Creation and Debugging

Winner: PyTorch

Perhaps where the two frameworks differ most is in how the computational graph is created and run. In PyTorch, graph construction is dynamic, meaning the graph is built at run-time. In TensorFlow, graph construction is static, meaning the graph is "compiled" and then run. As a simple example, in PyTorch you can write a loop construct using standard Python syntax, as in the sketch below.
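A minimal sketch (the shapes of W, h, and b here are made up for illustration):

```python
import torch

T = 10  # number of loop steps; free to change between runs
W = torch.randn(4, 4)
h = torch.randn(4, 1)
b = torch.randn(4, 1)

# The graph is built as this loop executes.
for _ in range(T):
    h = torch.matmul(W, h) + b
```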

Here T can change between executions of this code. In TensorFlow, by contrast, this requires using control-flow operations such as tf.while_loop when constructing the graph. TensorFlow does have dynamic_rnn for the more common constructs, but creating custom dynamic computations is much harder.

The simple graph construction in PyTorch is easier to reason about, but perhaps even more important is that it's easier to debug. Debugging PyTorch code is just like debugging Python code: you can use pdb and set a breakpoint anywhere. Debugging TensorFlow code is not so easy; the two options are to request the variables you want to inspect from the session, or to learn and use the TensorFlow debugger (tfdbg).
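For instance, dropping into the debugger in the middle of a PyTorch script is just standard Python (a toy sketch):

```python
import pdb
import torch

x = torch.randn(3, 3)
y = x @ x

pdb.set_trace()  # execution pauses here; x and y can be inspected interactively

print(y.sum())
```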

Coverage

Winner: TensorFlow

I expect this gap to converge to zero as PyTorch matures. However, there are still some features that TensorFlow supports and PyTorch does not. A few features that PyTorch lacks at the time of writing are:

  • Flipping a tensor along a dimension
  • Checking a tensor for NaN and infinity
  • Fast Fourier transforms

All of these are supported in TensorFlow. Also, the TensorFlow contrib package contains many more higher-level functions and models than PyTorch.
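For reference, a sketch of the corresponding ops using the TensorFlow 1.x names (the input values are made up):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, float("nan")]])

flipped = tf.reverse(x, axis=[1])  # flip along a dimension
nan_mask = tf.is_nan(x)            # elementwise NaN check
inf_mask = tf.is_inf(x)            # elementwise infinity check
spectrum = tf.fft(tf.cast(x[0], tf.complex64))  # fast Fourier transform
```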

Serialization

Winner: TensorFlow

Saving and loading models is simple in both frameworks. PyTorch has an especially simple API which can either save all the weights of a model or pickle the whole class. The TensorFlow Saver object is also easy to use and exposes a few more options for checkpointing.
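A minimal sketch of the two APIs (the model, variables, and file names are illustrative; TensorFlow is shown with the 1.x Saver):

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for a real model

# Save and restore just the weights (the state dict).
torch.save(model.state_dict(), "model.pt")
model.load_state_dict(torch.load("model.pt"))
```

```python
import tensorflow as tf

w = tf.Variable(tf.zeros((4, 2)))  # stand-in for real parameters

# The Saver checkpoints the graph's variables from within a session.
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "model.ckpt")
```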

The main advantage of TensorFlow for serialization is that the entire graph can be saved as a protocol buffer. This includes both the parameters and the operations. The graph can then be loaded in other supported languages (C++, Java). This is critical for deployment stacks where Python isn't an option. In theory it is also useful when you change the model's source code but want to be able to run old models.

Deployment

Winner: TensorFlow

Both frameworks are easy to wrap in, for example, a Flask web server for small-scale server-side deployments.

For mobile and embedded deployments, TensorFlow works. That is more than can be said for most other deep learning frameworks, including PyTorch. Deploying to Android or iOS does require a non-trivial amount of work in TensorFlow, but at least you don't have to rewrite the entire inference portion of your model in Java or C++.

For high-performance server-side deployments there is TensorFlow Serving. I have no experience with TensorFlow Serving, so I can't write about its pros and cons with confidence. For heavily used machine learning services, I suspect TensorFlow Serving could be reason enough to stay with TensorFlow. Other than performance, one of TensorFlow Serving's noticeable features is that models can be hot-swapped easily without bringing the service down. Check out this Zendesk blog post for an example of a QA bot deployed with TensorFlow Serving.

Documentation

Winner: Tie

I've found everything I need in the docs for both frameworks. The Python APIs are well documented, and there are enough examples and tutorials to learn either framework.

The one thing that’s mostly undocumented is the PyTorch C library. However, this only counts if a custom C extension is written and maybe if the software is supported.

Data Loading

Winner: PyTorch

The APIs for data loading are well designed in PyTorch. The interfaces are specified in a dataset, a sampler, and a data loader. A data loader takes a dataset and a sampler and produces an iterator over the dataset according to the sampler's schedule. Parallelizing data loading is as simple as passing a num_workers argument to the data loader.
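A minimal sketch of the pattern (the dataset here is a toy stand-in):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# A toy dataset: the required interface is __len__ and __getitem__.
class SquaresDataset(Dataset):
    def __len__(self):
        return 100

    def __getitem__(self, i):
        return torch.tensor([float(i)]), torch.tensor([float(i * i)])

if __name__ == "__main__":
    # num_workers spawns subprocesses that load batches in parallel.
    loader = DataLoader(SquaresDataset(), batch_size=8,
                        shuffle=True, num_workers=2)
    for x, y in loader:
        pass  # training step would go here
```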

I haven't found the tools for data loading in TensorFlow (readers, queues, queue runners, etc.) especially useful. In part this is because adding all the pre-processing code you want to run in parallel to the TensorFlow graph is not always straightforward (e.g., computing a spectrogram). Also, the API itself is more verbose and harder to learn.

Device Management

Winner: TensorFlow

Device management in TensorFlow is about as seamless as it gets. Usually you don't need to specify anything, since the defaults are good. For example, TensorFlow assumes you want to run on the GPU if one is available. In PyTorch, you have to explicitly move everything onto the device, even when CUDA is enabled.

The only downside of TensorFlow's device management is that, by default, it consumes all the memory on all available GPUs even if only one is being used. A simple workaround is to set the CUDA_VISIBLE_DEVICES environment variable. Sometimes people forget this, and GPUs can look busy when they are in fact idle.
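For example, one way to restrict a process to the first GPU (a sketch; the variable must be set before TensorFlow initializes CUDA):

```python
import os

# Must be set before TensorFlow touches the GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf
```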

In PyTorch, I've found that my code needs more frequent checks for CUDA availability and more explicit device management. This is especially the case when writing code that should be able to run on both the CPU and the GPU. Also, converting, say, a PyTorch tensor on the GPU into a NumPy array is somewhat verbose.
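A sketch of the usual pattern (using the current torch.device API):

```python
import torch

# Explicit availability check and device selection.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3).to(device)

# GPU tensors must be moved back to host memory before conversion.
x_np = x.cpu().numpy()
```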

Custom Extensions

Winner: PyTorch

Both frameworks can build and bind custom extensions written in C, C++, or CUDA. Again, TensorFlow requires more boilerplate code, though it is arguably cleaner for supporting multiple types and devices. In PyTorch, you simply write an interface and the corresponding implementation for each of the CPU and GPU versions. Compiling the extension is also straightforward in both frameworks and doesn't require downloading any headers or source code outside of what's included with the pip installation.
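As a rough illustration of how lightweight this can be in PyTorch, here is a sketch that JIT-compiles a trivial C++ operator with the current torch.utils.cpp_extension API (assumes a working C++ toolchain; the op and names are made up):

```python
import torch
from torch.utils.cpp_extension import load_inline

# A trivial C++ op; a real extension would add CPU and GPU implementations.
cpp_source = """
torch::Tensor add_one(torch::Tensor x) {
    return x + 1;
}
"""

ext = load_inline(name="add_one_ext",
                  cpp_sources=cpp_source,
                  functions=["add_one"])

print(ext.add_one(torch.zeros(3)))  # tensor([1., 1., 1.])
```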

A note on TensorBoard

TensorBoard is a tool for visualizing various aspects of training machine learning models, and it's one of the most useful features to come out of the TensorFlow project. With a few code snippets in a training script, you can view the training curves and validation results of any model. Since TensorBoard runs as a web service, it is especially convenient for visualizing results stored on headless nodes.
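Logging a scalar with the TensorFlow 1.x summary API looks roughly like this (a sketch with a made-up loss value):

```python
import tensorflow as tf

loss = tf.placeholder(tf.float32)
summary_op = tf.summary.scalar("loss", loss)

writer = tf.summary.FileWriter("logs")
with tf.Session() as sess:
    for step in range(100):
        s = sess.run(summary_op, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(s, step)
```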

This was one feature I made sure I could keep (or find an alternative to) before switching to PyTorch. Fortunately there are at least two open-source projects that make this possible. The first is tensorboard_logger and the second is crayon. The tensorboard_logger library is even easier to use than TensorFlow's summaries, though you need TensorBoard installed to use it. The crayon project is a complete replacement for TensorBoard but requires more setup (docker is a prerequisite).
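A sketch of logging from a PyTorch training loop with tensorboard_logger (assuming its configure / log_value interface; the run directory and loss are made up):

```python
from tensorboard_logger import configure, log_value

configure("runs/run-1")  # directory that TensorBoard will read from

for step in range(1000):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    log_value("loss", loss, step)
```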

A note on Keras

Keras is a higher-level API with a configurable back-end. At the moment TensorFlow, Theano, and CNTK are supported, though perhaps in the not-too-distant future PyTorch will be included as well. Keras is also distributed with TensorFlow as part of tf.contrib.

Although I haven’t talked about Keras above I can use the API particularly easily. It is one of the fastest ways to run a profound neural network architecture with many of the most widely used. However, the PyTorch or core TensorFlow are not as flexible as the API.

A note on TensorFlow Fold

In February 2017, Google announced TensorFlow Fold. The library is built on top of TensorFlow and allows for the construction of more dynamic graphs. The main advantage of the library appears to be dynamic batching, which automatically batches computations on inputs of varying size (think recursive networks on parse trees). In terms of programmability, the syntax is not as simple as PyTorch's, though in some cases the performance improvements from batching may be worth the cost.