Let’s see how easy it is to actually install PyTorch on your machine!

Installation is straightforward, and the exact steps depend on system details such as your operating system and package manager. PyTorch can be installed from the command prompt or from within an IDE such as PyCharm.
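For example, a typical installation with pip looks like the line below. The exact command varies by OS, package manager, and CUDA version, so check the selector on pytorch.org for your setup:

pip install torch torchvision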
Now, let's look at how NumPy concepts carry over into PyTorch.
Tensors
Tensors are similar to NumPy's n-dimensional arrays, with the addition that tensors can also be used on a GPU to accelerate computing.
Let's build a simple tensor and check its output. First, let's look at how we can build an uninitialized 5 x 3 matrix:
import torch

x = torch.empty(5, 3)
print(x)
Output:
tensor([[8.3665e+22, 4.5580e-41, 1.6025e-03],
        [3.0763e-41, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 3.4438e-41],
        [0.0000e+00, 4.8901e-36, 2.8026e-45],
        [6.6121e+31, 0.0000e+00, 9.1084e-44]])
Let’s build a randomly initialized matrix now:
x = torch.rand(5, 3)
print(x)

Output:
tensor([[0.1607, 0.0298, 0.7555],
        [0.8887, 0.1625, 0.6643],
        [0.7328, 0.5419, 0.6686],
        [0.0793, 0.1133, 0.5956],
        [0.3149, 0.9995, 0.6372]])
Now build a tensor directly from data:
x = torch.tensor([5.5, 3])
print(x)
Output:
tensor([5.5000, 3.0000])
Tensor Operations
Multiple syntaxes are available for operations. Let's look at addition in the following example:
x = torch.randn(5, 3)  # x from the previous example is 1-D, so create a 5 x 3 tensor first
y = torch.rand(5, 3)
print(x + y)
Output:
tensor([[ 0.2349, -0.0427, -0.5053], [ 0.6455, 0.1199, 0.4239], [ 0.1279, 0.1105, 1.4637], [ 0.4259, -0.0763, -0.9671], [ 0.6856, 0.5047, 0.4250]])
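As noted above, the same operation can be written in several equivalent ways. Here is a quick sketch of the standard alternatives for addition, all part of the core PyTorch API:

# functional form
print(torch.add(x, y))

# providing an output tensor as an argument
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

# in-place: adds x to y, mutating y (note the trailing underscore)
y.add_(x)
print(y)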
Resizing: You can use “torch.view” to reshape/resize a tensor:
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
Output:
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
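Since tensors mirror NumPy arrays, converting between the two is cheap and easy. A minimal sketch of the standard bridge (torch.from_numpy and Tensor.numpy; on CPU the two objects share the same underlying memory):

import numpy as np

a = torch.ones(5)
b = a.numpy()                      # tensor -> NumPy array
c = torch.from_numpy(np.ones(3))   # NumPy array -> tensor
print(b, c)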
PyTorch: AutoGrad Module
The autograd package provides automatic differentiation for all operations on tensors.
It is a define-by-run framework, which means that your backpropagation is defined by how your code is run, and that every single iteration can be different.
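A minimal sketch of autograd in action: mark a tensor with requires_grad=True, run a computation, and call backward() to populate the gradients:

x = torch.ones(2, 2, requires_grad=True)  # track operations on x
y = (x * 3).sum()                         # a simple scalar function of x
y.backward()                              # compute dy/dx
print(x.grad)                             # the gradient is 3 everywhere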
Next on this PyTorch tutorial blog, let's look at an interesting and easy use case.
PyTorch Use Case: Image Classifier
In general, when dealing with image, text, audio, or video data, you can use standard Python packages that load the data into a NumPy array, which can then be converted into a torch.Tensor (see the sketch after the list below).
- For images, packages such as Pillow and OpenCV are helpful.
- For audio, packages such as SciPy and Librosa are helpful.
- For text, raw Python or Cython based loading, or libraries such as NLTK and SpaCy, are useful.
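As an illustration of that load-as-NumPy-then-convert workflow, here is a minimal sketch using Pillow (the filename example.jpg is hypothetical):

from PIL import Image
import numpy as np
import torch

img = Image.open('example.jpg')   # hypothetical image file
arr = np.asarray(img)             # load into a NumPy array (H x W x C)
tensor = torch.from_numpy(arr)    # convert to a torch.Tensor
print(tensor.shape)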
For vision purposes in particular, there is a package called torchvision, which includes data loaders for popular datasets such as ImageNet, CIFAR10, and MNIST, as well as image data transformers.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset. It has the following classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The images in CIFAR-10 are of size 3x32x32, that is to say, 3-channel color images of 32x32 pixels:

PyTorch: Training The CIFAR10 Classifier

1. Loading And Normalizing CIFAR10
It's very simple to load CIFAR10 with torchvision!
import torch
import torchvision
import torchvision.transforms as transforms
The output of the torchvision datasets are PILImage images in the range [0, 1]. We transform them to tensors normalized to the range [-1, 1].
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
Output:
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Files already downloaded and verified
Next, let's display some of the training images!
import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize from [-1, 1] back to [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

Output:
dog bird horse horse
2. Define A Convolutional Neural Network
We need to account for images with 3 channels: red, green, and blue. Here is the code that defines the CNN architecture:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
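As a quick sanity check (not part of the original walkthrough), you can feed a random batch shaped like the CIFAR10 data through the untrained network and confirm it produces one score per class:

dummy = torch.randn(4, 3, 32, 32)   # a fake batch of 4 CIFAR10-sized images
print(net(dummy).shape)             # expected: torch.Size([4, 10])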
3. Define Loss Function And Optimizer
Next, the loss function has to be defined. For classification, we can use a cross-entropy loss. We are also going to use SGD with momentum as the optimizer.
Cross-entropy loss measures the performance of a classification model whose output is a probability value between 0 and 1. A loss of 0 corresponds to an ideal model, and the loss grows as the predicted probability diverges from the actual label. For example, predicting a probability of 0.2 for a class whose true probability is 1 yields an extremely large loss value, signaling an ineffective prediction!
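A small sketch that makes this concrete with nn.CrossEntropyLoss (which operates on raw scores and applies softmax internally); the numbers here are made up for illustration:

loss_fn = nn.CrossEntropyLoss()
target = torch.tensor([0])                         # the true class is class 0

confident_right = torch.tensor([[4.0, 0.1, 0.1]])  # high score on class 0
confident_wrong = torch.tensor([[0.1, 4.0, 0.1]])  # high score on class 1

print(loss_fn(confident_right, target))  # small loss, roughly 0.04
print(loss_fn(confident_wrong, target))  # large loss, roughly 3.9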
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
4. Train The Network
This is when things start to get exciting! We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
Output:
[1, 2000] loss: 2.236
[1, 4000] loss: 1.880
[1, 6000] loss: 1.676
[1, 8000] loss: 1.586
[1, 10000] loss: 1.515
[1, 12000] loss: 1.464
[2, 2000] loss: 1.410
[2, 4000] loss: 1.360
[2, 6000] loss: 1.360
[2, 8000] loss: 1.325
[2, 10000] loss: 1.312
[2, 12000] loss: 1.302
Finished Training
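At this point you may want to save the trained weights so you don't have to retrain later. A minimal sketch using the standard PyTorch pattern (the path ./cifar_net.pth is an arbitrary choice):

torch.save(net.state_dict(), './cifar_net.pth')   # save the learned parameters
# later: net.load_state_dict(torch.load('./cifar_net.pth'))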
5. Test The Network On The Test Data
We have trained the network for 2 passes over the training dataset. However, we need to check whether the network has learned anything at all.
We will check this by predicting the class label that the neural network outputs and testing it against the ground truth. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step! Let's display an image from the test set to get familiar with it.
dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

Output:
GroundTruth: cat ship ship plane
Okay, now let's see what the neural network thinks these examples above are:
outputs = net(images)
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks the image belongs to that class. So, let's get the index of the highest energy:
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
Output:
Predicted: cat car car plane
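Since these outputs are raw, unnormalized energies (logits), you can turn them into probabilities with a softmax if you want confidence scores; a small sketch using torch.nn.functional (F was imported earlier):

probs = F.softmax(outputs, dim=1)   # normalize energies into probabilities
print(probs[0])                     # class probabilities for the first image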
The results seem pretty good.
Next on this PyTorch tutorial blog, let's look at how the network performs on the whole dataset!
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
Output:
Accuracy of the network on the 10000 test images: 54 %
That looks better than chance, which is 10% accuracy (randomly picking a class out of 10 classes).
So it seems the network has learned something.
Which classes performed well, and which did not?
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
Output:
Accuracy of plane : 61 %
Accuracy of car : 85 %
Accuracy of bird : 46 %
Accuracy of cat : 23 %
Accuracy of deer : 40 %
Accuracy of dog : 36 %
Accuracy of frog : 80 %
Accuracy of horse : 59 %
Accuracy of ship : 65 %
Accuracy of truck : 46 %
And that's it: in this PyTorch tutorial blog, we have trained a small neural network to classify images!
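As mentioned at the start, tensors can be moved to a GPU to accelerate computation. A minimal sketch of what that would look like for this classifier, assuming a CUDA-capable GPU is available (this is the standard .to(device) pattern, not covered in the steps above):

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.to(device)   # move the model's parameters to the GPU

# inside the training loop, move each batch as well:
# inputs, labels = inputs.to(device), labels.to(device)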