Using Optimizers from PyTorch

Optimization is the process of finding the best possible set of parameters for a deep learning model. An optimizer proposes new parameter values and evaluates them against a criterion (typically the loss function) to decide how the parameters should change. As a core part of training a neural network, the optimizer determines the weights and biases that produce the desired output.
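
To make this concrete, here is a minimal sketch of the idea in plain PyTorch, before any optimizer package is involved: a single parameter is nudged against its gradient until a criterion is minimized. The target value and learning rate are placeholders chosen for illustration.

```python
import torch

w = torch.tensor(0.0, requires_grad=True)  # the parameter to optimize
target = torch.tensor(3.0)                 # value that minimizes the criterion
lr = 0.1                                   # learning rate (illustrative)

for _ in range(50):
    loss = (w - target) ** 2    # criterion evaluating the current parameter
    loss.backward()             # gradient of the loss with respect to w
    with torch.no_grad():
        w -= lr * w.grad        # move the parameter against the gradient
        w.grad.zero_()          # reset the gradient for the next step

print(w.item())  # approaches 3.0
```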

PyTorch provides many optimizers, each with its own strengths and weaknesses. These include Adagrad, Adam, RMSProp, and others.
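
These optimizers live in the `torch.optim` package and share a common interface: each is constructed from a model's parameters plus algorithm-specific hyperparameters. The model and hyperparameter values below are placeholders for illustration.

```python
import torch
from torch import nn

model = nn.Linear(10, 1)  # any model's parameters can be passed in

sgd     = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adagrad = torch.optim.Adagrad(model.parameters(), lr=0.01)
rmsprop = torch.optim.RMSprop(model.parameters(), lr=0.01)
adam    = torch.optim.Adam(model.parameters(), lr=0.001)
```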

In the previous tutorials, we implemented every step of the optimizer ourselves to update the weights and biases during training. Here, you’ll learn about the PyTorch packages that make implementing optimizers easier, as sketched below.
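
The sketch below shows the standard pattern that `torch.optim` enables, replacing the hand-written update rule with `optimizer.step()`. The synthetic data, model, and hyperparameters are invented for illustration only.

```python
import torch
from torch import nn

# Synthetic linear data: y = 2x + 1 plus a little noise (illustrative)
X = torch.randn(100, 1)
y = 2 * X + 1 + 0.1 * torch.randn(100, 1)

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimizer.zero_grad()            # clear gradients from the previous step
    loss = criterion(model(X), y)    # evaluate the criterion
    loss.backward()                  # compute gradients for all parameters
    optimizer.step()                 # let the optimizer apply its update rule
```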
