Initializing Weights for Deep Learning Models

In order to build a classifier that accurately classifies the data samples and performs well on test data, you need to initialize the weights in a way that allows the model to converge well. Usually we initialize the weights randomly. But when we use mean square error (MSE) as the loss for training a logistic regression model, we may sometimes face a few problems. Before we get into further details, note that the methodology used here also applies to classification models other than logistic regression, and it will be used in the upcoming tutorials.
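As a concrete reference point, here is a minimal sketch of the setup being described: a logistic regression model trained with MSE loss. It assumes PyTorch; the class name, toy data, and hyperparameters are illustrative, not taken from the original tutorial.

```python
# A minimal sketch (assuming PyTorch) of a logistic regression model
# trained with mean square error (MSE) as the loss.
import torch
import torch.nn as nn

class LogisticRegression(nn.Module):
    def __init__(self, n_inputs):
        super().__init__()
        self.linear = nn.Linear(n_inputs, 1)  # single output unit

    def forward(self, x):
        # sigmoid squashes the linear output into (0, 1)
        return torch.sigmoid(self.linear(x))

# toy data: 100 samples with 2 features, binary labels
X = torch.randn(100, 2)
y = (X[:, :1] + X[:, 1:] > 0).float()

model = LogisticRegression(n_inputs=2)
criterion = nn.MSELoss()                     # MSE used as the training loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X), y)            # predicted probabilities vs. labels
    loss.backward()
    optimizer.step()
```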

Our model can converge well if the weights are initialized in a proper region. However, if we start the model weights in an unfavorable region, the model may converge very slowly or fail to converge at all.
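To make the effect of the starting region visible, the sketch below (again assuming PyTorch, and reusing the hypothetical LogisticRegression model and toy data from the previous snippet) trains two copies of the model: one initialized with small random weights, and one with large constant weights that saturate the sigmoid, which tends to flatten the MSE gradients.

```python
# Sketch: compare a reasonable initialization with an unfavorable one.
# Assumes LogisticRegression, X, and y from the previous snippet.
import torch
import torch.nn as nn

def train(model, X, y, epochs=100, lr=0.1):
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        optimizer.step()
    return loss.item()

good = LogisticRegression(n_inputs=2)
torch.nn.init.normal_(good.linear.weight, mean=0.0, std=0.1)  # small weights
torch.nn.init.zeros_(good.linear.bias)

bad = LogisticRegression(n_inputs=2)
torch.nn.init.constant_(bad.linear.weight, 50.0)  # saturates the sigmoid
torch.nn.init.constant_(bad.linear.bias, 50.0)

print("final MSE (small random init):", train(good, X, y))
print("final MSE (large constant init):", train(bad, X, y))
```

With the saturated initialization, the sigmoid outputs sit near 0 or 1 for almost every sample, so the gradients of the MSE loss are close to zero and training barely moves the weights.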
