Articles About Machine Learning

Snapshot Ensemble Deep Learning Neural Network in Python

Last Updated on August 28, 2020 Model ensembles can achieve lower generalization error than single models but are challenging to develop with deep learning neural networks given the computational cost of training each individual model. An alternative is to train multiple model snapshots during a single training run and combine their predictions to make an ensemble prediction. A limitation of this approach is that the saved models will be similar, resulting in similar predictions and prediction errors and not offering […]
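A minimal sketch of the snapshot idea, not the article's full code: save the model every few epochs of a single training run, then average the saved models' predictions. The callback name, file names, and schedule below are assumptions for illustration.

```python
import numpy as np
from tensorflow.keras.callbacks import Callback

class SnapshotSaver(Callback):
    """Save a snapshot of the model every `every` epochs (illustrative helper)."""
    def __init__(self, every=10):
        super().__init__()
        self.every = every

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.every == 0:
            self.model.save(f'snapshot_{epoch + 1:03d}.h5')

def ensemble_predict(members, X):
    """Average the predictions of a list of snapshot models."""
    return np.mean([m.predict(X) for m in members], axis=0)

# assuming `model`, `X_train`, `y_train`, and `X_test` already exist:
# model.fit(X_train, y_train, epochs=50, callbacks=[SnapshotSaver(every=10)])
# members = [load_model(f'snapshot_{e:03d}.h5') for e in (30, 40, 50)]  # load_model from tensorflow.keras.models
# yhat = ensemble_predict(members, X_test)
```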

Read more

Ensemble Neural Network Model Weights in Keras (Polyak Averaging)

Last Updated on August 28, 2020 The training process of neural networks is a challenging optimization process that can often fail to converge. This can mean that the model at the end of training may not have a stable or best-performing set of weights to use as a final model. One approach to address this problem is to use an average of the weights from multiple models seen toward the end of the training run. This is called Polyak-Ruppert averaging […]
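A minimal sketch of the weight-averaging idea described above: take the element-wise mean of the weights of several models saved near the end of training. Equal weighting and the function name are assumptions for illustration.

```python
import numpy as np

def average_weights(members):
    """Return the element-wise mean of the weights of identically-shaped Keras models."""
    weights = [model.get_weights() for model in members]
    return [np.mean(np.array(layer), axis=0) for layer in zip(*weights)]

# assuming `members` is a list of models with identical architectures and
# `final_model` is a fresh model with the same architecture:
# final_model.set_weights(average_weights(members))
```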

Read more

A Gentle Introduction to the Rectified Linear Unit (ReLU)

Last Updated on August 20, 2020 In a neural network, the activation function is responsible for transforming the summed weighted input from the node into the activation of the node or output for that input. The rectified linear activation function or ReLU for short is a piecewise linear function that will output the input directly if it is positive, otherwise, it will output zero. It has become the default activation function for many types of neural networks because a model […]
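The piecewise linear behavior described above can be sketched in a couple of lines: return the input directly if it is positive, otherwise return zero.

```python
# rectified linear activation in its simplest form
def relu(x):
    return max(0.0, x)

print(relu(3.0))   # 3.0
print(relu(-2.5))  # 0.0
```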

Read more

How to Fix the Vanishing Gradients Problem Using the ReLU

Last Updated on August 25, 2020 The vanishing gradients problem is one example of unstable behavior that you may encounter when training a deep neural network. It describes the situation where a deep multilayer feed-forward network or a recurrent neural network is unable to propagate useful gradient information from the output end of the model back to the layers near the input end of the model. The result is the general inability of models with many layers to learn on […]
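A sketch of the usual remedy: use ReLU activations (with He weight initialization) in the hidden layers of a deep multilayer perceptron instead of saturating activations such as the sigmoid. The layer sizes and depth below are illustrative assumptions, not the tutorial's exact model.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# deep MLP with ReLU hidden layers and He initialization
model = Sequential()
model.add(Dense(10, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
for _ in range(4):
    model.add(Dense(10, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
```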

Read more

3 Must-Own Books for Deep Learning Practitioners

Last Updated on August 6, 2019 Developing neural networks is often referred to as a dark art. The reason for this is that being skilled at developing neural network models comes from experience. There are no reliable methods to analytically calculate how to design a “good” or “best” model for your specific dataset. You must draw on experience and experiment in order to discover what works on your problem. A lot of this experience can come from actually developing neural […]

Read more

A Gentle Introduction to Batch Normalization for Deep Neural Networks

Last Updated on December 4, 2019 Training deep neural networks with tens of layers is challenging as they can be sensitive to the initial random weights and configuration of the learning algorithm. One possible reason for this difficulty is that the distribution of the inputs to layers deep in the network may change after each mini-batch when the weights are updated. This can cause the learning algorithm to forever chase a moving target. This change in the distribution of inputs to […]
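A minimal NumPy sketch of what a batch-norm transform computes at training time: standardize each feature using the mini-batch mean and variance, then apply a learned scale (gamma) and shift (beta). The values below are illustrative assumptions.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standardize each column of a mini-batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # mini-batch of 3 examples, 2 features
print(batch_norm(x, gamma=np.ones(2), beta=np.zeros(2)))
```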

Read more

Practical Deep Learning for Coders (Review)

Last Updated on November 1, 2019 Practical deep learning is a challenging subject in which to get started. It is often taught in a bottom-up manner, requiring that you first get familiar with linear algebra, calculus, and mathematical optimization before eventually learning the neural network techniques. This can take years, and most of the background theory will not help you to get good results, fast. Instead, a top-down approach can be used where first you learn how to get results […]

Read more

How to Accelerate Learning of Deep Neural Networks With Batch Normalization

Last Updated on August 25, 2020 Batch normalization is a technique designed to automatically standardize the inputs to a layer in a deep learning neural network. Once implemented, batch normalization has the effect of dramatically accelerating the training process of a neural network and, in some cases, improving the performance of the model via a modest regularization effect. In this tutorial, you will discover how to use batch normalization to accelerate the training of deep learning neural networks in Python […]
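A sketch of one common placement in Keras, adding a BatchNormalization layer between a hidden layer and its activation; the layer sizes here are assumptions, not the tutorial's exact configuration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, BatchNormalization

model = Sequential()
model.add(Dense(50, input_dim=2, kernel_initializer='he_uniform'))
model.add(BatchNormalization())   # standardize the layer's pre-activations
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```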

Read more

How to Control the Stability of Training Neural Networks With the Batch Size

Last Updated on August 28, 2020 Neural networks are trained using gradient descent where the estimate of the error used to update the weights is calculated based on a subset of the training dataset. The number of examples from the training dataset used in the estimate of the error gradient is called the batch size and is an important hyperparameter that influences the dynamics of the learning algorithm. It is important to explore the dynamics of your model to ensure […]
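In Keras the batch size is simply the `batch_size` argument to `fit()`. A small sketch, using toy data as a stand-in for a real dataset, of the three regimes the article explores: batch, stochastic, and minibatch gradient descent.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# toy data standing in for a real dataset (an assumption for this sketch)
X = np.random.rand(1000, 2)
y = np.random.randint(0, 2, size=(1000, 1))

model = Sequential()
model.add(Dense(10, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd')

# batch gradient descent: the whole training set per weight update
# model.fit(X, y, epochs=10, batch_size=len(X))
# stochastic gradient descent: one example per weight update
# model.fit(X, y, epochs=10, batch_size=1)
# minibatch gradient descent: a small subset per update (the common default)
model.fit(X, y, epochs=10, batch_size=32)
```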

Read more

How to Configure the Learning Rate When Training Deep Learning Neural Networks

Last Updated on August 6, 2019 The weights of a neural network cannot be calculated using an analytical method. Instead, the weights must be discovered via an empirical optimization procedure called stochastic gradient descent. The optimization problem addressed by stochastic gradient descent for neural networks is challenging and the space of solutions (sets of weights) may contain many good solutions (called global optima) as well as easy-to-find but low-skill solutions (called local optima). The […]
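In Keras the learning rate is set on the optimizer passed to `compile()`. A minimal sketch; the values 0.01 and 0.9 are just common starting points, not recommendations from the post.

```python
from tensorflow.keras.optimizers import SGD

opt = SGD(learning_rate=0.01, momentum=0.9)
# assuming `model` is an already-defined Keras model:
# model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
```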

Read more