Articles About Machine Learning

Understand the Impact of Learning Rate on Neural Network Performance

Last Updated on September 12, 2020

Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm. The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging, as a value too small may result in a long training process that could get stuck, whereas a value too large may result in learning a sub-optimal set […]
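In Keras, for instance, the learning rate is set on the optimizer passed to compile. A minimal sketch (the architecture and the value 0.01 are illustrative, not recommendations):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative model; the input shape and layer sizes are arbitrary.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    layers.Dense(32, activation='relu'),
    layers.Dense(1),
])

# The learning_rate argument controls how far each weight update moves.
opt = keras.optimizers.SGD(learning_rate=0.01)
model.compile(optimizer=opt, loss='mse')
```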

Loss and Loss Functions for Training Deep Learning Neural Networks

Last Updated on October 23, 2019

Neural networks are trained using stochastic gradient descent and require that you choose a loss function when designing and configuring your model. There are many loss functions to choose from, and it can be challenging to know what to choose, or even what a loss function is and the role it plays when training a neural network. In this post, you will discover the role of loss and loss functions in training deep learning […]
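As a rough sketch of the idea: a loss function reduces the model's predictions and the expected values to a single penalty score that the optimizer tries to minimize. The toy values below are made up for illustration:

```python
import numpy as np

# Toy regression example: mean squared error collapses all prediction
# errors into one score that stochastic gradient descent minimizes.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.02

# In Keras, the same loss is named when compiling a model, e.g.:
# model.compile(optimizer='sgd', loss='mean_squared_error')
```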

How to Choose Loss Functions When Training Deep Learning Neural Networks

Last Updated on August 25, 2020

Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm. As part of the optimization algorithm, the error for the current state of the model must be estimated repeatedly. This requires the choice of an error function, conventionally called a loss function, that can be used to estimate the loss of the model so that the weights can be updated to reduce the loss on the next evaluation. Neural network models […]
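A common rule of thumb is to pair the output activation with the loss: a sigmoid output with binary cross-entropy for two-class problems, and a linear output with mean squared error for regression. A sketch with arbitrary layer sizes:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Binary classification: sigmoid output paired with binary cross-entropy.
clf = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
clf.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Regression: linear output paired with mean squared error.
reg = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(16, activation='relu'),
    layers.Dense(1),
])
reg.compile(optimizer='adam', loss='mean_squared_error')
```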

How to Use Greedy Layer-Wise Pretraining in Deep Learning Neural Networks

Last Updated on August 25, 2020

Training deep neural networks was traditionally challenging, as the vanishing gradient problem meant that weights in layers close to the input layer were not updated in response to errors calculated on the training dataset. An innovation and important milestone in the field of deep learning was greedy layer-wise pretraining, which allowed very deep neural networks to be successfully trained, achieving then state-of-the-art performance. In this tutorial, you will discover greedy layer-wise pretraining as a technique […]
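A minimal supervised sketch of the idea in Keras: train a shallow model, then repeatedly swap the output layer for a new hidden layer plus a fresh output layer and retrain. The data, shapes, and sizes below are placeholders, and whether already-trained layers are frozen is a design choice:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data; shapes are illustrative only.
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=(100, 1))

# Base model with a single hidden layer.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=5, verbose=0)

# Greedily deepen the network one hidden layer at a time.
for _ in range(3):
    model.pop()  # drop the current output layer
    for layer in model.layers:
        layer.trainable = False  # keep the already-trained layers fixed
    model.add(layers.Dense(16, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.fit(X, y, epochs=5, verbose=0)
```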

How to Use Data Scaling to Improve Deep Learning Model Stability and Performance

Last Updated on August 25, 2020

Deep learning neural networks learn how to map inputs to outputs from examples in a training dataset. The weights of the model are initialized to small random values and updated via an optimization algorithm in response to estimates of error on the training dataset. Given the use of small weights in the model and the use of error between predictions and expected values, the scale of inputs and outputs used to train the model […]
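For example, inputs can be normalized or standardized before training. A sketch using scikit-learn's scalers on made-up data whose columns have very different scales:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Made-up data: the two input columns differ in scale by several orders
# of magnitude, which can destabilize training with small weights.
X = np.array([[100.0, 0.001],
              [200.0, 0.005],
              [300.0, 0.002]])

# Normalize each column to the range [0, 1] ...
X_norm = MinMaxScaler().fit_transform(X)

# ... or standardize each column to zero mean and unit variance.
X_std = StandardScaler().fit_transform(X)
```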

How to Avoid Exploding Gradients With Gradient Clipping

Last Updated on August 28, 2020

Training a neural network can become unstable given the choice of error function, learning rate, or even the scale of the target variable. Large updates to weights during training can cause numerical overflow or underflow, often referred to as “exploding gradients.” The problem of exploding gradients is more common with recurrent neural networks, such as LSTMs, given the accumulation of gradients unrolled over hundreds of input time steps. A common and relatively easy […]
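In Keras, for instance, clipping is configured on the optimizer; the thresholds below (1.0 and 0.5) are arbitrary examples, not recommendations:

```python
from tensorflow import keras

# Rescale the whole gradient vector whenever its L2 norm exceeds 1.0 ...
opt_by_norm = keras.optimizers.SGD(learning_rate=0.01, clipnorm=1.0)

# ... or clip each individual gradient value to the range [-0.5, 0.5].
opt_by_value = keras.optimizers.SGD(learning_rate=0.01, clipvalue=0.5)

# Either optimizer is then passed to model.compile() as usual.
```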

How to Improve Performance With Transfer Learning for Deep Learning Neural Networks

Last Updated on August 25, 2020

An interesting benefit of deep learning neural networks is that they can be reused on related problems. Transfer learning refers to a technique for predictive modeling where a model developed for one problem is reused, partly or wholly, to accelerate training and improve performance on a different but related problem of interest. In deep learning, this means reusing the weights in one or more layers from a pre-trained network […]
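As a sketch of the idea in Keras: load a network pre-trained on ImageNet, freeze its weights, and add a new output head for the problem of interest. The VGG16 base and the 5-class head here are arbitrary choices for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load VGG16 pre-trained on ImageNet, without its classification head.
base = keras.applications.VGG16(include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the transferred weights

# Add a new head for the target problem (5 classes, chosen arbitrarily).
model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(5, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
```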

Your First Machine Learning Project in Python Step-By-Step

Last Updated on August 19, 2020

Do you want to do machine learning using Python, but you’re having trouble getting started? In this post, you will complete your first machine learning project using Python. In this step-by-step tutorial you will: Download and install Python SciPy and get the most useful package for machine learning in Python. Load a dataset and understand its structure using statistical summaries and data visualization. Create 6 machine learning models, pick the best, and build confidence […]
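In outline, the workflow looks something like the sketch below. The tutorial works through the iris flowers dataset; the single k-nearest neighbors model here stands in for the several models the post compares:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Load the iris dataset and check its structure.
X, y = load_iris(return_X_y=True)
print(X.shape, y.shape)  # (150, 4) (150,)

# Hold out a validation set, estimate skill with cross-validation,
# then fit the chosen model and confirm it on the held-out data.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=1)
model = KNeighborsClassifier()
print(cross_val_score(model, X_train, y_train, cv=10).mean())
model.fit(X_train, y_train)
print(model.score(X_val, y_val))
```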

Framework for Better Deep Learning

Last Updated on August 6, 2019

Modern deep learning libraries such as Keras allow you to define and start fitting a wide range of neural network models in minutes with just a few lines of code. Nevertheless, it is still challenging to configure a neural network to get good performance on a new predictive modeling problem. The challenge of getting good performance can be broken down into three main areas: problems with learning, problems with generalization, and problems with predictions. […]

How to Control Neural Network Model Capacity With Nodes and Layers

Last Updated on August 25, 2020

The capacity of a deep learning neural network model controls the scope of the types of mapping functions that it is able to learn. A model with too little capacity cannot learn the training dataset, meaning it will underfit, whereas a model with too much capacity may memorize the training dataset, meaning it will overfit, or may get stuck or lost during the optimization process. The capacity of a neural network model is defined […]
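For instance, capacity can be varied systematically by parameterizing the number of layers (depth) and the number of nodes per layer (width). A sketch with arbitrary sizes:

```python
from tensorflow import keras
from tensorflow.keras import layers

def make_model(n_layers, n_nodes, n_inputs=10):
    """Build an MLP whose capacity is set by its depth and width."""
    model = keras.Sequential([keras.Input(shape=(n_inputs,))])
    for _ in range(n_layers):
        model.add(layers.Dense(n_nodes, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

small = make_model(n_layers=1, n_nodes=8)    # low capacity: risks underfitting
large = make_model(n_layers=5, n_nodes=256)  # high capacity: risks overfitting
```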
