How to Develop Multilayer Perceptron Models for Time Series Forecasting

Last Updated on August 28, 2020. Multilayer Perceptrons, or MLPs for short, can be applied to time series forecasting. A challenge with using MLPs for time series forecasting is the preparation of the data: specifically, lag observations must be flattened into feature vectors. In this tutorial, you will discover how to develop a suite of MLP models for a range of standard time series forecasting problems. The objective of this tutorial is to provide standalone examples of each model […]

Read more
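The flattening of lag observations described above can be sketched with a small helper. This is a minimal illustration, not code from the tutorial itself; the function name `split_sequence` and the window size are assumptions for the example.

```python
import numpy as np

def split_sequence(sequence, n_steps):
    """Flatten lag observations into (samples, features) pairs for an MLP."""
    X, y = [], []
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])   # n_steps lag values form one feature vector
        y.append(sequence[i + n_steps])     # the next value is the target
    return np.array(X), np.array(y)

series = [10, 20, 30, 40, 50, 60, 70, 80, 90]
X, y = split_sequence(series, n_steps=3)
print(X.shape, y.shape)  # (6, 3) (6,)
print(X[0], y[0])        # [10 20 30] 40
```

Each row of `X` is a flat feature vector of lagged values that a Dense input layer can consume directly.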

How to Develop Convolutional Neural Network Models for Time Series Forecasting

Last Updated on August 28, 2020. Convolutional Neural Network models, or CNNs for short, can be applied to time series forecasting. There are many types of CNN models that can be used for each specific type of time series forecasting problem. In this tutorial, you will discover how to develop a suite of CNN models for a range of standard time series forecasting problems. The objective of this tutorial is to provide standalone examples of each model on each type […]

Read more
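A univariate, single-step CNN forecaster of the kind the tutorial covers can be sketched as below. The layer sizes and window length are illustrative assumptions, not values taken from the tutorial.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

n_steps, n_features = 4, 1  # 4 lag observations, one series
model = Sequential([
    # 1D convolution reads short subsequences of the input window
    Conv1D(filters=64, kernel_size=2, activation='relu',
           input_shape=(n_steps, n_features)),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(50, activation='relu'),
    Dense(1),  # single-step forecast
])
model.compile(optimizer='adam', loss='mse')
```

Note the 3D input shape `(samples, n_steps, n_features)`: unlike the MLP, the CNN keeps the temporal structure of the window rather than flattening it away.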

How to Develop LSTM Models for Time Series Forecasting

Last Updated on August 28, 2020. Long Short-Term Memory networks, or LSTMs for short, can be applied to time series forecasting. There are many types of LSTM models that can be used for each specific type of time series forecasting problem. In this tutorial, you will discover how to develop a suite of LSTM models for a range of standard time series forecasting problems. The objective of this tutorial is to provide standalone examples of each model on each type […]

Read more
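The simplest of the LSTM variants, a vanilla LSTM for univariate single-step forecasting, can be sketched as follows. The number of units and the window length are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_steps, n_features = 3, 1
model = Sequential([
    # the LSTM reads the window one time step at a time
    LSTM(50, activation='relu', input_shape=(n_steps, n_features)),
    Dense(1),  # single-step forecast
])
model.compile(optimizer='adam', loss='mse')
```

As with the CNN, samples must be shaped `(samples, n_steps, n_features)` rather than flattened into feature vectors.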

How to Grid Search Deep Learning Models for Time Series Forecasting

Last Updated on August 28, 2020. Grid searching is generally not an operation that we can perform with deep learning methods. This is because deep learning methods often require large amounts of data and large models, together resulting in models that take hours, days, or weeks to train. In those cases where the datasets are smaller, such as univariate time series, it may be possible to use a grid search to tune the hyperparameters of a deep learning model. In […]

Read more
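The shape of such a grid search can be sketched with plain Python: enumerate every combination of hyperparameters, score each one, and keep the best. The parameter names and the stand-in scoring function are hypothetical; in practice `evaluate` would fit a model and return a validation error such as RMSE.

```python
from itertools import product

# Hypothetical search space for a small univariate forecasting model.
grid = {'n_input': [6, 12], 'n_nodes': [50, 100], 'n_epochs': [50, 100]}

def evaluate(config):
    # Placeholder score: a real version would train a model with `config`
    # and return its error on a held-out validation split.
    n_input, n_nodes, n_epochs = config
    return abs(n_input - 12) + abs(n_nodes - 100) / 100

configs = list(product(*grid.values()))          # 2 * 2 * 2 = 8 combinations
scores = sorted((evaluate(c), c) for c in configs)
best_score, best_config = scores[0]
print(best_config)  # (12, 100, 50)
```

Because neural network training is stochastic, each configuration is usually evaluated several times and the scores averaged before ranking.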

Use Weight Regularization to Reduce Overfitting of Deep Learning Models

Last Updated on August 6, 2019. Neural networks learn a set of weights that best map inputs to outputs. A network with large weights can be a sign of an unstable network, where small changes in the input can lead to large changes in the output. This can be a sign that the network has overfit the training dataset and will likely perform poorly when making predictions on new data. A solution to this problem is to update the […]

Read more
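The penalty added to the loss can be made concrete with a few lines of NumPy. This is a minimal sketch of the two common vector-norm penalties; the weight values and the strength `alpha` are arbitrary illustrations.

```python
import numpy as np

weights = np.array([0.5, -1.2, 3.0, -0.1])
alpha = 0.01  # regularization strength (a hyperparameter)

l1_penalty = alpha * np.sum(np.abs(weights))  # L1: encourages exact zeros
l2_penalty = alpha * np.sum(weights ** 2)     # L2: shrinks all weights smoothly

# The training objective becomes: loss = data_loss + penalty,
# so larger weights cost more and the optimizer is pushed toward small ones.
print(l1_penalty, l2_penalty)
```

Minimizing the combined loss trades a little training fit for smaller, more stable weights.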

How to Use Weight Decay to Reduce Overfitting of Neural Networks in Keras

Last Updated on August 25, 2020. Weight regularization provides an approach to reduce the overfitting of a deep learning neural network model on the training data and improve the performance of the model on new data, such as the holdout test set. There are multiple types of weight regularization, such as L1 and L2 vector norms, and each requires a hyperparameter that must be configured. In this tutorial, you will discover how to apply weight regularization to improve the performance […]

Read more
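In Keras, these penalties are attached per layer via the `kernel_regularizer` argument. A minimal sketch, with illustrative layer sizes and hyperparameter values:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1, l2

model = Sequential([
    # L2 penalty on this layer's weights; 0.01 is the configurable strength
    Dense(32, activation='relu', input_shape=(8,),
          kernel_regularizer=l2(0.01)),
    # L1 penalty on the output layer's weights
    Dense(1, kernel_regularizer=l1(0.001)),
])
model.compile(optimizer='adam', loss='mse')
```

Keras adds each layer's penalty to the loss automatically during training; no change to `fit()` is needed.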

A Gentle Introduction to Weight Constraints in Deep Learning

Last Updated on August 6, 2019. Weight regularization methods like weight decay introduce a penalty to the loss function when training a neural network to encourage the network to use small weights. Smaller weights in a neural network can result in a model that is more stable and less likely to overfit the training dataset, in turn having better performance when making a prediction on new data. Unlike weight regularization, a weight constraint is a trigger that checks the size […]

Read more
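The "trigger" behavior of a constraint can be shown in a few lines of NumPy: check the norm of a weight vector and rescale it only if it exceeds a limit. This is a conceptual sketch of a max-norm constraint; the function name and limit are illustrative.

```python
import numpy as np

def apply_max_norm(weights, max_value=2.0):
    """Rescale a weight vector only if its L2 norm exceeds max_value."""
    norm = np.linalg.norm(weights)
    if norm > max_value:
        return weights * (max_value / norm)  # shrink back onto the norm ball
    return weights                           # within the limit: left untouched

w = np.array([3.0, 4.0])             # L2 norm is 5.0, over the limit
w_constrained = apply_max_norm(w)    # rescaled so the norm is exactly 2.0
print(np.linalg.norm(w_constrained))
```

Unlike a regularization penalty, nothing is added to the loss; the check simply runs after each weight update.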

How to Reduce Overfitting Using Weight Constraints in Keras

Last Updated on August 25, 2020. Weight constraints provide an approach to reduce the overfitting of a deep learning neural network model on the training data and improve the performance of the model on new data, such as the holdout test set. There are multiple types of weight constraints, such as maximum and unit vector norms, and some require a hyperparameter that must be configured. In this tutorial, you will discover the Keras API for adding weight constraints to deep […]

Read more
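In Keras, constraints are attached per layer via the `kernel_constraint` argument. A minimal sketch showing the two norm types the excerpt mentions, with illustrative layer sizes:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import max_norm, unit_norm

model = Sequential([
    # cap each incoming weight vector's norm at 3 (the hyperparameter)
    Dense(32, activation='relu', input_shape=(8,),
          kernel_constraint=max_norm(3.0)),
    # force unit-norm weight vectors; no hyperparameter required
    Dense(1, kernel_constraint=unit_norm()),
])
model.compile(optimizer='adam', loss='mse')
```

Keras applies each constraint after every weight update during training, so the weights never drift past the limit.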

A Gentle Introduction to Activation Regularization in Deep Learning

Last Updated on August 6, 2019. Deep learning models are capable of automatically learning a rich internal representation from raw input data. This is called feature or representation learning. Better learned representations, in turn, can lead to better insights into the domain, e.g. via visualization of learned features, and to better predictive models that make use of the learned features. A problem with learned features is that they can be too specialized to the training data, or overfit, and not […]

Read more

How to Reduce Generalization Error With Activity Regularization in Keras

Last Updated on August 25, 2020. Activity regularization provides an approach to encourage a neural network to learn sparse features or internal representations of raw observations. It is common to seek sparse learned representations in autoencoders, called sparse autoencoders, and in encoder-decoder models, although the approach can also be used generally to reduce overfitting and improve a model’s ability to generalize to new observations. In this tutorial, you will discover the Keras API for adding activity regularization to deep learning […]

Read more
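In Keras, this is the `activity_regularizer` argument: the penalty is applied to a layer's outputs (activations) rather than its weights, which is what encourages a sparse representation. A minimal sketch with illustrative sizes and strength:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1

model = Sequential([
    # L1 penalty on this layer's *activations* pushes many of them to zero,
    # yielding a sparse learned representation
    Dense(32, activation='relu', input_shape=(8,),
          activity_regularizer=l1(0.001)),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')
```

Contrast this with `kernel_regularizer`, which penalizes the weights themselves rather than the representation the layer produces.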