How to Use Timesteps in LSTM Networks for Time Series Forecasting

Last Updated on August 28, 2020 The Long Short-Term Memory (LSTM) network in Keras supports time steps. This raises the question as to whether lag observations for a univariate time series can be used as time steps for an LSTM and whether or not this improves forecast performance. In this tutorial, we will investigate the use of lag observations as time steps in LSTM models in Python. After completing this tutorial, you will know: How to develop a test harness […]
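The framing above can be sketched in plain Python (an illustration of the idea, not the tutorial's own code): each sample stacks the lag observations along the time-step axis, matching the `[samples, timesteps, features]` shape a Keras LSTM expects.

```python
# Sketch (assumed setup): lag observations used as LSTM *time steps*.
# Each sample holds n_lags time steps of a single feature, i.e. the
# data ends up shaped [samples, timesteps=n_lags, features=1].
series = [10, 20, 30, 40, 50, 60]
n_lags = 3

samples, targets = [], []
for i in range(len(series) - n_lags):
    window = series[i:i + n_lags]
    # one feature per time step: [[lag], [lag], [lag]]
    samples.append([[v] for v in window])
    targets.append(series[i + n_lags])

print(len(samples))   # 3 supervised samples
print(samples[0])     # [[10], [20], [30]]
print(targets[0])     # 40
```

A real harness would convert `samples` to a NumPy array before passing it to an LSTM layer.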


How to Use Features in LSTM Networks for Time Series Forecasting

Last Updated on August 28, 2020 The Long Short-Term Memory (LSTM) network in Keras supports multiple input features. This raises the question as to whether lag observations for a univariate time series can be used as features for an LSTM and whether or not this improves forecast performance. In this tutorial, we will investigate the use of lag observations as features in LSTM models in Python. After completing this tutorial, you will know: How to develop a test harness to […]
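In contrast to the time-step framing, lag observations can instead fill the feature axis. A minimal sketch (an illustration under assumed data, not the tutorial's code): each sample becomes a single time step carrying `n_lags` features, i.e. shape `[samples, 1, features=n_lags]`.

```python
# Sketch: lag observations used as LSTM *features* rather than time steps.
series = [10, 20, 30, 40, 50, 60]
n_lags = 3

X, y = [], []
for i in range(len(series) - n_lags):
    X.append([series[i:i + n_lags]])   # one time step, n_lags features
    y.append(series[i + n_lags])

print(X[0])  # [[10, 20, 30]]
print(y[0])  # 40
```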


Stateful and Stateless LSTM for Time Series Forecasting with Python

Last Updated on August 28, 2020 The Keras Python deep learning library supports both stateful and stateless Long Short-Term Memory (LSTM) networks. When using stateful LSTM networks, we have fine-grained control over when the internal state of the LSTM network is reset. Therefore, it is important to understand how different ways of managing this internal state when fitting and making predictions with LSTM networks affect the skill of the network. In this tutorial, you will explore the performance of stateful and […]
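The stateful/stateless distinction can be shown with a toy recurrent cell (a hand-rolled illustration, not Keras code): the hidden state either persists across sequences until explicitly reset, or is cleared before every sequence.

```python
# Toy sketch of internal state management in a recurrent cell.
class TinyRecurrentCell:
    def __init__(self):
        self.state = 0.0

    def step(self, x):
        # trivially fold the input into the hidden state
        self.state = 0.5 * self.state + x
        return self.state

    def reset_states(self):
        # analogous in spirit to reset_states() on a stateful Keras LSTM
        self.state = 0.0

cell = TinyRecurrentCell()

# stateless-style: reset before every sequence, so runs are identical
cell.reset_states()
out_a = [cell.step(x) for x in [1.0, 1.0]]
cell.reset_states()
out_b = [cell.step(x) for x in [1.0, 1.0]]
print(out_a == out_b)  # True: state never carries over

# stateful-style: the second sequence sees leftover state
cell.reset_states()
out_c = [cell.step(x) for x in [1.0, 1.0]]
out_d = [cell.step(x) for x in [1.0, 1.0]]
print(out_c == out_d)  # False: outputs differ because state persisted
```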


Instability of Online Learning for Stateful LSTM for Time Series Forecasting

Last Updated on August 28, 2020 Some neural network configurations can result in an unstable model. This can make them hard to characterize and compare to other model configurations on the same problem using descriptive statistics. One good example of a seemingly unstable model is the use of online learning (a batch size of 1) for a stateful Long Short-Term Memory (LSTM) model. In this tutorial, you will discover how to explore the results of a stateful LSTM fit using […]
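The recipe for characterizing an unstable configuration can be sketched as follows (the experiment function is a stand-in, not a real LSTM fit): repeat the run many times and summarize the population of final scores with descriptive statistics.

```python
# Sketch (assumed setup): repeat an experiment and summarize the spread.
import random
import statistics

def run_experiment(seed):
    # stand-in for fitting a stateful LSTM with batch_size=1; real runs
    # vary because of random weight initialization and update order
    rng = random.Random(seed)
    return 100.0 + rng.gauss(0, 10)   # pretend test RMSE

scores = [run_experiment(seed) for seed in range(30)]
print(round(statistics.mean(scores), 2))   # central tendency
print(round(statistics.stdev(scores), 2))  # spread, i.e. (in)stability
```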


How to Configure Multilayer Perceptron Network for Time Series Forecasting

Last Updated on August 28, 2020 It can be difficult when starting out on a new predictive modeling project with neural networks. There is so much to configure, and no clear idea of where to start. It is important to be systematic. You can break down bad assumptions and quickly home in on configurations that work and on areas for further investigation that are likely to pay off. In this tutorial, you will discover how to use exploratory configuration of multilayer perceptron (MLP) neural networks to […]
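Being systematic can be as simple as enumerating a small grid of configurations and recording a score for each. A minimal sketch (the hyperparameter names and the scoring function are illustrative placeholders, not the tutorial's harness):

```python
# Sketch of exploratory configuration: score every point on a small grid.
from itertools import product

hidden_units = [1, 2, 4]
epochs = [50, 100]

def evaluate(units, n_epochs):
    # placeholder for fitting an MLP and returning test error;
    # a real harness would train the network and compute RMSE
    return 1.0 / (units * n_epochs)

results = {(u, e): evaluate(u, e) for u, e in product(hidden_units, epochs)}
best = min(results, key=results.get)
print(best)  # configuration with the lowest (pretend) error
```

Recording every result, not just the winner, is what makes it possible to spot promising regions for further investigation.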


Dropout with LSTM Networks for Time Series Forecasting

Last Updated on August 28, 2020 Long Short-Term Memory (LSTM) models are a type of recurrent neural network capable of learning sequences of observations. This may make them a network well suited to time series forecasting. An issue with LSTMs is that they can easily overfit training data, reducing their predictive skill. Dropout is a regularization method where input and recurrent connections to LSTM units are probabilistically excluded from activation and weight updates while training a network. This has the […]
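The mechanism behind dropout can be shown in a few lines (a toy illustration of inverted dropout, not the Keras implementation): during training each connection is kept with probability `1 - rate`, and survivors are scaled up so the expected activation is unchanged at prediction time.

```python
# Toy sketch of (inverted) dropout applied to a vector of activations.
import random

def dropout(values, rate, rng):
    keep = 1.0 - rate
    # zero each value with probability `rate`, rescale the survivors
    return [v / keep if rng.random() < keep else 0.0 for v in values]

rng = random.Random(1)
activations = [1.0] * 10
dropped = dropout(activations, rate=0.2, rng=rng)
print(dropped)  # a mix of 0.0 and 1.25 (= 1.0 / 0.8)
```

In Keras, the equivalent knobs on an LSTM layer are the `dropout` (inputs) and `recurrent_dropout` (recurrent connections) arguments.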


Estimate the Number of Experiment Repeats for Stochastic Machine Learning Algorithms

Last Updated on August 14, 2020 A problem with many stochastic machine learning algorithms is that different runs of the same algorithm on the same data return different results. This means that when performing experiments to configure a stochastic algorithm or compare algorithms, you must collect multiple results and use the average performance to summarize the skill of the model. This raises the question as to how many repeats of an experiment are enough to sufficiently characterize the skill of […]
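One way to judge "how many repeats is enough" is to watch how the mean and its standard error settle as repeats accumulate. A sketch with simulated run results (the scores are synthetic stand-ins, not real experiment data):

```python
# Sketch: the standard error of the mean shrinks as repeats accumulate.
import random
import statistics

rng = random.Random(7)
scores = [rng.gauss(100, 10) for _ in range(100)]  # stand-in run results

for n in (10, 30, 100):
    sample = scores[:n]
    se = statistics.stdev(sample) / (n ** 0.5)
    print(n, round(statistics.mean(sample), 2), round(se, 2))
```

When adding repeats no longer moves the summary statistics meaningfully, further repeats are unlikely to change your conclusions.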


How to Use Statistical Significance Tests to Interpret Machine Learning Results

Last Updated on August 8, 2019 It is good practice to gather a population of results when comparing two different machine learning algorithms or when comparing the same algorithm with different configurations. Repeating each experimental run 30 or more times gives you a population of results from which you can calculate the mean expected performance, given the stochastic nature of most machine learning algorithms. If the mean expected performance from two algorithms or configurations is different, how do you know […]
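The standard tool for this question is a t-test on the two populations of results. A hand-rolled Welch's t-statistic is sketched below for illustration (in practice `scipy.stats.ttest_ind` is the usual route; the sample values here are made up):

```python
# Sketch: Welch's t-statistic for two populations of repeated results.
import statistics

def welch_t(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (
        va / len(a) + vb / len(b)) ** 0.5

pop_a = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9]   # e.g. RMSE, algorithm A
pop_b = [11.0, 11.2, 10.9, 11.1, 11.3, 10.8] # e.g. RMSE, algorithm B

t = welch_t(pop_a, pop_b)
print(round(t, 2))  # large magnitude suggests a real difference in means
```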


Weight Regularization with LSTM Networks for Time Series Forecasting

Last Updated on August 28, 2020 Long Short-Term Memory (LSTM) models are a type of recurrent neural network capable of learning sequences of observations. This may make them a network well suited to time series forecasting. An issue with LSTMs is that they can easily overfit training data, reducing their predictive skill. Weight regularization is a technique for imposing constraints (such as L1 or L2) on the weights within LSTM nodes. This has the effect of reducing overfitting and improving model performance. […]
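The L1 and L2 penalties themselves are simple: a term proportional to the absolute (L1) or squared (L2) weight values is added to the loss, discouraging large weights. A minimal sketch of the general technique (not Keras internals; in Keras these are supplied via arguments such as `kernel_regularizer` on a layer):

```python
# Sketch: L1 and L2 weight penalty terms added to a training loss.
def l1_penalty(weights, lam):
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam):
    return lam * sum(w * w for w in weights)

weights = [0.5, -1.5, 2.0]
print(l1_penalty(weights, 0.01))  # 0.01 * (0.5 + 1.5 + 2.0)
print(l2_penalty(weights, 0.01))  # 0.01 * (0.25 + 2.25 + 4.0)
```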


How to Convert a Time Series to a Supervised Learning Problem in Python

Last Updated on August 21, 2019 Machine learning methods like deep learning can be used for time series forecasting. Before machine learning can be used, time series forecasting problems must be re-framed as supervised learning problems: from a single sequence to pairs of input and output sequences. In this tutorial, you will discover how to transform univariate and multivariate time series forecasting problems into supervised learning problems for use with machine learning algorithms. After completing this tutorial, you will know: How […]
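The core of the transform is a sliding window over the series. A plain-Python sketch of the idea (the function name mirrors the tutorial's, but this minimal version is an illustration, not its pandas-based implementation):

```python
# Sketch: slide a window over a series to produce (input, output) pairs,
# re-framing forecasting as supervised learning.
def series_to_supervised(series, n_in=1, n_out=1):
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])                 # n_in lag inputs
        y.append(series[i + n_in:i + n_in + n_out])  # n_out future outputs
    return X, y

X, y = series_to_supervised([1, 2, 3, 4, 5], n_in=2, n_out=1)
print(X)  # [[1, 2], [2, 3], [3, 4]]
print(y)  # [[3], [4], [5]]
```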
