Articles About Machine Learning

Sensitivity Analysis of Dataset Size vs. Model Performance

Machine learning model performance often improves as the size of the training dataset increases. The effect depends on the specific dataset and on the choice of model, but it generally means that using more data can result in better performance, and that findings made when estimating model performance on smaller samples often scale to larger datasets. The problem is that this relationship is unknown for a given dataset and model, and may not exist for some datasets and models. Additionally, if such a […]
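
A minimal sketch of one way to run such a sensitivity analysis, assuming scikit-learn, a synthetic classification task, and a decision tree as the model (all illustrative choices, not from the article):

```python
# Sketch: estimate model performance as a function of training dataset size.
# The synthetic task and decision tree model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

for n_samples in [100, 500, 1000, 5000, 10000]:
    # draw a dataset of the given size from the same underlying problem
    X, y = make_classification(n_samples=n_samples, n_features=20, random_state=1)
    # estimate performance with 10-fold cross-validation
    scores = cross_val_score(DecisionTreeClassifier(random_state=1), X, y, cv=10)
    print(f'n={n_samples}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}')
```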

Read more

Code Adam Optimization Algorithm From Scratch

Last Updated on February 21, 2021

Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A limitation of gradient descent is that a single step size (learning rate) is used for all input variables. Extensions to gradient descent like AdaGrad and RMSProp update the algorithm to use a separate step size for each input variable, but may result in a step size that rapidly decreases […]
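
A minimal sketch of the Adam update rule, applied to a simple quadratic objective for illustration (the objective and the hyperparameter values are assumptions, not from the article):

```python
import numpy as np

def adam_step(x, grad, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    # update biased first and second moment estimates of the gradient
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    # bias-correct the moments (t is the 1-based step count)
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    # per-variable step size: alpha scaled by each variable's gradient history
    return x - alpha * m_hat / (np.sqrt(v_hat) + eps), m, v

# minimize f(x) = x0^2 + x1^2, whose gradient is 2x
x = np.array([1.0, -1.5])
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 201):
    x, m, v = adam_step(x, 2.0 * x, m, v, t)
print(x)  # close to [0, 0]
```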

Read more

Simulated Annealing From Scratch in Python

Simulated Annealing is a stochastic global search optimization algorithm. This means that it makes use of randomness as part of the search process. This makes the algorithm appropriate for nonlinear objective functions where other local search algorithms do not operate well. Like the stochastic hill climbing local search algorithm, it modifies a single solution and searches the relatively local area of the search space until a local optimum is located. Unlike the hill climbing algorithm, it may accept worse solutions […]
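
A minimal sketch of the algorithm, assuming a Gaussian step, a simple temperature schedule that decays as 1/iteration, and a 1-D quadratic objective (all illustrative choices):

```python
import numpy as np

def simulated_annealing(objective, bounds, n_iter=1000, step=0.1, temp=10.0, seed=1):
    rng = np.random.default_rng(seed)
    # start from a random point within the bounds
    best = rng.uniform(bounds[:, 0], bounds[:, 1])
    best_eval = objective(best)
    curr, curr_eval = best, best_eval
    for i in range(n_iter):
        # step from the current point, not the best point
        cand = curr + rng.normal(scale=step, size=len(bounds))
        cand_eval = objective(cand)
        if cand_eval < best_eval:
            best, best_eval = cand, cand_eval
        # temperature decays over time; worse moves are accepted with
        # probability exp(-diff / t), the Metropolis acceptance criterion
        t = temp / float(i + 1)
        diff = cand_eval - curr_eval
        if diff < 0 or rng.random() < np.exp(-diff / t):
            curr, curr_eval = cand, cand_eval
    return best, best_eval

# usage: minimize x^2 on [-5, 5]
best, score = simulated_annealing(lambda x: float(x[0] ** 2), np.array([[-5.0, 5.0]]))
print(best, score)
```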

Read more

No Free Lunch Theorem for Machine Learning

The No Free Lunch Theorem is often invoked in the fields of optimization and machine learning, frequently with little understanding of what it means or implies. The theorem states that all optimization algorithms perform equally well when their performance is averaged across all possible problems. It implies that there is no single best optimization algorithm. Because of the close relationship between optimization, search, and machine learning, it also implies that there is no single best machine learning algorithm for […]
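
For reference, a common formal statement of the theorem, following Wolpert and Macready's 1997 formulation (the notation is theirs, not from this excerpt):

```latex
% For any two algorithms a_1 and a_2, summed over all possible objective
% functions f, the distribution of observed cost values d_m^y after m
% evaluations is identical -- so neither outperforms the other on average:
\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2)
```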

Read more

A Gentle Introduction to Stochastic Optimization Algorithms

Stochastic optimization refers to the use of randomness in the objective function or in the optimization algorithm. Challenging optimization problems, such as high-dimensional nonlinear objectives, may contain multiple local optima in which deterministic optimization algorithms may get stuck. Stochastic optimization algorithms provide an alternative approach that permits less-optimal local decisions within the search procedure, which may increase the probability of locating the global optimum of the objective function. In this tutorial, you will […]
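
A minimal sketch of one such algorithm, stochastic hill climbing with random restarts, on a multimodal 1-D objective (the function and parameters are illustrative):

```python
import numpy as np

def stochastic_hill_climb(objective, start, n_iter=200, step=0.1, rng=None):
    rng = rng or np.random.default_rng()
    curr = np.asarray(start, dtype=float)
    curr_eval = objective(curr)
    for _ in range(n_iter):
        # randomness in the algorithm: a random perturbation of the current point
        cand = curr + rng.normal(scale=step, size=curr.shape)
        cand_eval = objective(cand)
        if cand_eval < curr_eval:  # greedy: keep only improvements
            curr, curr_eval = cand, cand_eval
    return curr, curr_eval

# a multimodal objective where a single deterministic descent can get stuck
objective = lambda x: float(x[0] ** 2 + 5.0 * np.sin(5.0 * x[0]))

# random restarts: rerun the search from random points and keep the best result
rng = np.random.default_rng(1)
runs = [stochastic_hill_climb(objective, rng.uniform(-5.0, 5.0, size=1), rng=rng)
        for _ in range(10)]
best, best_eval = min(runs, key=lambda r: r[1])
print(best, best_eval)
```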

Read more

How to Use Optimization Algorithms to Manually Fit Regression Models

Regression models are fit on training data using optimization algorithms. Models like linear regression are trained by least squares optimization, and logistic regression by maximum likelihood; these are the most efficient approaches to finding coefficients that minimize error for these models. Nevertheless, it is possible to use alternate optimization algorithms, such as local search, to fit a regression model to a training dataset. This can be a useful exercise to learn more about how regression functions and the central nature of […]
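
A minimal sketch of the idea, fitting linear regression coefficients with stochastic hill climbing instead of least squares (the synthetic data and step sizes are illustrative):

```python
import numpy as np

def predict(X, coef):
    # linear regression: weighted sum of inputs plus a bias (last coefficient)
    return X @ coef[:-1] + coef[-1]

def mse(X, y, coef):
    return float(np.mean((predict(X, coef) - y) ** 2))

# synthetic regression data with known coefficients [3.0, -2.0] and bias 0.5
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 + rng.normal(scale=0.1, size=100)

# stochastic hill climbing over the coefficient vector, minimizing MSE
coef = rng.normal(size=3)
best_err = mse(X, y, coef)
for _ in range(5000):
    cand = coef + rng.normal(scale=0.05, size=coef.shape)
    err = mse(X, y, cand)
    if err < best_err:
        coef, best_err = cand, err
print(coef, best_err)  # coef approaches [3.0, -2.0, 0.5]
```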

Read more

How to Develop a Neural Net for Predicting Disturbances in the Ionosphere

It can be challenging to develop a neural network predictive model for a new dataset. One approach is to first inspect the dataset and develop ideas for what models might work, then explore the learning dynamics of simple models, and finally develop and tune a model with a robust test harness. This process can be used to develop effective neural network models for classification and regression predictive modeling problems. In this tutorial, you will […]
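
A minimal sketch of the final step, a small MLP evaluated with a simple train/test harness; a synthetic binary task with the same shape as the ionosphere data (34 inputs, 351 rows) stands in for the real dataset:

```python
# The synthetic data, layer sizes, and training settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# ionosphere-like shape: 351 examples, 34 inputs, binary target
X, y = make_classification(n_samples=351, n_features=34, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)

model = Sequential([
    Dense(10, activation='relu', input_shape=(34,)),
    Dense(1, activation='sigmoid'),  # binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f'test accuracy: {acc:.3f}')
```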

Read more

Function Optimization With SciPy

Optimization involves finding the inputs to an objective function that result in the minimum or maximum output of the function. SciPy, the open-source Python library for scientific computing, provides a suite of optimization algorithms. Many of the algorithms are used as building blocks in other algorithms, most notably machine learning algorithms in the scikit-learn library. These optimization algorithms can also be used directly, in a standalone manner, to optimize a function. Most notably, algorithms for local search and algorithms […]
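
A minimal sketch of calling SciPy's optimizers directly, using a simple quadratic objective (illustrative); `minimize` performs a local search and `differential_evolution` a global one:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# objective: a simple bowl with its minimum at the origin
def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0

# local search from a starting point with L-BFGS-B
result = minimize(objective, x0=np.array([1.5, -0.5]), method='L-BFGS-B')
print(result.x, result.fun)  # approximately [0, 0] and 0.0

# global search within bounds with differential evolution
result = differential_evolution(objective, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(result.x, result.fun)
```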

Read more

Gradient Descent With Momentum from Scratch

Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A problem with gradient descent is that it can bounce around the search space on optimization problems that have large amounts of curvature or noisy gradients, and it can get stuck in flat spots in the search space that have no gradient. Momentum is an extension to the gradient descent optimization algorithm that allows the search […]
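
A minimal sketch of the momentum update, using a 1-D quadratic objective and illustrative hyperparameters:

```python
import numpy as np

def gd_momentum(gradient, x0, alpha=0.1, beta=0.9, n_iter=200):
    x = np.asarray(x0, dtype=float)
    change = np.zeros_like(x)
    for _ in range(n_iter):
        # accumulate an exponentially decaying sum of past gradients
        change = beta * change + alpha * gradient(x)
        # momentum keeps the search moving in a consistent direction,
        # damping oscillation and carrying the search across flat spots
        x = x - change
    return x

# usage: minimize f(x) = x^2, whose gradient is 2x
print(gd_momentum(lambda x: 2.0 * x, [2.0]))  # close to [0.]
```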

Read more

Difference Between Backpropagation and Stochastic Gradient Descent

Last Updated on February 1, 2021

There is a lot of confusion for beginners around which algorithm is used to train deep learning neural network models. It is common to hear that neural networks learn using the “back-propagation of error” algorithm or “stochastic gradient descent.” Sometimes, either of these algorithms is used as shorthand for how a neural net is fit on a training dataset, although in many cases there is deep confusion about what these algorithms are, […]
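
A minimal sketch separating the two roles on a tiny linear model (illustrative): back-propagation computes the gradient of the loss with respect to the weights, while stochastic gradient descent uses that gradient to update the weights:

```python
import numpy as np

def backprop(x, y, w):
    # gradient of the squared error 0.5 * (y_hat - y)^2 w.r.t. w via the
    # chain rule: this is the "back-propagation" role
    y_hat = x @ w
    return (y_hat - y) * x

# data generated by a known linear model (illustrative)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])

# stochastic gradient descent: one randomly drawn example per update
w = rng.normal(size=3)
for _ in range(1000):
    i = rng.integers(len(X))
    grad = backprop(X[i], y[i], w)  # backprop supplies the gradient...
    w -= 0.01 * grad                # ...SGD applies the weight update
print(w)  # approaches [1.0, -2.0, 0.5]
```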

Read more