Articles About Machine Learning

How to Develop a Light Gradient Boosted Machine (LightGBM) Ensemble

Light Gradient Boosted Machine, or LightGBM for short, is an open-source library that provides an efficient and effective implementation of the gradient boosting algorithm. LightGBM extends the gradient boosting algorithm by adding a type of automatic feature selection as well as focusing on boosting examples with larger gradients. This can result in a dramatic speedup of training and improved predictive performance. As such, LightGBM has become a de facto algorithm for machine learning competitions when working with tabular data for […]
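A rough sketch of what this looks like with LightGBM's scikit-learn style API (assuming the lightgbm package is installed; the synthetic dataset and hyperparameters are illustrative, not the tutorial's exact values):

```python
# Minimal sketch: fit and evaluate a LightGBM classification ensemble.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

# synthetic binary classification problem standing in for real data
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

# n_estimators controls the number of boosted trees in the ensemble
model = LGBMClassifier(n_estimators=100)

# evaluate with 10-fold cross-validation
scores = cross_val_score(model, X, y, cv=10, scoring='accuracy')
print('Mean accuracy: %.3f' % scores.mean())
```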

Read more

How to Develop Random Forest Ensembles With XGBoost

The XGBoost library provides an efficient implementation of gradient boosting that can be configured to train random forest ensembles. Random forest is a simpler algorithm than gradient boosting, and the XGBoost library allows random forest models to be trained in a way that repurposes and harnesses the computational efficiencies implemented in the library. In this tutorial, you will discover how to use the XGBoost library to develop random forest ensembles. After completing this tutorial, you will know: […]
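As a minimal sketch, assuming a recent version of the xgboost package that exposes the XGBRFClassifier wrapper (dataset and hyperparameters below are illustrative):

```python
# Minimal sketch: train a random forest ensemble using XGBoost.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBRFClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

# a single boosting round of many randomized trees approximates a random forest;
# subsample and colsample_bynode introduce the row/feature sampling
model = XGBRFClassifier(n_estimators=100, subsample=0.9, colsample_bynode=0.2)

scores = cross_val_score(model, X, y, cv=10, scoring='accuracy')
print('Mean accuracy: %.3f' % scores.mean())
```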

Read more

Blending Ensemble Machine Learning With Python

Blending is an ensemble machine learning algorithm. It is a colloquial name for stacked generalization or stacking ensemble where instead of fitting the meta-model on out-of-fold predictions made by the base model, it is fit on predictions made on a holdout dataset. Blending was used to describe stacking models that combined many hundreds of predictive models by competitors in the $1M Netflix machine learning competition, and as such, remains a popular technique and name for stacking in competitive machine learning […]
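A minimal sketch of the idea, with illustrative base models and meta-model (the specific estimators and split sizes are assumptions, not the tutorial's exact setup):

```python
# Minimal sketch: blending ensemble. Base models are fit on a training split;
# the meta-model is fit on their predictions over a separate holdout split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=1)

base_models = [DecisionTreeClassifier(), KNeighborsClassifier()]

# fit base models on the training split
for model in base_models:
    model.fit(X_train, y_train)

# meta-features: base-model predicted probabilities on the holdout split
meta_X = np.column_stack([m.predict_proba(X_hold)[:, 1] for m in base_models])

# fit the meta-model (blender) on the holdout predictions
blender = LogisticRegression()
blender.fit(meta_X, y_hold)

def blend_predict(X_new):
    # stack base-model predictions for new data, then apply the blender
    meta_new = np.column_stack([m.predict_proba(X_new)[:, 1] for m in base_models])
    return blender.predict(meta_new)
```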

Read more

Books on Genetic Programming

Genetic Programming (GP) is an algorithm for evolving programs to solve specific well-defined problems. It is a type of automatic programming intended for challenging problems where the task is well defined and solutions can be checked easily at a low cost, although the search space of possible solutions is vast, and there is little intuition as to the best way to solve the problem. This often includes open problems such as controller design, circuit design, and predictive modeling […]

Read more

How to Manually Optimize Neural Network Models

Deep learning neural network models are fit on training data using the stochastic gradient descent optimization algorithm. Updates to the weights of the model are made using the backpropagation of error algorithm. The combination of the optimization and weight update algorithms was carefully chosen and is the most efficient approach known to fit neural networks. Nevertheless, it is possible to use alternate optimization algorithms to fit a neural network model to a training dataset. This can be a useful exercise […]
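As a minimal sketch of the idea, fitting the weights of a simple perceptron-style model with stochastic hill climbing instead of gradient descent (the step size and number of iterations are illustrative choices):

```python
# Minimal sketch: optimize model weights with stochastic hill climbing.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score

def predict(X, weights):
    # linear activation followed by a step transfer function
    activation = X.dot(weights[:-1]) + weights[-1]
    return (activation >= 0.0).astype(int)

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)

rng = np.random.default_rng(1)
weights = rng.normal(size=X.shape[1] + 1)      # weights plus bias
best_score = accuracy_score(y, predict(X, weights))

# hill climbing: keep a perturbed candidate only if it scores at least as well
for _ in range(1000):
    candidate = weights + rng.normal(scale=0.1, size=weights.shape)
    score = accuracy_score(y, predict(X, candidate))
    if score >= best_score:
        weights, best_score = candidate, score

print('Training accuracy: %.3f' % best_score)
```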

Read more

Autoencoder Feature Extraction for Classification

An autoencoder is a type of neural network that can be used to learn a compressed representation of raw data. An autoencoder is composed of encoder and decoder sub-models. The encoder compresses the input and the decoder attempts to recreate the input from the compressed version provided by the encoder. After training, the encoder model is saved and the decoder is discarded. The encoder can then be used as a data preparation technique to perform feature extraction on raw […]
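A minimal sketch of the workflow using the Keras API (layer sizes, epochs, and the downstream classifier are illustrative assumptions):

```python
# Minimal sketch: train a dense autoencoder, keep the encoder, and use its
# output as learned features for a classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

n_inputs = X.shape[1]
visible = Input(shape=(n_inputs,))
encoded = Dense(n_inputs // 2, activation='relu')(visible)   # bottleneck
decoded = Dense(n_inputs, activation='linear')(encoded)      # reconstruction

# the autoencoder is trained to reproduce its own input
autoencoder = Model(inputs=visible, outputs=decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=50, batch_size=32, verbose=0)

# keep only the encoder and use its output as extracted features
encoder = Model(inputs=visible, outputs=encoded)
X_encoded = encoder.predict(X)

clf = LogisticRegression()
clf.fit(X_encoded, y)
```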

Read more

Autoencoder Feature Extraction for Regression

An autoencoder is a type of neural network that can be used to learn a compressed representation of raw data. An autoencoder is composed of encoder and decoder sub-models. The encoder compresses the input and the decoder attempts to recreate the input from the compressed version provided by the encoder. After training, the encoder model is saved and the decoder is discarded. The encoder can then be used as a data preparation technique to perform feature extraction on raw data […]
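As a brief sketch of the reuse step for regression, assuming an encoder has already been trained as above and saved to a file named encoder.h5 (the filename and the SVR downstream model are illustrative assumptions):

```python
# Minimal sketch: load a saved encoder and use it as a data preparation step
# for a regression model.
from sklearn.datasets import make_regression
from sklearn.svm import SVR
from tensorflow.keras.models import load_model

X, y = make_regression(n_samples=1000, n_features=20, random_state=1)

# load the saved encoder (assumed trained on inputs of the same width)
encoder = load_model('encoder.h5')

# encode the raw inputs, then fit the regression model on the encoded features
X_encoded = encoder.predict(X)
model = SVR()
model.fit(X_encoded, y)
```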

Read more

Perceptron Algorithm for Classification in Python

The Perceptron is a linear machine learning algorithm for binary classification tasks. It may be considered one of the first and one of the simplest types of artificial neural networks. It is definitely not “deep” learning but is an important building block. Like logistic regression, it can quickly learn a linear separation in feature space for two-class classification tasks, although unlike logistic regression, it learns using the stochastic gradient descent optimization algorithm and does not predict calibrated probabilities. In this […]
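A minimal sketch using scikit-learn's Perceptron implementation (the dataset and learning rate eta0 are illustrative values):

```python
# Minimal sketch: fit and evaluate a Perceptron on a binary classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

# eta0 is the constant by which the stochastic gradient descent updates are scaled
model = Perceptron(eta0=1.0, max_iter=1000)

scores = cross_val_score(model, X, y, cv=10, scoring='accuracy')
print('Mean accuracy: %.3f' % scores.mean())
```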

Read more

UnrealPerson: An Adaptive Pipeline towards Costless Person Re-identification

The main difficulty of person re-identification (ReID) lies in collecting annotated data and transferring the model across different domains. This paper presents UnrealPerson, a novel pipeline that makes full use of unreal image data to decrease the costs in both the training and deployment stages… Its fundamental part is a system that can generate high-quality synthesized images from controllable distributions. Instance-level annotation comes with the synthesized data and is almost free. We point out some details in image […]

Read more

Construction of optimal spectral methods in phase retrieval

We consider the phase retrieval problem, in which the observer wishes to recover an $n$-dimensional real or complex signal $\mathbf{X}^\star$ from the (possibly noisy) observation of $|\mathbf{\Phi} \mathbf{X}^\star|$, in which $\mathbf{\Phi}$ is a matrix of size $m \times n$. We consider a \emph{high-dimensional} setting where $n,m \to \infty$ with $m/n = \mathcal{O}(1)$, and a large class of (possibly correlated) random matrices $\mathbf{\Phi}$ and observation channels… Spectral methods are a powerful tool to obtain approximate observations of the signal $\mathbf{X}^\star$ which […]
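As a brief illustration of the general recipe (the notation below is a standard formulation of spectral methods for phase retrieval, written as an assumption rather than taken from the paper): given measurements $Y_\mu = |(\mathbf{\Phi} \mathbf{X}^\star)_\mu|$, one forms the matrix
$$\mathbf{M} = \frac{1}{m} \sum_{\mu=1}^{m} \mathcal{T}(Y_\mu)\, \boldsymbol{\phi}_\mu \boldsymbol{\phi}_\mu^{\dagger},$$
where $\boldsymbol{\phi}_\mu$ denotes the $\mu$-th row of $\mathbf{\Phi}$ and $\mathcal{T}$ is a scalar preprocessing function, and takes the leading eigenvector of $\mathbf{M}$ as the estimate of $\mathbf{X}^\star$; constructing an optimal spectral method amounts to choosing $\mathcal{T}$ well.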

Read more