Articles About Machine Learning

Add Binary Flags for Missing Values for Machine Learning

Last Updated on August 17, 2020 Missing values can cause problems when modeling classification and regression prediction problems with machine learning algorithms. A common approach is to replace missing values with a calculated statistic, such as the mean of the column. This allows the dataset to be modeled as normal but gives no indication to the model that the row originally contained missing values. One approach to address this issue is to include additional binary flag input features that […]
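
As a rough sketch of the idea (the two-column frame below is made-up demo data, not the tutorial's dataset), each column is mean-imputed and paired with a binary flag recording which rows were originally missing:

```python
# Hypothetical demo data: mean-impute each column and add a binary
# "was missing" indicator so the model can see where values were absent.
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [np.nan, 5.0, 6.0]})

for col in ['a', 'b']:
    df[col + '_missing'] = df[col].isna().astype(int)  # 1 where value was absent
    df[col] = df[col].fillna(df[col].mean())           # mean imputation

print(df)
```

scikit-learn's SimpleImputer can produce the same indicator columns automatically via its add_indicator=True argument.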

Read more

How to Selectively Scale Numerical Input Variables for Machine Learning

Last Updated on August 17, 2020 Many machine learning models perform better when input variables are carefully transformed or scaled prior to modeling. It is convenient, and therefore common, to apply the same data transforms, such as standardization and normalization, equally to all input variables. This can achieve good results on many problems. Nevertheless, better results may be achieved by carefully selecting which data transform to apply to each input variable prior to modeling. In this tutorial, you will discover […]
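
A minimal sketch of the selective approach with scikit-learn's ColumnTransformer; the column groupings here are arbitrary placeholders:

```python
# Apply a different transform to each group of columns rather than
# scaling all input variables the same way.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.random.rand(10, 4)  # stand-in data

ct = ColumnTransformer([
    ('norm', MinMaxScaler(), [0, 1]),   # normalize columns 0-1
    ('std', StandardScaler(), [2, 3]),  # standardize columns 2-3
])
X_transformed = ct.fit_transform(X)
print(X_transformed.shape)
```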

Read more

Train-Test Split for Evaluating Machine Learning Algorithms

Last Updated on August 26, 2020 The train-test split procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model. It is a fast and easy procedure to perform, the results of which allow you to compare the performance of machine learning algorithms for your predictive modeling problem. Although simple to use and interpret, there are times when the procedure should not be used, such […]
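
For a concrete picture, here is a minimal sketch on a synthetic dataset; the 67/33 split and logistic regression model are illustrative choices, not prescriptions:

```python
# Hold back a test set, fit on the training portion, score on the rest.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=1)  # 67/33 split

model = LogisticRegression()
model.fit(X_train, y_train)
print('Accuracy: %.3f' % accuracy_score(y_test, model.predict(X_test)))
```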

Read more

LOOCV for Evaluating Machine Learning Algorithms

Last Updated on August 26, 2020 The Leave-One-Out Cross-Validation, or LOOCV, procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model. It is a computationally expensive procedure to perform, although it results in a reliable and unbiased estimate of model performance. Although simple to use, with no configuration to specify, there are times when the procedure should not be used, such as when you […]
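
A minimal sketch of the procedure in scikit-learn, on a small synthetic dataset (100 rows means 100 model fits, which hints at the cost):

```python
# One fold per row: each example is held out once while the model
# trains on all the others.
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = make_classification(n_samples=100, random_state=1)
cv = LeaveOneOut()
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=cv)
print('Accuracy: %.3f' % mean(scores))
```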

Read more

Nested Cross-Validation for Machine Learning with Python

Last Updated on August 28, 2020 The k-fold cross-validation procedure is used to estimate the performance of machine learning models when making predictions on data not used during training. This procedure can be used both when optimizing the hyperparameters of a model on a dataset, and when comparing and selecting a model for the dataset. When the same cross-validation procedure and dataset are used to both tune and select a model, it is likely to lead to an optimistically biased […]
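
A minimal sketch of the nested arrangement with scikit-learn; the model, parameter grid, and fold counts are illustrative placeholders:

```python
# Inner loop tunes hyperparameters; outer loop estimates generalization,
# so the same folds are never used for both jobs at once.
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=200, random_state=1)
inner_cv = KFold(n_splits=3, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)

search = GridSearchCV(RandomForestClassifier(random_state=1),
                      {'n_estimators': [10, 50]}, cv=inner_cv)
scores = cross_val_score(search, X, y, cv=outer_cv)  # nested evaluation
print('Accuracy: %.3f' % mean(scores))
```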

Read more

How to Configure k-Fold Cross-Validation

Last Updated on August 26, 2020 The k-fold cross-validation procedure is a standard method for estimating the performance of a machine learning algorithm on a dataset. A common value for k is 10, but how do we know that this configuration is appropriate for our dataset and our algorithms? One approach is to explore the effect of different k values on the estimate of model performance and compare this to an ideal test condition. This can help to choose an […]
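
One minimal way to run such a comparison with scikit-learn; the k values and model below are illustrative:

```python
# Score the same model under several k values to see how sensitive the
# performance estimate is to the choice of k.
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=1)
for k in [3, 5, 10]:
    cv = KFold(n_splits=k, shuffle=True, random_state=1)
    scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
    print('k=%2d: mean=%.3f std=%.3f' % (k, mean(scores), std(scores)))
```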

Read more

Repeated k-Fold Cross-Validation for Model Evaluation in Python

Last Updated on August 26, 2020 The k-fold cross-validation procedure is a standard method for estimating the performance of a machine learning algorithm or configuration on a dataset. A single run of the k-fold cross-validation procedure may result in a noisy estimate of model performance, as different splits of the data may produce very different results. Repeated k-fold cross-validation provides a way to reduce the noise in the estimated performance of a machine learning model. This involves simply repeating the cross-validation procedure multiple […]
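
A minimal sketch with scikit-learn's RepeatedKFold; 10 folds repeated 3 times is one common configuration, not a rule:

```python
# Repeat 10-fold cross-validation three times and average over all
# 30 fold scores for a steadier estimate.
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=1)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```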

Read more

How to Use XGBoost for Time Series Forecasting

Last Updated on August 27, 2020 XGBoost is an efficient implementation of gradient boosting for classification and regression problems. It is fast, performing well, if not the best, on a wide range of predictive modeling tasks and is a favorite among data science competition winners, such as those on Kaggle. XGBoost can also be used for time series forecasting, although it requires that the time series dataset first be transformed into a supervised learning problem. It also […]
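
A rough sketch of that transformation on a synthetic sine-wave series; the three-lag window is arbitrary and the xgboost package is assumed to be installed:

```python
# Frame a univariate series as supervised learning with lag features,
# then fit XGBoost's regressor to predict the next value.
import numpy as np
from xgboost import XGBRegressor

series = np.sin(np.arange(100) / 5.0)  # toy stand-in series

n_lags = 3
X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
y = series[n_lags:]  # target: the value that follows each lag window

model = XGBRegressor(n_estimators=100)
model.fit(X[:-1], y[:-1])                  # train on all but the last row
print('forecast:', model.predict(X[-1:]))  # one-step-ahead prediction
print('actual:  ', y[-1])
```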

Read more

Multi-Class Imbalanced Classification

Last Updated on August 21, 2020 Imbalanced classification problems are those prediction tasks where the distribution of examples across class labels is not equal. Most imbalanced classification examples focus on binary classification tasks, yet many of the tools and techniques for imbalanced classification also directly support multi-class classification problems. In this tutorial, you will discover how to use the tools of imbalanced classification with a multi-class dataset. After completing this tutorial, you will know: About the glass identification standard imbalanced multi-class […]
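
As one concrete example of such a tool, SMOTE from the imbalanced-learn package resamples multi-class data directly; the synthetic three-class dataset below is a stand-in for the glass data:

```python
# Oversample the minority classes of a skewed three-class problem.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_classes=3, n_informative=4,
                           weights=[0.80, 0.15, 0.05], random_state=1)
print('before:', Counter(y))

X_res, y_res = SMOTE(random_state=1).fit_resample(X, y)
print('after: ', Counter(y_res))  # every class grown to the majority size
```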

Read more

How to Use Seaborn Data Visualization for Machine Learning

Last Updated on August 19, 2020 Data visualization provides insight into the distribution and relationships between variables in a dataset. This insight can be helpful in selecting data preparation techniques to apply prior to modeling and the types of algorithms that may be most suited to the data. Seaborn is a data visualization library for Python that runs on top of the popular Matplotlib data visualization library, although it provides a simple interface and aesthetically better-looking plots. In this tutorial, […]
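
A minimal sketch of two common Seaborn plots, using its bundled iris demo dataset (histplot assumes Seaborn 0.11 or newer, and load_dataset fetches the data over the network):

```python
# A univariate histogram and a pairwise scatter matrix colored by class.
import matplotlib.pyplot as plt
import seaborn as sns

df = sns.load_dataset('iris')     # small built-in demo dataset

sns.histplot(df['sepal_length'])  # distribution of a single variable
plt.show()

sns.pairplot(df, hue='species')   # pairwise relationships across the frame
plt.show()
```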

Read more