How to Use Polynomial Feature Transforms for Machine Learning

Last Updated on August 28, 2020

Often, the input features for a predictive modeling task interact in unexpected and nonlinear ways.

These interactions can be identified and modeled by a learning algorithm. Another approach is to engineer new features that expose these interactions and see whether they improve model performance. Additionally, transforms such as raising input variables to a power can better expose the important relationships between the input variables and the target variable.
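
As a concrete illustration (not code from the article itself), the PolynomialFeatures class in scikit-learn implements this kind of transform. The minimal sketch below, using a made-up two-variable array, raises each variable to the second degree and adds their interaction term:

```python
# A minimal sketch of a degree-2 polynomial feature transform,
# using scikit-learn's PolynomialFeatures on a made-up 2-feature array.
from numpy import asarray
from sklearn.preprocessing import PolynomialFeatures

# two samples with two input variables each (illustrative values)
X = asarray([[2, 3], [4, 5]])

# degree=2 adds the square of each variable and their pairwise interaction
trans = PolynomialFeatures(degree=2, include_bias=False)
X_poly = trans.fit_transform(X)

# feature names require scikit-learn >= 1.0
print(trans.get_feature_names_out())  # ['x0' 'x1' 'x0^2' 'x0 x1' 'x1^2']
print(X_poly)
# [[ 2.  3.  4.  6.  9.]
#  [ 4.  5. 16. 20. 25.]]
```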

These features are called interaction and polynomial features, and they allow the use of simpler modeling algorithms, as some of the complexity of interpreting the input variables and their relationships is pushed back to the data preparation stage. Sometimes these features can result in improved modeling performance, although at the cost of adding thousands or even millions of additional input variables.
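
To make that cost concrete, here is a small sketch (again using scikit-learn's PolynomialFeatures, with an arbitrary 10-variable input assumed purely for illustration) that counts how many output features the transform produces as the degree grows:

```python
# Sketch: how the number of output features grows with polynomial degree.
# Assumes an illustrative dataset with 10 input variables.
from numpy import zeros
from sklearn.preprocessing import PolynomialFeatures

X = zeros((1, 10))  # placeholder data; only the column count matters here
for degree in range(1, 6):
    n_features = PolynomialFeatures(degree=degree).fit_transform(X).shape[1]
    print(f'degree={degree}: {n_features} features')
# degree=1: 11 features, ..., degree=5: 3003 features
```

With only 10 input variables, degree 5 already yields 3,003 output features, which is why high-degree transforms on wide datasets can produce thousands or millions of variables.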

In this tutorial, you will discover how to use polynomial feature transforms for feature engineering with numerical input variables.

After completing this tutorial, you will know:

  • Some machine learning algorithms prefer or perform better with polynomial input features.
  • How to use the polynomial features transform to create new versions of input variables for predictive modeling.
  • How the degree of the polynomial impacts the number of input features created by the transform.