Complete Guide to Feature Engineering: Zero to Hero

This article was published as part of the Data Science Blogathon.

Introduction

You may already be aware that feature engineering is the heart of any machine learning model. How successful a model is, and how accurately it predicts, depends largely on the feature engineering techniques applied. In this article, we are going to dive deep into feature engineering: each technique will be explained, with code wherever necessary. So, let's start from ground zero: what is feature engineering?


Image 1

What is feature engineering?

All machine learning algorithms use some input data to generate outputs. This input data contains many features, which may not be in the proper form to be given to the model directly; they need some kind of processing, and this is where feature engineering helps. Feature engineering fulfils two main goals:

  • It prepares the input dataset in the form which is required for a specific model or machine learning algorithm.
  • Feature engineering helps improve the performance of machine learning models, often significantly.

According to some surveys, data scientists spend most of their time on data preparation. See the figure below:


Image 2

This clearly shows the importance of feature engineering in machine learning. So, this article will help you in understanding this whole concept.

Prerequisites:

1. Install Python and get basic hands-on knowledge of it.

2. Pandas library in python. Command to install: pip install pandas

3. Numpy library in python. Command to install: pip install numpy

Then import these two libraries like this:

import pandas as pd
import numpy as np

Now, let’s begin!

I am listing here the main feature engineering techniques to process the data. We will then look at each technique one by one in detail with its applications.

The main feature engineering techniques that will be discussed are:

1. Missing data imputation

2. Categorical encoding

3. Variable transformation

4. Outlier engineering

5. Date and time engineering

Missing Data Imputation for Feature Engineering

In your input data, some features or columns may have missing values, which occur when no data is stored for a certain observation in a variable. Missing data is very common and practically unavoidable, especially in real-world datasets, and using data with missing values can significantly affect the results. Imputation is the act of replacing missing data with statistical estimates of the missing values; it lets you complete your training data so that it can be provided to a model or algorithm for prediction.

There are multiple techniques for missing data imputation. These are as follows:

  1. Complete case analysis
  2. Mean / Median / Mode imputation
  3. Missing Value Indicator

Complete Case Analysis for Missing Data Imputation

Complete case analysis means analyzing only those observations in the dataset that contain values in all the variables; in other words, removing all observations that contain missing values. This method can only be used when few observations have missing values, otherwise it shrinks the dataset so much that it is of little use.

So, it can be used when the amount of missing data is small, but in real-life datasets the amount of missing data is often large. In practice, complete case analysis is rarely a good option, although you can use it when only a small fraction of the data is missing.

Let’s see the use of this on the titanic dataset.

Download the titanic dataset from here.

import numpy as np
import pandas as pd

titanic = pd.read_csv('titanic/train.csv')

# make a copy of the titanic dataset
data1 = titanic.copy()
data1.isnull().mean()


If we removed all the observations with missing values, we would end up with a very small dataset, given that Cabin is missing for 77% of the observations.

# check how many observations we would drop
print('total passengers with values in all variables: ', data1.dropna().shape[0])
print('total passengers in the Titanic: ', data1.shape[0])
print('percentage of data without missing values: ', data1.dropna().shape[0] / float(data1.shape[0]))
total passengers with values in all variables:  183
total passengers in the Titanic:  891
percentage of data without missing values:  0.2053872053872054

So, we have complete information for only 20% of our observations in the Titanic dataset. Thus, Complete Case Analysis method would not be an option for this dataset.

Mean/ Median/ Mode for Missing Data Imputation

Missing values can also be replaced with the mean, median, or mode of the variable (feature). This is widely used in data competitions and in many practical situations, and it is most suitable when data is missing at random and in small proportions.
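The snippets below use X_train and X_test, which the original code assumes have already been created. A minimal sketch of such a split, using scikit-learn's train_test_split (not shown in the article; the target column and split parameters are illustrative assumptions), could be:

from sklearn.model_selection import train_test_split

# illustrative split of the Titanic data into train and test sets;
# 'Survived' is used as the target and the split sizes are assumptions
X_train, X_test, y_train, y_test = train_test_split(
    data1.drop('Survived', axis=1), data1['Survived'],
    test_size=0.3, random_state=0)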

# impute missing values in Age in the train and test sets
median = X_train.Age.median()
for df in [X_train, X_test]:
    df['Age'].fillna(median, inplace=True)

X_train['Age'].isnull().sum()

Output:
0

The output 0 indicates that the Age feature now has no null values.

One important point to consider when doing imputation is that the imputation value should be computed from the training set first and then applied to both sets: all missing values in the train set and test set should be filled with the value extracted from the train set only. This helps avoid data leakage from the test set.
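As an aside, scikit-learn's SimpleImputer (not used in the snippets above) encapsulates this fit-on-train, apply-to-both pattern. A minimal equivalent sketch of the same median imputation:

from sklearn.impute import SimpleImputer

# learn the median of Age from the training data only,
# then apply that same median to both the train and test sets
imputer = SimpleImputer(strategy='median')
X_train[['Age']] = imputer.fit_transform(X_train[['Age']])
X_test[['Age']] = imputer.transform(X_test[['Age']])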

Missing Value Indicator for Missing Data Imputation

This technique involves adding a binary variable to indicate whether the value is missing for a certain observation. This variable takes the value 1 if the value is missing, or 0 otherwise. But we still need to replace the missing values in the original variable, which we typically do with mean or median imputation. By using these two techniques together, if the missing value has predictive power it will be captured by the missing indicator, and if it doesn't it will be masked by the mean/median imputation.

X_train['Age_NA'] = np.where(X_train['Age'].isnull(), 1, 0)
X_test['Age_NA'] = np.where(X_test['Age'].isnull(), 1, 0)
X_train.head()

Output:

(Output: X_train with the new Age_NA indicator column)
X_train.Age.mean(), X_train.Age.median()
(29.915338645418327, 29.0)

Now, since the mean and the median are very similar, let's fill the missing values with the median.

X_train['Age'].fillna(X_train.Age.median(), inplace=True)
X_test['Age'].fillna(X_train.Age.median(), inplace=True)
X_train.head(10)

So, the Age_NA variable was created to capture the missingness.

Categorical encoding in Feature Engineering

Categorical data is data that takes only a limited number of values. Let's understand this with an example: a Gender variable in a dataset will have categorical values like Male and Female. If a survey is done to find out which car people own, the result will be categorical (the answers fall into categories like Honda, Toyota, Hyundai, Maruti, None, etc.). The point to notice here is that the data falls into a fixed set of categories.

If you give a dataset with raw categorical variables directly to most models, you will get an error, so the categories need to be encoded. There are multiple techniques to do so:

  1. One-Hot encoding (OHE)
  2. Ordinal encoding
  3. Count and Frequency encoding
  4. Target encoding / Mean encoding

Let’s understand them in detail.

One-Hot Encoding

It is a commonly used technique for encoding categorical variables. It creates a binary variable for each category present in the categorical variable; the binary variable takes the value 1 if the observation belongs to that category and 0 otherwise. Each new variable is called a dummy variable or binary variable.

Example: if the categorical variable is Gender with labels female and male, two boolean variables can be generated, called male and female. Male takes the value 1 if the person is male and 0 otherwise, and similarly for the female variable. See the code below for the Titanic dataset.

pd.get_dummies(data['Sex']).head()
pd.concat([data['Sex'], pd.get_dummies(data['Sex'])], axis=1).head()

Output:

      Sex  female  male
0    male       0     1
1  female       1     0
2  female       1     0
3  female       1     0
4    male       0     1

But you can see that we only need one dummy variable to represent the Sex categorical variable. As a general rule, if there are n categories you only need n-1 dummy variables, so you can safely drop any one of them. To get n-1 dummy variables, simply use this:

pd.get_dummies(data['Sex'], drop_first=True).head()

Ordinal Encoding

What does ordinal mean? It simply means a categorical variable whose categories can be ordered in a meaningful way.

For example, students' grades in an exam are ordinal (A, B, C, D, Fail). In this case, a simple way to encode them is to replace the labels with ordinal numbers. Look at the sample code:

from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(["paris", "paris", "tokyo", "amsterdam"])
le.transform(["tokyo", "tokyo", "paris"])
# array([2, 2, 1])
list(le.inverse_transform([2, 2, 1]))
# ['tokyo', 'tokyo', 'paris']
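Note that LabelEncoder assigns integers in alphabetical order (amsterdam=0, paris=1, tokyo=2), not in any meaningful order. For a truly ordinal variable such as the exam grades mentioned above, an explicit mapping preserves the intended order; here is a minimal sketch (the Grade column and its values are hypothetical, not taken from any dataset used in this article):

import pandas as pd

# hypothetical grades column
grades = pd.DataFrame({'Grade': ['A', 'C', 'B', 'Fail', 'D']})

# explicit mapping from worst to best preserves the meaningful order
grade_order = {'Fail': 0, 'D': 1, 'C': 2, 'B': 3, 'A': 4}
grades['Grade_encoded'] = grades['Grade'].map(grade_order)
print(grades)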

Count and Frequency Encoding

In this encoding technique, categories are replaced by the count of the observations that show that category in the dataset. Replacement can also be done with the frequency, i.e. the percentage of observations in the dataset. For example, if 30 out of 100 observations are male, we can replace male with the count 30 or with the frequency 0.3.

This approach is popular in data science competitions; it essentially represents how many times (or how often) each label appears in the dataset.
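As a minimal sketch, count and frequency encoding could be applied to the Titanic Embarked column like this (the column choice is only for illustration):

# count encoding: replace each category by how many times it appears
count_map = data['Embarked'].value_counts().to_dict()
data['Embarked_count'] = data['Embarked'].map(count_map)

# frequency encoding: replace each category by its fraction of the dataset
freq_map = data['Embarked'].value_counts(normalize=True).to_dict()
data['Embarked_freq'] = data['Embarked'].map(freq_map)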

Target / Mean Encoding

In target encoding, also called mean encoding, we replace each category of a variable with the mean value of the target for the observations that show that category. For example, suppose there is a categorical variable "city" and we want to predict whether a customer will buy a TV if we send a letter. If 30 percent of the people in the city "London" buy the TV, we would replace London with 0.3. This captures some information about the target while encoding the category, and it does not expand the feature space, so it is a reasonable encoding option. However, it may cause over-fitting, so be careful. Look at this code for the implementation:

import pandas as pd

# creating the dataset
data = {'CarName': ['C1', 'C2', 'C3', 'C1', 'C4', 'C3', 'C2', 'C1', 'C2', 'C4', 'C1'],
        'Target':  [1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0]}
df = pd.DataFrame(data)
print(df)

Output:

CarName Target
0 C1 1
1 C2 0
2 C3 1
3 C1 1
4 C4 1
5 C3 0
6 C2 0
7 C1 1
8 C2 1
9 C4 1
10 C1 0

df.groupby(['CarName'])['Target'].count()

Output:

CarName
C1 4
C2 3
C3 2
C4 2
Name: Target, dtype: int64

df.groupby(['CarName'])['Target'].mean()

Output:

CarName
C1    0.750000
C2    0.333333
C3    0.500000
C4    1.000000
Name: Target, dtype: float64

Mean_encoded = df.groupby(['CarName'])['Target'].mean().to_dict()

df['CarName'] = df['CarName'].map(Mean_encoded)

print(df)

CarName Target
0 0.750000 1
1 0.333333 0
2 0.500000 1
3 0.750000 1
4 1.000000 1
5 0.500000 0
6 0.333333 0
7 0.750000 1
8 0.333333 1
9 1.000000 1
10 0.750000 0
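Because mean encoding uses the target, one common safeguard against the over-fitting mentioned above is to learn the category means on the training data only and then map them onto the test data. A minimal sketch, recreating the small example dataset and assuming a scikit-learn train/test split (not part of the original code):

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({'CarName': ['C1', 'C2', 'C3', 'C1', 'C4', 'C3', 'C2', 'C1', 'C2', 'C4', 'C1'],
                   'Target':  [1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0]})
train, test = train_test_split(df, test_size=0.3, random_state=42)

# learn the per-category target means on the training split only
means = train.groupby('CarName')['Target'].mean()

train = train.assign(CarName_enc=train['CarName'].map(means))
# categories unseen during training fall back to the global training mean
test = test.assign(CarName_enc=test['CarName'].map(means).fillna(train['Target'].mean()))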

Variable Transformation

Some machine learning algorithms, such as linear and logistic regression, assume that the variables are normally distributed. If a variable is not normally distributed, it is sometimes possible to find a mathematical transformation so that the transformed variable is Gaussian, and Gaussian-distributed variables often boost the performance of these algorithms.

Commonly used mathematical transformations are:

  1. Logarithm transformation – log(x)
  2. Square root transformation – sqrt(x)
  3. Reciprocal transformation – 1 / x
  4. Exponential or power transformation – exp(x) or x**m

Let’s check these out on the titanic dataset.

Loading numerical features of the titanic dataset.

cols_required = ['Age', 'Fare', 'Survived']
data[cols_required].head()

Output:

    Age     Fare  Survived
0  22.0   7.2500         0
1  38.0  71.2833         1
2  26.0   7.9250         1
3  35.0  53.1000         1
4  35.0   8.0500         0

First, we need to fill in missing data. We will start with filling missing data with a random sample.

def impute_na(data, variable):
    df = data.copy()
    df[variable + '_random'] = df[variable]
    # extract a random sample of observed values, as many as there are missing values
    random_sample = df[variable].dropna().sample(df[variable].isnull().sum(), random_state=0)
    random_sample.index = df[df[variable].isnull()].index
    df.loc[df[variable].isnull(), variable + '_random'] = random_sample
    return df[variable + '_random']

# fill na
data['Age'] = impute_na(data, 'Age')

Now, to visualize the distribution of the Age variable, we will plot a histogram and a Q-Q plot.

import matplotlib.pyplot as plt
import pylab
import scipy.stats as stats

def plots(df, variable):
    plt.figure(figsize=(15, 6))
    plt.subplot(1, 2, 1)
    df[variable].hist()
    plt.subplot(1, 2, 2)
    stats.probplot(df[variable], dist="norm", plot=pylab)
    plt.show()

plots(data, 'Age')

Output:

(Output: histogram and Q-Q plot of Age)

The Age variable is almost normally distributed, except for some observations in the lower tail. You can also notice a slight skew to the left in the histogram.

Now, let’s apply the above transformation and compare the transformed Age variable.

Logarithmic transformation

data['Age_log'] = np.log(data.Age)
plots(data, 'Age_log')

Output:

(Output: histogram and Q-Q plot of Age after the logarithmic transformation)

You can observe here that the logarithmic transformation did not produce a Gaussian-like distribution for the Age column.

Square root transformation – sqrt(x)

data['Age_sqr'] = data.Age**(1/2)
plots(data, 'Age_sqr')

Output:

(Output: histogram and Q-Q plot of Age after the square root transformation)

This is a bit better, but the variable is still not Gaussian.

Reciprocal transformation – 1 / x

data['Age_reciprocal'] = 1 / data.Age
plots(data, 'Age_reciprocal')

Output:

(Output: histogram and Q-Q plot of Age after the reciprocal transformation)

This transformation is also not useful to transform Age into a normally distributed variable.

Exponential / power transformation – x**(1/1.2)

data['Age_exp'] = data.Age**(1/1.2)
plots(data, 'Age_exp')

Output:

(Output: histogram and Q-Q plot of Age after the power transformation)

Of all the transformations above, this one produces the distribution closest to normal.

Outlier engineering

Outliers are defined as those values that are unusually high or low with respect to the rest of the observations of the variable. Some of the techniques to handle outliers are:

1. Outlier removal

2. Treating outliers as missing values

3. Outlier capping

How to identify outliers?

The most basic form of outlier detection is extreme value analysis. If the distribution of the variable is Gaussian, outliers will lie outside the mean plus or minus three times the standard deviation of the variable. If the variable is not normally distributed, quantiles can be used instead. Calculate the quantiles and then the inter-quartile range:

The inter-quartile range (IQR) is the 75th quantile minus the 25th quantile.

Upper boundary: 75th quantile + (IQR * 1.5)

Lower boundary: 25th quantile - (IQR * 1.5)

Values that fall outside these boundaries are considered outliers.
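As a minimal sketch, here is how the inter-quartile range proximity rule could be applied to the Titanic Fare column (the column choice is only for illustration):

# compute the IQR boundaries for Fare
q25 = data['Fare'].quantile(0.25)
q75 = data['Fare'].quantile(0.75)
iqr = q75 - q25
lower, upper = q25 - 1.5 * iqr, q75 + 1.5 * iqr

# observations outside the boundaries are flagged as outliers
outliers = data[(data['Fare'] < lower) | (data['Fare'] > upper)]
print('number of Fare outliers:', outliers.shape[0])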

Outlier removal

In this technique, outlier observations are simply removed from the dataset. If outliers are not abundant, dropping them will not affect the data much, but if multiple variables have outliers we may end up removing a big chunk of the dataset. Keep this in mind whenever dropping outliers.
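Reusing the lower and upper Fare boundaries computed in the sketch above, removal could look like this:

# keep only the observations whose Fare lies inside the boundaries
data_trimmed = data[data['Fare'].between(lower, upper)]
print('rows before:', len(data), 'rows after:', len(data_trimmed))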

Treating outliers as missing values

You can also treat outliers as missing values. These missing values then have to be filled, and you can use any of the imputation methods discussed earlier in this article to do so.
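With the same Fare boundaries as above, a minimal sketch of this approach:

# set outliers to NaN, then impute them like any other missing value (here: the median)
data['Fare_cleaned'] = data['Fare'].where(data['Fare'].between(lower, upper))
data['Fare_cleaned'] = data['Fare_cleaned'].fillna(data['Fare_cleaned'].median())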

Outlier capping

This procedure involves capping the maximum and minimum values at a predefined value. This value can be derived from the variable distribution. If a variable is normally distributed we can cap the maximum and minimum values at the mean plus or minus three times the standard deviation. But if the variable is skewed, we can use the inter-quantile range proximity rule or cap at the bottom percentiles.
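Again using the Fare boundaries from the earlier sketch, capping can be done with clip (an illustrative sketch, not the article's own code):

# cap values outside the boundaries instead of dropping them
data['Fare_capped'] = data['Fare'].clip(lower=lower, upper=upper)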

Date and Time Feature Engineering

Date variables are considered a special type of categorical variable, and if they are processed well they can enrich a dataset to a great extent. From a date we can extract information such as the month, semester, quarter, day, day of the week, whether it is a weekend or not, hours, minutes, and much more. Let's use a dataset and do some coding around it.

For this, we will use the Lending club dataset. Download it from here.

We will use only two columns from the dataset: issue_d and last_pymnt_d.

use_cols = ['issue_d', 'last_pymnt_d']
data = pd.read_csv('/kaggle/input/lending-club-loan-data/loan.csv', usecols=use_cols, nrows=10000)
data.head()

Output:

    issue_d last_pymnt_d
0  Dec-2018     Feb-2019
1  Dec-2018     Feb-2019
2  Dec-2018     Feb-2019
3  Dec-2018     Feb-2019
4  Dec-2018     Feb-2019

Now, parse the dates into datetime format, as they are currently stored as strings.

data['issue_dt'] = pd.to_datetime(data.issue_d)
data['last_pymnt_dt'] = pd.to_datetime(data.last_pymnt_d)
data[['issue_d', 'issue_dt', 'last_pymnt_d', 'last_pymnt_dt']].head()

Output:

    issue_d   issue_dt last_pymnt_d last_pymnt_dt
0  Dec-2018 2018-12-01     Feb-2019    2019-02-01
1  Dec-2018 2018-12-01     Feb-2019    2019-02-01
2  Dec-2018 2018-12-01     Feb-2019    2019-02-01
3  Dec-2018 2018-12-01     Feb-2019    2019-02-01
4  Dec-2018 2018-12-01     Feb-2019    2019-02-01

Now, extracting month from date.

data['issue_dt_month'] = data['issue_dt'].dt.month
data[['issue_dt', 'issue_dt_month']].head()

Output:

    issue_dt  issue_dt_month
0 2018-12-01              12
1 2018-12-01              12
2 2018-12-01              12
3 2018-12-01              12
4 2018-12-01              12

Extracting quarter from date.

data['issue_dt_quarter'] = data['issue_dt'].dt.quarter
data[['issue_dt', 'issue_dt_quarter']].head()

Output:

    issue_dt  issue_dt_quarter
0 2018-12-01                 4
1 2018-12-01                 4
2 2018-12-01                 4
3 2018-12-01                 4
4 2018-12-01                 4

Extracting the day of the week from the date.

data['issue_dt_dayofweek'] = data['issue_dt'].dt.dayofweek
data[['issue_dt', 'issue_dt_dayofweek']].head()

Output:

    issue_dt  issue_dt_dayofweek
0 2018-12-01                   5
1 2018-12-01                   5
2 2018-12-01                   5
3 2018-12-01                   5
4 2018-12-01                   5

Extracting the weekday name from the date.

# .dt.day_name() replaces the .dt.weekday_name attribute from older pandas versions
data['issue_dt_dayofweek'] = data['issue_dt'].dt.day_name()
data[['issue_dt', 'issue_dt_dayofweek']].head()

Output:

    issue_dt issue_dt_dayofweek
0 2018-12-01           Saturday
1 2018-12-01           Saturday
2 2018-12-01           Saturday
3 2018-12-01           Saturday
4 2018-12-01           Saturday
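As one more example, the weekend flag mentioned at the start of this section can be derived directly from the day of the week (a minimal sketch):

# Saturday and Sunday correspond to dayofweek values 5 and 6
data['issue_dt_is_weekend'] = data['issue_dt'].dt.dayofweek.isin([5, 6]).astype(int)
data[['issue_dt', 'issue_dt_is_weekend']].head()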

So, these are just a few examples of date and time features; you can explore many more. Features like these often help improve the quality of the data fed to the model.

End Notes

In this article, I tried to explain feature engineering in detail with some code examples on the dataset. Feature engineering is very helpful in making your model more accurate and effective. As a next step, try out the techniques we discussed above on some other datasets for better understanding. I hope you find this article helpful. Let’s connect on Linkedin.

Thanks for reading if you reached here :).

Happy coding!

Image Sources-

  1. Image 1 https://onlinecoursebay.com/
  2. Image 2 https://www.forbes.com/

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.



Gaurav Sharma, 27 Sep 2022

Love Programming, Blog writing and Poetry

