Handling Rare Events and Class Imbalance in Predictive Modeling for Machine Failure

Most supervised machine learning algorithms have difficulty when there is class imbalance in the training data, i.e., when the amount of data belonging to one class heavily outnumbers the other class. However, there are many real life problems where we encounter this situation, e.g., fraud, customer churn and machine failure. There are various techniques to address this thorny problem of class imbalance.

In this post we will go over a technique based on oversampling of the minority class data, called the Synthetic Minority Over-sampling Technique (SMOTE). We will go into the details of a Hadoop based implementation using machine failure data as the use case.

The implementation can be found in my OSS project avenir. The ETL Map Reduce jobs necessary for preprocessing the data before running the SMOTE algorithm come from another OSS project of mine, chombo.

Class Imbalance

Most classification algorithms are adversely affected by class imbalance, with some exceptions; decision trees are relatively more tolerant of it. Most supervised learning algorithms can tolerate a small amount of class imbalance, but when the imbalance is excessive, some corrective action is necessary.

As outlined in the article cited above and this reddit post, here are the techniques to handle class imbalance.

  1. Collect more data
  2. Use different performance metric
  3. Resample data set
  4. Generate synthetic data (SMOTE)
  5. Adaptive synthetic over sampling (Borderline-SMOTE and ADA-SYN)
  6. Sampling with data cleaning techniques
  7. Cluster-based sampling method (cluster-based oversampling – CBO)
  8. Integration of sampling and boosting (SMOTEBoost, DataBoost-IM)
  9. Over / under sampling with jittering (JOUS-Boost)
  10. Cost-Sensitive Dataspace Weighting with Adaptive Boosting (AdaC1, AdaC2, and AdaC3)
  11. Cost-Sensitive Decision Trees
  12. Cost-Sensitive Neural Networks
  13. Cost-Sensitive Bayesian Classifiers
  14. Cost-Sensitive SVMs
  15. Auto associator
  16. The Mahalanobis-Taguchi System (MTS)
  17. Use different classification algorithm
  18. Use cost based performance metric.
  19. Use different perspective e.g. anomaly detection
  20. Be more creative e.g. one class classifier

Most of the techniques follow a sampling based or misclassification cost based approach. Some classification algorithms have been extended to account for oversampling, so those techniques are classification algorithm specific. We are going to use the 4th technique in the list above.

Synthetic Minority Over Sampling Technique

With oversampling based approaches, we generate additional data for the minority class until the two classes are equally represented in the data set. The steps of the SMOTE algorithm are as follows.

  1. For each minority class record, find the nearest neighbors with the same class
  2. Randomly sample to select a neighbor
  3. For each field, interpolate between the source record and the neighbor at a random interpolation point between the two values from the two records
  4. Repeat steps 2 and 3 depending on the amount of imbalance
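The steps above can be sketched in Python. This is a minimal illustration only, not the actual avenir Map Reduce code; the record and neighbor values are hypothetical, and we assume the neighbors have already been computed.

```python
import random

def smote_sample(record, neighbors, rng=random):
    """Generate one synthetic record by interpolating between a
    minority class record and a randomly selected nearest neighbor."""
    neighbor = rng.choice(neighbors)      # step 2: random neighbor
    synthetic = []
    for a, b in zip(record, neighbor):    # step 3: field wise interpolation
        t = rng.random()                  # random interpolation point in [0, 1)
        synthetic.append(a + t * (b - a))
    return synthetic

# Hypothetical normalized minority class record and its nearest neighbors
record = [0.2, 0.5, 0.1]
neighbors = [[0.25, 0.45, 0.15], [0.1, 0.6, 0.05]]
sample = smote_sample(record, neighbors)
```

Each synthetic field value lies between the source value and the chosen neighbor's value, so the new record stays inside the local neighborhood of the minority class.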

The SMOTE algorithm introduces randomness in generating the synthetic data in two ways, as follows.

  1. Random selection among k nearest neighbors
  2. Random field wise interpolation between source record and selected neighboring records

Before running the algorithm, we need to perform these 4 additional pre-processing steps. The second step is an optimization to cut down the amount of processing.

  1. Normalize data
  2. Remove majority class records
  3. Find distance between each pair of records
  4. Find the nearest neighbors for minority class records

Implementation of the solution involves 5 Map Reduce jobs chained together; the output of one goes as input to the next. Next we will discuss them in detail.

Machine Failure Data

Our use case involves failure data for some machine, e.g., a motor. The data has an ID field, 8 feature attribute fields and 1 class attribute field, as follows.

  1. ID
  2. age
  3. time since last maintenance
  4. number of break downs
  5. first spectral frequency
  6. first spectral frequency amplitude
  7. second spectral frequency
  8. second spectral frequency amplitude
  9. failure status

The last field is the class attribute. The majority class corresponds to a machine not having failed, and constitutes about 95% of the data set we will be using.

The data is collected for multiple machines of the same type, with a record generated every 24 hours. A predictive model can be built with training data collected over a 1 month period. Data collected over a 24 hour period can then be used with the model to predict any impending failure of any machine.

There are 4 fields related to frequency spectral density. When operating under normal conditions, rotating machinery exhibits a frequency spectral density of vibration amplitudes.

When a failure is impending, the frequency spectral density generally undergoes a change. The frequency spectral density is characterized in the data with the two highest frequency components.


Data Normalization

Numerical fields in the data may have different scales, and the data may contain categorical fields as well. For accurate distance calculation, the different fields should have uniform influence on the distance. To enforce this, we need to normalize the numerical fields.

Normalization is performed by the Map Reduce class Normalizer. The following normalization techniques are supported.

  1. min max
  2. zscore
  3. center
  4. unit sum

We have used zscore. With zscore, we could optionally eliminate outliers also, but we have not chosen that option. Here is some sample output data after normalization.
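The zscore option can be sketched in plain Python as below. This is an illustration with made up values, not the actual Normalizer Map Reduce job, and it covers only the numeric fields.

```python
import statistics

def zscore_normalize(rows):
    """Normalize each numeric column to zero mean and unit
    standard deviation (the zscore technique)."""
    cols = list(zip(*rows))
    means = [statistics.mean(c) for c in cols]
    stdevs = [statistics.pstdev(c) for c in cols]
    return [[(v - m) / s if s else 0.0
             for v, m, s in zip(row, means, stdevs)]
            for row in rows]

# Hypothetical raw feature values on very different scales
rows = [[12.0, 300.0], [15.0, 450.0], [9.0, 600.0]]
normalized = zscore_normalize(rows)
```

After normalization every column has mean 0 and standard deviation 1, so no single field dominates the distance calculation purely because of its scale.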


Filtering Out Majority Class Records

Since we don’t need the majority class records any more, in this step we filter them out using the Map Reduce job Projection. Filtering out the majority class records drastically reduces the amount of computation necessary, especially for the inter record distance calculation, which is the next step.

Projection is essentially a Map Reduce implementation of a SQL select query. Fields to be projected and the where clause expressions are provided through configuration parameters. Here is some sample output.
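The effect of this step can be illustrated with a small sketch. The field layout and values below are made up; the real Projection job reads its field list and where clause from configuration.

```python
# Hypothetical records: ID, normalized features, class label last,
# with "F" marking a failed (minority class) machine
records = [
    "M1,0.4,-1.2,0.7,N",
    "M2,1.1,0.3,-0.5,F",
    "M3,-0.2,0.8,1.4,N",
]

# Keep only minority class rows, analogous to a SELECT ... WHERE query
minority = [r for r in records if r.rsplit(",", 1)[1] == "F"]
```

With a 95/5 class split, this cuts the record count by roughly a factor of 20, and the pairwise distance work in the next step by roughly a factor of 400.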


Inter Record Distance

This is a computationally intensive task, since it calculates the distance between all possible record pairs. All the field meta data is provided through a JSON file. Another JSON file provides various distance calculation related parameters.

The implementation is in the Map Reduce class RecordSimilarity. Data is hashed into buckets and the buckets are then Cartesian joined. The number of reducer calls is b², where b is the number of buckets, which is a configurable parameter.

It supports the following field types. Weights can be assigned to different fields to control their influence on the final distance. For text fields, various algorithms are supported.

  1. numeric
  2. categorical
  3. text
  4. geo location

The following algorithms are supported for distance calculation. Each of these can be applied to a subset of fields and the results then aggregated for the whole record, or one algorithm can be applied to all the fields.

  1. euclidean
  2. manhattan
  3. minkowski
  4. categorical

More details on distance calculation can be found in an earlier post. Here is some sample output. Each line contains 2 IDs, the 2 records and the distance between the records, optionally scaled.
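For purely numeric records, the weighted euclidean option can be sketched as below. This is an illustration only; the RecordSimilarity job also handles categorical, text and geo location fields, and the weights here are assumed values.

```python
import math

def record_distance(a, b, weights=None):
    """Weighted euclidean distance between two numeric records;
    the weights control each field's influence on the result."""
    if weights is None:
        weights = [1.0] * len(a)          # uniform influence by default
    return math.sqrt(sum(w * (x - y) ** 2
                         for w, x, y in zip(weights, a, b)))

d = record_distance([0.0, 3.0], [4.0, 0.0])   # → 5.0
```

Because the fields were normalized in the earlier step, equal weights give every field the same influence; raising a field's weight makes differences in that field count for more.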


Nearest Neighbors

In this step we find the nearest neighbors for the records. The Map Reduce implementation is in the class TopMatchesByClass. There are two options for selecting nearest neighbors, as below.

  1. Nearest by count
  2. Nearest by distance

If the nearest by count option is chosen, the number of neighbors needs to be specified. With the nearest by distance option, the distance needs to be specified; in this case the number of neighbors selected will be variable. Here is some sample output. Each line contains a record followed by its k nearest records, the first one being the closest.
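The two selection options can be sketched as below. The neighbor IDs and distances are hypothetical; the actual logic lives in the TopMatchesByClass Map Reduce job.

```python
def nearest_by_count(distances, k):
    """distances: list of (neighbor_id, distance). Keep the k closest."""
    return sorted(distances, key=lambda p: p[1])[:k]

def nearest_by_distance(distances, max_dist):
    """Keep all neighbors within max_dist; the count is variable."""
    return sorted((p for p in distances if p[1] <= max_dist),
                  key=lambda p: p[1])

pairs = [("A", 0.9), ("B", 0.2), ("C", 0.5), ("D", 1.4)]
by_count = nearest_by_count(pairs, 2)      # → [("B", 0.2), ("C", 0.5)]
by_dist = nearest_by_distance(pairs, 1.0)  # → [("B", 0.2), ("C", 0.5), ("A", 0.9)]
```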


Choosing the correct neighborhood size for synthetic sample generation can be tricky. If it’s too small, the generated samples may not provide enough coverage of the feature space.

If the neighborhood is too large, then depending on the complexity of the distribution of the records belonging to the two classes in the feature space, there is a risk of creating samples in regions with predominantly majority class records.

Synthetic Minority Class Sample Generation

This is the last step of the workflow, which generates the synthetic minority class samples. The algorithm was described earlier. The Map Reduce implementation is in the ClassBasedOverSampler class.

By default, the distribution of the neighbors for sampling purposes is uniform. Exponential distribution of the neighbors is also supported; with it, neighbors that are closer have a higher probability and so tend to be sampled more. With the exponential distribution, a rejection sampling based technique is used. We have used the uniform distribution option.
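The exponential option can be sketched with rejection sampling as below. This is an illustration only, assuming an acceptance weight of exp(-lam × distance) with a made up decay rate lam; the actual sampling logic is inside ClassBasedOverSampler.

```python
import math
import random

def sample_neighbor(neighbors, lam=1.0, rng=random):
    """Rejection sampling sketch: propose a neighbor uniformly, accept
    with probability exp(-lam * distance), so closer neighbors are
    sampled more often. neighbors: list of (neighbor_id, distance)."""
    while True:
        nbr, dist = rng.choice(neighbors)       # uniform proposal
        if rng.random() < math.exp(-lam * dist):
            return nbr                          # accept; else retry

# Hypothetical neighbors: one close, one far
nbrs = [("near", 0.1), ("far", 2.0)]
picked = sample_neighbor(nbrs)
```

Over many draws, the close neighbor is accepted roughly e^(-0.1)/e^(-2.0) ≈ 6.7 times as often as the far one.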

One of the parameters that needs to be specified is the multiplier for oversampling. It can be calculated as below.

m = round(p / q) – 1

where
m = multiplier
p = number of majority class records
q = number of minority class records
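As a worked example with the roughly 95/5 class split of our data set (the record counts below are made up for illustration):

```python
def oversampling_multiplier(p, q):
    """Synthetic samples to generate per minority record
    to balance the classes: m = round(p / q) - 1."""
    return round(p / q) - 1

# e.g., 9500 majority class records vs 500 minority class records
m = oversampling_multiplier(9500, 500)   # → 18
```

With m = 18, the 500 original minority records plus 18 × 500 = 9000 synthetic ones match the 9500 majority class records.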

Here is some sample output. All the feature attribute values are generated by interpolation. The class attribute value remains as is. The ID is generated from the IDs of the source record and the neighbor record.


When the output generated from this step is combined with the original data set we get a class balanced data set.


Based on the available data we are generating additional data. It’s critical that the additional data generated does not significantly change the underlying distribution of the data. This may happen if the class boundaries are complex and there are multiple mutually exclusive regions in the feature space belonging to the same class.

To alleviate this problem, the neighborhood size for nearest neighbors should be selected carefully. Otherwise you may end up generating minority class records in a region of the feature space dominated by the majority class.

Final Words

This is yet another example where you set out to build a predictive model and immediately get swamped with a burgeoning list of data munging tasks before you can even start.

My goal was to build an SVM based predictive model for machine failures, and I immediately realized the data was highly imbalanced. That triggered a chain of ETL data munging tasks to get the data ready.

There are many techniques to combat class imbalance problems. We have used an oversampling based technique called SMOTE, and have gone through the details of the Map Reduce workflow using machine failure data as a use case. Details of the execution steps can be found in this tutorial document.
