Machine learning can detect anomalies, such as an item priced far below its actual price.

In 2013, during the peak holiday season, a software glitch at a large e-commerce website caused $1,000 treadmills to be erroneously listed online for $33 each. Word of the massive markdown spread quickly, driving a noticeable surge in web traffic and orders, and the site had to scramble to cancel them. Not only did the incident cost the company a great deal of money, it also put a sizable dent in customer satisfaction.

These days, every e-commerce enterprise lives in mortal fear of such a Black Swan event. Incidents like this show that traditional anomaly detection techniques, such as static reporting, are increasingly ineffective, and that better algorithms and machines are needed to augment human decisions.

Identifying an Anomaly from a Huge Pool of Data

A typical e-commerce platform processes thousands of transactions every day. These platforms also undergo a host of other continuous changes, such as new product listings, price updates and promotions, which adds to the complexity of executing transactions in a safe, secure and rapid manner. In a hyper-competitive marketplace, a single misstep in any of these activities can have disproportionately large repercussions for the business.

Given the ever-growing stock of data, it is important to remember that not all anomalies are the same. A one-off spike in the number of visitors to a website is a Point Anomaly. A rapid increase in page views and orders for a slow-moving product is a Contextual Anomaly. A Collective Anomaly, by contrast, is based not on an individual data point but on a collection of related data instances, such as a poor response to a pricing campaign across an entire product category. Identifying each of these types of anomalies requires a different approach.

Challenges Around the Traditional Approach

The traditional Business Intelligence (BI) approach to anomaly detection is to define a distribution that represents “normal” behavior and to flag any data point that deviates from it beyond a fixed threshold (a minimal version of this rule is sketched after the list below).

However, this approach faces several key challenges in real life:

  • Defining a representative distribution: Real-life data tends to be messy, and the distinction between normal and outlier behavior is not precise. For example, a sudden spike in sales of a product can be caused by a pricing software glitch or by a genuine surge in demand.
  • Shifting notion of an outlier: An outlier in one business environment may be classified as normal when conditions change. For example, a surge in website traffic from mobile devices can be an outlier or the sign of a systemic shift from desktop to mobile.
  • Data points for validation: A lack of data points for training and validation can create ambiguity about whether a certain incident is an anomaly. For example, an increase in customer complaints for a new product may not be an anomaly that requires an (often costly) systemic response.
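The following is a minimal sketch of this traditional rule, assuming hypothetical daily sales counts and a classic three-sigma cutoff. It also illustrates the first challenge above: a pricing glitch and a genuine demand surge trip exactly the same flag, with no context to tell them apart.

```python
# Traditional threshold-based detection on hypothetical daily sales counts.
# Both a glitch-driven spike and a genuine promotional surge exceed the
# static cutoff, which is exactly the ambiguity described above.
import numpy as np

rng = np.random.default_rng(42)
daily_sales = rng.poisson(lam=200, size=90).astype(float)  # ~90 days of "normal" history
daily_sales[60] = 900.0   # spike caused by a pricing software glitch
daily_sales[75] = 850.0   # genuine surge in demand -- indistinguishable to this rule

mean, std = daily_sales.mean(), daily_sales.std()
z_scores = (daily_sales - mean) / std

THRESHOLD = 3.0  # classic "3-sigma" rule
flagged_days = np.where(np.abs(z_scores) > THRESHOLD)[0]
print("Flagged days:", flagged_days)  # both days 60 and 75 are flagged
```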

These challenges demand that we rethink the traditional approach to anomaly detection and leverage techniques that can better identify event-based exceptions in near real time.

Machine Learning Approach for Anomaly Detection

As e-commerce continues to grow, Anomaly Detection across the value chain is becoming an area of increasing interest, and many firms are looking to Machine Learning (ML) for solutions.

Point Anomalies are the most common, and there are broadly two types of approaches to detecting them (a minimal sketch of each follows the list):

  • Classification based: These are typically supervised techniques that use historical data to classify each new event and require ongoing training. ‘Semi-supervised’ classification techniques are emerging as an interesting alternative: “normal” behavior is learned, and deviations from it are detected as anomalous.
  • Clustering based: This approach assumes that “normal” data belongs to large and dense clusters, while anomalies end up in smaller or low density clusters, or in extreme cases, in no clusters at all. While this has the inherent advantage of not requiring supervision, it is computationally intensive and may not work very well with data sets that are sparse.
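To make the two approaches concrete, here is a minimal sketch on hypothetical order features (order value and units per order); the feature names and parameters are illustrative, not taken from any production system. It uses scikit-learn's OneClassSVM for the semi-supervised classification route and DBSCAN for the clustering route.

```python
# Semi-supervised classification vs. clustering on hypothetical order data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
normal = rng.normal(loc=[60.0, 2.0], scale=[15.0, 1.0], size=(500, 2))  # typical orders
anomalies = np.array([[2.0, 40.0], [3.0, 55.0]])                        # e.g. buying sprees on a mispriced item
X = np.vstack([normal, anomalies])
X_scaled = StandardScaler().fit_transform(X)

# Semi-supervised classification: learn the boundary of "normal" behavior only,
# then score new events against it (-1 means anomalous).
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01).fit(X_scaled[:500])
print("One-Class SVM labels for the last two orders:", ocsvm.predict(X_scaled[-2:]))

# Clustering: dense clusters are treated as normal; points assigned to no
# cluster (label -1) are treated as anomalies. No labels or training needed.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X_scaled)
print("DBSCAN labels for the last two orders:", labels[-2:])
```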

Spotting Contextual Anomalies requires creating a context and then identifying outliers by their behavioral attributes. For instance, the number of pages a customer visits and the average time spent on the site before reaching the checkout page can be used to create the context for a genuine user. Once this context is learned, any activity with, say, a single page visit and a very short time on the site before landing on the checkout page can be inferred to be a price-scraping bot and treated as an anomaly. The obvious challenge is to build contexts that remain relevant over time.
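A minimal sketch of this idea follows, assuming hypothetical pre-checkout session features (pages viewed and seconds on site). The “context” here is simply a lower bound on each behavioral attribute, learned from genuine shopper sessions.

```python
# Contextual detection on hypothetical pre-checkout session features.
import numpy as np

rng = np.random.default_rng(1)
pages = rng.integers(3, 15, size=1000)                                  # genuine shoppers browse several pages
seconds = rng.normal(loc=180.0, scale=60.0, size=1000).clip(min=20.0)   # and spend a few minutes on the site

# Learn the context: the 1st percentile of each behavioral attribute.
min_pages = np.percentile(pages, 1)
min_seconds = np.percentile(seconds, 1)

def is_contextual_anomaly(session_pages, session_seconds):
    """Flag sessions that reach checkout far faster and shallower than the learned context."""
    return session_pages < min_pages and session_seconds < min_seconds

print(is_contextual_anomaly(1, 3.0))    # likely a price-scraping bot -> True
print(is_contextual_anomaly(8, 240.0))  # typical shopper -> False
```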

Collective Anomaly Detection, on the other hand, involves understanding the relationship between data instances over time (e.g. anomalous sequences) or space (e.g. anomalous sub-regions in a spatial data set). For instance, assume there is a weekly pattern of web traffic to the grocery web pages for a specific customer segment, say, the millennials. Algorithms can be trained on the weekly sub-sequence of traffic trends; the occurrence of a week’s worth of traffic that is below a specific threshold can then be flagged as an anomaly.
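Here is a minimal sketch of such a sub-sequence check, assuming hypothetical daily page-view counts for one segment: whole weeks whose totals fall well below the typical weekly pattern are flagged, even when no single day is extreme enough to be a Point Anomaly on its own.

```python
# Collective detection: flag whole weeks of unusually low traffic.
import numpy as np

rng = np.random.default_rng(7)
weeks = 26
daily_views = rng.normal(loc=10_000, scale=1_000, size=weeks * 7)
daily_views[14 * 7: 15 * 7] *= 0.8   # week 14: every day mildly low, no single-day outlier

weekly_totals = daily_views.reshape(weeks, 7).sum(axis=1)

# Threshold learned from the weekly sub-sequences themselves
# (three scaled median absolute deviations below the median).
median = np.median(weekly_totals)
mad = np.median(np.abs(weekly_totals - median))
threshold = median - 3 * 1.4826 * mad

anomalous_weeks = np.where(weekly_totals < threshold)[0]
print("Anomalous weeks:", anomalous_weeks)   # week 14 is flagged as a collective anomaly
```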

Machines can outperform humans when it comes to spotting anomalies in vast amounts of data. But businesses evolve and consumer preferences change, which means human intuition must regularly step in to re-evaluate the algorithms.

That said, there is huge potential to apply Anomaly Detection in areas like customer behavior and product performance, and to use these signals to drive better decisions. One thing is clear: businesses need to move away from static BI and reporting toward adaptive, learning systems.

Eshita Jha, a member of the Mu Sigma India Delivery Leadership team, contributed to this article.

Mu Sigma is a big data analytics company specializing in helping enterprises use data to make decisions.
