How to Control Alert Fatigue?

Sep 8, 2021

Alerts are indispensable to any IT operations system today. Site reliability engineers (SREs) and ITOps executives set up several monitoring tools across their IT landscape. When one of these tools detects a change, a high-risk action, or an outage, it triggers an automated alert. This could appear on the monitoring tool's dashboard itself, arrive via email, or be posted to enterprise collaboration tools such as Slack or Teams.

The alert is the first and most significant indication that something is wrong. Therefore, alerts are essential for the smooth functioning of applications, networks, and devices. Staying on top of incoming alerts and taking timely action is key to meeting every service level agreement (SLA). However, IT teams regularly complain that there are too many alerts, most of them false alarms.

37% of C-level security executives at large enterprises worldwide said they receive more than 10,000 alerts each month!

To overcome this, enterprises hire large IT teams to constantly stay on top of every alert that is raised. With over half of those alerts being false positives or redundancies, the result is what experts call alert fatigue.

What is alert fatigue?

Alert fatigue happens when a large volume of notifications and alarms overwhelms the receiver to the point that they start tuning them out. They see alerts so often that they become desensitized and ignore them, consciously or otherwise.

This leads to two significant business problems. When IT teams are fatigued by persistent alerts:

They may ignore an alert entirely, exacerbating the issue. For instance, when a database service is down, they might receive an alert. Due to alert fatigue, they might ignore it, which results in a customer-facing application going down, leading directly to revenue losses.

They might respond belatedly, increasing the impact. For instance, when they see the alert on the DB service, they might put it off for later. This can affect a wide range of upstream or downstream services, increasing mean-time-to-resolve (MTTR) for those services.

How to reduce alert fatigue?

The primary reason for alert fatigue is false alarms: alerts raised for something that is not, in reality, an issue. In statistical parlance, this is called a false positive.

Here are a few ways enterprises can leverage AIOps methods and machine learning models to suppress false positives.

#1 Collect the right data

Not all data is relevant. While setting up monitoring systems, it is vital to collect the right data from metrics, past alerts, logs, and topology. We’ve written in detail about the data to collect for better alerting and root-cause analysis in this blog post.
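As a rough illustration (not HEAL's actual data model), the Python sketch below shows one way these signals could be normalized into a single record, combining a metric sample with topology links, pointers to related logs, and feedback from past alerts. The field names and the "checkout-db" service are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AlertRecord:
    """One normalized record combining metrics, topology, logs, and alert history."""
    service: str                      # node in the service topology
    metric: str                       # e.g. "cpu_utilization"
    value: float                      # observed metric value
    timestamp: datetime               # when the sample was taken
    upstream: list[str] = field(default_factory=list)         # topology: callers
    downstream: list[str] = field(default_factory=list)       # topology: dependencies
    related_log_ids: list[str] = field(default_factory=list)  # pointers to raw logs
    past_alert_count: int = 0         # how often this signal alerted before
    label: Optional[bool] = None      # feedback: was a past alert a real incident?

# Example: a CPU sample on a hypothetical "checkout-db" service
sample = AlertRecord(
    service="checkout-db",
    metric="cpu_utilization",
    value=0.92,
    timestamp=datetime(2021, 9, 8, 10, 30),
    upstream=["checkout-api"],
    downstream=["storage-volume-7"],
)
print(sample)
```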

#2 Optimize anomaly detection with the right models

Not all anomalies will lead to incidents. A good anomaly detection tool should be able to tell the difference between expected anomalies and unexpected anomalies. At HEAL, one of the fundamental ways we make this distinction is through our proprietary workload-behavior correlation algorithms.

This is based on the observation that the workload on any service has a significant impact on its metrics. For instance, if a large number of users log on to an e-commerce application on Thanksgiving day and CPU utilization rises, that is not an anomaly; it is expected behavior.
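HEAL's actual algorithms are proprietary, but the general idea can be sketched in a few lines of Python: learn how a metric normally tracks workload, and flag only the deviations that the current workload cannot explain. The request and CPU numbers below are made up for illustration.

```python
import numpy as np

# Made-up history: requests per minute vs. observed CPU utilization.
requests = np.array([120, 300, 450, 800, 1500, 2200, 3000, 4000])
cpu_util = np.array([0.08, 0.15, 0.22, 0.35, 0.55, 0.70, 0.82, 0.95])

# Learn how the metric normally tracks workload (a simple linear fit here).
slope, intercept = np.polyfit(requests, cpu_util, deg=1)
residual_std = np.std(cpu_util - (slope * requests + intercept))

def is_unexpected(request_rate: float, observed_cpu: float, k: float = 3.0) -> bool:
    """Flag only deviations that the current workload does not explain."""
    expected = slope * request_rate + intercept
    return abs(observed_cpu - expected) > k * residual_std

# Thanksgiving spike: CPU is high, but so is traffic -> expected, no alert.
print(is_unexpected(3800, 0.90))  # False
# Same CPU during a quiet period -> workload does not explain it -> alert.
print(is_unexpected(300, 0.90))   # True
```

In practice the workload-to-metric relationship is rarely a single straight line, but the principle is the same: condition anomaly detection on workload so that expected spikes do not page anyone.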

#3 Work toward predictive root-cause analysis

Not all incidents are unpredictable. Most IT teams perform root-cause analysis post-facto, i.e., after the incident has occurred. This can reduce the MTTR but not actually predict the problem before it happens.

At HEAL, we perform root-cause analysis on relevant anomalies to predict the incidents they might cause and prevent them. We take a bottom-up approach to anomalies, exploring the topology of the services involved to identify the root cause. We also investigate similar past incidents and gauge the impact radius to inform responses to future events.
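As a toy illustration of the bottom-up idea (not HEAL's implementation), the sketch below walks a hypothetical service dependency graph and keeps only the anomalous services whose own dependencies are healthy; those are the likeliest root causes. All service names are made up.

```python
# Hypothetical topology: each service maps to the services it depends on.
topology = {
    "web-frontend":    ["checkout-api", "search-api"],
    "checkout-api":    ["checkout-db", "payment-gateway"],
    "search-api":      ["search-index"],
    "checkout-db":     [],
    "payment-gateway": [],
    "search-index":    [],
}

# Services currently showing relevant anomalies (from the detection step above).
anomalous = {"web-frontend", "checkout-api", "checkout-db"}

def probable_root_causes(anomalous: set[str], topology: dict[str, list[str]]) -> set[str]:
    """Bottom-up heuristic: an anomalous service whose behavior is not explained
    by any anomalous dependency is a candidate root cause."""
    roots = set()
    for service in anomalous:
        deps = topology.get(service, [])
        if not any(dep in anomalous for dep in deps):
            roots.add(service)
    return roots

print(probable_root_causes(anomalous, topology))  # {'checkout-db'}
```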

#4 Improve alerts over time using feedback from historical incidents

Every incident can be a critical data point for creating an early warning signal in the future. For instance, you can take data from past anomalies, corresponding incidents, logs, and remediation steps to predict what kinds of issues might cause outages. We leverage frequent pattern mining, long short-term memory (LSTM) networks, and other recurrent neural networks (RNNs) to prioritize alerts, giving IT teams a clear picture of how serious each incoming alert is.
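HEAL's models are beyond a blog snippet, but the feedback loop itself can be illustrated with a simple frequent-pattern-style scorer: count how often each combination of alert types preceded a real outage in the past, and use that rate to prioritize new alerts. The alert names and history below are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical history: the set of alert types seen in a window, plus whether
# that window ended in a real outage (feedback from past incidents).
history = [
    ({"db_cpu_high", "db_connections_high", "api_latency_high"}, True),
    ({"db_cpu_high", "db_connections_high"}, True),
    ({"disk_usage_warning"}, False),
    ({"api_latency_high"}, False),
    ({"db_cpu_high"}, False),
]

# Count how often each alert combination co-occurred with a real incident.
pattern_hits, pattern_total = Counter(), Counter()
for alerts, was_incident in history:
    for size in (1, 2):
        for pattern in combinations(sorted(alerts), size):
            pattern_total[pattern] += 1
            if was_incident:
                pattern_hits[pattern] += 1

def priority(current_alerts: set[str]) -> float:
    """Score 0..1: highest historical incident rate among matching patterns."""
    scores = [
        pattern_hits[p] / pattern_total[p]
        for size in (1, 2)
        for p in combinations(sorted(current_alerts), size)
        if pattern_total[p] > 0
    ]
    return max(scores, default=0.0)

print(priority({"db_cpu_high", "db_connections_high"}))  # 1.0 -> page someone
print(priority({"disk_usage_warning"}))                  # 0.0 -> low priority
```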

Choose HEAL for freedom from alert fatigue

HEAL integrates seamlessly with many leading monitoring tools to provide a robust AIOps engine that performs accurate anomaly detection and speedy root-cause analysis, and makes precise recommendations for remediation.

One of the primary goals of HEAL is to reduce false positives and the resulting alert fatigue. We leverage deep learning methods to optimize precision and recall, suppressing irrelevant alerts and triggering only the most serious ones.
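To make the precision/recall trade-off concrete, here is a small, hypothetical example: given model scores for past alerts and labels saying which ones were real incidents, sweeping the alerting threshold shows how suppressing more alerts (higher precision) can come at the cost of missing real incidents (lower recall). The scores and labels are made up.

```python
# Hypothetical labelled history: model scores for past alerts and whether each
# one turned out to be a real incident (1) or a false positive (0).
scores = [0.95, 0.90, 0.85, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    0,    1,    0,    0,    0,    0,    0]

def precision_recall(threshold: float):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A higher threshold suppresses more alerts (better precision) at the risk of
# missing real incidents (worse recall).
for t in (0.3, 0.5, 0.8):
    p, r = precision_recall(t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```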

To see how HEAL can help your teams overcome alert fatigue and manage ITOps more effectively, speak to a consultant today.