Outliers are hard to identify without additional investigation. They arise from measurement errors, or simply because the data are less consistent than researchers would like. Left unnoticed, outliers can distort the overall results of an analysis, so experts often recommend removing them, since they can skew trend forecasts and segmentations. In practice, however, deleting outliers does not always minimize risk.

Outliers have no practical use, do they?

A large number of outliers can harm the whole analysis by obscuring the underlying patterns. At first glance, it seems reasonable to delete them: if there are only a few, they are easy to discard. Removing a large share of the data, however, is a different problem.

Instead, I try to extract practical information from outliers, and I'd like to share several ways of doing so. Keep in mind that outliers are part of the data, not simply mistakes, so we should find ways to work with them rather than against them. For a deeper look at the data cleansing process, check this post.

Well-known methods for handling outliers, and even turning them to your advantage, include:

  • Univariate analysis, which looks for points with an extreme value on a single variable.
  • Multivariate analysis, which considers combinations of values across all variables.
  • The Minkowski error, a loss function that reduces the influence of outliers during model training.
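As a quick illustration of the univariate approach, here is a minimal sketch that flags points whose z-score on a single variable is extreme. The data, the cutoff of 2.0, and the `univariate_outliers` name are illustrative assumptions, not anything from the methods above:

```python
import numpy as np

def univariate_outliers(values, z_thresh=3.0):
    """Flag points whose z-score exceeds z_thresh (assumes roughly normal data)."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.abs(z) > z_thresh

# Invented sample: five ordinary readings and one extreme value
data = np.array([10.0, 11.0, 9.5, 10.2, 10.8, 55.0])
mask = univariate_outliers(data, z_thresh=2.0)
print(data[mask])  # only the 55.0 is flagged
```

The z-score cutoff is a judgment call: a threshold of 3 is a common default, but a single large outlier inflates the standard deviation it is measured against, which is one reason robust alternatives such as the IQR rule are often preferred.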

Should we treat them as helpful?

As mentioned above, it's possible to put outliers to good use. People sometimes equate outliers with noise, but that is wrong: outliers are valid but anomalous observations, while noise consists of random errors or missing data. Outliers deserve investigation because parametric statistics are often sensitive to them. They are not the enemy of your data; they add messiness, but that messiness is sometimes harmless and sometimes informative. For example, outliers can flag corrupted data or behavior that violates the rules, from natural catastrophes and overflows to fraud.
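To see why parametric statistics are sensitive to outliers, compare the mean (a parametric estimate) with the median (a robust one) on a small sample. The numbers below are invented for illustration: a single extreme value drags the mean far more than the median.

```python
import statistics

clean = [10, 12, 11, 13, 12]
with_outlier = clean + [120]  # one extreme value appended

# The mean shifts dramatically...
print(statistics.mean(clean), statistics.mean(with_outlier))
# ...while the median barely moves.
print(statistics.median(clean), statistics.median(with_outlier))
```

This is exactly why a single bad sensor reading can wreck a regression fit (which minimizes squared error around the mean) while leaving rank-based summaries almost untouched.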

How do we model or calculate outliers?

Modeling outliers can be a complicated process, but statistical analysis makes it tractable; it demands attention and a logical approach. For avalanches and other hazards, a useful notion is the return period, which describes how frequently an event of a given magnitude is expected to recur. If a volcano erupts on average every 50 years, its return period is 50 years. If researchers notice the observed frequency drifting from that figure, the estimated risk changes accordingly.
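The return-period arithmetic can be sketched directly: a return period is the reciprocal of the annual exceedance probability, so an event with a 2% chance per year recurs about every 50 years. The function names and numbers below are illustrative assumptions:

```python
def return_period(annual_probability):
    """Return period in years for an event with the given annual exceedance probability."""
    return 1.0 / annual_probability

def empirical_return_period(n_years, n_events):
    """Crude estimate from a historical record: years observed per event."""
    return n_years / n_events

print(return_period(0.02))              # eruption with p = 0.02 per year -> 50.0
print(empirical_return_period(200, 4))  # 4 eruptions seen in 200 years -> 50.0
```

The empirical version is only as good as the record length: with four events in two centuries, the estimate has a wide margin of error, which is precisely why extreme-value methods like POT exist.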

But avalanches themselves are rare, so we should pay attention to snowfall: an avalanche is the consequence of a cause, and that consequence shows up in the data as an outlier.

So it's easy to see that outliers can be really useful. An outlier can serve as a warning, a signal that helps rescue us from dangerous events.

What are peaks over threshold?

Peak flows above a chosen threshold constitute POT; the name describes the method, so you don't need to hunt for a hidden pattern behind it. To explain, let me return to avalanches. We don't think of an avalanche merely as snow falling or a blizzard; we treat it as an extreme form of snow. A similar thing happens with machine learning and statistical models.

Such a model resembles a chain of ordinary units, much like snow: we focus on daily data because they are common and frequent. As a result, it is hard for the model to pick out the distinctive features of avalanches; their signs tend to be dismissed as noise.

So we emphasize the extreme data points, as they are the most important for the model. The separate distribution fitted to this tail of the parent data set is known as the child distribution.
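In extreme-value theory, the exceedances above a high threshold are approximately Generalized Pareto distributed; with a shape parameter of zero, that reduces to an exponential, whose maximum-likelihood scale is simply the mean excess. The sketch below fits that simplest child distribution to simulated data; the snowfall series, the threshold of 15, and all parameters are invented for illustration:

```python
import random
import statistics

random.seed(0)
# Simulated daily snowfall (cm) with an exponential tail, as a stand-in for real data
snowfall = [random.expovariate(0.2) for _ in range(1000)]

threshold = 15.0  # hypothetical cut between routine snow and extremes
excesses = [x - threshold for x in snowfall if x > threshold]

# With shape parameter 0, the Generalized Pareto reduces to an exponential,
# whose maximum-likelihood scale is simply the mean excess.
scale = statistics.mean(excesses)
print(len(excesses), round(scale, 2))
```

In practice you would fit the full two-parameter Generalized Pareto (e.g. with a library such as SciPy) rather than assume a zero shape; the point here is only that the child distribution is fitted to the excesses, not to the whole data set.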

Because the threshold is critical to the child distribution, it cannot be chosen casually; the responsibility of finding a suitable one is real. Which steps should we take to avoid a wrong decision? Are there rules or hints for setting the threshold? Can a model built on such a narrow slice of data get us into trouble? It is certainly not the work of a moment: only a few data points remain above the threshold, and their variance is usually extreme. Otherwise, why would we call them outliers?

Here are the steps you can take when working with peaks over threshold:

  • Set a threshold to cut the data
  • Don't be afraid to try several thresholds until you reach a good fit for the child distribution
  • Use backtesting to monitor performance
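The steps above can be sketched together: scan several candidate thresholds and compare the number of exceedances and the mean excess. For a well-behaved tail the mean excess varies roughly linearly with the threshold, so watching it stabilize is a simple diagnostic before any formal backtesting. The data and candidate thresholds below are invented for illustration:

```python
import random
import statistics

random.seed(42)
# Simulated observations with an exponential tail (stand-in for real measurements)
data = [random.expovariate(0.2) for _ in range(2000)]

# Scan candidate thresholds: count exceedances and compute the mean excess.
# Too low a threshold contaminates the tail model with ordinary data;
# too high a threshold leaves too few points to fit anything.
for u in (10.0, 15.0, 20.0):
    excesses = [x - u for x in data if x > u]
    print(u, len(excesses), round(statistics.mean(excesses), 2))
```

For this memoryless exponential toy data the mean excess is roughly constant across thresholds; on real data you would look for the threshold above which it settles down, then backtest the fitted tail against held-out observations.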


I hope this article gave you clear answers and that my point of view wasn't blurred. I invite you to read more about IT and AI technologies; we'll discuss curious statistical concepts, system pitfalls, and much more about peaks over threshold. See you!