pctechguide.com

Monitoring in Machine Learning Part 2: Monitoring Techniques

In our last post we covered the reasons you need to monitor in machine learning, so we are now clear on the main factors that can degrade a model's performance.

So we can define monitoring as the phase of Machine Learning Operations (MLOps) in which we measure various performance variables of the model and compare them against reference values, in order to determine whether it continues to generate adequate predictions or whether action is needed to restore performance.

There are several ways to perform this monitoring, some quite simple and others more sophisticated.

Monitoring through global metrics
The simplest of all is to continuously record a global metric of the model’s performance and compare it to a reference level.

For example, suppose a face detection system achieved an accuracy of 97% during development. We can periodically (e.g. daily) measure the accuracy of the deployed model and, if it falls below this reference level, generate an alert indicating that we should take action before performance degrades further.
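As a minimal sketch of this daily check (the reference accuracy, the tolerance, and the daily readings are all illustrative values, not from a real system):

```python
# Global-metric monitoring: compare freshly measured accuracy against
# the reference level recorded at development time and alert when it
# drops too far. The 3-point tolerance is an illustrative choice.

REFERENCE_ACCURACY = 0.97   # accuracy measured before deployment
TOLERANCE = 0.03            # allowed drop before alerting

def check_accuracy(current_accuracy: float) -> bool:
    """Return True when the deployed model needs attention."""
    return current_accuracy < REFERENCE_ACCURACY - TOLERANCE

# Hypothetical daily accuracy readings from the deployed model.
daily_accuracy = [0.97, 0.96, 0.95, 0.93]
alerts = [day for day, acc in enumerate(daily_accuracy) if check_accuracy(acc)]
print(alerts)  # indices of the days on which an alert would fire
```

In practice the alert would feed a dashboard or paging system rather than a print statement, but the comparison itself stays this simple.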

The drawback of monitoring using a global performance metric is that we cannot determine the reasons behind the degradation, i.e. whether the underlying problem is “data drift” or “concept drift”.

Monitoring through statistical methods
A more sophisticated approach is to capture the statistical distribution of the input data before deployment, periodically compute the same distribution for the data the deployed model is actually receiving, and then apply a statistical test (such as the two-sample Kolmogorov-Smirnov test) to determine whether there are significant differences between the two. If such differences are found, we can conclude that the degradation originates in “data drift”.
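A rough sketch of this idea, using a pure-Python two-sample Kolmogorov-Smirnov check (the sample data, the 5% significance level, and its critical constant 1.36 are illustrative textbook choices, not the only possibilities):

```python
# Input-drift detection with a two-sample Kolmogorov-Smirnov test.
# `reference` is data collected before deployment; `current` is what
# the deployed model is receiving now.

def ks_statistic(reference, current):
    """Maximum distance between the two empirical CDFs."""
    a, b = sorted(reference), sorted(current)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

def drifted(reference, current, c_alpha=1.36):
    """True when the distributions differ at roughly the 5% level."""
    n, m = len(reference), len(current)
    return ks_statistic(reference, current) > c_alpha * ((n + m) / (n * m)) ** 0.5

ref = [x / 100 for x in range(100)]            # uniform on [0, 1)
shifted = [x / 100 + 0.5 for x in range(100)]  # same shape, shifted
print(drifted(ref, ref))      # False: identical data, no drift
print(drifted(ref, shifted))  # True: clear data drift
```

A production system would typically use a library implementation (e.g. `scipy.stats.ks_2samp`) and run one test per input feature, since drift often appears in only a few of them.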

Alternatively, we can do something similar with the distributions of the model's outputs before and after deployment: if we find statistically significant differences there, we can conclude that the performance degradation is instead due to “concept drift”.
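One common way to quantify such a shift in the output distribution, as an alternative to a formal hypothesis test, is the Population Stability Index (PSI). In the sketch below, the bin count, the 0.25 alert threshold, and the sample score data follow common practice but are assumptions, not universal rules:

```python
import math

# Output-drift detection with the Population Stability Index (PSI),
# computed over binned model output scores in [0, 1].

def psi(reference_scores, current_scores, bins=10):
    """PSI between two sets of model output scores in [0, 1]."""
    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int(s * bins), bins - 1)  # clamp s == 1.0 into last bin
            counts[idx] += 1
        total = len(scores)
        # A small floor avoids log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    p = proportions(reference_scores)
    q = proportions(current_scores)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

before = [i / 1000 for i in range(1000)]                    # uniform scores
after = [min(i / 1000 + 0.3, 0.999) for i in range(1000)]   # shifted upward
print(psi(before, before))        # 0: output distribution unchanged
print(psi(before, after) > 0.25)  # True: shift worth investigating
```

A frequently cited rule of thumb reads PSI below 0.1 as stable, 0.1 to 0.25 as a moderate shift, and above 0.25 as a significant one.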

Conclusion
In this article we have seen that after deployment the performance of a model is very likely to decline, precisely because both the data and the environment the model operates in are dynamic and subject to continuous variation.

Monitoring allows us to detect this performance degradation, either by analyzing global metrics or by using more advanced techniques such as statistical tests applied to the model's input or output data.

But this process does not end with monitoring, because if performance degradation is confirmed, corrective actions must be taken to keep the model in production. This phase is known as model maintenance and will be discussed in a future article.
