pctechguide.com


Monitoring in Machine Learning Part 2: Monitoring Techniques

In our last post we covered the reasons you need to monitor in machine learning, so we are now clear about the main factors that can degrade a model's performance.

So we can define monitoring as the phase of Machine Learning Operations in which we measure different performance variables of the model and compare them against reference values, to determine whether it continues to generate adequate predictions or whether we need to take action to restore performance.

And there are several ways to perform this monitoring, some quite simple and others more sophisticated.

Monitoring through global metrics
The simplest of all is to continuously record a global metric of the model’s performance and compare it to a reference level.

For example, suppose a face detection system reached an accuracy of 97% during development. We can then record this metric periodically (e.g. daily) on the deployed model, and if it falls below the reference level, generate an alert indicating that we should take action before things get worse.

The drawback of monitoring using a global performance metric is that we cannot determine the reasons behind the degradation, i.e. whether the underlying problem is “data drift” or “concept drift”.
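The periodic check described above can be sketched in a few lines. This is a minimal illustration, not a production monitoring system; the reference accuracy, the tolerance band, and the daily readings are all assumed values for the example.

```python
# Minimal sketch of global-metric monitoring: compare a daily
# accuracy reading against a reference level and flag any day
# that falls below an allowed tolerance band.

REFERENCE_ACCURACY = 0.97   # accuracy measured during development
TOLERANCE = 0.02            # allowed drop before raising an alert

def check_accuracy(daily_accuracy: float) -> bool:
    """Return True if performance is acceptable, False if an alert is needed."""
    return daily_accuracy >= REFERENCE_ACCURACY - TOLERANCE

# One simulated reading per day; day 3 has degraded noticeably.
daily_readings = [0.970, 0.965, 0.955, 0.940]
alerts = [day for day, acc in enumerate(daily_readings) if not check_accuracy(acc)]
print(alerts)  # indices of the days that should trigger an alert
```

In a real deployment the readings would come from labeled production samples and the alert would feed a dashboard or paging system, but the comparison logic stays this simple.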

Monitoring through statistical methods
A more sophisticated approach is to obtain the statistical distribution of the input data before deployment, periodically compute the same distribution for the data seen by the deployed model, and then apply a statistical test to determine whether there are significant differences between the two. If differences are found, we can conclude that the degradation originates in "data drift".
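One common choice of statistical test for this comparison is the two-sample Kolmogorov-Smirnov test. The sketch below applies `scipy.stats.ks_2samp` to a single input feature; the synthetic data, the size of the shift, and the 0.05 significance level are assumptions made for the example, and a real system would run a test like this per feature.

```python
# Data-drift check on one input feature: compare its distribution
# at training time against the distribution observed in production
# using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=2000)  # pre-deployment data
live_feature = rng.normal(loc=0.5, scale=1.0, size=2000)   # shifted production data

stat, p_value = ks_2samp(train_feature, live_feature)
drift_detected = p_value < 0.05  # significant difference -> possible data drift
print(drift_detected)
```

With a mean shift of half a standard deviation and 2000 samples per side, the test detects the change easily; with subtler drift, larger windows or repeated tests over time are needed before raising an alert.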

Or we can do something similar with the data distributions at the model's output before and after deployment: if we find statistically significant differences there, we can conclude that the performance degradation is in this case due to "concept drift".
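The same idea applied at the model output can also be measured with the Population Stability Index (PSI), a common score for comparing two distributions of predicted values. The sketch below is illustrative: the bin count, the synthetic score distributions, and the 0.2 alert threshold (a commonly used rule of thumb) are all assumptions, and a KS test as in the input-drift case would work here too.

```python
# Output-drift check: compare the distribution of predicted scores
# at validation time against production scores using the
# Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
scores_before = rng.beta(2, 5, size=5000)  # predicted scores at validation time
scores_after = rng.beta(3, 4, size=5000)   # shifted scores in production

value = psi(scores_before, scores_after)
print(value > 0.2)  # PSI above ~0.2 is commonly read as significant shift
```

Because the PSI only looks at the model's outputs, a high value tells us the predictions have shifted, but confirming true concept drift (a change in the input-to-label relationship) ultimately requires fresh labeled data.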

Conclusion
In this article we have seen that after deployment the model's performance is very likely to begin declining, precisely because both the data and the environment in which the model operates are dynamic and can vary continuously.

Monitoring allows us to detect this performance degradation, either by tracking global metrics or by using more advanced techniques such as statistical tests applied to the model's input or output data.

But this process does not end with monitoring, because if performance degradation is confirmed, corrective actions must be taken to keep the model in production. This phase is known as model maintenance and will be discussed in a future article.
