pctechguide.com


Monitoring in Machine Learning Part 2: Monitoring Techniques

In our last post we covered the reasons why you need to monitor in machine learning, so we are now clear about the main factors that can degrade a model's performance.

With that in mind, we can define monitoring as the phase of Machine Learning Operations in which we measure various performance indicators of the model and compare them against reference values, to determine whether it continues to generate adequate predictions or whether action is needed to restore performance.

There are several ways to perform this monitoring, some quite simple and others more sophisticated.

Monitoring through global metrics
The simplest approach is to continuously record a global metric of the model's performance and compare it to a reference level.

For example, suppose a face detection system achieved an accuracy of 97% during development. We can periodically (e.g. daily) measure the accuracy of the deployed model; if it falls below this reference level, an alert can be raised so that we take action before performance degrades further.
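This daily check can be sketched as a small function. A minimal sketch, assuming a hypothetical `model` object with a `predict` method and a batch of labeled data collected for the monitoring period (the names, reference value, and tolerance are illustrative, not from the original article):

```python
def check_accuracy(model, inputs, labels, reference=0.97, tolerance=0.02):
    """Compare the deployed model's accuracy for one monitoring period
    against a reference level measured during development.

    Returns (accuracy, alert), where alert is True if accuracy has
    dropped more than `tolerance` below the reference.
    """
    predictions = [model.predict(x) for x in inputs]
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    alert = accuracy < reference - tolerance
    return accuracy, alert
```

In practice this function would be run on a schedule (e.g. a daily cron job), with the alert wired to a notification channel rather than just returned.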

The drawback of monitoring with a single global performance metric is that it cannot tell us the reason behind the degradation, i.e. whether the underlying problem is "data drift" or "concept drift".

Monitoring through statistical methods
A more sophisticated approach is to record the statistical distribution of the input data before deployment, periodically compute the same distribution for the data seen by the deployed model, and then apply a statistical test to check for significant differences between the two. If differences are found, we can conclude that the degradation originates in "data drift".

Alternatively, we can do the same for the distribution of the model's outputs before and after deployment; statistically significant differences there suggest that the performance degradation is instead due to "concept drift".
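One common choice of statistical test for this kind of comparison is the two-sample Kolmogorov-Smirnov test. A minimal sketch, assuming SciPy is available and that each feature (or model output) is a one-dimensional numeric sample; the article does not prescribe a specific test, so this is one illustrative option:

```python
from scipy.stats import ks_2samp

def detect_drift(reference_sample, live_sample, alpha=0.05):
    """Flag drift between a sample collected before deployment and a
    sample collected from the deployed model.

    Applied to model inputs this detects data drift; applied to model
    outputs, a significant shift may indicate concept drift.
    Returns True if the two distributions differ significantly.
    """
    statistic, p_value = ks_2samp(reference_sample, live_sample)
    return bool(p_value < alpha)
```

For multi-feature inputs, the test would typically be run per feature, with a correction for multiple comparisons (e.g. Bonferroni) to keep the overall false-alarm rate under control.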

Conclusion
In this article we have seen that after deployment a model's performance is likely to begin declining, precisely because both the data and the environment in which the model operates are dynamic and continuously changing.

Monitoring allows us to detect this degradation, either by tracking global metrics or by applying more advanced techniques such as statistical tests on the model's input or output data.

But the process does not end with monitoring: if performance degradation is confirmed, corrective actions must be taken to keep the model in production. This phase is known as model maintenance and will be discussed in a future article.

Filed Under: Articles
