Analysis of CPU Usage Data Properties and their possible impact on Performance Monitoring

Authors

  • Konstantin S. Stefanov, M.V. Lomonosov Moscow State University, Moscow
  • Alexey A. Gradskov, M.V. Lomonosov Moscow State University, Moscow

DOI:

https://doi.org/10.14529/jsfi160405

Abstract

CPU usage data (CPU user, system, iowait, etc. load levels) are often the basic data used for performance monitoring. The source of these data is the operating system. In this paper we analyze some properties of the CPU usage data provided by the Linux kernel. We examine the kernel source code and present test results to determine what level of accuracy and precision one may expect when using CPU load level data.
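For illustration, the sketch below (not taken from the paper) shows the usual way such CPU usage data are obtained on Linux: sampling the aggregate "cpu" line of /proc/stat twice and computing the share of time spent in the user, system and iowait states over the interval. The field order follows the proc(5) man page, and the counters are assumed to be in USER_HZ ticks (typically 1/100 s); the one-second sampling interval is an arbitrary choice.

/*
 * Minimal sketch: read the aggregate "cpu" line of /proc/stat twice
 * and report the fraction of time spent in user, system and iowait
 * states over the sampling interval.  Field layout per proc(5);
 * counts are in USER_HZ ticks.
 */
#include <stdio.h>
#include <unistd.h>

struct cpu_ticks {
    unsigned long long user, nice, system, idle, iowait;
};

static int read_cpu_ticks(struct cpu_ticks *t)
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return -1;
    /* First line aggregates all CPUs: "cpu  user nice system idle iowait ..." */
    int n = fscanf(f, "cpu %llu %llu %llu %llu %llu",
                   &t->user, &t->nice, &t->system, &t->idle, &t->iowait);
    fclose(f);
    return n == 5 ? 0 : -1;
}

int main(void)
{
    struct cpu_ticks a, b;
    if (read_cpu_ticks(&a))
        return 1;
    sleep(1);                       /* sampling interval: 1 second */
    if (read_cpu_ticks(&b))
        return 1;

    unsigned long long total = (b.user - a.user) + (b.nice - a.nice) +
                               (b.system - a.system) + (b.idle - a.idle) +
                               (b.iowait - a.iowait);
    if (total == 0)
        return 1;

    printf("user:   %5.1f%%\n", 100.0 * (b.user   - a.user)   / total);
    printf("system: %5.1f%%\n", 100.0 * (b.system - a.system) / total);
    printf("iowait: %5.1f%%\n", 100.0 * (b.iowait - a.iowait) / total);
    return 0;
}

Because the kernel only accounts for these states at scheduling-clock tick granularity, the percentages produced by such a sampler inherit the accuracy and precision limits analyzed in the paper.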



Published

2016-12-08

How to Cite

Stefanov, K. S., & Gradskov, A. A. (2016). Analysis of CPU Usage Data Properties and their possible impact on Performance Monitoring. Supercomputing Frontiers and Innovations, 3(4), 66–73. https://doi.org/10.14529/jsfi160405
