Server Level Liquid Cooling: Do Higher System Temperatures Improve Energy Efficiency?

Alexander A. Moskovsky, Egor A. Druzhinin, Alexey B. Shmelev, Vladimir V. Mironov, Andrey Semin

Abstract


Liquid cooling is now a mainstream approach to boosting energy efficiency in high performance computing systems. A higher coolant temperature is usually considered an advantage, since it allows heat reuse/recuperation and simplifies datacenter infrastructure by eliminating the need for chillers. However, the use of hot coolant imposes high requirements on the cooling equipment. A promising approach is to use coldplates with a channel structure and liquid circulation to remove heat from semiconductor components. We have designed a coldplate with low thermal resistance that ensures effective cooling with only a 20–30 °C temperature difference between the coolant and the electronic parts of a server. Under stress-test conditions the coolant temperature reached 65 °C while server operation remained undisturbed. We also studied the dependence of power efficiency (expressed in floating point operations per watt) on the coolant temperature (19–65 °C) at the individual server level (on the Intel Grantley platform with dual Intel Xeon E5-2697 v3 processors). The power-performance ratio shows a moderate (≈10%) efficiency drop from 19 to 65 °C, caused by an increase of leakage current in chipset components and a reduction of processor frequency, which resulted in a proportional reduction of DGEMM benchmark performance. Datacenter designers must take into account that the fraction of energy recuperated from 65 °C coolant should be at least ≈10% to justify a high-temperature coolant solution.
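The efficiency metric and the break-even argument from the abstract can be sketched in a few lines. This is a minimal illustration with hypothetical measurement values (the matrix size, runtimes, and power readings below are not from the paper); it assumes the standard 2n³ flop count for an n×n DGEMM and compares the resulting efficiency drop against the fraction of energy recovered from the hot coolant.

```python
# Sketch of the flops-per-watt metric and the heat-recuperation
# break-even check described in the abstract. All numeric inputs
# below are hypothetical, for illustration only.

def dgemm_flops(n: int) -> float:
    """Floating-point operations in an n x n DGEMM: 2 * n**3."""
    return 2.0 * n ** 3

def gflops_per_watt(n: int, runtime_s: float, avg_power_w: float) -> float:
    """Sustained performance divided by average power draw, in GFLOPS/W."""
    return dgemm_flops(n) / runtime_s / avg_power_w / 1e9

# Hypothetical server-level measurements at two coolant temperatures:
cold = gflops_per_watt(n=20000, runtime_s=16.0, avg_power_w=520.0)  # ~19 C coolant
hot  = gflops_per_watt(n=20000, runtime_s=17.0, avg_power_w=545.0)  # ~65 C coolant

# Relative efficiency loss when moving to hot coolant (the paper reports ~10%).
efficiency_drop = 1.0 - hot / cold

def recuperation_justified(drop: float, recovered_fraction: float) -> bool:
    """A high-temperature coolant design pays off only if the fraction of
    input energy recovered as reusable heat is at least the efficiency drop."""
    return recovered_fraction >= drop
```

With the numbers above the drop comes out near 10%, so a recovered-energy fraction of 0.12 would justify the hot-coolant design while 0.05 would not.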


References


TOP500 supercomputer sites. [Online]. Available: http://www.top500.org

A. Petitet, R. C. Whaley, J. Dongarra, and A. Cleary. (2008) HPL - a portable implementation of the high-performance Linpack benchmark for distributed-memory computers. [Online]. Available: http://www.netlib.org/benchmark/hpl/

J. Koomey, S. Berard, M. Sanchez, and H. Wong, “Implications of historical trends in the electrical efficiency of computing,” IEEE Annals of the History of Computing, vol. 33, no. 3, pp. 46–54, 2011.

P. Kogge, K. Bergman, S. Borkar, D. Campbell, W. Carson, W. Dally, M. Denneau, P. Franzon, W. Harrod, K. Hill, J. Hiller, M. Richards, and A. Snavely, “ExaScale computing study: technology challenges in achieving exascale systems,” Tech. Rep. TR-2008-13, p. 278, 2008.

D. Wade, “ASC business plan (NA-ASC-104R-15-Vol.1-Rev.0),” Tech. Rep., 2015.

J. Haas, J. Froedge, J. Pflueger, and D. Azevedo, Usage and public reporting guidelines for The Green Grid’s infrastructure metrics (PUE/DCiE), The Green Grid, 2009. [Online]. Available: http://www.thegreengrid.org/~/media/WhitePapers/WhitePaper22PUEDCiEUsageGuidelinesfinalv21.pdf

M. P. David, M. Iyengar, P. Parida, R. Simons, M. Schultz, M. Gaynes, R. Schmidt, and T. Chainer, “Experimental characterization of an energy efficient chiller-less data center test facility with warm water cooled servers,” in 2012 28th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM). IEEE, 2012, pp. 232–237.

C. Gough, I. Steiner, and W. A. Saunders, Energy efficient servers: blueprints for data center optimization, 1st ed. Apress, 2015.

Yahoo launches second ‘computing coop’ data center in New York state. [Online]. Available: http://www.datacenterknowledge.com/archives/2015/04/27/second-yahoo-data-center-comes-online-in-new-york-state/

The performance of standard cold plate technologies is compared in a graph showing local thermal resistance. Lytron Inc. [Online]. Available: http://www.lytron.com/Cold-Plates/Standard/Performance-Comparison

M. Berktold and T. Tian, CPU monitoring with DTS/PECI, 2010. [Online]. Available: http://www.intel.com/content/www/us/en/embedded/testing-and-validation/cpu-monitoring-dts-peci-paper.html

Intel Turbo Boost technology 2.0, Intel. [Online]. Available: http://www.intel.com/technology/turboboost/

J. Demmel and A. Gearhart, “Instrumenting linear algebra energy consumption via on-chip energy counters,” UC Berkeley, Tech. Rep., 2012. [Online]. Available: http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-168.html

G. Evola and L. Marletta, “Exergy and thermoeconomic optimization of a water-cooled glazed hybrid photovoltaic/thermal (PVT) collector,” Solar Energy, vol. 107, pp. 12–25, 2014.

P. Hammarlund, A. J. Martinez, A. A. Bajwa, D. L. Hill, E. Hallnor, H. Jiang, M. Dixon, M. Derr, M. Hunsaker, R. Kumar, R. B. Osborne, R. Rajwar, R. Singhal, R. D’Sa, R. Chappell, S. Kaushik, S. Chennupaty, S. Jourdan, S. Gunther, T. Piazza, and T. Burton, “Haswell: the fourth-generation Intel Core processor,” IEEE Micro, vol. 34, no. 2, pp. 6–20, 2014.

N. S. Kim, T. Austin, D. Blaauw, T. Mudge, K. Flautner, J. S. Hu, M. Irwin, M. Kandemir, and V. Narayanan, “Leakage current: Moore’s law meets static power,” Computer, vol. 36, no. 12, pp. 68–75, 2003.

S. Zimmermann, M. K. Tiwari, I. Meijer, S. Paredes, B. Michel, and D. Poulikakos, “Hot water cooled electronics: exergy analysis and waste heat reuse feasibility,” International Journal of Heat and Mass Transfer, vol. 55, no. 23-24, pp. 6391–6399, 2012.

S. Zimmermann, I. Meijer, M. K. Tiwari, S. Paredes, B. Michel, and D. Poulikakos, “Aquasar: a hot water cooled data center with direct energy reuse,” Energy, vol. 43, no. 1, pp. 237–245, 2012.

N. Meyer, M. Ries, S. Solbrig, and T. Wettig, “iDataCool: HPC with hot-water cooling and energy reuse,” in Supercomputing: 28th International Supercomputing Conference, 2013, pp. 383–394.

A. Auweter and H. Huber, “Direct warm water cooled Linux cluster Munich: CooLMUC,” Inside, vol. 10, no. 1, pp. 81–82, 2012.

Hot water cooled supercomputer. Eurotech. [Online]. Available: http://www.eurotech.com/en/hpc/hpc+solutions/liquid+cooling

The Green500 list. [Online]. Available: http://www.green500.org




Publishing Center of South Ural State University (454080, Lenin prospekt, 76, Chelyabinsk, Russia)