The global demand for electricity is about 20,000 TWh; the ICT (information and communication technology) industry uses 2,000 TWh, while data centers use about 200 TWh, or 1% of the total. Data centers are therefore an important part of energy consumption in most countries and regions. It is estimated that there are more than 18 million servers in data centers worldwide. In addition to their own power needs, these IT devices also require supporting infrastructure, such as cooling, power distribution, fire suppression, uninterruptible power supplies, and generators. To compare the energy efficiency of data centers, it is common practice to use power usage effectiveness (PUE) as a metric, which is the ratio of the total energy used by the data center to the energy used by IT equipment. The ideal PUE is 1, meaning that all energy is used for IT and the supporting infrastructure consumes no energy.
Minimizing PUE therefore requires reducing the consumption of supporting infrastructure such as cooling and power distribution. Existing traditional data centers typically have a PUE of about 2, while large hyperscale data centers can reach below 1.2. In 2020, the global average was about 1.67, which means that, on average, roughly 40% of total energy consumption (0.67/1.67) was non-IT consumption. However, PUE is a ratio and does not capture total energy consumption: if the IT equipment consumes a great deal of energy relative to the cooling system, the PUE will look low even though overall consumption is high. It is therefore also important to measure total power consumption, as well as the efficiency and life cycle of the IT equipment. In addition, from an environmental point of view, the method of power generation, the amount of water consumed (both in power generation and in on-site cooling), and whether waste heat is reused should also be considered.
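The arithmetic above can be sketched in a few lines of Python. The function and variable names here are illustrative, not from any standard library; the 1.67 figure is the 2020 global average cited in the text.

```python
def pue(total_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy."""
    return total_kwh / it_kwh

def non_it_share(pue_value: float) -> float:
    """Fraction of total energy consumed by supporting infrastructure."""
    return (pue_value - 1.0) / pue_value

# A 2020 global average PUE of ~1.67 implies ~40% non-IT overhead:
print(f"{non_it_share(1.67):.0%}")  # prints "40%"
```

Note that `non_it_share` also makes the ideal case explicit: at a PUE of exactly 1, the overhead fraction is zero.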
The PUE concept was first proposed by the Green Grid Consortium in 2006 and published as an ISO standard in 2016. Green Grid is an open industry consortium of data center operators, cloud providers, technology and equipment vendors, facility architects, and end users working to improve the energy and resource efficiency of the data center ecosystem and reduce carbon emissions globally.
PUE is still a common way to express data center energy efficiency. At Munters, for example, PUE is assessed on both a peak and an annualized basis for each project. In these calculations, only the IT load and the cooling load are considered; this is called partial PUE (pPUE) or mechanical PUE (PUEM). Electrical engineers use the peak pPUE to determine the maximum load and to size the backup generators. The annualized pPUE is used to assess typical power consumption over a year and to compare one cooling scheme against another. While PUE may not be a perfect tool, it is gaining support through the adoption of complementary measures such as WUE (water usage effectiveness) and CUE (carbon usage effectiveness), and methods such as SPUE (server PUE) and TUE (total PUE) that enhance PUE relevance.
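The peak versus annualized distinction can be made concrete with a small sketch. The hourly load values below are hypothetical, chosen only to show how a constant IT load combined with climate-driven cooling load yields two different pPUE figures; this is not Munters' actual calculation method.

```python
def ppue(it_kw: float, cooling_kw: float) -> float:
    """Partial (mechanical) PUE: only IT and cooling loads are counted."""
    return (it_kw + cooling_kw) / it_kw

# Hypothetical samples across a year (four representative hours shown)
it_load      = [1000, 1000, 1000, 1000]  # kW, roughly constant IT load
cooling_load = [ 120,  200,  360,  150]  # kW, varies with outdoor climate

# Peak pPUE: worst single hour, used to size electrical gear and generators
peak_ppue = max(ppue(it, c) for it, c in zip(it_load, cooling_load))

# Annualized pPUE: total energy over the period, used to compare cooling schemes
annual_ppue = (sum(it_load) + sum(cooling_load)) / sum(it_load)

print(f"peak pPUE = {peak_ppue:.2f}, annualized pPUE = {annual_ppue:.4f}")
```

In this toy data the peak hour drives a pPUE of 1.36 while the annualized figure is only about 1.21, which is why the two values serve different purposes: one bounds the worst case, the other characterizes typical operation.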
2. Data Center Trends
Over the past decade, efficient hyperscale data centers have increased their relative share of total data center energy consumption, while many less efficient traditional data centers have closed. As a result, total energy consumption has not yet increased significantly. These new hyperscale data centers are designed for increased efficiency. However, we know that the demand for information services and compute-intensive applications will continue to grow due to many emerging trends such as artificial intelligence, machine learning, automation, driverless cars, and more. As a result, energy demand for data centers is expected to increase, though by how much is a point of debate. In the best-case scenario, global data center energy consumption will triple by 2030 compared to current demand, but it is more likely to increase eightfold. Both IT and non-IT infrastructure are included in these energy consumption projections. The majority of non-IT energy consumption comes from cooling, or more precisely, from dissipating the heat produced by servers, and cooling alone can easily account for 25% or more of total annual energy costs. There is no doubt that cooling is necessary to maintain IT functionality, and it can be optimized with well-designed and efficiently run building systems.
An important recent trend is the increase in server rack power density, with some racks reaching 30 to 40 kW or more. According to the 2020 State of the Data Center report from AFCOM, a trade association for data center professionals, average rack density jumped to 8.2 kW per rack, up from 7.3 kW in 2019 and 7.2 kW in 2018. About 68 percent of respondents said rack density had increased over the past three years.
The shift to cloud computing has undoubtedly driven the growth of hyperscale and colocation data centers. Historically, a 1 MW data center was designed to meet the needs of a bank, airline, or university, but many institutions and companies are now moving to cloud services hosted in hyperscale and colocation facilities. As this demand continues to grow, so does the demand for data speed; because these facilities serve mission-critical applications, the reliability of the infrastructure is critical.
At the same time, there is a growing focus on edge data centers to reduce latency, as well as liquid cooling for high-performance chips to reduce energy use.