
Power, Productivity and the Internet: Part 1 – The core of the problem


A recent NYTimes article touches upon a number of topics in the ongoing conversation about data center energy efficiency. Some readers may react as if some secret revelation has been exposed, incriminating our beloved social media networks and data centers as spendthrifts or environmentally ignorant.
The fact of the matter is that we live in an information driven world. Information systems are the foundation of our economies, governments, entertainment and many aspects of our daily lives. Maintaining this information and conducting the data processing around it is an industry. It is as much a part of our industrial fabric as steel and manufacturing were in the 20th century.
The data processing that serves our 21st century lives takes place in facilities called “data centers.” Data centers are essentially industrial factories. From an energy profile perspective, they look exactly like any other factory in that they consume large amounts of resources (electricity and water in their case).
1E has a pedigree of addressing data center energy efficiency, and we'll share that with you presently, but first we'd like to give you a little more background.

The core of the problem

There are some out there who will claim the heart of the problem is our dependency on, or desire for, more and more data processing. That is, we are a data-processing-driven society hurtling toward the planet's demise. We'll leave that to another discussion and instead assume that the increase in data processing demand in our society is a reflection of progress, commerce, and democracy. If you grant me that assertion, the core of our energy demand problem is that silicon semiconductor-based data processing systems require energy to operate and produce a good deal of heat as a byproduct of their activity. The problem is then compounded by scale.
Semiconductor devices have become increasingly dense (in terms of the number of transistor gates per unit of area), with higher and higher clock speeds. As density and clock speed increase, so does energy demand. And as individual devices become denser, we correspondingly demand more and more of them. The result is computer rooms with massive quantities of data processing servers, each of which carries massively dense semiconductor chips.
We mentioned a moment ago that a byproduct of the power going to the server is heat. These very dense silicon chips operate at temperatures so high that one could not possibly touch them bare-handed. Interestingly, the large amount of heat produced by the semiconductor chips is also a threat to their own health. Consequently, computer servers have lots of fans that pull cool air into the front of the server and blow hot exhaust air out of the back. Yes, the fans consume loads of energy too, but the bigger problem still is all of this hot exhaust air from all of the servers sharing the same space in the data center. For this reason, a large amount of mechanical equipment and resources are part of data centers as well. These mechanical systems take the form of air handlers, chillers, cooling towers, and plumbing whose sole purpose is to remove that hot air from the data center and maintain a healthy ambient operating temperature for the servers.
In an average, run-of-the-mill data center today, approximately half of the electricity supplied by the utility makes it to the power cord of the IT (server) equipment. Why only half? The mechanical equipment that cools the data center requires a large amount of it, and there are other losses along the way due to common inefficiencies in power distribution and in mechanical and electrical technology (one never gets 100% of what one puts in). To make matters worse, of the electricity that does make it to the IT power cord, considerably less goes toward actual data processing, the remainder being lost to fan energy, power conversion, and other subsystems within the server itself.
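The facility-level half of this ratio is what the Green Grid's Power Usage Effectiveness (PUE) metric captures: total facility power divided by IT equipment power. The sketch below is purely illustrative; the figures are hypothetical, chosen only to match the rough "about half" arithmetic above, and the server-overhead fraction is an assumption rather than a measured value.

```python
# Illustrative only: hypothetical figures chosen to match the rough
# "about half of the utility power reaches the IT equipment" point above.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

utility_feed_kw = 1000.0  # power drawn from the utility (hypothetical)
it_load_kw = 500.0        # power actually reaching the IT (server) power cords
print(f"PUE = {pue(utility_feed_kw, it_load_kw):.2f}")  # -> PUE = 2.00

# Inside the server, further losses (fans, power-supply conversion, other
# subsystems) mean even less of that power does useful data processing.
server_overhead_fraction = 0.3  # assumed, not a measured value
useful_compute_kw = it_load_kw * (1 - server_overhead_fraction)
print(f"Power going to actual data processing: ~{useful_compute_kw:.0f} kW "
      f"of the {utility_feed_kw:.0f} kW supplied")
```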
In summary, we need lots of data processing, and data processing technology consumes large amounts of energy.

All hands on deck

These issues have been well understood for many years, and very publicly visible steps have been taken to address them. In the United States, the US Department of Energy (DoE) created the “Save Energy Now” program, which partners the DoE with industry to drive year-over-year energy efficiency improvements in data centers, with specific goals of saving over 20 billion kWh annually (as compared to historic trends). In the EU, the “EU Code of Conduct” was created to establish a framework of best practices covering energy efficiency, power consumption, and carbon emissions.
Within the data center community, numerous industry groups, trade organizations, and ad hoc committees have been at work on these issues for years. The work of the Green Grid, in particular, has been instrumental in creating the common language used by the community addressing this problem, resulting in a number of energy efficiency metrics and data center design conventions that we now consider de rigueur.
With governments and the industry itself working the problem, the equipment manufacturers have a role to play as well. Mechanical and Electrical Plant (MEP) equipment manufacturers have responded with higher-efficiency transformers and UPS systems, and with innovations in pump, fan, and cooling technologies. When it comes to the IT equipment, which is truly the engine of this factory we call a data center, the contribution of participating equipment manufacturers to the ASHRAE TC9.9 body of work is remarkable: major server manufacturers mutually revealed engineering details of their products to one another, to the extent of allowing specification of wider operating temperature and humidity envelopes. This is crucial to energy efficiency because it allows reduced MEP energy consumption and greatly expands the opportunities for free cooling.
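To see why a wider allowable inlet-temperature envelope matters, here is a minimal, illustrative sketch. The hourly outdoor temperatures are synthetic and the inlet limits are hypothetical; a real assessment would use local weather data and the envelope the servers are actually rated for.

```python
# Illustrative sketch: a wider allowable inlet-temperature envelope expands
# the hours in which outside air can cool the data center ("free cooling").
# The temperature series is synthetic and the limits are hypothetical.
import math
import random

random.seed(0)
# One synthetic year of hourly outdoor temperatures (deg C): seasonal swing plus noise.
hourly_temps = [
    15 + 12 * math.sin(2 * math.pi * h / 8760) + random.gauss(0, 3)
    for h in range(8760)
]

def free_cooling_hours(temps, max_inlet_c):
    """Count the hours where outdoor air is at or below the allowable inlet limit."""
    return sum(1 for t in temps if t <= max_inlet_c)

# Tighter vs. wider (expanded-envelope) inlet limits, all hypothetical.
for limit_c in (20.0, 25.0, 27.0):
    hours = free_cooling_hours(hourly_temps, limit_c)
    share = 100 * hours / len(hourly_temps)
    print(f"Inlet limit {limit_c:.0f} C: {hours} free-cooling hours/year ({share:.0f}%)")
```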
One can go on about this, but suffice it to say the evidence is clear that energy consumption by data processing facilities is a widely recognized problem, and much is being done, in a coordinated and public way, to provide relief. It is improper to draw conclusions about a specific data center facility based upon news of a high-profile business with completely different data centers. Some energy efficiency techniques are available to everyone everywhere, and many are not. This is a complex subject with significant nuance, and generalizations come with risk.
In the end, the business has invested quite a lot of money in its data center and in the servers and software within it. Over the years, it spends quite a lot more maintaining and supporting those systems, and quite a lot on the energy for power and cooling.
In part two, I’ll look at how to identify server waste and what you can do to eliminate it.
