Syracuse University (SU) has built, with the help of IBM, what is described as ‘a showcase of world-class innovations in advanced energy-efficient information technology and building systems’. The construction was announced earlier this year and it’s taken just over six months to build the 12,000-square-foot, $12.4m facility.
What’s different about this data centre is that rather than just addressing energy efficiency it also includes an on-site generation system for electricity, heating and cooling. The system uses natural gas-fuelled micro-turbines to generate all the electricity, so the centre will be completely off-grid. The facility also incorporates a direct current (DC) power distribution system, which avoids the losses otherwise incurred in converting grid AC to the DC that the servers ultimately use.
Other features include a liquid cooling system that converts the exhaust heat from the micro-turbines into chilled water to cool the data centre’s servers. Server racks have ‘cooling doors’ that use chilled water to remove heat from each rack. And sensors will monitor server temperatures and usage to tailor the amount of cooling delivered to each server.
IBM estimates that when it becomes fully operational in January, the data centre will, in IBM’s words, ‘use about 50 percent less energy than a typical data centre in operation today’.
This is all good stuff and it sounds like a very innovative project that IBM can justly be proud of.
However, that ‘about 50% less energy’ claim is the only released figure that gives any indication of emissions reductions or improved efficiency. I queried IBM about this, asking for a PUE (Power Usage Effectiveness) figure, for example, and received the following response:
“About 50% of the energy consumed by a typical data centre is used to cool the servers in the data centre. So only half the energy consumed by a data centre is being used for actual processing. The half being used for cooling is often referred to as the inefficient portion or ‘wasted’ energy consumption in a data centre.
When estimating about 50 percent savings for the Syracuse Green Data Centre, it includes the savings of not having the transmission losses of getting electricity off the grid and our recovery of exhaust heat to make hot and cold water. So, we are talking about a 50 percent savings of ‘primary energy’ - the full spectrum of energy created, moved, converted, conditioned, transformed, etc.
The approach being used with this data centre does not fit easily into conventional comparisons. For example, PUE assumes power coming to the data centre from the grid, and doesn't recognize the thermal and electrical efficiencies of on-site tri-generation. We are not providing a PUE, but it will be significantly lower than the data centre average of 2.0.”
I take the point. Nevertheless, reducing PUE from a ‘typical’ 2.0 to the 1.15 that other companies are achieving for new builds already cuts total energy use by 42.5% (if my calculations are correct), so on the face of it this doesn’t seem a huge step forward. The 50% saving on ‘primary energy’ is no doubt more significant, but it’s difficult to judge from what’s been provided. And if the facility used renewable energy, emissions would be zero; it is, after all, emissions that are the issue.
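The arithmetic behind that 42.5% figure is simple enough to sketch. Since total facility energy is IT energy multiplied by PUE, improving PUE at a constant IT load cuts total energy proportionally (the 2.0 and 1.15 figures come from the discussion above; the function name is just illustrative):

```python
def pue_savings_percent(pue_before: float, pue_after: float) -> float:
    """Percentage reduction in total facility energy for the same IT load.

    Assumes total energy = IT energy x PUE, with IT energy held constant,
    so the reduction is (PUE_before - PUE_after) / PUE_before.
    """
    return (pue_before - pue_after) / pue_before * 100

# A 'typical' PUE of 2.0 improved to a new-build PUE of 1.15:
print(f"{pue_savings_percent(2.0, 1.15):.1f}%")  # prints 42.5%
```

Note this measures only the facility-side saving relative to grid-supplied power; it says nothing about the ‘primary energy’ accounting IBM describes, which also counts generation and transmission losses upstream of the meter.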
My point is that, for reasons of transparency and to help the global IT community build better data centres, it would be great if more data were available from developments like this. How much power loss is prevented by the on-site DC supply? How large is the overall efficiency gain? What is the benefit of re-using the waste heat?
The target for the ICT industry is not to build the single greenest data centre, but to develop good practice that helps reduce emissions in all data centres. That goal would be best served by publishing more data on what can be achieved and how.