When we started designing our Prineville, Ore., data center as part of the Open Compute Project, we did so with a “less is more” philosophy. We wanted a highly energy-efficient, less costly, simpler and more reliable facility that could serve as a model for other data centers. In the end, we built an innovative data center that has:
- PUE (power usage effectiveness) of 1.07 at full load.
- WUE (water usage effectiveness) of 0.31 liters/kWh (see the Summary below).
- CapEx reduced by 45%, along with lower OpEx.
- Higher reliability due to its simpler construction.
How did we do this? When we were brainstorming the design, we asked ourselves what we could do differently from traditional data center designs: What can we remove from the system? Can we raise operating temperatures and have the servers survive? Can we increase server delta T and relative humidity operational ranges to make the system much more robust and efficient? Do we need a centralized UPS, PDUs, or chillers?
Our deliberations produced a data center design with four primary innovations, each the result of eliminating a traditional component:
- The centralized UPS, by developing a standby 48VDC UPS system at the server cabinet level.
- PDUs, by using 277VAC distribution to our IT equipment.
- Chillers as a source of heat rejection.
- Air distribution ductwork.
Increasing Electrical Efficiency
The traditional data center loses 21% to 27% of its power to inefficiencies built into the system. Losses enter at every stage of power transformation and conversion (a quick arithmetic check follows this list):
- When utility medium voltage is transformed to 480VAC, there is a 2% loss.
- Within the centralized UPS, there are two power conversions: AC to DC and DC back to AC, which results in a 6% to 12% loss.
- Power transformation at the PDU level from 480VAC to 208VAC results in a 3% power loss.
- Server power supplies convert 208VAC to the various DC voltages the components need. Assuming an industry-average power supply efficiency of 90%, this results in a 10% loss.
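As a sanity check, here is a short Python sketch that tallies the stage losses listed above, both summed the way the figures are quoted and compounded multiplicatively; every number comes from the list, nothing here is new data.

```python
# Back-of-the-envelope check of the traditional loss chain described above.
stage_losses = {
    "utility transformer (medium voltage -> 480VAC)": 0.02,
    "centralized UPS (AC->DC->AC), best case": 0.06,      # worst case: 0.12
    "PDU transformation (480VAC -> 208VAC)": 0.03,
    "server power supply (208VAC -> DC, ~90% efficient)": 0.10,
}

additive_loss = sum(stage_losses.values())
compound_efficiency = 1.0
for loss in stage_losses.values():
    compound_efficiency *= (1.0 - loss)

print(f"summed stage losses: {additive_loss:.0%}")                   # 21%
print(f"compounded end-to-end loss: {1 - compound_efficiency:.1%}")  # ~19.6%
# Swapping in the 12% worst-case UPS loss gives the 27% upper bound quoted above.
```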
Eliminating the centralized UPS and the PDUs, and designing a 94.5% efficient server power supply, brings the Prineville data center's total loss down to 7.5% (including the 2% transformation loss). Relocating the UPS closer to the server level also eliminates single points of failure upstream of the Open Compute servers. And because we no longer need to synchronize a centralized UPS with the PDUs, availability increases, by our estimates, from five 9s to six 9s (99.9999%).
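The same kind of back-of-the-envelope arithmetic shows where the 7.5% figure comes from, and what the jump from five 9s to six 9s buys in downtime per year. The loss figures are from the text; the downtime conversion is standard availability math.

```python
# Prineville's remaining electrical losses, as quoted above.
transformer_loss = 0.02       # utility transformer, unchanged
psu_loss = 1.0 - 0.945        # 94.5% efficient server power supply
total_loss = transformer_loss + psu_loss
print(f"total electrical loss: {total_loss:.1%}")    # 7.5%

# Translating "nines" of availability into downtime per year.
minutes_per_year = 365.25 * 24 * 60
for nines, availability in [("five 9s", 0.99999), ("six 9s", 0.999999)]:
    downtime = minutes_per_year * (1.0 - availability)
    print(f"{nines}: ~{downtime:.1f} minutes of downtime per year")
# five 9s: ~5.3 minutes/year; six 9s: ~0.5 minutes/year
```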
DC Backup Power
The Prineville servers get 45 seconds of backup power at full load from custom 48VDC UPS units housed in cabinets in the data center aisles. Each unit is rated 56kW or 75kW, with 480/277V, 3-phase input. Each cabinet contains 20 sealed VRLA batteries arranged in five strings, and the DC power cables connect directly to the server racks.
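Some rough arithmetic on those ratings: the power, ride-through time and bus voltage are from the text, while the inference that each string is four 12V blocks in series is our own assumption.

```python
# Rough sizing arithmetic for the 48VDC backup described above.
unit_power_w = 75_000     # larger of the two unit ratings (56kW or 75kW)
ride_through_s = 45       # backup time at full load
bus_voltage = 48.0        # nominal DC bus

energy_required_kwh = unit_power_w * ride_through_s / 3600 / 1000
dc_current_a = unit_power_w / bus_voltage

print(f"energy delivered per event: ~{energy_required_kwh:.2f} kWh")  # ~0.94 kWh
print(f"DC bus current at full load: ~{dc_current_a:.0f} A")          # ~1,560 A

# 20 VRLA batteries in 5 strings -> 4 batteries per string; four 12V blocks
# in series forming the 48V bus is our assumption, not stated in the text.
```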
The UPS is a standby system, not an online one. During normal operation there is negligible energy loss; the only energy drawn is what trickle-charges the batteries.
The cabinet has a battery monitoring system, which measures impedance, voltage and temperature.
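For illustration, here is a minimal sketch of the kind of out-of-bounds check such a monitoring system might perform. The three measured quantities are from the text, but the `Reading` type and every threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    impedance_mohm: float   # internal impedance; rises as a VRLA battery ages
    voltage_v: float        # per-battery float voltage
    temperature_c: float    # case temperature

def needs_attention(r: Reading) -> bool:
    """Flag a battery whose measurements drift outside illustrative limits."""
    return (
        r.impedance_mohm > 5.0                 # hypothetical impedance ceiling
        or not 12.0 <= r.voltage_v <= 14.0     # hypothetical float-voltage band
        or r.temperature_c > 40.0              # hypothetical thermal limit
    )

print(needs_attention(Reading(3.2, 13.4, 27.0)))  # False: healthy battery
print(needs_attention(Reading(7.8, 13.1, 29.0)))  # True: high impedance
```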
Custom Reactor Power Panel
More electrical efficiency comes from a custom-fabricated reactor power panel that delivers 175kW, 480/277V, 3-phase power to the server cabinet level. The reactor power panel (see the sketch after this list):
- Reduces short circuit current to less than 10kA.
- Corrects a leading power factor toward unity, roughly a 3% improvement.
- Reduces current total harmonic distortion (iTHD), improving electrical system performance by 2%. High iTHD or a leading power factor can jeopardize the backup generators when they start up.
- Consumes just 360W of power, a negligible 0.2% loss.
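A quick check of these figures; the panel rating and consumption are from the list, while the 0.97 starting power factor is an assumed illustrative value.

```python
# Panel loss at rated load, from the figures quoted above.
panel_rating_w = 175_000
panel_consumption_w = 360
loss_fraction = panel_consumption_w / panel_rating_w
print(f"panel loss at rated load: {loss_fraction:.2%}")   # ~0.21%, i.e. negligible

# A ~3% power-factor correction toward unity means the same real power is
# delivered with ~3% less apparent power (assuming PF 0.97 -> 1.0, illustrative).
real_power_w = 175_000
for pf in (0.97, 1.00):
    apparent_va = real_power_w / pf
    print(f"PF {pf:.2f}: apparent power {apparent_va / 1000:.1f} kVA")
```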
Cooling and Airflow Innovations
Locating the data center in the high desert of central Oregon enabled us to innovate the cooling system design. We didn’t build a chiller plant, eliminating associated cooling towers, piping, pumps and controls. The innovative system:
- Uses a 100% outside air evaporative cooling and humidification system.
- Recycles return air in winter. Waste heat is recirculated to heat the office space, and is mixed with outside air in the penthouse to meet the temperature set point and warm the air before it enters the data center.
- Has a ductless air distribution system. Cold air is distributed into the middle of the data center via drywall supply air shafts from the mechanical penthouse. Hot aisles are contained to minimize thermal mixing and to draw air into the drop-ceiling return air plenum. The hot air is then either recirculated as described above or ejected from the building entirely, depending on outside air conditions (see the simplified control sketch below).
In addition, we have the capability to install indirect evaporative cooling if needed.
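To make the airflow decision concrete, here is a deliberately simplified sketch of the penthouse logic described above. The set point, the linear mixing model and the function itself are illustrative assumptions, not the actual control system.

```python
SUPPLY_SETPOINT_C = 24.0   # hypothetical supply-air target

def penthouse_mode(outside_c: float, return_c: float) -> str:
    """Decide whether to recirculate hot return air or eject it (simplified)."""
    if outside_c >= SUPPLY_SETPOINT_C:
        # Too warm outside: eject return air, cool outside air evaporatively.
        return "eject return air; evaporative cooling on"
    # Cold outside: blend return air with outside air to hit the set point.
    # Simple linear mix: fraction of return air needed in the blend.
    frac = (SUPPLY_SETPOINT_C - outside_c) / (return_c - outside_c)
    frac = min(max(frac, 0.0), 1.0)
    return f"recirculate ~{frac:.0%} return air, rest outside air"

print(penthouse_mode(outside_c=2.0, return_c=38.0))   # winter: mostly mixing
print(penthouse_mode(outside_c=30.0, return_c=38.0))  # summer: evaporative cooling
```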
Summary
These strategies enabled us to build a very efficient, inexpensive and reliable facility.
It’s efficient in both power consumption and water use. Compare the PUE of 1.07 at full load to that of the typical data center, which has a PUE of 1.5. And without chillers and cooling towers, the WUE of 0.31 liters/kWh is much lower than the national average of 1.0 liters/kWh (a figure based on Green Grid data).
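To put those ratios in concrete terms, here is a small sketch applying the standard PUE and WUE definitions to an illustrative annual IT energy figure; the 1,000,000 kWh is hypothetical, while the PUE and WUE values are from the text.

```python
# PUE = total facility energy / IT equipment energy.
# WUE = annual site water use (liters) / IT equipment energy (kWh).
it_energy_kwh = 1_000_000   # illustrative annual IT energy, not an actual figure

for pue in (1.07, 1.50):
    overhead = (pue - 1.0) * it_energy_kwh
    print(f"PUE {pue}: {overhead:,.0f} kWh of overhead per {it_energy_kwh:,} kWh of IT load")
# PUE 1.07 -> 70,000 kWh of overhead; PUE 1.50 -> 500,000 kWh.

for label, wue in (("Prineville", 0.31), ("national average", 1.0)):
    print(f"{label}: {wue * it_energy_kwh:,.0f} liters of water per {it_energy_kwh:,} kWh")
# Prineville -> 310,000 liters; national average -> 1,000,000 liters.
```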
By using simpler construction, we improved reliability: fewer components means fewer parts that can fail. Such a system is also less expensive to build and operate; our capital expenses were 45% lower. We’ll have a good sense of how much we’re saving in operating expenses once the system has been fully online for a while.
You can read more about the Open Compute Project and download the data center specification.
Jay Park is the director of data center design at Facebook.