Energy Logic
Energy Logic is a vendor-neutral approach to achieving energy efficiency in data centers. First released in 2007, the Energy Logic efficiency model recommends ten holistic actions, encompassing IT equipment as well as traditional data center infrastructure, guided by the principle of the "cascade effect."
Energy Logic Assumptions
The first iteration of the Energy Logic model was introduced by Emerson Network Power on November 29, 2007.[1] Described as a “new approach to energy optimization,” the model was developed in response to industry feedback calling for a greater emphasis on efficiency initiatives that do not compromise data center performance and reliability.[2]
The Energy Logic data center efficiency model was developed from research and modeling of a 5,000-square-foot data center, including average IT equipment densities, common data center and facility infrastructures (power, cooling, etc.), and their collective energy draw.
Energy draw for the 5,000-square-foot data center model was based on the following assumptions (consolidated in the sketch after this list):[3]
- Server refresh rate: 4 to 5 years
- Data center has a mix of servers ranging from new to four years old
- No virtualization or blades
- No high-density loads
- Average density: 3 kW/rack (120 W/sq. ft.)
- Total compute load: about 600 kW
- UPS configuration: 2x750 kVA, 1+1 redundant
- Hot-aisle/cold-aisle configuration
- Floor-mount cooling connected to building chilled water plant
- MV transformer (5 MVA) at building entrance with switchgear
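The assumptions above can be consolidated into a small configuration object. The sketch below is illustrative only: the class and field names are not part of the Energy Logic model, but the derived compute load confirms that the stated density (120 W/sq. ft. over 5,000 sq. ft.) is consistent with the roughly 600 kW total compute load.

```python
from dataclasses import dataclass


@dataclass
class BaselineDataCenter:
    """Baseline assumptions of the 2007 Energy Logic model (field names are illustrative)."""
    floor_area_sqft: float = 5_000       # modeled facility size
    density_w_per_sqft: float = 120.0    # average density, equivalent to 3 kW/rack
    ups_modules_kva: tuple = (750, 750)  # 2 x 750 kVA UPS, 1+1 redundant
    server_refresh_years: int = 5        # 4- to 5-year server refresh cycle

    @property
    def compute_load_kw(self) -> float:
        # 120 W/sq. ft. over 5,000 sq. ft. gives the ~600 kW compute load cited in the model
        return self.floor_area_sqft * self.density_w_per_sqft / 1_000


print(BaselineDataCenter().compute_load_kw)  # 600.0 kW, matching the stated total compute load
```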
The Cascade Effect
Based on the benchmarks established by the 5,000 sq. ft. model, Emerson Network Power recommended improvements to IT and data center infrastructures that maximize total energy savings by leveraging the “cascade effect.” For the purposes of the Energy Logic model, the cascade effect assumes that for every one watt of energy saved at the server component level, a data center can expect to realize up to 2.84 watts in cumulative energy savings as the initial reduction “cascades” through the supporting infrastructure (DC-DC conversion, AC-DC conversion, power distribution, etc.).[4]
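The multiplier can be reproduced with simple arithmetic. In the sketch below, the per-stage efficiencies and the cooling overhead are illustrative assumptions, not figures from the Energy Logic white paper; they are chosen only to show how modest losses at each power-conversion stage, plus the cooling required to remove the resulting heat, compound into a facility-level factor of roughly 2.84.

```python
# Illustrative cascade-effect arithmetic. The efficiencies and cooling overhead
# below are assumed values, not published Energy Logic figures.
STAGE_EFFICIENCY = {
    "dc_dc_conversion": 0.85,      # server-internal DC-DC regulation (assumed)
    "ac_dc_power_supply": 0.79,    # server power supply (assumed)
    "power_distribution": 0.98,    # PDU and cabling losses (assumed)
    "ups": 0.88,                   # double-conversion UPS (assumed)
    "building_transformer": 0.98,  # MV transformer and switchgear (assumed)
}
COOLING_W_PER_W_OF_HEAT = 0.61     # cooling energy per watt of heat removed (assumed)


def cascade_multiplier() -> float:
    """Utility-level watts needed to deliver 1 W at the server component level."""
    watts = 1.0
    for efficiency in STAGE_EFFICIENCY.values():
        watts /= efficiency  # each upstream stage must also supply its own losses
    return watts * (1.0 + COOLING_W_PER_W_OF_HEAT)  # heat removal adds a proportional cooling load


print(f"1 W saved at the component avoids about {cascade_multiplier():.2f} W upstream")
# ~2.84 W with these assumed figures, matching the multiplier cited by the model
```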
Energy Logic Actions
The Energy Logic model proposes ten vendor-neutral actions that are forecast to reduce cumulative energy consumption by up to 50 percent, cutting the model data center's load from its initial 1,127 kW to 585 kW (the arithmetic is sketched after the list).[5] The ten recommended actions are:
- Integrating IT equipment with low-power processors (yields a 10 percent savings)
- Deploying high efficiency power supplies matched to power needs (yields a 12 percent savings)
- Implementing a server power management system/strategy (yields an 11 percent savings)
- Deploying blade servers (yields a 1 percent savings)
- Implementing server virtualization throughout the IT infrastructure (yields a 14 percent savings)
- Establishing an efficient power distribution architecture (yields a 3 percent savings)
- Implementing data center cooling best practices (optimizing airflow, using optimal set points, reducing energy waste, etc.) (yields a 2 percent savings)
- Deploying variable capacity cooling equipment (including chilled-water and direct-expansion systems) (yields a 7 percent savings)
- Deploying high-density supplemental cooling (yields an 18 percent savings)
- Implementing a data center monitoring and optimization strategy (yields a 2 percent savings)
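As a check on the headline figure, the loads cited above imply the following reduction, computed here using only the 1,127 kW and 585 kW values from the source:

```python
# Headline arithmetic for the original Energy Logic model.
initial_kw, optimized_kw = 1_127, 585

savings_kw = initial_kw - optimized_kw       # 542 kW saved
savings_pct = 100 * savings_kw / initial_kw  # about 48 percent, i.e. "up to 50 percent"
print(f"{savings_kw} kW saved ({savings_pct:.0f}% of the original load)")
```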
The Energy Logic model also suggests additional opportunities for energy savings, including:
- Identifying and disconnecting unused servers
- Consolidating data storage
- Implementing economizers for subsidized cooling
- Monitoring and reducing energy losses tied to facility infrastructure (generators, lighting, perimeter access, etc.)
Energy Logic 2.0
In 2012, Emerson Network Power introduced an update to the Energy Logic model to account for advances in IT and data center infrastructure technology.
Using the same 5,000 square foot data center benchmarked in the 2007 model, Energy Logic 2.0 updates the ten prescribed actions to reflect current technologies and average equipment efficiency. As a result, the updated actions are forecast to yield energy savings of up to 74 percent, reducing consumption in the model data center from 1,543 kW to 408 kW (the combined arithmetic is sketched after the list below).[6]
The ten updated actions[7] include:
- Deploying low-power components (yields an 11.2 percent energy savings)
- Deploying high-efficiency power supplies matched to power needs (yields a 7.1 percent energy savings)
- Implementing a server power management system/strategy (yields a 9.4 percent energy savings)
- Establishing an ICT architecture (yields a 3.5 percent energy savings)
- Implementing a server virtualization and consolidation strategy (yields a 29 percent energy savings)
- Optimizing the data center's power architecture (yields a 4.1 percent energy savings)
- Implementing a temperature and airflow management strategy (yields a 5.2 percent energy savings)
- Deploying variable-capacity cooling equipment (including chilled-water and direct-expansion systems) (yields a 2.6 percent energy savings)
- Establishing a high-density cooling infrastructure (yields a 1.5 percent energy savings)
- Implementing a comprehensive Data Center Infrastructure Management (DCIM) strategy
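Using only the figures cited above, the headline reduction and the sum of the listed per-action savings can be compared. Both come to roughly 74 percent, which suggests that each action's percentage is expressed as a share of the model data center's original 1,543 kW load (no percentage is given in the source for the DCIM action):

```python
# Headline arithmetic for Energy Logic 2.0, using only the figures cited above.
initial_kw, optimized_kw = 1_543, 408
headline_reduction_pct = 100 * (1 - optimized_kw / initial_kw)  # about 73.6 percent

# Per-action savings listed above (the DCIM action's percentage is not given).
action_savings_pct = [11.2, 7.1, 9.4, 3.5, 29.0, 4.1, 5.2, 2.6, 1.5]

print(f"{headline_reduction_pct:.1f}% headline reduction")
print(f"{sum(action_savings_pct):.1f}% when the listed per-action figures are summed")
# Both are roughly 74 percent, consistent with each percentage being a share of the
# original 1,543 kW load.
```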
References
- ↑ “Emerson Network Power Introduces New Approach to Data Center Energy Optimization,” press release, Emerson Network Power corporate web site
- ↑ “Emerson Delivers Free Energy Logic Blueprint for Building a Power Efficient Data Center,” InfoWorld
- ↑ “Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems,” white paper
- ↑ “Emerson Analysis IDs Cascading Energy Gains,” Data Center Knowledge
- ↑ “Energy Logic Presentation,” via the 7x24 Exchange (permanent dead link)
- ↑ “Emerson Updates Energy Logic Roadmap,” via Data Center Knowledge
- ↑ “Energy Logic 2.0 Actions,” via EfficientDataCenters.com. http://www.emersonnetworkpower.com/en-US/Latest-Thinking/EDC/EnergyLogic/Pages/EnergyLogicActions.aspx