Asperitas Microdatacenter


A micro data center is a small, self-contained data center consisting of computing, storage, networking, power and cooling. Micro data centers typically employ water cooling to achieve compactness, a low component count, low cost and high energy efficiency. Their small size allows decentralised deployment in places where traditional data centers cannot go, for instance edge computing for the Internet of things.

Distributed micro edge data centers

In July 2017, the Dutch company Asperitas presented a distributed micro edge data center model[1] at the Datacenter Transformation[2] educational event in Manchester.

Heat reuse

The model focuses on the transformation of energy[3] into usable heat and on flexible deployment where heat is required at a larger scale with constant demand. Ideally, the micro data centers require no overhead installations for cooling or no-break systems. Cooling of the servers is facilitated by sourcing cold water from the heat user, creating a synergy between different industries. Especially with the adoption of temperature chaining, or a cascade of thermal energy, high reusable temperatures can be achieved. Because of the minimised overhead, these nodes can be deployed in large quantities near or within network hubs for urban or office areas, or even as part of a non-data-center facility which can directly benefit from the reusable heat. This allows for fast network access and simple energy reuse.

Distributed micro edge data center web

The micro edge nodes (10–100 kW) function as forward locations of the core data centers. The edge nodes provide services such as data processing for IoT systems, data caching for digital content (YouTube, Netflix, etc.) and fast access to cloud services. The edge nodes are continuously replicated with the core data centers and several other strategic edge nodes, providing constant availability through geo-redundancy.[4]

By making information available in multiple locations at the same time, workloads can easily be moved between different physical sites. The capacity of overhead installations can therefore be minimised to cover only normal operation and, in case of emergency, a shutdown phase during which active data processes are moved to a different facility.

The micro edge nodes are small locations with minimised overhead installations. They have simplified configurations consisting of a small data floor, a switchboard and energy delivery. They often lack redundancy in power or cooling infrastructure (Immersed Computing® provides a significant thermal buffer), but carry sufficient sustainable Li-ion battery power (e.g. a Tesla Powerpack) to allow for replication and shutdown. The facilities are based on Immersed Computing® and, when required, additional liquid technologies. This allows them to become enclosed air environments, which prevents environmental impact such as noise or exterior installations. The liquid infrastructure is cooled with whatever external cooling strategy is available on site.
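
The battery capacity needed for such a node follows from its IT load and the time required to replicate state and shut down. Below is a minimal sketch of that sizing calculation; the function, the replication window and the safety margin are illustrative assumptions, not figures from Asperitas material:

    def battery_capacity_kwh(it_load_kw, replicate_minutes, shutdown_minutes, margin=1.2):
        """Estimate the battery energy (kWh) needed to finish replication and
        shut a micro edge node down after grid power is lost.

        it_load_kw        -- IT load of the node (10-100 kW in this model)
        replicate_minutes -- assumed time to replicate remaining state elsewhere
        shutdown_minutes  -- assumed time to shut the hardware down gracefully
        margin            -- safety factor for conversion losses and battery ageing
        """
        hours = (replicate_minutes + shutdown_minutes) / 60.0
        return it_load_kw * hours * margin

    # Example: a 50 kW node needing 20 minutes to replicate and 10 to shut down
    print(battery_capacity_kwh(50, 20, 10))  # -> 30.0 kWh, well within one Tesla Powerpack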

Edge management

The management of the distributed data center model is possible through the emergence of software platforms providing ubiquitous management of data, network and computation capacities. These kinds of platforms already exist for traditional centralised infrastructure, but new challenges emerge from this hybrid and distributed architecture. Closer to the end users, edge nodes in urban areas have new constraints in terms of energy consumption and heat production. Containerisation, through technologies like Docker[5] or Singularity,[6] opens great opportunities to make applications more scalable, flexible and less dependent on the infrastructure. Many frameworks have appeared recently (Swarm, Kubernetes[7]) to manage decentralised clusters. Some of them also integrate energy and heat management by design, such as Q.ware,[8] developed by Qarnot computing.[9] This positive dynamic in the software industry is an essential pillar in enabling core data centers and edge nodes to operate as one integrated architecture.
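
As a minimal illustration of a containerised edge workload, the sketch below uses the Docker SDK for Python; the image name, command and label are placeholders and not part of any Asperitas or Qarnot product:

    import docker  # Docker SDK for Python (pip install docker)

    # Connect to the Docker daemon running on an edge node.
    client = docker.from_env()

    # Launch a containerised workload. The same image can be scheduled on any
    # edge node or core data center, which is what makes the model flexible.
    container = client.containers.run(
        "python:3.11-slim",                      # placeholder image
        ["python", "-c", "print('processing IoT batch')"],
        detach=True,
        labels={"site": "edge-node-amsterdam"},  # hypothetical placement label
    )
    print(container.id)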

Network optimization

The use of core data centers and edge nodes allows for network optimisation by preventing long-distance transport of raw (large) data and by processing data close to its source. By bringing data which is in high demand closer to the end user (caching), high-volume data transmission across long-distance backbones is greatly reduced, as is latency, which is a critical factor for a good end-user experience.
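
A minimal sketch of such an edge cache in Python, assuming a hypothetical core origin URL; after the first fetch, popular objects are served from the edge node itself:

    from functools import lru_cache
    import urllib.request

    CORE_ORIGIN = "https://core.example.net"  # hypothetical core data center origin

    @lru_cache(maxsize=1024)  # keep popular objects in memory on the edge node
    def fetch(path: str) -> bytes:
        """Return an object, contacting the core data center only on a cache miss."""
        with urllib.request.urlopen(CORE_ORIGIN + path) as response:
            return response.read()

    # The first request for a path travels over the backbone; repeated requests
    # are served from the edge node, cutting backbone traffic and latency.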

Energy grid balancing

One of the limitations for data center growth today is the capacity of the existing power grid. In most areas of the world, the power grid was designed and implemented long before data centers even existed. There are numerous areas where the power grid will reach its maximum capacity within the next 3–5 years. The traditional data center approach causes high loads on very specific parts of the grid. By applying the distributed data center model, the load on the power grid is more evenly balanced and the impact of expansion is greatly reduced.
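
A back-of-the-envelope illustration of this balancing effect; the figures are assumptions chosen only to show the principle:

    # One traditional 2 MW data center concentrates its load on a single grid
    # connection, while the same capacity deployed as micro edge nodes spreads
    # it over many connections.
    total_it_load_kw = 2000   # assumed total capacity
    node_size_kw = 100        # upper end of the 10-100 kW node range
    nodes = total_it_load_kw // node_size_kw

    print(f"centralised: {total_it_load_kw} kW on 1 grid connection")
    print(f"distributed: {node_size_kw} kW on each of {nodes} grid connections")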

Energy production

By focusing on the reuse of energy, each edge node rejects its thermal energy directly into a reusable heat infrastructure (district heating/heat storage), building heating (hospitals/industry), water heating (hospitals/zoos) or other heat users. The core data centers become large suppliers of district heating networks or are connected to 24/7 industries which require constant heat within a large-scale industrial process.

Cooling strategies in the Edge

There are numerous edge cooling strategies which are optimal for the scale of micro edge nodes. All of these strategies can absorb rejected heat 24/7 and thus completely eliminate the need for dedicated cooling installations.

Here are a few commonly available cooling strategies in urban areas (a short sketch of the corresponding reusable heat per node follows the list):

  • Spas and swimming facilities with multiple pools have a constant demand for heating due to constant convection (near 100% reuse).
  • Hospitals and hotels equipped with warm water loops which require constant 24/7 thermal input (near 100% reuse).
  • Urban fish and vegetable farms using aquaponics (near 100% reuse).
  • Aquifers for energy storage, which can normally be supplied with thermal energy 24/7 (75% reuse).
  • Water mains can provide distributed energy savings (29% reuse).
  • Canals, lakes and sewage water can be used for heat rejection when reuse is not possible (0% reuse).
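
To indicate what these reuse fractions mean for a single node, the sketch below applies them to the 22 kW heat output of the AIC24 module described under Edge Technologies; the "near 100%" entries are rounded to 1.0 for simplicity:

    HOURS_PER_YEAR = 8760
    NODE_HEAT_KW = 22  # heat output of one AIC24 module (see Edge Technologies)

    # Reuse fractions taken from the list above.
    reuse_fractions = {
        "pool / warm water loop / aquaponics": 1.00,
        "aquifer energy storage": 0.75,
        "water mains": 0.29,
        "canal, lake or sewage rejection": 0.00,
    }

    for strategy, fraction in reuse_fractions.items():
        reused_kwh = NODE_HEAT_KW * HOURS_PER_YEAR * fraction
        print(f"{strategy}: {reused_kwh:,.0f} kWh of reusable heat per year")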

Edge technologies

Asperitas Micro Datacenter [Image: Asperitas AIC24.jpg, the Asperitas Immersed Computing module]

Asperitas Immersed Computing and AIC24

In March 2017 the Dutch company Asperitas presented Immersed Computing®,[10] a concept and portfolio dedicated to usability and easy deployment in core and micro data centers. This micro edge data center solution is compatible with generic and branded servers and allows for large-scale energy reuse. The AIC24 solution is based on a larger enclosure[11] which can deliver a maximum of 22 kW of heat.

Iceotope

A different technology which also uses complete immersion of servers is Iceotope.[12]

See also

References