The Boden Type Data Center One (BTDC One) research project, funded by the EU under the Horizon 2020 initiative, has received considerable data center industry press recently. Its stated goal is to build and operate the most cost- and energy-efficient data center in the world, housing Open Compute Project (OCP) IT gear with minimal environmental impact. This is being achieved by integrating adiabatic fresh (direct) air cooling, modular building design, and clean, high-quality electricity supplied by renewable energy sources, all within an ideal environmental climate. The research is being carried out in partnership with the award-winning RISE SICS North, Sweden's national research institute.
As part of my trip to Sweden to represent OCP at the “Datacenters Meet 5G at the Edge” workshop, organized with LTU and IMasons, I also wanted to see first-hand how RISE SICS North and their consortium partners were going about achieving such laudable efficiency aims for the project. My host for the visit was Alan Beresford, the MD of EcoCooling. Alan’s company is one of the five consortium members involved in delivering the project, and EcoCooling has provided the direct air cooling system and the environmental control system software. When I arrived at the site of BTDC One, my first impression was that it didn’t look at all out of place, as many traditional data center buildings do. Instead, this building had been designed to fit in with the vernacular architecture, built using locally sourced and sustainable materials such as timber, and painted in the traditional Falu red.
Once inside the facility in Boden, in the north of Sweden, it looked like traditional white space physical infrastructure, but Alan Beresford explained that there were some major differences. The facility is divided into three Pods. Pod 1, the Open Compute Pod, contains twelve OCP Open Racks with 140 kW of IT load and is being used as a test bed to understand the most efficient way of cooling Open Compute IT gear while maintaining compliant conditions.
The efficiency is achieved by using direct fresh air with adiabatic cooling, which operates in four modes: first, at average ambient temperatures it uses fresh air alone; second, in hot conditions it uses adiabatic cooling; third, it uses the adiabatic pads in a unique way, by doubling as the humidification system; and finally, when the ambient temperature is cold, it mixes re-circulated air with outside air. By using these four modes in this location, BTDC One can maintain ASHRAE-recommended conditions 100% of the time and achieve the project’s primary objective: to keep the OCP IT gear in compliant conditions and maintain maximum reliability at the minimum cost of operation.
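To make the four modes concrete, here is a minimal sketch of how a controller might choose between them based on ambient conditions. The temperature and humidity thresholds are purely illustrative assumptions on my part; the actual EcoCooling control logic is more sophisticated and is not described in detail here.

```python
# Illustrative sketch of the four-mode selection described above.
# The thresholds below are assumptions for illustration only, not the
# actual set points used by the EcoCooling control system.

FRESH_AIR_MIN_C = 5.0    # assumed: below this, blend in recirculated air
FRESH_AIR_MAX_C = 24.0   # assumed: above this, wet the adiabatic pads
HUMIDITY_MIN_PCT = 20.0  # assumed: below this, use the pads to humidify

def select_cooling_mode(ambient_c: float, supply_rh_pct: float) -> str:
    """Pick one of the four operating modes from ambient conditions."""
    if ambient_c < FRESH_AIR_MIN_C:
        # Cold: mix re-circulated exhaust air with outside air
        return "recirculation-mix"
    if ambient_c > FRESH_AIR_MAX_C:
        # Hot: evaporate water on the adiabatic pads to cool the air
        return "adiabatic-cooling"
    if supply_rh_pct < HUMIDITY_MIN_PCT:
        # Dry: the same pads double as the humidification system
        return "adiabatic-humidification"
    # Average ambient: free cooling on fresh air alone
    return "fresh-air"

print(select_cooling_mode(ambient_c=15.0, supply_rh_pct=40.0))   # fresh-air
print(select_cooling_mode(ambient_c=-10.0, supply_rh_pct=40.0))  # recirculation-mix
```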
A typical cooling system for this type of IT gear will normally use somewhere between 30 and 60 kW of power to cool 100 kW of IT load. Within this research project the aim is an outstanding efficiency of only 1 to 3 kW of electricity per 100 kW of OCP equipment IT load, and a partial cooling PUE of less than 1.01. A holistic control system provides direct communication between the OCP IT gear and a central cooling control system, so that fan speeds in both the IT gear and the direct air cooling system are optimised together. Fan power follows a cube law: if the speed of a fan is halved, it uses only 12.5% of the energy. Therefore, if a holistic control system can make the fans operate as slowly as possible while maintaining the correct chip temperature in compliant conditions, then the most efficient system can be realised.
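The arithmetic behind those two claims is worth spelling out. The short sketch below works through the fan affinity (cube) law and the partial cooling PUE figures quoted above; the helper function names are my own for illustration.

```python
# Worked example of the fan cube law and the partial cooling PUE
# figures quoted above.

def fan_power_fraction(speed_fraction: float) -> float:
    """Fan power scales with the cube of fan speed (fan affinity law)."""
    return speed_fraction ** 3

# Halving fan speed: 0.5^3 = 0.125, i.e. only 12.5% of the energy.
print(f"50% speed -> {fan_power_fraction(0.5):.1%} power")  # 12.5%

def partial_cooling_pue(it_load_kw: float, cooling_kw: float) -> float:
    """Partial PUE attributable to cooling: (IT + cooling) / IT."""
    return (it_load_kw + cooling_kw) / it_load_kw

# Conventional cooling: 30-60 kW per 100 kW of IT -> pPUE of 1.30-1.60.
print(partial_cooling_pue(100, 30))  # 1.3
# BTDC One target: ~1 kW per 100 kW of IT -> pPUE around 1.01.
print(partial_cooling_pue(100, 1))   # 1.01
```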
Disseminating the results of the research project will then help to influence the data center industry to improve the way it constructs data centers and controls their environments, maintaining ASHRAE-recommended conditions 100% of the time and keeping OCP IT gear in a compliant environment that delivers maximum reliability at the minimum cost of operation.
The results of the project’s research and development were presented by Jon Summers at the OCP Regional Summit in his talk, ‘Building and Operating an OCP Data Center at Small Scale’.