Data Centre and Infrastructure

The physical environment in which a data centre is located has several implications for Zero Outage compliance. On the one hand, the building needs to provide a physically robust and secure shelter for the equipment; on the other, it needs to supply the "physical resources" the equipment requires to function properly.

Common classifications in the market divide data centre infrastructures into tiers depending on the degree of redundancy and fault tolerance that they provide.

A Zero Outage data centre facility should be designed to provide protection against unauthorised access, to have adequate fire prevention systems and to be robust against natural events such as earthquakes, at a level depending on its location and based on international earthquake zone classifications.

Nowadays, it is industry standard to control access with identity-based methods such as card readers or biometrics and to monitor the perimeter of the data centre using video technology.

The premises housing the data centre should be sufficiently fire resistant. By this we mean that the load-bearing and fire-separating components should have a fire-resistance class of at least F90 (or an equivalent classification from international standards such as DIN 4102-2, BS 476 or EN 13501), and doors and room partitions/walls should provide a fire protection barrier of at least class T30. Any apertures, openings or ducts in the building's infrastructure should be fitted with fire dampers or fire barrier devices. Flammable or fast-burning materials should not be used as building materials. Combustible building materials may only be used where there is no alternative, and only if this does not increase risk unduly; this applies in particular to fire and smoke behaviour, molten droplets, thermal release and the formation of dangerous gases/fumes.

Data centres used in Zero Outage deployments should also be designed to provide the equipment with an uninterruptible power supply and to maintain temperature and humidity at a constant level.

Any risk of damage to the electrical installations and equipment caused by direct lightning strikes or mains-borne surges must be excluded. In addition, lightning strikes must not cause interference within the data centre.

Electricity suppliers cannot guarantee an uninterrupted energy supply; interruptions or prolonged power failures must therefore be bridged using emergency power systems or backup generators that provide an alternative source of power. This is the only way to guarantee the function of all technical systems required to operate a data centre, such as air conditioning, energy and security systems. Emergency power system planning determines the permissible downtimes. Types of uninterruptible power supplies are classified in IEC standard 62040-3, which also describes the corresponding determination method.
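
As a rough illustration (all figures below are assumptions, not values from any standard), a UPS battery bank can be sized to bridge the interval between a mains failure and the backup generator taking over the full load:

```python
# Minimal UPS sizing sketch (illustrative assumptions): the UPS must
# carry the full load from mains failure until the generator is online.

it_load_kw = 400.0          # assumed IT load
support_load_kw = 200.0     # assumed cooling, security and building systems
generator_start_s = 120.0   # assumed worst-case generator start and sync time
safety_factor = 2.0         # margin for failed starts and battery ageing

total_load_kw = it_load_kw + support_load_kw
bridge_time_s = generator_start_s * safety_factor

# Usable battery energy the UPS must hold for the bridge, in kWh
required_kwh = total_load_kw * bridge_time_s / 3600.0

print(f"UPS must carry {total_load_kw:.0f} kW for {bridge_time_s:.0f} s "
      f"= {required_kwh:.1f} kWh of usable battery capacity")
```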

Another option that can be implemented to increase resiliency is to connect the data centre's power lines to two independent power grids through a mechanism that allows seamless switching between them in the event of a failure.
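
The switching decision can be pictured as in the hypothetical sketch below; nominal voltage and tolerance are chosen purely for illustration, and real automatic transfer switches (ATS) implement this in hardware with strict break-before-make or make-before-break timing:

```python
# Hypothetical sketch of the decision logic of an automatic transfer
# switch between two independent grid feeds.

from dataclasses import dataclass

@dataclass
class Feed:
    name: str
    voltage: float  # measured line voltage in volts

NOMINAL_V = 230.0
TOLERANCE = 0.10  # a feed within +/-10% of nominal is considered healthy

def healthy(feed: Feed) -> bool:
    return abs(feed.voltage - NOMINAL_V) <= NOMINAL_V * TOLERANCE

def select_feed(primary: Feed, secondary: Feed, active: Feed) -> Feed:
    """Stay on the active feed while it is healthy; otherwise fail over."""
    if healthy(active):
        return active
    for candidate in (primary, secondary):
        if candidate is not active and healthy(candidate):
            return candidate
    return active  # no healthy alternative: hold position and alarm upstream

grid_a = Feed("grid A", 229.0)
grid_b = Feed("grid B", 120.0)  # simulated brown-out on grid B
print(select_feed(grid_a, grid_b, active=grid_b).name)  # -> grid A
```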

A centralised environmental monitoring and management system that is able to track the status of all physical resources is also a requirement.
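
As a minimal sketch of what such a system checks, the following compares sensor readings against permitted ranges and raises alerts on breaches; the sensor names and thresholds are illustrative assumptions only:

```python
# Illustrative centralised environmental check: poll each sensor and
# alert when a reading falls outside its permitted band.

THRESHOLDS = {
    "inlet_temperature_c": (18.0, 27.0),    # assumed recommended band
    "relative_humidity_pct": (40.0, 60.0),
    "ups_battery_charge_pct": (90.0, 100.0),
}

def check(readings: dict[str, float]) -> list[str]:
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not low <= value <= high:
            alerts.append(f"ALERT {sensor}={value} outside [{low}, {high}]")
    return alerts

print(check({
    "inlet_temperature_c": 29.5,            # simulated cooling degradation
    "relative_humidity_pct": 45.0,
    "ups_battery_charge_pct": 98.0,
}))
```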

A considerable amount of energy is required to run the air conditioning, particularly the data centre cooling system. The air-conditioning/cooling system, and the associated power back-up, must be designed so that the maximum permitted temperature in the data centre is not exceeded even in the event of a malfunction or a failure of a defined duration (e.g. power failure, or breakdown of the air-conditioning or cooling system).
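
A back-of-the-envelope estimate of the available ride-through time can be made as follows; all numbers are illustrative assumptions, and a real design calculation would also account for the thermal mass of equipment and structure as well as airflow dynamics:

```python
# Simplified ride-through estimate: with the chillers down, the IT load
# heats the room air until the maximum permitted temperature is reached.

it_load_kw = 300.0      # heat dissipated by the equipment, kW (= kJ/s)
air_mass_kg = 50_000.0  # assumed air (plus coupled thermal) mass, kg
c_p = 1.005             # specific heat of air, kJ/(kg*K)
t_start_c = 24.0        # temperature when cooling fails
t_max_c = 32.0          # maximum permitted temperature

# Time to reach t_max: energy needed to heat the mass by delta-T,
# divided by the heat input rate.
ride_through_s = air_mass_kg * c_p * (t_max_c - t_start_c) / it_load_kw
print(f"~{ride_through_s / 60:.1f} minutes to reach {t_max_c} deg C")
```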

Path diversity of the connections between data centres should also be considered: for instance, fibres that act as redundant protection for each other should exit the building at different entry/exit points and should never share the same fibre duct on their way to the other data centre.
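
Such diversity can be verified mechanically: if each route is recorded as the list of ducts and building exits it traverses (the identifiers below are hypothetical), redundant routes must share none of them:

```python
# Simple diversity check for two redundant fibre routes, each modelled
# as the ordered list of segments it traverses.

route_a = ["exit-north", "duct-101", "duct-102", "dc2-entry-east"]
route_b = ["exit-south", "duct-201", "duct-102", "dc2-entry-west"]

shared = set(route_a) & set(route_b)
if shared:
    print(f"NOT diverse: shared segments {sorted(shared)}")
else:
    print("Routes are fully diverse")
```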

Besides building very strong and resistant data centre infrastructures, IT systems can be operated in a fault-tolerant or disaster-tolerant manner across several data centre sites, using one of the following two approaches (a toy sketch contrasting them follows the list):

  • Fault-tolerant IT systems:
    • Operation in data centre 1, backup in data centre 2 (alternative location)
    • Active/passive configuration
  • Disaster-tolerant IT systems:
    • Operation across two data centre locations (Business Continuity Management)
    • Active/active configuration
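
The difference between the two configurations, with hypothetical site names and health flags, might be sketched like this:

```python
# Toy request router contrasting the two approaches (illustrative only).

import itertools

def route_active_passive(sites):
    """Fault-tolerant: all traffic goes to the primary; the backup
    site only takes over when the primary is unhealthy."""
    for site in sites:  # ordered: primary first, then backup
        if site["healthy"]:
            return site["name"]
    raise RuntimeError("no site available")

_rr = itertools.count()

def route_active_active(sites):
    """Disaster-tolerant: all healthy sites serve traffic concurrently
    (round robin here; real deployments use load balancers / GSLB)."""
    healthy = [s for s in sites if s["healthy"]]
    if not healthy:
        raise RuntimeError("no site available")
    return healthy[next(_rr) % len(healthy)]["name"]

sites = [{"name": "DC1", "healthy": True}, {"name": "DC2", "healthy": True}]
print(route_active_passive(sites))                     # DC1 while healthy
print([route_active_active(sites) for _ in range(4)])  # alternates DC1/DC2
```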

For active/active configurations with latency requirements, the data centres need to be located close enough together to minimise latency, yet far enough apart to be sufficiently separated geographically. The communication links of applications within multi-tier architectures must be carefully evaluated: they must not be distributed over wide distances if latency between the individual components is a relevant factor.
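
A useful rule of thumb: light in optical fibre propagates at roughly 200,000 km/s (about two thirds of the speed of light in vacuum), i.e. around 5 microseconds per kilometre one way. The sketch below turns site separation into a lower bound on round-trip time; the distances are examples:

```python
# Rule-of-thumb fibre latency: ~10 microseconds of round-trip time per
# km of separation, before any equipment and protocol overhead.

SPEED_IN_FIBRE_KM_PER_S = 200_000.0

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_S * 1000.0

for km in (10, 50, 100, 500):
    print(f"{km:>4} km between sites -> ~{round_trip_ms(km):.2f} ms RTT")
```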

The minimum and the ideal characteristics of a Zero Outage data centre facility can be summarised as follows:

  • Description
    • Minimum: Concurrently operational site infrastructure; multiple power and cooling paths, of which one is active with a redundant alternative.
    • Ideal: Planned maintenance possible during ongoing operations without interruption: every single component redundant; several power and cooling services, with active redundant components.
  • Supply
    • Minimum: Power, cooling and connectivity are all redundant; however, a small reduction in environmental parameters may be experienced.
    • Ideal: Power, cooling and connectivity are all fully redundant, with no impact or loss of service at switchover.
  • Risks
    • Minimum: During planned maintenance the service will remain operational, but with reduced fault tolerance.
    • Ideal: Planned maintenance does not impact system performance.
  • Electrical infrastructure provision
    • Minimum: Diverse links to a single power provider.
    • Ideal: Diverse links to multiple power providers.
  • Electrical supply redundancy
    • Minimum: One active and one passive path.
    • Ideal: All links active.
  • Power conditioning
    • Minimum: UPS to provide power conditioning.
    • Ideal: UPS to provide power conditioning, with a backup generator for longer-term outages.
  • Cooling system
    • Minimum: Dual systems in an active/passive configuration.
    • Ideal: Dual systems both active, each running at no more than 50% duty cycle.
  • Data centre connectivity
    • Minimum: Diverse (two or more) connections via diverse routes.
    • Ideal: Diverse (two or more) connections via diverse routes and multiple vendors.