
Data Center Infrastructure

All hosting and cloud services provided by CoreTech Srl are based on the SUPERNAP Italia data center infrastructure. The SUPERNAP Italia data center campus in Siziano, near Milan, was designed and built to the Tier IV specifications of the Switch Las Vegas multi-tenant data centers, offering unparalleled connectivity.


Via Marche 8-10
27010 Siziano (PV)

The SUPERNAP Data Center

The SUPERNAP Italia data center in Siziano (Pavia) was designed to meet business customers' needs for reliability, flexibility, and performance. All facilities and infrastructure built for business continuity (power supply, air conditioning, fire suppression, perimeter security, IP network) are subject to constant 24-hour monitoring and surveillance by automated systems and oversight by specialized personnel.

Energy efficiency was a key consideration during the data center design stage: through the use of advanced air-conditioning and climate-control technologies, the facility achieves the dual objective of significantly lowering operating costs and reducing environmental impact.



The Siziano (PV) facility - SUPERNAP Italia

The facility is located within an area of 100,000 m², making it the largest and most advanced data center in Italy. The structure, which covers a surface of 42,000 m², was designed based on the experience gained building SUPERNAP data centers in the USA, to achieve 100% guaranteed power and cooling uptime.

The infrastructure has been designed and built with attention to mitigating environmental risks:

The architecture, engineering, technology, and data center operations benefit from some 500 patents (granted and pending). These patents, covering the technological systems and engineering solutions adopted, make the SUPERNAP data center in Siziano one of the most advanced currently available on the Italian and European market.

Data Halls

The data center consists of 4 data halls, each able to host up to 1,056 racks. Connectivity to the DC is guaranteed by 200 fiber pairs running over differentiated, multi-carrier paths. In every data hall, racks are divided into aisles (T-SCIF, Thermal Separate Compartment in Facility) protected by a cage structure.

Among the solutions used in building the data center, the following are of special interest:

Cooling system

The cooling system is based on a modular set of AHUs (Air Handling Units) that use the principle of indirect evaporative cooling, via air-to-air exchangers suitably cooled by water systems external to the DC.

The steel infrastructure supporting the T-SCIF system acts as a thermal flywheel, allowing the facility to reach resilience levels above the highest industry standards.

Power supply system

At full capacity, the campus can sustain consumption of up to 40 MW. The whole IT load is protected by a triple-redundant UPS system, capable of ensuring 100% availability (100% uptime).

This result is made possible by the electrical system's one-of-a-kind architecture, engineered in "system + system" (2N+1) mode as required for Tier IV certification.

A "system + system" plant is based on two separate electrical systems, each of which can support, at any time, the load of the entire facility. Each system is equipped with its own UPS (Uninterruptible Power Supply), system bypass modules, PDUs (Power Distribution Units), RPPs (Remote Power Panels), and other components appropriate to the tiering level. The two systems are independent of one another, so that all maintenance, upgrade, and troubleshooting activities, whether planned or on demand, can be carried out without any impact on customer activity. In the data center, each power supply system is identified by a different color (red, blue, grey), so maintainers always know with certainty which system they are working on.

All racks are equipped with dual power supplies, fed from the two separate power systems (Feed A + Feed B).
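The "system + system" (2N+1) rule above reduces to a simple check: each independent power system must, on its own, be able to carry the full facility load. A minimal sketch of that rule follows; the names and capacity figures are hypothetical illustrations, not CoreTech data or tooling:

```python
# Illustrative model of the "system + system" (2N+1) redundancy rule:
# every power system must independently sustain the entire facility load.
from dataclasses import dataclass


@dataclass
class PowerSystem:
    name: str            # e.g. "Feed A" -- identifiers are hypothetical
    capacity_mw: float   # hypothetical capacity figure


def is_system_plus_system(systems: list[PowerSystem], facility_load_mw: float) -> bool:
    """True if there are at least two systems and each alone carries the full load."""
    return len(systems) >= 2 and all(
        s.capacity_mw >= facility_load_mw for s in systems
    )


feeds = [PowerSystem("Feed A", 40.0), PowerSystem("Feed B", 40.0)]
print(is_system_plus_system(feeds, 40.0))  # True: either feed alone carries the load
```

With this rule, losing either feed entirely still leaves the facility fully powered, which is what allows maintenance on one system without customer impact.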


All wiring (structured cabling, optical fiber, electrical wiring) is run in dedicated overhead raceways.

"Green" approach

The facility offers among the highest levels of efficiency in the world, as confirmed by its PUE (Power Usage Effectiveness), the standard metric for data center energy efficiency.

Environmental monitoring

The Living Data Center (LDC) is the proprietary automated system for monitoring performance and adjusting environmental parameters (power consumption, temperature and humidity, airflow, etc.), overseen 24x7 by the Network Operations Center (NOC). The tool visualizes and monitors the environment in real time, based on millions of data points collected daily by over 10,000 sensors. The LDC also tracks the energy consumption of every single rack, raising alarms when attention thresholds are reached. The system ensures optimal data center performance at any time of the year, and the on-site NOC operators guarantee a prompt reaction whenever necessary.
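The per-rack threshold alerting described above can be sketched as a simple scan over sensor readings. This is only an illustration of the technique, not the LDC's actual interface; the function name, rack IDs, readings, and threshold are all assumptions:

```python
# Hypothetical sketch of per-rack power threshold checking, as a monitoring
# system like the LDC might perform; all names and values are illustrative.
def check_rack_power(readings_kw: dict[str, float], attention_kw: float) -> list[str]:
    """Return the racks whose power draw has reached the attention threshold."""
    return [rack for rack, kw in readings_kw.items() if kw >= attention_kw]


readings = {"rack-001": 4.2, "rack-002": 6.8, "rack-003": 5.1}
print(check_rack_power(readings, attention_kw=6.0))  # ['rack-002']
```

In a real deployment such a check would run continuously against the sensor feed, with the flagged racks forwarded to the NOC as alarms.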

Security and data processing

Physical access

Physical access to the servers is protected by the following levels of protection:

Telematic access

Telematic access is regulated by perimeter firewalls that permit only the traffic required for the proper functioning of the system, plus any access explicitly requested by the customer.
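The policy described, permitting only what is needed and denying everything else, is a default-deny allowlist. A minimal sketch of that logic follows; the protocols, ports, and rule set are hypothetical examples, not CoreTech's actual firewall configuration:

```python
# Illustrative default-deny allowlist: only explicitly permitted
# (protocol, port) pairs pass; everything else is dropped.
# The entries below are hypothetical examples.
ALLOWED: set[tuple[str, int]] = {
    ("tcp", 443),  # service traffic (example)
    ("tcp", 22),   # customer-requested access (example)
}


def permit(proto: str, port: int) -> bool:
    """Default-deny policy: allow only what is on the allowlist."""
    return (proto, port) in ALLOWED


print(permit("tcp", 443))  # True: explicitly allowed
print(permit("udp", 53))   # False: not on the allowlist, so denied
```

The key design choice is the default: traffic is denied unless a rule allows it, rather than allowed unless a rule blocks it.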

Obligation of security, protection and retention of data

All data managed by CoreTech on behalf of the customer in the course of providing the contracted services is strictly confidential and exclusive.

CoreTech maintains strict confidentiality over all customer information and does not use, divulge, or disclose that information, in whole or in part. At any time, at the customer's request, CoreTech will immediately return to the customer all the information in its possession. Unless otherwise specified, CoreTech acknowledges that the customer information must:

CoreTech declares under its own responsibility that these confidentiality obligations have also been undertaken by its staff.

RIPE NCC Member