Facilities

RENCI occupies a 24,000+ square foot facility at 100 Europa Drive in Chapel Hill, including a 2,000 square foot data center.

Europa Data Center

  • 2,000 square feet of floor space on an 18-inch raised floor
  • 600 kVA commercial power
  • 375 kVA UPS power
  • 30 kVA generator power
  • 134 tons of dedicated cooling
  • Room for 40 racks of high-performance computing, storage, and networking equipment

Computational Infrastructure

RENCI began operations in 2004 and has since acquired a variety of computational systems to support its projects and activities. The following is a list of the major computational infrastructure currently active at RENCI.

HPC (Hatteras)

Hatteras is a 4352-core cluster running CentOS Linux and the SLURM resource manager. It is segmented into several independent sub-clusters with varying architectures and is capable of concurrently running five 512-way parallel jobs and one 1488-way parallel job.
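
Jobs are submitted to Hatteras through SLURM. The following is a minimal sketch of submitting one of the 512-way parallel jobs, wrapped in Python for illustration; the partition name, walltime, and application binary are placeholders rather than the actual Hatteras configuration:

    # Minimal sketch: generate and submit a SLURM batch script for a 512-way
    # parallel job. Partition name and application binary are placeholders.
    import subprocess
    import textwrap

    job_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=example-job
        #SBATCH --ntasks=512          # one 512-way parallel job
        #SBATCH --time=01:00:00
        #SBATCH --partition=batch     # placeholder partition name

        srun ./my_mpi_application
    """)

    with open("example.sbatch", "w") as handle:
        handle.write(job_script)

    # sbatch is available on the cluster's submit/login nodes.
    subprocess.run(["sbatch", "example.sbatch"], check=True)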

Hatteras’ sub-clusters have the following configurations:

  • Chassis 2-3 (512 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade servers
      • Two Intel Xeon E5-2450 CPUs (2.1GHz, 8-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Chassis 6-7 (640 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade servers
      • Two Intel Xeon E5-2470v2 CPUs (2.4GHz, 10-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Chassis 8 (512 interconnected cores)
    • 16 x Dell M630 half-height blade servers
      • Two Intel Xeon E5-2683v4 CPUs (2.1GHz, 16-core)
      • 256GB 2400MHz RAM
      • 100GB SSD for local I/O
    • 56Gb/s Mellanox (4X FDR) Interconnect
  • Rack R1-05 Compute (1488 interconnected cores)
    • 31 x Dell PowerEdge R640 Servers
      • Two Intel Xeon Gold 6240R CPUs (2.40GHz, 24-core)
      • 192GB 2933MHz RAM
    • 100Gb/s Mellanox (4X EDR) Interconnect
  • Rack R1-05 Largemem (96 interconnected cores)
    • 2 x Dell PowerEdge R640 Servers
      • Two Intel Xeon Gold 6240R CPUs (2.40GHz, 24-core)
      • 1.5TB 2933MHz RAM
    • 100Gb/s Mellanox (4X EDR) Interconnect
  • Rack R1-05 GPU (96 interconnected cores)
    • 2 x Dell PowerEdge R740xd Servers
      • Two Intel Xeon Gold 6240R CPUs (2.40GHz, 24-core)
      • 192GB 2933MHz RAM
      • Two NVIDIA Tesla V100 GPUs
    • 100Gb/s Mellanox (4X EDR) Interconnect

Kubernetes

RENCI manages one Kubernetes cluster in the Europa Data Center named Sterling and several project-specific clusters on Google Kubernetes Engine (GKE).

The Sterling cluster consists of the following hardware:

  • 18x Dell PowerEdge R640 servers, each with 96 CPUs and 1.5TB RAM
  • 4x NVIDIA A100 (Ampere) GPUs
  • 11x 4TB NVMe drives

We configure our GKE clusters to take advantage of several fully automated features that provide a wide variety of resources to applications; a brief scheduling sketch follows the list below. These features include:

  • Node Types: compute- vs. memory-optimized, GPU-capable, local storage
  • Auto Scaling: scales the number of nodes based on resource utilization (CPU, memory, GPU)
  • Node Pools: specific node pools are utilized depending on resource needs
  • Node Repair: an auto-repair process is initiated for unhealthy nodes
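
The sketch below, written against the Kubernetes Python client, illustrates how a workload can request a GPU and target a GPU-capable node pool so that the appropriate node type is provisioned by the autoscaler; the pool name "gpu-pool", the container image, and the namespace are placeholders, not the actual cluster configuration:

    # Sketch: steer a GPU workload onto a GPU-capable GKE node pool.
    # "gpu-pool" is a placeholder; cloud.google.com/gke-nodepool is the
    # standard GKE node-pool label.
    from kubernetes import client, config

    def make_gpu_pod(name: str = "gpu-example") -> client.V1Pod:
        container = client.V1Container(
            name="cuda-job",
            image="nvidia/cuda:12.4.0-base-ubuntu22.04",
            command=["nvidia-smi"],
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"},  # request one GPU
            ),
        )
        spec = client.V1PodSpec(
            restart_policy="Never",
            containers=[container],
            node_selector={"cloud.google.com/gke-nodepool": "gpu-pool"},
        )
        return client.V1Pod(metadata=client.V1ObjectMeta(name=name), spec=spec)

    if __name__ == "__main__":
        config.load_kube_config()  # assumes a kubectl context for the GKE cluster
        client.CoreV1Api().create_namespaced_pod(namespace="default",
                                                 body=make_gpu_pod())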

Storage Infrastructure

The RENCI Storage Infrastructure includes:

  • NetApp Clustered Data ONTAP
    • FAS8300 node HA Pair
      • 1.5PB Raw
    • AFF-A300 node HA Pair
      • 218TB Raw
    • FAS8200 node HA Pair
      • 3.3PB Raw
  • Isilon OneFS Cluster
    • Six NL410 nodes
      • 1.2PB Raw
      • 48GB RAM per node
      • 800GB SSD per node for R/W cache
    • Twelve A2000 nodes
      • 2.3PB Raw
      • 16GB RAM per node
      • 800GB SSD per node for R/W cache

Network Infrastructure

The RENCI production network connects to the North Carolina Research and Education Network (NCREN) and the University of North Carolina’s campus network. NCREN provides connectivity to Internet2 Layer 3 service at 100Gbps, and RENCI shares a 100G interface on AL2S for bandwidth-on-demand applications with other Triangle campuses (Duke, NCSU).

RENCI’s 20Gbps production connectivity to the outside world is provided through a pair of Arista routers managed by UNC ITS (Information Technology Services) and RENCI staff. Connectivity into the datacenter is facilitated through a mix of switches managed by UNC ITS and RENCI. RENCI’s internal datacenter network infrastructure is supported by two Arista switches configured as a Multi-Chassis Link Aggregation (MLAG) pair capable of supporting 10/25/100Gb/s connections. This allows RENCI to cleanly separate production, research, and experimental networking infrastructures so that they can coexist without interfering with each other.

The RENCI datacenter hosts a deployment of perfSONAR servers (ps1.renci.org and ps2.renci.org) as well as a Bro IDS that processes traffic at line rate.

The Layer 2 Breakable Experimental Network (BEN; http://ben.renci.org) is the primary platform for RENCI’s experimental network research. It consists of several segments of NCNI dark fiber across the Triangle area of North Carolina and is a time-shared resource available to the Triangle research community. BEN was created to promote scientific discovery by providing the universities with world-class infrastructure and resources for experimentation with disruptive technologies. It provides non-production network connectivity between RENCI, UNC main campus, Duke, and NCSU through PoPs (Points of Presence) distributed across the Triangle metro region that together form a research testbed. RENCI acts as caretaker of the facility as well as a participant in the experimentation activities on BEN.

On BEN, RENCI has deployed a number of Corsa virtualizable OpenFlow switches supporting 10Gbps connectivity across sites. Each site is also equipped with a load-generating server to support performance measurements.
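
As an illustration of the kind of programmability the Corsa switches expose, a minimal OpenFlow 1.3 controller application written with the Ryu framework might look like the following; this is a generic sketch of an experiment a researcher could run, not a BEN-specific configuration:

    # Minimal Ryu application: install a table-miss rule that sends unmatched
    # packets to the controller. Generic OpenFlow 1.3 example, not a BEN config.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissToController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # Match all packets and forward them to the controller, unbuffered.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                                match=match, instructions=inst))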

BEN can also act as a dark-fiber testbed, as the Corsa switches can be disconnected from the fiber and replaced by other types of equipment as needed by the research community.

FABRIC Infrastructure

RENCI is the lead organization for FABRIC MSRI-1 (https://whatisfabric.net), which is currently under construction. A number of FABRIC sites, including a RENCI development site, are deployed, each consisting of a small number of Dell AMD-based servers packed with GPUs, network cards, and other devices, 1.25PB of storage, a Dell management switch, and a Cisco 5500 dataplane switch. Other active sites, located throughout the United States and the world, can be seen on the map on the facility portal (https://portal.fabric-testbed.net). Many sites are connected to each other using dedicated 100Gbps or 1.2Tbps optical connectivity. FABRIC sites also interconnect with local infrastructure, including other NSF-funded computational resources and testbeds, to create a ‘testbed of testbeds’ that enables distributed experiments in novel network architectures, protocols, distributed systems, cybersecurity, IoT/edge clouds, ML/AI, and many other areas.

Virtualization Infrastructure

RENCI has two VMware vSphere Enterprise clusters that serve the needs of most projects.

  • Europa Center Cluster (located onsite at RENCI in the Europa datacenter)
    • 5 x Dell PowerEdge R740 servers, each with:
      • 2 x 2.1GHz Intel Xeon Gold 6252 processors (48 cores total)
      • 1.5TB system memory
      • 6 x 10GbE network connections
  • ITS Manning Cluster (located on campus in the ITS Manning datacenter)
    • 3 x Dell PowerEdge R740 servers, each with:
      • 2 x 2.4GHz Intel Xeon Gold 6148 processors (40 cores total)
      • 1.5TB system memory
      • 6 x 10GbE network connections

Visualization Infrastructure

The visualization component of RENCI is composed of conference rooms with video capability.

Europa Center Videoconferencing: There are seven video conferencing rooms at Europa: one with five projectors, two with three, and four with one projector or large LCD display.

Office Facilities

RENCI Space at Europa

Suite/Room       Description                      Sq. ft.
110, 130-140     Cubes/Office                     7,942
520              Cubes/Office                     2,019
540              Cubes/Office                     3,072
580              Cubes/Office/Conference Room     4,112
590              Cubes/Office/Conference Room     4,267
599              Cubes/Office                     3,057
Total                                             24,469

RENCI Space at ITS Manning

Suite/Room       Description                      Sq. ft.
1102             Social Computing Room (SCR)      448
1220-1226        Cubes/Office                     704
1230-1236        Cubes/Office                     688
3200, 3300       Equipment Room/Conference Room   811
Total                                             2,651

RENCI Total Current Space

Total: 27,073 sq. ft.