Infrastructure

This section provides an overview of the infrastructure specifications at the individual sites in BonFIRE. The features that each site supports are not covered here; for those, refer to the page on Using the BonFIRE Testbeds. Also note that each site supports different Instance Types, which define the specification of the compute resources you can deploy in terms of CPU and RAM.

Each of the local infrastructures offers a set of permanent and on-request resources. The permanent resources can be used for small-scale experiments and are available permanently during the BonFIRE project lifetime. The on-request infrastructure is available for large-scale testing, but is not exclusively dedicated to the BonFIRE project. These resources can only be reserved by requesting them directly from the infrastructure provider. The BonFIRE infrastructure map depicted below provides an overview of the permanent and on-request resources available.

BonFIRE provides an infrastructure Health Map to give an overview of the operational status of all testbed sites. For more details about the infrastructure offered at each site, see below.


Figure: BonFIRE infrastructure map for permanent and on-request resources

Permanent Resources:

Provider   Cores   RAM (GB)   Storage (TB)   Nodes
EPCC       176     416        6-16           7
iMinds     192     384        4              16
Inria      96      256        2.4            4
HLRS       344     1068       12             36
PSNC       48      192        0.6            4

On-Request Resources (X = no on-request resources offered):

Provider   Cores   RAM (GB)   Storage (TB)   Nodes
EPCC       X       X          X              X
iMinds     816     3000       17             68
Inria      1672    4495       40             >160
HLRS       X       X          X              X
PSNC       X       X          X              X

The BonFIRE infrastructure providers are:

  • The University of Edinburgh, United Kingdom (EPCC, uk-epcc)
  • Interdisciplinary Institute for Broadband Technology, Belgium (iMinds, be-ibbt)
  • Inria, France (fr-inria)
  • Universität Stuttgart, Germany (USTUTT-HLRS, de-hlrs)
  • The Poznan Supercomputing and Networking Centre, Poland (PSNC, pl-psnc)

EPCC

General configuration

uk-epcc runs OpenNebula, in a version derived from OpenNebula 3.0 for BonFIRE.

  • Hypervisor used: Nodes run Xen 3.1.
  • Image management: Block devices are managed using a custom TM driver.
  • Image storage: Images are stored in the raw format.
  • OpenNebula scheduler configuration: These values are subject to frequent changes; their meaning is explained at http://opennebula.org/documentation:archives:rel3.0:schg. A sketch of how these options could be passed to the scheduler follows this list.
    • -t (seconds between two scheduling actions): 30
    • -m (max number of VMs managed in each scheduling action): 300
    • -d (max number of VMs dispatched in each scheduling action): 30
    • -h (max number of VMs dispatched to a given host in each scheduling action): 1
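As a minimal illustration, the sketch below (Python) assembles these options into an argument list for the OpenNebula scheduler (mm_sched). The binary path and the exact invocation are assumptions for illustration only; the authoritative description of the options is the rel3.0 scheduler page linked above, and the same pattern applies to the scheduler values listed for the other sites later in this section:

    # Hedged sketch: the EPCC scheduler options above expressed as a Python dict
    # and turned into a command line for the OpenNebula scheduler. The binary
    # path is a hypothetical placeholder, not the site's actual start script.
    EPCC_SCHED_OPTS = {
        "-t": 30,   # seconds between two scheduling actions
        "-m": 300,  # max number of VMs managed in each scheduling action
        "-d": 30,   # max number of VMs dispatched in each scheduling action
        "-h": 1,    # max number of VMs dispatched to a given host per action
    }

    args = ["/srv/cloud/one/bin/mm_sched"]   # hypothetical scheduler path
    for flag, value in EPCC_SCHED_OPTS.items():
        args += [flag, str(value)]

    print(" ".join(args))
    # -> /srv/cloud/one/bin/mm_sched -t 30 -m 300 -d 30 -h 1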

Permanent resources

EPCC provides eight dedicated nodes as permanent resources. One dedicated node hosts the EPCC front-end and service VMs. It currently offers 6 TB of storage to BonFIRE, with another 6-10TB available, subject to RAID configuration.

The other seven nodes are worker nodes, dedicated to host BonFIRE user VMs. Their architecture is as follows:

  • Compute: total core count is 176; total memory is 416GB, of which approximately 400GB is available for VMs, as follows:
    • 2 nodes: 4 x 12-core AMD Opteron 6176 (2.3GHz) with 128GB of memory each
    • 5 nodes: 2 x 8-core AMD Opteron 4274 HE (2.5GHz, 2.8GHz turbo frequency) with 32GB of memory each
  • Network: 1 Gb/s Ethernet. 12 public IP addresses are presently available for VMs (more may be made available in future). There are no firewall restrictions on public interfaces. Private (to an experiment) internal networks are supported. Public IPv6 networking is available and enabled for all VMs deployed on the BonFIRE WAN.

This infrastructure is monitored for power consumption using eMAA12 PDUs from Eaton.

Information about Physical Host Details at EPCC.

On-request resources

EPCC has chosen to provide a substantial infrastructure that is permanently available to BonFIRE users; no on-request resources are available.

Networking

The nodes are connected by Gigabit Ethernet. 12 public IP addresses are presently available for VMs (more may be made available in future). There are no firewall restrictions on public interfaces.

Other networking features available are a bandwidth-on-demand service to PSNC via AutoBAHN, interconnection with FEDERICA and private (to an experiment) internal networks. Public IPv6 networking is available and enabled for all VMs deployed on the BonFIRE WAN.

iMinds

General configuration

be-ibbt, operated by iMinds, runs Emulab for configuring the network topology and impairments. No virtualization technologies are used, as each image is mapped onto one hardware node.

Permanent resources

be-ibbt makes 16 nodes available as dedicated resources. These Dual Intel Xeon E5645 (2.4 GHz) six-core nodes each have 24GB RAM and 250 GB storage. Up to 6 Gigabit Ethernet networking interfaces are available per node. These interfaces allow interconnecting the nodes in an emulated network, with controlled link characteristics for delay, bandwidth capacity and packet loss rate.
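As an illustration of how such controlled link characteristics might be expressed when creating a network resource through the BonFIRE API, the sketch below builds an OCCI-style XML description in Python. The element names (bandwidth, latency, lossrate), their units and the broker URL are illustrative assumptions rather than the definitive schema; see Using the BonFIRE Testbeds for the authoritative format:

    # Hedged sketch: a network description with emulated link characteristics
    # for be-ibbt. Element names, units and the URL below are hypothetical
    # placeholders, not the confirmed BonFIRE OCCI schema.
    import xml.etree.ElementTree as ET

    network = ET.Element("network", xmlns="http://api.bonfire-project.eu/doc/schemas/occi")
    ET.SubElement(network, "name").text = "emulated-lan"
    ET.SubElement(network, "bandwidth").text = "100"   # assumed unit: Mbit/s
    ET.SubElement(network, "latency").text = "20"      # assumed unit: milliseconds
    ET.SubElement(network, "lossrate").text = "0.01"   # assumed: fraction of packets lost

    print(ET.tostring(network, encoding="unicode"))
    # The resulting document would be POSTed to the site's networks collection,
    # e.g. https://api.bonfire-project.eu/locations/be-ibbt/networks (illustrative URL).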

On-request resources

Up to 84 nodes can be reserved for an experiment for a limited period, provided the reservation is made sufficiently in advance. Using all 84 nodes would make a total of 2,016 GB of RAM and 21 TB of storage available.

Inria

General configuration

fr-inria runs OpenNebula, in a version derived from OpenNebula 3.6 for BonFIRE.

  • Hypervisor used: Nodes run Xen 4.1.

  • Image management: Inria's setup has been described in a blog entry on OpenNebula's blog platform: http://blog.opennebula.org/?author=59

    NFS is configured on the hypervisor of the service machine and mounted on the OpenNebula front-end and on the workers. The TM drivers are modified to use dd to copy VM images from the NFS mount to the local disk of the worker node (a local logical volume, to be precise), and cp to save VM images back to NFS. This way, we:

    • have an efficient copy of images to workers (no SSH tunnelling)
    • may see a significant performance improvement thanks to the NFS cache
    • do not suffer from concurrent write access to NFS, because VMs are booted from a local copy

    A sketch of this copy logic is given after this list.
  • Image storage: Images are stored in the raw format.

  • OpenNebula scheduler configuration: These values are subject to frequent changes; their meaning is explained at http://opennebula.org/documentation:archives:rel3.0:schg

    • -t (seconds between two scheduling actions): 10
    • -m (max number of VMs managed in each scheduling action): 300
    • -d (max number of VMs dispatched in each scheduling action): 30
    • -h (max number of VMs dispatched to a given host in each scheduling action): 2
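The sketch below illustrates the copy logic described under Image management: dd clones an image from the NFS mount onto a local logical volume on the worker, and cp saves it back to NFS. All paths and device names are hypothetical placeholders; the real implementation is Inria's modified set of OpenNebula TM driver scripts, not this Python code:

    # Hedged sketch of the transfer logic described above. Paths and the LV
    # name are hypothetical placeholders, not Inria's actual configuration.
    import subprocess

    NFS_IMAGE = "/srv/cloud/images/base.img"   # image on the NFS mount (hypothetical)
    LOCAL_LV = "/dev/vg_worker/lv_vm_42"       # local logical volume on the worker (hypothetical)

    def clone_to_worker():
        """Deploy: copy the image from NFS onto the worker's local LV with dd."""
        subprocess.check_call(["dd", "if=" + NFS_IMAGE, "of=" + LOCAL_LV, "bs=64M"])

    def save_to_nfs(target="/srv/cloud/images/saved-42.img"):
        """Save: copy the block device back to an image file on NFS with cp."""
        subprocess.check_call(["cp", LOCAL_LV, target])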

Available resources

Permanent resources

Inria provides 4 dedicated worker nodes (DELL PowerEdge C6220 machines) as permanent resources. These worker nodes have the following characteristics:

  • CPU: 2 x Intel Xeon E5-2620 @ 2.00GHz (6 cores each), with Hyper-Threading enabled
  • Memory: 64GiB, in 8 x 8GiB DDR3 1600MHz memory banks
  • Local storage: 2 x 300GB SAS disks
  • Network: 2 x 1 Gb/s Ethernet links bonded together. 128 public IP addresses are presently available for VMs. Only TCP ports 80, 443 and 22 are open.

One server node with 2+8 disks (RAID 1 for the system, RAID 5 across 8 x 600GB 10k SAS disks), 6 cores, 48GB of RAM and 2 network cards with 4 Gigabit ports hosts the different services needed to run the local testbed. Gigabit Ethernet interconnections are available between these nodes, with bonding to increase performance (2 Gb/s on the worker nodes, 4 Gb/s on the server).

This infrastructure is monitored for power consumption using eMAA12 PDUs from Eaton.

Information about Physical Host Details at Inria.

On-request resources

fr-inria can expand onto the 160+ nodes of Grid'5000 located in Rennes. When using on-request resources of Grid'5000, BonFIRE users have a dedicated pool of machines that can be reserved in advance for better control of experiment conditions, yet remain accessible through the standard BonFIRE API. The interface of the reservation system is documented in the dedicated page.

When requesting resources, a description of the nodes made available is shown on the web interface, so the user can choose between the 4 available types of nodes. The Parapluie nodes are instrumented for power consumption with the same PDUs as the permanent infrastructure.

USTUTT

Like most of the other providers, de-hlrs runs OpenNebula, in a version derived from OpenNebula 3.6 for BonFIRE.

General configuration

  • Hypervisor used: Nodes run Xen 3.1.2.
  • Image management: Block devices are managed using the same modified version of the LVM manager as used by Inria.
  • Image storage: Images are stored in the raw format.
  • OpenNebula scheduler configuration: These values are subject to frequent changes; their meaning is explained at http://opennebula.org/documentation:archives:rel3.0:schg
    • -t (seconds between two scheduling actions): 30
    • -m (max number of VMs managed in each scheduling action): 300
    • -d (max number of VMs dispatched in each scheduling action): 30
    • -h (max number of VMs dispatched to a given host in each scheduling action): 1

Permanent resources

USTUTT provides 36 dedicated worker nodes with different hardware combinations:

  • 14 nodes: 2x Quad Core Intel Xeon @ 2.83 GHz, 16GB RAM
  • 14 nodes: 2x Quad Core Intel Xeon @ 2.83 GHz, 32GB RAM
  • 6 nodes: 2x Dual Core Intel Xeon @ 3.2 GHz, 2GB RAM
  • 2 nodes: 4x AMD Opteron 12 cores @ 2.6GHz, 196GB RAM

These four node types offer users a range of hardware for their experiments. Every node can be accessed directly by specifying the <host> element during the creation of a compute resource, as sketched below. For an overview of all available resources, USTUTT provides a current summary of the worker nodes at http://nebulosus.rus.uni-stuttgart.de/one-status.txt. In addition, a storage server with a total of 12TB of disk space is available to BonFIRE experimenters. All these nodes are connected via Gigabit Ethernet.
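The sketch below shows how the node summary and the <host> element could be used together: it fetches the status file mentioned above and prepares a compute description pinned to a specific worker node. The compute XML layout and the worker name are illustrative assumptions; only the status URL and the <host> element itself are taken from this page:

    # Hedged sketch: fetch the USTUTT worker-node summary and build a compute
    # description pinned to one node via the <host> element. The XML layout and
    # the worker name are illustrative, not the definitive BonFIRE schema.
    import urllib.request
    import xml.etree.ElementTree as ET

    STATUS_URL = "http://nebulosus.rus.uni-stuttgart.de/one-status.txt"
    status = urllib.request.urlopen(STATUS_URL, timeout=10).read().decode()
    print("\n".join(status.splitlines()[:5]))   # first few lines of the node summary

    compute = ET.Element("compute", xmlns="http://api.bonfire-project.eu/doc/schemas/occi")
    ET.SubElement(compute, "name").text = "pinned-experiment-vm"
    ET.SubElement(compute, "host").text = "node01"   # hypothetical worker name from the summary
    print(ET.tostring(compute, encoding="unicode"))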

This infrastructure is monitored for power consumption using eMAA12 PDUs from Eaton.

Information about Physical Host Details at HLRS.

On-request resources

Currently, USTUTT is not providing any additional on-request infrastructure.

PSNC

pl-psnc runs OpenNebula, in a version derived from OpenNebula 3.6 for BonFIRE.

General configuration

  • Hypervisor used: Nodes run QEMU/KVM 1.0.
  • Image management: Images are managed via SSH.
  • Image storage: Images are stored in the raw format.
  • OpenNebula scheduler configuration: These values are subject to frequent changes; their meaning is explained at http://opennebula.org/documentation:archives:rel3.0:schg
    • -t (seconds between two scheduling actions): 10
    • -m (max number of VMs managed in each scheduling action): 10
    • -d (max number of VMs dispatched in each scheduling action): 2
    • -h (max number of VMs dispatched to a given host in each scheduling action): 1

Permanent resources

PSNC provides four dedicated nodes as permanent resources. Each node offers two six-core Intel Xeon E5645 processors and a total of 600GB of storage. Switched Gigabit Ethernet interconnections are available between multiple interfaces on these nodes.

  • 4 nodes: 2x Intel Xeon E5645 @ 2.4 GHz, 48GB RAM, 600 GB disk (RAID1)

These nodes are used for both the production and integration testbeds, including the management node.

On-request resources

PSNC has chosen to provide a substantial infrastructure that is permanently available to BonFIRE users; no additional on-request resources will be made available.