Federica Enactor Adaptor

The liaison activity with the NOVI and FEDERICA projects provides the possibility to extend the BonFIRE experimental resources with managed, geographically dispersed virtual networks offered by the FEDERICA infrastructure.

Thanks to the SFA API provided by the NOVI project, it is possible to create network slices on top of FEDERICA physical equipment. These network slices are isolated from each other, and there is no performance interference between users' experiments thanks to the virtualization capabilities offered by the Juniper equipment deployed in FEDERICA.

The FEDERICA enactor adaptor is responsible for interacting with the FEDERICA infrastructure through the NOVI SFA API, thereby providing BonFIRE with two additional types of resources:

  • Computing resources: virtual machines can be created inside the FEDERICA servers. These virtual machines are created on top of the VMware hypervisor. Several parameters such as CPU, HDD, memory, operating system and interfaces are specified in the request
  • Network resources: logical routers, logical interfaces and links between them
    • Logical routers: partitions of a physical router that share common resources such as CPU and memory, but have their own routing tables and configurations, so that they act as separate physical routers
    • Physical interfaces: can be partitioned into logical ones, with the possibility to connect logical interfaces among logical routers inside one physical router

The main advantage is that the experimenter can fully design the desired network topology, including IP configuration, dynamic routing protocol configuration (e.g. OSPF and BGP) as well as static routes.

Note

For a deeper insight into the development, please check Source code location.

APIs provided

The following APIs are offered by the FEDERICA adaptor.

SFA

The Slice-Based Federation Architecture (SFA) is described in the SFA Draft v2.0 and has been implemented by PlanetLab. SFA can be analysed as a set of fundamentals for creating an architecture for federated infrastructures based on a slice-based view.

SFA defines the minimal set of interfaces and data types that enable a federation of slice-based network components to interoperate. This architecture defines two key abstractions:

  • Component: primary building block of the architecture.

    A component might correspond to an edge computer, a customizable router, or a programmable access point. It comprises a collection of resources, including physical resources (e.g., CPU, memory, disk, bandwidth), logical resources (e.g., file descriptors, port numbers), and synthetic resources (e.g., packet forwarding fast paths). These resources can be contained in a single physical device or distributed across a set of devices, depending on the nature of the component. Each component is controlled via a component manager (CM), which exports a well-defined, remotely accessible interface. The component manager defines the operations available to user-level services to manage the allocation of component resources to different users and their experiments.

  • Slice: defined by a set of slivers spanning a set of network components, plus an associated set of users that are allowed to access those components for the purpose of running an experiment on the substrate.

SFA architecture and federation

Resource definition

SFA gives users access to heterogeneous resource types. The RSpec (resource specification) is the means by which SFA declares those resources. RSpecs provide a language for describing the resources (both physical and logical) exported by an aggregate (collection of resources). So far, SFA has taken a “bottom-up” approach to defining the RSpec, allowing each new type of aggregate to specify its own RSpec format using XML. The RSpec serves two purposes: to let the aggregate advertise information to the user about the available resources, and to enable the user to request a subset of the resources to be allocated to a slice. The aggregate manager is responsible for generating and processing RSpecs.

SFA defines three types of RSpecs:

  • Advertisement RSpec: exposes information about the testbed resources, or about the slivers attached to a slice
  • Request RSpec: when a slice is created, the resources that will form part of it are specified in a single XML document
  • Manifest RSpec: once the slice has been successfully created, this RSpec is returned. It is essentially an extension of the request RSpec with information for accessing the resources of the slice

Resource access

By formalizing the interface around the slice, resource owners and users are free to cooperate more easily. Owners reduce the administrative overhead of making their systems accessible to more users, and users gain access to interesting systems without the overhead of setup and administration.

The basic SFA operations are explained below:

  • GetSelfCredential(cert_string, hrn, "user")

    This operation is supported by the Registry. It provides a bootstrap mechanism for a user who does not yet have credentials to talk to the system, but is registered and can provide a certificate for his/her public key

  • Register(record, auth_cred)

    Using this operation, an authority can store a record in the Registry. The record can be of one of the following types: slice, node, user or authority

  • ListResources(creds, call_options)

    Depending on the call_options value, this operation will show the resources of a slice or all the resources of the testbed

  • CreateSliver(slice_urn, cred, rspec, users)

    This operation is misnamed for historical reasons: although it is called CreateSliver, the call actually creates the slice whose resources are specified in the RSpec

  • ListSlices(creds)

    This operation is used to learn the names of the slices instantiated on that component or aggregate

  • DeleteSliver(slice_urn, creds)

    Deletes the slice identified by the slice_urn. Again, the operation is misnamed for historical reasons
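
In PlanetLab's SFA implementation these operations are exposed over XML-RPC. The following Python sketch shows how a client might drive two of them; the endpoint URL, the geni_*-style option names and the exact argument shapes are assumptions for illustration, not the adaptor's actual code:

```python
import xmlrpc.client

# Hypothetical aggregate manager endpoint; the real URL comes from NOVI.
AM_URL = "https://sfa.example.org:12346"

def list_resources_options(slice_urn=None):
    """Build the call_options dict for ListResources: with a slice URN the
    call returns that slice's resources, without it the full testbed
    advertisement. The geni_* option names follow the GENI flavour of the
    SFA API and are an assumption here."""
    options = {"geni_rspec_version": {"type": "SFA", "version": "1"}}
    if slice_urn is not None:
        options["geni_slice_urn"] = slice_urn
    return options

def create_slice(server, slice_urn, cred, rspec, users):
    """Despite the name, CreateSliver instantiates the whole slice
    described by the request RSpec and returns a manifest RSpec."""
    return server.CreateSliver(slice_urn, [cred], rspec, users)

# Usage (requires a reachable aggregate manager and a valid credential):
# server = xmlrpc.client.ServerProxy(AM_URL)
# print(server.ListResources([cred], list_resources_options()))
```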

APIs used

OCCI

In the BonFIRE architecture, the Enactor maps OCCI calls to the APIs used by the various testbed facilities. So far in the project, these testbeds have all used OCCI variants, so this task has been relatively straightforward. Mapping from OCCI to SFA is a more challenging task. The first difficulty is that SFA is a document-based approach that describes the whole experiment in a single document, while OCCI builds up the experiment in multiple, single-resource requests. BonFIRE has already tackled this resource-model-to-document-model mapping problem for the Virtual Wall implementation. In that instance the mapping was carried out by a new service at the Virtual Wall facility that exposed an OCCI API to the Enactor.

When interconnecting FEDERICA, the facility will not provide an OCCI API, so the mapping must be performed by an Enactor plug-in (SFA Adaptor). Mapping from an OCCI model to a document model requires that the mapper holds the current state of the experiment as it is built up. More information can be found in the Mapping to SFA section.

The addition of new resource types to represent the new infrastructure resources (routers and links) has no great impact on the BonFIRE high-level architecture. All it requires is the addition of new OCCI resources and some changes to the OCCI of existing resources so that they can be connected to the new infrastructure resources. The challenge here is to design the new OCCI resources so that they integrate well with the existing resources, conform to the design philosophy of the OCCI specification, and seem natural to users familiar with these infrastructure resources.

OCCI extension

This API has been extended with the following elements:

  • XML document describing FEDERICA infrastructure (physical routers and physical nodes)
  • New <router> element
  • Additions to the existing <network> and <compute> OCCI resources

Moreover, the following assumptions have been considered during the definition of the new OCCI resources:

  • The experimenter provides sensible IP configuration within the network. No check of IP coherence is performed during the submission process
  • The experimenter has knowledge of the routers’ physical configuration, since he/she has to provide the configuration when the logical router is requested
  • FEDERICA resources cannot be modified during the experiment lifetime. This limitation may be solved in future releases. On the other hand, the OCCI is designed in a robust way: from the OCCI perspective, FEDERICA resources could be extended during the experiment lifetime by requesting additional logical routers, additional logical interfaces, new computing VMs or changing the router configuration. In future steps, the NOVI API will provide the functionality of updating or extending the experimenter’s slice, and by that moment no changes will be necessary in BonFIRE to support this feature

Physical resources

FEDERICA offers two kinds of resources: physical nodes and routers. These, as well as the interconnections between them, must be defined in a description file. Using this infrastructure description file, the portal displays to the experimenter the physical resources available through a visualization tool. The visualization tool shows the network topology and how/where the physical nodes are attached to the network nodes.

To describe a physical router, we list its interfaces and where each interface is connected:

<physical_router xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <host></host>
  <description> </description>
  <interface>
    <connected_to>
      <host> </host>
      <physical_interface> </physical_interface>
    </connected_to>
  </interface>
  ...
  ...
</physical_router>

A brief explanation of the tags:

  • <host>: names the physical router
  • <description>: additional information, such as the router OS
  • <connected_to>: where this network interface is connected
    • <host>: physical router or node connected to this interface
    • <physical_interface>: physical interface of the physical router or node connected to this interface
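
As an illustration, a filled-in description might look like the following; the host and interface names are taken from the Sample RSpec at the end of this section, while the description text is hypothetical:

```xml
<physical_router xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <host>dfn.erl.router1</host>
  <description>Juniper router supporting logical routers</description>
  <interface>
    <connected_to>
      <host>dfn.erl.vserver1</host>
      <physical_interface>vmnic7</physical_interface>
    </connected_to>
  </interface>
  <interface>
    <connected_to>
      <host>psnc.poz.router1</host>
      <physical_interface>ge-000</physical_interface>
    </connected_to>
  </interface>
</physical_router>
```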

Similarly, to describe a physical node, we list its interfaces and how they are interconnected with the FEDERICA routers:

<physical_node>
  <name></name>
  <description></description>
  <interface>
    <connected_to>
      <host></host>
      <physical_interface> </physical_interface>
    </connected_to>
  </interface>
  ...
  ...
</physical_node>

Router

We have cited the necessity of providing bounded requests when demanding FEDERICA resources. Hence, providing or displaying the physical infrastructure before submitting the experiment is a requirement. Experimenters can check in advance the physical infrastructure available and decide on top of which physical resources to request their network slice. The description of the physical infrastructure is provided in a separate XML file. This file describes the two types of resources offered by FEDERICA (routers and physical nodes) and the interconnections between them. (Note that physical nodes are powerful servers on which to deploy VMs.)

To request a logical router, we list its interfaces and provide its configuration:

<router xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <location href="/locations/federica" />
  <host></host>
  <name></name>
  <description></description>
  <interface>
    <name></name>
    <physical_interface></physical_interface>
    <ip></ip>
    <netmask></netmask>
  </interface>
  <config></config>
</router>

A brief explanation of the OCCI tags:

  • <location>: references the BonFIRE location where this router is going to be created. Initially, in the case of routers, it will always be /locations/federica/routers/
  • <host>: physical router where the logical router is requested. This tag will also be used in the computes when the experimenter wants to request computes into a specific physical node
  • <name>: name given to the logical router
  • <description>: free field for experimenter convenience
  • <interface>: logical ports requested in the logical router. A logical router can have as many interfaces as required by the experimenter. Each interface has the following parameters to define:
    • <name>: identifies the logical interface within the logical router
    • <ip>, <netmask>: IP configuration assigned to the logical interface
    • <physical_interface>: physical interface on which the logical interface will be created
  • <config>: field in which the experimenter provides the desired router configuration as plain text
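
Putting the tags together, a hypothetical request for a logical router on dfn.erl.router1 might look as follows; the router and interface names are borrowed from the Sample RSpec, while the description and the Junos-style configuration line are illustrative only:

```xml
<router xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <location href="/locations/federica" />
  <host>dfn.erl.router1</host>
  <name>RouterA</name>
  <description>Logical router for the experiment core</description>
  <interface>
    <name>eth_a1</name>
    <physical_interface>ge-022</physical_interface>
    <ip>192.168.1.2</ip>
    <netmask>255.255.255.0</netmask>
  </interface>
  <config>set protocols ospf area 0.0.0.0 interface all</config>
</router>
```
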

Network

Some remarkable changes are introduced in the network OCCI resource. Until now, experimenters were required to explicitly configure a network to which compute resources would be attached. After adding FEDERICA resources, <network> also describes the connectivity between routers. To do so, the <network_link> tag (and its children) have been added. The OCCI to request a network resource in FEDERICA is the following:

<network xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <location href="/locations/federica" />
  <description></description>
  <network_link>
    <endpoint>
      <router href="" />
      <router_interface></router_interface>
    </endpoint>
    <endpoint>
      <router href="" />
      <router_interface></router_interface>
    </endpoint>
  </network_link>
</network>

A brief explanation of the OCCI tags:

  • <location>: references the BonFIRE location where this network is going to be created
  • <description>: free field for experimenter convenience
  • <network_link>: specifies the link between two endpoints. There are as many network_link elements as required by the topology
  • <endpoint>: one of the ends of a link connecting two resources
  • <router href="">: references a previously created logical router
  • <router_interface>: name of the logical interface that comprises the link
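
For illustration, a hypothetical network linking two previously created logical routers could be requested as follows; the href paths and interface names are illustrative assumptions:

```xml
<network xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <location href="/locations/federica" />
  <description>Point-to-point link between RouterA and RouterC</description>
  <network_link>
    <endpoint>
      <router href="/locations/federica/routers/RouterA" />
      <router_interface>eth_a2</router_interface>
    </endpoint>
    <endpoint>
      <router href="/locations/federica/routers/RouterC" />
      <router_interface>eth_c1</router_interface>
    </endpoint>
  </network_link>
</network>
```
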

Compute

The compute OCCI is updated to incorporate access to the new controlled network resources. Two tags are added in the <nic> to specify that the compute is connected to FEDERICA resources. For that, the <nic> has to indicate to which logical router and logical interface it is linked. Furthermore, to request FEDERICA computes it is mandatory to fill in the tags <host>, <ip> and <netmask>, to specify from which physical node the experimenter requests a virtual machine and to which network it is connected. Filling in these tags is optional at other sites, such as OpenNebula ones, but as mentioned before, FEDERICA requires explicitly specifying the physical resources.

<compute xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <location href="/locations/federica" />
  <name></name>
  <host></host>
  <instance_type></instance_type>
  <disk>
    <storage href="" />
  </disk>
  <nic>
    <name></name>
    <device></device>
    <network href="" />
    <router href="" />
    <router_interface></router_interface>
    <ip></ip>
    <netmask></netmask>
  </nic>
</compute>

A brief explanation of the OCCI tags:

  • <host>: physical node where the VM is requested
  • <router href="">: references a previously created logical router
  • <router_interface>: logical interface name of the above-mentioned router, which is directly connected to the VM NIC
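
As an illustration, a hypothetical compute request attaching a VM on dfn.erl.vserver1 to a logical router might look as follows; all values, including the href paths and the instance type, are illustrative:

```xml
<compute xmlns="http://api.bonfire-project.eu/doc/schemas/occi">
  <location href="/locations/federica" />
  <name>ComputeX</name>
  <host>dfn.erl.vserver1</host>
  <instance_type>small</instance_type>
  <disk>
    <storage href="/locations/federica/storages/FBSD72-STD" />
  </disk>
  <nic>
    <name>if_x</name>
    <device>vmnic7</device>
    <network href="/locations/federica/networks/network-1" />
    <router href="/locations/federica/routers/RouterA" />
    <router_interface>eth_a1</router_interface>
    <ip>192.168.1.1</ip>
    <netmask>255.255.255.0</netmask>
  </nic>
</compute>
```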

Mapping to SFA

The OCCI to SFA conversion is performed at the Enactor, inside the SFA adaptor. The SFA adaptor functionality is explained in detail in section 5.3 and section 6.

Among other tasks, the SFA adaptor batches and groups the individual OCCI requests and translates them into a single SFA request. During this conversion process, some consistency checking and validation is carried out. This consists of checking that, after merging all the individual OCCI requests, the demanded slice can be mapped on top of the existing physical infrastructure, which means checking proper connectivity between resources. For instance, if an experimenter requests a compute in PSNC connected to the router named psnc.poz.router1, the Enactor will validate that a direct link really exists between the PSNC VM and the PSNC router. If any inconsistency is detected, it will throw an error message. Certain intelligence is envisaged in the Enactor to facilitate some automatic mapping in case the experimenter does not provide complete information.
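
The connectivity validation described above can be sketched as follows; the in-memory topology table is a hypothetical rendering of the infrastructure description file, and the function name is illustrative rather than the adaptor's real API:

```python
# host -> set of directly connected hosts, as would be derived from the
# <connected_to> entries of the infrastructure description XML.
TOPOLOGY = {
    "psnc.poz.router1": {"psnc.poz.vserver1", "dfn.erl.router1"},
    "psnc.poz.vserver1": {"psnc.poz.router1"},
    "dfn.erl.router1": {"dfn.erl.vserver1", "psnc.poz.router1"},
    "dfn.erl.vserver1": {"dfn.erl.router1"},
}

def validate_compute_link(compute_host, router_host, topology=TOPOLOGY):
    """Check that a direct physical link exists between the physical node
    hosting the VM and the physical router hosting the logical router;
    raise an error otherwise, mirroring the Enactor's consistency check."""
    if router_host not in topology.get(compute_host, set()):
        raise ValueError(
            f"no direct link between {compute_host} and {router_host}")
    return True
```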

The next diagram describes how the BonFIRE OCCI maps to the SFA RSpec defined in NOVI.

OCCI mapping to SFA

BonFIRE–NOVI workflow

This section explains the sequences of steps that need to be taken in BonFIRE when:

  • an experimenter requests to visualize the available controlled network resources prior to submitting a request
  • an experimenter wishes to make a request for controlled network resources
  • an experimenter wants to check slice resources or release the allocated resources

Creating experiment

In BonFIRE, an experiment can be composed of several resources. For each one of them, the BonFIRE Resource Manager sends an OCCI creation request associated with a BonFIRE Experiment ID. The SFA-Adaptor first checks whether there is an experiment associated with that ID in the database; if there is none, it creates a new experiment entry and returns an OK message saying that the resource was created successfully (it verifies that the OCCI sent by the Resource Manager is well formed). The Resource Manager can then send more resources associated with the same Experiment ID; those resources are stored in the Enactor DB. After all resources are created, the Experiment will send a “GO” message, indicating to the SFA-Adaptor that all resources for the experiment are there and its execution can start. Similarly to how the Virtual Wall works in BonFIRE, the whole group of OCCI requests representing the entire experiment’s resources is batched in the Enactor.

The SFA-Adaptor will read all the resource information from the Enactor DB and create an RSpec document from it. This RSpec is then forwarded to NOVI’s Aggregate Manager (AM), which finally allocates the slice and confirms success by sending back a Manifest response. After receiving the Manifest response, the broker replies with a Change status OK and the experimenter can start his/her experimentation.

Creating an experiment through the SFA-Adaptor sequence diagram

Getting resource information

If the experimenter wants to query the resources of an experiment already started, the system does not need to ask the NOVI AM, since this information is already stored in the Enactor DB. The SFA adaptor generates a query to the Enactor DB, retrieves the experiment details and sends them back to the experimenter.

Query of experiment resources

Deleting experiment

When deleting an experiment, the SFA-Adaptor translates the message coming from the Resource Manager into the RSpec message that conveys the Delete Slice request to the NOVI AM. During the deletion process, apart from releasing the slice by contacting the NOVI AM, the experiment and its resource identifiers also need to be deleted from the Enactor DB.

Delete an experiment

Message queue use

Writes to the Management Message queue. See MMQ.

Implementation details

This adaptor works as follows:

  1. The SFA API expects one single XML document (see Sample RSpec) in which all the resources belonging to the slice are defined. In the FEDERICA case, however, the Resource Manager sends the OCCI requests for resource creation one by one.
  2. After a basic verification of the OCCI sent by the Resource Manager, those requests are stored temporarily in a database.
  3. The Enactor sends back to the Resource Manager an OK message, saying that the resource was created successfully.
  4. When everything is ready to create the RSpec (the experimenter sets the experiment to RUNNING), the Enactor creates the Request RSpec and sends it to the SFA testbed.
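
The steps above can be sketched as a minimal Python class; the class and method names are illustrative, and the OCCI verification is reduced to a placeholder check:

```python
class SfaAdaptor:
    """Sketch of the batching flow: OCCI requests are stored per
    experiment, and only when the experiment goes RUNNING is the single
    request RSpec assembled."""

    def __init__(self):
        self.db = {}  # experiment_id -> list of stored OCCI documents

    def create_resource(self, experiment_id, occi_doc):
        # Steps 1-3: basic verification (placeholder), store, acknowledge.
        if "<" not in occi_doc:
            raise ValueError("not an OCCI XML document")
        self.db.setdefault(experiment_id, []).append(occi_doc)
        return "OK"

    def go(self, experiment_id):
        # Step 4: merge all stored requests into one request RSpec
        # (the real adaptor builds a full NOVI RSpec, not a bare wrapper).
        body = "\n".join(self.db[experiment_id])
        return f'<rspec type="request">\n{body}\n</rspec>'
```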

For the moment, FEDERICA resources are connected to EPCC and PSNC. Thus, FEDERICA networks can only be used for interconnecting those BonFIRE sites.

Source code location

Code is located under broker/Enactor/trunk/src/main/java/eu/bonfire/broker/enactor/endpoints/.

Sample RSpec

Below is a sample RSpec in which the whole set of resources is requested in the SFA fashion:

<?xml version="1.0" encoding="UTF-8"?>
<rspec xmlns="http://sorch.netmode.ntua.gr/ws/RSpec"
       xmlns:cc="http://sorch.netmode.ntua.gr/ws/RSpec/ext/federica"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://sorch.netmode.ntua.gr/ws/RSpec
                           http://sorch.netmode.ntua.gr/ws/RSpec/request.xsd
                           http://sorch.netmode.ntua.gr/ws/RSpec/ext
                           http://sorch.netmode.ntua.gr/ws/RSpec/ext/federica/vm.xsd"
      type="request">

  <!--routerA-->
  <node client_id="RouterA" exclusive="false"
    component_manager_id="urn:publicid:IDN+federica.eu+authority+cm"
    component_id="urn:publicid:IDN+federica.eu+node+dfn.erl.router1">
      <hardware_type name="genericNetworkDevice" />
      <sliver_type name="router" />
      <services>
        <login authentication="ssh-keys" hostname="r1.erl.de" port="22"/>
      </services>
      <interface client_id="RouterA:eth_a1"
        component_id="urn:publicid:IDN+federica.eu+interface+dfn.erl.router1:ge-022"
        cc:exclusive="false">
          <ip address="192.168.1.2" netmask="255.255.255.0" type="ipv4"/>
      </interface>
      <interface client_id="RouterA:eth_a2"
        component_id="urn:publicid:IDN+federica.eu+interface+dfn.erl.router1:ge-010"
        cc:exclusive="false">
          <ip address="192.168.2.1" netmask="255.255.255.0" type="ipv4"/>
      </interface>
    </node>

  <!--routerC-->
  <node client_id="RouterC" exclusive="false"
    component_manager_id="urn:publicid:IDN+federica.eu+authority+cm"
    component_id="urn:publicid:IDN+federica.eu+node+psnc.poz.router1">
      <hardware_type name="genericNetworkDevice" />
      <sliver_type name="router" />
      <services>
        <login authentication="ssh-keys" hostname="r1.poz.pl" port="22"/>
      </services>
      <interface client_id="RouterC:eth_c1"
        component_id="urn:publicid:IDN+federica.eu+interface+psnc.poz.router1:ge-000"
        cc:exclusive="false">
          <ip address="192.168.2.2" netmask="255.255.255.0" type="ipv4"/>
      </interface>
      <interface client_id="RouterC:eth_c2"
        component_id="urn:publicid:IDN+federica.eu+interface+garr.mil.router1:ge-013"
        cc:exclusive="false">
          <ip address="192.168.3.1" netmask="255.255.255.0" type="ipv4"/>
      </interface>
  </node>

  <node client_id="ComputeX" exclusive="false">
    <hardware_type name="pc" />
    <sliver_type name="vm">
      <cc:compute_capacity cpuSpeed="1000000" numCpuCores="0.25" ramSize="250000"
        diskSize="100000" guestOS="Freebsd" />
      <disk_image name="federica-ops//FBSD72-STD" />
    </sliver_type>
    <services>
      <login authentication="ssh-keys"/>
    </services>
    <interface client_id="ComputeX:if_x" cc:exclusive="false"/>
  </node>

  <node client_id="ComputeZ" exclusive="false">
    <hardware_type name="pc" />
    <sliver_type name="vm">
      <cc:compute_capacity cpuSpeed="1000000" numCpuCores="0.25" ramSize="250000"
        diskSize="100000" guestOS="Freebsd" />
      <disk_image name="federica-ops//FBSD72-STD" />
    </sliver_type>
    <services>
      <login authentication="ssh-keys"/>
    </services>
    <interface client_id="ComputeZ:if_z" cc:exclusive="false"/>
  </node>

  <link client_id="vlink1">
    <component_manager name="urn:publicid:IDN+federica.eu+authority+cm"/>
    <component_hop
      component_id="urn:publicid:IDN+federica.eu+link+dfn.erl.router1:ge-022-dfn.erl.vserver1:vmnic7">
        <interface_ref
          component_id="urn:publicid:IDN+federica.eu+interface+dfn.erl.router1:ge-022"
          component_manager_id="urn:publicid:IDN+federica.eu+authority+cm"/>
          <interface_ref
            component_id="urn:publicid:IDN+federica.eu+interface+dfn.erl.vserver1:vmnic7"
            component_manager_id="urn:publicid:IDN+federica.eu+authority+cm" />
    </component_hop>
    <interface_ref client_id="RouterA:eth_a1" />
    <interface_ref client_id="ComputeX:if_x"/>
  </link>

  <link client_id="vlink2">
    <component_manager name="urn:publicid:IDN+federica.eu+authority+cm"/>
    <component_hop
      component_id="urn:publicid:IDN+federica.eu+link+dfn.erl.router1:ge-010-psnc.poz.router1:ge-000">
        <interface_ref
          component_id="urn:publicid:IDN+federica.eu+interface+dfn.erl.router1:ge-010"
          component_manager_id="urn:publicid:IDN+federica.eu+authority+cm"/>
        <interface_ref
          component_id="urn:publicid:IDN+federica.eu+interface+psnc.poz.router1:ge-000"
          component_manager_id="urn:publicid:IDN+federica.eu+authority+cm" />
    </component_hop>
    <interface_ref client_id="RouterC:eth_c1" />
    <interface_ref client_id="RouterA:eth_a2"/>
  </link>

  <link client_id="vlink3">
    <component_manager name="urn:publicid:IDN+federica.eu+authority+cm"/>
    <component_hop
      component_id="urn:publicid:IDN+federica.eu+link+psnc.poz.router1:ge-013-psnc.poz.vserver1:vmnic0">
      <interface_ref
        component_id="urn:publicid:IDN+federica.eu+interface+psnc.poz.router1:ge-013"
        component_manager_id="urn:publicid:IDN+federica.eu+authority+cm"/>
          <interface_ref
            component_id="urn:publicid:IDN+federica.eu+interface+psnc.poz.vserver1:vmnic0"
            component_manager_id="urn:publicid:IDN+federica.eu+authority+cm" />
    </component_hop>
    <interface_ref client_id="RouterC:eth_c2" />
    <interface_ref client_id="ComputeZ:if_z"/>
  </link>

</rspec>