Virtual Wall OCCI

The Virtual Wall runs on top of Emulab. We extended the BonFIRE OCCI API to expose some of the advanced capabilities of the Emulab framework, the most important being its advanced network emulation features. For the BonFIRE project, an NFS storage server has been attached to the Virtual Wall and the BonFIRE WAN, giving experimenters advanced shareable storage resources.

[Figure: Virtual Wall architecture overview (VW_fig.png)]

APIs provided

The Virtual Wall’s OCCI server implements the full BonFIRE OCCI XML Schema and Protocol. Because Emulab, the software that drives the Virtual Wall, offers capabilities beyond standard cloud provisioning, some extensions are provided on top of that schema.

Source code location

All code is maintained in the BonFIRE SVN.

Advanced Network features

The Emulab framework is specifically targeted at creating network topologies ranging from small setups to large, complex ones. These emulated networks can be fully and dynamically controlled: for example, the bandwidth of individual network links can be changed during the experiment. Bandwidth is not the only adjustable parameter; the complete set of manageable parameters is bandwidth, latency and lossrate.

Besides adjusting these network parameters, it is also possible to turn networks on or off. In this way an experimenter can emulate Computes going down or coming back up without the hassle of actually shutting down the Compute and having to reconfigure it afterwards. This is done by changing the state of the Network.
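Building the state-change payload could look like the sketch below. Note that the `<state>` element and its `up`/`down` values are assumptions inferred from the `be-ibbt.network.state.active.up`/`.down` events listed later in this page; the real schema may differ.

```python
import xml.etree.ElementTree as ET

OCCI_NS = "http://api.bonfire-project.eu/doc/schemas/occi"


def network_state_body(state: str) -> bytes:
    """Build a hypothetical state-change body for a Network resource.

    The <state> element and the "up"/"down" values are assumptions
    inferred from the network state events, not confirmed schema.
    """
    if state not in ("up", "down"):
        raise ValueError("state must be 'up' or 'down'")
    root = ET.Element(f"{{{OCCI_NS}}}network")
    ET.SubElement(root, f"{{{OCCI_NS}}}state").text = state
    return ET.tostring(root, encoding="UTF-8", xml_declaration=True)


body = network_state_body("down")
```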

It is also possible for the experimenter to set up a network with active background traffic. To determine the influence of background traffic on the system under test, transport layer streams can be created on each of the network links. These streams use either UDP or TCP, selected through the protocol parameter. The bandwidth of a stream is controlled by the other two parameters: packetsize (in bytes) and throughput (in packets per second).
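The stream’s offered bandwidth follows directly from those two parameters; a quick sketch of the arithmetic (payload only, ignoring protocol headers):

```python
def offered_load_mbps(packetsize_bytes: int, throughput_pps: int) -> float:
    """Offered load of a traffic generator stream in Mbps.

    packetsize is in bytes and throughput in packets per second, as in
    the Network attributes; payload only, protocol headers ignored.
    """
    return packetsize_bytes * 8 * throughput_pps / 1_000_000


# 1500-byte packets at 1000 packets per second -> 12 Mbps of payload
load = offered_load_mbps(1500, 1000)
```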

Most recently we also added a very advanced Emulab feature to the BonFIRE OCCI XML Schema: Emulab’s buffering strategy. This makes it possible to change how Emulab queues packets in its emulated networks. There are three possible buffering strategies: DropTail (the default), RED and GRED. The last two require extra parameters to be filled in: queue-in-bytes, limit, maxthresh, thresh, linterm and q_weigth.

Example of a fully featured Virtual Wall Network resource:

<?xml version="1.0" encoding="UTF-8"?>
<network xmlns="http://api.bonfire-project.eu/doc/schemas/occi" href="/locations/be-ibbt/networks/27">
  <id>27</id>
  <name>Network name</name>
  <address>192.168.0.0</address>
  <public>No</public>
  <size>C</size>
  <!-- managed network -->
  <bandwidth>1000</bandwidth>
  <latency>10</latency>
  <lossrate>0.1</lossrate>
  <!-- active network -->
  <throughput>200</throughput>
  <protocol>UDP</protocol>
  <packetsize>10</packetsize>
  <!-- buffering strategy -->
  <strategy>RED</strategy>
  <queue-in-bytes>0</queue-in-bytes>
  <limit>10</limit>
  <maxthresh>15</maxthresh>
  <thresh>5</thresh>
  <linterm>2</linterm>
  <q_weigth>20</q_weigth>
</network>
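Reading such a resource back, the emulation parameters can be extracted with a namespace-aware parser; a minimal sketch using Python’s standard library, against an abridged copy of the example above:

```python
import xml.etree.ElementTree as ET

NS = {"occi": "http://api.bonfire-project.eu/doc/schemas/occi"}

# Abridged version of the example Network resource shown above.
xml_doc = """<?xml version="1.0" encoding="UTF-8"?>
<network xmlns="http://api.bonfire-project.eu/doc/schemas/occi"
         href="/locations/be-ibbt/networks/27">
  <id>27</id>
  <bandwidth>1000</bandwidth>
  <latency>10</latency>
  <lossrate>0.1</lossrate>
</network>"""

root = ET.fromstring(xml_doc)


def child_text(tag: str, default=None):
    """Return the text of a direct child element in the OCCI namespace."""
    node = root.find(f"occi:{tag}", NS)
    return node.text if node is not None else default


bandwidth = int(child_text("bandwidth"))  # Mbps
latency = int(child_text("latency"))      # ms
lossrate = float(child_text("lossrate"))  # fraction between 0 and 1
```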

Network attributes

A network resource can have a number of attributes:

  • name: The network name.
  • description: The network description.
  • address: The address you wish for your network subnet.
  • size: The size of the network subnet. Possible values are A, B or C (these letters refer to the IANA-reserved private IPv4 network ranges) or a CIDR prefix.
  • public: A flag that specifies if that network resource can be seen and used by anyone. Possible values are YES or NO. Defaults to NO.
  • latency: The controlled latency (in ms) of the network. Defaults to 0 (zero - no controlled latency).
  • bandwidth: The controlled bandwidth (in Mbps) of the network. Defaults to 1000Mbps.
  • lossrate: The controlled loss rate (a value between 0 and 1) to introduce in the network. Defaults to 0 (zero).
  • throughput: A traffic generator (TG) can be added to the network. The throughput of the TG is specified in #packets/s. Optional, but if one of the TG parameters is specified, all three of them need to be there (throughput, protocol and packetsize).
  • protocol: The protocol of the TG. Should be either UDP or TCP. Optional, but if one of the TG parameters is specified, all three of them need to be there (throughput, protocol and packetsize).
  • packetsize: The packet size used by the TG (in bytes). Optional, but if one of the TG parameters is specified, all three of them need to be there (throughput, protocol and packetsize).
  • strategy: The queue strategy used. Optional. Default value is DropTail, while other possible values are RED and GRED.
  • queue-in-bytes: Use bytes instead of packets in the following parameters. Optional, but required when either the RED or GRED strategy is selected. Value must be either 0 or 1.
  • limit: The queue size in packets. Optional, but required when either the RED or GRED strategy is selected.
  • maxthresh: The maximum threshold for the average queue size in packets. Optional, but required when either the RED or GRED strategy is selected.
  • thresh: The minimum threshold for the average queue size in packets. Optional, but required when either the RED or GRED strategy is selected.
  • linterm: As the average queue size varies between thresh and maxthresh, the packet dropping probability varies between 0 and 1/linterm. Optional, but required when either the RED or GRED strategy is selected.
  • q_weigth: The queue weight, used in the exponentially weighted moving average for calculating the average queue size. Optional, but required when either the RED or GRED strategy is selected.
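The relation between thresh, maxthresh and linterm can be made concrete with a small sketch of the classic RED early-drop rule (a simplification for illustration, not the exact Emulab implementation):

```python
def red_drop_probability(avg_q: float, thresh: float,
                         maxthresh: float, linterm: float) -> float:
    """Early-drop probability of a RED queue.

    Linear interpolation: 0 at avg_q == thresh, rising to 1/linterm at
    avg_q == maxthresh. Below thresh nothing is dropped; above
    maxthresh every arriving packet is dropped.
    """
    if avg_q <= thresh:
        return 0.0
    if avg_q >= maxthresh:
        return 1.0
    return (avg_q - thresh) / (maxthresh - thresh) / linterm


# With the example values above (thresh=5, maxthresh=15, linterm=2),
# an average queue of 10 packets drops with probability 0.25.
p = red_drop_probability(10, 5, 15, 2)
```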

Experiment operations

In Emulab, experiments can only be created when the entire experiment description is known. After such an experiment is deployed, it is not possible to add Computes to or delete Computes from an already running experiment. If an experimenter needs this cloud behaviour alongside the advanced network features of the Virtual Wall, all Computes must already be described in the experiment. By turning the network that connects them to the rest of the experiment on or off, one can emulate the dynamic behaviour of a cloud.

To tell the Virtual Wall when an experiment is fully specified, the OCCI server introduces the following experiment operation:

<experiment>
        <action>active</action>
</experiment>

A similar message is used for the shutdown process:

<experiment>
        <action>shutdown</action>
</experiment>

If the OCCI server receives a DELETE for a particular Experiment, it deletes this experiment, thus deleting all resources attached to that Experiment.
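Submitting such an action could be sketched as follows. Only the payload format comes from the messages shown above; the HTTP verb, host and experiment path in the sending helper are illustrative assumptions.

```python
import http.client


def experiment_action_body(action: str) -> str:
    """Build the <experiment><action>...</action></experiment> payload
    shown above; action is 'active' or 'shutdown'."""
    if action not in ("active", "shutdown"):
        raise ValueError("action must be 'active' or 'shutdown'")
    return f"<experiment><action>{action}</action></experiment>"


def send_experiment_action(host: str, experiment_path: str,
                           action: str) -> int:
    """POST the action document to an experiment resource.

    Host, path and the use of POST are assumptions for illustration.
    """
    conn = http.client.HTTPConnection(host)
    conn.request("POST", experiment_path,
                 body=experiment_action_body(action),
                 headers={"Content-Type": "application/vnd.bonfire+xml"})
    status = conn.getresponse().status
    conn.close()
    return status
```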

Shared Storage

Shared storage is not really an Emulab feature, but rather a consequence of the way the Virtual Wall is set up. Attached to the Virtual Wall’s internal network is an NFS server dedicated to the BonFIRE project. This NFS server is also reachable from the BonFIRE WAN, making it accessible from all Computes that are part of BonFIRE. To describe such a shared storage, the BonFIRE OCCI schema is extended:

<?xml version="1.0" encoding="UTF-8"?>
<storage xmlns="http://api.bonfire-project.eu/doc/schemas/occi" name="shared" href="/locations/be-ibbt/storages/6">
    <description>test shared storage</description>
    <groups>gvseghbr</groups>
    <mountpoint>172.18.4.253:/mnt/bonfire-iscsi/gvseghbr/shared/6</mountpoint>
    <name>shared</name>
    <type>SHARED</type>
    <target>/mnt/shared</target>
    <link rel="experiment" href="/experiments/12204" type="application/vnd.bonfire+xml"/>
</storage>

The mountpoint parameter specifies where the NFS directory configured as a shared storage can be reached. On the Virtual Wall infrastructure this storage (like all other storage types) is mounted at boot time to the target directory on the Compute.
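The mountpoint and target together determine the mount a Compute effectively performs at boot time; a sketch (the actual Virtual Wall startup scripts may use different mount options):

```python
def nfs_mount_command(mountpoint: str, target: str) -> str:
    """Assemble the NFS mount command corresponding to a SHARED storage
    resource: "server:/export/path" mounted onto a local target
    directory. Illustrative only; real startup scripts may differ.
    """
    return f"mount -t nfs {mountpoint} {target}"


# Using the mountpoint and target from the example resource above:
cmd = nfs_mount_command(
    "172.18.4.253:/mnt/bonfire-iscsi/gvseghbr/shared/6", "/mnt/shared")
```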

APIs used

No particular APIs are used.

Message queue use

The Virtual Wall OCCI server does not read from the MMQ; it only writes the internal site events to it.

CRUD events

be-ibbt.*.create

For all resources, this event is fired whenever the Virtual Wall OCCI server (VW) receives a POST, at the moment the description of that resource is persisted in the VW’s internal DB.

A be-ibbt.storage.create event is also launched when the experimenter does a Save_As on the OS image attached to a particular Compute.

be-ibbt.*.delete

The VW sends this event for all resources whenever the VW receives the DELETE of the Experiment or of a Storage resource that is not attached to a particular Experiment.

be-ibbt.compute.update

When the Experiment is set to the running state, the Experiment is deployed to the Virtual Wall. Only at this point do we know which IP addresses the different Computes get, because this is managed by Emulab. An update event notifies everybody listening on the MMQ of this extra information. The same holds for the MAC addresses.

be-ibbt.network.update

When in a running Experiment the experimenter changes the parameters of a managed or active Network, an update event will be launched with the adjusted parameters.

State events

Computes

  • on creation in BonFIRE: be-ibbt.compute.state.pending
  • after Experiment is set to running: be-ibbt.compute.state.active or be-ibbt.compute.state.failed
  • after Experiment is set to shutdown: be-ibbt.compute.state.stopping
  • when all save_as processes have been finished: be-ibbt.compute.state.stopped
  • when the Experiment is deleted: be-ibbt.compute.state.deleting

Networks

  • on creation in BonFIRE: be-ibbt.network.state.pending
  • after Experiment is set to running: be-ibbt.network.state.active.up
  • on turning the network on/off: be-ibbt.network.state.active.up or be-ibbt.network.state.active.down
  • after Experiment is set to shutdown: be-ibbt.network.state.stopping
  • when all save_as processes have been finished: be-ibbt.network.state.stopped
  • when the Experiment is deleted: be-ibbt.network.state.deleting

Storages

  • on creation in BonFIRE: be-ibbt.storage.state.pending
  • after Experiment is set to running: be-ibbt.storage.state.active
  • after Experiment is set to shutdown: be-ibbt.storage.state.stopping
  • when all save_as processes have been finished: be-ibbt.storage.state.stopped
  • when the Experiment is deleted: be-ibbt.storage.state.deleting

Implementation details

As mentioned before, the Virtual Wall is built on the Emulab software. This allows us to make Emulab’s advanced network capabilities available to the experimenter.

The NFS storage server attached to the Virtual Wall and the BonFIRE WAN allows the experimenter to create shared storage datablocks and make them available to not only the Virtual Wall computes, but also to the other sites attached to the BonFIRE WAN.

Some extra implementation details:

  • Computes are not virtualized. A single Compute uses one entire physical machine of our Virtual Wall.
  • Managed Networks also take up Compute resources in Emulab. The rule of thumb is one Compute for every two emulated networks.
  • All storages at the Virtual Wall are set up on the physical machine (except for the Shared Storages, which are hosted by our NFS server) and are mounted immediately.
  • Both integration and production environments use the same Virtual Wall. They, however, use different startup scripts and different OCCI servers.
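The first two points above give a rough capacity estimate for an experiment; a sketch of the arithmetic, based only on the stated rule of thumb:

```python
import math


def physical_machines_needed(computes: int, managed_networks: int) -> int:
    """Rough Virtual Wall capacity estimate.

    Each Compute occupies one entire physical machine, and the rule of
    thumb adds one machine per two emulated (managed) networks.
    """
    return computes + math.ceil(managed_networks / 2)


# e.g. 10 Computes with 5 managed networks need about 13 machines
machines = physical_machines_needed(10, 5)
```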