JSON Experiment Descriptor

We’ll start with an example of how the JSON experiment descriptor describes the initial set-up of a server and a client, and then describe the various options available in more detail.

In this experiment a server VM is set up at EPCC based on an image called “BonFIRE Debian Squeeze v6” and using the “BonFIRE WAN” network. A client VM is set up at Inria using the “BonFIRE Debian Squeeze v6” image and “BonFIRE WAN” network.

In this example the experiment will run for a maximum of 120 minutes (“duration”: 120).

{
 "name": "My Experiment",
 "description": "Experiment description",
 "duration": 120,
 "resources": [
     {
         "compute": {
             "name": "Server",
             "description": "A description of the server.",
             "instanceType": "small",
             "locations": ["uk-epcc"],
             "resources": [
                 { "storage": "@BonFIRE Debian Squeeze v6"},
                 { "network": "@BonFIRE WAN"}
             ]
         }
     },
     {
         "compute": {
             "name": "Client",
             "description": "A description of the client.",
             "instanceType": "small",
             "locations": ["fr-inria"],
             "resources": [
                 { "storage": "@BonFIRE Debian Squeeze v6"},
                 { "network": "@BonFIRE WAN"}
             ]
         }
     }
 ]
}

References

There are 3 kinds of references that can be used in the experiment descriptor:

  • resources that already exist can be referenced by name, prefixed by an “@” symbol as seen in the example above, e.g. “@BonFIRE WAN”
  • resources that already exist can be referenced by URI, e.g. “https://api.bonfire-project.eu/locations/fr-inria/storages/14”
  • resources that have previously been created in the experiment descriptor document can simply be referenced by name, e.g. “Server”.

Please note that the Experiment Manager processes resources in order of definition in the experiment descriptor. A compute cannot refer to a resource before the resource has been defined in the experiment descriptor.
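For example, the three reference styles could appear together in a single compute’s resource list along the following lines (a sketch only: the storage URI is the one quoted above, and “Storage block” is assumed to be a datablock defined earlier in the same descriptor):

"resources": [
    { "storage": "@BonFIRE Debian Squeeze v6"},
    { "storage": "https://api.bonfire-project.eu/locations/fr-inria/storages/14"},
    { "storage": "Storage block"},
    { "network": "@BonFIRE WAN"}
]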

Contexts

A compute description can contain a set of contextualisation information. There are currently two types of contexts supported:

  • IP Address Dependency
  • Contextualisation Element

IP Address Dependencies

The runtime IP address of one VM can automatically be sent to another as follows:

...
"compute": {
        "name": "Client",
        ...
        "contexts": [
           {
               "ServerIP": [ "Server", "BonFIRE WAN" ]
           }
        ]
        }

The “contexts” section says that the “ServerIP” Contextualisation property will be populated with the IP address of the Server compute on the BonFIRE WAN. (ServerIP is an example name.)
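Assuming the value is exposed on the created VM in the same way as the contextualisation elements described below (upper-cased in /etc/bonfire), the Client would then see an entry along these lines, where the right-hand side is a placeholder for the actual address:

SERVERIP=<IP address of Server on the BonFIRE WAN>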

A special case is when we want to pass the IP address of an aggregator compute to a client. Then we must name the context “aggregator_ip”. For example, if we have created an aggregator compute called “BonFIRE-Monitor”, then our aggregator client could use the following to obtain the aggregator’s IP address:

...
"compute": {
        "name": "AggregatorClient",
        ...
        "contexts": [
           {
               "aggregator_ip": [ "BonFIRE-Monitor", "BonFIRE WAN" ]
           }
        ]
        }

Cross-references in both directions cannot be handled, i.e. the client can be told the address of the server (or vice versa), but it is not possible to send the client the server’s address and also send the server the client’s address. This is because a VM must be deployed before its IP address is known, and the contextualisation is set as the VM is deployed. The JSON should be written in deployment order. In the example above, the Server must be defined before the Client that references it.

Note that the experiment manager cannot resolve IP dependencies to both a pre-existing network and a network with the same name defined in the experiment descriptor. For example, if you define a private network called “BonFIRE WAN” in your experiment descriptor, then define a compute with IP dependencies both to the public BonFIRE WAN and the private BonFIRE WAN, then this will not work.

Contextualisation Element

Contextualisation elements, also known as contextualisation variables, can be used to pass initialisation values for an experiment. They are defined as simple name-value pairs. For example:

"compute": {
        "name": "Client",
        ...
        "contexts": [
           {
               "experimentmin": "0"
           },
           {
               "experimentmax": "100"
           }
        ]
        }

This would create parameters EXPERIMENTMIN=0 and EXPERIMENTMAX=100 in the file /etc/bonfire on the created virtual machine.
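In other words, scripts on the VM can read the values from /etc/bonfire, which for this example would contain lines such as:

EXPERIMENTMIN=0
EXPERIMENTMAX=100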

Ordering

The order in which the VMs should be deployed can be defined by listing sets of computes in the order of deployment. All of the VMs in the first group will be deployed before the VMs in the next group.

"order": [[ "Server" ], "created", [ "Client", "OtherClient" ]]

The option to wait until the prior VMs are “running” rather than just “created” will be added in a later version.
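As a sketch (assuming the “order” element sits at the top level of the experiment descriptor, alongside “resources”), an ordered deployment might be written as:

{
 "name": "Ordered experiment",
 "duration": 120,
 "order": [[ "Server" ], "created", [ "Client", "OtherClient" ]],
 "resources": [
     { "compute": { "name": "Server", ... } },
     { "compute": { "name": "Client", ... } },
     { "compute": { "name": "OtherClient", ... } }
 ]
}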

Storage

Datablock storage resources can be created as follows:

"storage": {
    "name": "Storage block",
    "description": "An extra storage block.",
    "type": "DATABLOCK",
    "size": 512,
    "fstype": "ext3"
}

And then referenced from a compute like this:

"compute": {
            "name": "Client",
            "description": "A description of the client.",
            "instanceType": "small",
            "locations": ["fr-inria"],
            "resources": [
                { "storage": "Storage block"},
                { "storage": "@BonFIRE Debian Squeeze v6"},
                { "network": "@BonFIRE WAN"}
            ]
        }

Or datablocks can be defined within a compute like this:

"compute": {
            "name": "Client",
            "description": "A description of the client.",
            "instanceType": "small",
            "locations": ["fr-inria"],
            "resources": [
                {
                    "storage": {
                        "name": "Storage block",
                        "description": "A place to store the log files of the server.",
                        "type": "DATABLOCK",
                        "size": 1024,
                        "fstype": "ext3"
                    }
                },
                {"storage": "@BonFIRE Debian Squeeze v6"},
                {"network": "@BonFIRE WAN"}
            ]
        }

Normally, storages created this way only exist for the lifetime of the compute that they are attached to. By specifying “save_as” on a storage, the storage block will be saved when the compute is shut down. For example:

"compute": {
         "name": "Client",
         "description": "A description of the client.",
         "instanceType": "small",
         "locations": ["fr-inria"],
         "resources": [
             {
                 "storage": {
                     "name": "Storage block name",
                     "description": "A place to store the log files of the server.",
                     "type": "DATABLOCK",
                     "size": 1024,
                     "fstype": "ext3"
                 },
                 "save_as": "my saved storage"
             },
             {"storage": "@BonFIRE Debian Squeeze v6"},
             {"network": "@BonFIRE WAN"}

Storages can also be created as persistent. In this case, the Experiment manager creates and makes the storage available before the compute is created. For example:

"compute": {
         "name": "Client",
         "description": "A description of the client.",
         "instanceType": "small",
         "locations": ["fr-inria"],
         "resources": [
             {
                 "storage": {
                     "name": "Storage block name",
                     "description": "A place to store the log files of the server.",
                     "type": "DATABLOCK",
                     "size": 1024,
                     "fstype": "ext3"
                     "persistence": "YES"
                 }
             },
             {"storage": "@BonFIRE Debian Squeeze v6"},
             {"network": "@BonFIRE WAN"}

As the experiment manager creates all its resources inside an experiment, when the experiment is deleted all its resources are deleted. This includes any persistent storages created in the experiment. To prevent this, a storage can be created with a “delete_with_experiment” field. By default this is “YES”. If this is supplied as “NO”, the storage is created as a resource outside the experiment and not deleted with the experiment.

For example:

"compute": {
         "name": "Client",
         "description": "A description of the client.",
         "instanceType": "small",
         "locations": ["fr-inria"],
         "resources": [
             {
                 "storage": {
                     "name": "Storage block name",
                     "description": "A place to store the log files of the server.",
                     "type": "DATABLOCK",
                     "size": 1024,
                     "fstype": "ext3"
                     "persistence": "YES",
                     "delete_with_experiment": "NO"
                 }
             },
             {"storage": "@BonFIRE Debian Squeeze v6"},
             {"network": "@BonFIRE WAN"}

Multiple instances of a VM

Setting the minimum number of instances of a VM (“min”) will create that number of instances of it. In the example below, three clients will be produced, all attached to the same network and based on the same storage OS image. These will be named Client1, Client2 and Client3. If a storage or network is created along with the compute, a new one will be created for each instance.

"compute": {
            "name": "Client",
            "description": "A description of the client.",
            "instanceType": "small",
            "locations": ["fr-inria"],
                            "min": 3,
            "resources": [
                { "storage": "@BonFIRE Debian Squeeze v6"},
                { "network": "@BonFIRE WAN"}
            ]
        }

Controlled networks

Custom controlled networks can be created and used within an experiment. The lossrate, bandwidth and latency parameters can be set to simulate slow or noisy networks. The following experiment descriptor shows an example of creating a custom network and using it within a compute.

{
  "name": "controlled network experiment",
  "description": "Example of using a controlled network",
  "duration": 120,
  "resources": [
  {
     "network": {
       "name": "customnetwork",
       "locations": [
        "be-ibbt"
       ],
       "address": "192.168.1.1",
       "size": "C",
       "lossrate": 1.0,
       "bandwidth": 700,
       "latency": 21
     }
   },
   {
   "compute": {
       "name": "networkCompute",
       "locations": [
       "be-ibbt"
       ],
       "instanceType": "Large-EN",
       "min": 1,
       "resources": [
       {
           "storage": "@BonFIRE Debian Squeeze 2G v6"
         },
         {
         "network": "@BonFIRE WAN"
         },
         {
         "network": "customnetwork"
         }
       ],
       "contexts": []
     }
   }
 ]
}

Some sites such as IBBT allow further parameters for controlling background traffic simulation. These are protocol, packetsize and throughput. Please note that all three parameters must be specified together if used. Please see Emulated Network at the Virtual Wall for more information.
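As an illustration, the three background traffic parameters could be added to the controlled network definition along these lines (the protocol, packetsize and throughput values here are placeholders, not recommendations; check the Virtual Wall documentation for the supported values):

"network": {
    "name": "customnetwork",
    "locations": ["be-ibbt"],
    "address": "192.168.1.1",
    "size": "C",
    "lossrate": 1.0,
    "bandwidth": 700,
    "latency": 21,
    "protocol": "UDP",
    "packetsize": 1024,
    "throughput": 100
}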

Custom computes

As well as using the supplied compute instance types, custom compute instances may be defined. These instances allow the memory, the number of physical CPUs and the number of virtual CPUs to be explicitly defined. The following example shows a compute defining 1 physical CPU, 2 virtual CPUs and 256 MB of memory.

"compute": {
              "name": "CustomClient",
              "description": "A description of the client.",
              "instanceType": "custom",
              "cpu": "1",
              "vcpu": "2.0",
              "memory": "256",
              "locations": ["fr-inria"],
              "resources": [
                  {"storage": "@BonFIRE Debian Squeeze v6"},
                  {"network": "@BonFIRE WAN"}
              ]
          }

Host and Cluster

Computes may be requested to run on specific hosts and clusters by supplying values for host and cluster. Please see Deploying Compute Resources in BonFIRE for further information.

"compute": {
              "name": "Client1",
              "description": "A description of the client.",
              "instanceType": "small",
              "cluster": "5ff33cb5ad0e64302ef64f55a08817d818780b81",
              "locations": ["uk-epcc"],
              "resources": [
                  {"storage": "@BonFIRE Debian Squeeze v6"},
                  {"network": "@BonFIRE WAN"}
              ]
          }

"compute": {
              "name": "Client2",
              "description": "A description of the client.",
              "instanceType": "small",
              "host": "node-1.bonfire.grid5000.fr",
              "locations": ["fr-inria"],
              "resources": [
                  {"storage": "@BonFIRE Debian Squeeze v6"},
                  {"network": "@BonFIRE WAN"}
              ]
          }

Aggregators

Aggregator computes can be used to monitor information about other computes within the same experiment. There are two ways to create aggregators: through the “aggregator” keyword, or manually.

Aggregator keyword

In release 4 we have introduced a shortcut for creating monitoring experiments via the Experiment Manager. Instead of supplying the information below, it is now possible to supply an “aggregator” keyword in an experiment descriptor, and a BonFIRE-Monitor compute of instance type “small” is automatically created, using the most recent version of the Zabbix image available at the specified site. A persistent storage block of size 1024 MB, named “Aggregator Storage For Managed Experiment [managed experiment id]”, is automatically created for the BonFIRE-Monitor compute. This is created outside (before) the experiment, so that results are retained beyond the lifetime of the experiment. The aggregator_ip dependencies and usage information are also automatically added to all VMs in the experiment during deployment.

In the following example, an Aggregator is created at EPCC, and a “metricsclient” compute is created at fr-inria, with a user-defined metric “users” (for more detail on user metrics see below).

{
  "name": "aggregator keyword example",
  "description": "automatically creates a BonFIRE-Monitor aggregator",
  "duration": 600,
  "aggregator": "uk-epcc",
  "resources": [
    {
      "compute": {
        "name": "metricsclient",
        "locations": [ "fr-inria" ],
        "instanceType": "lite",
        "min": 1,
        "resources": [
          {
            "storage": "@BonFIRE Debian Squeeze v6"
          },
          {
            "network": "@BonFIRE WAN"
          }
        ],
        "contexts": [
          {
            "metrics": [
              "users,wc -l /etc/passwd|cut -d\" \" -f1, rate=20, valuetype=3, history=10",
              "system.test, who|wc -l, history=25, valuetype=3"
            ]
          }
        ]
      }
    }
  ]
}

Manual Aggregator Creation

Aggregator computes must be created from an appropriate aggregator image containing a Zabbix installation. Aggregators and aggregator clients are created in a similar way to other computes, but some additional usage information must be supplied.

The following example shows an aggregator compute which monitors a client compute. The aggregator is also able to monitor itself. Please note that BonFIRE expects aggregator computes to be named “BonFIRE-Monitor”.

{
  "name": "AggregatorExperiment",
  "description": "example aggregator with client",
  "duration": 60,
  "resources": [
  {
     "compute": {
       "name": "BonFIRE-Monitor",
       "locations": [
       "de-hlrs"
       ],
       "instanceType": "small",
       "min": 1,
       "resources": [
       {
           "storage": "@BonFIRE Zabbix Aggregator v8"
         },
         {
         "network": "@BonFIRE WAN"
         }
       ],
       "contexts": [
       {
               "usage": "zabbix-agent;infra-monitoring-init"
        }]
     }
   },
   {
     "compute": {
       "name": "aggregatorClientEPCC",
       "locations": [
         "uk-epcc"
       ],
       "instanceType": "lite",
       "min": 1,
       "resources": [
         {
           "storage": "@BonFIRE Debian Squeeze v6"
         },
         {
           "network": "@BonFIRE WAN"
         }
       ],
       "contexts": [
         {
           "aggregator_ip": [
             "BonFIRE-Monitor",
             "BonFIRE WAN"
           ]
         },
         {
               "usage": "zabbix-agent"
         }
       ]
     }
   }
 ]
}

In the BonFIRE-Monitor compute, note the usage information specifying that this compute monitors its own usage (zabbix-agent) and that it also fetches infrastructure monitoring metrics for all physical hosts on which VMs of this experiment are running (infra-monitoring-init):

"contexts": [
      {
              "usage": "zabbix-agent;infra-monitoring-init"
       }]

An additional option when creating an aggregator compute is to allow it to save its monitoring data to an additional disk attached to the compute.

"contexts": [
      {
              "usage": "zabbix-agent;zabbix-aggr-extend;infra-monitoring-init"
       }]

In this case a second storage must be specified in the BonFIRE-Monitor compute description, for example (assuming that a storage called ExternalStorageBlock already exists):

"resources": [
       {
           "storage": "@BonFIRE Zabbix Aggregator v8"
         },
        {
           "storage": "@ExternalStorageBlock"
        }
         {
         "network": "@BonFIRE WAN"
         }
       ]

In the client compute, there are two things to note. The first is that the computes must declare an IP dependency called “aggregator_ip”. This allows the computes to obtain the address of the Zabbix monitoring service to which they will publish information. The second is the usage information declaring that this compute will publish monitoring information.

"contexts": [
       {
          "aggregator_ip": [
            "BonFIRE-Monitor",
            "BonFIRE WAN"
          ]
        },
        {
              "usage": "zabbix-agent"
        }
      ]

User metrics

User-defined metrics to be collected by an aggregator may be defined in a compute description. The metrics must be passed as an array called “metrics”, with a separate string for each metric. As in the example below, each string gives the metric name, the command that produces its value, and then optional settings such as rate, valuetype and history. The strings are automatically placed in CDATA sections to handle characters which may confuse XML parsers. However, any characters which may confuse string processing must be escaped, as shown.

"contexts": [

        {
           "metrics": [
              "users,wc -l /etc/passwd|cut -d\" \" -f1, rate=20, valuetype=3, history=10",
              "system.test, who|wc -l, history=25, valuetype=3"
              ]
        }
       ]

Bandwidth on demand

Bandwidth on demand resources can be created by specifying site link resources and networks which use the site links. Please see Controlled Bandwidth with AutoBAHN for further information.

A site link is specified by declaring a site link resource. Currently a site link may only be used between EPCC and PSNC. Bandwidth is specified in megabits per second.

"site_link": {
         "name": "mySiteLink",
         "description": "autobahn site link",
         "locations": ["autobahn"],
         "endpoints": [ "uk-epcc", "pl-psnc" ],
         "bandwidth": 100
       }

To create a network using a site link, the network must declare a “vlan” dependency referring to the appropriate site link within the same experiment descriptor. The vlan dependency must contain “vlanFromSiteLink” and the name of the site link. The experiment manager will automatically wait for the site link to become active, then it will obtain the vlan identifier from the site link and use this to create the network.

"network": {
            "name": "myEpccAutoBahnNetwork",
            "description": "epcc link to autobahn",
            "locations": ["uk-epcc"],
            "address": "192.168.4.0",
            "netmask": "255.255.255.0",
            "vlan":  { "vlanFromSiteLink": "mySiteLink" }
       }

Alternatively, if the vlan identifier for an existing site link is known, then this can be used directly in the experiment descriptor.

"network": {
            "name": "myEpccAutoBahnNetwork",
            "description": "epcc link to autobahn",
            "locations": ["uk-epcc"],
            "address": "192.168.4.0",
            "netmask": "255.255.255.0",
            "vlan": "10"
       }

The network can subsequently be used in a compute as normal:

"compute": {
            "name": "server",
            "description": "server at epcc",
            "instanceType": "small",
            "locations": ["uk-epcc"],
            "resources": [
                { "storage": "@BonFIRE Debian Squeeze v6"},
    { "network": "myEpccAutoBahnNetwork"},
    { "network": "@BonFIRE WAN"}
            ],
            "contexts": [
                              {       "iproute": "192.168.3.0/24 dev eth1" }
            ]
        }

FEDERICA routers

Routers and networks can be created at FEDERICA by defining router resources in an experiment descriptor. Please read the FEDERICA documentation at Controlled public networking with FEDERICA for further information. Multiple interfaces may be specified per router. Optional additional configuration for each router can be supplied in a “config” element.

"router": {
            "name": "RouterDFN",
            "host": "dfn.erl.router1",
"locations": ["federica"],
            "interfaces": [
                {
                    "name": "ifDFN",
                    "physicalInterface": "ge-0/1/0",
                    "ip": "192.168.10.10",
                    "netmask": "255.255.255.0"
                }
            ],
"config": "optional configuration information"
      }

To specify a network based on FEDERICA router end points, a special form of network resource must be specified. The network must contain an array of networkLinks, where each element contains a router name and the name of an interface specified within that router.

"network": {
             "name": "myFedericaNetwork",
  "description": "federica network",
              "locations": ["federica"],
              "networkLinks": [
                    [ { "router": "RouterDFN","interface": "ifDFN"},
    {"router": "RouterPSNC", "interface": "ifPSNC"}
  ]
      }

Note that this network cannot be used directly within a compute. A further network must be defined which declares the FEDERICA network as a dependency. This must include a vlan dependency, specifying “vlanFromNetwork” and the name of the FEDERICA network. The experiment manager will automatically create the router and network at FEDERICA and wait for them to become active. Once they are active, the experiment manager will obtain the vlan identifier for the network and pass this into the new network at creation time. Alternatively, if the vlan identifier for an existing FEDERICA network is already known, then this can be specified directly in the experiment descriptor.

"network": {
            "name": "epccNetwork",
            "description": "epccNetwork",
            "locations": ["uk-epcc"],
            "address": "192.168.1.0/24",
            "netmask": "255.255.255.0",
            "vlan":  { "vlanFromNetwork": "myFedericaNetwork" }
            }

Elasticity

A special case of an experiment descriptor can be used to create an elasticity experiment. This allows the number of active computes to be automatically managed based on CPU usage. This experiment descriptor must contain additional context information. For further information please see How To Use the BonFIRE EaaS.

{
   "name": "elasticitytest",
   "description": "elasticitytest",
   "duration": 3600,
   "resources": [
     {
       "compute": {
         "name": "BonFIRE-Monitor",
         "locations": [
           "fr-inria"
         ],
         "instanceType": "small",
         "min": 1,
         "resources": [
           {
             "storage":"@BonFIRE Zabbix Aggregator v8"
           },
           {
             "network": "@BonFIRE WAN"
           }
         ]
       }
     },
     {
       "compute": {
         "name": "BonFIRE-Elasticity-Engine-webservice",
         "locations": [
           "fr-inria"
         ],
         "instanceType": "small",
         "min": 1,
         "resources": [
           {
             "storage":"@BonFIRE Debian Squeeze v6"
           },
           {
             "network": "@BonFIRE WAN"
           }
         ],
        "contexts": [
           {
             "aggregator_ip": [ "BonFIRE-Monitor", "BonFIRE WAN"]
           },
                {
                        "usage": "elasticity-engine"
                },
                {
                        "ELASTICITY_TRIGGER_UPSCALE_EXPRESSION": "{system.cpu.usage.last(0)}>60"
                },
                {
                        "ELASTICITY_VMGROUP_MAX": "3"
                },
                {
                        "AGGREGATOR_USER": "Admin"
                },
                {
                        "ELASTICITY_INSTANCETYPE": "lite"
                },
                {
                        "ELASTICITY_LB_SCHEME": "HAProxy"
                },
                {
                        "ELASTICITY_VMGROUP_MIN": "1"
                },
                {
                        "ELASTICITY_LB_PORT": "80"
                },
                {
                        "ELASTICITY_LB_LOCATION": "/locations/fr-inria"
                },
                {
                        "ELASTICITY_VMGROUP_NAME": "webservice"
                },
                {
                        "AGGREGATOR_PASSWD": "zabbix"
                },
                {
                        "ELASTICITY_DISKSOURCE": "/locations/fr-inria/storages/1333"
                }
         ]

       }
     }
   ]
}

Comments

The JSON format does not itself support comments. However, we thought it might be useful to support comments for documentation within BonFIRE experiment descriptors, particularly for complicated scenarios. Therefore we have defined a “comment” element.

"comment": "some helpful and illuminating comment"

These may be used throughout an experiment descriptor.
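For example, a comment might be placed alongside a compute definition to record why the resources are ordered as they are (the placement shown here is illustrative only):

{
    "comment": "The Server must be defined before the Client so that ServerIP can be resolved.",
    "compute": {
        "name": "Server",
        ...
    }
}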