On Request Resources

On BonFIRE, some testbeds offer the possibility to reserve additional resources on demand. If your experiment needs a large number of compute resources (in the hundreds), or if you want a dedicated set of resources so that you can experiment with VM colocation and other placement constraints, then you should use this facility. On-request (compute) resources are only available at Inria, USTUTT and IBBT.

For information about the exact infrastructure offered by the different testbeds for on-request resources, please read the Infrastructure page. Note that, currently, only Inria offers a dynamic advanced reservation system for on-request resources. For the other testbeds, reservations need to be made manually. See the details below.

At IBBT

Requests for on-request resources at IBBT from BonFIRE will compete with requests from other Virtual Wall users and be subject to the same charter on acceptable usage time and duration. In general, this corresponds to standard usage policies applicable on Emulab testbeds (http://www.protogeni.net/trac/emulab/wiki/Swapping).

To request additional resources at IBBT, you need to send an email to vwall-ops@atlantis.ugent.be.

You need to specify your request as follows:

  • The number of physical resources. Each type of available physical resource is completely described, so the mapping of requirements between instance types and physical hosts is left to the user.
  • Start time.
  • Duration.
  • The experiment project, already accepted to run on BonFIRE, that the request relates to.

If accepted, the resources will be scheduled by the person responsible for the Virtual Wall testbed and provisioned on the local platform. Simple and small requests that conform to IBBT’s policy for BonFIRE access to the Virtual Wall can be approved quickly (within two working days). Requests for longer durations (several days) or larger numbers of nodes (more than twenty) will be queued for approval by the people responsible for IBBT’s testbed. The expected delay in this case is less than one week.

At USTUTT

Unlike the other testbeds, USTUTT does not offer cloud resources on request; instead, it offers access to the HPC cluster at HLRS. BonFIRE users have access to a 545-node, 60 TFLOPS cluster.

The process for using the HPC cluster is as follows:

  1. Boot the special HPC cluster image at HLRS which includes the Globus Toolkit
  2. Contact Michael Gienger <gienger@hlrs.de> to obtain a user certificate (no generic certificates are available)
  3. Insert the user certificate into the VM
  4. Copy your data into the VM
  5. Submit your batch job to the HPC cluster
  6. Wait until the job finishes

You cannot install software yourself, but common tools such as compilers and simulation software packages are already installed. If additional software is needed, it has to be installed by the system administrators. The cluster is not integrated into the BonFIRE WAN, but it can be accessed from the booted VM via gsi-ssh. No monitoring data will be available.
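
As an illustration of steps 4 to 6, the commands issued from inside the booted HPC VM might look like the sketch below, using the GSI-enabled SSH/SCP clients from the Globus Toolkit (typically installed as gsissh and gsiscp). The front-end placeholder, the file names and the PBS-style qsub/qstat commands are assumptions for illustration only; use the host name and batch system details provided by HLRS together with your user certificate.

# Replace <hpc-frontend> with the front-end host name you receive from HLRS;
# qsub/qstat assume a PBS-style batch system, so adapt them to the scheduler in use.
$ gsiscp input-data.tar.gz <hpc-frontend>:
$ gsissh <hpc-frontend> 'qsub my-job.pbs'
$ gsissh <hpc-frontend> 'qstat -u $USER'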

At Inria

Inria offers a booking system accessible via a web browser or via an API that can be used with Restfully. How to make reservations and what goes on under the hood is explained in the sections below.

Please note that any reservation at Inria must comply with the Grid‘5000 policy rules, and be aware that the resources are also available to other Grid‘5000 users. In particular, note the following acceptable usage times and durations:

  • Between 9:00 and 19:00 UTC+1, during weekdays, no user should use more than 3600 core-hours (daytime usage).
  • Overnight and during week-ends, no restrictions apply (free usage).
  • No resource usage should combine daytime usage and free usage.

Additional conditions apply to BonFIRE users: without additional approval from Inria, no experiment should consume more than 446,400 core-hours (complete usage of the Rennes site for 4 weekends).
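
To put these limits in perspective: 3,600 core-hours corresponds, for instance, to 360 cores used continuously for the whole 10-hour daytime window (360 × 10 = 3,600), and 446,400 core-hours is consistent with roughly 1,800 cores used over four 62-hour weekends, from Friday 19:00 to Monday 9:00 (1,800 × 62 × 4 = 446,400).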

Using the GUI

It’s actually quite simple. Just open the following link: https://api.bonfire-project.eu/locations/fr-inria/reservations, click on the ‘New reservation’ button, and follow the instructions.

Here is what you should see:

  • The list of your current and past reservations (empty the first time you log in):

    ../_images/list.jpg
  • The form where you select your resources:

    ../_images/form.jpg
  • The details of a reservation:

    ../_images/resa.jpg

Note

In the future, this GUI will probably be integrated into the BonFIRE portal. For now this is a separate interface, but you can still log in with your existing BonFIRE credentials.

At the end of the process, you’ll see the details of your reservation. You will notice a field named CLUSTER UUID, which you’ll have to use when you create compute resources. Simply add <cluster>{{cluster_uuid}}</cluster> to your compute creation request (in the XML payload), so that the compute resource gets created on your dedicated cluster, and not on the resources shared by all BonFIRE users. Assuming your CLUSTER UUID is 5ff33cb5ad0e64302ef64f55a08817d818780b81, the XML payload that you send to create a resource in the portal or via the API would look like:

<compute xmlns="...">
  <name>Compute name</name>
  <cluster>5ff33cb5ad0e64302ef64f55a08817d818780b81</cluster>
  <instance_type>lite</instance_type>
  ...
  <context>
    ...
  </context>
</compute>

Using the API

You can also make your reservations through a JSON API. The recommended way is to use Restfully, which automatically supports that kind of API.

Here is how you would do it in a Restfully session:

reservations = get("/locations/fr-inria/reservations")
resa = reservations.submit(:clusters => {:paradent => 2, :paramount => 1}, :from => Time.now.to_i, :to => Time.now.to_i+7200)
pp resa

puts "***"
puts "CLUSTER ID to use in your reservations = #{resa['name']}"

Run it like this:

$ restfully -c ~/.restfully/config --shell http://doc.bonfire-project.eu/R2/_static/examples/restfully/on-request-inria.rb
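
The reservation cannot be used until it reaches the RUNNING state. If you want to wait for it from the same Restfully session instead of checking the GUI, a minimal sketch could look like the following. Note that the 'status' property name is an assumption: inspect the output of pp resa to see how the reservation state is actually exposed.

# Minimal polling sketch -- 'status' is an assumed property name, check `pp resa`.
until (resa['status'] || "").to_s.upcase == "RUNNING"
  sleep 60
  resa.reload
end
puts "Reservation is RUNNING, cluster UUID = #{resa['name']}"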

Then, assuming your reservation is RUNNING, you can use a script like the following to launch compute resources in your dedicated cluster:

logger.info "Starting..."

inria = root.locations[:'fr-inria']
fail "No location" if inria.nil?

image = inria.storages[:'BonFIRE Debian Squeeze v6']
fail "No image" if image.nil?

# For now, you must use a specific network if you wish to SSH into your VMs
# (and you must go through INRIA's SSH gateway). In the near future, you'll be
# able to use the BonFIRE WAN network, as you would expect.
network = inria.networks[:'BonFIRE OnDemand WAN']
fail "No network" if network.nil?

count = (ENV['COUNT'] || 1).to_i
cluster = ENV['CLUSTER'] || "default"

experiment = root.experiments.submit(
  :name => "BonFIRE On Demand",
  :description => "Started on #{Time.now.to_s}.",
  :walltime => 10*3600
)

computes = experiment.computes

logger.info "I'm going to submit #{count} VMs..."

count.times do |i|
  payload = {
    :name => "VM-#{i}",
    :instance_type => "lite",
    :location => inria,
    :disk => [{:storage => image}],
    :nic => [{:network => network}],
    :cluster => cluster
  }
  payload[:host] = ENV['HOST'] unless ENV['HOST'].nil?

  c = computes.submit(payload)
  logger.info "Submitted VM##{c['id']} with IP=#{c['nic'][0]['ip']}, in cluster #{cluster}."
end

Run it like this (replace 5ff33cb5ad0e64302ef64f55a08817d818780b81 with your cluster UUID):

$ CLUSTER=5ff33cb5ad0e64302ef64f55a08817d818780b81 COUNT=5 restfully -c ~/.restfully/config -v --shell http://doc.bonfire-project.eu/R2/_static/examples/restfully/placement-constraint.rb

Here is the kind of output you’ll see:

I, [2012-01-20T16:42:23.597846 #10716]  INFO -- : Loading configuration from /Users/crohr/.restfully/api.bonfire-project.eu.integration...
I, [2012-01-20T16:42:23.598316 #10716]  INFO -- : Disabling RestClient::Rack::Compatibility.
I, [2012-01-20T16:42:23.598365 #10716]  INFO -- : Enabling Restfully::Rack::BasicAuth.
I, [2012-01-20T16:42:23.598416 #10716]  INFO -- : Enabling Rack::Cache.
I, [2012-01-20T16:42:23.598454 #10716]  INFO -- : Requiring ApplicationVndBonfireXml...
I, [2012-01-20T16:42:23.640362 #10716]  INFO -- : Starting...
cache: [GET ] miss, store
cache: [GET /locations] miss, store
cache: [GET /locations/fr-inria/storages] miss, store
cache: [GET /locations/fr-inria/storages/3] miss, store
cache: [GET /locations/fr-inria/networks] miss, store
cache: [GET /locations/fr-inria/networks/4] miss, store
cache: [GET ] fresh
cache: [GET /experiments] miss, store
cache: [POST /experiments] invalidate, pass
cache: [GET /experiments/59] miss, store
cache: [GET /experiments/59/computes] miss, store
I, [2012-01-20T16:42:26.740140 #10716]  INFO -- : I'm going to submit 5 VMs...
cache: [POST /experiments/59/computes] invalidate, pass
cache: [GET /locations/fr-inria/computes/21257] miss, store
I, [2012-01-20T16:42:28.715626 #10716]  INFO -- : Submitted VM#21257 with IP=172.18.7.61, in cluster 13e7fb9f6a3820fc77c8b10a02c37a2767b5d4f3.
cache: [POST /experiments/59/computes] invalidate, pass
cache: [GET /locations/fr-inria/computes/21258] miss, store
I, [2012-01-20T16:42:30.846760 #10716]  INFO -- : Submitted VM#21258 with IP=172.18.7.62, in cluster 13e7fb9f6a3820fc77c8b10a02c37a2767b5d4f3.
cache: [POST /experiments/59/computes] invalidate, pass
cache: [GET /locations/fr-inria/computes/21259] miss, store
I, [2012-01-20T16:42:33.176797 #10716]  INFO -- : Submitted VM#21259 with IP=172.18.7.63, in cluster 13e7fb9f6a3820fc77c8b10a02c37a2767b5d4f3.
cache: [POST /experiments/59/computes] invalidate, pass
cache: [GET /locations/fr-inria/computes/21260] miss, store
I, [2012-01-20T16:42:38.486076 #10716]  INFO -- : Submitted VM#21260 with IP=172.18.7.64, in cluster 13e7fb9f6a3820fc77c8b10a02c37a2767b5d4f3.
cache: [POST /experiments/59/computes] invalidate, pass
cache: [GET /locations/fr-inria/computes/21261] miss, store
I, [2012-01-20T16:42:44.105053 #10716]  INFO -- : Submitted VM#21261 with IP=172.18.7.65, in cluster 13e7fb9f6a3820fc77c8b10a02c37a2767b5d4f3.

You can also specify a specific HOST within a specific CLUSTER. Just pass <host>hostname</host> in your compute creation request. For instance, if you got resources at INRIA, you might get nodes such as paradent-1.rennes.grid5000.fr (this will be displayed in your reservation details). You could therefore ask that 5 VMs be started on the same physical host by specifying the HOST environment variable when you start the script:

$ CLUSTER=5ff33cb5ad0e64302ef64f55a08817d818780b81 HOST=paradent-1.rennes.grid5000.fr COUNT=5 restfully -c ~/.restfully/config -v --shell http://doc.bonfire-project.eu/R2/_static/examples/restfully/placement-constraint.rb

Please refer to Overview of Compute to know how to specify HOST and CLUSTER in the OCCI request.
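
For reference, an OCCI payload combining both placement constraints might look like the sketch below, reusing the example cluster UUID and host name from above; element order and the exact schema should be checked against Overview of Compute.

<compute xmlns="...">
  <name>Compute name</name>
  <cluster>5ff33cb5ad0e64302ef64f55a08817d818780b81</cluster>
  <host>paradent-1.rennes.grid5000.fr</host>
  <instance_type>lite</instance_type>
  ...
  <context>
    ...
  </context>
</compute>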

What this does under the hood

If you’re wondering how this feature works, here are some details:

  • when you create a reservation, the API reserves the appropriate number of Grid‘5000 nodes at INRIA, using the existing Grid‘5000 API.
  • when your reservation starts on Grid‘5000, it executes a script that deploys a specific image (this is one of the prominent features of Grid‘5000: you fully control the physical machine). This image is the same as the one deployed on the nodes of the permanent BonFIRE infrastructure at INRIA.
  • when the deployment has finished, the BonFIRE base images are efficiently propagated to all your nodes (using the mighty taktuk tool), so that creating and booting VMs from those base images happens in a snap (less than 10s). This, plus other low-level optimizations, allows you to start hundreds of VMs in a few minutes. We might allow you to pre-copy your own images as well, so that you can get the same speed boost for your custom images (otherwise there is a small delay the first time a VM is booted from one of these images).
  • finally, your (now properly configured) nodes are added to our local OpenNebula installation, which means there is no difference in treatment between the permanent resources and your on-request physical resources, except that only people who know the cluster UUID can submit VMs on the cluster’s physical machines.
  • once the nodes have been registered with OpenNebula, we notify you of the availability of your cluster (if you specified a notification address when you created your reservation).
  • you can now start VMs on your dedicated cluster!