One of the main characteristics of cloud computing is resource sharing among multiple users, through which providers can optimise the utilisation and efficiency of their systems.

However, this raises concerns about performance predictability, reliability and security:

  • Resource sharing (CPU, storage and network) inevitably creates contention, which affects applications’ performance and reliability
  • Workloads and applications of different users residing on the same physical machine, storage and network are more vulnerable to malicious attacks

Studying the effect of resource contention and maliciousness in a cloud environment can be of interest to different stakeholders. Experimenters may want to evaluate the performance and security mechanisms of their system under test (SuT). Cloud providers, on the other hand, may want to assess the mechanisms they use to enforce performance isolation and security.

The aim of the COCOMA framework is to create, monitor and control contentious and malicious system workloads. Using this framework, experimenters can create the operational conditions under which tests and experiments are carried out. This gives more insight into the testing process, as various scenarios of cloud infrastructure behaviour can be analysed by collecting metrics from the emulated environment and correlating them with the test results.

COCOMA is provided by BonFIRE as a service within a pre-configured VM that can be added to an experiment.

To use COCOMA, an experimenter defines an emulation, which embeds all environment operational conditions as shown in the figure below. The actual operational conditions are defined in so-called distributions, which create specific workloads on the targeted resource of a specific resource type. For example, distribution 1 targets the CPU, creating an exponential trend over a specific time range within the whole emulation. Each distribution’s time span is divided into multiple time slots based on the distribution granularity, and then broken down into multiple runs, each injecting a different load level per time slot depending on the discrete function of the distribution.


APIs provided

COCOMA exposes an API to create emulations and tests (POST requests) and to query for information (GET requests) about emulations, results, logs, distributions and emulators. The API URIs are summarised below:



http:method:: GET /

The root returns a collection of all the available resources. Example of an XML response:

<?xml version="1.0" ?>
<root href="/">
        <link href="/emulations" rel="emulations" type="application/vnd.bonfire+xml"/>
        <link href="/emulators" rel="emulators" type="application/vnd.bonfire+xml"/>
        <link href="/distributions" rel="distributions" type="application/vnd.bonfire+xml"/>
        <link href="/tests" rel="tests" type="application/vnd.bonfire+xml"/>
        <link href="/results" rel="results" type="application/vnd.bonfire+xml"/>
        <link href="/logs" rel="logs" type="application/vnd.bonfire+xml"/>
</root>
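Responses like the one above can be consumed with any XML library. A minimal sketch using Python’s standard library (the sample here is an abridged version of the response shown):

```python
import xml.etree.ElementTree as ET

def parse_root_links(xml_text):
    """Extract (rel, href) pairs from a COCOMA root/collection response."""
    root = ET.fromstring(xml_text)
    return [(link.get("rel"), link.get("href")) for link in root.findall("link")]

sample = """<?xml version="1.0" ?>
<root href="/">
        <link href="/emulations" rel="emulations" type="application/vnd.bonfire+xml"/>
        <link href="/logs" rel="logs" type="application/vnd.bonfire+xml"/>
</root>"""
```

The same helper works for any of the collection responses below, since they all use the same `link` element convention.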

http:method:: GET /emulations

Returns a collection of all the available emulation resources. Example of an XML response:

<?xml version="1.0" ?>
<collection href="/emulations" xmlns="">
        <items offset="0" total="3">
                <emulation href="/emulations/1-Emu-CPU-RAM-IO" id="1"
                name="1-Emu-CPU-RAM-IO" state="inactive"/>
                <emulation href="/emulations/2-CPU_EMU" id="2" name="2-CPU_EMU"/>
                <emulation href="/emulations/3-CPU_EMU" id="3" name="3-CPU_EMU"/>
        </items>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
</collection>

http:method:: GET /emulations/{name}

Displays information about an emulation by name. The returned 200 OK XML is:

<?xml version="1.0" ?>
<emulation href="/emulations/1-Emu-CPU-RAM-IO" xmlns="">
        <jobsempty>No jobs are scheduled</jobsempty>
        <distributions ID="1" name="Distro1"/>
        <distributions ID="2" name="Distro2"/>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
        <link href="/emulations" rel="parent" type="application/vnd.bonfire+xml"/>
</emulation>

The returned 404 Not Found XML is:

<error>Emulation Name: 1-Emu-CPU-RAM-IO1 not found. Error:too many
values to unpack</error>

http:method:: POST /emulations

:param string XML: Emulation parameters defined via XML as shown in the examples section.

The returned 201 Created XML:

<?xml version="1.0" ?>
<emulation href="/emulations/4-CPU_EMU" xmlns="">
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
        <link href="/emulations" rel="parent" type="application/vnd.bonfire+xml"/>
</emulation>

The returned 400 Bad Request XML:

<?xml version="1.0" ?>
<error>XML is not well formed Error: syntax error: line 1, column 0</error>
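A sketch of how the POST above could be issued with Python’s standard library; the host and port are assumptions, and the payload is the emulation XML described in the examples section:

```python
import urllib.request

def build_emulation_request(xml_payload, base_url="http://localhost:5050"):
    """Build the POST /emulations request; sending it with
    urllib.request.urlopen() is left to the caller."""
    return urllib.request.Request(
        base_url + "/emulations",
        data=xml_payload.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        method="POST",
    )
```

A 201 Created response carries the new emulation’s href, while a 400 reply (as above) signals malformed XML.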

http:method:: GET /emulators

Displays the emulators list. The returned 200 OK XML:

<?xml version="1.0" ?>
<collection href="/emulators" xmlns="">
        <items offset="0" total="3">
                <emulator href="/emulators/lookbusy" name="lookbusy"/>
                <emulator href="/emulators/stressapptest" name="stressapptest"/>
                <emulator href="/emulators/iperf" name="iperf"/>
        </items>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
</collection>

http:method:: GET /emulators/{name}

:arg name: Name of the emulator to get more information about

Displays information about an emulator by name. It returns the parameters that can be used with the emulator and their value limits (where applicable). The returned 200 OK XML:

<?xml version="1.0" ?>
<emulator href="/emulators/lookbusy" xmlns="">
    Emulator lookbusy can be used for following resources:
    1)Loads CPU with parameters:
      ncpus - Number of CPUs to keep busy (default: autodetected)

    2)Loads Memory(MEM) with parameters:
      memSleep - Time to sleep between iterations, in usec (default 1000)

    3)Changing size of files to use during IO with parameters:
      ioBlockSize - Size of blocks to use for I/O in MB
      ioSleep - Time to sleep between iterations, in msec (default 100)

    XML block example:

  <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
  <link href="/emulators" rel="parent" type="application/vnd.bonfire+xml"/>
</emulator>

http:method:: GET /distributions

Displays the distributions list. The returned 200 OK XML:

<?xml version="1.0" ?>
<collection href="/distributions" xmlns="">
        <items offset="0" total="3">
                <distribution href="/distributions/linear" name="linear"/>
                <distribution href="/distributions/linear_incr" name="linear_incr"/>
                <distribution href="/distributions/trapezoidal" name="trapezoidal"/>
        </items>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
</collection>

http:method:: GET /distributions/{name}

:arg name: Name of the distribution to get more information about

Displays information about a distribution by name. It returns the parameters that can be used with the distribution and their value limits (where applicable). The returned 200 OK XML:

<?xml version="1.0" ?>
<distribution href="/distributions/linear_incr" xmlns="">
    <help>Linear Increase distribution takes in start and stop load
                (plus malloclimit for MEM) parameters and gradually increasing
                resource workload by spawning jobs in parallel. Can be used with
                MEM,IO,NET resource types.</help>
  <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
  <link href="/distributions" rel="parent" type="application/vnd.bonfire+xml"/>
</distribution>

http:method:: GET /tests

Displays the tests list. The returned 200 OK XML:

<?xml version="1.0" ?>
<collection href="/tests" xmlns="">
        <items offset="0" total="20">
                <test href="/tests/01-CPU-Linear-Lookbusy_10-95.xml"/>
                <test href="/tests/03-NET-Linear_incr-Iperf-100-1000.xml"/>
                <test href="/tests/02-IO-Linear-Stressapptest_1-10.xml"/>
                <test href="/tests/02-IO-Linear_incr-Stressapptest_1-10.xml"/>
                <test href="/tests/02-MEM-Linear_incr-Stressapptest_100-1000.xml"/>
                <test href="/tests/01-CPU-Trapezoidal-Lookbusy_10-95.xml"/>
                <test href="/tests/01-IO-Trapezoidal-Lookbusy_1-10.xml"/>
                <test href="/tests/01-NET_TEST.xml" name="01-NET_TEST.xml"/>
                <test href="/tests/03-MEM-500-1000MB-overlap.xml"/>
                <test href="/tests/01-CPU-Linear_incr-Lookbusy_10-95.xml"/>
                <test href="/tests/01-IO-Linear_incr-Lookbusy_1-10.xml"/>
                <test href="/tests/02-IO-Trapezoidal-Stressapptest_1-10.xml"/>
                <test href="/tests/03-CPU-opposite.xml" name="03-CPU-opposite.xml"/>
                <test href="/tests/01-MEM-Linear_incr-Lookbusy_100-1000.xml"/>
                <test href="/tests/03-MEM-500-1000MB.xml"/>
                <test href="/tests/03-MEM-Linear-Stressapptest_500-1000MB.xml"/>
                <test href="/tests/01-MEM-Trapezoidal-Lookbusy_100-1000.xml"/>
                <test href="/tests/02-MEM-Trapezoidal-Stressapptest_100-1000.xml"/>
                <test href="/tests/03-NET-Trapezoidal-Iperf-100-1000.xml"/>
                <test href="/tests/01-IO-Linear-Lookbusy_1-10.xml"/>
        </items>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
</collection>

http:method:: GET /tests/{name}

:arg name: Name of the test to get more information about

Displays the content of the XML test file.

http:method:: POST /tests

:param string: Name of a test located on the COCOMA machine

Creates an emulation from the available tests. The returned 201 Created XML:

<?xml version="1.0" ?>
<test href="/tests/5-CPU_EMU" xmlns=""/>

The returned 400 error XML:

<?xml version="1.0" ?>
<error>error message</error>

http:method:: GET /results

Displays the results list. The returned 200 OK XML:

<?xml version="1.0" ?>
<collection href="/results" xmlns="">
        <items offset="0" total="5">
                <results failedRuns="0" href="/results/1-Emu-CPU-RAM-IO"
                name="1-Emu-CPU-RAM-IO" state="inactive"/>
                <results failedRuns="0" href="/results/2-CPU_EMU"
                name="2-CPU_EMU" state="inactive"/>
                <results failedRuns="0" href="/results/3-CPU_EMU"
                name="3-CPU_EMU" state="inactive"/>
                <results failedRuns="0" href="/results/4-CPU_EMU"
                name="4-CPU_EMU" state="inactive"/>
                <results failedRuns="0" href="/results/5-CPU_EMU"
                name="5-CPU_EMU" state="inactive"/>
        </items>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
</collection>

http:method:: GET /results/{name}

:arg name: Name of the result to get more information about

Displays information about a result by name. The returned 200 OK XML:

<?xml version="1.0" ?>
<results href="/results/1-Emu-CPU-RAM-IO" xmlns=""/>

http:method:: GET /logs

Displays the logs list. The returned 200 OK XML:

<?xml version="1.0" ?>
<logs href="/logs">
        <link href="/logs/emulations" rel="emulations" type="application/vnd.bonfire+xml"/>
        <link href="/logs/system" rel="system" type="application/vnd.bonfire+xml"/>
</logs>

http:method:: GET /logs/system

Returns a Zip file with the system logs.

http:method:: GET /logs/emulations

Displays the emulation logs list. The returned 200 OK XML:

<?xml version="1.0" ?>
<collection href="/logs/emulations" xmlns="">
        <items offset="0" total="3">
                <emulationLog href="/logs/emulations/3-CPU_EMU" name="3-CPU_EMU"/>
                <emulationLog href="/logs/emulations/5-CPU_EMU" name="5-CPU_EMU"/>
                <emulationLog href="/logs/emulations/4-CPU_EMU" name="4-CPU_EMU"/>
        </items>
        <link href="/" rel="parent" type="application/vnd.bonfire+xml"/>
        <link href="/logs" rel="parent" type="application/vnd.bonfire+xml"/>
</collection>

http:method:: GET /logs/{name}

:arg name: Name of the emulation whose logs you want to get

Returns a Zip file with the emulation logs.

API used

The Experiment Manager API is used through the restfully client to set up monitoring metrics in the Zabbix aggregator.

Message queue use

COCOMA writes messages to the EMQ, which are used by the provenance service. Each message contains a timestamp, the message content and the component that created it. The message content gives the type of action plus various parameters, depending on the specific action. The naming convention is that keys start with a capital letter, using camel case for multi-word keys. Below is the set of messages:

{"Timestamp": 1378893008.422242, "Message": {"Action": "Scheduler started",
"Interface": "eth0", "Port": "51889"}, "From": "Scheduler"}

{"Timestamp": 1378893809.897368, "Message": {"Action": "USER REQUEST Create
Emulation", "File": "tests/02-MEM-Linear_incr-Stressapptest_100-1000.xml"},
"From": "ccmsh"}

{"Timestamp": 1378893810.206373, "Message": {"Action": "Emulation request
received", "UserEmulationName": "MEM_EMU"}, "From": "Emulation Manager"}

{"Timestamp": 1378893810.744948, "Message": {"ResourceTypeDist": "mem",
"JobName": "2-MEM_EMU-2-0-mem_distro-lookbusy-mem", "DistributionName":
"mem_distro", "Emulator": "lookbusy", "Action": "Job Created", "RunNo":
"0", "EndTime": "2013-09-11 10:04:31", "EmulationName": "2-MEM_EMU",
"DistributionID": 2, "StressValue": 100, "StartTime": "2013-09-11 10:03:31",
"Duration": 60.0}, "From": "Scheduler"}

{"Timestamp": 1378893811.128323, "Message": {"ResourceTypeDist": "mem",
"JobName": "2-MEM_EMU-2-1-mem_distro-lookbusy-mem", "DistributionName":
"mem_distro", "Emulator": "lookbusy", "Action": "Job Created", "RunNo":
"1", "EndTime": "2013-09-11 10:04:31", "EmulationName": "2-MEM_EMU",
"DistributionID": 2, "StressValue": 75, "StartTime": "2013-09-11 10:03:43",
"Duration": 48.0}, "From": "Scheduler"}

{"Timestamp": 1378893811.479812, "Message": {"ResourceTypeDist": "mem",
"JobName": "2-MEM_EMU-2-2-mem_distro-lookbusy-mem", "DistributionName":
"mem_distro", "Emulator": "lookbusy", "Action": "Job Created", "RunNo": "2",
"EndTime": "2013-09-11 10:04:31", "EmulationName": "2-MEM_EMU",
"DistributionID": 2, "StressValue": 75, "StartTime": "2013-09-11 10:03:55",
"Duration": 36.0}, "From": "Scheduler"}

{"Timestamp": 1378893811.838568, "Message": {"ResourceTypeDist": "mem",
"JobName": "2-MEM_EMU-2-3-mem_distro-lookbusy-mem", "DistributionName":
"mem_distro", "Emulator": "lookbusy", "Action": "Job Created", "RunNo":
"3", "EndTime": "2013-09-11 10:04:31", "EmulationName": "2-MEM_EMU",
"DistributionID": 2, "StressValue": 75, "StartTime": "2013-09-11 10:04:07",
"Duration": 24.0}, "From": "Scheduler"}

{"Timestamp": 1378893812.189469, "Message": {"ResourceTypeDist": "mem",
"JobName": "2-MEM_EMU-2-4-mem_distro-lookbusy-mem", "DistributionName":
"mem_distro", "Emulator": "lookbusy", "Action": "Job Created", "RunNo":
"4", "EndTime": "2013-09-11 10:04:31", "EmulationName": "2-MEM_EMU",
"DistributionID": 2, "StressValue": 75, "StartTime": "2013-09-11 10:04:19",
"Duration": 12.0}, "From": "Scheduler"}

{"Timestamp": 1378893812.621874, "Message": {"Action": "Emulation created",
"EmulationName": "MEM_EMU"}, "From": "Emulation Manager"}

{"Timestamp": 1378893871.00535, "Message": {"Action": "Emulation finished",
"EmulationName": "2-MEM_EMU"}, "From": "Logger"}

{"Timestamp": 1378893871.163372, "Message": {"Action": "Job Executed Successfully",
"StartTime": "2013-09-11 10:04:07", "Duration": 24.0,
"EndTime": "2013-09-11 10:04:31", "StressValue": 75, "JobName":
"2-MEM_EMU-2-3-mem_distro-lookbusy-mem"}, "From": "Scheduler"}

{"Timestamp": 1378893871.274156, "Message": {"Action": "Job Executed Successfully",
"StartTime": "2013-09-11 10:04:19", "Duration": 12.0, "EndTime":
"2013-09-11 10:04:31", "StressValue": 75, "JobName":
"2-MEM_EMU-2-4-mem_distro-lookbusy-mem"}, "From": "Scheduler"}

{"Timestamp": 1378893871.398665, "Message": {"Action": "Job Executed Successfully",
"StartTime": "2013-09-11 10:03:55", "Duration": 36.0, "EndTime":
"2013-09-11 10:04:31", "StressValue": 75, "JobName":
"2-MEM_EMU-2-2-mem_distro-lookbusy-mem"}, "From": "Scheduler"}

{"Timestamp": 1378893871.493218, "Message": {"Action": "Job Executed Successfully",
"StartTime": "2013-09-11 10:03:43", "Duration": 48.0,
"EndTime": "2013-09-11 10:04:31", "StressValue": 75, "JobName":
"2-MEM_EMU-2-1-mem_distro-lookbusy-mem"}, "From": "Scheduler"}

{"Timestamp": 1378893871.628944, "Message": {"Action": "Job Executed Successfully",
"StartTime": "2013-09-11 10:03:31", "Duration": 60.0, "EndTime":
"2013-09-11 10:04:31", "StressValue": 100, "JobName":
"2-MEM_EMU-2-0-mem_distro-lookbusy-mem"}, "From": "Scheduler"}

{"Timestamp": 1378893913.604134, "Message": {"Action":
"USER REQUEST list all Emulations"}, "From": "ccmsh"}

{"Timestamp": 1378893929.615051, "Message": {"Action":
"USER REQUEST list Emulation", "EmulationName": "2-MEM_EMU"}, "From": "ccmsh"}

{"Timestamp": 1378894024.729127, "Message": {"Action":
"USER REQUEST delete Emulation", "EmulationName": "2-MEM_EMU"}, "From": "ccmsh"}

{"Timestamp": 1378894042.969776, "Message": {"Action":
"USER REQUEST purge all Emulations"}, "From": "ccmsh"}
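Consumers of the queue can decode these messages with a plain JSON parser. A minimal sketch, using the first message above as the sample:

```python
import json

def summarise_message(raw):
    """Return (From, Action, Timestamp) for one EMQ provenance message."""
    msg = json.loads(raw)
    return msg["From"], msg["Message"]["Action"], msg["Timestamp"]

raw = ('{"Timestamp": 1378893008.422242, "Message": {"Action": "Scheduler started", '
       '"Interface": "eth0", "Port": "51889"}, "From": "Scheduler"}')
```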

Implementation details

The Controlled Contentious and Malicious patterns (COCOMA) framework aims to provide experimenters with the ability to create specific contentious and malicious payloads and workloads in a controlled fashion. The experimenter can use pre-defined common distributions or specify new payloads and workloads. The table below presents the terminology introduced by COCOMA.

Term                       Description

Emulation                  Process that imitates a specific behaviour, specified in the emulation type, over a resource type, using one or more distributions during the emulation lifetime

Emulation type             An emulation can be of the following types:

                             • Contentiousness
                             • Maliciousness
                             • Faultiness (not yet implemented)
                             • Mixed (a combination of the above types)

Resource type              A resource can be of the following types:

                             • CPU
                             • RAM
                             • I/O
                             • Network

Emulator                   Specific mechanism/tool used to create an emulation type, for example load generators, stress generators, fault generators and malicious payload creators

Distribution               In the case of contention, a discrete function of a specific resource type over a specific time within the emulation lifetime. The distribution time is divided into multiple timeslots (t0, …, tn) based on the distribution granularity. A distribution is broken down into multiple runs, each injecting a different load level per time slot depending on the discrete function of the distribution. In the case of maliciousness, it is a direct mapping to the emulator

Distribution granularity   Number of runs in the distribution

Emulation lifetime         Duration of the emulation

Run                        Basic emulator instantiation

When defining an emulation, the user needs to specify distribution-emulator pairs. When specified, an emulator is bound to a specific resource type. For more complex scenarios, users can specify multiple pairs, which may also overlap in time.
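The timing in the Scheduler “Job Created” messages earlier follows a simple pattern: all runs of a distribution end together, and each run starts one slot later than the previous one. A sketch of that calculation (the actual distributionManager algorithm may differ):

```python
def run_schedule(duration, granularity):
    """Return (start_offset, duration) per run, for runs that all end
    when the distribution ends and start one slot apart."""
    slot = duration / granularity
    return [(i * slot, duration - i * slot) for i in range(granularity)]
```

For a 60-second distribution with granularity 5, this reproduces the 0/12/24/36/48-second offsets and 60/48/36/24/12-second durations seen in the message examples.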

COCOMA is provided within a BonFIRE VM, which is interfaced with the BonFIRE aggregator as shown in the figure below:


COCOMA design and components interactions

The different functions provided by the COCOMA components and their interactions are explained below:

  • ccmsh: the command line interface (CLI) used to interact with COCOMA. Users can specify an emulation in an XML file, which is interpreted by the XMLParser component. The CLI also allows checking and controlling the currently running emulations (list, delete, etc.) by interacting directly with the DB
  • REST API: COCOMA also provides a REST API to interact with the framework programmatically
  • XMLParser: checks that the XML is correctly formatted and returns the interpreted values needed to create an emulation. It is used by both the CLI and the API
  • emulationManager: receives input from ccmsh or the REST API to create/query/delete an emulation
  • distributionManager: receives input from the emulationManager to load the distribution(s) and apply the relevant algorithm for each distribution. It essentially calculates how many runs (individual basic emulator instances) are needed and the parameter values of each run. These are then passed to the scheduler
  • scheduler: creates the runs at the due time, using the values obtained from the distributionManager
  • DB: holds information about running emulations, registered emulators, distributions and some configuration
  • Logger: creates two different log files, one with all events relating to the created runs and the other with the resource usage

COCOMA is designed to work with different emulators. To add a new emulator, a user needs to install it where COCOMA is installed, create a Python wrapper for the emulator’s specific parameters and place this wrapper in the COCOMA emulators directory. A similar approach applies to distributions: users can specify their desired discrete functions in a Python file and place it in the distributions directory. Emulators and distributions found in those directories are automatically available for use. In the next sections we provide details of how to create both emulators and distributions.

Emulation states

An emulation can be in one of four possible states:

  • scheduled: the emulation has not run yet
  • running: the emulation is running, meaning that its end time has not been reached yet
  • executed: all runs have been executed successfully and the end time has passed
  • failed: at least one run has failed, so the emulation is marked as failed

Adding a new emulator

In order to add a new emulator, a new wrapper has to be implemented. This needs to inherit from the relevant abstract class, which can be found in the same emulators directory. The class needs the following methods:

  • emulatorHelp: used for displaying help about an emulator (e.g. what parameters it needs)
  • emulatorArgNames: used for returning the names of the arguments that a given emulator takes

Specific methods to execute the desired emulator instance with the required parameters will also have to be added. Checking the existing emulator wrappers should give a clear view of how the wrapping process can be carried out.
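A hypothetical wrapper skeleton is sketched below. Only the two method names come from the list above; the class name, the execution method and the external tool’s command line are illustrative assumptions:

```python
import subprocess

class MyEmulatorWrapper:  # would inherit from the abstract emulator class
    def emulatorHelp(self):
        # Free-text help describing the emulator's parameters.
        return ("myemulator loads the CPU with parameters:\n"
                "  cpus - number of CPUs to keep busy\n"
                "  load - target load percentage")

    def emulatorArgNames(self):
        # Names of the arguments this emulator takes.
        return ["cpus", "load"]

    def startCPU(self, cpus, load):
        # Execution method: spawn the external tool.
        # The command line here is illustrative only.
        return subprocess.Popen(["myemulator", "-c", str(cpus), "-l", str(load)])
```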

Adding a new distribution

In order to add a new distribution, a new class has to be implemented that inherits from the relevant abstract class, which can be found in the same distributions directory. The class needs three different methods:

  • distHelp: used for displaying help about a distribution (e.g. what resource types it can use)
  • functionCount: used for getting the values of stressValues, runStartTimeList and runDurations. The actual algorithm (which calculates those values) goes in this function
  • argNames: used for returning the names of the arguments that a given resource takes
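A hypothetical distribution skeleton implementing the three methods, with functionCount computing a simple linear increase from startLoad to stopLoad; the real classes in the distributions directory may differ in naming and signatures:

```python
class LinearIncrease:  # would inherit from the abstract distribution class
    def distHelp(self):
        return "Linear increase from startLoad to stopLoad; CPU, MEM, IO, NET."

    def argNames(self):
        return ["startLoad", "stopLoad"]

    def functionCount(self, startLoad, stopLoad, duration, granularity):
        # One run per granularity step, evenly spaced in time,
        # with load interpolated linearly between start and stop.
        slot = duration / granularity
        step = (stopLoad - startLoad) / (granularity - 1) if granularity > 1 else 0
        stressValues = [round(startLoad + i * step) for i in range(granularity)]
        runStartTimeList = [i * slot for i in range(granularity)]
        runDurations = [slot] * granularity
        return stressValues, runStartTimeList, runDurations
```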

Real trace parse

This feature allows a user to create a distribution from a real trace file. The format of the trace file has to be as follows:

MEMTOTAL 2074448
TIMESTAMP 1378900076312
2               34
2               34
2               34
2               34

The first lines provide information about the machine the trace was recorded on; this allows the usage to be scaled to the machine that has to replay it. As can be seen, for now only CPU and MEM are supported; in the future, IO and NET might be supported too. Below is an XML snippet showing a new tag called trace, which provides the path to the trace file from which the real_trace distribution creates the runs:

<distributions >
        <!--duration in seconds -->
        <distribution href="/distributions/real_trace" name="real_trace" />
        <emulator href="/emulators/lookbusy" name="lookbusy" />
                <!--time between iterations in usec (default 1000)-->
</distributions>

The duration is not used, as the actual duration is calculated from the trace itself. If the emulation ends before the distribution, any jobs left (running or scheduled) will be stopped.

As the concept of a distribution in COCOMA relates to a single resource (CPU, RAM, IO, NET), if a mixed (CPU and RAM) real-trace emulation is to be performed, two distributions can be added to the XML, each targeting one of the resources but having the same startTime and trace.
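A sketch of how a trace file in the format shown above could be parsed; the column meaning of the data rows (CPU, then MEM) is an assumption based on the example:

```python
def parse_trace(lines):
    """Split a trace into header key/value pairs and (cpu, mem) samples.

    Header lines start with an uppercase keyword (e.g. MEMTOTAL);
    data rows start with a number.
    """
    headers, samples = {}, []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0].isdigit():
            samples.append((int(parts[0]), int(parts[1])))
        else:
            headers[parts[0]] = int(parts[1])
    return headers, samples
```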

Recording a real trace

COCOMA ships with a script which can be used to create a trace file with the CPU and MEM used. The script accepts the recording frequency as an option, which by default is 1 second. As the script can also be used as a live monitoring tool, output redirection should be used in order to save the data into a file, for example:

$ 2 > trace_file.txt
        this uses a polling time of 2 seconds

$ timeout 30s 2 > trace_file.txt
        this prefixes the script with the *timeout* command so that
        it will run for the specified amount of time (30 seconds)

Event-driven approach

COCOMA offers two different ways to manage events. The first is time-based, where a distribution runs for a finite, known amount of time. With two time-based distributions, where the second has to run right after the first has finished, the duration of each distribution is explicitly specified, so it is possible to calculate the exact end time of the first distribution and schedule the start of the second accordingly. However, there are cases where the duration of a distribution is not known, e.g. malicious distributions. If, of two distributions to be run sequentially, the first has no duration, it would be impossible to schedule the second, since the end time of the first is unknown. We therefore introduced the event-driven approach, where the first distribution creates an end-job event that triggers the scheduling of the second. This makes it possible to take these duration-less distributions into account. An example is in the XML snippet below:

        <!--duration in seconds -->
        <distribution href="/distributions/event" name="event" />
        <emulator href="/emulators/backfuzz" name="backfuzz" />


        <!--duration in seconds -->
        <distribution href="/distributions/event" name="event" />
        <emulator href="/emulators/backfuzz" name="backfuzz" />


The new distribution format in this case has a new tag, nextevent, which tells the scheduler that the distribution to be scheduled once the first one finishes is the one named in this tag. The example relates to a malicious distribution, which is further explained in a dedicated section of this document. Please note that in this case, although they are specified, the starttime, granularity and duration are not actually used, as they do not apply in the event-driven context. Finally, the emustopTime still sets when the emulation ends, so if a distribution has not finished within the specified emulation time range, the jobs left (running and scheduled) will be stopped.
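The end-job event mechanism can be illustrated with a toy sketch; the class and method names are assumptions, not the actual scheduler code:

```python
class EventChain:
    """Toy scheduler: start the nextevent distribution when an end-job
    event arrives for its predecessor."""

    def __init__(self):
        self.next_distribution = {}  # finished name -> name from the nextevent tag
        self.started = []

    def register(self, name, nextevent=None):
        if nextevent:
            self.next_distribution[name] = nextevent

    def start(self, name):
        self.started.append(name)

    def on_end_job(self, finished):
        # Triggered when a duration-less distribution finishes.
        follower = self.next_distribution.get(finished)
        if follower:
            self.start(follower)
```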

Malicious module

The malicious module allows users to create distributions that target a specific machine by sending fuzzing data over a chosen protocol. The emulator supporting our malicious module is backfuzz [1] [2], which offers fuzzing over various well-known protocols such as HTTP, SSH, FTP and IMAP. Conveniently, all protocols are added to the tool as plugins, so if a new protocol needs to be tested, a new plugin can be created and added to the tool for that purpose. The fuzzing time cannot be known a priori, as it depends on factors outside the user’s control, such as the network between COCOMA and the SuT to target. The event-driven approach was therefore introduced to support this. The XML snippet below (the same as in the event-driven section) shows a malicious distribution using backfuzz:

        <!--duration in seconds -->
        <distribution href="/distributions/event" name="event" />
        <emulator href="/emulators/backfuzz" name="backfuzz" />


In the emulator parameters part we can specify the server IP and port, the minimum and maximum length of the fuzzing string sent, the protocol type and the time after which the fuzzing starts.

The Web UI

Here we explain how to use the web UI to create and manage emulations and view their results.

Opening the UI

Provided that the API is running, the web UI will be accessible at

http://[COCOMA API IP]:5050/index.html

The COCOMA API IP refers to the IP of the interface on which the API has been started. The page is compatible with the Chrome, Firefox and Safari web browsers, but not with Internet Explorer. The page automatically loads the available emulators, distributions and resources. It detects which distributions and resources are compatible with the given emulator, so that the user need not worry about creating XML the framework cannot process. Any emulations which already exist will also be displayed in the right-hand bar.



Creating an emulation

Each emulation requires a name and at least one distribution, although as many distributions as required can be added. Each distribution requires a name and all required fields to be filled in; this data will vary with the emulation, distribution or resource selected. Distribution windows can be minimised for overall readability, or removed entirely (not added to the emulation) by clicking the ‘x’ in the top right corner:


Multi-distribution interface

Distribution Parameters

Start time determines how long (in seconds) after the overall emulation has begun this particular distribution will begin. Duration is how long the distribution will last. Granularity refers to the number of steps taken from startLoad through to stopLoad over the course of the distribution’s run. For example, a 60-second CPU-stressing distribution with a granularity of 10 will move from startLoad to stopLoad in steps of 6 seconds. More information on the currently selected emulator or distribution and the specific parameters they require can be viewed by hovering over the blue question mark beside it:
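The granularity rule above works out as follows; the exact interpolation COCOMA applies between startLoad and stopLoad is an assumption:

```python
def load_steps(startLoad, stopLoad, duration, granularity):
    """Return (step_seconds, load_levels) for a linear sweep."""
    step_seconds = duration / granularity
    span = stopLoad - startLoad
    levels = [startLoad + span * i / (granularity - 1) for i in range(granularity)]
    return step_seconds, levels
```

With a 60-second duration and granularity 10, each step lasts 6 seconds, and the load sweeps linearly from startLoad to stopLoad.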


Help pop-ups

Logging and Message Queue

After the distributions have been created and specified, there is an option to enable or disable logging. Enabling logging gives two more options: the frequency in seconds and the level, which dictates the amount of output the logs will contain. Below this is the option to enable or disable the message queue, followed by various parameters allowing for its setup.


Logging and EMQ settings

Running the emulation

Once all the parameters are set there are two options: run the emulation right away by clicking the Run now button, or schedule the emulation to begin at a set time in the future by clicking the Run at button:


Set time for emulation

Working with existing emulations

Any existing emulations in your system will be listed on the right-hand side of the screen. The UI also displays the total number of runs, how many of those failed and the current state of the emulation (active or inactive). Hovering over an emulation name displays the information on that emulation in a popup, and clicking on it loads the emulation data into the creation screen on the left, where its parameters can be edited or the emulation simply run again right away. Clicking the small download icon to the right of each emulation prompts the download of a zip file. This zip file contains the .xml used to create the emulation as well as .csv files with the system logs and the logs for that specific emulation.


Emulations interface


COCOMA has two main sets of tests supplied with it, API tests and Command Line Interface (CLI) tests, both of which are implemented using Python’s unit testing framework, PyUnit. The test files (TestAPI and TestCLI) are located in “/usr/share/pyshared/cocoma/unitTest”. To run a set of tests on the API or CLI, navigate to the unitTest folder and use one of the commands:

$ python -m unittest -v TestAPI
$ python -m unittest -v TestCLI

The -v argument gives more verbose output and may be omitted if desired.

Individual test results are output to the terminal in the format test_Emulators (TestAPI.TestAPI) ... ok if the test was successful. An unsuccessful test will produce the same output, with ERROR or FAIL instead of ok. Once all the tests in the file have run, a summary of the results will be printed. This will indicate which (if any) tests were unsuccessful, and attempt to give a reason why the test failed.
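Since the real TestAPI and TestCLI files are not reproduced here, the sketch below is a hypothetical PyUnit test case in the same style, showing the structure that produces output like `test_Emulators (TestAPI.TestAPI) ... ok`. The class name, test name and the emulator list are illustrative only, not COCOMA's actual test code.

```python
import unittest

# Hypothetical test case in the style of TestAPI/TestCLI (the real files
# live in /usr/share/pyshared/cocoma/unitTest).
class TestExample(unittest.TestCase):
    def test_Emulators(self):
        # A real test would query the COCOMA API or CLI and check the
        # response; here we only assert on a placeholder list.
        available_emulators = ["stressapptest", "lookbusy", "iperf"]
        self.assertIn("iperf", available_emulators)

if __name__ == "__main__":
    # verbosity=2 mirrors the -v flag used above; exit=False keeps the
    # interpreter alive after the run.
    unittest.main(verbosity=2, exit=False)
```

Running this file prints one result line per test followed by the summary, in the same format the real test suites produce.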

CLI Testing

Individual tests can be run on the CLI using the syntax:

$ python -m unittest TestCLI.TestCLI.test_Name

Where test_Name is replaced by one of the following:





This will produce output similar to running the entire set of tests.

API Testing

Individual tests can be run on the API using the syntax:

$ python -m unittest TestAPI.TestAPI.test_Name

Where test_Name is replaced by one of the following:



This will produce output similar to running the entire set of tests.

Resource Overloading

In order to prevent resources from becoming overloaded (using more than 100% of a resource during the emulation time), the system calculates the resource usage before any emulation is run. If an emulation would cause any resource to become overloaded, that emulation will not run and an exception will be raised in the format:

Unable to create distribution:
CPU resource will become Overloaded: Stopping execution
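The pre-run check described above can be sketched as follows. This is not COCOMA's actual code; it assumes a hypothetical layout in which each distribution targeting the same resource is reduced to a list of per-slot load percentages, which are summed slot by slot.

```python
# Hedged sketch of a pre-run overload check: before an emulation starts,
# sum the load every distribution would place on a resource in each time
# slot and refuse to run if any slot would exceed 100%.

def check_overload(distributions, resource="CPU"):
    """distributions: one list of per-slot load percentages per
    distribution targeting the same resource (hypothetical layout)."""
    slots = max(len(d) for d in distributions)
    for slot in range(slots):
        total = sum(d[slot] if slot < len(d) else 0 for d in distributions)
        if total > 100:
            raise RuntimeError(
                "Unable to create distribution:\n"
                "%s resource will become Overloaded: Stopping execution"
                % resource)

# Two CPU distributions whose per-slot loads never sum past 100% pass:
check_overload([[20, 40, 60], [30, 30, 30]])
```

A check of this kind runs in a single pass over the schedule, so rejecting an overloading emulation costs nothing compared to starting and aborting it.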

Known Issues

The interaction of the various emulators used in COCOMA can cause unexpected issues. Some of these are listed below (note that this is not an exhaustive list; it will be updated as new issues are discovered):

  • Stressapptest uses ~100% CPU, regardless of which resource it is being run on
  • If a linear increase distribution is run on memory using stressapptest at the same time as iperf is loading the network, the network resource may not reach its target load. This problem is usually encountered once memory usage exceeds ~80% (as shown in the graph below)

MEM-NET issue

Code structure

COCOMA code structure is shown below:


The bin directory contains the main components presented in the COCOMA design and components interactions figure. It also contains three more files:

  • one used to create and manage the scheduler jobs
  • one containing the common functions used by various components
  • one implementing the logging mechanisms

The webUI files are also contained within the bin directory, specifically in a subdirectory called webcontent.

The data folder contains the SQLite database file. The distributions directory contains the distributions currently available, while the emulator wrappers are in the emulators folder. The rb-examples folder holds Restfully examples for creating emulations through the COCOMA API. The scripts directory contains the rec_res_usage script, which records a real trace from a system so that it can be replayed in COCOMA. Example XML tests are in the tests directory, while the automated tests are in the unitTest folder.
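As an illustration of what a trace recorder like rec_res_usage might do (its actual interface is not documented here), the sketch below samples a resource at a fixed interval and writes timestamped readings to CSV, which could later be replayed as a distribution. The sampler is passed in as a callable to keep the sketch self-contained; a real recorder could plug in psutil.cpu_percent and sleep between samples.

```python
import csv
import io

def record_trace(sample, samples, interval_s, out):
    """Write `samples` readings from `sample()` to the file-like `out`.

    Hypothetical helper: a real recorder would also sleep(interval_s)
    between readings; that is omitted so the sketch runs instantly.
    """
    writer = csv.writer(out)
    writer.writerow(["t_seconds", "load_percent"])
    for i in range(samples):
        writer.writerow([i * interval_s, sample()])

# Fake sampler standing in for a real CPU probe:
readings = iter([12.5, 40.0, 75.2])
buf = io.StringIO()
record_trace(lambda: next(readings), samples=3, interval_s=5, out=buf)
print(buf.getvalue())
```

The resulting CSV pairs each elapsed time with a load percentage, which matches the shape of data a replayed distribution needs.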

Building process

COCOMA is implemented in Python and the webUI in JavaScript, so there is nothing to compile. However, a deb package has been created to install the software along with some of its dependencies. A build script is provided along with the Debian files (control, postinst and postrm). The script uses 4 specific commands, in order:

  • sdist: this is used via python sdist to create a Python source distribution in tar.gz file format [3]. In order to use the command, a is provided as well [4]. To use the setuptools in the, a is also needed. This is provided too
  • py2dsc: this generates a debian source package from a Python package [5]
  • dch: this adds a new revision at the top of the Debian changelog file [6]
  • debuild: this builds the Debian package [7]

The building script pulls the COCOMA source code from the GitHub repository [8]. It takes as argument the version of the Debian package to build, for example:

./build_cocoma-deb 1.7.4

The building script, along with the other needed Python scripts and the directory structure, can be found in the BonFIRE svn repository [12]. Once the directory is checked out, the script can be run from that directory and the deb package will be created in the dist/deb_dist folder.

Dependencies and other tools

In order for COCOMA to work as required, a number of dependencies and tools are needed. The dependencies are:

  • Python v2.7.x; v3.x hasn’t been tested. python-support and python-dev are also needed

  • Python modules used by the components to implement various functionalities:
    • bottle: is a fast, simple and lightweight WSGI micro web-framework. Latest tested version is v0.11.6
    • psutil: is a module providing an interface for retrieving information on all running processes and system utilization (CPU, memory, disks, network). Latest tested version is v1.0.1
    • pyro4: a library that enables you to build applications in which objects can talk to each other over the network. Version v4.20 is required; later versions give serialization problems for the scheduler. This is a known issue acknowledged by the pyro developers, so once it is fixed, later versions should work
    • apscheduler: is a light but powerful in-process task scheduler that lets you schedule functions (or any other python callables) to be executed at times of your choosing. Latest tested version is v2.1.1
    • pika: is a pure-Python implementation of the AMQP 0-9-1 protocol. Latest tested version is v0.9.13
    • PyUnit: Python language version of unit testing framework JUnit. Latest tested version is v1.4.1
    • requests: an HTTP library. Latest tested version is v2.0.0
    • numpy: is a general-purpose array-processing package designed to efficiently manipulate large multi-dimensional arrays of arbitrary records. Latest tested version is v1.7.1
  • pip: A tool for installing and managing Python packages

As emulators, COCOMA uses tools such as stressapptest [9], lookbusy [10], backfuzz [1] and iperf [11]. Other tools are used for installation, such as curl, bc, unzip, gcc, g++ and make.

Installation procedure

An installation script has been created. It configures the VM with the needed environment tools and commands, then installs the dependencies, such as Python and pip, and some emulators. The VM contextualization script is then added to the Linux boot services. After the installation, the automated tests (see the testing section) can be run to verify that the process went well.