Diffstat (limited to 'doc/source')

 doc/source/_static/.gitignore                  |   0
 doc/source/_templates/.gitignore               |   0
 doc/source/additional_topics.rst               | 101
 doc/source/agenda.rst                          | 608
 doc/source/changes.rst                         |   7
 doc/source/conf.py                             | 270
 doc/source/configuration.rst                   | 188
 doc/source/contributing.rst                    |  45
 doc/source/conventions.rst                     |  74
 doc/source/daq_device_setup.rst                | 246
 doc/source/device_setup.rst                    | 407
 doc/source/execution_model.rst                 | 115
 doc/source/index.rst                           | 138
 doc/source/installation.rst                    | 144
 doc/source/instrumentation_method_map.rst      |  73
 doc/source/instrumentation_method_map.template |  17
 doc/source/invocation.rst                      | 135
 doc/source/quickstart.rst                      | 162
 doc/source/resources.rst                       |  45
 doc/source/revent.rst                          |  97
 doc/source/wa-execution.png                    | bin 0 -> 104977 bytes
 doc/source/writing_extensions.rst              | 956
 22 files changed, 3828 insertions, 0 deletions
diff --git a/doc/source/_static/.gitignore b/doc/source/_static/.gitignore
new file mode 100644
index 00000000..e69de29b
--- /dev/null
+++ b/doc/source/_static/.gitignore
diff --git a/doc/source/_templates/.gitignore b/doc/source/_templates/.gitignore
new file mode 100644
index 00000000..e69de29b
--- /dev/null
+++ b/doc/source/_templates/.gitignore
diff --git a/doc/source/additional_topics.rst b/doc/source/additional_topics.rst
new file mode 100644
index 00000000..520b3170
--- /dev/null
+++ b/doc/source/additional_topics.rst
@@ -0,0 +1,101 @@
+Additional Topics
++++++++++++++++++
+
+Modules
+=======
+
+Modules are essentially plug-ins for Extensions. They provide a way of defining
+common and reusable functionality. An Extension can load zero or more modules
+during its creation. Loaded modules will then add their capabilities (see
+Capabilities_) to those of the Extension. When calling code tries to access an
+attribute that the Extension doesn't have, the Extension will try to find the
+attribute among its loaded modules and will return that instead.
+
+.. note:: Modules are themselves extensions, and can therefore load their own
+ modules. *Do not* abuse this.
+
+For example, calling code may wish to reboot an unresponsive device by calling
+``device.hard_reset()``, but the ``Device`` in question does not have a
+``hard_reset`` method; however, the ``Device`` has loaded the ``netio_switch``
+module, which allows the power supply to be disabled over the network (say this
+device is in a rack and is powered through such a switch). The module has the
+``reset_power`` capability (see Capabilities_ below) and so implements
+``hard_reset``. This will get invoked when ``device.hard_reset()`` is called.
+
+.. note:: Modules can only extend Extensions with new attributes; they cannot
+ override existing functionality. In the example above, if the
+ ``Device`` has implemented ``hard_reset()`` itself, then *that* will
+ get invoked irrespective of which modules it has loaded.
+
+If two loaded modules have the same capability or implement the same method,
+then the last module to be loaded "wins" and its method will be invoked,
+effectively overriding the module that was loaded previously.
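+
+As a minimal illustration (plain Python, not a specific WA API), calling code
+that relies on this fall-through mechanism can simply check for the attribute
+it needs::
+
+    # hard_reset may be provided by the Device itself or by a loaded module
+    # such as netio_switch; the caller does not need to know which.
+    if hasattr(device, 'hard_reset'):
+        device.hard_reset()
+    else:
+        raise RuntimeError('device has no means of performing a hard reset')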
+
+Specifying Modules
+------------------
+
+Modules get loaded when an Extension is instantiated by the extension loader.
+There are two ways to specify which modules should be loaded for a device.
+
+
+Capabilities
+============
+
+Capabilities define the functionality that is implemented by an Extension,
+either within the Extension itself or through loadable modules. A capability is
+just a label, but there is an implied contract. When an Extension claims to have
+a particular capability, it promises to expose a particular set of
+functionality through a predefined interface.
+
+Currently used capabilities are described below.
+
+.. note:: Since capabilities are essentially arbitrary strings, the user can always
+          define their own; it is then up to the user to define, enforce and
+          document the contract associated with their capability. Below are the
+ "standard" capabilities used in WA.
+
+
+.. note:: The method signatures in the descriptions below show the calling
+ signature (i.e. they're omitting the initial self parameter).
+
+active_cooling
+--------------
+
+Intended to be used by devices and device modules, this capability implies
+that the device implements a controllable active cooling solution (e.g.
+a programmable fan). The device/module must implement the following methods:
+
+start_active_cooling()
+ Active cooling is started (e.g. the fan is turned on)
+
+stop_active_cooling()
+ Active cooling is stopped (e.g. the fan is turned off)
+
+
+reset_power
+-----------
+
+Intended to be used by devices and device modules, this capability implies
+that the device is capable of performing a hard reset by toggling power. The
+device/module must implement the following method:
+
+hard_reset()
+ The device is restarted. This method cannot rely on the device being
+ responsive and must work even if the software on the device has crashed.
+
+
+flash
+-----
+
+Intended to be used by devices and device modules, this capability implies
+that the device can be flashed with new images. The device/module must
+implement the following method:
+
+flash(image_bundle=None, images=None)
+ ``image_bundle`` is a path to a "bundle" (e.g. a tarball) that contains
+ all the images to be flashed. Which images go where must also be defined
+ within the bundle. ``images`` is a dict mapping image destination (e.g.
+ partition name) to the path to that specific image. Both
+ ``image_bundle`` and ``images`` may be specified at the same time. If
+ there is overlap between the two, ``images`` wins and its contents will
+ be flashed in preference to the ``image_bundle``.
diff --git a/doc/source/agenda.rst b/doc/source/agenda.rst
new file mode 100644
index 00000000..5b5ac690
--- /dev/null
+++ b/doc/source/agenda.rst
@@ -0,0 +1,608 @@
+.. _agenda:
+
+======
+Agenda
+======
+
+An agenda specifies what is to be done during a Workload Automation run,
+including which workloads will be run, with what configuration, which
+instruments and result processors will be enabled, etc. Agenda syntax is
+designed to be both succinct and expressive.
+
+Agendas are specified using YAML_ notation. It is recommended that you
+familiarize yourself with the linked page.
+
+.. _YAML: http://en.wikipedia.org/wiki/YAML
+
+.. note:: Earlier versions of WA have supported CSV-style agendas. These were
+ there to facilitate transition from WA1 scripts. The format was more
+ awkward and supported only a limited subset of the features. Support
+ for it has now been removed.
+
+
+Specifying which workloads to run
+=================================
+
+The central purpose of an agenda is to specify what workloads to run. A
+minimalist agenda contains a single entry at the top level called "workloads"
+that maps onto a list of workload names to run:
+
+.. code-block:: yaml
+
+ workloads:
+ - dhrystone
+ - memcpy
+ - cyclictest
+
+This specifies a WA run consisting of ``dhrystone`` followed by ``memcpy``, followed by
+``cyclictest`` workloads, and using instruments and result processors specified in
+config.py (see :ref:`configuration-specification` section).
+
+.. note:: If you're familiar with YAML, you will recognize the above as a single-key
+ associative array mapping onto a list. YAML has two notations for both
+ associative arrays and lists: block notation (seen above) and also
+ in-line notation. This means that the above agenda can also be
+ written in a single line as ::
+
+ workloads: [dhrystone, memcpy, cyclictest]
+
+ (with the list in-lined), or ::
+
+ {workloads: [dhrystone, memcpy, cyclictest]}
+
+ (with both the list and the associative array in-line). WA doesn't
+ care which of the notations is used as they all get parsed into the
+ same structure by the YAML parser. You can use whatever format you
+ find easier/clearer.
+
+Multiple iterations
+-------------------
+
+There will normally be some variability in workload execution when running on a
+real device. In order to quantify it, multiple iterations of the same workload
+are usually performed. You can specify the number of iterations for each
+workload by adding an ``iterations`` field to the workload specifications (or
+"specs"):
+
+.. code-block:: yaml
+
+ workloads:
+ - name: dhrystone
+ iterations: 5
+ - name: memcpy
+ iterations: 5
+ - name: cyclictest
+ iterations: 5
+
+Now that we're specifying both the workload name and the number of iterations in
+each spec, we have to explicitly name each field of the spec.
+
+It is often the case that, as in the example above, you will want to run all
+workloads for the same number of iterations. Rather than having to specify it
+for each and every spec, you can do so with a single entry by adding a ``global``
+section to your agenda:
+
+.. code-block:: yaml
+
+ global:
+ iterations: 5
+ workloads:
+ - dhrystone
+ - memcpy
+ - cyclictest
+
+The global section can contain the same fields as a workload spec. The
+fields in the global section will get added to each spec. If the same field is
+defined both in the global section and in a spec, then the value in the spec will
+overwrite the global value. For example, suppose we wanted to run all our workloads
+for five iterations, except cyclictest which we want to run for ten (e.g.
+because we know it to be particularly unstable). This can be specified like
+this:
+
+.. code-block:: yaml
+
+ global:
+ iterations: 5
+ workloads:
+ - dhrystone
+ - memcpy
+ - name: cyclictest
+ iterations: 10
+
+Again, because we are now specifying two fields for cyclictest spec, we have to
+explicitly name them.
+
+Configuring workloads
+---------------------
+
+Some workloads accept configuration parameters that modify their behavior. These
+parameters are specific to a particular workload and can alter the workload in
+any number of ways, e.g. set the duration for which to run, or specify a media
+file to be used, etc. The vast majority of workload parameters will have some
+default value, so it is only necessary to specify the name of the workload in
+order for WA to run it. However, sometimes you want more control over how a
+workload runs.
+
+For example, by default, dhrystone will execute 10 million loops across four
+threads. Suppose your device has six cores available and you want the workload to
+load them all. You also want to increase the total number of loops accordingly
+to 15 million. You can specify this using dhrystone's parameters:
+
+.. code-block:: yaml
+
+ global:
+ iterations: 5
+ workloads:
+ - name: dhrystone
+ params:
+ threads: 6
+ mloops: 15
+ - memcpy
+ - name: cyclictest
+ iterations: 10
+
+.. note:: You can find out what parameters a workload accepts by looking it up
+ in the :ref:`Workloads` section. You can also look it up using WA itself
+ with "show" command::
+
+ wa show dhrystone
+
+ see the :ref:`Invocation` section for details.
+
+In addition to configuring the workload itself, we can also specify
+configuration for the underlying device. This can be done by setting runtime
+parameters in the workload spec. For example, suppose we want to ensure the
+maximum score for our benchmarks, at the expense of power consumption, by
+setting the cpufreq governor to "performance" on cpu0 (assuming all our cores
+are in the same DVFS domain and so setting the governor for cpu0 will affect all
+cores). This can be done like this:
+
+.. code-block:: yaml
+
+ global:
+ iterations: 5
+ workloads:
+ - name: dhrystone
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ workload_params:
+ threads: 6
+ mloops: 15
+ - memcpy
+ - name: cyclictest
+ iterations: 10
+
+
+Here, we're specifying ``sysfile_values`` runtime parameter for the device. The
+value for this parameter is a mapping (an associative array, in YAML) of file
+paths onto values that should be written into those files. ``sysfile_values`` is
+the only runtime parameter that is available for any (Linux) device. Other
+runtime parameters will depend on the specifics of the device used (e.g. its
+CPU core configuration). I've renamed ``params`` to ``workload_params`` for
+clarity, but that wasn't strictly necessary as ``params`` is interpreted as
+``workload_params`` inside a workload spec.
+
+.. note:: ``params`` field is interpreted differently depending on whether it's in a
+ workload spec or the global section. In a workload spec, it translates to
+ ``workload_params``, in the global section it translates to ``runtime_params``.
+
+Runtime parameters do not automatically reset at the end of workload spec
+execution, so all subsequent iterations will also be affected unless they
+explicitly change the parameter (in the example above, the performance governor will
+also be used for ``memcpy`` and ``cyclictest``). There are two ways around this:
+either set ``reboot_policy`` WA setting (see :ref:`configuration-specification` section) such that
+the device gets rebooted between spec executions, thus being returned to its
+initial state, or set the default runtime parameter values in the ``global``
+section of the agenda so that they get set for every spec that doesn't
+explicitly override them.
+
+.. note:: "In addition to ``runtime_params`` there are also ``boot_params`` that
+ work in a similar way, but they get passed to the device when it
+ reboots. At the moment ``TC2`` is the only device that defines a boot
+ parameter, which is explained in ``TC2`` documentation, so boot
+ parameters will not be mentioned further.
+
+IDs and Labels
+--------------
+
+It is possible to list multiple specs with the same workload in an agenda. You
+may wish to do this if you want to run a workload with different parameter values
+or under different runtime configurations of the device. The workload name
+therefore does not uniquely identify a spec. To be able to distinguish between
+different specs (e.g. in reported results), each spec has an ID which is unique
+to all specs within an agenda (and therefore within a single WA run). If an ID
+isn't explicitly specified using ``id`` field (note that the field name is in
+lower case), one will be automatically assigned to the spec at the beginning of
+the WA run based on the position of the spec within the list. The first spec
+*without an explicit ID* will be assigned ID ``1``, the second spec *without an
+explicit ID* will be assigned ID ``2``, and so forth.
+
+Numerical IDs aren't particularly easy to deal with, which is why it is
+recommended that, for non-trivial agendas, you manually set the ids to something
+more meaningful (or use labels -- see below). An ID can be pretty much anything
+that will pass through the YAML parser. The only requirement is that it is
+unique to the agenda. However, it is usually better to keep them reasonably short
+(they don't need to be *globally* unique), and to stick with alpha-numeric
+characters and underscores/dashes. While WA can handle other characters as well,
+getting too adventurous with your IDs may cause issues further down the line
+when processing WA results (e.g. when uploading them to a database that may have
+its own restrictions).
+
+In addition to IDs, you can also specify labels for your workload specs. These
+are similar to IDs but do not have the uniqueness restriction. If specified,
+labels will be used by some result processors instead of (or in addition to) the
+workload name. For example, the ``csv`` result processor will put the label in the
+"workload" column of the CSV file.
+
+It is up to you how you choose to use IDs and labels. WA itself doesn't expect
+any particular format (apart from uniqueness for IDs). Below is the earlier
+example updated to specify explicit IDs and to label the dhrystone spec to
+reflect the parameters used.
+
+.. code-block:: yaml
+
+ global:
+ iterations: 5
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
+
+Result Processors and Instrumentation
+=====================================
+
+Result Processors
+-----------------
+
+Result processors, as the name suggests, handle the processing of results
+generated from running workload specs. By default, WA enables a couple of basic
+result processors (e.g. one generates a csv file with all scores reported by
+workloads), which you can see in ``~/.workload_automation/config.py``. However,
+WA has a number of other, more specialized, result processors (e.g. for
+uploading to databases). You can list available result processors with
+``wa list result_processors`` command. If you want to permanently enable a
+result processor, you can add it to your ``config.py``. You can also enable a
+result processor for a particular run by specifying it in the ``config`` section
+in the agenda. As the name suggests, the ``config`` section mirrors the structure of
+``config.py`` (although using YAML rather than Python), and anything that can
+be specified in the latter can also be specified in the former.
+
+As with workloads, result processors may have parameters that define their
+behavior. Parameters of result processors are specified a little differently,
+however. Result processor parameter values are listed in the config section,
+namespaced under the name of the result processor.
+
+For example, suppose we want to be able to easily query the results generated by
+the workload specs we've defined so far. We can use the ``sqlite`` result processor
+to have WA create an sqlite_ database file with the results. By default, this
+file will be generated in WA's output directory (at the same level as
+results.csv); but suppose we want to store the results in the same file for
+every run of the agenda we do. This can be done by specifying an alternative
+database file with the ``database`` parameter of the result processor:
+
+.. code-block:: yaml
+
+ config:
+ result_processors: [sqlite]
+ sqlite:
+ database: ~/my_wa_results.sqlite
+ global:
+ iterations: 5
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
+A couple of things to observe here:
+
+- There is no need to repeat the result processors listed in ``config.py``. The
+ processors listed in ``result_processors`` entry in the agenda will be used
+ *in addition to* those defined in the ``config.py``.
+- The database file is specified under "sqlite" entry in the config section.
+ Note, however, that this entry alone is not enough to enable the result
+ processor, it must be listed in ``result_processors``, otherwise the "sqilte"
+ config entry will be ignored.
+- The database file must be specified as an absolute path, however it may use
+ the user home specifier '~' and/or environment variables.
+
+.. _sqlite: http://www.sqlite.org/
+
+
+Instrumentation
+---------------
+
+WA can enable various "instruments" to be used during workload execution.
+Instruments can be quite diverse in their functionality, but the majority of
+instruments available in WA today are there to collect additional data (such as
+trace) from the device during workload execution. You can view the list of
+available instruments by using ``wa list instruments`` command. As with result
+processors, a few are enabled by default in the ``config.py`` and additional
+ones may be added in the same place, or specified in the agenda using
+``instrumentation`` entry.
+
+For example, we can collect core utilisation statistics (for what proportion of
+workload execution N cores were utilized above a specified threshold) using
+``coreutil`` instrument.
+
+.. code-block:: yaml
+
+ config:
+ instrumentation: [coreutil]
+ coreutil:
+ threshold: 80
+ result_processors: [sqlite]
+ sqlite:
+ database: ~/my_wa_results.sqlite
+ global:
+ iterations: 5
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
+Instrumentation isn't "free" and it is advisable not to have too many
+instruments enabled at once as that might skew results. For example, you don't
+want to have power measurement enabled at the same time as event tracing, as the
+latter may prevent cores from going into idle states and thus affect the
+readings collected by the former.
+
+Unlike result processors, instrumentation may be enabled (and disabled -- see below)
+on a per-spec basis. For example, suppose we want to collect /proc/meminfo from the
+device when we run the ``memcpy`` workload, but not for the other two. We can do that
+using the ``sysfs_extractor`` instrument, and we will only enable it for ``memcpy``:
+
+.. code-block:: yaml
+
+ config:
+ instrumentation: [coreutil]
+ coreutil:
+ threshold: 80
+ sysfs_extractor:
+ paths: [/proc/meminfo]
+ result_processors: [sqlite]
+ sqlite:
+ database: ~/my_wa_results.sqlite
+ global:
+ iterations: 5
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ instrumentation: [sysfs_extractor]
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
+As with the ``config`` section, the ``instrumentation`` entry in the spec only needs
+to list additional instruments and does not need to repeat instruments specified
+elsewhere.
+
+.. note:: At present, it is only possible to enable/disable instrumentation on
+          a per-spec basis. It is *not* possible to provide configuration on
+ per-spec basis in the current version of WA (e.g. in our example, it
+ is not possible to specify different ``sysfs_extractor`` paths for
+ different workloads). This restriction may be lifted in future
+ versions of WA.
+
+Disabling result processors and instrumentation
+-----------------------------------------------
+
+As seen above, extensions specified with ``instrumentation`` and
+``result_processors`` clauses get added to those already specified previously.
+Just because an instrument specified in ``config.py`` is not listed in the
+``config`` section of the agenda does not mean it will be disabled. If you do
+want to disable an instrument, you can always remove/comment it out from
+``config.py``. However, that would introduce a permanent configuration change
+to your environment (one that can be easily reverted, but may be just as
+easily forgotten). If you want to temporarily disable a result processor or an
+instrument for a particular run, you can do that in your agenda by prepending a
+tilde (``~``) to its name.
+
+For example, let's say we want to disable the ``cpufreq`` instrument enabled in our
+``config.py`` (suppose we're going to send results via email and so want to
+reduce the total size of the output directory):
+
+.. code-block:: yaml
+
+ config:
+ instrumentation: [coreutil, ~cpufreq]
+ coreutil:
+ threshold: 80
+ sysfs_extractor:
+ paths: [/proc/meminfo]
+ result_processors: [sqlite]
+ sqlite:
+ database: ~/my_wa_results.sqlite
+ global:
+ iterations: 5
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ instrumentation: [sysfs_extractor]
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
+
+Sections
+========
+
+It is a common requirement to be able to run the same set of workloads under
+different device configurations. E.g. you may want to investigate the impact of
+changing a particular setting to different values on the benchmark scores, or to
+quantify the impact of enabling a particular feature in the kernel. WA allows
+this by defining "sections" of configuration within an agenda.
+
+For example, suppose what we really want is to measure the impact of using the
+interactive cpufreq governor vs the performance governor on the three
+benchmarks. We could create another three workload spec entries similar to the
+ones we already have and change the sysfile value being set to "interactive".
+However, this introduces a lot of duplication; and what if we want to change
+spec configuration? We would have to change it in multiple places, running the
+risk of forgetting one.
+
+A better way is to keep the three workload specs and define a section for each
+governor:
+
+.. code-block:: yaml
+
+ config:
+ instrumentation: [coreutil, ~cpufreq]
+ coreutil:
+ threshold: 80
+ sysfs_extractor:
+ paths: [/proc/meminfo]
+ result_processors: [sqlite]
+ sqlite:
+ database: ~/my_wa_results.sqlite
+ global:
+ iterations: 5
+ sections:
+ - id: perf
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ - id: inter
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: interactive
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ instrumentation: [sysfs_extractor]
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
+A section, just like a workload spec, needs to have a unique ID. Apart from
+that, a "section" is similar to the ``global`` section we've already seen --
+everything that goes into a section will be applied to each workload spec.
+Workload specs defined under top-level ``workloads`` entry will be executed for
+each of the sections listed under ``sections``.
+
+.. note:: It is also possible to have a ``workloads`` entry within a section,
+ in which case, those workloads will only be executed for that specific
+ section.
+
+In order to maintain the uniqueness requirement of workload spec IDs, they will
+be namespaced under each section by prepending the section ID to the spec ID
+with an underscore. So in the agenda above, we no longer have a workload spec
+with ID ``01_dhry``, instead there are two specs with IDs ``perf_01_dhry`` and
+``inter_01_dhry``.
+
+Note that the ``global`` section still applies to every spec in the agenda. So
+the precedence order is -- spec settings override section settings, which in
+turn override global settings.
+
+
+Other Configuration
+===================
+
+.. _configuration_in_agenda:
+
+As mentioned previously, the ``config`` section in an agenda can contain anything
+that can be defined in ``config.py`` (with Python syntax translated to the
+equivalent YAML). Certain configuration (e.g. ``run_name``) makes more sense
+to define in an agenda than in a config file. Refer to the
+:ref:`configuration-specification` section for details.
+
+.. code-block:: yaml
+
+ config:
+ project: governor_comparison
+ run_name: performance_vs_interactive
+
+ device: generic_android
+ reboot_policy: never
+
+ instrumentation: [coreutil, ~cpufreq]
+ coreutil:
+ threshold: 80
+ sysfs_extractor:
+ paths: [/proc/meminfo]
+ result_processors: [sqlite]
+ sqlite:
+ database: ~/my_wa_results.sqlite
+ global:
+ iterations: 5
+ sections:
+ - id: perf
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: performance
+ - id: inter
+ runtime_params:
+ sysfile_values:
+ /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: interactive
+ workloads:
+ - id: 01_dhry
+ name: dhrystone
+ label: dhrystone_15over6
+ workload_params:
+ threads: 6
+ mloops: 15
+ - id: 02_memc
+ name: memcpy
+ instrumentation: [sysfs_extractor]
+ - id: 03_cycl
+ name: cyclictest
+ iterations: 10
+
diff --git a/doc/source/changes.rst b/doc/source/changes.rst
new file mode 100644
index 00000000..9d1dd58d
--- /dev/null
+++ b/doc/source/changes.rst
@@ -0,0 +1,7 @@
+What's New in Workload Automation
+=================================
+
+Version 2.3.0
+-------------
+
+- First publicly-released version.
diff --git a/doc/source/conf.py b/doc/source/conf.py
new file mode 100644
index 00000000..56c30053
--- /dev/null
+++ b/doc/source/conf.py
@@ -0,0 +1,270 @@
+# -*- coding: utf-8 -*-
+# Copyright 2015 ARM Limited
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+#
+# Workload Automation 2 documentation build configuration file, created by
+# sphinx-quickstart on Mon Jul 15 09:00:46 2013.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os
+import warnings
+
+warnings.filterwarnings('ignore', "Module louie was already imported")
+
+this_dir = os.path.dirname(__file__)
+sys.path.insert(0, os.path.join(this_dir, '../..'))
+import wlauto
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.mathjax', 'sphinx.ext.ifconfig', 'sphinx.ext.viewcode']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'Workload Automation'
+copyright = u'2013, ARM Ltd'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = wlauto.__version__
+# The full version, including alpha/beta/rc tags.
+release = wlauto.__version__
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['**/*-example']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+html_theme = 'default'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'WorkloadAutomationdoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+ ('index', 'WorkloadAutomation.tex', u'Workload Automation Documentation',
+ u'WA Mailing List \\textless{}workload-automation@arm.com\\textgreater{},Sergei Trofimov \\textless{}sergei.trofimov@arm.com\\textgreater{}, Vasilis Flouris \\textless{}vasilis.flouris@arm.com\\textgreater{}, Mohammed Binsabbar \\textless{}mohammed.binsabbar@arm.com\\textgreater{}', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ ('index', 'workloadautomation', u'Workload Automation Documentation',
+ [u'WA Mailing List <workload-automation@arm.com>, Sergei Trofimov <sergei.trofimov@arm.com>, Vasilis Flouris <vasilis.flouris@arm.com>'], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output ------------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ ('index', 'WorkloadAutomation', u'Workload Automation Documentation',
+   u'WA Mailing List <workload-automation@arm.com>, Sergei Trofimov <sergei.trofimov@arm.com>, Vasilis Flouris <vasilis.flouris@arm.com>', 'WorkloadAutomation', 'A framework for automating workload execution on mobile devices.',
+ 'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+
+def setup(app):
+ app.add_object_type('confval', 'confval',
+ objname='configuration value',
+ indextemplate='pair: %s; configuration value')
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
new file mode 100644
index 00000000..8551c672
--- /dev/null
+++ b/doc/source/configuration.rst
@@ -0,0 +1,188 @@
+.. _configuration-specification:
+
+=============
+Configuration
+=============
+
+In addition to specifying run execution parameters through an agenda, the
+behavior of WA can be modified through configuration file(s). The default
+configuration file is ``~/.workload_automation/config.py`` (the location can be
+changed by setting ``WA_USER_DIRECTORY`` environment variable, see :ref:`envvars`
+section below). This file will be
+created when you first run WA if it does not already exist. This file must
+always exist and will always be loaded. You can add to or override the contents
+of that file on invocation of Workload Automation by specifying an additional
+configuration file using ``--config`` option.
+
+The config file is just a Python source file, so it can contain any valid Python
+code (though execution of arbitrary code through the config file is
+discouraged). Variables with specific names will be picked up by the framework
+and used to modify the behavior of Workload Automation.
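+
+For illustration, a minimal ``config.py`` might look like the sketch below. The
+setting names are described in the Available Settings section that follows; the
+values shown are examples only and must be adjusted to match your setup::
+
+    # ~/.workload_automation/config.py -- illustrative values only
+    device = 'generic_android'
+    device_config = dict(
+        # device-specific settings; refer to your device's documentation
+    )
+
+    reboot_policy = 'never'
+
+    instrumentation = ['coreutil', 'cpufreq']
+    result_processors = ['csv', 'sqlite']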
+
+.. note:: As of version 2.1.3, it is also possible to specify the following
+ configuration in the agenda. See :ref:`configuration in an agenda <configuration_in_agenda>`\ .
+
+
+.. _available_settings:
+
+Available Settings
+==================
+
+.. note:: Extensions such as workloads, instrumentation or result processors
+ may also pick up certain settings from this file, so the list below is
+ not exhaustive. Please refer to the documentation for the specific
+ extensions to see what settings they accept.
+
+.. confval:: device
+
+ This setting defines what specific Device subclass will be used to interact
+    with the connected device. Obviously, this must match your setup.
+
+.. confval:: device_config
+
+    This must be a Python dict containing a setting-value mapping for the
+ configured :rst:dir:`device`. What settings and values are valid is specific
+ to each device. Please refer to the documentation for your device.
+
+.. confval:: reboot_policy
+
+ This defines when during execution of a run the Device will be rebooted. The
+ possible values are:
+
+ ``"never"``
+ The device will never be rebooted.
+ ``"initial"``
+ The device will be rebooted when the execution first starts, just before
+ executing the first workload spec.
+ ``"each_spec"``
+ The device will be rebooted before running a new workload spec.
+ Note: this acts the same as each_iteration when execution order is set to by_iteration
+ ``"each_iteration"``
+ The device will be rebooted before each new iteration.
+
+ .. seealso::
+
+ :doc:`execution_model`
+
+.. confval:: execution_order
+
+ Defines the order in which the agenda spec will be executed. At the moment,
+ the following execution orders are supported:
+
+ ``"by_iteration"``
+ The first iteration of each workload spec is executed one after the other,
+ so all workloads are executed before proceeding on to the second iteration.
+ E.g. A1 B1 C1 A2 C2 A3. This is the default if no order is explicitly specified.
+
+ In case of multiple sections, this will spread them out, such that specs
+        from the same section are further apart. E.g. given sections X and Y, global
+ specs A and B, and two iterations, this will run ::
+
+ X.A1, Y.A1, X.B1, Y.B1, X.A2, Y.A2, X.B2, Y.B2
+
+ ``"by_section"``
+ Same as ``"by_iteration"``, however this will group specs from the same
+ section together, so given sections X and Y, global specs A and B, and two iterations,
+ this will run ::
+
+ X.A1, X.B1, Y.A1, Y.B1, X.A2, X.B2, Y.A2, Y.B2
+
+ ``"by_spec"``
+ All iterations of the first spec are executed before moving on to the next
+        spec. E.g. A1 A2 A3 B1 C1 C2. This may also be specified as ``"classic"``,
+ as this was the way workloads were executed in earlier versions of WA.
+
+ ``"random"``
+ Execution order is entirely random.
+
+ Added in version 2.1.5.
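+
+    To illustrate, the interleavings above can be reproduced with a short
+    Python sketch (illustrative only, this is not WA code)::
+
+        sections = ['X', 'Y']
+        specs = ['A', 'B']
+        iterations = 2
+
+        # by_iteration: cycle through sections fastest, then specs, then iterations
+        by_iteration = ['%s.%s%d' % (s, w, i + 1)
+                        for i in range(iterations)
+                        for w in specs
+                        for s in sections]
+
+        # by_section: keep specs from the same section together within an iteration
+        by_section = ['%s.%s%d' % (s, w, i + 1)
+                      for i in range(iterations)
+                      for s in sections
+                      for w in specs]
+
+        print(', '.join(by_iteration))
+        # X.A1, Y.A1, X.B1, Y.B1, X.A2, Y.A2, X.B2, Y.B2
+        print(', '.join(by_section))
+        # X.A1, X.B1, Y.A1, Y.B1, X.A2, X.B2, Y.A2, Y.B2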
+
+.. confval:: instrumentation
+
+ This should be a list of instruments to be enabled during run execution.
+ Values must be names of available instruments. Instruments are used to
+ collect additional data, such as energy measurements or execution time,
+ during runs.
+
+ .. seealso::
+
+ :doc:`api/wlauto.instrumentation`
+
+.. confval:: result_processors
+
+ This should be a list of result processors to be enabled during run execution.
+    Values must be names of available result processors. Result processors define
+ how data is output from WA.
+
+ .. seealso::
+
+ :doc:`api/wlauto.result_processors`
+
+.. confval:: logging
+
+    A dict that contains logging settings. At the moment only three settings are
+ supported:
+
+ ``"file format"``
+ Controls how logging output appears in the run.log file in the output
+ directory.
+ ``"verbose format"``
+        Controls how logging output appears on the console when ``--verbose`` flag
+ was used.
+ ``"regular format"``
+        Controls how logging output appears on the console when ``--verbose`` flag
+ was not used.
+
+ All three values should be Python `old-style format strings`_ specifying which
+ `log record attributes`_ should be displayed.
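+
+    For example (the format strings shown are illustrative only)::
+
+        logging = {
+            'regular format': '%(levelname)-8s %(message)s',
+            'verbose format': '%(asctime)s %(levelname)-8s %(name)s: %(message)s',
+            'file format': '%(asctime)s %(levelname)-8s %(name)s: %(message)s',
+        }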
+
+There are also a couple of settings that are used to provide additional metadata
+for a run. These may get picked up by instruments or result processors to
+attach context to results.
+
+.. confval:: project
+
+ A string naming the project for which data is being collected. This may be
+ useful, e.g. when uploading data to a shared database that is populated from
+ multiple projects.
+
+.. confval:: project_stage
+
+    A dict or a string that allows an additional identifier to be added. This may
+    be useful for long-running projects.
+
+.. confval:: run_name
+
+    A string that labels the WA run that is being performed. This would typically
+ be set in the ``config`` section of an agenda (see
+ :ref:`configuration in an agenda <configuration_in_agenda>`) rather than in the config file.
+
+.. _old-style format strings: http://docs.python.org/2/library/stdtypes.html#string-formatting-operations
+.. _log record attributes: http://docs.python.org/2/library/logging.html#logrecord-attributes
+
+
+.. _envvars:
+
+Environment Variables
+=====================
+
+In addition to standard configuration described above, WA behaviour can be
+altered through environment variables. These can determine where WA looks for
+various assets when it starts.
+
+.. confval:: WA_USER_DIRECTORY
+
+    This is the location WA will look for config.py, instrumentation, and it
+ will also be used for local caches, etc. If this variable is not set, the
+ default location is ``~/.workload_automation`` (this is created when WA
+ is installed).
+
+ .. note:: This location **must** be writable by the user who runs WA.
+
+
+.. confval:: WA_EXTENSION_PATHS
+
+ By default, WA will look for extensions in its own package and in
+ subdirectories under ``WA_USER_DIRECTORY``. This environment variable can
+    be used to specify a colon-separated list of additional locations WA should
+ use to look for extensions.
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
new file mode 100644
index 00000000..d0696ce7
--- /dev/null
+++ b/doc/source/contributing.rst
@@ -0,0 +1,45 @@
+
+Contributing Code
+=================
+
+We welcome code contributions via GitHub pull requests to the official WA
+repository. To help with the maintainability of the code we ask that the code
+uses a coding style consistent with the rest of the WA code, which is basically
+`PEP8 <https://www.python.org/dev/peps/pep-0008/>`_ with line length and block
+comment rules relaxed (the wrapper for the PEP8 checker inside ``dev_scripts`` will
+run it with the appropriate configuration).
+
+We ask that the following checks are performed on the modified code prior to
+submitting a pull request:
+
+.. note:: You will need pylint and pep8 static checkers installed::
+
+ pip install pep8
+ pip install pylint
+
+          It is recommended that you install via pip rather than through your
+          distribution's package manager because the latter is likely to
+          contain out-of-date versions of these tools.
+
+- ``./dev_scripts/pylint`` should be run without arguments and should produce no
+ output (any output should be addressed by making appropriate changes in the
+ code or adding a pylint ignore directive, if there is a good reason for
+ keeping the code as is).
+- ``./dev_scripts/pep8`` should be run without arguments and should produce no
+ output (any output should be addressed by making appropriate changes in the
+ code).
+- If the modifications touch core framework (anything under ``wlauto/core``), unit
+ tests should be run using ``nosetests``, and they should all pass.
+
+ - If significant additions have been made to the framework, unit
+ tests should be added to cover the new functionality.
+
+- If modifications have been made to documentation (this includes description
+ attributes for Parameters and Extensions), documentation should be built to
+  make sure there are no errors or warnings during the build process, and a visual
+  inspection of new/updated sections in the resulting HTML should be performed to
+  ensure everything renders as expected.
+
+Once your contribution is ready, please follow the instructions in `GitHub
+documentation <https://help.github.com/articles/creating-a-pull-request/>`_ to
+create a pull request.
diff --git a/doc/source/conventions.rst b/doc/source/conventions.rst
new file mode 100644
index 00000000..c811f522
--- /dev/null
+++ b/doc/source/conventions.rst
@@ -0,0 +1,74 @@
+===========
+Conventions
+===========
+
+Interface Definitions
+=====================
+
+Throughout this documentation a number of stubbed-out class definitions will be
+presented showing an interface defined by a base class that needs to be
+implemented by the deriving classes. The following conventions will be used when
+presenting such an interface:
+
+ - Methods shown raising :class:`NotImplementedError` are abstract and *must*
+ be overridden by subclasses.
+ - Methods with ``pass`` in their body *may* be (but do not need to be) overridden
+ by subclasses. If not overridden, these methods will default to the base
+ class implementation, which may or may not be a no-op (the ``pass`` in the
+ interface specification does not necessarily mean that the method does not have an
+ actual implementation in the base class).
+
+ .. note:: If you *do* override these methods you must remember to call the
+ base class' version inside your implementation as well.
+
+ - Attributes whose value is shown as ``None`` *must* be redefined by the
+ subclasses with an appropriate value.
+ - Attributes whose value is shown as something other than ``None`` (including
+ empty strings/lists/dicts) *may* be (but do not need to be) overridden by
+ subclasses. If not overridden, they will default to the value shown.
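+
+For example, an interface presented using these conventions might look like the
+following (a made-up class, not an actual WA base class)::
+
+    class DeviceInterface(object):
+
+        # must be redefined by subclasses
+        name = None
+
+        # may be overridden; if it is not, this value will be used
+        default_timeout = 30
+
+        def connect(self):
+            # abstract -- subclasses must implement this
+            raise NotImplementedError()
+
+        def initialize(self):
+            # may be overridden; remember to call the base class version
+            pass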
+
+Keep in mind that the above convention applies only when showing interface
+definitions and may not apply elsewhere in the documentation. Also, in the
+interest of clarity, only the relevant parts of the base class definitions will
+be shown; some members (such as internal methods) may be omitted.
+
+
+Code Snippets
+=============
+
+Code snippets provided are intended to be valid Python code, and to be complete.
+However, for the sake of clarity, in some cases only the relevant parts will be
+shown with some details omitted (details that may be necessary to the validity of the
+code but not to understanding of the concept being illustrated). In such cases, a
+commented ellipsis will be used to indicate that parts of the code have been
+dropped. E.g. ::
+
+ # ...
+
+ def update_result(self, context):
+ # ...
+ context.result.add_metric('energy', 23.6, 'Joules', lower_is_better=True)
+
+ # ...
+
+
+Core Class Names
+================
+
+When core classes are referenced throughout the documentation, usually their
+fully-qualified names are given e.g. :class:`wlauto.core.workload.Workload`.
+This is done so that Sphinx_ can resolve them and provide a link. While
+implementing extensions, however, you should *not* be importing anything
+directly from under :mod:`wlauto.core`. Instead, classes you are meant to
+instantiate or subclass have been aliased in the root :mod:`wlauto` package,
+and should be imported from there, e.g. ::
+
+ from wlauto import Workload
+
+All examples given in the documentation follow this convention. Please note that
+this only applies to the :mod:`wlauto.core` subpackage; all other classes
+should be imported from their corresponding subpackages.
+
+.. _Sphinx: http://sphinx-doc.org/
+
+
diff --git a/doc/source/daq_device_setup.rst b/doc/source/daq_device_setup.rst
new file mode 100644
index 00000000..8853fc2f
--- /dev/null
+++ b/doc/source/daq_device_setup.rst
@@ -0,0 +1,246 @@
+.. _daq_setup:
+
+DAQ Server Guide
+================
+
+NI-DAQ, or just "DAQ", is the Data Acquisition device developed by National
+Instruments:
+
+ http://www.ni.com/data-acquisition/
+
+WA uses the DAQ to collect power measurements during workload execution. A
+client/server solution for this is distributed as part of WA, though it is
+distinct from WA and may be used separately (by invoking the client APIs from a
+Python script, or used directly from the command line).
+
+This solution is dependent on the NI-DAQmx driver for the DAQ device. At the
+time of writing, only Windows versions of the driver are supported (there is an
+old Linux version that works on some versions of RHEL and Centos, but it is
+unsupported and won't work with recent Linux kernels). Because of this, the
+server part of the solution will need to be run on a Windows machine (though it
+should also work on Linux, if the driver becomes available).
+
+
+.. _daq_wiring:
+
+DAQ Device Wiring
+-----------------
+
+The server expects the device to be wired in a specific way in order to be able
+to collect power measurements. Two consecutive Analogue Input (AI) channels on
+the DAQ are used to form a logical "port" (starting with AI/0 and AI/1 for port
+0). Of these, the lower/even channel (e.g. AI/0) is used to measure the voltage
+on the rail we're interested in; the higher/odd channel (e.g. AI/1) is used to
+measure the voltage drop across a known very small resistor on the same rail,
+which is then used to calculate current. The logical wiring diagram looks like
+this::
+
+ Port N
+ ======
+ |
+ | AI/(N*2)+ <--- Vr -------------------------|
+ | |
+ | AI/(N*2)- <--- GND -------------------// |
+ | |
+ | AI/(N*2+1)+ <--- V ------------|-------V |
+ | r | |
+ | AI/(N*2+1)- <--- Vr --/\/\/\----| |
+ | | |
+ | | |
+ | |------------------------------|
+ ======
+
+ Where:
+ V: Voltage going into the resistor
+ Vr: Voltage between resistor and the SOC
+ GND: Ground
+ r: The resistor across the rail with a known
+ small value.
+
+
+The physical wiring will depend on the specific DAQ device, as channel layout
+varies between models.
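+
+As a sanity check on the wiring, the per-port power arithmetic implied by the
+diagram can be sketched in a few lines of Python (purely illustrative, not part
+of the ``daqpower`` code)::
+
+    def port_power(v_rail, v_drop, resistance_ohms):
+        """v_rail is the even-channel reading (Vr); v_drop is the odd-channel
+        reading (V - Vr) across the sense resistor r."""
+        current = v_drop / resistance_ohms   # I = (V - Vr) / r
+        return v_rail * current              # power delivered to the rail, in Watts
+
+    # e.g. a 0.5 mV drop across a 5 mOhm resistor on a 0.83 V rail:
+    print(port_power(0.83, 0.0005, 0.005))   # 0.1 A -> 0.083 W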
+
+.. note:: The current solution supports a variable number of ports; however, it
+ assumes that the ports are sequential and start at zero. E.g. if you
+ want to measure power on three rails, you will need to wire ports 0-2
+ (AI/0 to AI/5 channels on the DAQ) to do it. It is not currently
+ possible to use any other configuration (e.g. ports 1, 2 and 5).
+
+
+Setting up NI-DAQmx driver on a Windows Machine
+-----------------------------------------------
+
+ - The NI-DAQmx driver is pretty big in size, 1.5 GB. The driver name is
+   'NI-DAQmx' and its version is '9.7.0f0'. You can obtain it from the National
+   Instruments website by downloading NI Measurement & Automation Explorer (NI
+ MAX) from: http://joule.ni.com/nidu/cds/view/p/id/3811/lang/en
+
+ .. note:: During the installation process, you might be prompted to install
+ .NET framework 4.
+
+ - The installation process is quite long, 7-15 minutes.
+ - Once installed, open NI MAX, which should be on your desktop; if not, type its
+   name into Start -> Search.
+ - Connect the NI-DAQ device to your machine. You should see it appear under
+ 'Devices and Interfaces'. If not, press 'F5' to refresh the list.
+ - Complete the device wiring as described in the :ref:`daq_wiring` section.
+ - Quit NI MAX.
+
+
+Setting up DAQ server
+---------------------
+
+The DAQ power measurement solution is implemented in the ``daqpower`` Python library,
+the package for which can be found in WA's install location under
+``wlauto/external/daq_server/daqpower-1.0.0.tar.gz`` (the version number in your
+installation may be different).
+
+ - Install NI-DAQmx driver, as described in the previous section.
+ - Install Python 2.7.
+ - Download and install ``pip``, ``numpy`` and ``twisted`` Python packages.
+   These packages have C extensions, and so you will need a native compiler set
+ up if you want to install them from PyPI. As an easier alternative, you can
+ find pre-built Windows installers for these packages here_ (the versions are
+ likely to be older than what's on PyPI though).
+ - Install the daqpower package using pip::
+
+ pip install C:\Python27\Lib\site-packages\wlauto\external\daq_server\daqpower-1.0.0.tar.gz
+
+ This should automatically download and install ``PyDAQmx`` package as well
+ (the Python bindings for the NI-DAQmx driver).
+
+.. _here: http://www.lfd.uci.edu/~gohlke/pythonlibs/
+
+
+Running DAQ server
+------------------
+
+Once you have installed the ``daqpower`` package and the required dependencies as
+described above, you can start the server by executing ``run-daq-server`` from the
+command line. The server will start listening on the default port, 45677.
+
+.. note:: There is a chance that pip will not add ``run-daq-server`` into your
+          path. In that case, you can run the DAQ server like this:
+ ``python C:\path to python\Scripts\run-daq-server``
+
+You can optionally specify flags to control the behaviour of the server::
+
+ usage: run-daq-server [-h] [-d DIR] [-p PORT] [--debug] [--verbose]
+
+ optional arguments:
+ -h, --help show this help message and exit
+ -d DIR, --directory DIR
+ Working directory
+ -p PORT, --port PORT port the server will listen on.
+ --debug Run in debug mode (no DAQ connected).
+ --verbose Produce verobose output.
+
+.. note:: The server will use a working directory (by default, the directory
+ the run-daq-server command was executed in, or the location specified
+ with -d flag) to store power traces before they are collected by the
+ client. This directory must be read/write-able by the user running
+ the server.
+
+
+Collecting Power with WA
+------------------------
+
+.. note:: You do *not* need to install the ``daqpower`` package on the machine
+ running WA, as it is already included in the WA install structure.
+ However, you do need to make sure that ``twisted`` package is
+ installed.
+
+You can enable the ``daq`` instrument in your agenda/config.py in order to get
+WA to collect power measurements. At minimum, you will also need to specify the
+resistor values for each port in your configuration, e.g.::
+
+ resistor_values = [0.005, 0.005] # in Ohms
+
+This also specifies the number of logical ports (measurement sites) you want to
+use, and, implicitly, the port numbers (ports 0 to N-1 will be used).
+
+.. note:: "ports" here refers to the logical ports wired on the DAQ (see
+          :ref:`daq_wiring`), not to be confused with the TCP port the server is
+          listening on.
+
+Unless you're running the DAQ server and WA on the same machine (unlikely
+considering that WA is officially supported only on Linux and recent NI-DAQmx
+drivers are only available on Windows), you will also need to specify the IP
+address of the server::
+
+    daq_server = '127.0.0.1'
+
+There are a number of other settings that can optionally be specified in the
+configuration (e.g. the labels to be used for DAQ ports). Please refer to the
+:class:`wlauto.instrumentation.daq.Daq` documentation for details.
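+
+As an illustration, a minimal ``config.py`` fragment combining the settings
+discussed above might look like the sketch below. Treat it as a sketch only --
+the authoritative parameter names are those in the instrument documentation
+referenced above.
+
+.. code-block:: python
+
+    # Enable the daq instrument alongside any others you are using.
+    instrumentation = ['daq']
+
+    # One resistor value per measured port, in Ohms.
+    resistor_values = [0.005, 0.005]
+
+    # IP address of the Windows machine running run-daq-server.
+    daq_server = '192.168.0.10'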
+
+
+Collecting Power from the Command Line
+--------------------------------------
+
+The ``daqpower`` package also comes with a client that may be used from the
+command line. Unlike when collecting power with WA, you *will* need to install
+the ``daqpower`` package. Once installed, you will be able to interact with a
+running DAQ server by invoking ``send-daq-command``. The invocation syntax is ::
+
+ send-daq-command --host HOST [--port PORT] COMMAND [OPTIONS]
+
+Options are command-specific. COMMAND may be one of the following (and they
+should generally be invoked in that order):
+
+ :configure: Set up a new session, specifying the configuration values to
+ be used. If there is already a configured session, it will
+                be terminated. OPTIONS for this command are the DAQ
+ configuration parameters listed in the DAQ instrument
+ documentation with all ``_`` replaced by ``-`` and prefixed
+ with ``--``, e.g. ``--resistor-values``.
+    :start: Start collecting power measurements.
+    :stop: Stop collecting power measurements.
+    :get_data: Pull files containing power measurements from the server.
+               There is one option for this command:
+               ``--output-directory`` which specifies where the files will
+               be pulled to; if this is not specified, they will be placed
+               in the current directory.
+ :close: Close the currently configured server session. This will get rid
+ of the data files and configuration on the server, so it would
+ no longer be possible to use "start" or "get_data" commands
+ before a new session is configured.
+
+A typical command line session would go like this:
+
+.. code-block:: bash
+
+ send-daq-command --host 127.0.0.1 configure --resistor-values 0.005 0.005
+ # set up and kick off the use case you want to measure
+ send-daq-command --host 127.0.0.1 start
+ # wait for the use case to complete
+ send-daq-command --host 127.0.0.1 stop
+ send-daq-command --host 127.0.0.1 get_data
+ # files called PORT_0.csv and PORT_1.csv will appear in the current directory
+ # containing measurements collected during use case execution
+ send-daq-command --host 127.0.0.1 close
+ # the session is terminated and the csv files on the server have been
+ # deleted. A new session may now be configured.
+
+In addition to these "standard workflow" commands, the following commands are
+also available:
+
+ :list_devices: Returns a list of DAQ devices detected by the NI-DAQmx
+                   driver. In case multiple devices are connected to the
+ server host, you can specify the device you want to use
+ with ``--device-id`` option when configuring a session.
+    :list_ports: Returns a list of ports that have been configured for the
+ current session, e.g. ``['PORT_0', 'PORT_1']``.
+    :list_port_files: Returns a list of data files that have been generated
+ (unless something went wrong, there should be one for
+ each port).
+
+
+Collecting Power from another Python Script
+-------------------------------------------
+
+You can invoke the above commands from a Python script using
+:py:func:`daqpower.client.execute_command` function, passing in
+:class:`daqpower.config.ServerConfiguration` and, in case of the configure command,
+:class:`daqpower.config.DeviceConfiguration`. Please see the implementation of
+the ``daq`` WA instrument for examples of how these APIs can be used.
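+
+The following is a rough sketch of what such a script might look like. The
+constructor arguments and the way command options are passed to
+``execute_command`` are assumptions, so please check the ``daqpower`` sources or
+the ``daq`` instrument implementation for the actual API.
+
+.. code-block:: python
+
+    # Sketch only: argument names below are assumptions, not the verified API.
+    from daqpower.client import execute_command
+    from daqpower.config import ServerConfiguration, DeviceConfiguration
+
+    server_config = ServerConfiguration(host='192.168.0.10', port=45677)
+    device_config = DeviceConfiguration(resistor_values=[0.005, 0.005])
+
+    execute_command(server_config, 'configure', device_config=device_config)
+    execute_command(server_config, 'start')
+    # ... kick off and wait for the use case being measured ...
+    execute_command(server_config, 'stop')
+    execute_command(server_config, 'get_data', output_directory='daq_output')
+    execute_command(server_config, 'close')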
diff --git a/doc/source/device_setup.rst b/doc/source/device_setup.rst
new file mode 100644
index 00000000..3f6e16ad
--- /dev/null
+++ b/doc/source/device_setup.rst
@@ -0,0 +1,407 @@
+Setting Up A Device
+===================
+
+WA should work with most Android devices out of the box, as long as the device
+is discoverable by ``adb`` (i.e. gets listed when you run ``adb devices``). For
+USB-attached devices, that should be the case; for network devices, ``adb connect``
+would need to be invoked with the IP address of the device. If there is only one
+device connected to the host running WA, then no further configuration should be
+necessary (though you may want to :ref:`tweak some Android settings <configuring-android>`\ ).
+
+If you have multiple devices connected, have a non-standard Android build (e.g.
+on a development board), or want to use some of the more advanced WA functionality,
+further configuration will be required.
+
+Android
++++++++
+
+General Device Setup
+--------------------
+
+You can specify the device interface by setting the ``device`` setting in
+``~/.workload_automation/config.py``. Available interfaces can be viewed by
+running the ``wa list devices`` command. If you don't see your specific device
+listed (which is likely unless you're using one of the ARM-supplied platforms), then
+you should use ``generic_android`` interface (this is set in the config by
+default).
+
+.. code-block:: python
+
+ device = 'generic_android'
+
+The device interface may be configured through the ``device_config`` setting, whose
+value is a ``dict`` mapping setting names to their values. You can find the full
+list of available parameters by looking up your device interface in the
+:ref:`devices` section of the documentation. Some of the most common parameters
+you might want to change are outlined below.
+
+.. confval:: adb_name
+
+ If you have multiple Android devices connected to the host machine, you will
+ need to set this to indicate to WA which device you want it to use.
+
+.. confval:: working_directory
+
+ WA needs a "working" directory on the device which it will use for collecting
+ traces, caching assets it pushes to the device, etc. By default, it will
+ create one under ``/sdcard`` which should be mapped and writable on standard
+ Android builds. If this is not the case for your device, you will need to
+ specify an alternative working directory (e.g. under ``/data/local``).
+
+.. confval:: scheduler
+
+    This specifies the scheduling mechanism (from the perspective of core layout)
+    utilized by the device. For recent big.LITTLE devices, this should generally
+    be "hmp" (ARM Heterogeneous Multi-Processing); some legacy development
+    platforms might have Linaro IKS kernels, in which case it should be "iks".
+ For homogeneous (single-cluster) devices, it should be "smp". Please see
+ ``scheduler`` parameter in the ``generic_android`` device documentation for
+ more details.
+
+.. confval:: core_names
+
+ This and ``core_clusters`` need to be set if you want to utilize some more
+ advanced WA functionality (like setting of core-related runtime parameters
+ such as governors, frequencies, etc). ``core_names`` should be a list of
+ core names matching the order in which they are exposed in sysfs. For
+    example, ARM TC2 SoC is a 2x3 big.LITTLE system; its ``core_names`` would be
+ ``['a7', 'a7', 'a7', 'a15', 'a15']``, indicating that cpu0-cpu2 in cpufreq
+ sysfs structure are A7's and cpu3 and cpu4 are A15's.
+
+.. confval:: core_clusters
+
+ If ``core_names`` is defined, this must also be defined. This is a list of
+ integer values indicating the cluster the corresponding core in
+    ``core_names`` belongs to. For example, for TC2, this would be
+ ``[0, 0, 0, 1, 1]``, indicating that A7's are on cluster 0 and A15's are on
+ cluster 1.
+
+A typical ``device_config`` inside ``config.py`` may look something like this:
+
+
+.. code-block:: python
+
+    device_config = dict(
+        adb_name='0123456789ABCDEF',
+        working_directory='/sdcard/wa-working',
+        core_names=['a7', 'a7', 'a7', 'a15', 'a15'],
+        core_clusters=[0, 0, 0, 1, 1],
+        # ...
+    )
+
+.. _configuring-android:
+
+Configuring Android
+-------------------
+
+There are a few additional things to check once you have a device booted into
+Android (especially if this is an initial boot of a fresh OS deployment).
+Make sure that:
+
+ - You have gone through FTU (first time usage) on the home screen and
+ in the apps menu.
+ - You have disabled the screen lock.
+ - You have set sleep timeout to the highest possible value (30 mins on
+ most devices).
+ - You have disabled brightness auto-adjust and have set the brightness
+ to a fixed level.
+ - You have set the locale language to "English" (this is important for
+ some workloads in which UI automation looks for specific text in UI
+ elements).
+
+TC2 Setup
+---------
+
+This section outlines how to set up the ARM TC2 development platform to work with WA.
+
+Pre-requisites
+~~~~~~~~~~~~~~
+
+You can obtain the full set of images for TC2 from Linaro:
+
+https://releases.linaro.org/latest/android/vexpress-lsk.
+
+For the easiest setup, follow the instructions on the "Firmware" and "Binary
+Image Installation" tabs on that page.
+
+.. note:: The default ``reboot_policy`` in ``config.py`` is to not reboot. With
+ this WA will assume that the device is already booted into Android
+          prior to WA being invoked. If you want WA to do the initial boot of
+          the TC2, you will have to change the reboot policy to at least
+ ``initial``.
+
+
+Setting Up Images
+~~~~~~~~~~~~~~~~~
+
+.. note:: Make sure that both DIP switches near the black reset button on TC2
+          are up (this is counter to the Linaro guide, which instructs you to lower
+ one of the switches).
+
+.. note:: The TC2 must have an Ethernet connection.
+
+
+If you have followed the setup instructions on the Linaro page, you should have
+a USB stick or an SD card with the file system, and internal microSD on the
+board (VEMSD) with the firmware images. The default Linaro configuration is to
+boot from the image on the boot partition in the file system you have just
+created. This is not supported by WA, which expects the image to be in NOR flash
+on the board. This requires you to copy the images from the boot partition onto
+the internal microSD card.
+
+Assuming the boot partition of the Linaro file system is mounted on
+``/media/boot`` and the internal microSD is mounted on ``/media/VEMSD``, copy
+the following images::
+
+ cp /media/boot/zImage /media/VEMSD/SOFTWARE/kern_mp.bin
+ cp /media/boot/initrd /media/VEMSD/SOFTWARE/init_mp.bin
+ cp /media/boot/v2p-ca15-tc2.dtb /media/VEMSD/SOFTWARE/mp_a7bc.dtb
+
+Optionally
+##########
+
+The default device tree configuration for the TC2 is to boot on the A7 cluster. It
+is also possible to configure the device tree to boot on the A15 cluster, or to
+boot with one of the clusters disabled (turning TC2 into an A7-only or A15-only
+device). Please refer to the "Firmware" tab on the Linaro page linked above for
+instructions on how to compile the appropriate device tree configurations.
+
+WA allows selecting between these configurations using the ``os_mode`` boot
+parameter of the TC2 device interface. In order for this to work correctly,
+device tree files for the A15-bootcluster, A7-only and A15-only configurations
+should be copied into ``/media/VEMSD/SOFTWARE/`` as ``mp_a15bc.dtb``,
+``mp_a7.dtb`` and ``mp_a15.dtb`` respectively.
+
+This is entirely optional. If you're not planning on switching boot cluster
+configuration, those files do not need to be present in VEMSD.
+
+config.txt
+##########
+
+Also, make sure that ``USB_REMOTE`` setting in ``/media/VEMSD/config.txt`` is set
+to ``TRUE`` (this will allow rebooting the device by writing reboot.txt to
+VEMSD). ::
+
+ USB_REMOTE: TRUE ;Selects remote command via USB
+
+
+TC2-specific device_config settings
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are a few settings that may need to be set in ``device_config`` inside
+your ``config.py`` which are specific to TC2:
+
+.. note:: TC2 *does not* accept most "standard" android ``device_config``
+ settings.
+
+adb_name
+ If you're running WA with reboots disabled (which is the default reboot
+ policy), you will need to manually run ``adb connect`` with TC2's IP
+ address and set this.
+
+root_mount
+ WA expects TC2's internal microSD to be mounted on the host under
+ ``/media/VEMSD``. If this location is different, it needs to be specified
+ using this setting.
+
+boot_firmware
+ WA defaults to try booting using UEFI, which will require some additional
+ firmware from ARM that may not be provided with Linaro releases (see the
+ UEFI and PSCI section below). If you do not have those images, you will
+ need to set ``boot_firmware`` to ``bootmon``.
+
+fs_medium
+ TC2's file system can reside either on an SD card or on a USB stick. Boot
+ configuration is different depending on this. By default, WA expects it
+    to be on ``usb``; if you are using an SD card, you should set this to
+ ``sd``.
+
+bm_image
+    The bootmon image that comes as part of the TC2 firmware gets updated
+    periodically. At the time of the release, ``bm_v519r.axf`` was used by
+    ARM. If you are using a more recent image, you will need to set this to
+    the image name (just the name of the actual file, *not* the
+ path). Note: this setting only applies if using ``bootmon`` boot
+ firmware.
+
+serial_device
+ WA will assume TC2 is connected on ``/dev/ttyS0`` by default. If the
+ serial port is different, you will need to set this.
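+
+Putting these together, a TC2 ``device_config`` might look something like the
+following sketch (the values are illustrative; the adb name in particular is a
+made-up example of an ``adb connect`` address):
+
+.. code-block:: python
+
+    device_config = dict(
+        adb_name='192.168.0.5:5555',   # set after 'adb connect <TC2 IP address>'
+        root_mount='/media/VEMSD',     # where the internal microSD is mounted
+        boot_firmware='bootmon',       # set this if you do not have the UEFI images
+        fs_medium='usb',               # or 'sd' if the file system is on an SD card
+        bm_image='bm_v519r.axf',       # only relevant when using bootmon
+        serial_device='/dev/ttyS0',
+    )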
+
+
+UEFI and PSCI
+~~~~~~~~~~~~~
+
+UEFI is a boot firmware alternative to bootmon. Currently, UEFI is coupled with
+PSCI (Power State Coordination Interface). That means that in order to use
+PSCI, UEFI has to be the boot firmware. Currently, the reverse dependency holds
+as well (for TC2); therefore using UEFI requires enabling PSCI.
+
+If you intend to use UEFI/PSCI mode instead of bootmon, you will need two
+additional files: ``tc2_sec.bin`` and ``tc2_uefi.bin``. After obtaining those
+files, place them inside the ``/media/VEMSD/SOFTWARE/`` directory as follows::
+
+    cp tc2_sec.bin /media/VEMSD/SOFTWARE/
+    cp tc2_uefi.bin /media/VEMSD/SOFTWARE/
+
+
+Juno Setup
+----------
+
+.. note:: At the time of writing, the Android software stack on Juno was still
+          very immature. Some workloads may not run, and there may be stability
+ issues with the device.
+
+
+The full software stack can be obtained from Linaro:
+
+https://releases.linaro.org/14.08/members/arm/android/images/armv8-android-juno-lsk
+
+Please follow the instructions on the "Binary Image Installation" tab on that
+page. More up-to-date firmware and kernel may also be obtained by registered
+members from ARM Connected Community: http://www.arm.com/community/ (though this
+is not guaranteed to work with the Linaro file system).
+
+UEFI
+~~~~
+
+Juno uses UEFI_ to boot the kernel image. UEFI supports multiple boot
+configurations, and presents a menu on boot from which to select one (in the
+default configuration, it will automatically boot the first entry in the menu
+if not interrupted before a timeout). WA will look for a specific entry in the
+UEFI menu
+(``'WA'`` by default, but that may be changed by setting ``uefi_entry`` in the
+``device_config``). When following the UEFI instructions on the above Linaro
+page, please make sure to name the entry appropriately (or to correctly set the
+``uefi_entry``).
+
+.. _UEFI: http://en.wikipedia.org/wiki/UEFI
+
+There are two supported ways for Juno to discover kernel images through UEFI. It
+can either load them from NOR flash on the board, or from the boot partition on the
+file system. The setup described on the Linaro page uses the boot partition
+method.
+
+If WA does not find the UEFI entry it expects, it will create one. However, it
+will assume that the kernel image resides in NOR flash, which means it will not
+work with the Linaro file system. So if you're replicating the Linaro setup exactly,
+you will need to create the entry manually, as outlined on the above-linked page.
+
+Rebooting
+~~~~~~~~~
+
+At the time of writing, normal Android reboot did not work properly on Juno
+Android, causing the device to crash into an irrecoverable state. Therefore, WA
+will perform a hard reset to reboot the device. It will attempt to do this by
+toggling the DTR line on the serial connection to the device. In order for this
+to work, you need to make sure that the SW1 configuration switch on the back panel of
+the board (the right-most DIP switch) is toggled *down*.
+
+
+Linux
++++++
+
+General Device Setup
+--------------------
+
+You can specify the device interface by setting the ``device`` setting in
+``~/.workload_automation/config.py``. Available interfaces can be viewed by
+running the ``wa list devices`` command. If you don't see your specific device
+listed (which is likely unless you're using one of the ARM-supplied platforms), then
+you should use ``generic_linux`` interface (this is set in the config by
+default).
+
+.. code-block:: python
+
+ device = 'generic_linux'
+
+The device interface may be configured through the ``device_config`` setting, whose
+value is a ``dict`` mapping setting names to their values. You can find the full
+list of available parameters by looking up your device interface in the
+:ref:`devices` section of the documentation. Some of the most common parameters
+you might want to change are outlined below.
+
+Currently, the only supported method for talking to a Linux device is over
+SSH. Device configuration must specify the parameters needed to establish the
+connection.
+
+.. confval:: host
+
+    This should be either the DNS name or the IP address of the device.
+
+.. confval:: username
+
+ The login name of the user on the device that WA will use. This user should
+ have a home directory (unless an alternative working directory is specified
+ using ``working_directory`` config -- see below), and, for full
+    functionality, the user should have sudo rights (WA will be able to use
+    sudo-less accounts but some instruments or workloads may not work).
+
+.. confval:: password
+
+    Password for the account on the device. Either this or a ``keyfile`` (see
+ below) must be specified.
+
+.. confval:: keyfile
+
+ If key-based authentication is used, this may be used to specify the SSH identity
+ file instead of the password.
+
+.. confval:: property_files
+
+ This is a list of paths that will be pulled for each WA run into the __meta
+ subdirectory in the results. The intention is to collect meta-data about the
+    device that may aid in reproducing the results later. The paths specified do
+ not have to exist on the device (they will be ignored if they do not). The
+ default list is ``['/proc/version', '/etc/debian_version', '/etc/lsb-release', '/etc/arch-release']``
+
+
+In addition, ``working_directory``, ``scheduler``, ``core_names``, and
+``core_clusters`` can also be specified and have the same meaning as for Android
+devices (see above).
+
+A typical ``device_config`` inside ``config.py`` may look something like this:
+
+
+.. code-block:: python
+
+    device_config = dict(
+        host='192.168.0.7',
+        username='guest',
+        password='guest',
+        core_names=['a7', 'a7', 'a7', 'a15', 'a15'],
+        core_clusters=[0, 0, 0, 1, 1],
+        # ...
+    )
+
+
+Related Settings
+++++++++++++++++
+
+Reboot Policy
+-------------
+
+This indicates when during WA execution the device will be rebooted. By default
+this is set to ``never``, indicating that WA will not reboot the device. Please
+see the ``reboot_policy`` documentation in :ref:`configuration-specification` for
+more details.
+
+Execution Order
+---------------
+
+``execution_order`` defines the order in which WA will execute workloads.
+``by_iteration`` (set by default) will execute the first iteration of each spec
+first, followed by the second iteration of each spec (that defines more than one
+iteration) and so forth. The alternative will loop through all iterations of
+the first spec, then move on to the second spec, etc. Again, please see
+:ref:`configuration-specification` for more details.
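+
+For reference, both of these are plain assignments in ``config.py``, using the
+values discussed above, e.g.:
+
+.. code-block:: python
+
+    # Reboot the device at the start of the run only ('never' is the default).
+    reboot_policy = 'initial'
+
+    # Run the first iteration of every spec before moving on to the second, etc.
+    execution_order = 'by_iteration'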
+
+
+Adding a new device interface
++++++++++++++++++++++++++++++
+
+If you are working with a particularly unusual device (e.g. an early-stage
+development board) or need to be able to handle some quirk of your Android build,
+the configuration available in the ``generic_android`` interface may not be enough for
+you. In that case, you may need to write a custom interface for your device. A
+device interface is an ``Extension`` (a plug-in) type in WA and is implemented
+similar to other extensions (such as workloads or instruments). Please refer to
+the :ref:`adding_a_device` section for information on how this may be done.
diff --git a/doc/source/execution_model.rst b/doc/source/execution_model.rst
new file mode 100644
index 00000000..3140583b
--- /dev/null
+++ b/doc/source/execution_model.rst
@@ -0,0 +1,115 @@
+++++++++++++++++++
+Framework Overview
+++++++++++++++++++
+
+Execution Model
+===============
+
+At the high level, the execution model looks as follows:
+
+.. image:: wa-execution.png
+ :scale: 50 %
+
+After some initial setup, the framework initializes the device, loads and initializes
+instrumentation, and begins executing jobs defined by the workload specs in the agenda. Each job
+executes in four basic stages:
+
+setup
+ Initial setup for the workload is performed. E.g. required assets are deployed to the
+ devices, required services or applications are launched, etc. Run time configuration of the
+ device for the workload is also performed at this time.
+
+run
+ This is when the workload actually runs. This is defined as the part of the workload that is
+ to be measured. Exactly what happens at this stage depends entirely on the workload.
+
+result processing
+    Results generated during the execution of the workload, if there are any, are collected
+    and parsed, and the extracted metrics are passed up to the core framework.
+
+teardown
+    Final clean up is performed, e.g. applications may be closed, files generated during execution
+ deleted, etc.
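+
+As an illustration, a minimal workload skeleton mirroring these four stages might
+look like the sketch below (``ExampleWorkload`` and its body are hypothetical;
+see the extension writing documentation for complete examples):
+
+.. code-block:: python
+
+    from wlauto import Workload
+
+    class ExampleWorkload(Workload):
+
+        name = 'example'
+
+        def setup(self, context):
+            # Deploy assets, launch applications, configure the device, etc.
+            pass
+
+        def run(self, context):
+            # The part of the workload that is actually measured.
+            pass
+
+        def update_result(self, context):
+            # Parse output produced by the run and add metrics to context.result.
+            pass
+
+        def teardown(self, context):
+            # Clean up: close applications, delete generated files, etc.
+            pass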
+
+Signals are dispatched (see signal_dispatch_ below) at each stage of workload execution,
+which installed instrumentation can hook into in order to collect measurements, alter workload
+execution, etc. Instrumentation implementation usually mirrors that of workloads, defining
+setup, teardown and result processing stages for a particular instrument. Instead of a ``run``,
+instruments usually implement a ``start`` and a ``stop`` which get triggered just before and just
+after a workload run. However, the signal dispatch mechanism gives a high degree of flexibility
+to instruments allowing them to hook into almost any stage of a WA run (apart from the very
+early initialization).
+
+Metrics and artifacts generated by workloads and instrumentation are accumulated by the framework
+and are then passed to active result processors. This happens after each individual workload
+execution and at the end of the run. A result processor may choose to act at either or both of
+these points.
+
+
+Control Flow
+============
+
+This section goes into more detail explaining the relationship between the major components of the
+framework and how control passes between them during a run. It will only go through the major
+transitions and interactions and will not attempt to describe every single thing that happens.
+
+.. note:: This is the control flow for the ``wa run`` command which is the main functionality
+ of WA. Other commands are much simpler and most of what is described below does not
+ apply to them.
+
+#. ``wlauto.core.entry_point`` parses the command from the arguments and executes the run command
+ (``wlauto.commands.run.RunCommand``).
+#. Run command initializes the output directory and creates a ``wlauto.core.agenda.Agenda`` based on
+ the command line arguments. Finally, it instantiates a ``wlauto.core.execution.Executor`` and
+ passes it the Agenda.
+#. The Executor uses the Agenda to create a ``wlauto.core.configuration.RunConfiguration`` that
+   fully defines the configuration for the run (it will be serialised into the ``__meta``
+   subdirectory under the output directory).
+#. The Executor proceeds to instantiate and install instrumentation, result processors and the
+   device interface, based on the RunConfiguration. The Executor also initialises a
+ ``wlauto.core.execution.ExecutionContext`` which is used to track the current state of the run
+ execution and also serves as a means of communication between the core framework and the
+ extensions.
+#. Finally, the Executor instantiates a ``wlauto.core.execution.Runner``, initializes its job
+   queue with workload specs from the RunConfiguration, and kicks it off.
+#. The Runner performs the run time initialization of the device and goes through the workload specs
+ (in the order defined by ``execution_order`` setting), running each spec according to the
+ execution model described in the previous section. The Runner sends signals (see below) at
+ appropriate points during execution.
+#. At the end of the run, the control is briefly passed back to the Executor, which outputs a
+ summary for the run.
+
+
+.. _signal_dispatch:
+
+Signal Dispatch
+===============
+
+WA uses the `louie <https://pypi.python.org/pypi/Louie/1.1>`_ (formerly, pydispatcher) library
+for signal dispatch. Callbacks can be registered for signals emitted during the run. WA uses a
+version of louie that has been modified to introduce priority to registered callbacks (so that
+callbacks that are known to be slow can be registered with a lower priority so that they do not
+interfere with other callbacks).
+
+This mechanism is abstracted for instrumentation. Methods of an :class:`wlauto.core.Instrument`
+subclass automatically get hooked to appropriate signals based on their names when the instrument
+is "installed" for the run. Priority can be specified by adding ``very_fast_``, ``fast_`` ,
+``slow_`` or ``very_slow_`` prefixes to method names.
+
+The full list of method names and the signals they map to may be viewed
+:ref:`here <instrumentation_method_map>`.
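+
+As a brief, hypothetical illustration of the naming convention (the instrument
+below is a sketch, not a real one; the signal names in the comments are taken
+from the mapping linked above):
+
+.. code-block:: python
+
+    from wlauto import Instrument
+
+    class ExampleInstrument(Instrument):
+
+        name = 'example'
+
+        def setup(self, context):
+            # hooked to the successful-workload-setup signal
+            pass
+
+        def fast_start(self, context):
+            # 'fast_' prefix: registered for before-workload-execution with a
+            # raised priority
+            pass
+
+        def slow_stop(self, context):
+            # 'slow_' prefix: registered for after-workload-execution with a
+            # lowered priority, so this slow callback does not interfere with
+            # other callbacks
+            pass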
+
+The signal dispatching mechanism may also be used directly, for example to dynamically register
+callbacks at runtime or allow extensions other than ``Instruments`` to access stages of the run
+they are normally not aware of.
+
+The sending of signals is the responsibility of the Runner. Signals get sent during transitions
+between execution stages and when special events, such as errors or device reboots, occur.
+
+See Also
+--------
+
+.. toctree::
+ :maxdepth: 1
+
+ instrumentation_method_map
diff --git a/doc/source/index.rst b/doc/source/index.rst
new file mode 100644
index 00000000..46095f5d
--- /dev/null
+++ b/doc/source/index.rst
@@ -0,0 +1,138 @@
+.. Workload Automation 2 documentation master file, created by
+ sphinx-quickstart on Mon Jul 15 09:00:46 2013.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+Welcome to Documentation for Workload Automation
+================================================
+
+Workload Automation (WA) is a framework for running workloads on real hardware devices. WA
+supports a number of output formats as well as additional instrumentation (such as Streamline
+traces). A number of workloads are included with the framework.
+
+
+.. contents:: Contents
+
+
+What's New
+~~~~~~~~~~
+
+.. toctree::
+ :maxdepth: 1
+
+ changes
+
+
+Usage
+~~~~~
+
+This section lists general usage documentation. If you're new to WA2, it is
+recommended you start with the :doc:`quickstart` page. This section also contains
+installation and configuration guides.
+
+
+.. toctree::
+ :maxdepth: 2
+
+ quickstart
+ installation
+ device_setup
+ invocation
+ agenda
+ configuration
+
+
+Extensions
+~~~~~~~~~~
+
+This section lists extensions that currently come with WA2. Each package below
+represents a particular type of extension (e.g. a workload); each sub-package of
+that package is a particular instance of that extension (e.g. the Andebench
+workload). Clicking on a link will show what the individual extension does,
+what configuration parameters it takes, etc.
+
+For how to implement your own extensions, please refer to the guides in the
+:ref:`in-depth` section.
+
+.. raw:: html
+
+ <style>
+ td {
+ vertical-align: text-top;
+ }
+ </style>
+   <table><tr><td>
+
+.. toctree::
+ :maxdepth: 2
+
+ extensions/workloads
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+ :maxdepth: 2
+
+ extensions/instruments
+
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+ :maxdepth: 2
+
+ extensions/result_processors
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+ :maxdepth: 2
+
+ extensions/devices
+
+.. raw:: html
+
+ </td></tr></table>
+
+.. _in-depth:
+
+In-depth
+~~~~~~~~
+
+This section contains more advanced topics, such as how to write your own extensions
+and detailed descriptions of how WA functions under the hood.
+
+.. toctree::
+ :maxdepth: 2
+
+ conventions
+ writing_extensions
+ execution_model
+ resources
+ additional_topics
+ daq_device_setup
+ revent
+ contributing
+
+API Reference
+~~~~~~~~~~~~~
+
+.. toctree::
+ :maxdepth: 5
+
+ api/modules
+
+
+Indices and tables
+~~~~~~~~~~~~~~~~~~
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+
diff --git a/doc/source/installation.rst b/doc/source/installation.rst
new file mode 100644
index 00000000..0485ddcd
--- /dev/null
+++ b/doc/source/installation.rst
@@ -0,0 +1,144 @@
+============
+Installation
+============
+
+.. module:: wlauto
+
+This page describes how to install Workload Automation 2.
+
+
+Prerequisites
+=============
+
+Operating System
+----------------
+
+WA runs on a native Linux install. It was tested with Ubuntu 12.04,
+but any recent Linux distribution should work. It should run on either
+32bit or 64bit OS, provided the correct version of Android (see below)
+was installed. Officially, **other environments are not supported**. WA
+has been known to run on Linux virtual machines and in Cygwin environments,
+though additional configuration may be required in both cases (known issues
+include making sure USB/serial connections are passed to the VM, and the wrong
+python/pip binaries being picked up in Cygwin). WA *should* work on other
+Unix-based systems such as BSD or Mac OS X, but it has not been tested
+in those environments. WA *does not* run on Windows (though it should be
+possible to get limited functionality with minimal porting effort).
+
+
+Android SDK
+-----------
+
+You need to have the Android SDK with at least one platform installed.
+To install it, download the ADT Bundle from here_. Extract it
+and add ``<path_to_android_sdk>/sdk/platform-tools`` and ``<path_to_android_sdk>/sdk/tools``
+to your ``PATH``. To test that you've installed it properly, run ``adb
+version``; the output should be similar to this::
+
+ $$ adb version
+ Android Debug Bridge version 1.0.31
+ $$
+
+.. _here: https://developer.android.com/sdk/index.html
+
+Once that is working, run ::
+
+ android update sdk
+
+This will open up a dialog box listing available android platforms and
+corresponding API levels, e.g. ``Android 4.3 (API 18)``. For WA, you will need
+at least API level 18 (i.e. Android 4.3), though installing the latest is
+usually the best bet.
+
+Optionally (but recommended), you should also set ``ANDROID_HOME`` to point to
+the install location of the SDK (i.e. ``<path_to_android_sdk>/sdk``).
+
+
+Python
+------
+
+Workload Automation 2 requires Python 2.7 (Python 3 is not supported, at the moment).
+
+
+pip
+---
+
+pip is the recommended package manager for Python. It is not part of standard
+Python distribution and would need to be installed separately. On Ubuntu and
+similar distributions, this may be done with APT::
+
+ sudo apt-get install python-pip
+
+
+Python Packages
+---------------
+
+.. note:: pip should automatically download and install missing dependencies,
+ so if you're using pip, you can skip this section.
+
+Workload Automation 2 depends on the following additional libraries:
+
+ * pexpect
+ * docutils
+ * pySerial
+ * pyYAML
+ * python-dateutil
+
+You can install these with pip::
+
+ sudo pip install pexpect
+ sudo pip install pyserial
+ sudo pip install pyyaml
+ sudo pip install docutils
+ sudo pip install python-dateutil
+
+Some of these may also be available in your distro's repositories, e.g. ::
+
+ sudo apt-get install python-serial
+
+Distro package versions tend to be older, so pip installation is recommended.
+However, pip will always download and try to build the source, so in some
+situations distro binaries may provide an easier fallback. Please also note that
+distro package names may differ from pip packages.
+
+
+Optional Python Packages
+------------------------
+
+.. note:: unlike the mandatory dependencies in the previous section,
+ pip will *not* install these automatically, so you will have
+ to explicitly install them if/when you need them.
+
+In addition to the mandatory packages listed in the previous sections, some WA
+functionality (e.g. certain extensions) may have additional dependencies. Since
+they are not necessary to be able to use most of WA, they are not made mandatory
+to simplify initial WA installation. If you try to use an extension that has
+additional, unmet dependencies, WA will tell you before starting the run, and
+you can install them then. They are listed here for those who would rather
+install them upfront (e.g. if you're planning to use WA in an environment that
+may not always have Internet access).
+
+ * nose
+ * pandas
+ * PyDAQmx
+ * pymongo
+ * jinja2
+
+
+.. note:: Some packages have C extensions and will require Python development
+ headers to install. You can get those by installing ``python-dev``
+ package in apt on Ubuntu (or the equivalent for your distribution).
+
+Installing
+==========
+
+Download the tarball and run pip::
+
+ sudo pip install wlauto-$version.tar.gz
+
+If the above succeeds, try ::
+
+ wa --version
+
+Hopefully, this should output something along the lines of "Workload Automation
+version $version".
diff --git a/doc/source/instrumentation_method_map.rst b/doc/source/instrumentation_method_map.rst
new file mode 100644
index 00000000..f68ecb59
--- /dev/null
+++ b/doc/source/instrumentation_method_map.rst
@@ -0,0 +1,73 @@
+Instrumentation Signal-Method Mapping
+=====================================
+
+.. _instrumentation_method_map:
+
+Instrument methods get automatically hooked up to signals based on their names. Mostly, the method
+name corresponds to the name of the signal, however there are a few convenience aliases defined
+(listed first) to make it easier to relate instrumentation code to the workload execution model.
+
+======================================== =========================================
+method name signal
+======================================== =========================================
+initialize run-init-signal
+setup successful-workload-setup-signal
+start before-workload-execution-signal
+stop after-workload-execution-signal
+process_workload_result successful-iteration-result-update-signal
+update_result after-iteration-result-update-signal
+teardown after-workload-teardown-signal
+finalize run-fin-signal
+on_run_start start-signal
+on_run_end end-signal
+on_workload_spec_start workload-spec-start-signal
+on_workload_spec_end workload-spec-end-signal
+on_iteration_start iteration-start-signal
+on_iteration_end iteration-end-signal
+before_initial_boot before-initial-boot-signal
+on_successful_initial_boot successful-initial-boot-signal
+after_initial_boot after-initial-boot-signal
+before_first_iteration_boot before-first-iteration-boot-signal
+on_successful_first_iteration_boot successful-first-iteration-boot-signal
+after_first_iteration_boot after-first-iteration-boot-signal
+before_boot before-boot-signal
+on_successful_boot successful-boot-signal
+after_boot after-boot-signal
+on_spec_init spec-init-signal
+on_run_init run-init-signal
+on_iteration_init iteration-init-signal
+before_workload_setup before-workload-setup-signal
+on_successful_workload_setup successful-workload-setup-signal
+after_workload_setup after-workload-setup-signal
+before_workload_execution before-workload-execution-signal
+on_successful_workload_execution successful-workload-execution-signal
+after_workload_execution after-workload-execution-signal
+before_workload_result_update before-iteration-result-update-signal
+on_successful_workload_result_update successful-iteration-result-update-signal
+after_workload_result_update after-iteration-result-update-signal
+before_workload_teardown before-workload-teardown-signal
+on_successful_workload_teardown successful-workload-teardown-signal
+after_workload_teardown after-workload-teardown-signal
+before_overall_results_processing before-overall-results-process-signal
+on_successful_overall_results_processing successful-overall-results-process-signal
+after_overall_results_processing after-overall-results-process-signal
+on_error error_logged
+on_warning warning_logged
+======================================== =========================================
+
+
+The names above may be prefixed with one of the pre-defined prefixes to set the priority of the
+Instrument method relative to other callbacks registered for the signal (within the same priority
+level, callbacks are invoked in the order they were registered). The table below shows the mapping
+of the prefix to the corresponding priority:
+
+=========== ===
+prefix priority
+=========== ===
+very_fast\_ 20
+fast\_ 10
+normal\_ 0
+slow\_ -10
+very_slow\_ -20
+=========== ===
+
diff --git a/doc/source/instrumentation_method_map.template b/doc/source/instrumentation_method_map.template
new file mode 100644
index 00000000..48003245
--- /dev/null
+++ b/doc/source/instrumentation_method_map.template
@@ -0,0 +1,17 @@
+Instrumentation Signal-Method Mapping
+=====================================
+
+.. _instrumentation_method_map:
+
+Instrument methods get automatically hooked up to signals based on their names. Mostly, the method
+name corresponds to the name of the signal, however there are a few convenience aliases defined
+(listed first) to make it easier to relate instrumentation code to the workload execution model.
+
+$signal_names
+
+The names above may be prefixed with one of the pre-defined prefixes to set the priority of the
+Instrument method relative to other callbacks registered for the signal (within the same priority
+level, callbacks are invoked in the order they were registered). The table below shows the mapping
+of the prefix to the corresponding priority:
+
+$priority_prefixes
diff --git a/doc/source/invocation.rst b/doc/source/invocation.rst
new file mode 100644
index 00000000..5c8ead92
--- /dev/null
+++ b/doc/source/invocation.rst
@@ -0,0 +1,135 @@
+.. _invocation:
+
+========
+Commands
+========
+
+Installing the wlauto package will add the ``wa`` command to your system,
+which you can run from anywhere. This has a number of sub-commands, which can
+be viewed by executing ::
+
+ wa -h
+
+Individual sub-commands are discussed in detail below.
+
+run
+---
+
+The most common sub-command you will use is ``run``. This will run the specified
+workload(s) and process the resulting output. This takes a single mandatory
+argument that specifies what you want WA to run. This could be either a
+workload name, or a path to an "agenda" file that allows you to specify multiple
+workloads as well as a lot of additional configuration (see the :ref:`agenda`
+section for details). Executing ::
+
+ wa run -h
+
+will display help for this subcommand, which will look something like this::
+
+ usage: run [-d DIR] [-f] AGENDA
+
+ Execute automated workloads on a remote device and process the resulting
+ output.
+
+ positional arguments:
+ AGENDA Agenda for this workload automation run. This defines
+ which workloads will be executed, how many times, with
+ which tunables, etc. See /usr/local/lib/python2.7
+ /dist-packages/wlauto/agenda-example.csv for an
+ example of how this file should be structured.
+
+ optional arguments:
+ -h, --help show this help message and exit
+ -c CONFIG, --config CONFIG
+ specify an additional config.py
+ -v, --verbose The scripts will produce verbose output.
+ --version Output the version of Workload Automation and exit.
+ --debug Enable debug mode. Note: this implies --verbose.
+ -d DIR, --output-directory DIR
+ Specify a directory where the output will be
+                            generated. If the directory already exists, the script
+                            will abort unless -f option (see below) is used, in
+                            which case the contents of the directory will be
+                            overwritten. If this option is not specified, then
+ wa_output will be used instead.
+ -f, --force Overwrite output directory if it exists. By default,
+                            the script will abort in this situation to prevent
+ accidental data loss.
+ -i ID, --id ID Specify a workload spec ID from an agenda to run. If
+ this is specified, only that particular spec will be
+ run, and other workloads in the agenda will be
+ ignored. This option may be used to specify multiple
+ IDs.
+
+
+Output Directory
+~~~~~~~~~~~~~~~~
+
+The exact contents of the output directory will depend on the configuration options
+used, the instrumentation and result processors enabled, etc. Typically, the output
+directory will contain a results file at the top level that lists all
+measurements that were collected (currently, csv and json formats are
+supported), along with a subdirectory for each iteration executed with output
+for that specific iteration.
+
+At the top level, there will also be a run.log file containing the complete log
+output for the execution. The contents of this file are equivalent to what you
+would get in the console when using the --verbose option.
+
+Finally, there will be a __meta subdirectory. This will contain a copy of the
+agenda file used to run the workloads along with any other device-specific
+configuration files used during execution.
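+
+Putting this together, a typical output directory might be laid out something
+like this (the exact files present will vary with the enabled instrumentation
+and result processors)::
+
+    wa_output/
+        results.csv
+        run.log
+        status.txt
+        __meta/
+            ...   (copy of the agenda and configuration used for the run)
+        dhrystone_1_1/
+            ...   (output for that specific iteration)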
+
+
+list
+----
+
+This lists all extensions of a particular type. For example ::
+
+ wa list workloads
+
+will list all workloads currently included in WA. The list will consist of
+extension names and short descriptions of the functionality they offer.
+
+
+show
+----
+
+This will show detailed information about an extension, including more in-depth
+description and any parameters/configuration that are available. For example
+executing ::
+
+ wa show andebench
+
+will produce something like ::
+
+
+ andebench
+
+ AndEBench is an industry standard Android benchmark provided by The Embedded Microprocessor Benchmark Consortium
+ (EEMBC).
+
+ parameters:
+
+ number_of_threads
+ Number of threads that will be spawned by AndEBench.
+ type: int
+
+ single_threaded
+ If ``true``, AndEBench will run with a single thread. Note: this must not be specified if ``number_of_threads``
+ has been specified.
+ type: bool
+
+ http://www.eembc.org/andebench/about.php
+
+ From the website:
+
+ - Initial focus on CPU and Dalvik interpreter performance
+ - Internal algorithms concentrate on integer operations
+ - Compares the difference between native and Java performance
+ - Implements flexible multicore performance analysis
+ - Results displayed in Iterations per second
+ - Detailed log file for comprehensive engineering analysis
+
+
+
diff --git a/doc/source/quickstart.rst b/doc/source/quickstart.rst
new file mode 100644
index 00000000..7b9ec9b7
--- /dev/null
+++ b/doc/source/quickstart.rst
@@ -0,0 +1,162 @@
+==========
+Quickstart
+==========
+
+This section will show you how to quickly start running workloads using
+Workload Automation 2.
+
+
+Install
+=======
+
+.. note:: This is a quick summary. For more detailed instructions, please see
+ the :doc:`installation` section.
+
+Make sure you have Python 2.7 and a recent Android SDK with API level 18 or above
+installed on your system. For the SDK, make sure that either ``ANDROID_HOME``
+environment variable is set, or that ``adb`` is in your ``PATH``.
+
+.. note:: A complete install of the Android SDK is required, as WA uses a
+ number of its utilities, not just adb.
+
+In addition to the base Python 2.7 install, you will also need to have ``pip``
+(Python's package manager) installed as well. This is usually a separate package.
+
+Once you have the pre-requisites and a tarball with the workload automation package,
+you can install it with pip::
+
+ sudo pip install wlauto-2.2.0dev.tar.gz
+
+This will install Workload Automation on your system, along with the Python
+packages it depends on.
+
+(Optional) Verify installation
+-------------------------------
+
+Once the tarball has been installed, try executing ::
+
+ wa -h
+
+You should see a help message outlining available subcommands.
+
+
+(Optional) APK files
+--------------------
+
+A large number of WA workloads are installed as APK files. These cannot be
+distributed with WA and so you will need to obtain those separately.
+
+For more details, please see the :doc:`installation` section.
+
+
+Configure Your Device
+=====================
+
+Out of the box, WA is configured to work with a generic Android device through
+``adb``. If you only have one device listed when you execute ``adb devices``,
+and your device has a standard Android configuration, then no extra configuration
+is required (if your device is connected via network, you will have to manually execute
+``adb connect <device ip>`` so that it appears in the device listing).
+
+If you have multiple devices connected, you will need to tell WA which one you
+want it to use. You can do that by setting ``adb_name`` in device configuration inside
+``~/.workload_automation/config.py``\ , e.g.
+
+.. code-block:: python
+
+ # ...
+
+ device_config = dict(
+ adb_name = 'abcdef0123456789',
+ # ...
+ )
+
+ # ...
+
+This should give you basic functionality. If your device has non-standard
+Android configuration (e.g. it's a development board) or you need some advanced
+functionality (e.g. big.LITTLE tuning parameters), additional configuration may
+be required. Please see the :doc:`device_setup` section for more details.
+
+
+Running Your First Workload
+===========================
+
+The simplest way to run a workload is to specify it as a parameter to WA ``run``
+sub-command::
+
+ wa run dhrystone
+
+You will see INFO output from WA as it executes each stage of the run. A
+completed run output should look something like this::
+
+ INFO Initializing
+ INFO Running workloads
+ INFO Connecting to device
+ INFO Initializing device
+ INFO Running workload 1 dhrystone (iteration 1)
+ INFO Setting up
+ INFO Executing
+ INFO Processing result
+ INFO Tearing down
+ INFO Processing overall results
+ INFO Status available in wa_output/status.txt
+ INFO Done.
+ INFO Ran a total of 1 iterations: 1 OK
+ INFO Results can be found in wa_output
+
+Once the run has completed, you will find a directory called ``wa_output``
+in the location where you have invoked ``wa run``. Within this directory,
+you will find a "results.csv" file which will contain results obtained for
+dhrystone, as well as a "run.log" file containing detailed log output for
+the run. You will also find a sub-directory called 'dhrystone_1_1' that
+contains the results for that iteration. Finally, you will find a copy of the
+agenda file in the ``wa_output/__meta`` subdirectory. The contents of
+iteration-specific subdirectories will vary from workload to workload, and,
+along with the contents of the main output directory, will depend on the
+instrumentation and result processors that were enabled for that run.
+
+The ``run`` sub-command takes a number of options that control its behavior,
+you can view those by executing ``wa run -h``. Please see the :doc:`invocation`
+section for details.
+
+
+Create an Agenda
+================
+
+Simply running a single workload is normally of little use. Typically, you would
+want to specify several workloads, set up the device state and, possibly, enable
+additional instrumentation. To do this, you would need to create an "agenda" for
+the run that outlines everything you want WA to do.
+
+Agendas are written using YAML_ markup language. A simple agenda might look
+like this:
+
+.. code-block:: yaml
+
+ config:
+ instrumentation: [~execution_time]
+ result_processors: [json]
+ global:
+ iterations: 2
+ workloads:
+ - memcpy
+ - name: dhrystone
+ params:
+ mloops: 5
+ threads: 1
+
+This agenda:
+
+- Specifies two workloads: memcpy and dhrystone.
+- Specifies that dhrystone should run in one thread and execute five million loops.
+- Specifies that each of the two workloads should be run twice.
+- Enables the json result processor, in addition to the result processors enabled
+  in config.py.
+- Disables the execution_time instrument, if it is enabled in config.py.
+
+There is a lot more that could be done with an agenda. Please see :doc:`agenda`
+section for details.
+
+.. _YAML: http://en.wikipedia.org/wiki/YAML
+
diff --git a/doc/source/resources.rst b/doc/source/resources.rst
new file mode 100644
index 00000000..af944e6f
--- /dev/null
+++ b/doc/source/resources.rst
@@ -0,0 +1,45 @@
+Dynamic Resource Resolution
+===========================
+
+Introduced in version 2.1.3.
+
+The idea is to decouple resource identification from resource discovery.
+Workloads/instruments/devices/etc state *what* resources they need, and not
+*where* to look for them -- this instead is left to the resource resolver that
+is now part of the execution context. The actual discovery of resources is
+performed by resource getters that are registered with the resolver.
+
+A resource type is defined by a subclass of
+:class:`wlauto.core.resource.Resource`. An instance of this class describes a
+resource that is to be obtained. At minimum, a ``Resource`` instance has an
+owner (which is typically the object that is looking for the resource), but
+specific resource types may define other parameters that describe an instance of
+that resource (such as file names, URLs, etc).
+
+An object looking for a resource invokes a resource resolver with an instance of
+``Resource`` describing the resource it is after. The resolver goes through the
+getters registered for that resource type in priority order attempting to obtain
+the resource; once the resource is obtained, it is returned to the calling
+object. If none of the registered getters could find the resource, ``None`` is
+returned instead.
+
+The most common kind of object looking for resources is a ``Workload``, and
+since v2.1.3, the ``Workload`` class defines
+:py:meth:`wlauto.core.workload.Workload.init_resources` method that may be
+overridden by subclasses to perform resource resolution. For example, a workload
+looking for an APK file would do so like this::
+
+ from wlauto import Workload
+ from wlauto.common.resources import ApkFile
+
+ class AndroidBenchmark(Workload):
+
+ # ...
+
+ def init_resources(self, context):
+ self.apk_file = context.resource.get(ApkFile(self))
+
+ # ...
+
+
+Currently available resource types are defined in :py:mod:`wlauto.common.resources`.
diff --git a/doc/source/revent.rst b/doc/source/revent.rst
new file mode 100644
index 00000000..e3b756ce
--- /dev/null
+++ b/doc/source/revent.rst
@@ -0,0 +1,97 @@
+.. _revent_files_creation:
+
+revent
+======
+
+The revent utility can be used to record and later play back a sequence of user
+input events, such as key presses and touch screen taps. This is an alternative
+to Android UI Automator for providing automation for workloads. ::
+
+
+ usage:
+ revent [record time file|replay file|info] [verbose]
+ record: stops after either return on stdin
+ or time (in seconds)
+ and stores in file
+ replay: replays eventlog from file
+ info:shows info about each event char device
+ any additional parameters make it verbose
+
+Recording
+---------
+
+To record, transfer the revent binary to the device, then invoke ``revent
+record``, giving it the time (in seconds) you want to record for, and the
+file you want to record to (WA expects these files to have .revent
+extension)::
+
+ host$ adb push revent /data/local/revent
+ host$ adb shell
+ device# cd /data/local
+ device# ./revent record 1000 my_recording.revent
+
+The recording has now started and button presses, taps, etc you perform on the
+device will go into the .revent file. The recording will stop after the
+specified time period, and you can also stop it by hitting return in the adb
+shell.
+
+Replaying
+---------
+
+To replay a recorded file, run ``revent replay`` on the device, giving it the
+file you want to replay::
+
+ device# ./revent replay my_recording.revent
+
+
+Using revent With Workloads
+---------------------------
+
+Some workloads (pretty much all games) rely on recorded revents for their
+execution. :class:`wlauto.common.GameWorkload`-derived workloads expect two
+revent files -- one for performing the initial setup (navigating menus,
+selecting game modes, etc), and one for the actual execution of the game.
+Because revents are very device-specific\ [*]_, these two files would need to
+be recorded for each device.
+
+The files must be called ``<device name>.(setup|run).revent``, where
+``<device name>`` is the name of your device (as defined by the ``name``
+attribute of your device's class). WA will look for these files in two
+places: ``<install dir>/wlauto/workloads/<workload name>/revent_files``
+and ``~/.workload_automation/dependencies/<workload name>``. The first
+location is primarily intended for revent files that come with WA (and if
+you did a system-wide install, you'll need sudo to add files there), so it's
+probably easier to use the second location for the files you record. Also,
+if revent files for a workload exist in both locations, the files under
+``~/.workload_automation/dependencies`` will be used in favor of those
+installed with WA.
+
+For example, if you wanted to run angrybirds workload on "Acme" device, you would
+record the setup and run revent files using the method outlined in the section
+above and then pull them for the devices into the following locations::
+
+    ~/.workload_automation/dependencies/angrybirds/Acme.setup.revent
+    ~/.workload_automation/dependencies/angrybirds/Acme.run.revent
+
+(you may need to create the intermediate directories if they don't already
+exist).
+
+.. [*] It's not just about screen resolution -- the event codes may be different
+ even if devices use the same screen.
+
+
+revent vs. UiAutomator
+----------------------
+
+In general, Android UI Automator is the preferred way of automating user input
+for workloads because, unlike revent, UI Automator does not depend on a
+particular screen resolution, and so is more portable across different devices.
+It also gives better control and can potentially be faster for long UI
+manipulations, as input events are scripted based on the available UI elements,
+rather than generated by human input.
+
+On the other hand, revent can be used to manipulate pretty much any workload,
+whereas UI Automator only works for Android UI elements (such as text boxes or
+radio buttons), which makes the latter useless for things like games. Recording
+a revent sequence is also faster than writing automation code (on the other hand,
+one would need to maintain a different revent log for each screen resolution).
diff --git a/doc/source/wa-execution.png b/doc/source/wa-execution.png
new file mode 100644
index 00000000..9bdea6fd
--- /dev/null
+++ b/doc/source/wa-execution.png
Binary files differ
diff --git a/doc/source/writing_extensions.rst b/doc/source/writing_extensions.rst
new file mode 100644
index 00000000..737a1166
--- /dev/null
+++ b/doc/source/writing_extensions.rst
@@ -0,0 +1,956 @@
+==================
+Writing Extensions
+==================
+
+Workload Automation offers several extension points (or plugin types). The most
+interesting of these are:
+
+:workloads: These are the tasks that get executed and measured on the device. These
+ can be benchmarks, high-level use cases, or pretty much anything else.
+:devices: These are interfaces to the physical devices (development boards or end-user
+ devices, such as smartphones) that use cases run on. Typically each model of a
+             physical device would require its own interface class (though some functionality
+ may be reused by subclassing from an existing base).
+:instruments: Instruments allow collecting additional data from workload execution (e.g.
+ system traces). Instruments are not specific to a particular Workload. Instruments
+ can hook into any stage of workload execution.
+:result processors: These are used to format the results of workload execution once they have been
+ collected. Depending on the callback used, these will run either after each
+ iteration or at the end of the run, after all of the results have been
+ collected.
+
+You create an extension by subclassing the appropriate base class, defining
+appropriate methods and attributes, and putting the .py file with the class into
+an appropriate subdirectory under ``~/.workload_automation`` (there is one for
+each extension type).
+
+
+Extension Basics
+================
+
+This sub-section covers things common to implementing extensions of all types.
+It is recommended you familiarize yourself with the information here before
+proceeding to the guidance for specific extension types.
+
+To create an extension, you basically subclass an appropriate base class and then
+implement the appropriate methods.
+
+The Context
+-----------
+
+The majority of methods in extensions accept a context argument. This is an
+instance of :class:`wlauto.core.execution.ExecutionContext`. It contains
+information about the current state of execution of WA and keeps track of things
+like which workload is currently running and the current iteration.
+
+Notable attributes of the context are
+
+context.spec
+ the current workload specification being executed. This is an
+ instance of :class:`wlauto.core.configuration.WorkloadRunSpec`
+ and defines the workload and the parameters under which it is
+ being executed.
+
+context.workload
+ ``Workload`` object that is currently being executed.
+
+context.current_iteration
+ The current iteration of the spec that is being executed. Note that this
+ is the iteration for that spec, i.e. the number of times that spec has
+    been run, *not* the total number of iterations that have been executed
+    so far.
+
+context.result
+ This is the result object for the current iteration. This is an instance
+ of :class:`wlauto.core.result.IterationResult`. It contains the status
+ of the iteration as well as the metrics and artifacts generated by the
+    workload and enabled instrumentation.
+
+context.device
+ The device interface object that can be used to interact with the
+ device. Note that workloads and instruments have their own device
+ attribute and they should be using that instead.
+
+In addition to these, context also defines a few useful paths (see below).
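+
+For example, an instrument's ``update_result`` method might make use of the
+context like this (a minimal sketch; the metric name and value are made up)::
+
+    def update_result(self, context):
+        self.logger.debug('Updating result for iteration {} of {}'.format(
+            context.current_iteration, context.workload.name))
+        # Attach a (made-up) metric to the result for this iteration.
+        context.result.add_metric('example_metric', 42)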
+
+
+Paths
+-----
+
+You should avoid using hard-coded absolute paths in your extensions whenever
+possible, as they make your code too dependent on a particular environment and
+may mean having to make adjustments when moving to new (host and/or device)
+platforms. To help avoid hard-coded absolute paths, WA defines a number of
+standard locations. You should strive to define your paths relative
+to one of those.
+
+On the host
+~~~~~~~~~~~
+
+Host paths are available through the context object, which is passed to most
+extension methods.
+
+context.run_output_directory
+    This is the top-level output directory for all WA results (by default,
+    this will be "wa_output" in the directory in which WA was invoked).
+
+context.output_directory
+    This is the output directory for the current iteration. This will be an
+ iteration-specific subdirectory under the main results location. If
+ there is no current iteration (e.g. when processing overall run results)
+    this will point to the same location as ``run_output_directory``.
+
+context.host_working_directory
+    This is an additional location that may be used by extensions to store
+ non-iteration specific intermediate files (e.g. configuration).
+
+Additionally, the global ``wlauto.settings`` object exposes one other location:
+
+settings.dependency_directory
+ this is the root directory for all extension dependencies (e.g. media
+ files, assets etc) that are not included within the extension itself.
+
+As per Python best practice, it is recommended that the methods and values in
+the ``os.path`` standard library module are used for host path manipulation.
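+
+For example, an extension might construct its host-side paths like this (a
+sketch; the file and asset names are made up)::
+
+    import os
+    from wlauto import settings
+
+    # ...
+
+    def setup(self, context):
+        # An intermediate file specific to the current iteration.
+        self.log_file = os.path.join(context.output_directory, 'my_extension.log')
+        # A shared asset placed under the dependencies location.
+        self.asset = os.path.join(settings.dependency_directory, 'my_extension', 'asset.bin')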
+
+On the device
+~~~~~~~~~~~~~
+
+Workloads and instruments have a ``device`` attribute, which is an interface to
+the device used by WA. It defines the following location:
+
+device.working_directory
+ This is the directory for all WA-related files on the device. All files
+ deployed to the device should be pushed to somewhere under this location
+ (the only exception being executables installed with ``device.install``
+ method).
+
+Since there could be a mismatch between path notation used by the host and the
+device, the ``os.path`` module should *not* be used for on-device path
+manipulation. Instead, the device has an equivalent module exposed through the
+``device.path`` attribute. This has all the same attributes and behaves the
+same way as ``os.path``, but is guaranteed to produce valid paths for the device,
+irrespective of the host's path notation.
+
+.. note:: result processors, unlike workloads and instruments, do not have their
+ own device attribute; however they can access the device through the
+ context.
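+
+For example, from within a workload or instrument method, an on-device path can
+be constructed and a file pushed to it like this (a sketch; ``self.host_file``
+is assumed to have been set to a path on the host)::
+
+    def setup(self, context):
+        on_device_file = self.device.path.join(self.device.working_directory, 'data.bin')
+        self.device.push_file(self.host_file, on_device_file)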
+
+
+Parameters
+----------
+
+All extensions can be parameterized. Parameters are specified using
+``parameters`` class attribute. This should be a list of
+:class:`wlauto.core.Parameter` instances. The following attributes can be
+specified on parameter creation:
+
+name
+ This is the only mandatory argument. The name will be used to create a
+ corresponding attribute in the extension instance, so it must be a valid
+ Python identifier.
+
+kind
+    This is the type of the value of the parameter. This could be any
+    callable. Normally, this should be a standard Python type, e.g. ``int``
+    or ``float``, or one of the types defined in :mod:`wlauto.utils.types`.
+ If not explicitly specified, this will default to ``str``.
+
+ .. note:: Irrespective of the ``kind`` specified, ``None`` is always a
+ valid value for a parameter. If you don't want to allow
+ ``None``, then set ``mandatory`` (see below) to ``True``.
+
+allowed_values
+ A list of the only allowed values for this parameter.
+
+ .. note:: For composite types, such as ``list_of_strings`` or
+ ``list_of_ints`` in :mod:`wlauto.utils.types`, each element of
+ the value will be checked against ``allowed_values`` rather
+ than the composite value itself.
+
+default
+ The default value to be used for this parameter if one has not been
+ specified by the user. Defaults to ``None``.
+
+mandatory
+ A ``bool`` indicating whether this parameter is mandatory. Setting this
+ to ``True`` will make ``None`` an illegal value for the parameter.
+ Defaults to ``False``.
+
+    .. note:: Specifying a ``default`` means that ``mandatory`` will,
+              effectively, be ignored (unless the user sets the param to ``None``).
+
+ .. note:: Mandatory parameters are *bad*. If at all possible, you should
+ strive to provide a sensible ``default`` or to make do without
+ the parameter. Only when the param is absolutely necessary,
+ and there really is no sensible default that could be given
+ (e.g. something like login credentials), should you consider
+ making it mandatory.
+
+constraint
+ This is an additional constraint to be enforced on the parameter beyond
+ its type or fixed allowed values set. This should be a predicate (a function
+ that takes a single argument -- the user-supplied value -- and returns
+ a ``bool`` indicating whether the constraint has been satisfied).
+
+override
+ A parameter name must be unique not only within an extension but also
+    within that extension's class hierarchy. If you try to declare a parameter
+    with a name that already exists, you will get an error. If you do
+ want to override a parameter from further up in the inheritance
+ hierarchy, you can indicate that by setting ``override`` attribute to
+ ``True``.
+
+ When overriding, you do not need to specify every other attribute of the
+    parameter, just the ones you want to override. Values for the rest will
+ be taken from the parameter in the base class.
+
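+
+Putting some of these together, a ``parameters`` declaration might look like the
+following (a sketch; the parameter names and values are purely illustrative)::
+
+    from wlauto import Workload, Parameter
+    from wlauto.utils.types import list_of_strings
+
+
+    class ExampleWorkload(Workload):
+
+        name = 'example'
+
+        parameters = [
+            Parameter('duration', kind=int, default=60,
+                      constraint=lambda x: x > 0,
+                      description='Duration, in seconds, for which to run the workload.'),
+            Parameter('mode', default='normal', allowed_values=['normal', 'fast'],
+                      description='The mode in which to run.'),
+            Parameter('tags', kind=list_of_strings,
+                      description='Optional tags to attach to the run.'),
+        ]
+
+        # setup(), run(), etc. omitted -- see "Adding a Workload" below.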
+
+Validation and cross-parameter constraints
+------------------------------------------
+
+An extension will get validated at some point after construction. When exactly
+this occurs depends on the extension type, but it *will* be validated before it
+is used.
+
+You can implement a ``validate`` method in your extension (that takes no arguments
+beyond ``self``) to perform any additional *internal* validation in your
+extension. By "internal", I mean that you cannot make assumptions about the
+surrounding environment (e.g. that the device has been initialized).
+
+The contract for the ``validate`` method is that it should raise an exception
+(either ``wlauto.exceptions.ConfigError`` or an extension-specific exception type -- see
+further down this page) if some validation condition has not been, and cannot be, met.
+If the method returns without raising an exception, then the extension is in a
+valid internal state.
+
+Note that ``validate`` can be used not only to verify, but also to impose a
+valid internal state. In particular, this is where cross-parameter constraints can
+be resolved. If the ``default`` or ``allowed_values`` of one parameter depend on
+another parameter, there is no way to express that declaratively when specifying
+the parameters. In that case the dependent attribute should be left unspecified
+on creation and should instead be set inside ``validate``.
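+
+For example, a ``validate`` method that resolves such a dependency might look
+like this (a sketch; the parameter names are made up)::
+
+    from wlauto.exceptions import ConfigError
+
+    # ...
+
+    def validate(self):
+        if self.report_format not in ['csv', 'json']:
+            raise ConfigError('Unsupported report_format: {}'.format(self.report_format))
+        if self.report_file is None:
+            # Cross-parameter constraint: the default report_file depends on
+            # report_format, so it is resolved here rather than declaratively.
+            self.report_file = 'report.{}'.format(self.report_format)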
+
+Logging
+-------
+
+Every extension class has its own logger that you can access through
+``self.logger`` inside the extension's methods. Generally, a :class:`Device` will log
+everything it is doing, so you shouldn't need to add much additional logging for
+device-side operations. But you might want to log additional information, e.g.
+what settings your extension is using, what it is doing on the host, etc.
+Operations on the host will not normally be logged, so your extension should
+definitely log what it is doing on the host. One situation in particular where
+you should add logging is before doing something that might take a significant amount
+of time, such as downloading a file.
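+
+For example (the attribute names here are made up)::
+
+    def setup(self, context):
+        self.logger.debug('Using working file: {}'.format(self.host_file))
+        self.logger.info('Downloading {}; this may take a while...'.format(self.url))
+        # ... perform the download ...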
+
+
+Documenting
+-----------
+
+All extensions and their parameters should be documented. For extensions
+themselves, this is done through the ``description`` class attribute. The convention
+for an extension description is that the first paragraph should be a short
+summary description of what the extension does and why one would want to use it
+(among other things, this will get extracted and used by ``wa list`` command).
+Subsequent paragraphs (separated by blank lines) can then provide a more
+detailed description, including any limitations and setup instructions.
+
+For parameters, the description is passed as an argument on creation. Please
+note that if ``default``, ``allowed_values``, or ``constraint``, are set in the
+parameter, they do not need to be explicitly mentioned in the description (wa
+documentation utilities will automatically pull those). If the ``default`` is set
+in ``validate`` or additional cross-parameter constraints exist, this *should*
+be documented in the parameter description.
+
+Both extensions and their parameters should be documented using reStructuredText
+markup (standard markup for Python documentation). See:
+
+http://docutils.sourceforge.net/rst.html
+
+Aside from that, it is up to you how you document your extension. You should try
+to provide enough information so that someone unfamiliar with your extension is
+able to use it, e.g. you should document all settings and parameters your
+extension expects (including what the valid values are).
+
+
+Error Notification
+------------------
+
+When you detect an error condition, you should raise an appropriate exception to
+notify the user. The exception would typically be :class:`ConfigError` or
+(depending on the type of the extension)
+:class:`WorkloadError`/:class:`DeviceError`/:class:`InstrumentError`/:class:`ResultProcessorError`.
+All these errors are defined in :mod:`wlauto.exception` module.
+
+:class:`ConfigError` should be raised where there is a problem in configuration
+specified by the user (either through the agenda or config files). These errors
+are meant to be resolvable by simple adjustments to the configuration (and the
+error message should suggest what adjustments need to be made). For all other
+errors, such as missing dependencies, mis-configured environment, problems
+performing operations, etc., the extension type-specific exceptions should be
+used.
+
+If the extension itself is capable of recovering from the error and carrying
+on, it may make more sense to log an ERROR or WARNING level message using the
+extension's logger and to continue operation.
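+
+For example (a sketch; the checks and attribute names are made up)::
+
+    import os
+
+    from wlauto.exceptions import WorkloadError
+
+    # ...
+
+    def setup(self, context):
+        if not os.path.isfile(self.host_binary):
+            # A missing dependency rather than a configuration problem, so the
+            # extension type-specific exception is used.
+            raise WorkloadError('Could not find {} on the host.'.format(self.host_binary))
+
+    def update_result(self, context):
+        if not self.extra_log_collected:
+            # Recoverable: the main results are still valid, so warn and carry on.
+            self.logger.warning('Extra log was not collected; continuing without it.')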
+
+
+Utils
+-----
+
+Workload Automation defines a number of utilities collected under
+:mod:`wlauto.utils` subpackage. These utilities were created to help with the
+implementation of the framework itself, but may also be useful when
+implementing extensions.
+
+
+Adding a Workload
+=================
+
+.. note:: You can use ``wa create workload [name]`` script to generate a new workload
+ structure for you. This script can also create the boilerplate for
+ UI automation, if your workload needs it. See ``wa create -h`` for more
+ details.
+
+New workloads can be added by subclassing :class:`wlauto.core.workload.Workload`.
+
+
+The Workload class defines the following interface::
+
+ class Workload(Extension):
+
+ name = None
+
+ def init_resources(self, context):
+ pass
+
+ def setup(self, context):
+ raise NotImplementedError()
+
+ def run(self, context):
+ raise NotImplementedError()
+
+ def update_result(self, context):
+ raise NotImplementedError()
+
+ def teardown(self, context):
+ raise NotImplementedError()
+
+ def validate(self):
+ pass
+
+.. note:: Please see :doc:`conventions` section for notes on how to interpret
+ this.
+
+The interface should be implemented as follows:
+
+    :name: This identifies the workload (e.g. it is used to specify it in the
+           agenda_).
+    :init_resources: This method may optionally be overridden to implement dynamic
+ resource discovery for the workload.
+ **Added in version 2.1.3**
+ :setup: Everything that needs to be in place for workload execution should
+ be done in this method. This includes copying files to the device,
+ starting up an application, configuring communications channels,
+ etc.
+ :run: This method should perform the actual task that is being measured.
+ When this method exits, the task is assumed to be complete.
+
+ .. note:: Instrumentation is kicked off just before calling this
+ method and is disabled right after, so everything in this
+ method is being measured. Therefore this method should
+ contain the least code possible to perform the operations
+ you are interested in measuring. Specifically, things like
+ installing or starting applications, processing results, or
+ copying files to/from the device should be done elsewhere if
+ possible.
+
+ :update_result: This method gets invoked after the task execution has
+ finished and should be used to extract metrics and add them
+ to the result (see below).
+    :teardown: This could be used to perform any cleanup you may wish to do,
+               e.g. uninstalling applications, deleting files on the device, etc.
+
+    :validate: This method can be used to validate any assumptions your workload
+               makes about the environment (e.g. that required files are
+               present, environment variables are set, etc) and should raise
+               a :class:`wlauto.exceptions.WorkloadError` if that is not the
+               case. The base class implementation only makes sure that
+               the name attribute has been set.
+
+.. _agenda: agenda.html
+
+Workload methods (except for ``validate``) take a single argument that is a
+:class:`wlauto.core.execution.ExecutionContext` instance. This object keeps
+track of the current execution state (such as the current workload, iteration
+number, etc), and contains, among other things, a
+:class:`wlauto.core.workload.WorkloadResult` instance that should be populated
+from the ``update_result`` method with the results of the execution. ::
+
+ # ...
+
+ def update_result(self, context):
+ # ...
+ context.result.add_metric('energy', 23.6, 'Joules', lower_is_better=True)
+
+ # ...
+
+Example
+-------
+
+This example shows a simple workload that times how long it takes to compress a
+file of a particular size on the device.
+
+.. note:: This is intended as an example of how to implement the Workload
+ interface. The methodology used to perform the actual measurement is
+ not necessarily sound, and this Workload should not be used to collect
+ real measurements.
+
+.. code-block:: python
+
+ import os
+ from wlauto import Workload, Parameter
+
+ class ZiptestWorkload(Workload):
+
+ name = 'ziptest'
+ description = '''
+ Times how long it takes to gzip a file of a particular size on a device.
+
+ This workload was created for illustration purposes only. It should not be
+ used to collect actual measurements.
+
+ '''
+
+ parameters = [
+ Parameter('file_size', kind=int, default=2000000,
+ description='Size of the file (in bytes) to be gzipped.')
+ ]
+
+ def setup(self, context):
+ # Generate a file of the specified size containing random garbage.
+ host_infile = os.path.join(context.output_directory, 'infile')
+ command = 'openssl rand -base64 {} > {}'.format(self.file_size, host_infile)
+ os.system(command)
+ # Set up on-device paths
+ devpath = self.device.path # os.path equivalent for the device
+ self.device_infile = devpath.join(self.device.working_directory, 'infile')
+ self.device_outfile = devpath.join(self.device.working_directory, 'outfile')
+ # Push the file to the device
+ self.device.push_file(host_infile, self.device_infile)
+
+ def run(self, context):
+ self.device.execute('cd {} && (time gzip {}) &>> {}'.format(self.device.working_directory,
+ self.device_infile,
+ self.device_outfile))
+
+ def update_result(self, context):
+ # Pull the results file to the host
+ host_outfile = os.path.join(context.output_directory, 'outfile')
+ self.device.pull_file(self.device_outfile, host_outfile)
+        # Extract metrics from the file's contents and update the result
+ # with them.
+ content = iter(open(host_outfile).read().strip().split())
+ for value, metric in zip(content, content):
+ mins, secs = map(float, value[:-1].split('m'))
+ context.result.add_metric(metric, secs + 60 * mins)
+
+ def teardown(self, context):
+ # Clean up on-device file.
+ self.device.delete_file(self.device_infile)
+ self.device.delete_file(self.device_outfile)
+
+
+
+.. _GameWorkload:
+
+Adding a revent-dependent Workload
+----------------------------------
+
+:class:`wlauto.common.game.GameWorkload` is the base class for all the workloads
+that depend on :ref:`revent_files_creation` files. It implements all the methods
+needed to push the files to the device and run them. A new GameWorkload can be
+added by subclassing :class:`wlauto.common.game.GameWorkload`.
+
+The GameWorkload class defines the following interface::
+
+ class GameWorkload(Workload):
+
+ name = None
+ package = None
+ activity = None
+
+The interface should be implemented as follows:
+
+    :name: This identifies the workload (e.g. it is used to specify it in the
+           agenda_).
+ :package: This is the name of the '.apk' package without its file extension.
+ :activity: The name of the main activity that runs the package.
+
+Example
+-------
+
+This example shows a simple GameWorkload that plays a game.
+
+.. code-block:: python
+
+ from wlauto.common.game import GameWorkload
+
+ class MyGame(GameWorkload):
+
+ name = 'mygame'
+ package = 'com.mylogo.mygame'
+ activity = 'myActivity.myGame'
+
+Convention for Naming revent Files for :class:`wlauto.common.game.GameWorkload`
+-------------------------------------------------------------------------------
+
+There is a convention for naming revent files which you should follow if you
+want to record your own revent files. Each revent file name must start with the
+device name (case sensitive), followed by a dot '.', then the stage name,
+then '.revent'. All your custom revent files should reside in
+``~/.workload_automation/dependencies/<workload name>/``. These are the currently
+supported stages:
+
+    :setup: This stage is where the game is loaded. It is a good place to
+            record a revent that modifies the game settings and gets the game
+            ready to start.
+ :run: This stage is where the game actually starts. This will allow for
+ more accurate results if the revent file for this stage only
+ records the game being played.
+
+For instance, to add custom revent files for a device named mydevice and
+a workload named mygame, you create a new directory called mygame in
+``~/.workload_automation/dependencies/``. Then you add the revent files for
+the stages you want in ``~/.workload_automation/dependencies/mygame/``::
+
+ mydevice.setup.revent
+ mydevice.run.revent
+
+Any revent file in the dependencies directory will always take precedence over
+the revent file in the workload directory. So it is possible, for example, to
+just provide one revent for setup in the dependencies and use the run.revent
+that is in the workload directory.
+
+Adding an Instrument
+====================
+
+Instruments can be used to collect additional measurements during workload
+execution (e.g. collect power readings). An instrument can hook into almost any
+stage of workload execution. A typical instrument would implement a subset of
+the following interface::
+
+ class Instrument(Extension):
+
+ name = None
+ description = None
+
+ parameters = [
+ ]
+
+ def initialize(self, context):
+ pass
+
+ def setup(self, context):
+ pass
+
+ def start(self, context):
+ pass
+
+ def stop(self, context):
+ pass
+
+ def update_result(self, context):
+ pass
+
+ def teardown(self, context):
+ pass
+
+ def finalize(self, context):
+ pass
+
+This is similar to a Workload, except all methods are optional. In addition to
+the workload-like methods, instruments can define a number of other methods that
+will get invoked at various points during run execution. The most useful of
+these is perhaps ``initialize``, which gets invoked after the device has been
+initialised for the first time, and can be used to perform one-time setup (e.g.
+copying files to the device -- there is no point in doing that for each
+iteration). The full list of available methods can be found in
+:ref:`Signals Documentation <instrument_name_mapping>`.
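+
+For example, here is a sketch of an instrument that uses ``initialize`` for
+one-time setup (the helper script and its command line are hypothetical)::
+
+    import os
+
+    from wlauto import Instrument
+
+
+    class HelperExampleInstrument(Instrument):
+
+        name = 'helper_example'
+
+        def initialize(self, context):
+            # Done once per run, rather than once per iteration: deploy the
+            # (hypothetical) helper script used to collect readings.
+            host_script = os.path.join(os.path.dirname(__file__), 'helper.sh')
+            self.device_script = self.device.path.join(self.device.working_directory,
+                                                       'helper.sh')
+            self.device.push_file(host_script, self.device_script)
+
+        def start(self, context):
+            # Kick off collection just before the workload runs.
+            self.device.execute('sh {} start'.format(self.device_script))
+
+        def stop(self, context):
+            self.device.execute('sh {} stop'.format(self.device_script))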
+
+
+Prioritization
+--------------
+
+Callbacks (e.g. ``setup()`` methods) for all instrumentation get executed at the
+same point during workload execution, one after another. The order in which the
+callbacks get invoked should be considered arbitrary and should not be relied
+on (e.g. you cannot expect that just because instrument A is listed before
+instrument B in the config, instrument A's callbacks will run first).
+
+In some cases (e.g. in ``start()`` and ``stop()`` methods), it is important to
+ensure that a particular instrument's callbacks run as closely as possible to the
+workload's invocation in order to maintain accuracy of readings; or,
+conversely, that a callback is executed after the others, because it takes a
+long time and may throw off the accuracy of other instrumentation. You can do
+this by prepending ``fast_`` or ``slow_`` to your callbacks' names. For
+example::
+
+    class PreciseInstrument(Instrument):
+
+ # ...
+
+ def fast_start(self, context):
+ pass
+
+ def fast_stop(self, context):
+ pass
+
+ # ...
+
+``PreciseInstrument`` will be started after all other instrumentation (i.e.
+*just* before the workload runs), and it will be stopped before all other
+instrumentation (i.e. *just* after the workload runs). It is also possible to
+use ``very_fast_`` and ``very_slow_`` prefixes when you want to be really
+sure that your callback will be the last/first to run.
+
+If more than one active instrument has specified fast (or slow) callbacks, then
+their execution order with respect to each other is not guaranteed. In general,
+having a lot of instrumentation enabled is going to necessarily affect the
+readings. The best way to ensure accuracy of measurements is to minimize the
+number of active instruments (perhaps doing several identical runs with
+different instruments enabled).
+
+Example
+-------
+
+Below is a simple instrument that measures the execution time of a workload::
+
+    import time
+
+    from wlauto import Instrument
+
+
+    class ExecutionTimeInstrument(Instrument):
+ """
+ Measure how long it took to execute the run() methods of a Workload.
+
+ """
+
+ name = 'execution_time'
+
+ def initialize(self, context):
+ self.start_time = None
+ self.end_time = None
+
+ def fast_start(self, context):
+ self.start_time = time.time()
+
+ def fast_stop(self, context):
+ self.end_time = time.time()
+
+ def update_result(self, context):
+ execution_time = self.end_time - self.start_time
+ context.result.add_metric('execution_time', execution_time, 'seconds')
+
+
+Adding a Result Processor
+=========================
+
+A result processor is responsible for processing the results. This may
+involve formatting and writing them to a file, uploading them to a database,
+generating plots, etc. WA comes with a few result processors that output
+results in a few common formats (such as csv or JSON).
+
+You can add your own result processors by creating a Python file in
+``~/.workload_automation/result_processors`` with a class that derives from
+:class:`wlauto.core.result.ResultProcessor`, which has the following interface::
+
+ class ResultProcessor(Extension):
+
+ name = None
+ description = None
+
+ parameters = [
+ ]
+
+ def initialize(self, context):
+ pass
+
+ def process_iteration_result(self, result, context):
+ pass
+
+ def export_iteration_result(self, result, context):
+ pass
+
+ def process_run_result(self, result, context):
+ pass
+
+ def export_run_result(self, result, context):
+ pass
+
+ def finalize(self, context):
+ pass
+
+
+The method names should be fairly self-explanatory. The difference between
+"process" and "export" methods is that export methods will be invoke after
+process methods for all result processors have been generated. Process methods
+may generated additional artifacts (metrics, files, etc), while export methods
+should not -- the should only handle existing results (upload them to a
+database, archive on a filer, etc).
+
+The result object passed to iteration methods is an instance of
+:class:`wlauto.core.result.IterationResult`, the result object passed to run
+methods is an instance of :class:`wlauto.core.result.RunResult`. Please refer to
+their API documentation for details.
+
+Example
+-------
+
+Here is an example result processor that formats the results as a column-aligned
+table::
+
+ import os
+ from wlauto import ResultProcessor
+ from wlauto.utils.misc import write_table
+
+
+ class Table(ResultProcessor):
+
+ name = 'table'
+        description = 'Generates a text file containing a column-aligned table with run results.'
+
+ def process_run_result(self, result, context):
+ rows = []
+ for iteration_result in result.iteration_results:
+ for metric in iteration_result.metrics:
+ rows.append([metric.name, str(metric.value), metric.units or '',
+ metric.lower_is_better and '-' or '+'])
+
+ outfile = os.path.join(context.output_directory, 'table.txt')
+ with open(outfile, 'w') as wfh:
+ write_table(rows, wfh)
+
+
+Adding a Resource Getter
+========================
+
+A resource getter is a new extension type added in version 2.1.3. A resource
+getter implements a method of acquiring resources of a particular type (such as
+APK files or additional workload assets). Resource getters are invoked in
+priority order until one returns the desired resource.
+
+If you want WA to look for resources somewhere it doesn't by default (e.g. you
+have a repository of APK files), you can implement a getter for the resource and
+register it with a higher priority than the standard WA getters, so that it gets
+invoked first.
+
+Instances of a resource getter should implement the following interface::
+
+ class ResourceGetter(Extension):
+
+ name = None
+ resource_type = None
+ priority = GetterPriority.environment
+
+ def get(self, resource, **kwargs):
+ raise NotImplementedError()
+
+The getter should define a name (as with all extensions), a resource
+type, which should be a string, e.g. ``'jar'``, and a priority (see `Getter
+Prioritization`_ below). In addition, the ``get`` method should be implemented. The
+first argument is an instance of :class:`wlauto.core.resource.Resource`
+representing the resource that should be obtained. Additional keyword
+arguments may be used by the invoker to provide additional information about
+the resource. This method should return an instance of the resource that
+has been discovered (what "instance" means depends on the resource, e.g. it
+could be a file path), or ``None`` if this getter was unable to discover
+that resource.
+
+Getter Prioritization
+---------------------
+
+A priority is an integer with higher numeric values indicating a higher
+priority. The following standard priority aliases are defined for getters:
+
+
+ :cached: The cached version of the resource. Look here first. This priority also implies
+ that the resource at this location is a "cache" and is not the only version of the
+ resource, so it may be cleared without losing access to the resource.
+ :preferred: Take this resource in favour of the environment resource.
+ :environment: Found somewhere under ~/.workload_automation/ or equivalent, or
+                  from environment variables, external configuration files, etc.
+                  These will override resources supplied with the package.
+ :package: Resource provided with the package.
+ :remote: Resource will be downloaded from a remote location (such as an HTTP server
+ or a samba share). Try this only if no other getter was successful.
+
+These priorities are defined as class members of
+:class:`wlauto.core.resource.GetterPriority`, e.g. ``GetterPriority.cached``.
+
+Most getters in WA will be registered with either ``environment`` or
+``package`` priorities. So if you want your getter to override the default, it
+should typically be registered as ``preferred``.
+
+You don't have to stick to standard priority levels (though you should, unless
+there is a good reason). Any integer is a valid priority. The standard priorities
+range from -20 to 20 in increments of 10.
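+
+For example, a getter that is meant to take precedence over the defaults might
+declare itself like this (a sketch; the actual lookup is omitted)::
+
+    from wlauto import ResourceGetter, GetterPriority
+
+
+    class MyRepoApkGetter(ResourceGetter):
+
+        name = 'myrepo_apk'
+        resource_type = 'apk'
+        # Checked before the standard environment and package getters.
+        priority = GetterPriority.preferred
+
+        def get(self, resource, **kwargs):
+            pass  # look the APK up in the (hypothetical) repository here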
+
+Example
+-------
+
+The following is an implementation of a getter for a workload APK file that
+looks for the file under
+``~/.workload_automation/dependencies/<workload_name>``::
+
+ import os
+ import glob
+
+ from wlauto import ResourceGetter, GetterPriority, settings
+    from wlauto.exceptions import ResourceError
+    from wlauto.utils.misc import ensure_directory_exists as _d
+
+
+ class EnvironmentApkGetter(ResourceGetter):
+
+ name = 'environment_apk'
+ resource_type = 'apk'
+ priority = GetterPriority.environment
+
+        def get(self, resource, **kwargs):
+ resource_dir = _d(os.path.join(settings.dependency_directory, resource.owner.name))
+ version = kwargs.get('version')
+ found_files = glob.glob(os.path.join(resource_dir, '*.apk'))
+ if version:
+ found_files = [ff for ff in found_files if version.lower() in ff.lower()]
+ if len(found_files) == 1:
+ return found_files[0]
+ elif not found_files:
+ return None
+ else:
+ raise ResourceError('More than one .apk found in {} for {}.'.format(resource_dir,
+ resource.owner.name))
+
+.. _adding_a_device:
+
+Adding a Device
+===============
+
+At the moment, only Android devices are supported. Most of the functionality for
+interacting with a device is implemented in
+:class:`wlauto.common.AndroidDevice` and is exposed through ``generic_android``
+device interface, which should suffice for most purposes. The most common area
+where custom functionality may need to be implemented is during device
+initialization. Usually, once the device gets to the Android home screen, it's
+just like any other Android device (modulo things like differences between
+Android versions).
+
+If your device does not work with the ``generic_android`` interface and you need
+to write a custom interface to handle it, you would do that by subclassing
+``AndroidDevice`` and then just overriding the methods you need. Typically you
+will want to override one or more of the following:
+
+reset
+ Trigger a device reboot. The default implementation just sends ``adb
+ reboot`` to the device. If this command does not work, an alternative
+ implementation may need to be provided.
+
+hard_reset
+ This is a harsher reset that involves cutting the power to a device
+ (e.g. holding down power button or removing battery from a phone). The
+ default implementation is a no-op that just sets some internal flags. If
+ you're dealing with unreliable prototype hardware that can crash and
+ become unresponsive, you may want to implement this in order for WA to
+ be able to recover automatically.
+
+connect
+ When this method returns, adb connection to the device has been
+ established. This gets invoked after a reset. The default implementation
+ just waits for the device to appear in the adb list of connected
+ devices. If this is not enough (e.g. your device is connected via
+ Ethernet and requires an explicit ``adb connect`` call), you may wish to
+ override this to perform the necessary actions before invoking the
+    ``AndroidDevice``'s version.
+
+init
+    This gets called once at the beginning of the run, after the connection to
+    the device has been established. There is no default implementation.
+ It's there to allow whatever custom initialisation may need to be
+ performed for the device (setting properties, configuring services,
+ etc).
+
+Please refer to the API documentation for :class:`wlauto.common.AndroidDevice`
+for the full list of its methods and their functionality.
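+
+For illustration, here is a minimal sketch of a custom device interface (the
+device name, the ``adb_address`` parameter and the connection logic are made up
+for this example)::
+
+    import os
+
+    from wlauto import AndroidDevice, Parameter
+
+
+    class AcmeDevice(AndroidDevice):
+
+        name = 'acme'
+
+        parameters = [
+            Parameter('adb_address', default=None,
+                      description='IP:port to "adb connect" to before waiting for the device.'),
+        ]
+
+        def connect(self):
+            if self.adb_address:
+                # This board is reached over Ethernet, so an explicit
+                # "adb connect" is needed before the usual wait for the device.
+                os.system('adb connect {}'.format(self.adb_address))
+            super(AcmeDevice, self).connect()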
+
+
+Other Extension Types
+=====================
+
+In addition to the extension types covered above, there are a few other, more
+specialized ones. They will not be covered in as much detail. Most of them
+expose relatively simple interfaces with only a couple of methods and it is
+expected that if the need arises to extend them, the API-level documentation
+that accompanies them, in addition to what has been outlined here, should
+provide enough guidance.
+
+:commands: This allows extending WA with additional sub-commands (to supplement
+           existing ones outlined in the :ref:`invocation` section).
+:modules: Modules are "extensions for extensions". They can be loaded by other
+          extensions to expand their functionality (for example, a flashing
+          module may be loaded by a device in order to support flashing).
+
+
+Packaging Your Extensions
+=========================
+
+If you have written a bunch of extensions, and you want to make it easy to
+deploy them to new systems and/or to update them on existing systems, you can
+wrap them in a Python package. You can use the ``wa create package`` command to
+generate the appropriate boilerplate. This will create a ``setup.py`` and a
+directory for your package that you can place your extensions into.
+
+For example, if you have a workload inside ``my_workload.py`` and a result
+processor in ``my_result_processor.py``, and you want to package them as
+``my_wa_exts`` package, first run the create command ::
+
+ wa create package my_wa_exts
+
+This will create a ``my_wa_exts`` directory which contains a
+``my_wa_exts/setup.py`` and a subdirectory ``my_wa_exts/my_wa_exts`` which is
+the package directory for your extensions (you can rename the top-level
+``my_wa_exts`` directory to anything you like -- it's just a "container" for the
+setup.py and the package directory). Once you have that, you can then copy your
+extensions into the package directory, creating
+``my_wa_exts/my_wa_exts/my_workload.py`` and
+``my_wa_exts/my_wa_exts/my_result_processor.py``. If you have a lot of
+extensions, you might want to organize them into subpackages, but only the
+top-level package directory is created by default, and it is OK to have
+everything in there.
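+
+The resulting layout would look something like this (assuming ``wa create
+package`` also generated an ``__init__.py`` for the package directory; create
+one if it is not there)::
+
+    my_wa_exts/
+        setup.py
+        my_wa_exts/
+            __init__.py
+            my_workload.py
+            my_result_processor.py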
+
+.. note:: When discovering extensions through this mechanism, WA traverses the
+          Python module/submodule tree, not the directory structure. Therefore,
+          if you are going to create subdirectories under the top-level directory
+          created for you, it is important that you make sure they are valid
+          Python packages; i.e. each subdirectory must contain an ``__init__.py``
+          (even if blank) in order for the code in that directory and its
+          subdirectories to be discoverable.
+
+At this stage, you may want to edit the ``params`` structure near the bottom of
+the ``setup.py`` to add correct author, license and contact information (see
+"Writing the Setup Script" section in standard Python documentation for
+details). You may also want to add a README and/or a COPYING file at the same
+level as the setup.py. Once you have the contents of your package sorted,
+you can generate the package by running ::
+
+ cd my_wa_exts
+ python setup.py sdist
+
+This will generate the ``my_wa_exts/dist/my_wa_exts-0.0.1.tar.gz`` package which
+can then be deployed on the target system with standard Python package
+management tools, e.g. ::
+
+ sudo pip install my_wa_exts-0.0.1.tar.gz
+
+As part of the installation process, the setup.py in the package will write the
+package's name into ``~/.workload_automation/packages``. This will tell WA that
+the package contains extensions, and it will load them next time it runs.
+
+.. note:: There are no uninstall hooks in ``setuptools``, so if you ever
+          uninstall your WA extensions package, you will have to manually remove
+          it from ``~/.workload_automation/packages``, otherwise WA will complain
+          about a missing package next time you try to run it.