author     Kevin W Monroe <kevin.monroe@canonical.com>   2017-04-19 17:31:52 +0000
committer  Kevin W Monroe <kevin.monroe@canonical.com>   2017-04-25 15:42:34 -0500
commit     b5b366fe1e535b3549d3b1c563bdce65b0d21914 (patch)
tree       9a779006e22e636bf2461f339a140f6c9f0535d1 /bigtop-deploy
parent     2f8311b184bf0c5d25756b098895e43b1dbc3c2e (diff)
BIGTOP-2747: new charm revs for bigtop-1.2 (closes #197)
Signed-off-by: Kevin W Monroe <kevin.monroe@canonical.com>
Diffstat (limited to 'bigtop-deploy')
-rw-r--r--  bigtop-deploy/juju/hadoop-hbase/README.md            |  67
-rw-r--r--  bigtop-deploy/juju/hadoop-hbase/bundle.yaml           |  12
-rw-r--r--  bigtop-deploy/juju/hadoop-kafka/README.md             |  49
-rw-r--r--  bigtop-deploy/juju/hadoop-kafka/bundle.yaml           |  12
-rw-r--r--  bigtop-deploy/juju/hadoop-processing/README.md        |  45
-rw-r--r--  bigtop-deploy/juju/hadoop-processing/bundle.yaml      |   8
-rw-r--r--  bigtop-deploy/juju/hadoop-spark/README.md             | 134
-rw-r--r--  bigtop-deploy/juju/hadoop-spark/bundle.yaml           |  42
-rw-r--r--  bigtop-deploy/juju/hadoop-spark/tests/tests.yaml      |   8
-rw-r--r--  bigtop-deploy/juju/spark-processing/README.md         | 109
-rw-r--r--  bigtop-deploy/juju/spark-processing/bundle.yaml       |   4
-rw-r--r--  bigtop-deploy/juju/spark-processing/tests/tests.yaml  |   8
12 files changed, 200 insertions, 298 deletions
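
The bundle READMEs updated below all describe the same Juju 2.x deploy-and-verify
workflow; as a rough sketch of that flow (the model name and the choice of the
hadoop-processing bundle are illustrative assumptions, not part of this commit):

    # assumes a Juju 2.x client and an already bootstrapped controller
    juju add-model bigtop
    juju deploy hadoop-processing      # pulls the charm revisions pinned in bundle.yaml
    juju status                        # wait for all units to report ready
    juju run-action namenode/0 smoke-test
    watch -n 2 juju show-action-status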
diff --git a/bigtop-deploy/juju/hadoop-hbase/README.md b/bigtop-deploy/juju/hadoop-hbase/README.md
index b45bf7bc..28253277 100644
--- a/bigtop-deploy/juju/hadoop-hbase/README.md
+++ b/bigtop-deploy/juju/hadoop-hbase/README.md
@@ -26,14 +26,20 @@ to deliver high-availability, Hadoop can detect and handle failures at the
application layer. This provides a highly-available service on top of a cluster
of machines, each of which may be prone to failure.
-HBase is the Hadoop database. Think of it as a distributed, scalable Big Data
-store.
+Apache HBase is the Hadoop database. Think of it as a distributed, scalable
+Big Data store.
-This bundle provides a complete deployment of Hadoop and HBase components from
-[Apache Bigtop][] that performs distributed data processing at scale. Ganglia
-and rsyslog applications are also provided to monitor cluster health and syslog
-activity.
+Use HBase when you need random, realtime read/write access to your Big Data.
+This project's goal is the hosting of very large tables -- billions of rows X
+millions of columns -- atop clusters of commodity hardware. Learn more at
+[hbase.apache.org][].
+This bundle provides a complete deployment of Hadoop and HBase components
+from [Apache Bigtop][] that performs distributed data processing at scale.
+Ganglia and rsyslog applications are also provided to monitor cluster health
+and syslog activity.
+
+[hbase.apache.org]: http://hbase.apache.org/
[Apache Bigtop]: http://bigtop.apache.org/
## Bundle Composition
@@ -41,14 +47,14 @@ activity.
The applications that comprise this bundle are spread across 8 units as
follows:
- * NameNode (HDFS)
- * ResourceManager (YARN)
+ * NameNode v2.7.3
+ * ResourceManager v2.7.3
* Colocated on the NameNode unit
- * Zookeeper
+ * Zookeeper v3.4.6
* 3 separate units
- * Slave (DataNode and NodeManager)
+ * Slave (DataNode and NodeManager) v2.7.3
* 3 separate units
- * HBase
+ * HBase v1.1.9
* 3 units colocated with the Hadoop Slaves
* Client (Hadoop endpoint)
* Plugin (Facilitates communication with the Hadoop cluster)
@@ -65,9 +71,8 @@ demands.
# Deploying
-A working Juju installation is assumed to be present. If Juju is not yet set
-up, please follow the [getting-started][] instructions prior to deploying this
-bundle.
+This charm requires Juju 2.0 or greater. If Juju is not yet set up, please
+follow the [getting-started][] instructions prior to deploying this bundle.
> **Note**: This bundle requires hardware resources that may exceed limits
of Free-tier or Trial accounts on some clouds. To deploy to these
@@ -79,18 +84,10 @@ Deploy this bundle from the Juju charm store with the `juju deploy` command:
juju deploy hadoop-hbase
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-hadoop-hbase`.
-
Alternatively, deploy a locally modified `bundle.yaml` with:
juju deploy /path/to/bundle.yaml
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-/path/to/bundle.yaml`.
-
The charms in this bundle can also be built from their source layers in the
[Bigtop charm repository][]. See the [Bigtop charm README][] for instructions
on building and deploying these charms locally.
@@ -102,7 +99,6 @@ mirror options. See [Configuring Models][] for more information.
[getting-started]: https://jujucharms.com/docs/stable/getting-started
[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/hadoop-hbase/bundle.yaml
-[juju-quickstart]: https://launchpad.net/juju-quickstart
[Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
[Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
[Configuring Models]: https://jujucharms.com/docs/stable/models-config
@@ -138,25 +134,16 @@ complete. Run the smoke-test actions as follows:
juju run-action hbase/0 smoke-test
juju run-action zookeeper/0 smoke-test
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action do <application>/0 smoke-test`.
-
Watch the progress of the smoke test actions with:
watch -n 2 juju show-action-status
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action status`.
-
Eventually, all of the actions should settle to `status: completed`. If
any report `status: failed`, that application is not working as expected. Get
more information about a specific smoke test with:
juju show-action-output <action-id>
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action fetch <action-id>`.
-
## Utilities
Applications in this bundle include command line and web utilities that
can be used to verify information about the cluster.
@@ -314,6 +301,18 @@ Multiple units may be added at once. For example, add four more slave units:
juju add-unit -n4 slave
+# Issues
+
+Apache Bigtop tracks issues using JIRA (Apache account required). File an
+issue for this bundle at:
+
+https://issues.apache.org/jira/secure/CreateIssue!default.jspa
+
+Ensure `Bigtop` is selected as the project. Typically, bundle issues are filed
+in the `deployment` component with the latest stable release selected as the
+affected version. Any uncertain fields may be left blank.
+
+
# Contact Information
- <bigdata@lists.ubuntu.com>
@@ -324,6 +323,6 @@ Multiple units may be added at once. For example, add four more slave units:
- [Apache Bigtop home page](http://bigtop.apache.org/)
- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Big Data](https://jujucharms.com/big-data)
+- [Juju Bigtop charms](https://jujucharms.com/q/bigtop)
- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
-- [Juju community](https://jujucharms.com/community)
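
The README above points readers at [Configuring Models][] for proxy and mirror
settings. On a restricted network that generally means setting model configuration
before deploying; a hedged sketch, where the keys are standard Juju model settings
and the proxy URL is an illustrative placeholder:

    juju model-config http-proxy=http://squid.internal:3128 \
                      https-proxy=http://squid.internal:3128 \
                      apt-http-proxy=http://squid.internal:3128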
diff --git a/bigtop-deploy/juju/hadoop-hbase/bundle.yaml b/bigtop-deploy/juju/hadoop-hbase/bundle.yaml
index 9e3c823a..618b196a 100644
--- a/bigtop-deploy/juju/hadoop-hbase/bundle.yaml
+++ b/bigtop-deploy/juju/hadoop-hbase/bundle.yaml
@@ -1,6 +1,6 @@
services:
namenode:
- charm: "cs:xenial/hadoop-namenode-13"
+ charm: "cs:xenial/hadoop-namenode-14"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -9,7 +9,7 @@ services:
to:
- "0"
resourcemanager:
- charm: "cs:xenial/hadoop-resourcemanager-14"
+ charm: "cs:xenial/hadoop-resourcemanager-16"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -18,7 +18,7 @@ services:
to:
- "0"
slave:
- charm: "cs:xenial/hadoop-slave-13"
+ charm: "cs:xenial/hadoop-slave-15"
constraints: "mem=7G root-disk=32G"
num_units: 3
annotations:
@@ -29,7 +29,7 @@ services:
- "2"
- "3"
plugin:
- charm: "cs:xenial/hadoop-plugin-13"
+ charm: "cs:xenial/hadoop-plugin-14"
annotations:
gui-x: "1000"
gui-y: "400"
@@ -43,7 +43,7 @@ services:
to:
- "4"
hbase:
- charm: "cs:xenial/hbase-11"
+ charm: "cs:xenial/hbase-14"
constraints: "mem=7G root-disk=32G"
num_units: 3
annotations:
@@ -54,7 +54,7 @@ services:
- "2"
- "3"
zookeeper:
- charm: "cs:xenial/zookeeper-17"
+ charm: "cs:xenial/zookeeper-19"
constraints: "mem=3G root-disk=32G"
num_units: 3
annotations:
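
To try a charm revision other than the ones pinned in this bundle.yaml, the README's
"locally modified bundle.yaml" route applies; a sketch assuming a local checkout of
this repository (the target revision is illustrative):

    # edit the local copy, then deploy it directly
    sed -i 's|cs:xenial/hbase-14|cs:xenial/hbase-15|' bigtop-deploy/juju/hadoop-hbase/bundle.yaml
    juju deploy ./bigtop-deploy/juju/hadoop-hbase/bundle.yaml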
diff --git a/bigtop-deploy/juju/hadoop-kafka/README.md b/bigtop-deploy/juju/hadoop-kafka/README.md
index 4c7a5818..4caf7744 100644
--- a/bigtop-deploy/juju/hadoop-kafka/README.md
+++ b/bigtop-deploy/juju/hadoop-kafka/README.md
@@ -44,15 +44,15 @@ activity.
The applications that comprise this bundle are spread across 9 units as
follows:
- * NameNode (HDFS)
- * ResourceManager (YARN)
+ * NameNode v2.7.3
+ * ResourceManager v2.7.3
* Colocated on the NameNode unit
- * Slave (DataNode and NodeManager)
+ * Slave (DataNode and NodeManager) v2.7.3
* 3 separate units
- * Kafka
+ * Kafka v0.10.1
* Flume-Kafka
* Colocated on the Kafka unit
- * Zookeeper
+ * Zookeeper v3.4.6
* 3 separate units
* Client (Hadoop endpoint)
* Plugin (Facilitates communication with the Hadoop cluster)
@@ -76,9 +76,8 @@ demands.
# Deploying
-A working Juju installation is assumed to be present. If Juju is not yet set
-up, please follow the [getting-started][] instructions prior to deploying this
-bundle.
+This charm requires Juju 2.0 or greater. If Juju is not yet set up, please
+follow the [getting-started][] instructions prior to deploying this bundle.
> **Note**: This bundle requires hardware resources that may exceed limits
of Free-tier or Trial accounts on some clouds. To deploy to these
@@ -90,18 +89,10 @@ Deploy this bundle from the Juju charm store with the `juju deploy` command:
juju deploy hadoop-kafka
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-hadoop-kafka`.
-
Alternatively, deploy a locally modified `bundle.yaml` with:
juju deploy /path/to/bundle.yaml
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-/path/to/bundle.yaml`.
-
The charms in this bundle can also be built from their source layers in the
[Bigtop charm repository][]. See the [Bigtop charm README][] for instructions
on building and deploying these charms locally.
@@ -113,7 +104,6 @@ mirror options. See [Configuring Models][] for more information.
[getting-started]: https://jujucharms.com/docs/stable/getting-started
[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
-[juju-quickstart]: https://launchpad.net/juju-quickstart
[Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
[Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
[Configuring Models]: https://jujucharms.com/docs/stable/models-config
@@ -166,25 +156,16 @@ complete. Run the smoke-test actions as follows:
juju run-action kafka/0 smoke-test
juju run-action zookeeper/0 smoke-test
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action do <application>/0 smoke-test`.
-
Watch the progress of the smoke test actions with:
watch -n 2 juju show-action-status
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action status`.
-
Eventually, all of the actions should settle to `status: completed`. If
any report `status: failed`, that application is not working as expected. Get
more information about a specific smoke test with:
juju show-action-output <action-id>
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action fetch <action-id>`.
-
## Utilities
Applications in this bundle include command line and web utilities that
can be used to verify information about the cluster.
@@ -314,6 +295,18 @@ Multiple units may be added at once. For example, add four more slave units:
juju add-unit -n4 slave
+# Issues
+
+Apache Bigtop tracks issues using JIRA (Apache account required). File an
+issue for this bundle at:
+
+https://issues.apache.org/jira/secure/CreateIssue!default.jspa
+
+Ensure `Bigtop` is selected as the project. Typically, bundle issues are filed
+in the `deployment` component with the latest stable release selected as the
+affected version. Any uncertain fields may be left blank.
+
+
# Contact Information
- <bigdata@lists.ubuntu.com>
@@ -324,6 +317,6 @@ Multiple units may be added at once. For example, add four more slave units:
- [Apache Bigtop home page](http://bigtop.apache.org/)
- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Big Data](https://jujucharms.com/big-data)
+- [Juju Bigtop charms](https://jujucharms.com/q/bigtop)
- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
-- [Juju community](https://jujucharms.com/community)
diff --git a/bigtop-deploy/juju/hadoop-kafka/bundle.yaml b/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
index 948c8896..abf74bba 100644
--- a/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
+++ b/bigtop-deploy/juju/hadoop-kafka/bundle.yaml
@@ -1,6 +1,6 @@
services:
namenode:
- charm: "cs:xenial/hadoop-namenode-13"
+ charm: "cs:xenial/hadoop-namenode-14"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -9,7 +9,7 @@ services:
to:
- "0"
resourcemanager:
- charm: "cs:xenial/hadoop-resourcemanager-14"
+ charm: "cs:xenial/hadoop-resourcemanager-16"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -18,7 +18,7 @@ services:
to:
- "0"
slave:
- charm: "cs:xenial/hadoop-slave-13"
+ charm: "cs:xenial/hadoop-slave-15"
constraints: "mem=7G root-disk=32G"
num_units: 3
annotations:
@@ -29,7 +29,7 @@ services:
- "2"
- "3"
plugin:
- charm: "cs:xenial/hadoop-plugin-13"
+ charm: "cs:xenial/hadoop-plugin-14"
annotations:
gui-x: "1000"
gui-y: "400"
@@ -52,7 +52,7 @@ services:
to:
- "4"
zookeeper:
- charm: "cs:xenial/zookeeper-17"
+ charm: "cs:xenial/zookeeper-19"
constraints: "mem=3G root-disk=32G"
num_units: 3
annotations:
@@ -63,7 +63,7 @@ services:
- "6"
- "7"
kafka:
- charm: "cs:xenial/kafka-12"
+ charm: "cs:xenial/kafka-15"
constraints: "mem=3G"
num_units: 1
annotations:
diff --git a/bigtop-deploy/juju/hadoop-processing/README.md b/bigtop-deploy/juju/hadoop-processing/README.md
index 896a7936..9ad44b6a 100644
--- a/bigtop-deploy/juju/hadoop-processing/README.md
+++ b/bigtop-deploy/juju/hadoop-processing/README.md
@@ -38,10 +38,10 @@ and syslog activity.
The applications that comprise this bundle are spread across 5 machines as
follows:
- * NameNode (HDFS)
- * ResourceManager (YARN)
+ * NameNode v2.7.3
+ * ResourceManager v2.7.3
* Colocated on the NameNode unit
- * Slave (DataNode and NodeManager)
+ * Slave (DataNode and NodeManager) v2.7.3
* 3 separate units
* Client (Hadoop endpoint)
* Plugin (Facilitates communication with the Hadoop cluster)
@@ -57,9 +57,8 @@ demands.
# Deploying
-A working Juju installation is assumed to be present. If Juju is not yet set
-up, please follow the [getting-started][] instructions prior to deploying this
-bundle.
+This charm requires Juju 2.0 or greater. If Juju is not yet set up, please
+follow the [getting-started][] instructions prior to deploying this bundle.
> **Note**: This bundle requires hardware resources that may exceed limits
of Free-tier or Trial accounts on some clouds. To deploy to these
@@ -70,18 +69,10 @@ Deploy this bundle from the Juju charm store with the `juju deploy` command:
juju deploy hadoop-processing
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-hadoop-processing`.
-
Alternatively, deploy a locally modified `bundle.yaml` with:
juju deploy /path/to/bundle.yaml
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-/path/to/bundle.yaml`.
-
The charms in this bundle can also be built from their source layers in the
[Bigtop charm repository][]. See the [Bigtop charm README][] for instructions
on building and deploying these charms locally.
@@ -93,7 +84,6 @@ mirror options. See [Configuring Models][] for more information.
[getting-started]: https://jujucharms.com/docs/stable/getting-started
[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/hadoop-processing/bundle.yaml
-[juju-quickstart]: https://launchpad.net/juju-quickstart
[Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
[Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
[Configuring Models]: https://jujucharms.com/docs/stable/models-config
@@ -127,25 +117,16 @@ Run the smoke-test actions as follows:
juju run-action resourcemanager/0 smoke-test
juju run-action slave/0 smoke-test
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action do <application>/0 smoke-test`.
-
Watch the progress of the smoke test actions with:
watch -n 2 juju show-action-status
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action status`.
-
Eventually, all of the actions should settle to `status: completed`. If
any report `status: failed`, that application is not working as expected. Get
more information about a specific smoke test with:
juju show-action-output <action-id>
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action fetch <action-id>`.
-
## Utilities
Applications in this bundle include Hadoop command line and web utilities that
can be used to verify information about the cluster.
@@ -270,6 +251,18 @@ Multiple units may be added at once. For example, add four more slave units:
juju add-unit -n4 slave
+# Issues
+
+Apache Bigtop tracks issues using JIRA (Apache account required). File an
+issue for this bundle at:
+
+https://issues.apache.org/jira/secure/CreateIssue!default.jspa
+
+Ensure `Bigtop` is selected as the project. Typically, bundle issues are filed
+in the `deployment` component with the latest stable release selected as the
+affected version. Any uncertain fields may be left blank.
+
+
# Contact Information
- <bigdata@lists.ubuntu.com>
@@ -280,6 +273,6 @@ Multiple units may be added at once. For example, add four more slave units:
- [Apache Bigtop home page](http://bigtop.apache.org/)
- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Big Data](https://jujucharms.com/big-data)
+- [Juju Bigtop charms](https://jujucharms.com/q/bigtop)
- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
-- [Juju community](https://jujucharms.com/community)
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle.yaml b/bigtop-deploy/juju/hadoop-processing/bundle.yaml
index c4d8005e..ef5195ae 100644
--- a/bigtop-deploy/juju/hadoop-processing/bundle.yaml
+++ b/bigtop-deploy/juju/hadoop-processing/bundle.yaml
@@ -1,6 +1,6 @@
services:
namenode:
- charm: "cs:xenial/hadoop-namenode-13"
+ charm: "cs:xenial/hadoop-namenode-14"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -9,7 +9,7 @@ services:
to:
- "0"
resourcemanager:
- charm: "cs:xenial/hadoop-resourcemanager-14"
+ charm: "cs:xenial/hadoop-resourcemanager-16"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -18,7 +18,7 @@ services:
to:
- "0"
slave:
- charm: "cs:xenial/hadoop-slave-13"
+ charm: "cs:xenial/hadoop-slave-15"
constraints: "mem=7G root-disk=32G"
num_units: 3
annotations:
@@ -29,7 +29,7 @@ services:
- "2"
- "3"
plugin:
- charm: "cs:xenial/hadoop-plugin-13"
+ charm: "cs:xenial/hadoop-plugin-14"
annotations:
gui-x: "1000"
gui-y: "400"
diff --git a/bigtop-deploy/juju/hadoop-spark/README.md b/bigtop-deploy/juju/hadoop-spark/README.md
index cc956e9d..13905b3d 100644
--- a/bigtop-deploy/juju/hadoop-spark/README.md
+++ b/bigtop-deploy/juju/hadoop-spark/README.md
@@ -26,35 +26,36 @@ to deliver high-availability, Hadoop can detect and handle failures at the
application layer. This provides a highly-available service on top of a cluster
of machines, each of which may be prone to failure.
-Spark is a fast and general engine for large-scale data processing.
+Apache Spark is a fast and general engine for large-scale data processing.
+Learn more at [spark.apache.org][].
This bundle provides a complete deployment of Hadoop and Spark components from
[Apache Bigtop][] that performs distributed data processing at scale. Ganglia
and rsyslog applications are also provided to monitor cluster health and syslog
activity.
+[spark.apache.org]: http://spark.apache.org/
[Apache Bigtop]: http://bigtop.apache.org/
## Bundle Composition
-The applications that comprise this bundle are spread across 9 units as
+The applications that comprise this bundle are spread across 5 units as
follows:
- * NameNode (HDFS)
- * ResourceManager (YARN)
+ * NameNode v2.7.3
+ * ResourceManager v2.7.3
* Colocated on the NameNode unit
- * Slave (DataNode and NodeManager)
- * 3 separate units
- * Spark (Master in yarn-client mode)
- * Zookeeper
+ * Slave (DataNode and NodeManager) v2.7.3
* 3 separate units
+ * Spark (Driver in yarn-client mode) v2.1.0
* Client (Hadoop endpoint)
+ * Colocated on the Spark unit
* Plugin (Facilitates communication with the Hadoop cluster)
- * Colocated on the Spark and Client units
+ * Colocated on the Spark/Client unit
* Ganglia (Web interface for monitoring cluster metrics)
- * Colocated on the Client unit
+ * Colocated on the Spark/Client unit
* Rsyslog (Aggregate cluster syslog events in a single location)
- * Colocated on the Client unit
+ * Colocated on the Spark/Client unit
Deploying this bundle results in a fully configured Apache Bigtop
cluster on any supported cloud, which can be scaled to meet workload
@@ -63,9 +64,8 @@ demands.
# Deploying
-A working Juju installation is assumed to be present. If Juju is not yet set
-up, please follow the [getting-started][] instructions prior to deploying this
-bundle.
+This charm requires Juju 2.0 or greater. If Juju is not yet set up, please
+follow the [getting-started][] instructions prior to deploying this bundle.
> **Note**: This bundle requires hardware resources that may exceed limits
of Free-tier or Trial accounts on some clouds. To deploy to these
@@ -77,18 +77,10 @@ Deploy this bundle from the Juju charm store with the `juju deploy` command:
juju deploy hadoop-spark
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-hadoop-spark`.
-
Alternatively, deploy a locally modified `bundle.yaml` with:
juju deploy /path/to/bundle.yaml
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-/path/to/bundle.yaml`.
-
The charms in this bundle can also be built from their source layers in the
[Bigtop charm repository][]. See the [Bigtop charm README][] for instructions
on building and deploying these charms locally.
@@ -100,7 +92,6 @@ mirror options. See [Configuring Models][] for more information.
[getting-started]: https://jujucharms.com/docs/stable/getting-started
[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/hadoop-spark/bundle.yaml
-[juju-quickstart]: https://launchpad.net/juju-quickstart
[Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
[Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
[Configuring Models]: https://jujucharms.com/docs/stable/models-config
@@ -124,8 +115,8 @@ Once they all indicate that they are ready, perform application smoke tests
to verify that the bundle is working as expected.
## Smoke Test
-The charms for each core component (namenode, resourcemanager, slave, spark,
-and zookeeper) provide a `smoke-test` action that can be used to verify the
+The charms for each core component (namenode, resourcemanager, slave, and
+spark) provide a `smoke-test` action that can be used to verify the
application is functioning as expected. Note that the 'slave' component runs
extensive tests provided by Apache Bigtop and may take up to 30 minutes to
complete. Run the smoke-test actions as follows:
@@ -134,27 +125,17 @@ complete. Run the smoke-test actions as follows:
juju run-action resourcemanager/0 smoke-test
juju run-action slave/0 smoke-test
juju run-action spark/0 smoke-test
- juju run-action zookeeper/0 smoke-test
-
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action do <application>/0 smoke-test`.
Watch the progress of the smoke test actions with:
watch -n 2 juju show-action-status
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action status`.
-
Eventually, all of the actions should settle to `status: completed`. If
any report `status: failed`, that application is not working as expected. Get
more information about a specific smoke test with:
juju show-action-output <action-id>
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action fetch <action-id>`.
-
## Utilities
Applications in this bundle include command line and web utilities that
can be used to verify information about the cluster.
@@ -165,10 +146,6 @@ of YARN NodeManager units with the following:
juju run --application namenode "su hdfs -c 'hdfs dfsadmin -report'"
juju run --application resourcemanager "su yarn -c 'yarn node -list'"
-Show the list of Zookeeper nodes with the following:
-
- juju run --unit zookeeper/0 'echo "ls /" | /usr/lib/zookeeper/bin/zkCli.sh'
-
To access the HDFS web console, find the `PUBLIC-ADDRESS` of the namenode
application and expose it:
@@ -204,7 +181,7 @@ The web interface will be available at the following URL:
# Monitoring
This bundle includes Ganglia for system-level monitoring of the namenode,
-resourcemanager, slave, spark, and zookeeper units. Metrics are sent to a
+resourcemanager, slave, and spark units. Metrics are sent to a
centralized ganglia unit for easy viewing in a browser. To view the ganglia web
interface, find the `PUBLIC-ADDRESS` of the Ganglia application and expose it:
@@ -219,7 +196,7 @@ The web interface will be available at:
# Logging
This bundle includes rsyslog to collect syslog data from the namenode,
-resourcemanager, slave, spark, and zookeeper units. These logs are sent to a
+resourcemanager, slave, and spark units. These logs are sent to a
centralized rsyslog unit for easy syslog analysis. One method of viewing this
log data is to simply cat syslog from the rsyslog unit:
@@ -278,27 +255,17 @@ run with `juju run-action`:
enqueued: 2016-02-04 14:55:14 +0000 UTC
started: 2016-02-04 14:55:27 +0000 UTC
-The `spark` charm in this bundle also provides several benchmarks to gauge
-the performance of the Spark cluster. Each benchmark is an action that can be
-run with `juju run-action`:
+The `spark` charm in this bundle provides benchmarks to gauge the performance
+of the Spark/YARN cluster. Each benchmark is an action that can be run with
+`juju run-action`:
- $ juju actions spark | grep Bench
- connectedcomponent Run the Spark Bench ConnectedComponent benchmark.
- decisiontree Run the Spark Bench DecisionTree benchmark.
- kmeans Run the Spark Bench KMeans benchmark.
- linearregression Run the Spark Bench LinearRegression benchmark.
- logisticregression Run the Spark Bench LogisticRegression benchmark.
- matrixfactorization Run the Spark Bench MatrixFactorization benchmark.
- pagerank Run the Spark Bench PageRank benchmark.
- pca Run the Spark Bench PCA benchmark.
- pregeloperation Run the Spark Bench PregelOperation benchmark.
- shortestpaths Run the Spark Bench ShortestPaths benchmark.
- sql Run the Spark Bench SQL benchmark.
- stronglyconnectedcomponent Run the Spark Bench StronglyConnectedComponent benchmark.
- svdplusplus Run the Spark Bench SVDPlusPlus benchmark.
- svm Run the Spark Bench SVM benchmark.
-
- $ juju run-action spark/0 svdplusplus
+ $ juju actions spark
+ ...
+ pagerank Calculate PageRank for a sample data set
+ sparkpi Calculate Pi
+ ...
+
+ $ juju run-action spark/0 pagerank
Action queued with id: 339cec1f-e903-4ee7-85ca-876fb0c3d28e
$ juju show-action-output 339cec1f-e903-4ee7-85ca-876fb0c3d28e
@@ -307,40 +274,41 @@ run with `juju run-action`:
composite:
direction: asc
units: secs
- value: "200.754000"
- raw: |
- SVDPlusPlus,2016-11-02-03:08:26,200.754000,85.974071,.428255,0,SVDPlusPlus-MLlibConfig,,,,,10,,,50000,4.0,1.3,
- start: 2016-11-02T03:08:26Z
- stop: 2016-11-02T03:11:47Z
- results:
- duration:
- direction: asc
- units: secs
- value: "200.754000"
- throughput:
- direction: desc
- units: MB/sec
- value: ".428255"
+ value: "83"
+ start: 2017-04-12T23:22:38Z
+ stop: 2017-04-12T23:24:01Z
+ output: '{''status'': ''completed''}'
status: completed
timing:
- completed: 2016-11-02 03:11:48 +0000 UTC
- enqueued: 2016-11-02 03:08:21 +0000 UTC
- started: 2016-11-02 03:08:26 +0000 UTC
+ completed: 2017-04-12 23:24:02 +0000 UTC
+ enqueued: 2017-04-12 23:22:36 +0000 UTC
+ started: 2017-04-12 23:22:37 +0000 UTC
# Scaling
-By default, three Hadoop slave and three zookeeper units are deployed. Scaling
-these applications is as simple as adding more units. To add one unit:
+By default, three Hadoop slave units are deployed. Scaling these is as simple
+as adding more units. To add one unit:
juju add-unit slave
- juju add-unit zookeeper
Multiple units may be added at once. For example, add four more slave units:
juju add-unit -n4 slave
+# Issues
+
+Apache Bigtop tracks issues using JIRA (Apache account required). File an
+issue for this bundle at:
+
+https://issues.apache.org/jira/secure/CreateIssue!default.jspa
+
+Ensure `Bigtop` is selected as the project. Typically, bundle issues are filed
+in the `deployment` component with the latest stable release selected as the
+affected version. Any uncertain fields may be left blank.
+
+
# Contact Information
- <bigdata@lists.ubuntu.com>
@@ -351,6 +319,6 @@ Multiple units may be added at once. For example, add four more slave units:
- [Apache Bigtop home page](http://bigtop.apache.org/)
- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Big Data](https://jujucharms.com/big-data)
+- [Juju Bigtop charms](https://jujucharms.com/q/bigtop)
- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
-- [Juju community](https://jujucharms.com/community)
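
With the driver running in yarn-client mode, ad-hoc jobs are submitted from the
colocated spark/client unit; a sketch of running the stock SparkPi example (the
examples jar path is an assumption and may differ on a given Bigtop install):

    juju ssh spark/0
    spark-submit --master yarn --deploy-mode client \
        --class org.apache.spark.examples.SparkPi \
        /usr/lib/spark/examples/jars/spark-examples*.jar 10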
diff --git a/bigtop-deploy/juju/hadoop-spark/bundle.yaml b/bigtop-deploy/juju/hadoop-spark/bundle.yaml
index 500ae78a..06114957 100644
--- a/bigtop-deploy/juju/hadoop-spark/bundle.yaml
+++ b/bigtop-deploy/juju/hadoop-spark/bundle.yaml
@@ -1,6 +1,6 @@
services:
namenode:
- charm: "cs:xenial/hadoop-namenode-13"
+ charm: "cs:xenial/hadoop-namenode-14"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -9,7 +9,7 @@ services:
to:
- "0"
resourcemanager:
- charm: "cs:xenial/hadoop-resourcemanager-14"
+ charm: "cs:xenial/hadoop-resourcemanager-16"
constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
@@ -18,7 +18,7 @@ services:
to:
- "0"
slave:
- charm: "cs:xenial/hadoop-slave-13"
+ charm: "cs:xenial/hadoop-slave-15"
constraints: "mem=7G root-disk=32G"
num_units: 3
annotations:
@@ -29,13 +29,13 @@ services:
- "2"
- "3"
plugin:
- charm: "cs:xenial/hadoop-plugin-13"
+ charm: "cs:xenial/hadoop-plugin-14"
annotations:
gui-x: "1000"
gui-y: "400"
client:
charm: "cs:xenial/hadoop-client-3"
- constraints: "mem=3G"
+ constraints: "mem=7G root-disk=32G"
num_units: 1
annotations:
gui-x: "1250"
@@ -43,7 +43,7 @@ services:
to:
- "4"
spark:
- charm: "cs:xenial/spark-31"
+ charm: "cs:xenial/spark-34"
constraints: "mem=7G root-disk=32G"
num_units: 1
options:
@@ -52,18 +52,7 @@ services:
gui-x: "1000"
gui-y: "0"
to:
- - "5"
- zookeeper:
- charm: "cs:xenial/zookeeper-17"
- constraints: "mem=3G root-disk=32G"
- num_units: 3
- annotations:
- gui-x: "500"
- gui-y: "400"
- to:
- - "6"
- - "7"
- - "8"
+ - "4"
ganglia:
charm: "cs:xenial/ganglia-12"
num_units: 1
@@ -97,20 +86,17 @@ relations:
- [resourcemanager, slave]
- [plugin, namenode]
- [plugin, resourcemanager]
- - [client, plugin]
- [spark, plugin]
- - [spark, zookeeper]
+ - [client, plugin]
- ["ganglia-node:juju-info", "namenode:juju-info"]
- ["ganglia-node:juju-info", "resourcemanager:juju-info"]
- ["ganglia-node:juju-info", "slave:juju-info"]
- ["ganglia-node:juju-info", "spark:juju-info"]
- - ["ganglia-node:juju-info", "zookeeper:juju-info"]
- ["ganglia:node", "ganglia-node:node"]
- ["rsyslog-forwarder-ha:juju-info", "namenode:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "resourcemanager:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "slave:juju-info"]
- ["rsyslog-forwarder-ha:juju-info", "spark:juju-info"]
- - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
- ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
machines:
"0":
@@ -127,16 +113,4 @@ machines:
constraints: "mem=7G root-disk=32G"
"4":
series: "xenial"
- constraints: "mem=3G"
- "5":
- series: "xenial"
constraints: "mem=7G root-disk=32G"
- "6":
- series: "xenial"
- constraints: "mem=3G root-disk=32G"
- "7":
- series: "xenial"
- constraints: "mem=3G root-disk=32G"
- "8":
- series: "xenial"
- constraints: "mem=3G root-disk=32G"
diff --git a/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml b/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
index b9517421..cb745df7 100644
--- a/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
+++ b/bigtop-deploy/juju/hadoop-spark/tests/tests.yaml
@@ -5,9 +5,15 @@ sources:
packages:
- amulet
- python3-yaml
-# exclude tests that are unrelated to bigtop.
+# exclude tests that are unrelated to bigtop. the exclusion of spark might
+# look weird here, but for this bundle, we only care that spark is good in
+# yarn mode (covered by this bundle when we invoke the spark smoke-test). the
+# typical spark tests will test spark once in standalone and twice more in
+# various HA modes. that takes forever, so leave those heavy tests for the
+# spark-processing bundle. let's go fast on this one.
excludes:
- ganglia
- ganglia-node
- rsyslog
- rsyslog-forwarder-ha
+ - spark
diff --git a/bigtop-deploy/juju/spark-processing/README.md b/bigtop-deploy/juju/spark-processing/README.md
index c39fa2c8..a499a384 100644
--- a/bigtop-deploy/juju/spark-processing/README.md
+++ b/bigtop-deploy/juju/spark-processing/README.md
@@ -16,12 +16,14 @@
-->
# Overview
-This bundle provides a complete deployment of
-[Apache Spark][] in standalone HA mode as provided
-by [Apache Bigtop][]. Ganglia and rsyslog
+Apache Spark is a fast and general engine for large-scale data processing.
+Learn more at [spark.apache.org][].
+
+This bundle provides a complete deployment of Spark (in standalone HA mode)
+and Apache Zookeeper components from [Apache Bigtop][]. Ganglia and rsyslog
applications are included to monitor cluster health and syslog activity.
-[Apache Spark]: http://spark/apache.org/
+[spark.apache.org]: http://spark.apache.org/
[Apache Bigtop]: http://bigtop.apache.org/
## Bundle Composition
@@ -29,9 +31,9 @@ applications are included to monitor cluster health and syslog activity.
The applications that comprise this bundle are spread across 6 units as
follows:
- * Spark (Master and Worker)
+ * Spark (Master and Worker) v2.1.0
* 2 separate units
- * Zookeeper
+ * Zookeeper v3.4.6
* 3 separate units
* Ganglia (Web interface for monitoring cluster metrics)
* Rsyslog (Aggregate cluster syslog events in a single location)
@@ -44,9 +46,8 @@ demands.
# Deploying
-A working Juju installation is assumed to be present. If Juju is not yet set
-up, please follow the [getting-started][] instructions prior to deploying this
-bundle.
+This charm requires Juju 2.0 or greater. If Juju is not yet set up, please
+follow the [getting-started][] instructions prior to deploying this bundle.
> **Note**: This bundle requires hardware resources that may exceed limits
of Free-tier or Trial accounts on some clouds. To deploy to these
@@ -58,18 +59,10 @@ Deploy this bundle from the Juju charm store with the `juju deploy` command:
juju deploy spark-processing
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-spark-processing`.
-
Alternatively, deploy a locally modified `bundle.yaml` with:
juju deploy /path/to/bundle.yaml
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
-/path/to/bundle.yaml`.
-
The charms in this bundle can also be built from their source layers in the
[Bigtop charm repository][]. See the [Bigtop charm README][] for instructions
on building and deploying these charms locally.
@@ -81,7 +74,6 @@ mirror options. See [Configuring Models][] for more information.
[getting-started]: https://jujucharms.com/docs/stable/getting-started
[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/spark-processing/bundle.yaml
-[juju-quickstart]: https://launchpad.net/juju-quickstart
[Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
[Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
[Configuring Models]: https://jujucharms.com/docs/stable/models-config
@@ -112,25 +104,16 @@ actions as follows:
juju run-action spark/0 smoke-test
juju run-action zookeeper/0 smoke-test
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action do <application>/0 smoke-test`.
-
Watch the progress of the smoke test actions with:
watch -n 2 juju show-action-status
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action status`.
-
Eventually, all of the actions should settle to `status: completed`. If
any report `status: failed`, that application is not working as expected. Get
more information about the smoke-test action
juju show-action-output <action-id>
-> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action fetch <action-id>`.
-
## Utilities
Applications in this bundle include Zookeeper command line and Spark web
utilities that can be used to verify information about the cluster.
@@ -181,27 +164,17 @@ the [rsyslog README](https://jujucharms.com/rsyslog/) for more information.
# Benchmarking
-The `spark` charm in this bundle provides several benchmarks to gauge
-the performance of the Spark cluster. Each benchmark is an action that can be
-run with `juju run-action`:
-
- $ juju actions spark | grep Bench
- connectedcomponent Run the Spark Bench ConnectedComponent benchmark.
- decisiontree Run the Spark Bench DecisionTree benchmark.
- kmeans Run the Spark Bench KMeans benchmark.
- linearregression Run the Spark Bench LinearRegression benchmark.
- logisticregression Run the Spark Bench LogisticRegression benchmark.
- matrixfactorization Run the Spark Bench MatrixFactorization benchmark.
- pagerank Run the Spark Bench PageRank benchmark.
- pca Run the Spark Bench PCA benchmark.
- pregeloperation Run the Spark Bench PregelOperation benchmark.
- shortestpaths Run the Spark Bench ShortestPaths benchmark.
- sql Run the Spark Bench SQL benchmark.
- stronglyconnectedcomponent Run the Spark Bench StronglyConnectedComponent benchmark.
- svdplusplus Run the Spark Bench SVDPlusPlus benchmark.
- svm Run the Spark Bench SVM benchmark.
-
- $ juju run-action spark/0 svdplusplus
+The `spark` charm in this bundle provides benchmarks to gauge the performance
+of the Spark cluster. Each benchmark is an action that can be run with
+`juju run-action`:
+
+ $ juju actions spark
+ ...
+ pagerank Calculate PageRank for a sample data set
+ sparkpi Calculate Pi
+ ...
+
+ $ juju run-action spark/0 pagerank
Action queued with id: 339cec1f-e903-4ee7-85ca-876fb0c3d28e
$ juju show-action-output 339cec1f-e903-4ee7-85ca-876fb0c3d28e
@@ -210,25 +183,15 @@ run with `juju run-action`:
composite:
direction: asc
units: secs
- value: "200.754000"
- raw: |
- SVDPlusPlus,2016-11-02-03:08:26,200.754000,85.974071,.428255,0,SVDPlusPlus-MLlibConfig,,,,,10,,,50000,4.0,1.3,
- start: 2016-11-02T03:08:26Z
- stop: 2016-11-02T03:11:47Z
- results:
- duration:
- direction: asc
- units: secs
- value: "200.754000"
- throughput:
- direction: desc
- units: x/sec
- value: ".428255"
+ value: "83"
+ start: 2017-04-12T23:22:38Z
+ stop: 2017-04-12T23:24:01Z
+ output: '{''status'': ''completed''}'
status: completed
timing:
- completed: 2016-11-02 03:11:48 +0000 UTC
- enqueued: 2016-11-02 03:08:21 +0000 UTC
- started: 2016-11-02 03:08:26 +0000 UTC
+ completed: 2017-04-12 23:24:02 +0000 UTC
+ enqueued: 2017-04-12 23:22:36 +0000 UTC
+ started: 2017-04-12 23:22:37 +0000 UTC
# Scaling
@@ -244,6 +207,18 @@ Multiple units may be added at once. For example, add four more spark units:
juju add-unit -n4 spark
+# Issues
+
+Apache Bigtop tracks issues using JIRA (Apache account required). File an
+issue for this bundle at:
+
+https://issues.apache.org/jira/secure/CreateIssue!default.jspa
+
+Ensure `Bigtop` is selected as the project. Typically, bundle issues are filed
+in the `deployment` component with the latest stable release selected as the
+affected version. Any uncertain fields may be left blank.
+
+
# Contact Information
- <bigdata@lists.ubuntu.com>
@@ -254,6 +229,6 @@ Multiple units may be added at once. For example, add four more spark units:
- [Apache Bigtop home page](http://bigtop.apache.org/)
- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Big Data](https://jujucharms.com/big-data)
+- [Juju Bigtop charms](https://jujucharms.com/q/bigtop)
- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
-- [Juju community](https://jujucharms.com/community)
diff --git a/bigtop-deploy/juju/spark-processing/bundle.yaml b/bigtop-deploy/juju/spark-processing/bundle.yaml
index bb79d945..f114ebfe 100644
--- a/bigtop-deploy/juju/spark-processing/bundle.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle.yaml
@@ -1,6 +1,6 @@
services:
spark:
- charm: "cs:xenial/spark-31"
+ charm: "cs:xenial/spark-34"
constraints: "mem=7G root-disk=32G"
num_units: 2
options:
@@ -13,7 +13,7 @@ services:
- "0"
- "1"
zookeeper:
- charm: "cs:xenial/zookeeper-17"
+ charm: "cs:xenial/zookeeper-19"
constraints: "mem=3G root-disk=32G"
num_units: 3
annotations:
diff --git a/bigtop-deploy/juju/spark-processing/tests/tests.yaml b/bigtop-deploy/juju/spark-processing/tests/tests.yaml
index e4b472ea..b9517421 100644
--- a/bigtop-deploy/juju/spark-processing/tests/tests.yaml
+++ b/bigtop-deploy/juju/spark-processing/tests/tests.yaml
@@ -5,15 +5,9 @@ sources:
packages:
- amulet
- python3-yaml
-# exclude tests that are unrelated to bigtop. the exclusion of spark might
-# look weird here, but for this bundle, we only care that spark is good in
-# HA mode (covered by this bundle when we invoke the spark smoke-test). the
-# typical spark tests will test spark once in standalone and twice more in
-# various HA modes. that takes forever, so leave those heavy tests for the
-# hadoop-spark bundle. let's go fast on this one.
+# exclude tests that are unrelated to bigtop.
excludes:
- ganglia
- ganglia-node
- rsyslog
- rsyslog-forwarder-ha
- - spark