path: root/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range
author    Boaz Leskes <b.leskes@gmail.com>  2016-06-27 15:05:06 +0200
committer GitHub <noreply@github.com>       2016-06-27 15:05:06 +0200
commit    cb0824e957dcb7a20e8450f97283874eb61b9abe (patch)
tree      c2d840d84cf11ab3a77d2631cf7ff342287cfe71 /core/src/main/java/org/elasticsearch/search/aggregations/bucket/range
parent    1863ab95f862294db7d105717656bd6452eef9d9 (diff)
Make shard store fetch less dependent on the current cluster state, both on master and non data nodes (#19044)
#18938 changed the timing with which we send out requests to nodes to fetch their shard stores. Instead of doing this after the cluster state resulting from the node's join was published, #18938 made the requests be sent concurrently with the publishing process. This revealed a couple of points where shard store fetching depends on the current state of affairs in the cluster state, both on the master and on the data nodes. The problems discovered were already present without #18938, but required failures or extreme situations to surface.

This PR tries to remove as many of these dependencies as possible, making shard store fetching simpler and paving the way to re-introduce #18938, which was reverted. These are the notable changes:

1) Allow TransportNodesAction (from which shard store fetching is derived) callers to supply concrete disco nodes, so it won't need the cluster state to resolve them (see the illustrative sketch after this message). This was a problem because the cluster state containing the needed nodes was not yet made available through ClusterService. Note that, long term, we can expect the REST layer to resolve node ids to concrete nodes, making this mode the only one needed.
2) The data node relied on the cluster state to have the relevant index meta data so it could find data when custom paths are used. We now fall back to reading the meta data from disk if needed.
3) The data node was relying on its own IndexService state to indicate whether the data it has corresponds to an existing allocation. This is of course something it cannot know until it has received (and processed) the new cluster state from the master. This flag in the response is now removed. This is not a problem because the flag was used to protect against double assigning of a shard to the same node, and the allocation deciders already protect against that.
4) Remove the redundant filterNodeIds method in TransportNodesAction - callers that want to filter can override resolveRequest.
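Point 1 above is easiest to picture as a request object that can carry the already-resolved nodes directly. The sketch below is illustrative only: the class name NodesStoreFilesRequest and the setConcreteNodes/concreteNodes accessors are hypothetical stand-ins, not the exact API added by this commit.

import org.elasticsearch.cluster.node.DiscoveryNode;

// Hypothetical nodes-level request: when concrete DiscoveryNodes are supplied,
// the transport action can skip resolving node ids against a cluster state
// that may not have been published yet.
public class NodesStoreFilesRequest {

    private DiscoveryNode[] concreteNodes;

    public NodesStoreFilesRequest setConcreteNodes(DiscoveryNode[] nodes) {
        this.concreteNodes = nodes;
        return this;
    }

    public DiscoveryNode[] concreteNodes() {
        return concreteNodes;
    }
}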
Diffstat (limited to 'core/src/main/java/org/elasticsearch/search/aggregations/bucket/range')
-rw-r--r--  core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java
index 0121da2fa9..13d10bd0a0 100644
--- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java
+++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/range/AbstractRangeBuilder.java
@@ -20,8 +20,8 @@
package org.elasticsearch.search.aggregations.bucket.range;
import org.elasticsearch.common.io.stream.StreamInput;
-import org.elasticsearch.common.io.stream.StreamInputReader;
import org.elasticsearch.common.io.stream.StreamOutput;
+import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.search.aggregations.bucket.range.RangeAggregator.Range;
import org.elasticsearch.search.aggregations.support.ValuesSource;
@@ -47,7 +47,7 @@ public abstract class AbstractRangeBuilder<AB extends AbstractRangeBuilder<AB, R
/**
* Read from a stream.
*/
- protected AbstractRangeBuilder(StreamInput in, InternalRange.Factory<?, ?> rangeFactory, StreamInputReader<R> rangeReader)
+ protected AbstractRangeBuilder(StreamInput in, InternalRange.Factory<?, ?> rangeFactory, Writeable.Reader<R> rangeReader)
throws IOException {
super(in, rangeFactory.type(), rangeFactory.getValueSourceType(), rangeFactory.getValueType());
this.rangeFactory = rangeFactory;
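For context on the one-line signature change above: Writeable.Reader<R> in Elasticsearch is a functional interface whose read method takes a StreamInput, so a subclass can pass a stream-constructor reference (for example Range::new) as the rangeReader argument. The class below is a hedged, minimal sketch of that pattern using a hypothetical range-like value; it is not the actual RangeAggregator.Range implementation.

import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

// Hypothetical range-like value illustrating the Writeable.Reader pattern.
public class ExampleRange implements Writeable {

    private final double from;
    private final double to;

    public ExampleRange(double from, double to) {
        this.from = from;
        this.to = to;
    }

    // Stream constructor: exactly the shape a Writeable.Reader<ExampleRange>
    // such as ExampleRange::new can refer to.
    public ExampleRange(StreamInput in) throws IOException {
        this.from = in.readDouble();
        this.to = in.readDouble();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeDouble(from);
        out.writeDouble(to);
    }
}

A builder constructed from a stream can then invoke rangeReader.read(in) for each serialized range it needs to rebuild.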