Sunday, November 15, 2009

Numeric Range Queries - A comparison

Problem:

RangeQuery has long been problematic for Lucene. Internally, it expands into an OR of TermQueries, one for each entry in the term table whose value falls within the specified range.

When the range covers many terms, this approach is bounded by BooleanQuery.getMaxClauseCount(); exceed it and a TooManyClauses runtime exception is thrown.
And it can be really slow!

Solutions:

This blog examines the available alternatives and provides some performance analysis. (Without loss of generality, we will look at range query handling only on integer values, though the approaches discussed support all numeric types.)

NumericRangeQuery:
As part of the recent Lucene 2.9.x release, Uwe Schindler introduced NumericRangeQuery, which aims to solve this problem (well documented in the javadoc: http://lucene.apache.org/java/2_9_1/api/core/index.html). I will not do the algorithm any injustice by trying to explain its details here.
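For reference, constructing such a query is a one-liner. A minimal sketch, assuming NUMERIC_FIELD is the same field constant used in the index-structure snippet below and was indexed with precisionStep 4:

// Matches documents whose NUMERIC_FIELD value lies in [0, 10000],
// using the same precisionStep (4) the field was indexed with.
Query q = NumericRangeQuery.newIntRange(NUMERIC_FIELD, 4, 0, 10000, true, true);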

FieldCacheQuery:
There is, however, another approach using the FieldCache:

1) obtain the int[] from FieldCache.getInts()
2) iterate through each element and collect it as a hit if it falls within the specified range.

Code snippet:

// Uses org.apache.lucene.search.{Filter, DocIdSet, DocIdSetIterator,
// ConstantScoreQuery, FieldCache}, org.apache.lucene.index.IndexReader,
// and java.io.IOException.
static Query buildFieldCacheQuery(final int start, final int end) {
  Filter f = new Filter() {
    @Override
    public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
      // Load (or fetch from cache) the per-document values for this field.
      final int[] data = FieldCache.DEFAULT.getInts(reader, FIELDCACHE_FIELD);
      return new DocIdSet() {
        @Override
        public DocIdSetIterator iterator() throws IOException {
          return new DocIdSetIterator() {
            int docid = -1;

            @Override
            public int advance(int target) throws IOException {
              docid = target - 1;
              return nextDoc();
            }

            @Override
            public int docID() {
              return docid;
            }

            @Override
            public int nextDoc() throws IOException {
              docid++;
              // Scan forward until a value falls inside [start, end].
              while (docid < data.length) {
                int val = data[docid];
                if (val >= start && val <= end) {
                  return docid;
                }
                docid++;
              }
              return DocIdSetIterator.NO_MORE_DOCS;
            }
          };
        }
      };
    }
  };
  return new ConstantScoreQuery(f);
}
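For completeness, a hypothetical usage of this query (assuming an open IndexReader on the test index):

// Hypothetical usage: top 10 hits whose FIELDCACHE_FIELD value lies in [0, 10000].
IndexSearcher searcher = new IndexSearcher(reader);
TopDocs hits = searcher.search(buildFieldCacheQuery(0, 10000), 10);
System.out.println(hits.totalHits + " docs in range");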


Comparison:

Index structure:

NumericField numericField = new NumericField(NUMERIC_FIELD, 4); // precisionStep = 4
numericField.setIntValue(n);
doc.add(numericField);

Field fieldCacheField = new Field(FIELDCACHE_FIELD, String.valueOf(n), Store.NO, Index.NOT_ANALYZED_NO_NORMS);
fieldCacheField.setOmitTermFreqAndPositions(true);
doc.add(fieldCacheField);

Field rangeField = new Field(RANGE_FIELD, format.format(n), Store.NO, Index.NOT_ANALYZED_NO_NORMS);
rangeField.setOmitTermFreqAndPositions(true);
doc.add(rangeField);
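For context, a minimal sketch of the indexing loop these snippets live in; the writer setup, numDocs, and the zero-padded format are my assumptions, not the exact test code:

// Sketch of the surrounding indexing loop; details here are assumptions.
IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(),
    IndexWriter.MaxFieldLength.UNLIMITED);
NumberFormat format = new DecimalFormat("0000000000"); // zero-pad so term order matches numeric order
for (int n = 0; n < numDocs; n++) {
  Document doc = new Document();
  // ... add numericField, fieldCacheField and rangeField as shown above ...
  writer.addDocument(doc);
}
writer.close();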


Following are the results:


JVM settings: -server -Xms1g -Xmx1g

The test measures different range lengths relative to the possible values in the index, e.g. Range 1% corresponds to the range [0 - 10000] on an index of 1M docs.

Index Size - 1M docs:

Range %    RangeQuery    NumericRangeQuery    FieldCacheRangeQuery
1%         202 ms        1 ms                 1 ms
5%         2047 ms       3 ms                 2 ms
20%        NA            9 ms                 5 ms
50%        NA            17 ms                9 ms
100%       NA            26 ms                9 ms


Index Size - 5M docs (RangeQuery is no longer measured; no point beating a dead horse):

Range %    NumericRangeQuery    FieldCacheRangeQuery
1%         6 ms                 8 ms
5%         15 ms                11 ms
20%        38 ms                27 ms
50%        75 ms                47 ms
100%       128 ms               43 ms


Index Size - 10M docs:

Range %    NumericRangeQuery    FieldCacheRangeQuery
1%         10 ms                16 ms
5%         28 ms                23 ms
20%        84 ms                53 ms
50%        153 ms               97 ms
100%       249 ms               92 ms


Conclusion & Observations:
  • Don't use RangeQuery! (Good thing it is deprecated in Lucene 2.9.x.)
  • As the index grows, for small ranges NumericRangeQuery slows down rather gracefully (less than linearly), while FieldCacheQuery slows down linearly.
  • As the index grows, for large ranges NumericRangeQuery slows down almost linearly, while FieldCacheQuery plateaus once the range reaches about 50%.
  • If you expect your range to cover >= 5% of the corpus, FieldCacheQuery is faster.
  • The FieldCacheQuery code snippet above can easily be adapted to Lucene 2.4.x for those of you who have not upgraded.
  • The FieldCacheQuery idea applies equally to non-numeric fields, e.g. ranges over text terms (see the sketch after this list).
  • FieldCacheQuery incurs a one-time cost at index load (to populate the FieldCache), but that cost is necessary anyway if you want to do sorting.
  • If you want to do range queries on a really large index, consider sharding your index.
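To illustrate the non-numeric point above, here is a rough sketch using FieldCache.getStringIndex(), where the lexicographic range test reduces to an integer comparison on term ordinals. The field name and bounds are hypothetical, and one term per document is assumed:

// Rough sketch: text range via FieldCache.StringIndex (one term per doc assumed).
// si.order[doc] is the ordinal of doc's term within the sorted si.lookup array.
final FieldCache.StringIndex si = FieldCache.DEFAULT.getStringIndex(reader, "title");
String lowerTerm = "apple", upperTerm = "banana"; // hypothetical bounds
int lowerOrd = 0, upperOrd = 0;
for (int ord = 1; ord < si.lookup.length; ord++) { // ord 0 means "no term"; lookup is sorted, so a binary search would also work
  if (lowerOrd == 0 && si.lookup[ord].compareTo(lowerTerm) >= 0) lowerOrd = ord;
  if (si.lookup[ord].compareTo(upperTerm) <= 0) upperOrd = ord;
}
// doc is a hit iff lowerOrd > 0 && lowerOrd <= si.order[doc] && si.order[doc] <= upperOrd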
Don't believe my numbers? Check out the source code at http://code.google.com/p/lucene-book/source/checkout and run the test book.lid.example.range.RangeTest yourself.

5 comments:

  1. Hi John,

the FieldCache approach is also available in Lucene trunk, you do not need to implement it yourself: FieldCacheRangeFilter, which also supports all types, including text terms.

The problems with the FieldCache approach, in addition to the longer loading time, are:

- only one value per document is supported (a limitation of FieldCache, which was made for sorting)

- FCRF gets slower when deleted docs are involved and the range contains 0 (because FCRF cannot distinguish a deleted doc from a doc that actually has a 0 value in the FieldCache)

    The numbers are similar to mine on https://issues.apache.org/jira/browse/LUCENE-1461

    Have you also tried other precSteps for comparison?

    Uwe

  2. Hi Uwe:

    Re: the FieldCache approach is also available in Lucene trunk

    Great!

    Re: only one value per document supported

    True. However, in both Solr and Bobo, multivalued fieldcaches are implemented. This approach can still be applied.

    Re: FCRF gets slower, when deleted docs are involved and the range contains 0

True. Deletes are indeed tricky; checking each hit against TermDocs is definitely slow. It would be nice if the deleted docset were exposed at the segment-reader level; that way this could be solved rather easily and efficiently.

Having said that, one could, at index-warming time, iterate AllTermDocs once and write the deleted docids to a DocIdSet; at query time, just do a parallel walk. I should probably try that.

    Thanks for the pointer to the Jira issue, lotsa good information there!

I have not tried other precSteps, as the documentation recommends 4 for integers. Is that not the case?

    -John

The default precStep in Lucene is 4, you are right. I just wanted to compare other precSteps such as 2, 6, or 8. The javadocs include some discussion of the differences and pros/cons (lower precSteps need more seeks in the TermEnum).
    I agree with you that you can raise performance by doing everything in memory, but not everybody wants that. I have, e.g., 13 numeric fields, and keeping all of them in the FieldCache or in memory is not an option (especially the loading time), particularly if you do not want to sort on them. So NumericRangeQuery is a good option.

  4. Thanks for the comment, Uwe!

13 numeric fields for a shard of, say, 10 million docs is 520 MB (which can be compressed further), so it may be feasible in some applications.

I agree about the loading time, but often the index is loaded in the background or "warmed" before being published, and that is done only once. This practice is common in many applications and is actually suggested by the Solr/Lucene documentation.

  5. very valuable information on Numeric Queries.

    Thanks
