- TREC 2021
Deep Learning track web page
(See track web page for links to test and training corpora.)
NIST judgments for the Document Ranking task
NIST judgments for the Passage Ranking task
Modified NIST judgments for the Passage Ranking task. In this qrels file, all "1" (Related) judgments have been mapped to 0 because
related passages are not relevant. Use this qrels file with trec_eval for all measures
except NDCG variants; use the full qrels file with trec_eval for NDCG-based measures.
Note: Documents were judged on a four-point scale of Not Relevant (0), Relevant (1),
Highly Relevant (2), and Perfect (3). Levels 1--3 are considered to be relevant for
measures that use binary relevance judgments.
Passages were judged on a four-point scale of Not Relevant (0), Related (1),
Highly Relevant (2), and Perfect (3), where 'Related' is actually NOT
Relevant---it means that the passage was on the same general topic, but did not
answer the question. Thus, for Passage Ranking task runs (only), to
compute evaluation measures that use binary relevance judgments using trec_eval,
you either need to use trec_eval's -l option [trec_eval -l 2 qrelsfile runfile]
or modify the qrels file to change all 1 judgments to 0.
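As a sketch of the second option, the remapping can be done with a one-line awk command; the filenames below are placeholders, and the sample qrels uses the standard four-column TREC format (topic, iteration, docid, grade):

```shell
# Build a tiny sample qrels file for illustration (placeholder passage ids).
cat > qrels.sample <<'EOF'
1 0 pA 0
1 0 pB 1
1 0 pC 2
1 0 pD 3
EOF

# Map every grade-1 ("Related") judgment to 0; column 4 is the relevance grade.
awk '{ if ($4 == 1) $4 = 0; print }' qrels.sample > qrels.binary
cat qrels.binary

# Alternative, without modifying the qrels file: tell trec_eval to treat
# only grades >= 2 as relevant for its binary measures.
#   trec_eval -l 2 qrelsfile runfile
```

Either route yields the same binary measures; the modified-qrels route is only needed if your evaluation tooling does not support a relevance threshold.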
- Expanded Judgment Sets
After the conclusion of the TREC 2021 track, NIST made additional judgments for the
TREC 2021 evaluation topics to explore the quality of the test collection built from
the original track judgments. The exploration is described in a SIGIR 2022 paper
that is also posted here.
The following qrels are the union of the original track judgments and the judgments
made for the exploration. Remember that it is methodologically invalid to compare
scores computed using the original track judgments (such as the scores reported for
track submissions) to scores computed using the expanded judgments.
Expanded judgments for the Document Ranking task
Expanded judgments for the Passage Ranking task
Modified expanded judgments for the Passage Ranking task