Data - English Relevance Judgements


Relevance judgments, or the "right answers", are a vital part of a test collection. TREC uses the following working definition of relevance: If you were writing a report on the subject of the topic and would use the information contained in the document in the report, then the document is relevant. Only binary judgments ("relevant" or "not relevant") are made, and a document is judged relevant if any piece of it is relevant (regardless of how small the piece is in relation to the rest of the document).

Judging is done using a pooling technique (described in the Overview papers in the TREC proceedings) on the set of documents used for the task that year. The relevance judgments are considered "complete" for that particular set of documents. By "complete" we mean that enough results have been assembled and judged to assume that most relevant documents have been found. When using these judgments to evaluate your own retrieval runs, it is very important to make sure the document collection and qrels match. Retrieval runs can be evaluated using the trec_eval program (trec_eval_latest.tar.gz).
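As an illustration of how a run is scored against the qrels, the sketch below parses the standard four-field qrels format (topic, iteration, document number, relevance) and computes precision at a cutoff for one topic. This is a simplified stand-in for trec_eval, which computes many more measures; the document numbers shown are hypothetical examples.

```python
# Minimal sketch of reading qrels lines and scoring one ranked run.
# This is NOT trec_eval; it illustrates only precision at a cutoff.

def load_qrels(lines):
    """Parse qrels lines of the form 'topic iteration docno relevance'.
    Returns a nested dict: {topic: {docno: relevance}}."""
    qrels = {}
    for line in lines:
        topic, _iteration, docno, rel = line.split()
        qrels.setdefault(topic, {})[docno] = int(rel)
    return qrels

def precision_at_k(qrels, topic, ranked_docnos, k=10):
    """Fraction of the top-k retrieved documents judged relevant.
    Judgments are binary: any relevance value > 0 counts as relevant;
    unjudged documents are treated as not relevant."""
    judged = qrels.get(topic, {})
    hits = sum(1 for d in ranked_docnos[:k] if judged.get(d, 0) > 0)
    return hits / k

# Hypothetical qrels lines and a hypothetical two-document run:
qrels = load_qrels([
    "301 0 FBIS3-10082 1",
    "301 0 FBIS3-10169 0",
])
print(precision_at_k(qrels, "301", ["FBIS3-10082", "FBIS3-10169"], k=2))
# 0.5
```

Note that this treats unjudged documents as not relevant, which matches the standard assumption when a pooled qrels set is considered "complete."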

The relevance judgments ("qrels") have been divided into files based on the TREC task they resulted from, and (sometimes) further divided into parts to make downloading easier.

The "topics" or questions for which these assessments were made are available on the Data - English Test Questions (Topics) page. The documents themselves are covered by intellectual property agreements. The English collections may be purchased separately (see the Data - English Documents page).

Last updated: Monday, 15-Apr-2019 08:29:27 MDT
Date created: Tuesday, 01-Aug-00
trec@nist.gov