TREC 2017 Tracks Homepage


Common Core Track

This is a new track for TREC 2017. The track will serve as a common task for a wide spectrum of IR researchers, thus attracting a diverse run set that can be used to investigate new methodologies for test collection construction.
Track coordinators:
Evangelos Kanoulas, University of Amsterdam
James Allan, University of Massachusetts
Donna Harman, NIST
Track Web Page:
trec-core.github.io/2017/
Mailing list:
Google group, name: trec-core

50 test topics that will be assessed by NIST assessors
Judgments produced by NIST assessors (standard TREC qrels format; a parsing sketch follows this list)
Per-topic summary statistics for automatic runs
Per-topic summary statistics for manual runs
Per-topic summary statistics for automatic runs that made use of existing relevance judgments
Per-topic summary statistics for automatic runs that did not make use of existing relevance judgments
Per-topic summary statistics for manual runs that did not make use of existing relevance judgments
Table of the total number of unique relevant documents retrieved per group
List of unique relevant documents found per topic
Number of teams that retrieved a given relevant document
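TREC relevance judgments are conventionally distributed in the qrels format: one judgment per line, given as whitespace-separated topic ID, iteration field (usually 0), document ID, and relevance grade. A minimal parsing sketch in Python, assuming that conventional layout; the filename is a placeholder for the judgments file released above:

    from collections import defaultdict

    def load_qrels(path):
        """Parse a TREC qrels file into {topic: {docno: relevance}}."""
        qrels = defaultdict(dict)
        with open(path) as f:
            for line in f:
                if line.strip():
                    topic, _iteration, docno, rel = line.split()
                    qrels[topic][docno] = int(rel)
        return qrels

    # Placeholder filename; substitute the judgments file released above.
    judgments = load_qrels("core2017.qrels")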


Complex Answer Retrieval Track

This is a new track for TREC 2017. Its focus is on developing systems that are capable of answering complex information needs by collating relevant information from an entire corpus.
Track coordinators:
Laura Dietz, University of New Hampshire
Manisha Verma, University College London
Filip Radlinski, Google
Nick Craswell, Microsoft
Track Web Page:
trec-car.cs.unh.edu
Mailing list:
Google group, name: trec-car


Dynamic Domain Track

This track focuses on domain-specific search algorithms that adapt to the dynamic information needs of professional users as they explore complex domains.
Track coordinators:
Grace Hui Yang, Georgetown University
Ian Soboroff, NIST
Track Web Page:
trec-dd.org/
Mailing list:
Google group, name: trec-dd

To obtain Dynamic Domain track collections from earlier years, complete the TREC DD Usage Agreement and return it to Angela Ellis at NIST. (Maintain the individual agreements for users within your organization.)
dynamic-domain-2016-truth-data.xml.gz
cubetest-qrels-2016.gz

2017 Truth Data Note: This year the only domain is the New York Times Annotated Corpus, as per the guidelines. Note that the truth data file pads the document IDs out to seven digits to match the numbers in the corpus filenames; a short padding sketch follows the link below.
2017 DD track submission form
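As the truth data note above says, document IDs are zero-padded to seven digits to match the corpus filenames. A minimal sketch of that mapping in Python, assuming the raw IDs are plain integers:

    def to_corpus_id(doc_id):
        """Zero-pad a NYT document ID to seven digits, matching the
        numbering used in the Annotated Corpus filenames."""
        return str(int(doc_id)).zfill(7)

    assert to_corpus_id(4321) == "0004321"
    assert to_corpus_id("0004321") == "0004321"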


Live QA Track

In this track, systems generate answers in real time to questions posed by real users, delivered via a live question stream.
Track coordinators:
Eugene Agichtein, Emory University
Asma Ben Abacha, National Institutes of Health
Donna Harman, NIST
Eric Nyberg, Carnegie Mellon University
Yuval Pinter, Georgia Institute of Technology
Track Web Page:
trec-liveqa.org
Track Mailing List:
Subscribe to the mailing list by sending an email message to [email protected] and following the instructions in the automated response.
Description of the data released here
Mapping between Yahoo question ids and NIST-assigned ids
Primary assessor's relevance judgments by response string
Secondary assessors' relevance judgments by response string
Primary assessor's paraphrase of the question
Multiple assessors' paraphrases of the questions

Medical questions with their reference answers
Relevance judgments by response string for medical questions
Assessors' paraphrases of the medical questions


Open Search Track

The OpenSearch track explores the "Living Labs" evaluation paradigm for IR, which involves real users of operational search engines. The track's task continues and expands the ad hoc Academic Search task of TREC 2016.
Track coordinators:
Krisztian Balog, University of Stavanger
Maarten de Rijke, University of Amsterdam
Anne Schuth, Blendle
Track Web Page:
trec-open-search.org
Mailing list:
Google group, name: trec-open-search

Precision Medicine Track

This track is a specialization of the Clinical Decision Support track of previous TRECs. It will focus on building systems that use patient data (e.g., past medical history and genomic information) to link oncology patients both to clinical trials testing new treatments and to evidence-based literature identifying the most effective existing treatments.
Track coordinators:
Kirk Roberts, University of Texas Health Science Center
Dina Demner-Fushman, U.S. National Library of Medicine
Ellen Voorhees, NIST
William Hersh, Oregon Health and Science University
Shubham Pant, University of Texas MD Anderson Cancer Center
Track Web Page:
www.trec-cds.org/
Mailing list:
Google group, name: trec-cds

TREC 2014 instructions given to document assessors

TREC 2017 Precision Medicine track submission form
evaluation script for computing inferred measures
final relevance judgments suitable for use with sample-eval (Scientific Abstracts task, including topic 12)
final Scientific Abstracts task relevance judgments suitable for use with trec-eval (includes topic 12; a scoring sketch follows this list)
final Clinical Trials task relevance judgments suitable for use with trec-eval (includes topic 12)
final per-topic summary statistics for Scientific Abstracts runs (includes topic 12)
final per-topic summary statistics for Clinical Trials runs (includes topic 12)
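The trec-eval judgment files above are meant for the standard trec_eval tool. A minimal sketch of driving it from Python; the qrels and run filenames here are placeholders, so substitute the files released above and your own run:

    import subprocess

    # Placeholder filenames; use the released qrels and your submitted run.
    QRELS = "abstracts-qrels.txt"
    RUN = "myrun.txt"

    # -q adds per-topic scores to the summary over all topics.
    result = subprocess.run(
        ["trec_eval", "-q", QRELS, RUN],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)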


Real-Time Summarization Track

The Real-Time Summarization (RTS) track explores techniques for constructing real-time update summaries from social media streams in response to users' information needs.
Track coordinators:
Jimmy Lin, University of Waterloo
Adam Roegiest, University of Waterloo
Luchen Tan, University of Waterloo
Richard McCreadie, University of Glasgow
Track Web Page:
trecrts.github.io
Mailing list:
Google group, name: trec-rts

Evaluation script for Batch evaluation, Scenario A
Evaluation script for Batch evaluation, Scenario B
Clusters used in batch evaluation
Relevance judgments for batch evaluation
Tweets Epoch for batch evaluation
Evaluation script for mobile evaluation
Relevance judgments for mobile evaluation
Tweets Epoch for mobile evaluation


Tasks Track

The goal of the Tasks track is to test whether systems can induce the possible tasks users might be trying to accomplish given a query.
Track coordinators:
Emine Yilmaz, University College London
Rishabh Mehrotra, University College London
Ben Carterette, University of Delaware
Evangelos Kanoulas, University of Amsterdam
Nick Craswell, Microsoft
Peter Bailey, Microsoft
Track Web Page:
www.cs.ucl.ac.uk/tasks-track-2017/
Mailing list:
Google group, name: tasks-track

TREC 2017 Test Queries
README file for using an ElasticSearch service to access the Clueweb collection
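For participants using that service, a minimal query sketch with the official Elasticsearch Python client; the endpoint, index, and field names below are placeholders, so take the actual values from the README above:

    from elasticsearch import Elasticsearch

    # Placeholder endpoint and index name; the README gives the real ones.
    es = Elasticsearch("http://localhost:9200")

    response = es.search(
        index="clueweb",
        body={"query": {"match": {"body": "air travel information"}},
              "size": 10},
    )
    for hit in response["hits"]["hits"]:
        print(hit["_id"], hit["_score"])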




The TREC conference series is co-sponsored by the Retrieval Group of the Information Access Division (IAD) in the NIST Information Technology Laboratory (ITL).
Contact us at: trec (at) nist.gov


