TREC 2025 Proceedings
cru-ansR-bareconf-
Submission Details
- Organization
- HLTCOE
- Track
- RAG TREC Instrument for Multilingual Evaluation
- Task
- Report Generation Task
- Date
- 2025-08-20
Run Description
- Document collection
- English, Arabic, Chinese, and Russian subsets
- Machine translation of documents
- Yes, we used the organizer-provided machine translations
- Write a short description of your retrieval process
- See below
- Write a short description of your generation process
- Crucible@ragtime
Original run tag: strict-crucible-nugget-references-plaidx_svc-SupportedAnswerExtractorRequest
Question-answering prompt; relies only on extraction confidence.
Crucible report generation.
Guiding nuggets: plaidx_svc
Document source: nugget citations.
Nugget extraction prompt 'SupportedAnswerExtractorAll' on collection 'ragtime-mt'
LLM: llama3.3-70b-instruct
No sentence filtering with argue_eval.
Using abstractive summarization
Only retain sentences that have an extraction confidence value >= 0.5, are not already selected (according to a stopped-and-stemmed match), and do not contain the expression 'source document'.
For each nugget, among remaining sentence candidates, select the sentence with highest extraction confidence.
Truncate the report to 2000 characters.
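The sentence-selection steps above (confidence threshold, duplicate check via a stopped-and-stemmed match, 'source document' filter, per-nugget argmax, and truncation) can be sketched as follows. This is a minimal illustration, not the submitted code: the `Candidate` class, the `normalize` helper, the stopword list, and the naive prefix stemmer are all assumptions; only the 0.5 threshold, the filter conditions, and the 2000-character limit come from the run description.

```python
from dataclasses import dataclass

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in"}  # illustrative subset


def normalize(sentence: str) -> frozenset:
    """Crude stopped-and-stemmed key: lowercase, drop stopwords, chop suffixes."""
    tokens = [t.strip(".,;:!?\"'").lower() for t in sentence.split()]
    stems = [t[:6] for t in tokens if t and t not in STOPWORDS]  # naive stemming
    return frozenset(stems)


@dataclass
class Candidate:
    sentence: str
    confidence: float


def select_sentences(nuggets: dict[str, list[Candidate]],
                     threshold: float = 0.5,
                     max_chars: int = 2000) -> str:
    """For each nugget, keep candidates at or above the confidence threshold
    that are not duplicates of already-selected sentences and do not mention
    'source document'; pick the highest-confidence survivor; truncate report."""
    seen: set[frozenset] = set()
    selected: list[str] = []
    for nugget_id, candidates in nuggets.items():
        survivors = [c for c in candidates
                     if c.confidence >= threshold
                     and normalize(c.sentence) not in seen
                     and "source document" not in c.sentence.lower()]
        if not survivors:
            continue
        best = max(survivors, key=lambda c: c.confidence)
        seen.add(normalize(best.sentence))
        selected.append(best.sentence)
    return " ".join(selected)[:max_chars]
```

A nugget whose candidates all fail the filters simply contributes no sentence, which matches the description's per-nugget "among remaining sentence candidates" wording.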
Created on 2025-08-20
- Which LLM(s) were used by your system?
- Llama-3.3-70B Instruct (70B)
- Open repository link
- https://github.com/laura-dietz/scale25-crucible/releases/tag/ragtime25-submission
- Assessing priority
- 7
Evaluation Files
Paper