Text REtrieval Conference (TREC)
System Description

Organization Name: Technion - Israel Institute of Technology
Run ID: LARAg06pe5
Section 1.0 System Summary and Timing
Section 1.1 System Information
Hardware Model Used for TREC Experiment: Power Mac G5 Quad
System Use: SHARED
Total Amount of Hard Disk Storage: 233 GB
Total Amount of RAM: 4096 MB
Clock Rate of CPU: 2500 MHz
Section 1.2 System Comparisons
Amount of developmental "Software Engineering": NONE
List of features that are not present in the system, but would have been beneficial to have:
List of features that are present in the system, and impacted its performance, but are not detailed within this form:
Section 2.0 Construction of Indices, Knowledge Bases, and Other Data Structures
Length of the stopword list: 430 words
Type of Stemming: PORTER
Controlled Vocabulary: NO
Term weighting: YES
  • Additional Comments on term weighting: BM25
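The term weighting reported above is BM25. A minimal sketch of the standard Okapi BM25 scoring formula follows; the `k1` and `b` values are common illustrative defaults, not parameters reported for this run:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, df, n_docs, avg_dl, k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a query.

    df: document frequency per term; n_docs: corpus size;
    avg_dl: average document length in terms.
    k1 and b are illustrative defaults, not values from this run.
    """
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf or t not in df:
            continue
        # Robertson-Sparck Jones style IDF, floored to stay non-negative.
        idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
        # Saturating term-frequency component with document-length normalization.
        norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avg_dl))
        score += idf * norm
    return score
```

Documents that contain a query term score above zero, and longer documents are penalized through the `b`-controlled length normalization.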
Phrase discovery: NO
  • Kind of phrase:
  • Method used: OTHER
Type of Spelling Correction: NONE
Manually-Indexed Terms: NO
Proper Noun Identification: NO
Syntactic Parsing: NO
Tokenizer: NO
Word Sense Disambiguation: NO
Other technique: YES
Additional comments: Index of features generated for every document by a feature generator created from Wikipedia data. For more details, see the description in "Overcoming the Brittleness Bottleneck using Wikipedia: Enhancing Text Categorization with Encyclopedic Knowledge" by Gabrilovich and Markovitch in AAAI 2006.
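The Gabrilovich and Markovitch feature generator maps text onto weighted Wikipedia concepts. A hypothetical toy sketch of that idea, using a word-to-concept weighted inverted index (the real system builds TF-IDF weights over a full Wikipedia dump; the function names and the plain term-frequency weighting here are illustrative assumptions):

```python
from collections import Counter, defaultdict

def build_word_concept_index(wiki_articles):
    """wiki_articles: {concept_title: list of words}.

    Returns word -> {concept: weight} with simple normalized term-frequency
    weights. Assumption: the actual generator uses TF-IDF over Wikipedia.
    """
    index = defaultdict(dict)
    for concept, words in wiki_articles.items():
        for word, tf in Counter(words).items():
            index[word][concept] = tf / len(words)
    return index

def generate_features(text_words, index, top_k=10):
    """Sum concept weights over the text's words; keep the top_k concepts."""
    scores = Counter()
    for w in text_words:
        for concept, weight in index.get(w, {}).items():
            scores[concept] += weight
    return scores.most_common(top_k)
```

A document about dogs would thus be represented by concepts such as the "Dog" Wikipedia article rather than by its surface words alone.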
Section 3.0 Statistics on Data Structures Built from TREC Text
Section 3.1 First Data Structure
Structure Type: INVERTED INDEX
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: 6 GB
Total computer time to build: 120 hours
Automatic process: YES
Manual hours required: hours
Type of manual labor: NONE
Term positions used: NO
Only single terms used: NO
Concepts (vs. single terms) represented: YES
  • Number of concepts represented: over 330,000
Type of representation: mapping onto Wikipedia articles
Auxiliary files used: YES
  • Type of auxiliary files used: text classifier built from Wikipedia database dump
Additional comments: The corpus documents were fed to the auxiliary classifier, and the generated features were indexed in a standard inverted index. At retrieval time, features were generated for the query, and this index was queried.
Section 3.2 Second Data Structure
Structure Type: INVERTED INDEX
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: 22 GB
Total computer time to build: 43 hours
Automatic process: YES
Manual hours required: hours
Type of manual labor: NONE
Term positions used: YES
Only single terms used: YES
Concepts (vs. single terms) represented: NO
  • Number of concepts represented:
Type of representation:
Auxiliary files used: NO
  • Type of auxiliary files used:
Additional comments: A standard bag-of-words index of the corpus documents, used to build a conceptual model of the query. This index is queried with a standard bag-of-words approach, including query expansion; the top-ranked documents are processed to generate Wikipedia features, and the concepts shared by most of these documents form the query model. A query built from this model is then sent to the first inverted index (of Wikipedia concepts), and the final results are output.
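The two-index retrieval described above can be sketched as a small pipeline. All three callables below (`bow_search`, `feature_gen`, `concept_search`) are hypothetical stand-ins for the system's actual components, and the concept-voting scheme is a simplified reading of "concepts shared by most documents":

```python
from collections import Counter

def concept_query_model(query_words, bow_search, feature_gen, concept_search,
                        top_n=10, model_size=5):
    """Hypothetical sketch of the two-index retrieval pipeline.

    bow_search(words, k): top-k docs (as word lists) from the bag-of-words index.
    feature_gen(words): (concept, weight) pairs for a text.
    concept_search(concepts): final ranking from the Wikipedia-concept index.
    """
    top_docs = bow_search(query_words, top_n)
    # Count how many of the top documents share each generated concept.
    votes = Counter()
    for doc in top_docs:
        votes.update(set(c for c, _ in feature_gen(doc)))
    # The most widely shared concepts become the query model.
    query_model = [c for c, _ in votes.most_common(model_size)]
    return concept_search(query_model)
```

Pseudo-relevance feedback thus happens in word space, while the final ranking happens in concept space.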
Section 3.3 Third Data Structure
Structure Type:
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: Gb
Total computer time to build: hours
Automatic process:
Manual hours required: hours
Type of manual labor: NONE
Term positions used:
Only single terms used:
Concepts (vs. single terms) represented:
  • Number of concepts represented:
Type of representation:
Auxiliary files used:
  • Type of auxiliary files used:
Additional comments:
Section 4.0 Data Built from Sources Other than the Input Text
Internally-built Auxiliary File

File type: OTHER
Domain type: DOMAIN INDEPENDENT
Total Storage: 0.9 GB
Number of Concepts Represented: concepts
Type of representation:
Automatic or Manual:
  • Total Time to Build: hours
  • Total Time to Modify (if already built): hours
Type of Manual Labor used:
Additional comments:
Externally-built Auxiliary File

File is:
Total Storage: Gb
Number of Concepts Represented: concepts
Type of representation:
Additional comments:
Section 5.0 Computer Searching
Average computer time to search (per query): CPU seconds
Times broken down by component(s):
Section 5.1 Searching Methods
Vector space model:
Probabilistic model:
Cluster searching:
N-gram matching:
Boolean matching:
Fuzzy logic:
Free text scanning:
Neural networks:
Conceptual graph matching:
Other:
Additional comments:
Section 5.2 Factors in Ranking
Term frequency:
Inverse document frequency:
Other term weights:
Semantic closeness:
Position in document:
Syntactic clues:
Proximity of terms:
Information theoretic weights:
Document length:
Percentage of query terms which match:
N-gram frequency:
Word specificity:
Word sense frequency:
Cluster distance:
Other:
Additional comments:
Section 6.0 Query Construction
Section 6.1 Automatically Built Queries for Ad-hoc Tasks
Topic fields used:          
Average computer time to build query    CPU seconds
Term weighting (weights based on terms in topics):
Phrase extraction from topics:
Syntactic parsing of topics:
Word sense disambiguation:
Proper noun identification algorithm:
Tokenizer:
  • Patterns which were tokenized:
Expansion of queries using previously constructed data structures:
  • Comment:
Automatic addition of:
Section 6.2 Manually Constructed Queries for Ad-hoc Tasks
Topic fields used:        
Average time to build query?   minutes
Type of query builder:
Tool used to build query:
Method used in initial query construction?
  • If yes, what was the source of terms?
Total CPU time for all iterations:  seconds
Clock time from initial construction of query to completion of final query:   minutes
Average number of iterations:
Average number of documents examined per iteration:
Minimum number of iterations:
Maximum number of iterations:
The end of an iteration is determined by:
Automatic term reweighting from relevant documents:
Automatic query expansion from relevant documents:
  • Type of automatic query expansion:
Other automatic methods:
  • Other automatic methods included:
Manual methods used:
  • Type of manual method used:
Send questions to trec@nist.gov

Disclaimer: Contents of this online document are not necessarily the official views of, nor endorsed by the U.S. Government, the Department of Commerce, or NIST.