Text REtrieval Conference (TREC)
Organization Name: Rutgers AntWorld Project (Kantor)
Run ID: AntHoc1

Section 1.0 System Summary and Timing

Section 1.1 System Information
Hardware Model Used for TREC Experiment: PCs and a Sparc
System Use: SHARED
Total Amount of Hard Disk Storage: 20 GB
Total Amount of RAM: 500 MB
Clock Rate of CPU: 500 MHz

Section 1.2 System Comparisons
Amount of developmental "Software Engineering": SOME
List of features that are not present in the system, but would have been beneficial to have: no positional information; no n-gram indexing
List of features that are present in the system, and impacted its performance, but are not detailed within this form:
Section 2.0 Construction of Indices, Knowledge Bases, and Other Data Structures
Length of the stopword list: 429 words
Type of Stemming: PORTER
Controlled Vocabulary: NO
Term weighting: YES
Phrase discovery: NO
Type of Spelling Correction: NONE
Manually-Indexed Terms: NO
Proper Noun Identification: NO
Syntactic Parsing: NO
Tokenizer: NO
Word Sense Disambiguation: NO
Other technique: YES
Additional comments: Best individual terms are found by a greedy algorithm; a linear discriminator with topic-independent weights is formed (the weight of a term is determined solely by its rank in the greedy term-discovery process).
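The greedy, rank-weighted term selection above can be sketched as follows. This is an illustrative reconstruction, not the AntWorld code: the separation score and the 1/rank weighting rule are assumptions; the form states only that terms are chosen greedily and that a term's weight depends solely on its selection rank.

```python
# Hypothetical sketch of rank-weighted greedy term discovery.
# Documents are modeled as sets of terms.

def separation(term, relevant, nonrelevant):
    """Assumed score: fraction of relevant documents containing the term
    minus the fraction of non-relevant documents containing it."""
    p = sum(term in d for d in relevant) / len(relevant)
    q = sum(term in d for d in nonrelevant) / len(nonrelevant)
    return p - q

def greedy_select(vocab, relevant, nonrelevant, k=20):
    """Pick k terms one at a time by separation score; each selected term's
    weight is determined only by its rank in the selection order."""
    pool = set(vocab)
    weights = {}
    for rank in range(min(k, len(pool))):
        best = max(pool, key=lambda t: separation(t, relevant, nonrelevant))
        pool.discard(best)
        weights[best] = 1.0 / (rank + 1)  # topic-independent, rank-based weight
    return weights
```

Because the weight is a pure function of rank, the same discriminator weights can be reused across topics, matching the "topic-independent weights" remark in the form.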
Section 3.0 Statistics on Data Structures Built from TREC Text

Section 3.1 First Data Structure
Structure Type: INVERTED INDEX
Type of other data structure used: Boolean representation after cut-point selection
Brief description of method using other data structure: Greedy cut-point determination for terms
Total storage used: 15.23 GB
Total computer time to build: 70 hours
Automatic process: YES
Manual hours required: hours
Type of manual labor: NONE
Term positions used: NO
Only single terms used: YES
Concepts (vs. single terms) represented: NO
Type of representation:
Auxiliary files used: NO
Additional comments: We ignored narratives. Title texts were given additional weight.
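The "Boolean representation after cut-point selection" can be sketched as an inverted index in which a document is posted for a term only when its term frequency reaches that term's cut point. A minimal sketch, assuming a simple median-frequency cut rule (the actual greedy cut-point criterion is not specified in the form):

```python
# Illustrative sketch: Boolean inverted index after per-term frequency
# cut points. The median-based cut rule is an assumption for illustration.
from collections import Counter, defaultdict
import statistics

def build_index(docs):
    tfs = [Counter(d.split()) for d in docs]
    # Assumed cut rule: median term frequency across documents containing the term.
    cut = {}
    for term in {t for tf in tfs for t in tf}:
        freqs = [tf[term] for tf in tfs if term in tf]
        cut[term] = statistics.median(freqs)
    # Post a document for a term only when its frequency reaches the cut point,
    # yielding a Boolean (present/absent) representation.
    index = defaultdict(set)
    for doc_id, tf in enumerate(tfs):
        for term, f in tf.items():
            if f >= cut[term]:
                index[term].add(doc_id)
    return index, cut
```

Thresholding at a cut point rather than storing raw frequencies keeps the postings Boolean, which is consistent with the form's note that term positions are not used and only single terms are represented.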
Section 3.2 Second Data Structure
Additional comments: No second data structure was used.
Section 3.3 Third Data Structure
Additional comments: No third data structure was used.
Section 4.0 Data Built from Sources Other than the Input Text
File type: NONE
Domain type: DOMAIN INDEPENDENT
Type of representation: NONE
Type of Manual Labor used: NONE
Additional comments:
Section 5.0 Computer Searching
Average computer time to search (per query): 10 CPU seconds
Times broken down by component(s): 20% first retrieval; 10% building training sets; 70% learning, scoring, and producing ranked lists.
Section 5.1 Searching Methods
Vector space model: YES
Probabilistic model: NO
Cluster searching: NO
N-gram matching: NO
Boolean matching: NO
Fuzzy logic: NO
Free text scanning: NO
Neural networks: NO
Conceptual graph matching: NO
Other: YES
Additional comments: A vector-space retrieval draws 11 documents based on the title-and-description bag of words; these are assumed relevant. 500 random documents from the AP collection are assumed not relevant. A greedy algorithm selects terms by their ability to separate these two sets. An effective query is then formed from the selected terms, weighted by their rank in the greedy selection process.
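The pseudo-relevance setup described above can be sketched end to end: a vector-space pass supplies 11 assumed-relevant documents, and 500 random AP documents serve as the assumed-non-relevant set for the subsequent greedy term selection. The cosine scoring and all helper names below are assumptions, not the actual AntWorld implementation.

```python
# Hypothetical sketch of building the two training sets from a first-pass
# vector-space retrieval. Documents are whitespace-tokenized bags of words.
import math
import random
from collections import Counter

def cosine(q, d):
    """Cosine similarity between two term-frequency bags."""
    num = sum(q[t] * d.get(t, 0) for t in q)
    den = math.sqrt(sum(v * v for v in q.values())) * \
          math.sqrt(sum(v * v for v in d.values()))
    return num / den if den else 0.0

def build_training_sets(query_terms, collection, ap_collection,
                        n_rel=11, n_nonrel=500):
    """Top n_rel vector-space hits are assumed relevant; n_nonrel random
    AP documents are assumed not relevant."""
    q = Counter(query_terms)  # bag of words from title + description
    tfs = [Counter(d.split()) for d in collection]
    ranked = sorted(range(len(tfs)), key=lambda i: cosine(q, tfs[i]),
                    reverse=True)
    relevant = [tfs[i] for i in ranked[:n_rel]]
    nonrelevant = [Counter(d.split())
                   for d in random.sample(ap_collection,
                                          min(n_nonrel, len(ap_collection)))]
    return relevant, nonrelevant
```

The two sets returned here are exactly the inputs the form's greedy term-selection step needs; no human relevance judgments are used at any point.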
Section 5.2 Factors in Ranking
Term frequency:
Inverse document frequency:
Other term weights:
Semantic closeness:
Position in document:
Syntactic clues:
Proximity of terms:
Information theoretic weights:
Document length:
Percentage of query terms which match:
N-gram frequency:
Word specificity:
Word sense frequency:
Cluster distance:
Other:
Additional comments:
Section 6.0 Query Construction

Section 6.1 Automatically Built Queries for Ad-hoc Tasks
Topic fields used:
Average computer time to build query: CPU seconds
Term weighting (weights based on terms in topics):
Phrase extraction from topics:
Syntactic parsing of topics:
Word sense disambiguation:
Proper noun identification algorithm:
Tokenizer:
Expansion of queries using previously constructed data structures:
Automatic addition of:
Section 6.2 Manually Constructed Queries for Ad-hoc Tasks
Topic fields used:
Average time to build query: minutes
Type of query builder:
Tool used to build query:
Method used in initial query construction:
Total CPU time for all iterations: seconds
Clock time from initial construction of query to completion of final query: minutes
Average number of iterations:
Average number of documents examined per iteration:
Minimum number of iterations:
Maximum number of iterations:
The end of an iteration is determined by:
Automatic term reweighting from relevant documents:
Automatic query expansion from relevant documents:
Other automatic methods:
Manual methods used:
Disclaimer: Contents of this online document are not necessarily the official views of, nor endorsed by, the U.S. Government, the Department of Commerce, or NIST.