Text REtrieval Conference (TREC)
System Description

Organization Name: Siemens AG
Run ID: teklis7
Section 1.0 System Summary and Timing
Section 1.1 System Information
Hardware Model Used for TREC Experiment: PC Pentium Pro
System Use: SHARED
Total Amount of Hard Disk Storage: 9 Gb
Total Amount of RAM: 128 MB
Clock Rate of CPU: 200 MHz
Section 1.2 System Comparisons
Amount of developmental "Software Engineering": SOME
List of features that are not present in the system, but would have been beneficial to have:
  • usage of topic descriptions for learning
  • categorization of paragraphs
  • word context for word sense disambiguation
List of features that are present in the system, and impacted its performance, but are not detailed within this form:
Section 2.0 Construction of Indices, Knowledge Bases, and Other Data Structures
Length of the stopword list: 0 words
Type of Stemming: MORPHOLOGICAL
Controlled Vocabulary: NO
Term weighting: YES
  • Additional Comments on term weighting: correlation measure
Phrase discovery: NO
  • Kind of phrase:
  • Method used: OTHER
Type of Spelling Correction: NONE
Manually-Indexed Terms:
Proper Noun Identification:
Syntactic Parsing:
Tokenizer: YES
Word Sense Disambiguation:
Other technique: YES
Additional comments: The other technique is statistical word pair recognition.
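The form does not describe how the statistical word pair recognition works. As an illustrative sketch only (the method, thresholds, and pointwise mutual information scoring below are our assumptions, not teklis's documented algorithm), adjacent word pairs that co-occur more often than chance can be detected like this:

```python
import math
from collections import Counter

def find_word_pairs(tokens, min_count=2, min_pmi=1.0):
    """Flag adjacent word pairs that co-occur more often than chance,
    scored with pointwise mutual information (PMI). Illustrative
    stand-in for the unspecified statistical pair recognition."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    pairs = {}
    for (w1, w2), count in bigrams.items():
        if count < min_count:
            continue
        # PMI = log( P(w1,w2) / (P(w1) * P(w2)) )
        pmi = math.log((count / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        if pmi >= min_pmi:
            pairs[(w1, w2)] = pmi
    return pairs

tokens = "new york is big and new york is busy and big is not new".split()
print(sorted(find_word_pairs(tokens)))
```

Pairs below `min_count` are dropped first so that rare one-off collocations do not receive inflated PMI scores.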
Section 3.0 Statistics on Data Structures Built from TREC Text
Section 3.1 First Data Structure
Structure Type: OTHER DATA STRUCTURE
Type of other data structure used: Dictionary
Brief description of method using other data structure:
Total storage used: 0.02 Gb
Total computer time to build: 8 hours
Automatic process: YES
Manual hours required: hours
Type of manual labor: NONE
Term positions used: NO
Only single terms used: YES
Concepts (vs. single terms) represented: NO
  • Number of concepts represented:
Type of representation:
Auxiliary files used: YES
  • Type of auxiliary files used: Lexicon for stemming
Additional comments:
Section 3.2 Second Data Structure
Structure Type: NONE
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: Gb
Total computer time to build: hours
Automatic process:
Manual hours required: hours
Type of manual labor: NONE
Term positions used:
Only single terms used:
Concepts (vs. single terms) represented:
  • Number of concepts represented:
Type of representation:
Auxiliary files used:
  • Type of auxiliary files used:
Additional comments:
Section 3.3 Third Data Structure
Structure Type: NONE
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: Gb
Total computer time to build: hours
Automatic process:
Manual hours required: hours
Type of manual labor: NONE
Term positions used:
Only single terms used:
Concepts (vs. single terms) represented:
  • Number of concepts represented:
Type of representation:
Auxiliary files used:
  • Type of auxiliary files used:
Additional comments:
Section 4.0 Data Built from Sources Other than the Input Text
Internally-built Auxiliary File

File type: LEXICON
Domain type: DOMAIN INDEPENDENT
Total Storage: Gb
Number of Concepts Represented: concepts
Type of representation: NONE
Automatic or Manual:
  • Total Time to Build: hours
  • Total Time to Modify (if already built): hours
Type of Manual Labor used: NONE
Additional comments: We used an existing small lexicon (85 KB compressed) of common irregular words for stemming.
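The lexicon itself is not published, so the entries and the fallback suffix rules below are purely hypothetical; this sketch only shows the general pattern of lexicon-backed morphological stemming, where irregular forms are looked up first and regular forms fall through to suffix stripping:

```python
# Hypothetical irregular-form entries; the real 85 KB lexicon is not
# published, so these are illustrative only.
IRREGULARS = {
    "went": "go",
    "mice": "mouse",
    "children": "child",
    "better": "good",
}

# Crude regular-suffix rules, longest suffix tried first.
SUFFIXES = ("ing", "ed", "es", "s")

def stem(word):
    """Look the word up in the irregular lexicon first; fall back to
    simple suffix stripping for regular forms."""
    w = word.lower()
    if w in IRREGULARS:
        return IRREGULARS[w]
    for suffix in SUFFIXES:
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: -len(suffix)]
    return w

print([stem(w) for w in ["children", "walked", "mice", "cats"]])
```

The length guard (`>= 3`) keeps short words such as "is" from being mangled by the suffix rules.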
Externally-built Auxiliary File

File is: NONE
Total Storage: Gb
Number of Concepts Represented: concepts
Type of representation: NONE
Additional comments:
Section 5.0 Computer Searching
Average computer time to search (per query): CPU seconds
Times broken down by component(s):
Section 5.1 Searching Methods
Vector space model:
Probabilistic model: YES
Cluster searching:
N-gram matching:
Boolean matching:
Fuzzy logic: YES
Free text scanning:
Neural networks:
Conceptual graphic matching:
Other:
Additional comments: Since teklis is a categorization system, it is not possible to give a search time per query; it processes about 1,500 words per second.
Section 5.2 Factors in Ranking
Term frequency:
Inverse document frequency:
Other term weights: YES
Semantic closeness:
Position in document:
Syntactic clues:
Proximity of terms:
Information theoretic weights:
Document length: YES
Percentage of query terms which match:
N-gram frequency:
Word specificity:
Word sense frequency: YES
Cluster distance:
Other:
Additional comments: We used a correlation measure as the term weight.
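The form does not say which correlation measure teklis uses. One standard choice for a categorization system, shown here as an assumption only, is the phi coefficient (the Pearson correlation of two binary variables) between a term's occurrence in a document and the document's category membership:

```python
import math

def term_category_correlation(term_in_doc, doc_in_cat):
    """Phi coefficient between binary term occurrence and binary
    category membership over a document collection. Illustrative
    only; the measure teklis actually used is not specified."""
    n = len(term_in_doc)
    a = sum(1 for t, c in zip(term_in_doc, doc_in_cat) if t and c)       # term & cat
    b = sum(1 for t, c in zip(term_in_doc, doc_in_cat) if t and not c)   # term only
    c_ = sum(1 for t, c in zip(term_in_doc, doc_in_cat) if not t and c)  # cat only
    d = n - a - b - c_                                                   # neither
    denom = math.sqrt((a + b) * (c_ + d) * (a + c_) * (b + d))
    return 0.0 if denom == 0 else (a * d - b * c_) / denom

# A term appearing in exactly the in-category documents correlates perfectly.
print(term_category_correlation([1, 1, 0, 0], [1, 1, 0, 0]))
```

Terms with a high positive phi for a category are strong evidence for it; strongly negative values are evidence against, which is why such a weight can replace plain tf or idf in a categorizer.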
Send questions to trec@nist.gov

Disclaimer: Contents of this online document are not necessarily the official views of, nor endorsed by the U.S. Government, the Department of Commerce, or NIST.