Reliable Information Retrieval Systems Performance Evaluation: A Review
Main Authors:
Format: Article
Published: Institute of Electrical and Electronics Engineers, 2024
Online Access: http://eprints.um.edu.my/45868/
https://doi.org/10.1109/ACCESS.2024.3377239
Summary: With the growing availability of various search tools, interest in evaluating information retrieval from the user's perspective has grown tremendously among researchers. Information retrieval systems are evaluated through the Cranfield paradigm, in which test collections provide the foundation of the evaluation process. A test collection consists of a document corpus, topics, and a set of relevance judgments. The relevance judgments are made over the documents retrieved from the test collection for each topic. The accuracy of the evaluation process depends on the number of relevant documents in the relevance judgment set, called qrels. This paper presents a comprehensive study of the various ways to increase the number of relevant documents in the qrels, thereby improving the quality of the qrels and, through that, the accuracy of the evaluation process. The ways in which each methodology retrieves additional relevant documents are categorized, described, and analyzed, resulting in an inclusive flow of these methodologies.
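As a rough illustration of how qrels drive a Cranfield-style evaluation as described in the summary, the sketch below scores a hypothetical run against a small, made-up qrels set using precision@k and recall. The topic and document identifiers are invented for illustration only; the point is that both metrics depend directly on how complete the relevance judgments are.

```python
# Minimal sketch (hypothetical data): scoring a run against qrels.

# qrels: topic id -> set of document ids judged relevant
qrels = {
    "T1": {"d3", "d7", "d12"},
    "T2": {"d2", "d9"},
}

# run: topic id -> ranked list of retrieved document ids
run = {
    "T1": ["d7", "d1", "d3", "d8", "d12"],
    "T2": ["d4", "d2", "d5"],
}

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k retrieved documents judged relevant."""
    top_k = ranked[:k]
    return sum(1 for d in top_k if d in relevant) / k

def recall(ranked, relevant):
    """Fraction of judged-relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return sum(1 for d in ranked if d in relevant) / len(relevant)

for topic, relevant in qrels.items():
    ranked = run.get(topic, [])
    print(topic,
          "P@3 =", round(precision_at_k(ranked, relevant, 3), 3),
          "Recall =", round(recall(ranked, relevant), 3))
```

If a relevant document is missing from the qrels, a system that retrieves it gets no credit, which is why the methods surveyed in the paper aim to enlarge the set of judged relevant documents.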