Optimality versus generality: performance assessment of meta-heuristics in educational timetabling


Bibliographic Details
Main Authors: Abdipoor, Sina, Yaakob, Razali, Goh, Say Leng, Abdullah, Salwani, Hamdan, Hazlina, Kasmiran, Khairul Azhar
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers 2025
Online Access: http://psasir.upm.edu.my/id/eprint/121565/1/121565.pdf
http://psasir.upm.edu.my/id/eprint/121565/
https://ieeexplore.ieee.org/document/10979955/
Description
Summary: Educational timetabling, a principal branch of operations research, presents challenging combinatorial optimization problems widely encountered in educational institutions. Meta-heuristics have commonly been applied to these problems and have attained promising performance in terms of optimality. However, their general applicability has been overlooked, hindering their effectiveness as versatile solvers. The limited generalizability of current approaches is the primary hurdle between the literature and real-world applications. This paper addresses this gap by introducing a generality taxonomy and conducting comprehensive theoretical and empirical analyses. The study highlights the adverse impact of extreme parameter tuning on generality, emphasizing the need for more generalized approaches. Furthermore, it introduces a performance assessment framework that penalizes problem-tailored solutions. It also examines the optimality-versus-generality performance of state-of-the-art approaches on the latest university course timetabling benchmark to further reinforce this claim and validate the efficacy of the framework. The findings indicate that the current literature prioritizes optimality over generality. Adopting the proposed assessment framework is crucial for bridging the gap between research and practical applications, enabling fairer comparisons, and encouraging more adaptable approaches.