Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution
Evaluation and benchmarking of many-objective optimization (MaOO) methods are complicated. The rapid development of new optimization algorithms for solving problems with many objectives has increased the necessity of developing performance indicators or metrics for evaluating the performance quality...
Saved in:
Main Authors: | Mohammed, R. T., Yaakob, R., Zaidan, A. A., Sharef, N. M., Abdullah, R. H., Zaidan, B. B., Dawood, K. A. |
---|---|
Format: | Article |
Published: | World Scientific, 2020 |
Online Access: | http://psasir.upm.edu.my/id/eprint/87482/ https://www.worldscientific.com/doi/abs/10.1142/S0219622020300049 |
id |
my.upm.eprints.87482 |
---|---|
record_format |
eprints |
spelling |
my.upm.eprints.87482 2025-02-12T01:51:27Z http://psasir.upm.edu.my/id/eprint/87482/ Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution Mohammed, R. T. Yaakob, R. Zaidan, A. A. Sharef, N. M. Abdullah, R. H. Zaidan, B. B. Dawood, K. A. Evaluation and benchmarking of many-objective optimization (MaOO) methods are complicated. The rapid development of new optimization algorithms for solving problems with many objectives has increased the necessity of developing performance indicators or metrics for evaluating the performance quality and comparing the competing optimization algorithms fairly. Further investigations are required to highlight the limitations of how criteria/metrics are determined and the consistency of the procedures with the evaluation and benchmarking processes of MaOO. A review is conducted in this study to map the research landscape of multi-criteria evaluation and benchmarking processes for MaOO into a coherent taxonomy. Then, contentious and challenging issues related to evaluation are highlighted, and the performance of optimization algorithms for MaOO is benchmarked. The methodological aspects of the evaluation and selection of MaOO algorithms are presented as the recommended solution on the basis of four distinct and successive phases. First, in the determination phase, the evaluation criteria of MaOO are collected, classified and grouped for testing experts’ consensus on the most suitable criteria. Second, the identification phase involves establishing a decision matrix via a crossover of the ‘evaluation criteria’ and ‘MaOO’, and the level of importance of each selected criterion and sub-criterion from phase one is computed to identify its weight value by using the best–worst method (BWM).
Third, the development phase involves the creation of a decision matrix for MaOO selection on the basis of the integrated BWM and VIKOR method. Last, the validation phase involves the validation of the proposed solution. World Scientific 2020 Article PeerReviewed Mohammed, R. T. and Yaakob, R. and Zaidan, A. A. and Sharef, N. M. and Abdullah, R. H. and Zaidan, B. B. and Dawood, K. A. (2020) Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution. International Journal of Information Technology and Decision Making, 19 (6). pp. 1619-1693. ISSN 0219-6220; eISSN: 0219-6220 https://www.worldscientific.com/doi/abs/10.1142/S0219622020300049 10.1142/S0219622020300049 |
institution |
Universiti Putra Malaysia |
building |
UPM Library |
collection |
Institutional Repository |
continent |
Asia |
country |
Malaysia |
content_provider |
Universiti Putra Malaysia |
content_source |
UPM Institutional Repository |
url_provider |
http://psasir.upm.edu.my/ |
description |
Evaluation and benchmarking of many-objective optimization (MaOO) methods are complicated. The rapid development of new optimization algorithms for solving problems with many objectives has increased the necessity of developing performance indicators or metrics for evaluating the performance quality and comparing the competing optimization algorithms fairly. Further investigations are required to highlight the limitations of how criteria/metrics are determined and the consistency of the procedures with the evaluation and benchmarking processes of MaOO. A review is conducted in this study to map the research landscape of multi-criteria evaluation and benchmarking processes for MaOO into a coherent taxonomy. Then, contentious and challenging issues related to evaluation are highlighted, and the performance of optimization algorithms for MaOO is benchmarked. The methodological aspects of the evaluation and selection of MaOO algorithms are presented as the recommended solution on the basis of four distinct and successive phases. First, in the determination phase, the evaluation criteria of MaOO are collected, classified and grouped for testing experts’ consensus on the most suitable criteria. Second, the identification phase involves establishing a decision matrix via a crossover of the ‘evaluation criteria’ and ‘MaOO’, and the level of importance of each selected criterion and sub-criterion from phase one is computed to identify its weight value by using the best–worst method (BWM). Third, the development phase involves the creation of a decision matrix for MaOO selection on the basis of the integrated BWM and VIKOR method. Last, the validation phase involves the validation of the proposed solution. |
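The development phase described above ranks MaOO algorithms by applying VIKOR to a weighted decision matrix. A minimal sketch of the VIKOR step is shown below, assuming the criterion weights have already been obtained from BWM; the matrix values, weights and criteria here are illustrative placeholders, not data from the paper:

```python
# Minimal VIKOR sketch for ranking candidate MaOO algorithms from a
# decision matrix of evaluation-criterion scores. Criterion weights are
# assumed to come from the best-worst method (BWM); the values below are
# illustrative placeholders only.

def vikor(matrix, weights, benefit, v=0.5):
    """Return VIKOR Q scores (lower is better) for each alternative.

    matrix  : list of rows, one row of criterion scores per algorithm
    weights : criterion weights (e.g. BWM output), summing to 1
    benefit : per criterion, True if it is maximized, False if minimized
    v       : weight of the 'majority of criteria' strategy (typically 0.5)
    """
    n_crit = len(weights)
    cols = list(zip(*matrix))
    # Ideal (f*) and anti-ideal (f-) value per criterion.
    f_best = [max(c) if b else min(c) for c, b in zip(cols, benefit)]
    f_worst = [min(c) if b else max(c) for c, b in zip(cols, benefit)]
    S, R = [], []  # group utility and individual regret per alternative
    for row in matrix:
        terms = [
            weights[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
            for j in range(n_crit)
        ]
        S.append(sum(terms))
        R.append(max(terms))
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    # Compromise index Q blends group utility and individual regret.
    return [
        v * (S[i] - s_min) / (s_max - s_min)
        + (1 - v) * (R[i] - r_min) / (r_max - r_min)
        for i in range(len(matrix))
    ]

# Three hypothetical algorithms scored on three hypothetical criteria,
# e.g. hypervolume (maximize), IGD (minimize), runtime in s (minimize).
scores = [
    [0.90, 0.05, 120.0],
    [0.85, 0.03, 300.0],
    [0.70, 0.10, 60.0],
]
w = [0.5, 0.3, 0.2]  # placeholder BWM weights
q = vikor(scores, w, benefit=[True, False, False])
best = min(range(len(q)), key=q.__getitem__)  # index of top-ranked algorithm
```

The same Q scores can also be checked against VIKOR's acceptable-advantage and acceptable-stability conditions before declaring a single compromise solution, as the full method prescribes.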
format |
Article |
author |
Mohammed, R. T. Yaakob, R. Zaidan, A. A. Sharef, N. M. Abdullah, R. H. Zaidan, B. B. Dawood, K. A. |
spellingShingle |
Mohammed, R. T. Yaakob, R. Zaidan, A. A. Sharef, N. M. Abdullah, R. H. Zaidan, B. B. Dawood, K. A. Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
author_facet |
Mohammed, R. T. Yaakob, R. Zaidan, A. A. Sharef, N. M. Abdullah, R. H. Zaidan, B. B. Dawood, K. A. |
author_sort |
Mohammed, R. T. |
title |
Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
title_short |
Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
title_full |
Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
title_fullStr |
Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
title_full_unstemmed |
Review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
title_sort |
review of the research landscape of multi-criteria evaluation and benchmarking processes for many-objective optimisation methods: coherent taxonomy, challenges and recommended solution |
publisher |
World Scientific |
publishDate |
2020 |
url |
http://psasir.upm.edu.my/id/eprint/87482/ https://www.worldscientific.com/doi/abs/10.1142/S0219622020300049 |
_version_ |
1825162387062259712 |
score |
13.239859 |