Modification of Steepest Descent Method for Solving Unconstrained Optimization
The classical steepest descent (SD) method is known as one of the earliest and best methods for minimizing a function. Although its convergence rate is quite slow, its simplicity has made it one of the easiest methods to use and apply, especially in the form of computer codes.
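For context, the classical SD iteration updates the current point along the negative gradient, x_{k+1} = x_k - alpha_k * grad f(x_k). The sketch below is a minimal illustration of that iteration with a fixed step size; the function names, test problem, and parameter values are illustrative assumptions and are not taken from the thesis, which concerns a modified version of the method.

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.05, tol=1e-6, max_iter=1000):
    """Classical steepest descent with a fixed step size (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop once the gradient is nearly zero
            break
        x = x - alpha * g             # step along the negative gradient
    return x, k

# Hypothetical test problem: minimize f(x) = x1^2 + 10*x2^2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_min, iters = steepest_descent(grad, x0=[3.0, 1.0])
print(x_min, iters)
```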
Saved in:
Main Author: Zubai'dah Binti Zainal Abidin
Format: Thesis
Language: English
Published: Universiti Malaysia Terengganu, 2023
Subjects:
Online Access: http://umt-ir.umt.edu.my:8080/handle/123456789/17622
Similar Items

- A new multi-step gradient method for optimization problem
  by: Mahboubeh, Farid, et al.
  Published: (2010)
- The investigation of gradient method namely Steepest Descent and extending of Barzilai Borwein for solving unconstrained optimization problem / Nur Intan Syahirah Ismail & Nur Atikah Aziz
  by: Ismail, Nur Intan Syahirah, et al.
  Published: (2019)
- Scaled memoryless BFGS preconditioned steepest descent method for very large-scale unconstrained optimization
  by: Leong, Wah June, et al.
  Published: (2009)
- Preconditioning Subspace Quasi-Newton Method for Large Scale Unconstrained Optimization
  by: Sim, Hong Sen
  Published: (2011)
- Energy Management in Integrated Microgrids: An Optimal Schedule Controller Utilizing Gradient Descent Algorithm
  by: Abdolrasol M.G.M., et al.
  Published: (2024)