Modification of Steepest Descent Method for Solving Unconstrained Optimization
The classical steepest descent (SD) method is one of the earliest and best-known methods for minimizing a function. Although its convergence rate is quite slow, its simplicity makes it one of the easiest methods to use and apply, especially in the form of computer code.
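The classical SD iteration described above can be sketched as follows. This is a minimal illustration of the standard method, not the modification proposed in the thesis; the test function, step size, and tolerance are assumptions chosen for the example.

```python
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Classical steepest descent: x_{k+1} = x_k - step * grad(x_k).

    A fixed step size is assumed here for simplicity; practical
    implementations usually use a line search instead.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        # Stop when the gradient is (numerically) zero.
        if np.linalg.norm(g) < tol:
            break
        # Move in the direction of steepest descent, -grad(x).
        x = x - step * g
    return x

# Example (assumed test problem): minimize f(x, y) = (x-1)^2 + 2(y+2)^2,
# whose unique minimizer is (1, -2).
grad = lambda x: np.array([2 * (x[0] - 1.0), 4 * (x[1] + 2.0)])
xmin = steepest_descent(grad, [0.0, 0.0])
```

With a fixed step the method converges only for sufficiently small step sizes, which is one reason line searches and step-size rules (e.g. Barzilai-Borwein, mentioned in the Similar Items below) are common refinements.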
Saved in:
Main Author: Zubai'dah Binti Zainal Abidin
Format: Thesis
Language: English
Published: Universiti Malaysia Terengganu, 2023
Online Access: http://umt-ir.umt.edu.my:8080/handle/123456789/17622
Similar Items
- A new multi-step gradient method for optimization problem
  by: Mahboubeh, Farid, et al.
  Published: (2010)
- The investigation of gradient method namely Steepest Descent and extending of Barzilai Borwein for solving unconstrained optimization problem / Nur Intan Syahirah Ismail & Nur Atikah Aziz
  by: Ismail, Nur Intan Syahirah, et al.
  Published: (2019)
- Scaled memoryless BFGS preconditioned steepest descent method for very large-scale unconstrained optimization
  by: Leong, Wah June, et al.
  Published: (2009)
- Preconditioning Subspace Quasi-Newton Method for Large Scale Unconstrained Optimization
  by: Sim, Hong Sen
  Published: (2011)
- Quasi-Newton type method via weak secant equations for unconstrained optimization
  by: Lim, Keat Hee
  Published: (2021)