Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine
Activation functions play an important role in deep learning models by introducing non-linearity to the output of a neuron, enabling the network to learn complex patterns and non-linear relationships in data and make predictions on more complex tasks. Deep learning models' most commonly used activ...
Saved in:
Main Authors: | Ramadevi, B., Kasi, V.R., Bingi, K. |
---|---|
Format: | Article |
Published: |
Elsevier Ltd
2024
|
Online Access: | http://scholars.utp.edu.my/id/eprint/37730/ https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174329387&doi=10.1016%2fj.engappai.2023.107308&partnerID=40&md5=7374f922ed8f76dff4dddff12c075d70 |
id |
oai:scholars.utp.edu.my:37730 |
---|---|
record_format |
eprints |
spelling |
oai:scholars.utp.edu.my:37730 2023-10-30T02:08:44Z http://scholars.utp.edu.my/id/eprint/37730/ Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine Ramadevi, B. Kasi, V.R. Bingi, K. Activation functions play an important role in deep learning models by introducing non-linearity to the output of a neuron, enabling the network to learn complex patterns and non-linear relationships in data and make predictions on more complex tasks. Deep learning models' most commonly used activation functions are Purelin, Sigmoid, Tansig, Rectified Linear Unit (ReLU), and Exponential Linear Unit (ELU), which exhibit limitations such as non-differentiability, vanishing gradients, and neuron inactivity with negative values. These functions are typically defined over a finite range, and their outputs are integers or real numbers. Using fractional calculus in designing activation functions for neural networks has shown promise in improving the performance of deep learning models in specific applications. These activation functions can capture more complex non-linearities than traditional integer-order activation functions, improving performance on tasks such as image classification and time series prediction. This paper focuses on deriving and testing linear and non-linear fractional-order forms of activation functions and their variants. The linear activation function includes Purelin. In contrast, the non-linear activation functions are Binary Step, Sigmoid, Tansig, ReLU, ELU, Gaussian Error Linear Unit (GELU), Hexpo, and their variants. Besides, the standard formula has been implemented and used in developing the fractional-order linear activation function. Furthermore, various expansion series, such as Euler and Maclaurin, have been used to design non-linear fractional-order activation functions and their variants. 
The single- and multi-layer fractional-order neural network models have been developed using the designed fractional-order activation functions. The simulation study uses developed fractional-order neural network models for predicting the Texas wind turbine systems' generated power. The performance of single- and multi-layer fractional-order neural network models has been evaluated by changing the activation functions in the hidden layer while keeping the Purelin function constant at the output layer. Experiments on neural network models demonstrate that the designed fractional-order activation functions outperform traditional functions like Sigmoid, Tansig, ReLU, ELU, and their variants, effectively addressing their limitations. © 2023 Elsevier Ltd Elsevier Ltd 2024 Article NonPeerReviewed Ramadevi, B. and Kasi, V.R. and Bingi, K. (2024) Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine. Engineering Applications of Artificial Intelligence, 127. ISSN 0952-1976 https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174329387&doi=10.1016%2fj.engappai.2023.107308&partnerID=40&md5=7374f922ed8f76dff4dddff12c075d70 10.1016/j.engappai.2023.107308 |
institution |
Universiti Teknologi Petronas |
building |
UTP Resource Centre |
collection |
Institutional Repository |
continent |
Asia |
country |
Malaysia |
content_provider |
Universiti Teknologi Petronas |
content_source |
UTP Institutional Repository |
url_provider |
http://eprints.utp.edu.my/ |
description |
Activation functions play an important role in deep learning models by introducing non-linearity to the output of a neuron, enabling the network to learn complex patterns and non-linear relationships in data and make predictions on more complex tasks. Deep learning models' most commonly used activation functions are Purelin, Sigmoid, Tansig, Rectified Linear Unit (ReLU), and Exponential Linear Unit (ELU), which exhibit limitations such as non-differentiability, vanishing gradients, and neuron inactivity with negative values. These functions are typically defined over a finite range, and their outputs are integers or real numbers. Using fractional calculus in designing activation functions for neural networks has shown promise in improving the performance of deep learning models in specific applications. These activation functions can capture more complex non-linearities than traditional integer-order activation functions, improving performance on tasks such as image classification and time series prediction. This paper focuses on deriving and testing linear and non-linear fractional-order forms of activation functions and their variants. The linear activation function includes Purelin. In contrast, the non-linear activation functions are Binary Step, Sigmoid, Tansig, ReLU, ELU, Gaussian Error Linear Unit (GELU), Hexpo, and their variants. Besides, the standard formula has been implemented and used in developing the fractional-order linear activation function. Furthermore, various expansion series, such as Euler and Maclaurin, have been used to design non-linear fractional-order activation functions and their variants. The single- and multi-layer fractional-order neural network models have been developed using the designed fractional-order activation functions. The simulation study uses developed fractional-order neural network models for predicting the Texas wind turbine systems' generated power. 
The performance of single- and multi-layer fractional-order neural network models has been evaluated by changing the activation functions in the hidden layer while keeping the Purelin function constant at the output layer. Experiments on neural network models demonstrate that the designed fractional-order activation functions outperform traditional functions like Sigmoid, Tansig, ReLU, ELU, and their variants, effectively addressing their limitations. © 2023 Elsevier Ltd |
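The description above says the fractional-order linear activation is built from the standard fractional-derivative formula. A minimal sketch of that idea, assuming the Caputo fractional derivative of Purelin f(x) = x, which of order α is x^(1−α)/Γ(2−α); the function name, default α, and the odd (sign-preserving) extension to negative inputs are illustrative assumptions here, not necessarily the paper's exact formulation:

```python
import math

def frac_purelin(x: float, alpha: float = 0.5) -> float:
    """Sketch of a fractional-order Purelin activation.

    Based on the Caputo fractional derivative of f(x) = x, which of order
    alpha is x**(1 - alpha) / Gamma(2 - alpha). Extended to x < 0 as an odd
    function so the activation stays sign-preserving (an assumption made
    here for illustration).
    """
    sign = -1.0 if x < 0 else 1.0
    return sign * abs(x) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
```

Note that alpha = 0 recovers the ordinary Purelin identity (x**1 / Gamma(2) = x), so the fractional order interpolates away from the classical linear activation as alpha grows.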
format |
Article |
author |
Ramadevi, B. Kasi, V.R. Bingi, K. |
spellingShingle |
Ramadevi, B. Kasi, V.R. Bingi, K. Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine |
author_facet |
Ramadevi, B. Kasi, V.R. Bingi, K. |
author_sort |
Ramadevi, B. |
title |
Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine |
title_short |
Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine |
title_full |
Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine |
title_fullStr |
Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine |
title_full_unstemmed |
Fractional ordering of activation functions for neural networks: A case study on Texas wind turbine |
title_sort |
fractional ordering of activation functions for neural networks: a case study on texas wind turbine |
publisher |
Elsevier Ltd |
publishDate |
2024 |
url |
http://scholars.utp.edu.my/id/eprint/37730/ https://www.scopus.com/inward/record.uri?eid=2-s2.0-85174329387&doi=10.1016%2fj.engappai.2023.107308&partnerID=40&md5=7374f922ed8f76dff4dddff12c075d70 |