A Systematic Review of Deep Learning Methods: Classification, Selection, and Scientific Understanding
Abstract
This systematic review categorizes the central deep learning (DL) architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Autoencoders (AEs), according to the tasks and data types for which each is best suited. Although DL has achieved notable success in image recognition, language processing, and anomaly detection, it still faces critical limitations in interpretability, robustness, and scalability. The review summarizes the strengths and weaknesses of each model family in a structured manner to guide model selection. The findings emphasize that theoretical advances are needed to improve transparency and reliability, so that practitioners and researchers can make informed choices and deploy DL responsibly across sectors.
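To make the review's architecture-to-data-type mapping concrete, the sketch below pairs each model family with the data it suits: convolutions for grid-structured inputs, recurrence for sequences, and autoencoders for unlabeled data scored by reconstruction error. This is a minimal illustrative sketch in PyTorch, not code from the paper; all class names and layer sizes are hypothetical.

```python
# Minimal sketch (hypothetical, not from the reviewed paper): one tiny model
# per family, matched to the data type the review associates with it.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Grid-structured data (e.g., images): convolution exploits locality."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class TinyRNN(nn.Module):
    """Sequential data (e.g., text, time series): recurrence carries state."""
    def __init__(self, vocab: int = 100, hidden: int = 32, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))  # h: (num_layers, batch, hidden)
        return self.head(h[-1])

class TinyAE(nn.Module):
    """Unlabeled data (e.g., anomaly detection): reconstruction error as score."""
    def __init__(self, dim: int = 64, latent: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

if __name__ == "__main__":
    print(TinyCNN()(torch.randn(4, 1, 28, 28)).shape)       # -> (4, 10)
    print(TinyRNN()(torch.randint(0, 100, (4, 12))).shape)  # -> (4, 2)
    x = torch.randn(4, 64)
    score = ((TinyAE()(x) - x) ** 2).mean(dim=1)            # per-sample anomaly score
    print(score.shape)                                       # -> (4,)
```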