Comparison of Convolutional Neural Network Models Using Different Optimizers for Image Classification
Abstract
Face detection and image classification technologies are widely used across many industries to help humans obtain information. In this paper, a computer vision system uses the Convolutional Neural Network (CNN) algorithm to classify images by distinguishing the gender of the detected subject. The architecture was built through transfer learning, experimenting with three pre-trained models, namely VGG-16, Inception-V3, and MobileNet-V2, each trained with the Adam and RMSProp optimizers to determine the best architecture. To produce the best model and performance, experiments were carried out using several modules, such as a data augmentation module and a re-indexing module. The Inception-V3 model achieved the best results in predicting gender from images, with a validation accuracy of 0.9350 and a validation loss of 0.1550, compared with 0.9320 and 0.1660 for VGG-16, and 0.8760 and 0.3000 for MobileNet-V2.
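The transfer-learning setup described above can be sketched in Keras as follows. This is a minimal illustration under assumptions, not the paper's actual code: the input size, the single sigmoid output for the binary gender label, and the frozen backbone are all choices made here for the example (the paper would load ImageNet weights via weights="imagenet"; weights=None is used below only to keep the sketch self-contained).

```python
# Minimal sketch of transfer learning with a pre-trained CNN backbone and a
# binary (gender) classification head, compiled with either Adam or RMSProp
# so the two optimizers can be compared, as in the experiments above.
import tensorflow as tf


def build_gender_classifier(optimizer_name: str = "adam") -> tf.keras.Model:
    # Inception-V3 backbone without its ImageNet classification head.
    # weights=None avoids downloading weights in this sketch; the paper's
    # transfer-learning setup would use weights="imagenet".
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, input_shape=(150, 150, 3))
    base.trainable = False  # freeze the backbone: only the new head is trained

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary gender output
    ])

    optimizer = {
        "adam": tf.keras.optimizers.Adam(),
        "rmsprop": tf.keras.optimizers.RMSprop(),
    }[optimizer_name]

    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the backbone for `tf.keras.applications.VGG16` or `tf.keras.applications.MobileNetV2` reproduces the three-architecture comparison; running each build with both optimizer names reproduces the Adam vs. RMSProp comparison.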
Copyright (c) 2021 International Journal of Sciences: Basic and Applied Research (IJSBAR)