Image Classification Modelling of Beef and Pork Using Convolutional Neural Network
Abstract: The high price of beef leads some sellers to manipulate sales in markets and other shopping venues, for example by mixing beef with pork. Pork and beef actually differ in the color and texture of the meat, but many people cannot yet recognize these differences. One solution is a technology that can recognize and distinguish pork from beef, and this motivates the present research to build a system that classifies the two types of meat. Because traditional machine learning methods such as Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) rely on manual feature extraction for pattern recognition, we use a Convolutional Neural Network (CNN), which extracts features automatically through its convolution layers. CNN is a deep learning method, an outgrowth of artificial intelligence research, that can be applied to image classification; to our knowledge, no prior research has used CNN for pork and beef classification. Several regularization techniques, including dropout, L2, and max-norm, each with several parameter values, were applied to the model and compared to obtain the best classification results and accurate predictions on new data. The best accuracy of 97.56% and the lowest loss of 0.111 were obtained from the CNN model using dropout with p = 0.7, supported by hyperparameters of two convolution layers, two fully connected layers, 128 neurons in the fully connected layer, and the ReLU activation function. The results of this study are expected to serve as the basis for a beef and pork recognition application.
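The dropout regularization that the abstract reports as best-performing (p = 0.7) can be illustrated with a minimal sketch. This is not the authors' code; it is a plain NumPy illustration of inverted dropout, and it assumes p denotes the retention (keep) probability, as in Srivastava et al.'s formulation.

```python
import numpy as np

def dropout_forward(x, p=0.7, train=True, rng=None):
    """Inverted dropout: each unit is kept with probability p.

    Assumption: p is the keep probability (p = 0.7 keeps ~70% of units).
    At test time activations pass through unchanged, because the 1/p
    scaling during training already preserves the expected activation.
    """
    if not train:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(x.shape) < p).astype(x.dtype)  # Bernoulli keep-mask
    return x * mask / p  # rescale so E[output] == input

# Roughly 70% of units survive one training-time pass
rng = np.random.default_rng(0)
a = np.ones(1000)
out = dropout_forward(a, p=0.7, train=True, rng=rng)
print((out > 0).mean())  # fraction of units kept, close to 0.7
```

Surviving units are scaled by 1/p so that no rescaling is needed at inference time, which is how dropout is commonly implemented in modern deep learning frameworks.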
Copyright (c) 2021 International Journal of Sciences: Basic and Applied Research (IJSBAR)
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.