Image Classification Modelling of Beef and Pork Using Convolutional Neural Network

  • Salsabila, Department of Statistics, IPB University, Bogor, 16680, Indonesia
  • Anwar Fitrianto, Department of Statistics, IPB University, Bogor, 16680, Indonesia
  • Bagus Sartono, Department of Statistics, IPB University, Bogor, 16680, Indonesia
Keywords: Beef and Pork, Model, Classification, CNN


The high price of beef leads some sellers in markets and other shopping venues to manipulate their products, for example by mixing beef with pork. Pork and beef actually differ in the color and texture of the meat, but many people cannot yet recognize these differences. One solution is a technology that can recognize and differentiate pork and beef, and this motivates the present research to build a system that classifies the two types of meat. Because traditional machine learning methods such as Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) rely on manual feature extraction for pattern recognition, we use a Convolutional Neural Network (CNN), which extracts features automatically through its convolution layers. CNN is a deep learning method, a development of artificial intelligence, that can be applied to image classification; to our knowledge, no prior research has applied CNN to pork and beef classification. Several regularization techniques, including dropout, L2, and max-norm, each with several parameter values, are applied to the model and compared to obtain the best classification results and accurate predictions on new data. The best accuracy of 97.56% and the lowest loss of 0.111 were obtained from the CNN model using the dropout technique with p=0.7, supported by hyperparameters of two convolution layers, 128 neurons in the fully connected layer, the ReLU activation function, and two fully connected layers. The results of this study are expected to serve as a basis for building beef and pork recognition applications.
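The best model above relies on dropout regularization: during training, each unit is randomly zeroed so that units cannot co-adapt, which reduces overfitting. As a minimal sketch of the technique only (not the authors' implementation, which the abstract does not detail), the following NumPy function implements inverted dropout; treating p=0.7 as the *drop* probability is an assumption here, since the abstract does not state whether p is the drop or the keep rate.

```python
import numpy as np

def dropout(x, p=0.7, rng=None, train=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale the survivors by 1/(1-p), so the expected activation is
    unchanged. At inference (train=False) the layer is the identity."""
    if not train:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p        # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

# Example: activations of a 128-neuron fully connected layer for a batch of 4.
activations = np.ones((4, 128))
out = dropout(activations, p=0.7, rng=np.random.default_rng(0))
# Roughly 70% of the entries are zeroed; the rest are scaled to 1/0.3.
```

Because of the 1/(1-p) rescaling, no weight adjustment is needed at test time, which is why the inference path simply returns the input unchanged.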


References

[1] Y. Erwanto, M. Z. Abidin, E. Y. P. M. Sugiyono, and A. Rohman, "Identification of pork contamination in meatballs of Indonesia local market using polymerase chain reaction-restriction fragment length polymorphism analysis," Asian-Australasian Journal of Animal Sciences, vol. 27, pp. 1487-1492, 2014.

[2] S. Rahmati, N. M. Julkapli, W. A. Yehye, and W. J. Basirun, "Identification of meat origin in food products: a review," Food Control, vol. 68, pp. 379-390, 2016.

[3] N. Lihayati, R. E. Pawening, and M. Furqon, "Klasifikasi jenis daging berdasarkan tekstur menggunakan metode gray level co-occurrence matrix" [Classification of meat types based on texture using the gray level co-occurrence matrix method], in Proc. SENTIA 2016, 2016.

[4] L. Zhu and P. Spachos, "Towards image classification with machine learning methodologies for smartphones," Machine Learning and Knowledge Extraction, vol. 1, pp. 1039-1057, 2019.

[5] P. Dangeti, "K-nearest neighbors and naive Bayes," in Statistics for Machine Learning, Birmingham, UK: Packt Publishing Ltd., 2017, pp. 186-187.

[6] M. Hasan, S. Ullah, M. J. Khan, and K. Khurshid, "Comparative analysis of SVM, ANN and CNN for classifying vegetation species using hyperspectral thermal infrared data," in ISPRS, vol. 42, 2019.

[7] H. Kagaya, K. Aizawa, and M. Ogawa, "Food detection and recognition using convolutional neural network," in Proc. 22nd ACM International Conference on Multimedia, pp. 1085-1088, Nov. 2014.

[8] Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional networks and applications in vision," in Proc. ISCAS, pp. 253-256, 2010.

[9] S. Salman and X. Liu, "Overfitting mechanism and avoidance in deep neural networks," arXiv preprint arXiv:1901.06566, 2019.

[10] Rismiyati, "Implementasi convolutional neural network untuk sortasi mutu salak ekspor berbasis citra digital" [Implementation of a convolutional neural network for image-based quality sorting of export snake fruit], thesis, Universitas Gadjah Mada, Yogyakarta, 2016.

[11] M. S. Junayed et al., "AcneNet - a deep CNN based classification approach for acne classes," in Proc. 2019 12th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, pp. 203-208, 2019.

[12] J. Nagi, F. Ducatelle, G. A. Di Caro, D. Cireşan, U. Meier, A. Giusti, F. Nagi, J. Schmidhuber, and L. M. Gambardella, "Max-pooling convolutional neural networks for vision-based hand gesture recognition," in Proc. IEEE International Conference on Signal and Image Processing Applications, pp. 342-347, 2011.

[13] M. Alaslani and L. Elrefaei, "Convolutional neural network based feature extraction for iris recognition," International Journal of Computer Science & Information Technology, vol. 10, pp. 65-78, Apr. 2018.

[14] I. Goodfellow, Y. Bengio, and A. Courville, "Deep feedforward networks," in Deep Learning, Cambridge, MA, USA: MIT Press, 2016, pp. 176-180.

[15] X. Ying, "An overview of overfitting and its solutions," Journal of Physics: Conference Series, vol. 1168, p. 022022, 2019.

[16] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, pp. 1929-1958, 2014.

[17] A. Oleś et al., "Package 'EBImage'," Internet: bioc/manuals/EBImage/man/EBImage.pdf, Aug. 2020 [accessed Sept. 15, 2020].

[18] H. H. Aghdam and E. J. Heravi, "Convolutional neural network," in A Guide to Convolutional Neural Network (A Practical Application to Traffic-Sign Detection and Classification), Tarragona, Spain: Springer, 2017, pp. 106-118.

[19] K. Cai, W. Cao, L. Aarniovuori, H. Pang, Y. Lin, and G. Li, "Classification of power quality disturbances using Wigner-Ville distribution and deep convolutional networks," IEEE Access, vol. 7, pp. 119099-119109, 2019.