Selection of Food Identification System Features Using Convolutional Neural Network (CNN) Method

Arnita Arnita(1), Faridawaty Marpaung(2), Zainal Abidin Koemadji(3), Mhd Hidayat(4), Azi Widianto(5), Fitrahuda Aulia(6)


(1) Universitas Negeri Medan, Indonesia
(2) Universitas Negeri Medan, Indonesia
(3) University of Leicester, United Kingdom
(4) Universitas Negeri Medan, Indonesia
(5) Universitas Negeri Medan, Indonesia
(6) Universitas Negeri Medan, Indonesia

Abstract

Purpose: The identification and selection of the food we consume are critical in determining the health quality of human life. Our diet and the illnesses we develop are closely linked. Public awareness of the significance of food quality has increased due to the rising prevalence of degenerative diseases such as obesity, heart disease, type 2 diabetes, hypertension, and cancer. This study aims to develop a food identification model and to determine which image features aid in identification.

Methods: This study employs the convolutional neural network (CNN) approach to identify food objects in images based on the detected features. Images of thirty-five different types of traditional, processed, and western foods were gathered as the study’s input data. The image data for each type of food was repeated 100 times, producing a total of 3,500 images. Color, shape, and texture features were extracted from each food image. In addition to the CNN classification method, the hue, saturation, and value (HSV) method was used for color features, the Canny method for shape features, and the gray level co-occurrence matrix (GLCM) method for texture features.
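For illustration, the following is a minimal Python sketch of the three feature-extraction steps described above, assuming OpenCV and scikit-image are available; it is not the authors’ code, and the image path, Canny thresholds, and GLCM distances/angles are illustrative assumptions rather than values taken from the paper.

    # Sketch of the three feature-extraction steps: HSV color, Canny shape, GLCM texture.
    import cv2
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # older scikit-image: greycomatrix/greycoprops

    def extract_features(path):
        img = cv2.imread(path)                          # food image loaded in BGR order
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)      # color features: HSV channels
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)               # shape features: Canny edge map
        glcm = graycomatrix(gray, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        texture = np.array([graycoprops(glcm, p)[0, 0]  # texture features: GLCM statistics
                            for p in ("contrast", "homogeneity", "energy", "correlation")])
        return hsv, edges, texture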

Results: The simulation results show that the classification model’s accuracy and precision are 76% and 78%, respectively, when the CNN approach is used alone, without any feature extraction method. The CNN classification model combined with HSV color extraction yielded an accuracy and precision of 51% and 55%, respectively. The CNN classification model with the Canny shape extraction method has an accuracy and precision of 20% and 20%, respectively, while the combination of the CNN and GLCM texture extraction methods achieves 67% and 69%, respectively. According to the simulation results, the food classification and identification model that uses the CNN approach without the HSV, Canny, and GLCM feature extraction methods therefore produces better results in terms of model accuracy and precision.
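The accuracy and precision figures above are the standard multi-class metrics derived from a confusion matrix over the 35 food classes. The snippet below is a minimal, hypothetical example of how such metrics are typically computed with scikit-learn; the labels are made up and this is not the authors’ evaluation code.

    # Hypothetical multi-class labels; accuracy and macro-averaged precision.
    from sklearn.metrics import accuracy_score, precision_score

    y_true = [0, 1, 2, 2, 1]   # ground-truth class indices (hypothetical)
    y_pred = [0, 1, 2, 1, 1]   # model predictions (hypothetical)

    acc = accuracy_score(y_true, y_pred)
    prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
    print(f"accuracy={acc:.2f}, precision={prec:.2f}")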

Novelty: This research has the potential to be used in a variety of food identification applications, such as food and nutrition service systems, as well as to improve product quality in the food and beverage industry.

Keywords

CNN, HSV, Canny, GLCM, food identification

Scientific Journal of Informatics (SJI)
p-ISSN 2407-7658 | e-ISSN 2460-0040
Published By Department of Computer Science Universitas Negeri Semarang
Website: https://journal.unnes.ac.id/nju/index.php/sji
Email: [email protected]

This work is licensed under a Creative Commons Attribution 4.0 International License.