YOLOv8 Analysis for Vehicle Classification Under Various Image Conditions

Eben Panja (1), Hendry Hendry (2), Christine Dewi (3)


(1) Department of Information Technology, Universitas Kristen Satya Wacana, Indonesia
(2) Department of Information Technology, Universitas Kristen Satya Wacana, Indonesia
(3) Department of Information Technology, Universitas Kristen Satya Wacana, Indonesia

Abstract

Purpose: The purpose of this research is to detect vehicle types in various image conditions using YOLOv8n, YOLOv8s, and YOLOv8m with augmentation.

Methods: This research applies the YOLOv8 method to the DAWN dataset. Pre-trained Convolutional Neural Networks (CNNs) process the images and output the bounding boxes and classes of the detected objects. Additionally, data augmentation is applied to improve the model's ability to recognize vehicles from different directions and viewpoints.
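As an illustration of this setup, the minimal sketch below shows how a YOLOv8 model could be fine-tuned with horizontal flip augmentation using the Ultralytics Python API; the dataset configuration name dawn.yaml and the training hyperparameters are illustrative assumptions, not the authors' exact configuration.

    # Minimal training sketch (assumptions: Ultralytics API, hypothetical "dawn.yaml").
    from ultralytics import YOLO

    # Load a pre-trained checkpoint; swap in "yolov8s.pt" or "yolov8m.pt" for the other variants.
    model = YOLO("yolov8n.pt")

    # fliplr=0.5 applies horizontal flip to half of the training images;
    # fliplr=0.0 corresponds to the no-augmentation baseline.
    model.train(data="dawn.yaml", epochs=100, imgsz=640, fliplr=0.5)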

Result: Without data augmentation, YOLOv8n achieved a mAP of approximately 58%, YOLOv8s around 68.5%, and YOLOv8m roughly 68.9%. After applying horizontal flip data augmentation, YOLOv8n's mAP increased to about 60.9%, YOLOv8s reached about 62%, and YOLOv8m improved to about 71.2%. Horizontal flip augmentation thus raises the mAP of YOLOv8n and YOLOv8m, and YOLOv8m achieves the highest mAP of 71.2%, indicating its high effectiveness in detecting objects after the augmentation is applied.
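For completeness, the following is a hedged sketch of how such mAP figures could be read out with the Ultralytics validation API; the checkpoint path and the dawn.yaml dataset configuration are assumptions for illustration, not the authors' actual files.

    # Illustrative evaluation sketch (not the authors' script):
    # load a trained checkpoint and report mAP on the test split.
    from ultralytics import YOLO

    model = YOLO("runs/detect/train/weights/best.pt")    # assumed checkpoint path
    metrics = model.val(data="dawn.yaml", split="test")  # "dawn.yaml" is hypothetical
    print(f"mAP@0.5:      {metrics.box.map50:.3f}")
    print(f"mAP@0.5:0.95: {metrics.box.map:.3f}")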

Novelty: This research introduces novelty by employing the latest version of YOLO, YOLOv8, and comparing the performance of its YOLOv8n, YOLOv8s, and YOLOv8m variants. The use of a data augmentation technique, horizontal flip, to increase data variation is also novel, expanding the dataset and improving the model's ability to recognize objects.

Keywords

CNN; Data augmentation; DAWN; Object detection; YOLOv8


