Edge Computing Implementation for Action Recognition Systems

Afis Asryullah Pratama(1)

(1) Politeknik Elektronika Negeri Surabaya

Abstract

Deep learning has been applied to many different sectors, including human action recognition. Such systems typically demand substantial computing resources and are commonly built on a cloud computing architecture, in which sensors send the entire raw data stream to the cloud, placing a heavy load on the network. Edge computing exists to overcome that weakness by processing data close to where it is captured. This paper presents a method for recognizing human actions using deep learning on an edge computing architecture. With RGB images as input, the system detects all persons in the frame using an SSD-MobileNet V2 model evaluated at various threshold values, then recognizes each person’s action using our model trained on the DetectNet architecture, also evaluated at various thresholds. The output of the system is each detected person’s region of interest (RoI) and its recognized action, which is far smaller than the whole frame. As a result, our proposed system yields its best human detection accuracy of 64.06% at a threshold of 0.15 and its best action recognition accuracy of 37.8% at a threshold of 0.4.
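The abstract describes a two-stage pipeline: detect persons, crop each RoI, classify the action, and transmit only the RoI and label. The sketch below illustrates how such a pipeline could be wired together on a Jetson-class edge device, assuming NVIDIA's jetson-inference Python bindings are installed; the action-model file names, the ONNX export, and the camera URI are illustrative assumptions, not the authors' released artifacts.

```python
#!/usr/bin/env python3
# Minimal sketch of the two-stage edge pipeline described in the abstract,
# assuming NVIDIA's jetson-inference Python bindings. The action model files
# (action_detectnet.onnx, action_labels.txt) and the camera URI are
# hypothetical placeholders.
from jetson_inference import detectNet
from jetson_utils import videoSource, cudaAllocMapped, cudaCrop

# Stage 1: person detection with the pretrained SSD-MobileNet V2 network;
# 0.15 is the best-accuracy detection threshold reported in the abstract.
person_net = detectNet("ssd-mobilenet-v2", threshold=0.15)

# Stage 2: action recognition with a custom detection-style model (assumed
# here to be exported to ONNX); 0.4 is the best-accuracy action threshold
# reported in the abstract.
action_net = detectNet(model="action_detectnet.onnx", labels="action_labels.txt",
                       input_blob="input_0", output_cvg="scores",
                       output_bbox="boxes", threshold=0.4)

camera = videoSource("csi://0")  # or "/dev/video0" for a USB camera

while True:
    img = camera.Capture()
    if img is None:  # capture timeout
        continue
    for person in person_net.Detect(img):
        # Crop the person's RoI so only that region is classified and sent on.
        roi = cudaAllocMapped(width=int(person.Width), height=int(person.Height),
                              format=img.format)
        cudaCrop(img, roi, (int(person.Left), int(person.Top),
                            int(person.Right), int(person.Bottom)))
        actions = action_net.Detect(roi)
        label = action_net.GetClassDesc(actions[0].ClassID) if actions else "unknown"
        # Only the RoI coordinates and the action label leave the device.
        print(f"person at ({person.Left:.0f},{person.Top:.0f}) -> {label}")
```

For scale, an uncompressed 1920×1080 RGB frame occupies roughly 6.2 MB, while four RoI coordinates plus an action label fit in a few dozen bytes, which is the network-load reduction the abstract refers to.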

Keywords

Computer vision; Deep learning; IoT; Edge computing


References

Z. Bi, L. D. Xu, and C. Wang, “Internet of Things for Enterprise Systems of Modern Manufacturing,” IEEE Trans. Ind. Informatics, vol. 10, no. 2, pp. 1537–1546, May 2014, doi: 10.1109/TII.2014.2300338.

J. Wan et al., “A Manufacturing Big Data Solution for Active Preventive Maintenance,” IEEE Trans. Ind. Informatics, vol. 13, no. 4, pp. 2039–2047, Aug. 2017, doi: 10.1109/TII.2017.2670505.

A. G. Frank, L. S. Dalenogare, and N. F. Ayala, “Industry 4.0 technologies: Implementation patterns in manufacturing companies,” Int. J. Prod. Econ., vol. 210, pp. 15–26, Apr. 2019, doi: 10.1016/j.ijpe.2019.01.004.

L. Roda-Sanchez, C. Garrido-Hidalgo, D. Hortelano, T. Olivares, and M. C. Ruiz, “OperaBLE: An IoT-Based Wearable to Improve Efficiency and Smart Worker Care Services in Industry 4.0,” J. Sensors, vol. 2018, pp. 1–12, Aug. 2018, doi: 10.1155/2018/6272793.

M. Nguyen, L. Fan, and C. Shahabi, “Activity Recognition Using Wrist-Worn Sensors for Human Performance Evaluation,” in 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Nov. 2015, pp. 164–169, doi: 10.1109/ICDMW.2015.199.

M. Neuhausen, J. Teizer, and M. König, “Construction worker detection and tracking in bird’s-eye view camera images,” in Proc. 35th Int. Symp. Autom. Robot. Constr. (ISARC 2018), 2018, doi: 10.22260/isarc2018/0161.

M. Satyanarayanan, “The Emergence of Edge Computing,” Computer, vol. 50, no. 1, pp. 30–39, Jan. 2017, doi: 10.1109/MC.2017.9.

Y. Y. F. Panduman, A. R. A. Besari, S. Sukaridhoto, R. P. N. Budiarti, R. W. Sudibyo, and F. Nobuo, “Implementation of integration VaaMSN and SEMAR for wide coverage air quality monitoring,” TELKOMNIKA (Telecommunication Comput. Electron. Control.), vol. 16, no. 6, pp. 2630–2642, 2018, doi: 10.12928/TELKOMNIKA.v16i6.10152.

A. Mochamad Rifki Ulil, Fiannurdin, S. Sukaridhoto, A. Tjahjono, and D. K. Basuki, “The Vehicle as a Mobile Sensor Network base IoT and Big Data for Pothole Detection Caused by Flood Disaster,” IOP Conf. Ser. Earth Environ. Sci., vol. 239, p. 012034, Feb. 2019, doi: 10.1088/1755-1315/239/1/012034.

A. Rasyid et al., “Pothole Visual Detection using Machine Learning Method integrated with Internet of Thing Video Streaming Platform,” in Proc. Int. Electron. Symp. (IES 2019), 2019, pp. 672–675, doi: 10.1109/ELECSYM.2019.8901626.

A. Tao, J. Barker, and S. Sarathy, “DetectNet: Deep Neural Network for Object Detection in DIGITS,” NVIDIA Developer Blog, 2016. https://developer.nvidia.com/blog/detectnet-deep-neural-network-object-detection-digits/ (accessed Sep. 30, 2020).

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, vol. 60, no. 6, pp. 84–90, 2017, doi: 10.1145/3065386.

J. Wang, X. Nie, Y. Xia, Y. Wu, and S. C. Zhu, “Cross-view action modeling, learning, and recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2014, pp. 2649–2656, doi: 10.1109/CVPR.2014.339.

M. D. Rodriguez, J. Ahmed, and M. Shah, “Action MACH: A spatio-temporal maximum average correlation height filter for action recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2008, doi: 10.1109/CVPR.2008.4587727.

B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei, “Human action recognition by learning bases of action attributes and parts,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2011, pp. 1331–1338, doi: 10.1109/ICCV.2011.6126386.

V. Delaitre, I. Laptev, and J. Sivic, “Recognizing human actions in still images: A study of bag-of-features and part-based representations,” in Proc. Br. Mach. Vis. Conf. (BMVC 2010), 2010, doi: 10.5244/C.24.97.

N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), 2005, pp. 886–893.




