Human Operator Identification in a Collaborative Robot Workspace within the Industry 5.0 Concept
Vladyslav Yevsieiev
Department of Computer-Integrated Technologies, Automation and Robotics, Kharkiv National University of Radio Electronics, Ukraine
Amer Abu-Jassar
Department of Computer Science, College of Computer Sciences and Informatics, Amman Arab University, Amman, Jordan
Svitlana Maksymova
Department of Computer-Integrated Technologies, Automation and Robotics, Kharkiv National University of Radio Electronics, Ukraine
Dmytro Gurin
Department of Computer-Integrated Technologies, Automation and Robotics, Kharkiv National University of Radio Electronics, Ukraine
Keywords: Industry 5.0, Collaborative Robot, Workspace, Computer Vision, Robot Manipulator, Operator Identification, Human Identification.
Abstract
This paper explores the process of identifying a human operator in a collaborative robot workspace, a task that is critical within the Industry 5.0 concept. Using modern computer vision methods and face recognition algorithms, a reliable mechanism for interaction between the operator and the robot is provided. Experimental results confirm the high accuracy of identification, which enables safe and efficient operation of robotic systems in real production conditions. The article emphasizes the importance of integrating such technologies to increase the level of automation and to create intuitive, adaptive production environments that meet the principles of Industry 5.0.
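To make the identification step described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: it detects faces in a workspace camera feed and compares them against a stored reference image of an authorised operator. The camera index, the reference file name "operator_reference.jpg", and the use of OpenCV together with the open-source face_recognition library are assumptions introduced here for illustration only.

    # Illustrative sketch only: operator identification in a camera view of the
    # robot workspace. The reference image path and camera index are hypothetical.
    import cv2                 # pip install opencv-python
    import face_recognition    # pip install face_recognition

    # Encode the assumed reference photo of the authorised operator.
    reference_image = face_recognition.load_image_file("operator_reference.jpg")
    reference_encoding = face_recognition.face_encodings(reference_image)[0]

    capture = cv2.VideoCapture(0)  # workspace camera (index is an assumption)

    while True:
        ok, frame = capture.read()
        if not ok:
            break

        # face_recognition expects RGB images; OpenCV delivers BGR frames.
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        face_locations = face_recognition.face_locations(rgb_frame)
        face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)

        for (top, right, bottom, left), encoding in zip(face_locations, face_encodings):
            # Compare the detected face against the operator reference encoding.
            match = face_recognition.compare_faces([reference_encoding], encoding,
                                                   tolerance=0.6)[0]
            label = "operator" if match else "unknown"
            color = (0, 255, 0) if match else (0, 0, 255)
            cv2.rectangle(frame, (left, top), (right, bottom), color, 2)
            cv2.putText(frame, label, (left, top - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)

        cv2.imshow("Collaborative robot workspace", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    capture.release()
    cv2.destroyAllWindows()

In a production cell, the matching result would typically gate the robot's operating mode (for example, switching to a reduced-speed collaborative mode when an identified operator enters the workspace); that coupling is outside the scope of this sketch.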