Development and Enhancement of Autonomous Mobile Robots Using Reinforcement Learning: Improving Navigation and Obstacle Avoidance in Small-Scale Industrial Settings
Keywords:
Automation, Dynamic Object Collision Avoidance, Reinforcement Learning, Robotics, Small-Scale Manufacturing

Abstract
The rapid development and integration of Autonomous Mobile Robots (AMRs) have revolutionised industries by enhancing automation capabilities. A critical challenge in this evolution is achieving effective navigation and obstacle avoidance, both essential for deploying AMRs seamlessly in varied environments. This paper presents a detailed exploration of advancements in AMR navigation and obstacle avoidance through the application of reinforcement learning, focusing specifically on small-scale Sri Lankan manufacturing facilities. The study demonstrates the effectiveness of Q-learning in managing dynamic obstacles within a factory environment. The AMR avoided obstacles in 36 out of 50 test runs, a 72% success rate, and maintained an average distance of 12 cm from each obstacle, underscoring the algorithm's precision in maintaining safe navigation paths while adapting dynamically to environmental changes. Continuous monitoring by ultrasonic sensors, combined with iterative learning, enabled the robot to refine its decision-making process and navigate the environment efficiently. The paper also provides a comprehensive examination of conventional methods, tracing their historical development and assessing their role in addressing real-world challenges. The results highlight the significant improvements that reinforcement learning brings to navigation and dynamic object collision avoidance, particularly when integrated with sensor fusion and motor control technologies.
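The abstract credits Q-learning with the AMR's obstacle-avoidance behaviour. As a rough illustration of that mechanism, the sketch below trains a tabular Q-learning agent on a toy grid world with obstacle cells. The grid size, reward values, and hyperparameters here are assumptions for demonstration only; the paper's robot learned from ultrasonic sensor readings rather than grid coordinates.

```python
import random

# Illustrative Q-learning sketch for grid-based navigation with obstacle
# penalties. Grid size, rewards, and hyperparameters are assumed values
# for demonstration, not figures reported in the paper.

ACTIONS = ["up", "down", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action, goal, obstacles, size=5):
    """Apply an action on a size x size grid; return (next_state, reward, done)."""
    dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[action]
    nx = min(max(state[0] + dx, 0), size - 1)
    ny = min(max(state[1] + dy, 0), size - 1)
    if (nx, ny) in obstacles:
        return state, -10.0, False   # collision penalty: agent stays in place
    if (nx, ny) == goal:
        return (nx, ny), 10.0, True  # goal reached
    return (nx, ny), -1.0, False     # step cost encourages short, safe paths

def train(goal=(4, 4), obstacles=frozenset({(2, 2), (3, 1)}), episodes=500):
    """Learn a Q-table mapping (state, action) pairs to expected return."""
    q = {}
    for _ in range(episodes):
        state, done = (0, 0), False
        for _ in range(200):  # cap episode length
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)  # explore
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = step(state, action, goal, obstacles)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            # Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)
            q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
            state = nxt
            if done:
                break
    return q
```

After training, following the greedy action at each state traces a collision-free path to the goal; on a physical AMR the state would instead be a discretised reading from the ultrasonic sensors.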
License
Copyright (c) 2024 International Journal of Sciences: Basic and Applied Research (IJSBAR)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.