| Author: | 許譯云 Hsu, Yi-Yun |
|---|---|
| Thesis title: | 基於深度增強式學習與B-RRT*演算法之移動機器人路徑規劃與自主導航研究 (Study on Path Planning and Autonomous Navigation of Mobile Robots based on Deep Reinforcement Learning and B-RRT*) |
| Advisor: | 鄭銘揚 Cheng, Ming-Yang |
| Degree: | Master |
| Department: | 電機資訊學院 電機工程學系 (College of Electrical Engineering and Computer Science, Department of Electrical Engineering) |
| Year of publication: | 2022 |
| Academic year of graduation: | 110 |
| Language: | Chinese |
| Number of pages: | 107 |
| Keywords (Chinese): | 路徑規劃、自主導航、深度增強式學習、RRT、ROS、移動型機器人 |
| Keywords (English): | Path Planning, Autonomous Navigation, Reinforcement Learning, RRT, ROS, Mobile Robot |
Driven by the global wave of automation brought about by Industry 4.0, a wide variety of robotic systems have been introduced into production lines, and the recent rise of artificial intelligence has pushed automation technology toward intelligent operation. Autonomous navigation of mobile robots is widely applied in fields such as warehouse logistics, front-desk service, cleaning, and industrial manufacturing. It is a complex task that requires the robot both to react to changes in a dynamic environment and to follow a reference path or position information to reach its destination. To ensure that a mobile robot can respond appropriately to diverse environmental changes, this thesis develops an autonomous navigation system based on deep reinforcement learning (DRL). Compared with traditional methods, DRL offers better stability and flexibility, but it tends to fail in large-scale scenes or in scenes with particular structures such as a circular corridor. For this reason, this thesis employs the B-RRT* algorithm to generate waypoints along a global path; these waypoints serve as stage goals for the DRL agent and guide the mobile robot to its destination. Simulation and real-world experimental results verify the feasibility and effectiveness of the proposed approach.
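To make the hybrid scheme described in the abstract concrete, the following is a minimal Python sketch of the core idea: waypoints produced by a global planner (here assumed to come from B-RRT*) are fed one at a time to a learned local policy as stage goals. Everything in this sketch is illustrative and hedged: `DRLPolicy`, `get_pose`, `get_scan`, `apply_command`, and the observation encoding are hypothetical placeholders and do not reproduce the thesis's actual implementation, which is built on ROS and a trained DRL agent as described in the full text.

```python
import numpy as np

class DRLPolicy:
    """Placeholder interface for a trained DRL local policy (hypothetical)."""
    def act(self, observation: np.ndarray) -> np.ndarray:
        """Return a velocity command (v, w) for the given observation."""
        raise NotImplementedError

def navigate_with_waypoints(policy, global_waypoints, get_pose, get_scan,
                            apply_command, goal_tolerance=0.3, max_steps=2000):
    """Drive toward the final goal by handing global-planner waypoints to the
    DRL policy one at a time as stage (sub-)goals.

    global_waypoints: ordered list of (x, y) points from a global planner
                      such as B-RRT*.
    get_pose/get_scan/apply_command: hypothetical robot-interface callbacks.
    """
    waypoint_idx = 0
    for _ in range(max_steps):
        pose = np.asarray(get_pose())            # current (x, y) position
        subgoal = np.asarray(global_waypoints[waypoint_idx])

        # Switch to the next waypoint once the current one is reached.
        if np.linalg.norm(pose - subgoal) < goal_tolerance:
            if waypoint_idx == len(global_waypoints) - 1:
                return True                      # final goal reached
            waypoint_idx += 1
            continue

        # Observation = local sensing + relative sub-goal; the exact encoding
        # used in the thesis may differ from this illustration.
        relative_goal = subgoal - pose
        observation = np.concatenate([np.asarray(get_scan()), relative_goal])
        v, w = policy.act(observation)           # linear / angular velocity
        apply_command(v, w)
    return False                                 # step budget exhausted
```

The design point this sketch is meant to show is the division of labor: the global planner handles large-scale or structurally difficult scenes (e.g., a circular corridor) by supplying intermediate targets, while the DRL policy handles local, reactive motion toward whichever target is currently active.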