
Graduate Student: 賀琦峰 (He, Qi-Fong)
Thesis Title: 基於ROS系統之深度強化學習地圖構建 (ROS System-Based Map Construction with Deep Reinforcement Learning)
Advisor: 賴槿峰 (Lai, Chin-Feng)
Degree: Master
Department: Department of Engineering Science, College of Engineering
Year of Publication: 2020
Academic Year of Graduation: 108
Language: Chinese
Number of Pages: 50
Keywords (Chinese): ROS系統、SLAM技術、局部最小化問題、機器人的路徑規劃問題
Keywords (English): ROS system, SLAM, local-minimum problem, robot path planning problem
    This study uses the ROS framework together with SLAM to propose a new artificial-intelligence exploration algorithm that addresses how a robot should explore an unfamiliar environment. A DQN-based exploration model is proposed so that the robot can quickly locate unexplored regions of an unknown environment, and a navigation model, also DQN-based, resolves the local-minimum problem that arises during exploration. Finally, the model-switching mechanism designed in this study switches between the two models as the situation requires, so that the two complementary models complete the task together.
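
    To make the switching idea concrete, the following is a minimal Python sketch, not the thesis implementation: the controller stubs, the stall-based local-minimum test, and all parameter values are assumptions made for this example (the thesis names its two models the BE Controller and BN Controller).

        import random

        class DQNControllerStub:
            """Placeholder for a trained DQN policy (exploration or navigation)."""
            def __init__(self, name, n_actions=5):
                self.name = name
                self.n_actions = n_actions

            def select_action(self, state):
                # A real controller would run a network forward pass here.
                return random.randrange(self.n_actions)

        class ModelSwitcher:
            """Switches between an exploration model and a navigation model.

            Assumption: a local minimum is declared when no new map cells
            have been explored for `patience` consecutive steps.
            """
            def __init__(self, explorer, navigator, patience=20):
                self.explorer = explorer
                self.navigator = navigator
                self.patience = patience
                self.stalled_steps = 0
                self.active = explorer

            def update(self, newly_explored_cells):
                if newly_explored_cells == 0:
                    self.stalled_steps += 1
                else:
                    self.stalled_steps = 0
                # Hand control to the navigation model while stuck, otherwise keep exploring.
                self.active = (self.navigator if self.stalled_steps >= self.patience
                               else self.explorer)

            def act(self, state):
                return self.active.select_action(state)

        switcher = ModelSwitcher(DQNControllerStub("explore"), DQNControllerStub("navigate"))
        switcher.update(newly_explored_cells=0)
        action = switcher.act(state=None)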

    Traditional exploration algorithms such as random coverage, artificial potential fields, grid-based methods, and template models mostly allow the robot to re-traverse already explored paths and to make minor collisions. This study feeds the map information obtained with SLAM back to the exploration model and records the number of steps the robot has taken in every grid cell of the map, which effectively addresses both problems.
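
    The per-cell step counting can be pictured with a small illustrative sketch in Python, assuming a NumPy array the size of the SLAM occupancy grid; the reward and penalty values are placeholders, not the parameters used in the thesis.

        import numpy as np

        class VisitGrid:
            """Counts how many times the robot has stepped on each map cell."""
            def __init__(self, height, width, new_cell_reward=1.0, revisit_penalty=0.2):
                self.visits = np.zeros((height, width), dtype=np.int32)
                self.new_cell_reward = new_cell_reward
                self.revisit_penalty = revisit_penalty

            def step_reward(self, row, col):
                """Shaped reward for the cell the robot has just entered."""
                if self.visits[row, col] == 0:
                    reward = self.new_cell_reward                           # reward reaching a new cell
                else:
                    reward = -self.revisit_penalty * self.visits[row, col]  # discourage re-traversal
                self.visits[row, col] += 1
                return reward

        grid = VisitGrid(height=64, width=64)
        print(grid.step_reward(10, 12))   # first visit:  1.0
        print(grid.step_reward(10, 12))   # revisit:     -0.2

    Feeding such a counter back into the exploration model's reward, together with the occupancy map itself, is one way to discourage repeated traversal of already explored paths.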

    In the experiments, this study compares its performance against slam-gmapping map-building exploration. In all four experimental environments selected for this study, the difference in total steps between the proposed method and slam-gmapping exploration is within 5%. Observing the time required to complete each stage task shows that, although the proposed method's growth trend is slower than that of the other two approaches, it outperforms them considerably once their map-building prerequisites are taken into account.

    This research uses the ROS system framework and SLAM to propose a new artificial intelligence exploration algorithm, applied to the problem of how robots should explore unfamiliar environments. An exploration model based on DQN is proposed to enable the robot to quickly find unexplored areas in an unfamiliar environment, and a navigation model, also based on DQN, solves the local-minimum problem that arises during exploration. Finally, the model-switching mechanism designed in this study switches between the two models under the corresponding circumstances, so that the two different models complement each other to complete the task.
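
    Both controllers are described as DQN-based. As general background (this is the textbook DQN objective, not an equation taken from the thesis), such a model is trained by minimizing a temporal-difference loss against a periodically frozen target network:

        L(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}} \left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \right)^{2} \right]

    where s is the robot's observation (here, map and sensor information), a the chosen action, r the reward, \gamma the discount factor, \mathcal{D} the experience-replay buffer, and \theta^{-} the target-network parameters.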

    Compared with traditional exploration algorithms, most of which allow the robot to repeatedly explore already-explored paths and to make slight collisions, this study feeds the map information obtained with SLAM back to the exploration model and records the number of steps the robot takes in each grid cell of the map, which effectively solves these two problems.

    In the experimental results, this study compares its performance against slam-gmapping exploration. In the four experimental environments selected for this study, the difference in total steps between the proposed method and slam-gmapping exploration is within 5%. By observing the time required to complete the stage tasks, we find that although the growth trend of this method is slower than that of the other two, it performs much better once the map-building prerequisites of those two methods are taken into account.

    Table of Contents:
      Abstract
      Extended English Abstract
      Acknowledgements
      Table of Contents
      List of Tables
      List of Figures
      Chapter 1. Introduction
        1.1. Research Motivation
        1.2. Research Objectives
        1.3. Chapter Overview
      Chapter 2. Background and Related Work
        2.1. Background
          2.1.1. History of Robot Path Planning
          2.1.2. Deep Reinforcement Learning Network Architectures
          2.1.3. SLAM
        2.2. Reinforcement Learning
          2.2.1. Value Functions
          2.2.2. Deep Q Network
        2.3. Path Planning with Deep Reinforcement Learning
          2.3.1. Survey and Comparison of Path Planning Methods
          2.3.2. ROS & Gazebo
          2.3.3. Neural-Network-Based Path Planning
          2.3.4. Pros and Cons of Neural-Network and Traditional Planning Methods
      Chapter 3. Methodology
        3.1. Environment Setup
        3.2. Problem Description
        3.3. Path Planning Architecture
          3.3.1. Model Switching Mechanism
        3.4. BE Controller Model Parameters
          3.4.1. Notation
          3.4.2. α (reward-function balancing parameter)
          3.4.3. βi (stage tasks)
          3.4.4. MGR (map reward value)
          3.4.5. flag(i,j) (per-cell step counting)
        3.5. BE Controller Model Design
          3.5.1. State
          3.5.2. Action
          3.5.3. DQN Model Architecture
          3.5.4. Reward Function
          3.5.5. Policy
        3.6. BN Controller
      Chapter 4. Results and Discussion
        4.1. Experiment Design
        4.2. Experimental Environments
        4.3. Experimental Procedure
        4.4. Experimental Results
          4.4.1. Exploration Model Experiments
          4.4.2. Performance Comparison Experiments
      Chapter 5. Conclusions and Future Work
        5.1. Conclusions
        5.2. Future Work
      References
      Appendix A. Pseudocode of Related Methods
        A.1. Pseudocode of DQN
      Appendix B. BN Controller
        B.1. Model Design
        B.2. Model Performance


    Full text available on campus from 2025-07-01; off campus from 2025-07-01.