
Graduate Student: Khanum, Abida (南比達)
Thesis Title: Anticipating Vision Sensor-based Lane-Following for Autonomous Driving via Multi-Modal Multiple Motion Control
Advisors: Yang, Chu-Sing (楊竹星); Shieh, Ce-Kuen (謝錫堃)
Co-Advisor: Lee, Chao-Yang (李朝陽)
Degree: Doctor
Department: College of Electrical Engineering & Computer Science - Institute of Computer & Communication Engineering
Year of Publication: 2023
Graduation Academic Year: 112
Language: English
Number of Pages: 75
Keywords: Autonomous Vehicle, Deep Learning, Driving Simulator Environment, Motion Planning, Lane Maintaining, Convolutional Neural Network, Long Short-Term Memory
Usage Counts: Views: 121; Downloads: 11
  • The field of autonomous vehicles (AVs) has recently attracted substantial research attention, and motion planning (MP) for driving control has taken a pivotal role in navigating the challenges of on-road environments. Within this landscape, deep learning (DL), a subset of machine learning driven by neural networks, holds a prominent position. This thesis first presents an updated overview of DL theories, methodologies, and applications, analyzing DL-based decision-making architectures for motion tasks (MTs) such as lane assistance, lane-following, overtaking, collision avoidance, emergency braking, and motion planning, and discussing the main difficulties associated with autonomous driving. Within this field, researchers have proposed diverse decision-making systems that forecast multiple controls from varied datasets, aiming to enhance results across different MTs. The thesis then introduces a framework for integrated multi-task learning termed Multi-Modal Dense-LSTM (MM-DL), in which a deep network combining dense layers and long short-term memory (LSTM) is employed for lane-following on roads. Normalized input images are fed into a three-unit dense output stage, followed by an LSTM output layer as the final stage; the three LSTM output units serve as the MTs, enabling MM-DL to anticipate three distinct motion decisions: steering angle, speed, and throttle. Predicting these tasks concurrently yields a notable reduction in time complexity, to less than 5 ms per prediction. The study also surveys popular open-access datasets gathered from simulators and real-world roadways, covering diverse autonomous driving objectives; the experimentation phase employed datasets from both the AirSim simulator and real-world scenarios, with steering, throttle, and speed predicted as output parameters to evaluate the efficacy of the network. In-depth discussions are provided of difficulties involving software and technology, security, computational efficiency, cost, data balancing, multi-task learning, and technology-related future directions, forecasting emerging trends poised to shape the evolution of the MP domain. As a comprehensive guide, the thesis serves both academia and industry practitioners, elucidating the dynamic interplay between deep learning, autonomous driving, and motion planning. Extensive experiments focus on the lane-keeping task, employing two control mechanisms, and benchmark the proposed MM-DL against existing methods. The experiments underscore the significant superiority of MM-DL, with accuracies of approximately 98% for steering angle, 99% for speed, and 98% for throttle control.
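  The record itself contains no code; the following is a minimal Keras sketch of the kind of dense-plus-LSTM multi-output network the abstract describes, with one output head per motion task. The layer widths, input resolution, five-frame sequence length, and 32-unit LSTM heads are illustrative assumptions, not the thesis's actual configuration.

    # Hypothetical sketch of a Dense+LSTM multi-task network in the spirit of MM-DL.
    # All sizes below are assumptions for illustration, not the thesis's configuration.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    SEQ_LEN, H, W, C = 5, 66, 200, 3  # assumed: 5-frame sequences of normalized RGB images

    frames = layers.Input(shape=(SEQ_LEN, H, W, C), name="camera_frames")

    # Per-frame convolutional feature extractor, applied across the time axis.
    cnn = tf.keras.Sequential([
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
    ])
    feats = layers.TimeDistributed(cnn)(frames)

    # Dense trunk narrowing to three units, one per motion task, as the abstract describes.
    trunk = layers.TimeDistributed(layers.Dense(3, activation="relu"))(feats)

    # One LSTM output head per task: steering angle, speed, throttle.
    steering = layers.Dense(1, name="steering")(layers.LSTM(32)(trunk))
    speed    = layers.Dense(1, name="speed")(layers.LSTM(32)(trunk))
    throttle = layers.Dense(1, name="throttle")(layers.LSTM(32)(trunk))

    model = Model(frames, [steering, speed, throttle])
    model.compile(optimizer="adam", loss="mse")  # MSE matches the evaluation reported below

  Because the three heads share one forward pass through the convolutional and dense trunk, a single inference produces all three control outputs at once, which is the source of the concurrent-prediction speedup the abstract claims.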
Mean squared errors between the predicted values and the actual ground truth are also reported, yielding 0.0250, 0.0210, and 0.0242 for steering angle, speed, and throttle, respectively. The findings show that the proposed framework delivers effective and precise motion planning for autonomous lane-following, contributing to the field of self-driving technology.
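  For concreteness, a minimal sketch of the per-task evaluation reported above, assuming predictions and ground truth are available as NumPy arrays. The fixed tolerance used to turn regression error into an "accuracy" is an assumption, since the record does not define how the accuracy figures were computed.

    # Minimal evaluation sketch: MSE and a tolerance-based accuracy per task.
    # The tolerance value is an assumption; the record does not define one.
    import numpy as np

    def mse(pred: np.ndarray, truth: np.ndarray) -> float:
        """Mean squared error between predictions and ground truth."""
        return float(np.mean((pred - truth) ** 2))

    def tolerance_accuracy(pred: np.ndarray, truth: np.ndarray, tol: float = 0.05) -> float:
        """Fraction of predictions within a fixed tolerance of the ground truth."""
        return float(np.mean(np.abs(pred - truth) <= tol))

    # Example with random stand-in data for the three tasks.
    rng = np.random.default_rng(0)
    truth = rng.uniform(-1, 1, size=(1000, 3))            # steering, speed, throttle
    pred = truth + rng.normal(0, 0.15, size=truth.shape)  # simulated predictions
    for i, task in enumerate(["steering", "speed", "throttle"]):
        print(task, "MSE:", mse(pred[:, i], truth[:, i]),
              "acc:", tolerance_accuracy(pred[:, i], truth[:, i]))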

    Abstract
    Acknowledgment
    Table of Contents
    Figures
    Tables
    Chapter 1 Introduction
        1-1. Background
        1-2. Problem Statement
        1-3. Contribution
        1-4. Organization
    Chapter 2 Related Work
        2-1. An Overview of Deep Learning for AV
        2-2. Decision-Making Utilizing Scenario-Based Approaches
    Chapter 3 DL-Based Scenario Approaches for Decision-Making
        3-1. Convolutional Neural Network
        3-2. DenseNet
        3-3. Recurrent Neural Network
            3-3.1 Motion Planning Architectures
    Chapter 4 Methodology
        4-1. Multi-Modal DenseNet-LSTM (MM-DL)
            4-1.1 Input
            4-1.2 DenseNet-LSTM Layers
            4-1.3 Output
    Chapter 5 Dataset
        5-1. Dataset
        5-2. Implementation Details
    Chapter 6 Experiment
        6-1. Evaluation Metric
        6-2. Comparison Methods
        6-3. Results
    Chapter 7 Research Challenges
    Chapter 8 Conclusion
    References


    Full-Text Availability: On campus: open access immediately; Off campus: open access immediately