
Author: Li, Kuang-Yu (李廣祐)
Title: Prototypical Network with Attention-Based Encoder for Driver Identification Application (結合原型網路與注意力基礎之編碼器的駕駛識別應用)
Advisor: Chang, Sheng-Mao (張升懋)
Co-advisor: Lee, Wei-Shun (李威勳)
Degree: Master
Department: College of Management – Institute of Data Science
Year of publication: 2021
Academic year of graduation: 109
Language: English
Pages: 65
Keywords: Few-Shot Learning, attention mechanism, driver identification, user-based insurance
Hits: 151; Downloads: 28
    Driver identification has become an area of increasing interest in recent years, especially for data-driven approaches, because biometric-based identification methods may raise privacy concerns. This study proposes a neural network architecture, the Attention-based Encoder (AttEnc), which uses an attention mechanism for driver identification and requires fewer model parameters than current methods. Moreover, most existing studies do not address the problem of data shortage in driver identification, and their methods are inflexible when encountering unknown drivers. This study therefore combines the Prototypical Network with the Attention-based Encoder into P-AttEnc, applying Few-Shot Learning to overcome data shortage and to improve model generalization. Experiments show that AttEnc identifies drivers with accuracies of 99.3%, 99.0%, and 99.9% on three datasets, each containing ten drivers, and that its prediction time is 44% to 79% faster than current driver identification models because it reduces the number of model parameters by 87.6% on average. P-AttEnc classifies drivers from few-shot data and extracts driver fingerprints to address data shortage, and it can also classify unknown drivers. The first and second experiments show that P-AttEnc classifies drivers with an accuracy of 69.8% in the one-shot scenario. The third experiment shows that P-AttEnc classifies unknown drivers with an average accuracy of 65.7% in the one-shot scenario, demonstrating the effectiveness of the proposed method.
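    The classification step described above — embed each driving-signal window, average a driver's few support embeddings into a per-driver prototype, and assign a query window to the nearest prototype — can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual P-AttEnc implementation: the `encode` function is a hypothetical stand-in for the attention-based encoder, and the data shapes are invented for the example.

```python
import numpy as np

def encode(window):
    # Stand-in for the attention-based encoder (AttEnc): any function that
    # maps a raw driving-signal window (time steps x features) to a
    # fixed-length embedding. Here we simply average over time.
    return window.mean(axis=0)

def build_prototypes(support_windows, labels):
    # One prototype per driver: the mean embedding of that driver's
    # few-shot support examples.
    classes = sorted(set(labels))
    protos = np.stack([
        np.mean([encode(w) for w, y in zip(support_windows, labels) if y == c],
                axis=0)
        for c in classes
    ])
    return classes, protos

def classify(query_window, classes, protos):
    # Assign the query to the class whose prototype is nearest
    # in Euclidean distance, as in a prototypical network.
    dists = np.linalg.norm(protos - encode(query_window), axis=1)
    return classes[int(np.argmin(dists))]
```

    Because only the prototypes depend on the support set, adding a previously unseen driver at test time only requires computing one more mean embedding, which is what makes this family of models flexible for unknown drivers.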

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1. INTRODUCTION
        1.1. Research Background
        1.2. Problem Description
            1.2.1. Number of Model Parameters
            1.2.2. Real-Time Applications
            1.2.3. Data Sources
            1.2.4. Data Shortage
            1.2.5. Model Generalization
        1.3. Solutions
        1.4. Research Structure
    Chapter 2. RELATED WORK
        2.1. Current Studies in Driver Identification
            2.1.1. Methods Based on Machine Learning
            2.1.2. Methods Based on Deep Learning
        2.2. Time Series Classification Methods
        2.3. Few-Shot Learning Methods
        2.4. Few-Shot Learning in the Transportation Domain
    Chapter 3. METHODOLOGY
        3.1. Attention-based Encoder (AttEnc)
            3.1.1. Multidimensional Input – Vehicle Dynamics
            3.1.2. Positional Embedding
            3.1.3. Input Embedding
            3.1.4. Attention Mechanism
            3.1.5. Residual Connection
            3.1.6. Layer Normalization
        3.2. Prototypical Network with Attention-based Encoder (P-AttEnc)
    Chapter 4. EXPERIMENTAL FRAMEWORK
        4.1. Data Description
            4.1.1. OcsLab Driving Dataset
            4.1.2. hciLab Driving Dataset
            4.1.3. Vehicular Data-Trace
        4.2. Data Preprocessing
        4.3. Normalization
        4.4. Window Slicing
        4.5. Comparison Models
        4.6. Experimental Setup
            4.6.1. Settings of Stage 1
            4.6.2. Settings of Stage 2
        4.7. Workflow
    Chapter 5. EXPERIMENTAL RESULTS
        5.1. Results of Stage 1
        5.2. Results of Stage 2
            5.2.1. Experiment 1
            5.2.2. Experiment 2
            5.2.3. Experiment 3
    Chapter 6. CONCLUSION
        6.1. Discussion
        6.2. Future Work
    References


    Full-text availability — On campus: immediately available; Off campus: immediately available