
Author: 黃盈慈 (Huang, Ying-Tzu)
Thesis Title: Design and Implementation of the SmartFence System Based on Object Detection Technology (基於物件偵測之智慧圍籬系統設計與實作)
Advisor: 蔡孟勳 (Tsai, Meng-Hsun)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2023
Academic Year of Graduation: 111
Language: English
Number of Pages: 39
Chinese Keywords: 智慧校園、物件偵測、物聯網、校園安全
English Keywords: Smart Campus, Object Detection, IoT, Campus Safety

    中文摘要 (translated): In recent years, campus safety incidents have occurred frequently and have drawn growing public attention. To improve campus safety, we deployed CK-fence, a campus safety system, in a real campus environment. Using IoT technology, CK-fence effectively integrates existing surveillance camera resources, allowing campus security staff to manage a large surveillance system with less effort. The system also applies object detection technology to help security staff define sensitive areas; when someone enters such an area, an alert message is sent immediately. In this thesis, we discuss the design of the CK-fence system, report its resource usage, and provide a method for estimating the resources required to deploy the system at a large scale. We also evaluate the accuracy and latency of the system and, based on the experimental results, offer recommendations for parameter settings.
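
    The large-scale resource estimate mentioned above is developed in the thesis itself and is not reproduced on this record page. As a rough illustration only, the sketch below shows the kind of back-of-the-envelope calculation such an estimate involves; the function name and every figure in it are hypothetical assumptions, not numbers from the thesis.

        import math

        def detector_instances_needed(num_cameras, fps_per_camera, detector_fps):
            """Estimate how many object-detection instances a deployment needs when
            every camera stream is sampled at fps_per_camera and a single instance
            sustains detector_fps inferences per second (illustrative figures only)."""
            total_load = num_cameras * fps_per_camera      # frames to analyse per second
            return math.ceil(total_load / detector_fps)    # round up to whole instances

        # Example: 40 cameras sampled at 2 fps against a detector sustaining
        # 30 inferences per second -> ceil(80 / 30) = 3 detector instances.
        print(detector_instances_needed(num_cameras=40, fps_per_camera=2, detector_fps=30))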

    Abstract: In recent years, campus safety has become a critical issue due to the prevalence of security incidents on campus. Governments and schools have invested heavily in surveillance cameras as one of their main prevention strategies. Although cameras are now widespread, many limitations remain; for example, school guardians are often unable to quickly locate the relevant camera when an incident occurs. To ease these restrictions and improve campus security, we propose CK-fence, a system that is practicable in real-world scenarios. By integrating existing surveillance resources through IoT technology, CK-fence allows security guards to manage a large surveillance deployment easily, and its object detection capability lets them define sensitive areas on campus. In addition, LINE notifications are used to deliver alerts efficiently when an intruder is detected in a restricted area. We evaluated resource usage, accuracy, and notification latency, and, beyond implementing the CK-fence system at campus scale, we propose a procedure for estimating the resources a larger deployment would require. Furthermore, we propose a procedure for determining the confidence threshold that accounts for the distance from the target, the visual complexity of the scene, and the trade-off between precision and recall.
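
    The alert path described in the abstract (detect objects, test whether a detection falls inside a user-defined sensitive area, and push a notification) can be summarized in a short sketch. The detection tuple format, the use of the bounding-box center as the intruder's position, the confidence threshold value, and the assumption that "line notification" refers to the LINE Notify service are all illustrative choices, not the thesis's actual implementation.

        import requests

        LINE_NOTIFY_URL = "https://notify-api.line.me/api/notify"  # LINE Notify endpoint (assumed)
        LINE_TOKEN = "YOUR_ACCESS_TOKEN"                            # placeholder access token
        CONF_THRESHOLD = 0.5                                        # illustrative; the thesis tunes this value

        def point_in_polygon(x, y, polygon):
            """Ray-casting test: is the point (x, y) inside polygon [(x1, y1), ...]?"""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > y) != (y2 > y):                  # edge crosses the horizontal ray
                    x_cross = (x2 - x1) * (y - y1) / (y2 - y1) + x1
                    if x < x_cross:
                        inside = not inside
            return inside

        def check_frame(detections, sensitive_polygon, camera_id):
            """Send an alert for every confident 'person' detection inside the fenced area."""
            for label, confidence, (x, y, w, h) in detections:
                if label != "person" or confidence < CONF_THRESHOLD:
                    continue
                cx, cy = x + w / 2, y + h / 2             # box center as the person's position
                if point_in_polygon(cx, cy, sensitive_polygon):
                    requests.post(
                        LINE_NOTIFY_URL,
                        headers={"Authorization": f"Bearer {LINE_TOKEN}"},
                        data={"message": f"Intruder detected by camera {camera_id} "
                                         f"(confidence {confidence:.2f})"},
                    )

    Raising CONF_THRESHOLD in such a sketch trades missed detections (lower recall) for fewer false alarms (higher precision), which is the trade-off the threshold-selection procedure mentioned in the abstract addresses.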

    Contents:
    中文摘要 / Abstract / Acknowledgements / Contents / List of Tables / List of Figures
    1 Introduction
    2 Related Work
    3 Implementation
      3.1 System Components
      3.2 Work Flow
    4 Performance Evaluation
      4.1 Performance Evaluation
      4.2 System Latency
      4.3 Accuracy
    5 Conclusions
    References


    Full-text availability: On campus: available immediately; Off campus: available immediately.