
Graduate Student: Tsai, Jen-Hsiang (蔡仁翔)
Title: 2.5D Milling Feature Recognition Using YOLO11n-seg
Advisor: Chung, Chun-hui (鍾俊輝)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of Publication: 2025
Academic Year of Graduation: 113
Language: Chinese
Number of Pages: 83
Keywords: Machining feature recognition, Machine learning, YOLO11
In traditional machining process planning, the sequencing of operations, the selection of tools, and the machining strategy all rely on the engineer's experience, and even an experienced engineer may not find the most suitable approach within a short time; researchers have therefore turned to the development of automated process planning systems. The main goal of Computer Aided Process Planning (CAPP) is to convert a part's design file into manufacturing instructions, and machining feature recognition is critical to tool selection and machining strategy: a through hole, for example, is produced with a drill, while a slot is machined with a slot mill. Existing image-based machining feature recognition models require multiple images as input and must integrate the results to obtain the part's machining features; some depend on additional algorithms to handle intersecting features, and few determine the machining direction of the recognized features.

To address these problems, this study proposes a new method for machining feature recognition and machining direction determination. The part is first divided into 256 layers according to the workpiece dimensions; sectional views along the three principal axes are obtained in sequence, averaged, and stitched into a single three-axis composite sectional image, as sketched below. The composite image is fed into YOLO11n-seg, an instance segmentation model, which separates the part's machining features one by one, and the masks inferred by the model give the exact extent and location of each feature for subsequent process planning. Each feature mask is combined with the original image through image post-processing and fed into a YOLO11n-cls model to determine the machining direction, and the outputs of the two models are integrated into the final recognition result. For single-feature recognition, the best performance was obtained with a 1024×1024 pixel input image, and the classification accuracy of 99.56% is the second highest among the compared methods in the literature. Including machining direction determination, the overall accuracy is 99.50%, and the whole recognition pipeline takes less than 300 milliseconds. For compound features, feeding 2048×2048 pixel images into the feature recognition model gives the highest F-score, 96.20%; resizing the post-processed recognition results to 1024×1024 pixels before machining direction recognition performs best, and whether correctness is judged by a mask or bounding-box Intersection over Union greater than 0.7, the overall pipeline accuracy exceeds 90%.
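To make the composite-image step concrete, the following is a minimal NumPy sketch. It assumes the part has already been voxelized into a 256×256×256 occupancy grid and that the three averaged views are simply placed side by side; the voxelization routine, the function name `three_axis_composite`, and the exact stitching layout are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def three_axis_composite(voxels: np.ndarray) -> np.ndarray:
    """Build a 3-axis composite sectional image from a binary voxel grid.

    `voxels` is an assumed (256, 256, 256) occupancy array:
    1 = material present, 0 = material removed.
    For each principal axis, the 256 cross-sections are averaged into one
    grayscale view, and the three views are stitched side by side.
    """
    assert voxels.shape == (256, 256, 256), "expects a 256-layer voxel grid"
    # Average the 256 sectional slices along each axis (values in [0, 1]).
    view_x = voxels.mean(axis=0)   # sections perpendicular to the X axis
    view_y = voxels.mean(axis=1)   # sections perpendicular to the Y axis
    view_z = voxels.mean(axis=2)   # sections perpendicular to the Z axis
    # Stitch the three averaged views into one image and scale to 8-bit.
    composite = np.concatenate([view_x, view_y, view_z], axis=1)
    return (composite * 255).astype(np.uint8)
```

The resulting grayscale image would then be resized to the model's input resolution (e.g. 1024×1024 or 2048×2048 pixels) before inference.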

This study proposes a new approach for machining feature recognition and machining direction determination. First, the part is divided into 256 layers according to the workpiece dimensions. Sectional views along the three principal axes are sequentially obtained, averaged, and then stitched together to form a three-axis composite sectional image. This composite image is fed into YOLO11n-seg to recognize and distinguish each machining feature, and the output masks represent the extent and location of each feature. Each machining feature mask is then combined with the original image, and the resulting images are fed into the YOLO11n-cls model to determine the machining direction. The outputs of the two models are integrated to produce the final machining feature recognition results. For single-feature recognition, the best performance was achieved with an input image size of 1024×1024 pixels, yielding a classification accuracy of 99.56%. Including machining direction determination, the overall accuracy reached 99.50%. For multiple-feature recognition, the highest F-score of 96.20% was obtained when 2048×2048 pixel images were used as input to the machining feature recognition model. The best performance was achieved after resizing the post-processed recognition results to 1024×1024 pixels for machining direction determination. Whether the correctness criterion was a mask or bounding-box Intersection over Union greater than 0.7, the overall process accuracy exceeded 90%. The proposed machining feature recognition method achieves good recognition performance and facilitates subsequent process planning.
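As a further illustration of the two-stage inference pipeline, the sketch below uses the Ultralytics YOLO API. The weight file names (`seg_best.pt`, `cls_best.pt`), the mask-based cropping, and the way the two model outputs are merged are assumptions for illustration; only the overall flow (YOLO11n-seg for feature masks, then YOLO11n-cls on the masked image for machining direction) follows the description above.

```python
import cv2
import numpy as np
from ultralytics import YOLO

# Hypothetical paths to custom-trained weights; not the thesis's actual files.
seg_model = YOLO("seg_best.pt")   # YOLO11n-seg trained on composite sectional images
cls_model = YOLO("cls_best.pt")   # YOLO11n-cls trained on machining-direction labels

def recognize(composite_path: str):
    """Recognize machining features and their machining directions."""
    results = []
    seg = seg_model.predict(composite_path, imgsz=2048)[0]
    image = cv2.imread(composite_path)
    if seg.masks is None:
        return results
    for box, mask in zip(seg.boxes, seg.masks.data):
        feature_class = seg.names[int(box.cls)]
        # Assumed post-processing: keep only the pixels under the feature mask.
        m = cv2.resize(mask.cpu().numpy(), (image.shape[1], image.shape[0]))
        cropped = image * (m[..., None] > 0.5)
        # Second stage: machining-direction classification on the masked image.
        cls = cls_model.predict(cropped, imgsz=1024)[0]
        direction = cls.names[int(cls.probs.top1)]
        results.append({"feature": feature_class,
                        "direction": direction,
                        "confidence": float(box.conf)})
    return results
```

Under the settings reported above, the segmentation stage would use 1024×1024 inputs for single features or 2048×2048 for compound features, with each masked crop resized to 1024×1024 for direction classification.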

Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
    1.1 Research Background
    1.2 Literature Review
    1.3 Research Objectives
    1.4 Thesis Organization
Chapter 2  Model Overview
    2.1 Convolutional Neural Networks
        2.1.1 Convolutional layer
        2.1.2 Pooling layer
        2.1.3 Fully connected layer
        2.1.4 Activation functions
        2.1.5 Dropout layer
        2.1.6 Loss functions
    2.2 Labeling Method
    2.3 Development of the YOLO Model Family
        2.3.1 Comparison of YOLO11n-seg with earlier versions and other instance segmentation models
Chapter 3  Experimental Design and Methods
    3.1 Single Machining Feature Dataset Construction
    3.2 Data Augmentation
    3.3 Compound Machining Features
    3.4 Self-Generated Annotation Files
    3.5 YOLO Model Training
    3.6 CNN and YOLO11n-cls Models for Machining Direction Determination
    3.7 Summary of Experimental Design
    3.8 Model Training and Testing
Chapter 4  Results and Discussion
    4.1 Single Machining Feature Recognition
        4.1.1 YOLO11n-seg for single machining feature recognition
        4.1.2 Machining direction model for single machining features
        4.1.3 Summary of single machining feature results
    4.2 Compound Machining Features
        4.2.1 YOLO11n-seg for compound machining feature recognition
        4.2.2 Machining direction model for compound machining features
        4.2.3 Summary of compound machining feature results
Chapter 5  Conclusions and Future Work
    5.1 Conclusions
    5.2 Future Work
References


Full-text availability: on campus from 2027-09-01; off campus from 2027-09-01. The electronic thesis has not yet been authorized for public release; please consult the library catalog for the print copy.