| Graduate Student: | 吳紹琪 Wu, Shao-Chi |
|---|---|
| Thesis Title: | 基於YOLOv8之晶圓混合型缺陷模式識別方法研究 (Research on Mixed-Type Wafer Defect Pattern Recognition Method Based on YOLOv8) |
| Advisor: | 王明習 Wang, Ming-Shi |
| Degree: | Master |
| Department: | College of Engineering - Department of Engineering Science (In-Service Master's Program) |
| Year of Publication: | 2023 |
| Academic Year: | 111 |
| Language: | English |
| Number of Pages: | 55 |
| Keywords (Chinese): | 半導體製造、晶圓缺陷檢測、YOLOv8、晶圓圖分類、深度學習 |
| Keywords (English): | semiconductor manufacturing, wafer defect detection, YOLOv8, wafer map classification, deep learning |
Accurate detection and classification of wafer defects are critical in semiconductor manufacturing: they provide interpretable data for identifying the root causes of problems, on which quality-management and yield-improvement measures can be based. Traditionally, wafer defect classification has been carried out manually by expert engineers using computer-aided tools, an approach that is not only very time-consuming but also prone to low accuracy. In recent years, deep learning techniques have been intensively researched, applied, and promoted, and the use of deep learning algorithms to automatically identify wafer defects has attracted wide attention as a way to raise the defect detection rate and shorten detection time. Based on timely defect classification results, engineers can take corrective action against manufacturing process variation to lower the defect rate of finished wafers, ultimately achieving the goals of reduced quality cost and zero defects. This study evaluates the You Only Look Once (YOLO) architecture for classifying mixed-type wafer defect maps. Using 4,000 wafer maps with mixed-type defects, classification models were trained with YOLO version 8 (YOLOv8); the experimental results show that classification accuracy reaches 99.4%, confirming that YOLOv8 is highly efficient and helpful for classifying mixed-type defects on semiconductor wafers.
The accurate detection and classification of wafer defects are critical in semiconductor fabrication, providing interpretable data for identifying the root causes of problems. Based on this information, manufacturing engineers can carry out quality-management and yield-improvement activities to reduce the wafer defect rate. The traditional approach to wafer defect classification, conducted manually by expert engineers using computer-aided tools, is time-consuming and can be inaccurate. As a result, automated identification of wafer defects using deep learning algorithms has received substantial attention as a means of improving detection performance. Engineers can use timely defect categorization results to identify corrective measures for manufacturing process variation and prevent wafer defects, ultimately achieving the goals of reduced quality cost and zero defects. In our research, we evaluate the You Only Look Once (YOLO) architecture for classifying mixed-type wafer map defects. We train YOLOv8 classification models on 4,000 wafer maps with mixed-type defects, and the experimental results show that classification accuracy reaches 99.4%. The YOLOv8 image classification task is thus shown to be efficient and beneficial for classifying mixed-type defects on semiconductor wafers.
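As a rough illustration of the training setup described in the abstract, the sketch below fine-tunes a pretrained YOLOv8 classification model on a folder of wafer-map images using the Ultralytics Python API. The dataset path, class-folder layout, and hyperparameter values are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Minimal sketch: training a YOLOv8 classification model on wafer maps.
# Dataset layout, paths, and hyperparameters are illustrative assumptions.
from ultralytics import YOLO

# The Ultralytics classify task expects a directory with train/ and val/
# subfolders, each containing one folder per defect class, e.g.:
#   wafer_maps/train/center_edge_ring/img_0001.png
#   wafer_maps/val/donut_scratch/img_0420.png
DATASET_DIR = "wafer_maps"  # hypothetical path

# Start from a small pretrained classification backbone.
model = YOLO("yolov8n-cls.pt")

# Fine-tune on the wafer-map dataset; epochs and image size are assumptions.
model.train(data=DATASET_DIR, epochs=100, imgsz=224)

# Evaluate on the validation split and report top-1 accuracy.
metrics = model.val()
print(f"top-1 accuracy: {metrics.top1:.3f}")

# Predict the defect class of a single wafer map (hypothetical file).
result = model("wafer_maps/val/donut_scratch/img_0420.png")[0]
print(result.names[result.probs.top1])
```

The same workflow can be driven from the command line with `yolo classify train data=wafer_maps model=yolov8n-cls.pt epochs=100 imgsz=224`, which is how Ultralytics exposes the identical training entry point.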