| Graduate Student: | 林獻昱 Lin, Shian-Yu |
|---|---|
| Thesis Title: | 以特徵輔助的隨機森林模型協助弱晶片篩選以提高半導體品質與可靠性 (Weak Die Screening by Feature Prioritized Random Forest for Improving Semiconductor Quality and Reliability) |
| Advisor: | 謝明得 Shieh, Ming-Der |
| Co-advisor: | 吳誠文 Wu, Cheng-Wen |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Electrical Engineering |
| Year of Publication: | 2022 |
| Academic Year: | 110 |
| Language: | English |
| Pages: | 39 |
| Keywords (Chinese): | HTOL 測試, IC 測試, 機器學習, 隨機森林, 品質, 可靠度 |
| Keywords (English): | HTOL test, IC testing, machine learning, random forest, quality, reliability |
With the increasing demand for safety-critical products, the quality and reliability of semiconductor components are among the top priorities. In recent years, test data analytics using machine learning (ML) algorithms has been widely considered to have great potential for improving the quality and reliability of semiconductor chips. In this work, we examine a typical test flow of advanced semiconductor products and propose an ML-based weak die screening method for improving the quality and reliability of shipped products. We propose the feature prioritized random forest (FPRF) model, which fits smoothly into the existing test flow to complement the costly reliability tests. We evaluate the FPRF model on two SRAM products fabricated in advanced process technologies, performing feature analysis on the test data obtained from the final test (FT). After FPRF screening, we are able to screen out additional potential bad dies from those that passed the FT. For the first product, at an overkill rate of 12.93%, the bad die hit rate is as high as 96.55%; for the second product, at an overkill rate of 6%, the bad die hit rate reaches 100%. The proposed FPRF model can be explored for other products as well.
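To make the two metrics quoted above concrete, the sketch below screens FT-passing dies with a plain random forest. The thesis's feature prioritization step is not detailed in this abstract, so a stock scikit-learn `RandomForestClassifier` stands in for the FPRF model, and the FT measurements, labels, and decision threshold are all synthetic assumptions for illustration only.

```python
# Illustrative sketch only: a plain random forest stands in for the FPRF
# model, and all data, feature meanings, and thresholds are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic final-test (FT) parametric measurements for dies that passed FT
# (e.g. leakage, Vmin, timing margins -- hypothetical feature meanings).
n_dies = 2000
X = rng.normal(size=(n_dies, 8))
# Hypothetical ground truth: dies near a marginal corner later fail HTOL.
y_bad = (X[:, 0] + 0.5 * X[:, 1] > 2.2).astype(int)

train, test = slice(0, 1500), slice(1500, None)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[train], y_bad[train])

# Screen: flag dies whose predicted bad-die probability exceeds a threshold.
p_bad = model.predict_proba(X[test])[:, 1]
flagged = p_bad > 0.3  # threshold trades hit rate against overkill

# Metrics used in the abstract:
#   bad die hit rate = flagged true bad dies / all true bad dies
#   overkill rate    = flagged good dies / all true good dies
bad = y_bad[test] == 1
hit_rate = flagged[bad].mean() if bad.any() else 0.0
overkill_rate = flagged[~bad].mean()
print(f"hit rate: {hit_rate:.2f}, overkill rate: {overkill_rate:.2f}")
```

Raising the probability threshold lowers the overkill rate (fewer good dies sacrificed) at the cost of the hit rate, which is the trade-off the 12.93%/96.55% and 6%/100% operating points describe.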