
Author: Wu, Chia-Le (吳迦樂)
Title: Applying Deep Reinforcement Learning to Dynamically Adjust Genetic Algorithm Parameters for the Surface Mount Technology Stage in a Semiconductor Packaging Factory and an Empirical Study
Advisor: Wang, Hung-Kai (王宏鍇)
Degree: Master
Department: Institute of Manufacturing Information and Systems, College of Electrical Engineering and Computer Science
Year of Publication: 2024
Academic Year of Graduation: 112 (2023-2024)
Language: English
Pages: 80
Keywords: Deep Reinforcement Learning, Genetic Algorithm, Surface Mount Technology

    This study focuses on the production scheduling problem of Surface Mount Technology (SMT), a widely used technique in semiconductor packaging for mounting electronic components onto PCBs. In high-density automated production, SMT enhances product reliability. Optimizing bottleneck efficiency in the SMT stage meets the company's scheduling goals and customer demands by minimizing the number of machine changeovers and the number of lots exceeding their due dates.
    During the research process, it was found that using only a genetic algorithm to solve this scheduling problem often leads to premature convergence, resulting in local optima without exploring all potential solutions. Consequently, this study developed a scheduling system based on a genetic algorithm that dynamically adjusts parameters through deep reinforcement learning (DRLGA) to optimize bottlenecks in SMT stages. The method considers several specific constraints, including the need to match each lot to an eligible machine based on device type, the splitting of parent lots, and whether a lot is work in process (WIP); it also accounts for lot arrival times, the associated time calculations, and multiple reflow passes. The learning module combines the two methods, using a deep Q-network (DQN) to dynamically adjust the crossover and mutation rates, thereby effectively improving overall scheduling efficiency. To validate the proposed method's effectiveness, DRLGA was compared with Particle Swarm Optimization (PSO), the Hybrid Dispatching Genetic Algorithm (HDGA), and the Genetic Algorithm (GA) under three data loads. The evaluation focused on the number of machine changeovers and the number of overdue lots, and the results show that DRLGA achieves the largest improvement on both measures.
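The abstract describes a genetic algorithm whose crossover and mutation rates are adjusted online by a reinforcement-learning agent rewarded by scheduling improvement. A minimal, self-contained sketch of that idea follows, using a toy one-max fitness in place of the thesis's SMT scheduling objective and a state-less tabular Q-update standing in for the DQN; all function names, the action set, and parameter values are illustrative assumptions, not taken from the thesis.

```python
import random

# Candidate (crossover rate, mutation rate) pairs the agent chooses among.
ACTIONS = [(0.6, 0.01), (0.8, 0.05), (0.9, 0.10)]

def fitness(ind):
    # Toy one-max objective: count of 1-bits (stand-in for a scheduling KPI).
    return sum(ind)

def evolve(pop, pc, pm, rng):
    # One GA generation: tournament selection, one-point crossover, bit-flip mutation.
    def pick():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = pick(), pick()
        if rng.random() < pc:
            cut = rng.randrange(1, len(p1))
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        else:
            c1, c2 = p1[:], p2[:]
        for c in (c1, c2):
            nxt.append([b ^ 1 if rng.random() < pm else b for b in c])
    return nxt[:len(pop)]

def drlga(n_bits=30, pop_size=20, generations=40,
          eps=0.2, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    q = [0.0] * len(ACTIONS)  # Q-value per parameter action (state-less for brevity)
    best = max(map(fitness, pop))
    for _ in range(generations):
        # Epsilon-greedy choice of (pc, pm) for this generation.
        if rng.random() < eps:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=q.__getitem__)
        pc, pm = ACTIONS[a]
        pop = evolve(pop, pc, pm, rng)
        new_best = max(map(fitness, pop))
        reward = new_best - best  # reward = improvement in best fitness
        q[a] += alpha * (reward + gamma * max(q) - q[a])
        best = max(best, new_best)
    return best
```

In the thesis the reward would instead reflect the scheduling objectives (machine changeovers and overdue lots), and the DQN would condition the rate choice on a state describing the current population.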

    Chinese Abstract / Abstract / Acknowledgment / Table of Contents / List of Tables / List of Figures
    Chapter 1. Introduction
      1.1 Research Background, Motivation and Importance
      1.2 Research Purpose
      1.3 Research Overview
    Chapter 2. Literature Review
      2.1 Scheduling Problems in Surface Mount Technology (SMT)
      2.2 Genetic Algorithm in Scheduling
      2.3 Reinforcement Learning
        2.3.1 Markov decision process (MDP)
        2.3.2 Learning mechanism
        2.3.3 Model and decision strategy
        2.3.4 Q-Learning and Deep Q Network
        2.3.5 Application of parameter tuning
    Chapter 3. Research Methodology
      3.1 Research Framework
      3.2 Problem Definition
        3.2.1 Assumptions for the problem
        3.2.2 Characteristics and constraints of the problem
        3.2.3 Key Performance Indicators
      3.3 Deep Reinforcement Learning Enhanced Genetic Algorithm
        3.3.1 Chromosome design
        3.3.2 Element design of reinforcement learning
        3.3.3 DQN learning method
        3.3.4 Combined mechanism
    Chapter 4. Empirical Study
      4.1 Data Description
      4.2 Experimental Design
        4.2.1 Data preprocessing and experimental environment
        4.2.2 Experimental methodology and parameter setting
      4.3 Experimental Validation
        4.3.1 Explanation of three scenario datasets
        4.3.2 Results of four methods in three production scenarios
    Chapter 5. Conclusion and Future Research
    References


    Full-text availability: on campus from 2029-07-30; off campus from 2029-07-30.
    The electronic thesis has not yet been authorized for public release; for the print copy, consult the library catalog.