| Author: | 池書維 (Chih, Shu-Wei) |
|---|---|
| Thesis title: | 利用卷積網路提高光學繞射極限之影像解析度 (Improving Image Resolution Beyond Optical Diffraction Limit with Convolutional Neural Network) |
| Advisor: | 張世慧 (Chang, Shih-hui) |
| Degree: | Master |
| Department: | College of Science, Department of Photonics |
| Year of publication: | 2020 |
| Graduation academic year: | 108 (ROC calendar) |
| Language: | Chinese |
| Pages: | 55 |
| Keywords: | Resolution, Optimizer, Backpropagation, CNN |

In this thesis, we use a convolutional neural network (CNN) to estimate the separation between two spots from their overlapping diffraction patterns, with the aim of resolving distances beyond the optical diffraction limit; four different diffraction-pattern designs are used as input images. We first introduce the probability concepts the model needs and show how they enter its mathematical formulation. We then turn to deep learning: the effect of the activation function on the model, the loss function that measures the gap between the model output and the target, and the optimizer that fine-tunes the parameters to close that gap, where different optimizers yield different results. Backpropagation is used to accelerate the optimizer's iterative updates. After introducing the convolution operation and some related techniques, we combine the activation function, loss function, optimizer, backpropagation, and convolution operation into a complete CNN model. Finally, we feed the four types of diffraction-pattern images into the model, vary its parameters, observe the resulting changes, and examine how to improve the model's accuracy.
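
As an illustration only, and not the thesis's actual implementation, the sketch below shows how the pieces named in the abstract could fit together in a PyTorch-style model: convolution layers with ReLU activations, a cross-entropy loss comparing output and target, an Adam optimizer, and backpropagation driving each parameter update. The framework choice, the class name `SeparationCNN`, the 64x64 input size, and the number of separation classes are all assumptions made for this example.

```python
# Minimal sketch (assumptions only, not the thesis code) of a CNN that
# classifies the separation between two diffraction spots from an image.
import torch
import torch.nn as nn

N_CLASSES = 10  # assumed number of discrete spot-separation classes

class SeparationCNN(nn.Module):  # hypothetical model name
    def __init__(self, n_classes: int = N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution on a grayscale diffraction image
            nn.ReLU(),                                    # activation function
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),  # assumes 64x64 input images
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SeparationCNN()
criterion = nn.CrossEntropyLoss()                           # loss: gap between output and target
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # optimizer fine-tunes the parameters

def train_step(images, labels):
    """One update: forward pass, loss, backpropagation, parameter update."""
    optimizer.zero_grad()
    outputs = model(images)            # predicted class scores for each separation
    loss = criterion(outputs, labels)
    loss.backward()                    # backpropagation computes the gradients
    optimizer.step()                   # optimizer applies the update
    return loss.item()

# Dummy batch just to show the shapes: 8 grayscale 64x64 diffraction images.
dummy_images = torch.randn(8, 1, 64, 64)
dummy_labels = torch.randint(0, N_CLASSES, (8,))
print(train_step(dummy_images, dummy_labels))
```

Treating the separation as a discrete class is only one possible reading of the abstract; a regression head with a mean-squared-error loss on the physical distance would fit the described pipeline equally well.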
On-campus full text: open access from 2025-06-30.