
Student: Chih, Shu-Wei (池書維)
Thesis Title: Improving Image Resolution Beyond Optical Diffraction Limit with Convolutional Neural Network (利用卷積網路提高光學繞射極限之影像解析度)
Advisor: Chang, Shih-hui (張世慧)
Degree: Master
Department: Department of Photonics, College of Sciences
Publication Year: 2020
Academic Year of Graduation: 108 (2019-2020)
Language: Chinese
Number of Pages: 55
Keywords: Resolution, Optimizer, Backpropagation, CNN
Abstract (Chinese, translated): This thesis works with four different designs of diffraction wave patterns, and a model designed with a convolutional neural network (CNN) judges the distance between the waves at various separations in order to increase the resolution. We first introduce the probability concepts that are required and apply them in the model's mathematical expressions. Moving on to deep learning, we examine how the activation function influences a mathematical model, then use the loss function to compare the model's output against the target and fine-tune the model accordingly; the fine-tuning is done through an optimizer, and different optimizers give different results, while backpropagation is used to accelerate the optimizer's iterative updates. We then introduce the convolution operation and some of its techniques. Finally, we combine the activation function, loss function, optimizer, backpropagation, and convolution operation into a CNN model, feed the images of the four wave patterns into the model for observation, vary the parameters to see how the results change, and learn how to optimize the accuracy.
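
The thesis does not reproduce its code or its four pattern designs here. As a rough illustration of the kind of training data such a model consumes, the NumPy sketch below generates labelled two-spot images, approximating each diffraction-limited spot by a Gaussian point-spread function; the image size, spot width, noise level, and candidate separations are all assumptions made for this example.

```python
import numpy as np

def two_spot_image(separation, size=32, sigma=3.0, noise=0.02):
    """Image of two diffraction-limited spots a given distance apart.

    Each spot is approximated by a Gaussian point-spread function; the
    four actual pattern designs used in the thesis are not reproduced here.
    """
    y, x = np.mgrid[0:size, 0:size].astype(float)
    cy, cx = size / 2, size / 2

    def spot(x0):
        return np.exp(-((x - x0) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

    img = spot(cx - separation / 2) + spot(cx + separation / 2)
    return img / img.max() + noise * np.random.randn(size, size)

# One class per candidate separation (in pixels); the values are illustrative.
separations = [2.0, 4.0, 6.0, 8.0]
images = np.stack([two_spot_image(s) for s in separations for _ in range(100)])
labels = np.repeat(np.arange(len(separations)), 100)   # class index per image
```

Framed this way, each class corresponds to one candidate separation, so recognizing the distance between the two waves becomes a small classification problem over heavily overlapped patterns.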

Abstract (English): In this thesis, we utilize a convolutional neural network (CNN) to design a model that recognizes the distance between two spots from their overlapped diffraction patterns, in the hope of overcoming the optical diffraction limit. First, we introduce some concepts of probability and how they are computed in the model. Then we discuss the details of deep learning: we examine the effect of the activation function on a mathematical model, use a loss function to compare the output data with the target data, and tune the parameters in the model to bring them closer together. The optimizer is the mechanism that performs this tuning, and different optimizers give different results; backpropagation is used to accelerate the optimizer's iterations. We then introduce convolution operations and some techniques for applying them. Finally, we combine these methods into a CNN model and check its accuracy.
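
The abstract names the ingredients (convolution with padding, an activation function, pooling, a loss function, an optimizer, and backpropagation) without specifying the architecture. The PyTorch sketch below shows one plausible way these pieces combine into a small classifier over candidate separations; the layer sizes, the 32x32 input, and the four-class output are illustrative assumptions, not the thesis's actual model.

```python
import torch
import torch.nn as nn

# A small CNN that classifies the separation between two overlapped spots.
# The architecture is an illustrative guess; the thesis does not state its
# exact layer configuration in the abstract.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolution with padding
    nn.ReLU(),                                   # activation function
    nn.MaxPool2d(2),                             # pooling: 32x32 -> 16x16
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 4),                    # 4 candidate separations
)

loss_fn = nn.CrossEntropyLoss()                   # cross-entropy error
optimizer = torch.optim.Adam(model.parameters())  # one choice of optimizer

# One training step on a dummy batch of single-channel pattern images.
x = torch.randn(16, 1, 32, 32)
y = torch.randint(0, 4, (16,))                    # separation-class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)   # compare the output with the target
loss.backward()               # backpropagation computes all gradients
optimizer.step()              # the optimizer updates the parameters
```

Swapping torch.optim.Adam for a different optimizer, or changing the activation function, is exactly the kind of parameter variation the abstracts describe when studying how the accuracy changes.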

Table of Contents
Oral Defense Committee Approval
Chinese Abstract
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
List of Symbols
Chapter 1  Introduction
  1.1 Preface
  1.2 Research Motivation
  1.3 Thesis Outline
Chapter 2  Theoretical Background
  2.1 Probability Theory
    2.1.1 Marginal Probability
    2.1.2 Joint Probability
    2.1.3 Conditional Probability
    2.1.4 The Rules of Probability
  2.2 Bayesian Probabilities
  2.3 Gaussian Distribution
  2.4 Likelihood Function
  2.5 Maximum Likelihood
Chapter 3  Numerical Simulation Methods
  3.1 Deep Learning
  3.2 Activation Functions
    3.2.1 Sigmoid Function
    3.2.2 Hyperbolic Tangent (tanh)
    3.2.3 Rectified Linear Unit (ReLU)
    3.2.4 Softmax Function
  3.3 Loss Functions
    3.3.1 Mean Squared Error
    3.3.2 Cross-Entropy Error
  3.4 Optimizers
  3.5 Dropout
  3.6 Batch Normalization
  3.7 Backpropagation
  3.8 Convolution
    3.8.1 Stride
    3.8.2 Padding
    3.8.3 Pooling
Chapter 4  Results and Discussion
Chapter 5  Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
References
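
The activation and loss functions listed under Chapter 3 have standard textbook forms. As a reference point, the following NumPy sketch writes out the conventional definitions of sigmoid, tanh, ReLU, softmax, mean squared error, and cross-entropy error; these are the usual formulas, not code taken from the thesis.

```python
import numpy as np

# Conventional definitions of the activations and losses listed in Chapter 3;
# these are textbook forms, not code from the thesis.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))      # subtract the max for numerical stability
    return e / e.sum()

def mean_squared_error(y, t):
    return 0.5 * np.sum((y - t) ** 2)

def cross_entropy_error(y, t):
    return -np.sum(t * np.log(y + 1e-7))   # epsilon guards against log(0)

# Example: raw scores -> softmax probabilities -> loss against a one-hot target.
scores = np.array([2.0, 1.0, 0.1, -1.0])
target = np.array([1.0, 0.0, 0.0, 0.0])
probs = softmax(scores)
print(probs, cross_entropy_error(probs, target))
```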

Full-text availability: on campus, open access from 2025-06-30; off campus, not available.
The electronic thesis has not yet been authorized for public release; please consult the library catalog for the print copy.