
Graduate Student: Cao, Wei-Ting (曹維廷)
Thesis Title: Multi-Class Novelty Detection with Generated Hard Novel Features
Advisor: Chu, Wei-Ta (朱威達)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Academic Year of Graduation: 109 (2020–2021)
Language: English
Number of Pages: 29
Keywords: generative adversarial network, multi-class novelty detection, threshold-based detector
  • Although convolutional neural networks already perform well in image recognition, they clearly suffer from a persistent overconfidence problem. For example, when a novel sample is fed into the network, a CNN often misclassifies it as one of the known classes and assigns it a high confidence score. Multi-class novelty detection is therefore an important step of image classification at test time. In this thesis, we propose a method that generates hard novel features with a generative adversarial network to build a stronger novelty detector, without requiring any reference dataset. The generated features should lie around the boundaries between known classes and novel classes. These hard features pose a greater challenge to the novelty detector and force it to become stronger. In the experiments, we verify the effectiveness of hard novel features from several perspectives and show that the method achieves state-of-the-art multi-class novelty detection performance.

    Despite promising performance in image recognition, it has been shown that convolutional neural networks clearly have the overconfidence problem, i.e., they misclassify a novel sample into one of the known classes with high confidence. The task of multi-class novelty detection is thus important for detecting novel samples during inference. Without the need for a reference dataset to describe the distribution of novel samples, we propose to generate hard novel features via a generative adversarial network to facilitate constructing a powerful novelty detector. The generated features should lie around the boundaries between known classes and novel classes. They pose a bigger challenge for the novelty detector and consequently force the novelty detector to become stronger. We verify the effectiveness of the hard novel features from several perspectives, and show that this idea yields state-of-the-art multi-class novelty detection performance.
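    The core idea above, that generated features should produce maximally uncertain classifier output, since a feature on which the classifier cannot decide among the known classes sits near a decision boundary, can be illustrated with a minimal NumPy sketch. The function name `confidence_loss` and the KL-divergence-to-uniform formulation are assumptions for illustration only; they are not necessarily the thesis's exact objective (which, per the table of contents, combines an adversarial loss and a confidence loss).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_loss(logits):
    """KL(uniform || p) per sample, averaged over the batch.

    Hypothetical stand-in for a confidence loss: it is ~zero when the
    classifier's output on a generated feature is uniform over the K
    known classes (the feature lies near the decision boundary) and
    grows as the classifier becomes confident about any single class.
    """
    p = softmax(logits)
    K = p.shape[1]
    per_sample = -np.log(K) - np.log(p + 1e-12).mean(axis=1)
    return per_sample.mean()

# Boundary-like logits (uniform output) incur ~zero loss;
# overconfident logits are penalized.
boundary = np.zeros((4, 10))            # softmax gives uniform over 10 classes
confident = np.zeros((4, 10))
confident[:, 0] = 8.0                   # classifier very sure of class 0
print(confidence_loss(boundary))        # ~0
print(confidence_loss(confident))       # large positive value
```

    In a full training loop, the generator would presumably minimize a weighted sum of the GAN's adversarial loss and a term like this, steering generated features toward the known/novel boundary rather than deep inside any known class.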

    Table of Contents
    摘要 (Abstract in Chinese)
    Abstract
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1. Introduction
        1.1. Motivation
        1.2. Multi-Class Novelty Detection
        1.3. Hard Novel Features
        1.4. Contributions
        1.5. Thesis Organization
    Chapter 2. Related Works
        2.1. Out-of-Distribution Detection
        2.2. Anomaly Detection
        2.3. Novelty Detection
        2.4. Open-Set Recognition
    Chapter 3. The Proposed Approach
        3.1. Generating Hard Novel Features
            3.1.1. Overview
            3.1.2. Adversarial Loss
            3.1.3. Confidence Loss
            3.1.4. Overall Objective
        3.2. Incorporation with the Segregation Network
            3.2.1. Segregation Network
            3.2.2. Testing Scenario
    Chapter 4. Experimental Results
        4.1. Experiments
            4.1.1. Datasets
            4.1.2. Evaluation Protocol
            4.1.3. Experimental Results
    Chapter 5. Conclusion and Future Work
        5.1. Conclusion
        5.2. Future Works
    References


    Available on campus: 2026-07-07
    Available off campus: 2026-07-07
    The electronic thesis has not yet been authorized for public release; please consult the library catalog for the print copy.