| Author: | 林耀三 Lin, Yao-San |
|---|---|
| Title: | 應用密度函數估計法提昇小樣本學習精確度 (Using Density Estimation to Improve the Learning with Small Data Sets) |
| Advisor: | 利德江 Li, Der-Chang |
| Degree: | Master |
| Department: | Department of Industrial Management Science, College of Management |
| Year of publication: | 2003 |
| Academic year of graduation: | 91 (ROC calendar) |
| Language: | English |
| Pages: | 84 |
| Keywords: | Density Estimation, Small-data-set Learning, Preventive Management, Intervalization, Virtual Data, Virtual Samples Generation, Intervalized Kernel Density Estimator |
| Views / downloads: | 125 / 1 |
ABSTRACT
This study is devoted to learning knowledge from a small data set using statistical learning theory. Since fewer exemplars usually lead to lower learning accuracy, many approaches employ a large number of exemplars in the learning process to achieve higher accuracy. However, this approach is inappropriate when research is constrained by cost and time. To overcome this difficulty, this research applies kernel methods of density estimation to improve learning with small data sets. Furthermore, Virtual Samples Generation with an Intervalized Kernel Density Estimator is proposed to produce enough information for learning. The example provided shows that this is an economical and efficient method of knowledge acquisition from small data sets.
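To illustrate the general idea, the sketch below generates virtual samples from a plain Gaussian kernel density estimate (a Parzen-window estimator, not the intervalized estimator the thesis proposes). The data values, function name, and Silverman bandwidth rule are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def kde_virtual_samples(data, n_virtual, bandwidth=None, rng=None):
    """Draw virtual samples from a Gaussian KDE fitted to a small 1-D
    data set (plain Parzen-window version; hypothetical helper)."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    n = data.size
    if bandwidth is None:
        # Silverman's rule-of-thumb bandwidth for a Gaussian kernel
        bandwidth = 1.06 * data.std(ddof=1) * n ** (-1 / 5)
    # Sampling from a KDE: pick a kernel center uniformly at random,
    # then perturb it with Gaussian noise scaled by the bandwidth.
    centers = rng.choice(data, size=n_virtual, replace=True)
    return centers + rng.normal(0.0, bandwidth, size=n_virtual)

small_set = [4.1, 4.8, 5.0, 5.3, 6.2]      # a small observed sample
virtual = kde_virtual_samples(small_set, 200, rng=0)
print(virtual.shape)  # -> (200,)
```

The 200 virtual samples follow the smoothed empirical distribution of the five observations, giving a learner more training points than the raw small set provides; this mirrors the virtual-sample-generation idea in the abstract, though the thesis's intervalized estimator differs in how the density is constructed.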
REFERENCES
陳隆昇 (Chen, Long-Sheng) (2000). 功能性虛擬母體觀念之發展及其在機器學習中之應用 [Development of the functional virtual population concept and its application in machine learning]. Master's thesis, National Cheng Kung University.
Abu-Mostafa, Y. S. (1993). An Algorithm for Learning from Hints. Proceedings of the 1993 International Joint Conference on Neural Networks, 1653-1656.
Ammaratunga, D. (1999). Searching for the Right Sample Size. The American Statistician, 53, 52-55.
Carter, M. A., Oxley, M. E. (1999). Evaluating the Vapnik-Chervonenkis dimension of artificial neural networks using the Poincaré polynomial. Neural Networks, 12, 403-408.
Carter, M. A. (1995). The mathematics of measuring capabilities of artificial neural networks. Ph.D. thesis, Air Force Institute of Technology, Wright-Patterson AFB, OH. DTIC ADA297408.
Cherkassky, V., Shao, X. (2001). Signal estimation and denoising using VC-theory. Neural Networks, 14, 37-52.
de Montricher, G. M., Tapia, R. A., & Thompson, J. R. (1975). Nonparametric maximum likelihood estimation of probability densities by penalty function methods. Annals of Statistics 3: 1329-1348.
Hastie, T., Tibshirani, R., & Buja, A. (1999). Flexible Discriminant and Mixture Models. In J. W. Kay & Titterington (Eds.), Statistics and Neural Networks: Advances at the Interface, 1-23.
Hastie, T., Tibshirani, R., & Friedman, J. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag.
Husmeier, D., Taylor, J. (1997). Predicting conditional probability densities of stationary stochastic time series. Neural Networks, 10(3), 479-497.
Husmeier, D., Taylor, J. (1998). Predicting conditional probability densities: Improved training scheme combining EM and RVFL. Neural Networks, 11(1), 89-116.
Kendall, M.G., Stuart A. (1973). The Advanced Theory of Statistics, Volume 2, 3rd Edition. London: Griffin.
Kulczycki, P., Schioler, H. (1998). Estimating Conditional Distribution by Neural Networks. Neural Networks Proceedings, 2, 1344-1349.
Lanouette, R., Thibault, J., & Valade, J. L. (1999). Process modeling with neural networks using small experimental datasets. Computers and Chemical Engineering, 23, 1167-1176.
Niyogi, P., Girosi, F., & Poggio, T. (1998). Incorporating Prior Information in Machine Learning by Creating Virtual Examples. Proceedings of the IEEE, 86(11), 275-298.
Miller, G., Horn, D. (1998). Probability Density Estimation Using Entropy Maximization. Neural Computation,10, 1925-1938.
Minsky, M., Papert, S. (1969). Perceptrons. Cambridge, MA: MIT Press.
Mitchell, T. M. (1997). Machine Learning. New York: McGraw-Hill.
Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33, 1065-1076.
Plutowski, M. E. P., Cottrell, G., & White, H. (1995). Experience with Selecting Exemplars from Clean Data. Neural Networks, 9, 273-294.
Rosenblatt, M. (1956). Remarks on some nonparametric estimates of a density function. Annals of Mathematical Statistics, 27, 832-837.
Ross, S. M. (1996). Simulation (2nd ed.). San Diego: Academic Press.
Schioler, H., Kulczycki, P. (1997). Neural Network for Estimating Conditional Distribution. IEEE Transactions on Neural Networks, 8(5), 1015-1025.
Silverman, B. W. (1990). Density Estimation for Statistics and Data Analysis. New York: Chapman and Hall.
Sontag, E. D. (1992). Sigmoids distinguish more efficiently than Heavisides. Neural Computation, 1, 470-472.
Sontag, E. D. (1992). Feedforward nets for interpolation and classification. Journal of Computer and System Sciences, 45, 20-48.
Tapia, R. A., Thompson, J. R. (1978). Nonparametric Probability Density Estimation. Baltimore: Johns Hopkins University Press.
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27, 1134-1142.
Vapnik, V. N. (2000). The Nature of Statistical Learning Theory. New York: Springer-Verlag.
Vapnik, V. N., Chervonenkis, A. Y. (1971). On the convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16, 264-280.
Watson, G. S. (1969). Density Estimation by Orthogonal Series. Annals of Mathematical Statistics, 40, 1496-1498.
Williams, P. M. (1996). Using Neural Networks to Model Conditional Multivariate Densities. Neural Computation, 8, 843-854.
Yang, H. H., Murata, N., & Amari, S. (1998). Statistical Inference: Learning in Artificial Neural Networks. Trends in Cognitive Science, 2(1), 4-10.