
Author: Chang, Ching-Pao (張慶寶)
Title (Chinese): 以多變量統計流程管制與行動導向之瑕疵預測技術改善軟體流程
Title (English): Software Process Improvement using Multivariate Statistical Process Control and Action-Based Defect Prediction
Advisor: Chu, Chih-Ping (朱治平)
Degree: Doctor of Philosophy
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2008
Graduation Academic Year: 96 (ROC calendar; 2007-2008)
Language: English
Pages: 92
Keywords (Chinese, translated): defect prevention, defect prediction, software quality, action-based development, data mining, software process improvement
Keywords (English): software process improvement, action-based, defect prediction, defect prevention
Abstract (translated from Chinese):
    Reducing the variance of the software process is an essential step in software development, and statistical process control (SPC) is a method frequently used in this activity to monitor software processes and uncover problems. Although the factors behind software process problems differ with project attributes (such as developer experience, product complexity, supporting tools, and project scheduling), analyzing existing defect reports is the usual way to determine the causes of problems; analyzing a large volume of defect reports, however, is no simple task. Causal analysis is therefore used to locate the problems and to support the definition and execution of corrective actions.
    This study proposes an approach that applies multivariate statistical process control (MSPC) together with Action-Based Defect Prediction (ABDP) to detect problems arising in the software development process and to identify their causes; this information is also used to predict whether problems may recur in future development actions. The proposed method monitors multiple predefined measures simultaneously, and for the measures detected as having the greatest influence on out-of-control points, it further identifies the causes of the problems, with partial least squares (PLS) and multiple hypothesis testing used to validate the identified causes. The main benefit of the proposed approach is that highly correlated measures can be monitored at the same time, and the results also facilitate causal analysis and software process improvement.
    Beyond identifying the causes of problems, this study provides an Action-Based Defect Prediction technique that predicts possible defects in development actions by analyzing the actions already performed in the project and the corresponding defect reports to discover the characteristics of actions that cause defects. Software defects not only degrade software quality; the costs they incur can determine the success of a software project. Although detecting and removing defects at an early stage effectively reduces their impact, preventing defects before they occur saves the cost of defect detection and defect removal and further reduces that impact. Preventing defects from the early stages of development also lowers the variance of the software process and raises the stability of the software product. The discovered action characteristics can be used to predict the development actions about to be performed: if an upcoming action is predicted to produce defects, appropriate corrective actions can be applied immediately to prevent them. Identifying the development actions that may cause defects during a project is not easy; although most software organizations use prediction models for the upcoming development process, the data for building such models are usually collected from past projects, and the same action performed in different projects may yield different results, which makes prediction harder. The proposed ABDP approach applies data mining techniques to the records of performed actions and defect reports to obtain patterns of actions that are likely to produce defects, and these patterns are then used to predict whether upcoming actions will produce defects. In this study ABDP is also applied to data collected from a business project and achieves very good prediction results. Several methods for handling the collected action data are incorporated as well, such as Feature Subset Selection (FSS) to choose an appropriate attribute set for analysis, and under-sampling and over-sampling for imbalanced data sets.
    The main advantages of the proposed approach are that multiple predefined measures can be monitored simultaneously and that the identified causes of problems can be used to build prediction models. These models can predict the actions not yet performed within the same project, which reduces the error introduced by projects with different attributes. In addition, the resulting prediction models and analysis results can also support causal analysis.

Abstract (English):
    Reducing the variance of software processes is an essential activity in software development, and statistical process control (SPC) is a conventional means of monitoring software processes and detecting related problems. The factors causing software process problems vary with the attributes of a project, such as developer experience, product complexity, development tools, and schedule. Determining the actual causes of reported problems requires significant effort because of the large number of possible causes; causal analysis is a common approach to discovering the causes of defects so that corrective actions can be taken.
    This study presents an approach that uses Multivariate Statistical Process Control (MSPC) and Action-Based Defect Prediction (ABDP) to detect problems in the software process, identify their causes, and predict which subsequent actions are likely to cause problems. The proposed method monitors multiple measures of the software process simultaneously. The measures detected as contributing most to an out-of-control signal are used to identify the causes of problems, and partial least squares (PLS) and multiple hypothesis testing are utilized to validate the identified causes. The main advantage of the proposed approach is that correlated measures can be monitored simultaneously, and the results can be utilized to facilitate causal analysis and software process improvement.
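    The multivariate monitoring step can be pictured with Hotelling's T-squared statistic, which flags observations whose joint deviation across correlated measures exceeds a control limit. The following is a minimal sketch of that idea in Python, assuming a hypothetical baseline of in-control observations and a chi-square approximation for the control limit; the measures, data, and thresholds are illustrative, not taken from the thesis.

```python
# Minimal sketch: Hotelling's T^2 monitoring of correlated process measures.
# All data, measure names, and limits are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical baseline: 50 in-control observations of 3 correlated measures
# (say, review effort, defect density, and schedule deviation).
baseline = rng.multivariate_normal(
    mean=[10.0, 2.0, 0.8],
    cov=[[4.0, 1.0, 0.2], [1.0, 1.0, 0.1], [0.2, 0.1, 0.04]],
    size=50,
)
mu = baseline.mean(axis=0)
S_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def t_squared(x: np.ndarray) -> float:
    """T^2 distance of one observation from the in-control mean."""
    d = x - mu
    return float(d @ S_inv @ d)

# Chi-square upper control limit (a common large-sample approximation).
p = baseline.shape[1]
ucl = stats.chi2.ppf(0.9973, df=p)  # roughly the 3-sigma equivalent

new_obs = np.array([14.0, 4.5, 0.5])  # a fresh in-process observation
if t_squared(new_obs) > ucl:
    print("out-of-control signal: decompose T^2 to find contributing measures")
```

    With a small baseline sample, an F-distribution-based limit is more appropriate than the chi-square approximation, and decomposing the T-squared value over the individual measures is what hands an out-of-control signal to the cause-identification step.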
    In addition to identifying the causes of problems, this study applies the ABDP approach to in-process prediction of software development activities, in which data collected from performed actions are used to discover the characteristics of actions that are likely to cause defects. Rather than detecting defects at an early stage to reduce their impact, defect prevention stops defects from occurring in the first place, reducing the cost of defect detection and defect removal; preventing defects also reduces the variance of projects and increases the stability of the software process. The discovered characteristics can then be applied to predict the defects generated by subsequent actions, so that the necessary corrective actions can be taken to avoid those defects. A significant challenge for a project manager is to identify actions that may incur defects before the actions are performed. Prediction models are a conventional means of anticipating problems in the subsequent development process and are usually built from data collected from past projects; however, an action performed in different projects may yield different results, which are hard to predict in advance. The ABDP approach proposed in this study applies data mining techniques to the records of performed actions and reported defects to discover the patterns that are likely to cause defects; the discovered patterns can then be applied to predict which subsequent actions may result in defects. To demonstrate its effectiveness, the ABDP approach is applied to a business project, where it yields excellent prediction results. Several techniques are applied to the collected data to increase the precision of predictions for subsequent actions: Feature Subset Selection (FSS) selects an appropriate set of attributes, under-sampling is applied to the majority class, and over-sampling is applied to address rarity problems.
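    The ABDP workflow described above (encode performed actions as labelled records, balance the classes, select informative attributes, and learn defect-prone patterns) can be sketched as follows. The sketch uses scikit-learn's DecisionTreeClassifier as a stand-in for the classification step and random under-sampling for class balancing; the attributes, data, and parameters are hypothetical rather than the thesis's actual configuration.

```python
# Minimal ABDP-style sketch: learn defect-prone action patterns from
# performed-action records, then score an upcoming action. All data and
# attribute encodings are hypothetical.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# 200 performed actions x 6 encoded attributes (e.g., action type,
# developer experience level, module touched, ...).
X = rng.integers(0, 4, size=(200, 6)).astype(float)
# Label: 1 if the action later produced a reported defect (synthetic rule).
y = (X[:, 0] + X[:, 2] + rng.normal(0.0, 1.0, 200) > 5).astype(int)

# Under-sample the majority (non-defective) class to address imbalance.
maj, mnr = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
keep = np.concatenate([rng.choice(maj, size=len(mnr), replace=False), mnr])
X_bal, y_bal = X[keep], y[keep]

# Feature subset selection: keep the attributes most informative of defects.
fss = SelectKBest(mutual_info_classif, k=3).fit(X_bal, y_bal)

# Entropy-based decision tree as a stand-in for a C4.5-style learner.
model = DecisionTreeClassifier(criterion="entropy", max_depth=4)
model.fit(fss.transform(X_bal), y_bal)

# Predict whether a subsequent, not-yet-performed action looks defect-prone.
next_action = rng.integers(0, 4, size=(1, 6)).astype(float)
flag = model.predict(fss.transform(next_action))[0]
print("defect-prone" if flag else "likely clean")
```

    In an in-process setting the model would be rebuilt as new action records and defect reports accumulate within the same project, which is what distinguishes this use from prediction models trained on past projects.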
    The main benefit of the proposed approach is that multiple predefined measures can be monitored simultaneously. Once the causes of problems are identified, the analysis results can be utilized to build prediction models, which can then be used to evaluate subsequent actions within in-process projects and thereby reduce the variance of software projects. Additionally, the prediction models can be utilized for causal analysis to improve software processes accordingly.
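    As a final illustration, the cause-validation step that precedes model building can be sketched with partial least squares, which relates candidate cause measures to a defect outcome even when the measures are strongly correlated. This uses scikit-learn's PLSRegression on hypothetical data and is only a sketch of the idea; the thesis's own PLS formulation and validation procedure may differ.

```python
# Minimal sketch: screen candidate cause measures with PLS regression.
# Data and the "true" drivers are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))  # 4 candidate cause measures
# Synthetic ground truth: defects driven by measures 1 and 3 only.
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + rng.normal(0.0, 0.5, 60)

pls = PLSRegression(n_components=2).fit(X, y)

# Large-magnitude coefficients mark the measures worth carrying forward
# into (multiple-comparison-corrected) hypothesis testing.
for i, c in enumerate(pls.coef_.ravel()):
    print(f"measure {i}: coefficient {c:+.2f}")
```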

Table of Contents:
    Chapter 1 Introduction
    Chapter 2 Background
        2.1. Software Process Measurement
        2.2. Statistical Process Control
        2.3. Multivariate Statistical Process Control
        2.4. Partial Least Squares (PLS)
        2.5. Hypothesis Testing
        2.6. Software Defects
        2.7. The Prediction Techniques
        2.8. Summary
    Chapter 3 Related Work
        3.1. Causal Analysis
        3.2. Software Defect Prediction
        3.3. Software Defect Classification
        3.4. Summary
    Chapter 4 Multilevel Software Cause Identification
        4.1. The Architecture of the MSCI
        4.2. Research Design
            4.2.1. Study Questions
            4.2.2. Study Propositions
            4.2.3. Unit of Analysis
            4.2.4. Linking Data to Propositions
        4.3. A Case Study
        4.4. The Action-Based Model
    Chapter 5 The Basic Components of the MSCI
        5.1. The Process Monitoring Component
            5.1.1. The Multilevel Measures
            5.1.2. Multivariate Statistical Process Control
            5.1.3. L1 Measures Analysis
        5.2. The Cause Identification Component
            5.2.1. L2 Measure Analysis
            5.2.2. Problem Definition and Hypothesis Testing
        5.3. The Action-Based Defect Prediction Component
            5.3.1. The Action Definition
            5.3.2. Data Collection
            5.3.3. Data Analysis
            5.3.4. Action Prediction
    Chapter 6 Analytical Results and Discussions
        6.1. Research Questions
        6.2. Experimental Results of ABDP
            6.2.1. Sampling without FSS
            6.2.2. Applying FSS with Sampling
        6.3. Threats to Validity
    Chapter 7 Conclusion and Future Work
    References
    Appendix
        1. An example of a preprocessed data set
        2. An example of reported defects
    Curriculum Vitae (個人簡歷)
    List of Publications (著作目錄)


Full-text availability: on campus: immediate open access; off campus: open access from 2008-01-28