
Graduate Student: JHAN, CIAO-CHUN (詹巧純)
Thesis Title: 應用ChatGPT於招聘流程履歷篩選模型之研究
A Study on the Application of ChatGPT in Resume Screening Models for Recruitment Process
Advisor: Chen, Tsung-Yi (陳宗義)
Degree: Master
Department: College of Engineering - Engineering Management Graduate Program (on-the-job class)
Year of Publication: 2025
Graduation Academic Year: 113
Language: Chinese
Pages: 87
Chinese Keywords: ChatGPT, Generative AI (生成式人工智慧), resume screening (履歷篩選), bias mitigation (去偏見), intelligent recruitment (智慧招募)
Foreign Keywords: ChatGPT, Generative Artificial Intelligence, Resume Classification, Intelligent Recruitment
    Under the dual pressures of global digital transformation and human-resource challenges, building a recruitment process that balances fairness and efficiency has become a critical issue. Traditional machine-learning-based resume screening improves efficiency but often inherits biases from historical data, such as gender, age, education, and geography. This study focuses on applications of Generative Artificial Intelligence (Generative AI, GAI) in human resource management, proposing a resume screening model that balances fairness, transparency, and scalability, with ChatGPT as the core engine for empirical validation and multi-party comparison.
    The study first constructs a "bias-mitigated screening indicator model" comprising essential requirements, desirable criteria, and a fairness framework that excludes non-job-related information such as age and gender, and uses structured prompts to guide ChatGPT in classification, recommendation, and ranking. Three representative job openings were selected for the experiments: automation project engineer, process engineer, and production operator. Resumes were randomly sampled from online job banks and screened by ChatGPT; the same resumes were also reviewed manually by HR managers, senior specialists, and hiring supervisors. To strengthen practical validity, frontline employees of the corresponding departments (process, system, and production engineers) provided independent ratings, forming a three-way cross-validation among GAI, experts, and unit employees.
    The results show that GAI can effectively extract key resume information and provide traceable classification rationales, demonstrating high efficiency and logical stability in the initial screening stage. In scenarios where skills clearly matched job requirements (e.g., MES/IoT integration, anomaly troubleshooting, SPC/DOE, SOP implementation), the rankings produced by GAI, HR experts, and unit employees were highly consistent; employee ratings also confirmed the on-site applicability of GAI's rationales and flagged a few edge cases in which management-oriented or design-oriented resumes required human adjustment. Combining expert and employee feedback, GAI not only significantly reduces HR's front-end workload but also improves the fairness and explainability of hiring decisions.
    The contributions of this study are: (1) filling the research gap in applying GAI to HRM; (2) proposing an operational and auditable bias-mitigated screening model and prompt design; and (3) providing empirical insights and strategic recommendations for enterprises adopting GAI screening tools, validated through three-way cross-evaluation. Limitations include the restricted scope of job categories and industry settings, the exploratory sample size, and the absence of longitudinal validation against post-hire performance. Overall, a "human-AI collaboration" model is recommended for adoption: GAI serves as a fast, transparent first-pass screen; experts verify compliance and organizational fit; and unit employees validate job readiness and on-site applicability, thereby building a fair, intelligent, and efficient recruitment process.
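The recommended three-stage human-AI workflow (GAI first pass, expert compliance check, employee validation) can be sketched as a staged filter pipeline. The stage predicates and candidate records below are hypothetical placeholders, not the thesis's actual criteria:

```python
# Sketch of a three-stage screening pipeline: GAI pre-screen, expert
# compliance check, unit-employee validation. The concrete rules are
# hypothetical placeholders for illustration only.

def gai_prescreen(c):        # stage 1: fast, transparent first pass
    return "engineering" in c["skills"]

def expert_check(c):         # stage 2: compliance and organizational fit
    return c["years"] >= 2

def employee_validate(c):    # stage 3: on-site readiness
    return "SPC" in c["skills"]

def screen(candidates, stages=(gai_prescreen, expert_check, employee_validate)):
    """Apply each stage in order, keeping only candidates that pass all."""
    for stage in stages:
        candidates = [c for c in candidates if stage(c)]
    return candidates

pool = [
    {"id": 1, "skills": "process engineering, SPC/DOE", "years": 3},
    {"id": 2, "skills": "graphic design", "years": 5},
    {"id": 3, "skills": "process engineering", "years": 1},
]
print([c["id"] for c in screen(pool)])  # [1]
```

In practice each stage would be a human review or a model call rather than a simple predicate; the point of the structure is that every rejection is attributable to a specific, auditable stage.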

    Under the dual pressures of global digital transformation and human resource challenges, achieving fairer and more efficient talent recruitment has become a critical issue. Current resume screening systems that adopt machine learning methods, while improving efficiency, remain heavily dependent on historical training data and are prone to biases related to gender, age, education, and geography. This study focuses on the emerging application of Generative Artificial Intelligence (GAI) in Human Resource Management (HRM), proposing a resume screening model that balances fairness, transparency, and scalability. ChatGPT, developed by OpenAI, is employed as the core engine for empirical validation, expert comparison, and unit-level employee cross-validation.
    A bias-mitigated screening indicator model was established, incorporating essential requirements, desirable criteria, and a fairness framework that excludes non–job-related information such as age and gender. Structured prompts were designed to guide ChatGPT in resume classification, recommendation, and ranking. The study examined three job positions—Automation Project Engineer, Process Engineer, and Production Operator—by randomly sampling resumes from online job banks and submitting them to ChatGPT for screening. The same resumes were also evaluated by HR managers, senior specialists, department supervisors, and frontline unit employees from the hiring departments to enable three-way triangulation of results and reasoning quality.
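The bias-mitigation step described above (stripping non-job-related fields before prompting) can be sketched as follows. The field names, criteria, and prompt wording are illustrative assumptions, not the thesis's actual template:

```python
# Sketch: redact non-job-related fields from a resume record, then build
# a structured screening prompt with essential and desirable criteria.
# Field names, criteria, and wording are illustrative assumptions.

SENSITIVE_FIELDS = {"name", "age", "gender", "birthplace", "photo"}

def redact(resume: dict) -> dict:
    """Drop fields unrelated to the job to limit bias exposure."""
    return {k: v for k, v in resume.items() if k not in SENSITIVE_FIELDS}

def build_prompt(resume: dict, essential: list, desirable: list) -> str:
    """Compose a structured prompt: essential criteria, desirable criteria,
    the redacted resume, and an instruction to output a traceable rationale."""
    lines = ["You are a resume screener. Use only job-related evidence."]
    lines.append("Essential requirements (all must be met):")
    lines += [f"- {c}" for c in essential]
    lines.append("Desirable criteria (each adds points):")
    lines += [f"- {c}" for c in desirable]
    lines.append("Resume:")
    lines += [f"{k}: {v}" for k, v in redact(resume).items()]
    lines.append("Output: qualified/unqualified, score, and a short rationale.")
    return "\n".join(lines)

resume = {"name": "A. Candidate", "age": 29, "gender": "F",
          "skills": "MES/IoT integration, SPC/DOE",
          "experience": "3 years of process work"}
prompt = build_prompt(
    resume,
    essential=["B.S. in an engineering field", "2+ years of process experience"],
    desirable=["SPC/DOE experience", "SOP authoring"])
print(prompt)
```

The resulting prompt string would then be sent to ChatGPT (for example via OpenAI's chat API); that call is omitted here so the sketch stays self-contained.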
    The findings indicate that GAI performs effectively in identifying core resume information, providing consistent recommendations, and generating transparent justifications. Particularly during the initial pre-screening stage, GAI demonstrated high operational efficiency and logical stability. Employee evaluations further confirmed the practical fit of GAI’s top-ranked candidates—emphasizing immediate deployability (e.g., anomaly response, SPC/DOE use, SOP readiness)—and surfaced edge cases where management-heavy or design-centric profiles required human adjustment. Beyond significantly reducing the workload of HR professionals, GAI also contributes to mitigating bias risks and enhancing decision interpretability and compliance.
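The ranking consistency reported above can be quantified with a rank correlation such as Spearman's rho; a minimal self-contained sketch follows, with invented candidate orderings for illustration:

```python
# Sketch: Spearman's rho between two orderings of the same candidates,
# one way to quantify GAI-vs-expert ranking agreement. The candidate
# lists below are invented for illustration.

def spearman_rho(rank_a: list, rank_b: list) -> float:
    """Spearman's rho for two rankings of the same items (no ties)."""
    pos_b = {item: i for i, item in enumerate(rank_b)}
    n = len(rank_a)
    d2 = sum((i - pos_b[item]) ** 2 for i, item in enumerate(rank_a))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

gai_rank    = ["C1", "C2", "C3", "C4", "C5"]   # GAI ordering
expert_rank = ["C1", "C3", "C2", "C4", "C5"]   # HR expert ordering

print(round(spearman_rho(gai_rank, expert_rank), 3))  # 0.9 for one adjacent swap
```

A rho near 1 indicates near-identical orderings; the same function applied pairwise across GAI, expert, and employee rankings gives a simple numerical summary of the three-way agreement.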
    This study contributes to both academia and practice by: (1) filling the research gap in applying GAI to HRM, (2) proposing an operational and transparent bias-mitigated resume screening model, and (3) offering empirical insights and strategic recommendations for organizations adopting GAI-based talent selection tools, validated jointly by experts and unit-level employees. Collectively, these contributions lay the foundation for building a fair, intelligent, and efficient recruitment system.

    摘要 (Abstract)
    EXTENDED ABSTRACT
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1 Introduction
        1.1 Research Background and Motivation
        1.2 Research Objectives
        1.3 Research Process
        1.4 Research Risks
        1.5 Research Limitations
        1.6 Expected Outcomes and Deliverables
    Chapter 2 Literature Review
        2.1 Current State of Human Resource Management
        2.2 Applications of AI in Human Resource Management
        2.3 Resume Screening: From Manual to AI
            2.3.1 Resume Screening in Recruitment
            2.3.2 Differences Between Manual and AI Resume Screening
            2.3.3 Current AI Applications in Resume Screening
            2.3.4 Indicator Models for Resume Screening
        2.4 Bias in AI Screening
            2.4.1 Types of Bias
            2.4.2 Methods for Reducing Bias
        2.5 Generative Artificial Intelligence
            2.5.1 Introduction to Generative AI
            2.5.2 The ChatGPT Tool
            2.5.3 Prompt Generation for Generative AI
            2.5.4 Bias-Reduction Methods for Generative AI
    Chapter 3 Design of the AI-HRM Resume Screening Model
        3.1 Building the AI-HRM Environment Model
        3.2 Building the Resume Screening Indicator Model
        3.3 Building the Bias-Mitigated Resume Screening Indicator Model
    Chapter 4 Experimental Results and Discussion
        4.1 The ChatGPT Resume Screening System
            4.1.1 HR Standardized Operating Instructions
            4.1.2 GAI System Usage Instructions
            4.1.3 Screening Results
        4.2 Experimental Procedure
        4.3 Experiment 1: Automation/System Project Engineer Resume Screening
            4.3.1 GAI Screening Results and Recommendations
            4.3.2 Experiment 1 Results: GAI and Expert Evaluation
            4.3.3 Unit Employee Evaluation for the Automation/System Project Engineer
        4.4 Experiment 2: Process Engineer Resume Screening
            4.4.1 GAI Screening Results and Recommendations
            4.4.2 Experiment 2 Results: GAI and Expert Evaluation
            4.4.3 Unit Employee Evaluation for the Process Engineer
        4.5 Experiment 3: Production Operator Cross-Validation
            4.5.1 Experiment 3 Results: GAI and Expert Comparison
        4.6 Expert Recommendations and Feedback
    Chapter 5 Conclusions and Recommendations
        5.1 Conclusions
            5.1.1 Industrial and Academic Development
            5.1.2 Model Feasibility
        5.2 Future Research Recommendations
    References

