| Graduate student: | 陳文𤧟 Chen, Wen-Ting |
|---|---|
| Thesis title: | 影集〈無恥之徒〉中的仇恨言論:分類與剖析 Hate Speech in the TV Series Shameless: A Taxonomic Analysis |
| Advisor: | 謝菁玉 Hsieh, Ching-Yu |
| Degree: | 碩士 Master |
| Department: | 文學院 - 外國語文學系 Department of Foreign Languages and Literature |
| Year of publication: | 2026 |
| Academic year of graduation: | 114 |
| Language: | English |
| Number of pages: | 126 |
| Keywords (Chinese): | 無恥之徒、仇恨言論、冒犯性語言、顯性與隱性表達 |
| Keywords (English): | Shameless, hate speech, offensive language, explicit and implicit expression |
This thesis investigates how hate speech is actually used in the dialogue of the American television series Shameless, examining its constituent elements, its subtypes, and its explicit and implicit forms of expression. It integrates the hate speech subtypes proposed by Bisht et al. (2020) with the aspects of offensive language proposed by Lewandowska-Tomaszczyk et al. (2023) to answer three research questions: first, which subtypes occur and co-occur most frequently in the series; second, which aspects occur and co-occur most frequently in the series; and third, what additional categories, derived from the hate speech found in the series, could make the existing classification of hate speech more complete.

Methodologically, the complete scripts of all 134 episodes across the show's 11 seasons serve as the data source. A hybrid procedure combining automatic keyword detection with manual identification was used for collection and coding, yielding 1,344 instances of hate speech, which were then analyzed quantitatively with descriptive and inferential statistics (e.g., chi-square tests). The results show that, among the eight existing subtypes, sexism, which targets gender, is the dominant form of hateful expression, accounting for 54.6% of the instances, followed by speech targeting disability and origin. As for co-occurrence, although sexism and body co-occur most often, their association is not statistically significant (p > 0.05), whereas the sexism–origin and racism–origin pairs both show significant associations (p < 0.05). Regarding the seven aspects of hate speech, hostile and discredit each appear in more than ninety percent of the instances, and hostile with discredit, as well as these two combined with hateful, form the most common co-occurrence patterns. In addition, based on the hate speech observed in the series, this study proposes two further subtypes, class and threat, to fill gaps in the existing subtypes, and, drawing on the parts of speech and functions of hate speech, distinguishes three forms of expression: referential, descriptive, and behavioral. The contributions of this study include combining automatic and manual analysis to collect hate speech from a television series, linking explicit and implicit forms of expression to different classification frameworks, and extending the current classification system to make it more complete. Future research could compare the American and British versions of Shameless to examine in greater depth how cultural differences shape hate speech.
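As an illustration of the inferential step mentioned above (chi-square tests of subtype co-occurrence), the following is a minimal Python sketch of how such a test could be run with scipy.stats.chi2_contingency. The contingency counts are placeholders chosen for illustration only, not figures from the thesis.

```python
# A minimal sketch (not the thesis's actual analysis code) of a chi-square
# test of association between two hate-speech subtypes.
from scipy.stats import chi2_contingency  # assumes SciPy is installed

# Hypothetical 2x2 contingency table of coded instances.
# Rows: tagged as sexism (yes / no); columns: tagged as origin (yes / no).
# These counts are placeholders, NOT data from the thesis.
contingency = [
    [30, 700],  # sexism & origin, sexism only
    [45, 569],  # origin only, neither
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

# A p-value below 0.05 would indicate a statistically significant association
# between the two subtypes, as the thesis reports for the sexism-origin and
# racism-origin pairs.
```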
This thesis explores the manifestation of hate speech in the dialogue of the American television series Shameless. To address the limited research on hate speech in movies and TV series, the study integrates two classification systems: the subtype categories of Bisht et al. (2020), which classify hate speech by theme and target, and the aspect-based Offensive Language Taxonomy of Lewandowska-Tomaszczyk et al. (2023), which categorizes the elements that make up hate speech. The thesis aims to answer three research questions: (1) Which subtypes of hate speech (Bisht et al., 2020) are most frequently represented in Shameless, and which co-occur most often? (2) Which aspects of hate speech (Lewandowska-Tomaszczyk et al., 2023) occur and co-occur most commonly in Shameless? (3) What additional subtypes can be added to the current eight?
In this study, automatic keyword detection and manual identification are used to collect data from the 134 episodes across 11 seasons. The data are then analyzed with quantitative methods, involving descriptive and inferential statistics, as well as qualitative methods. The results show that hate speech related to sexism, disability, and origin appears most frequently in Shameless, while the aspects hostile and discredit are covered most often. In terms of co-occurrence, sexism and body, sexism and origin, and racism and origin are the pairs of subtypes that co-occur most frequently, whereas the most salient co-occurrence pattern among the aspects is the combined presence of hostile, discredit, and hateful. Hate speech keywords targeting female and male characters in Shameless are also found to differ and can be classified into different layers. Furthermore, additional classifications are proposed to categorize hate speech more fully, for example speech targeting individuals' socioeconomic status and speech used to threaten them. Hate speech also emerges in distinct forms of expression (referential, descriptive, and behavioral) based on its parts of speech. By examining the data in Shameless, this thesis demonstrates the value of adopting a multi-layered classification to analyze explicit and implicit expressions of hate speech and proposes refinements to the existing classification.
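To make the hybrid collection step more concrete, here is a minimal Python sketch of how automatic keyword detection over episode transcripts could produce a list of candidates for manual coding. It is an illustration only: the seed lexicon, subtype labels, file layout, and function name are hypothetical stand-ins, not the keyword set or pipeline actually used in the thesis.

```python
# A minimal sketch of keyword-based candidate detection over plain-text
# episode transcripts. The seed lexicon, subtype labels, file layout, and
# function name are hypothetical placeholders, not the thesis's materials.
import csv
import re
from pathlib import Path

# Hypothetical seed keywords mapped to candidate subtypes (Bisht et al., 2020).
SEED_LEXICON = {
    "slut": "sexism",
    "retard": "disability",
    "immigrant": "origin",
}

def scan_transcripts(transcript_dir: str, out_csv: str) -> None:
    """Flag lines containing seed keywords for later manual verification."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, SEED_LEXICON)) + r")\b",
        re.IGNORECASE,
    )
    with open(out_csv, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["episode", "line_no", "keyword", "subtype", "line"])
        for path in sorted(Path(transcript_dir).glob("*.txt")):
            lines = path.read_text(encoding="utf-8").splitlines()
            for line_no, line in enumerate(lines, start=1):
                for match in pattern.finditer(line):
                    keyword = match.group(1).lower()
                    writer.writerow(
                        [path.stem, line_no, keyword, SEED_LEXICON[keyword], line.strip()]
                    )

# Example call (paths are placeholders):
# scan_transcripts("shameless_transcripts/", "candidates_for_manual_coding.csv")
```

Automatic flagging of this kind only produces candidates; as described above, every instance still requires manual identification and coding before the descriptive and inferential statistics are computed.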
Adiyaksa, A. F., Richasdy, D., & Ihsan, A. F. (2022). Hate speech detection on YouTube using long short-term memory and latent Dirichlet allocation method. Journal of Information System Research (JOSH), 3(4), 644–650. https://doi.org/10.47065/josh.v3i4.1875
Al-Azzawi, Q. O., & Al-Ghizzy, M. J. D. (2022). A linguistic study of offensive language in online communication chatgroups. International Journal of Linguistics Studies, 2(2), 170–175. https://doi.org/10.32996/ijls.2022.2.2.19
Allan, K., & Burridge, K. (2006). Forbidden Words: Taboo and the Censoring of Language. Cambridge University Press.
Armstrong, E. A., Hamilton, L. T., Armstrong, E. M., & Seeley, J. L. (2014). “Good girls”: Gender, social class, and slut discourse on campus. Social Psychology Quarterly, 77(2), 100–122. https://doi.org/10.1177/0190272514521220
Assimakopoulos, S., Baider, F. H., & Millar, S. (2017). Online hate speech in the European Union: A discourse-analytic perspective. Springer Nature. https://doi.org/10.1007/978-3-319-72604-5
Baider, F., & Kopytowska, M. (2018). Narrating hostility, challenging hostile narratives. Lodz Papers in Pragmatics, 14(1), 1–24. https://doi.org/10.1515/lpp-2018-0001
Battistella, E. (2005). Bad language: Are some words better than others? Oxford University Press.
Bednarek, M. (2019). The multifunctionality of swear/taboo words in television series. Emotion in discourse, 29, 29–54. https://doi.org/10.1075/pbns.302.02bed
Bisht, A., Singh, A., Bhadauria, H. S., Virmani, J., & Kriti. (2020). Detection of hate speech and offensive language in Twitter data using LSTM model. Recent trends in image and signal processing in computer vision, 243–264. Springer Singapore. https://doi.org/10.1007/978-981-15-2740-1_17
Brown, A. (2018). What is so special about online (as compared to offline) hate speech?. Ethnicities, 18(3), 297–326. https://doi.org/10.1177/1468796817709846
Carranza‐Pinedo, V. (2025). What are particularistic pejoratives?. Mind & Language. https://doi.org/10.1111/mila.70005
Caselli, T., Basile, V., Mitrović, J., Kartoziya, I., & Granitzer, M. (2020). I feel offended, don't be abusive! Implicit/explicit messages in offensive and abusive language. In Proceedings of the twelfth language resources and evaluation conference, 6193–6202.
Chen, Y., Zhou, Y., Zhu, S., & Xu, H. (2012). Detecting offensive language in social media to protect adolescent online safety. In 2012 international conference on privacy, security, risk and trust and 2012 international conference on social computing, 71–80. https://doi.org/10.1109/SocialCom-PASSAT.2012.55
Chiril, P., Pamungkas, E. W., Benamara, F., Moriceau, V., & Patti, V. (2022). Emotionally informed hate speech detection: a multi-target perspective. Cognitive Computation, 14(1), 322–352. https://doi.org/10.1007/s12559-021-09862-5
Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press. https://doi.org/10.4159/harvard.9780674735613
Cressman, D. L., Callister, M., Robinson, T., & Near, C. (2009). Swearing in the cinema: An analysis of profanity in US teen‐oriented movies, 1980–2006. Journal of Children and Media, 3(2), 117–135. https://doi.org/10.1080/17482790902772257
Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. In Proceedings of the international AAAI conference on web and social media 11(1), 512–515. https://doi.org/10.1609/icwsm.v11i1.14955
De la Peña Sarracén, G. L., & Rosso, P. (2023). Systematic keyword and bias analyses in hate speech detection. Information Processing & Management, 60(5), 103433. https://doi.org/10.1016/j.ipm.2023.103433
Del Vigna, F., Cimino, A., Dell’Orletta, F., Petrocchi, M., & Tesconi, M. (2017, January). Hate me, hate me not: Hate speech detection on facebook. In Proceedings of the first Italian conference on cybersecurity (ITASEC17), 86–95.
DeSouza, I., & Naresh, S. (2021). How “offensive” is offensive? A closer look at controversial advertisements. Journal of International Women's Studies, 22(3), 96–109.
Díaz-Torres, M. J., Morán-Méndez, P. A., Villasenor-Pineda, L., Montes, M., Aguilera, J., & Meneses-Lerín, L. (2020). Automatic detection of offensive language in social media: Defining linguistic criteria to build a Mexican Spanish dataset. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, 132–136.
ElSherief, M., Kulkarni, V., Nguyen, D., Wang, W. Y., & Belding, E. (2018). Hate lingo: A target-based linguistic analysis of hate speech in social media. In Proceedings of the international AAAI conference on web and social media, 12(1). https://doi.org/10.1609/icwsm.v12i1.15041
ElSherief, M., Nilizadeh, S., Nguyen, D., Vigna, G., & Belding, E. (2018). Peer to peer hate: Hate speech instigators and their targets. In Proceedings of the International AAAI Conference on Web and Social Media, 12(1). https://doi.org/10.1609/icwsm.v12i1.15038
Erjavec, K., & Kovačič, M. P. (2012). “You don't understand, this is a new war!” Analysis of hate speech in news web sites' comments. Mass Communication and Society, 15(6), 899–920. https://doi.org/10.1080/15205436.2011.619679
Fadhel, A., & Muhammed, W. S. M. (2023). A pragmatic study of hate speech in some American animated movies. Journal of Education College, 51(2), 363–382. https://doi.org/10.31185/eduj.Vol51.Iss2.3131
Felmlee, D., Inara Rodis, P., & Zhang, A. (2020). Sexist slurs: Reinforcing feminine stereotypes online. Sex roles, 83(1), 16–28. https://doi.org/10.1007/s11199-019-01095-z
Fitzgerald, M., Sapolsky, B., & McClung, S. (2009). Offensive language spoken on morning radio programs. Journal of Radio & Audio Media, 16(2), 181–199. https://doi.org/10.1080/19376520903277047
Fordjour, E. A. (2016). Foul language in the Ghanaian electronic media: A case study of some selected radio stations in Kumasi, Ghana. In International Conference on Management, Communication and Technology, 4(1), 26–32.
Fortuna, P., & Nunes, S. (2018). A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR), 51(4), 1–30. https://doi.org/10.1145/3232676
Fredrickson, B. L., & Roberts, T. A. (1997). Objectification theory: Toward understanding women’s lived experiences and mental health risks. Psychology of Women Quarterly, 21(2), 173–206. https://doi.org/10.1111/j.1471-6402.1997.tb00108.x
Gao, L., & Huang, R. (2017). Detecting online hate speech using context aware models. Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP 2017), 260–266. https://doi.org/10.48550/arXiv.1710.07395
Gerbner, G., Gross, L., Morgan, M., Signorielli, N., & Shanahan, J. (2002). Growing up with television: Cultivation processes. In Media effects (pp. 53–78). Routledge.
Guiora, A., & Park, E. A. (2017). Hate speech on social media. Philosophia, 45(3), 957–971. https://doi.org/10.1007/s11406-017-9858-4
Hayaty, M., Adi, S., & Hartanto, A. D. (2020). Lexicon-based Indonesian local language abusive words dictionary to detect hate speech in social media. Journal of Information Systems Engineering and Business Intelligence, 6(1), 9–17. http://dx.doi.org/10.20473/jisebi.6.1.9-17
Husain, F., & Uzuner, O. (2021). Transfer learning approach for Arabic offensive language detection system—BERT-based model. In 2021 4th International Conference on Computer Applications and Information Security (ICCAIS). https://doi.org/10.48550/arXiv.2102.05708
Hyatt, C. S., Maples-Keller, J. L., Sleep, C. E., Lynam, D. R., & Miller, J. D. (2019). The anatomy of an insult: Popular derogatory terms connote important individual differences in Agreeableness/Antagonism. Journal of Research in Personality, 78, 61–75. https://doi.org/10.1016/j.jrp.2018.11.005
Jay, T. (1992). Cursing in America: A psycholinguistic study of dirty language in the courts, in the movies, in the schoolyards and on the streets. John Benjamins Publishing Company. https://doi.org/10.1075/z.57
Jeshion, R. (2020). Pride and prejudiced: On the reclamation of slurs. Grazer Philosophische Studien, 97(1), 106–137.
Jin, Y., Wanner, L., Kadam, V., & Shvets, A. (2023). Towards weakly-supervised hate speech classification across datasets. In The 7th Workshop on Online Abuse and Harms (WOAH), 42–59. https://doi.org/10.18653/v1/2023.woah-1.4
Jones, T., Cunningham, P. H., & Gallagher, K. (2010). Violence in advertising. Journal of Advertising, 39(4), 11–36. https://doi.org/10.2753/JOA0091-3367390402
Kaakinen, M., Sirola, A., Savolainen, I., & Oksanen, A. (2020). Impulsivity, internalizing symptoms, and online group behavior as determinants of online hate. PloS one, 15(4), e0231052. https://doi.org/10.1371/journal.pone.0231052
Kaye, B. K., & Sapolsky, B. S. (2009). Taboo or not taboo? That is the question: Offensive language on prime-time broadcast and cable programming. Journal of Broadcasting & Electronic Media, 53(1), 22–37. https://doi.org/10.1080/08838150802643522
Khurana, U., Vermeulen, I., Nalisnick, E., Van Noorloos, M., & Fokkens, A. (2022). Hate speech criteria: A modular approach to task-specific hate speech definitions. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), 176–191. https://doi.org/10.48550/arXiv.2206.15455
Kotarcic, A., Hangartner, D., Gilardi, F., Kurer, S., & Donnay, K. (2022). Human-in-the-Loop hate speech classification in a multilingual context. Findings of the Association for Computational Linguistics: EMNLP 2022, 7414–7442. https://doi.org/10.48550/arXiv.2212.02108
Kwon, K. H., & Gruzd, A. (2017). Is offensive commenting contagious online? Examining public vs interpersonal swearing in response to Donald Trump’s YouTube campaign videos. Internet Research, 27(4), 991–1010. https://doi.org/10.1108/IntR-02-2017-0072
Leets, L. (2001). Explaining perceptions of racist speech. Communication Research, 28(5), 676–706. https://doi.org/10.1177/009365001028005005
Lewandowska-Tomaszczyk, B., Žitnik, S., Bączkowska, A., Liebeskind, C., Mitrović, J., & Valūnaitė Oleškevičienė, G. (2021). LOD-connected offensive language ontology and tagset enrichment. In CEUR workshop proceedings, 3064.
Lewandowska-Tomaszczyk, B., Bączkowska, A., Liebeskind, C., Valunaite Oleskeviciene, G., & Žitnik, S. (2023). An integrated explicit and implicit offensive language taxonomy. Lodz Papers in Pragmatics, 19(1), 7–48. https://doi.org/10.1515/lpp-2023-0002
Ljubešić, N., Mozetič, I., & Novak, P. K. (2023). Quantifying the impact of context on the quality of manual hate speech annotation. Natural Language Engineering, 29(6), 1481–1494. https://doi.org/10.1017/S1351324922000353
Ljung, M. (2010). Swearing: A Cross-Cultural Linguistic Study. Palgrave Macmillan. https://doi.org/10.1057/9780230292376
MacAvaney, S., Yao, H. R., Yang, E., Russell, K., Goharian, N., & Frieder, O. (2019). Hate speech detection: Challenges and solutions. PloS one, 14(8), e0221152. https://doi.org/10.1371/journal.pone.0221152
Madriaza, P., Hassan, G., Brouillette‐Alarie, S., Mounchingam, A. N., Durocher‐Corfa, L., Borokhovski, E., Pickup, D., & Paillé, S. (2025). Exposure to hate in online and traditional media: A systematic review and meta‐analysis of the impact of this exposure on individuals and communities. Campbell Systematic Reviews, 21(1), e70018. https://doi.org/10.1002/cl2.70018
Maisto, A., Pelosi, S., Vietri, S., & Vitale, P. (2017). Mining offensive language on social media. In Proceedings of the Fourth Italian Conference on Computational Linguistics CLiC-it, 252–256.
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & new media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230
Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., & Chang, Y. (2016). Abusive language detection in online user content. In Proceedings of the 25th international conference on world wide web, 145–153. https://doi.org/10.1145/2872427.2883062
Oktaviani, A. D., & Nur, O. S. (2022). Illocutionary speech acts and types of hate speech in comments on @Indraakenz’s Twitter account. In International Journal of Science and Applied Science: Conference Series, 6(1), 91–99. https://doi.org/10.20961/ijsascs.v6i1.69943
Olteanu, A., Castillo, C., Boy, J., & Varshney, K. (2018). The effect of extremist violence on hateful speech online. In Proceedings of the international AAAI conference on web and social media, 12(1). https://doi.org/10.1609/icwsm.v12i1.15040
Paasch-Colberg, S., Strippel, C., Trebbe, J., & Emmer, M. (2021). From insult to hate speech: Mapping offensive language in German user comments on immigration. Media and Communication, 9(1), 171–180. https://doi.org/10.17645/mac.v9i1.3399
Papcunová, J., Martončik, M., Fedáková, D., Kentoš, M., Bozogáňová, M., Srba, I., ... & Adamkovič, M. (2023). Hate speech operationalization: a preliminary examination of hate speech indicators and their structure. Complex & Intelligent Systems, 9(3), 2827–2842. https://doi.org/10.1007/s40747-021-00561-0
Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. Sage Open, 10(4). https://doi.org/10.1177/2158244020973022
Pinker, S. (2007). The stuff of thought: Language as a window into human nature. Penguin.
Ramos, G., Batista, F., Ribeiro, R., Fialho, P., Moro, S., Fonseca, A., Guerra, R., Carvalho, P., Marques, C., & Silva, C. (2024). A comprehensive review on automatic hate speech detection in the age of the transformer. Social Network Analysis and Mining, 14(1), 204. https://doi.org/10.1007/s13278-024-01361-3
Ringrose, J., & Renold, E. (2012). Slut-shaming, girl power and ‘sexualisation’: Thinking through the politics of the international SlutWalks with teen girls. Gender and Education, 24(3), 333–343. https://doi.org/10.1080/09540253.2011.645023
Seabrook, R. C., Ward, L. M., & Giaccardi, S. (2019). Less than human? Media use, objectification of women, and men’s acceptance of sexual aggression. Psychology of Violence, 9(5), 536. https://doi.org/10.1037/vio0000198
Shafer, D. M., & Kaye, B. K. (2015). Attitudes toward offensive language in media (ATOL-M): Investigating enjoyment of cursing-laced television and films. Atlantic Journal of Communication, 23(4), 193–210. https://doi.org/10.1080/15456870.2015.1047494
Sharma, H. K., Singh, T., Kshitiz, K., Singh, H., & Kukreja, P. (2017). Detecting hate speech and insults on social commentary using NLP and machine learning. International Journal of Engineering Technology Science and Research, 4(12), 279–285.
Sharma, I. (2019). Contextualising hate speech: A study of India and Malaysia. Journal of International Studies, 15, 133. https://doi.org/10.32890/jis2019.15.6
Silva, L., Mondal, M., Correa, D., Benevenuto, F., & Weber, I. (2016). Analyzing the targets of hate in online social media. In Proceedings of the International AAAI Conference on Web and Social Media, 10(1), 687–690. https://doi.org/10.1609/icwsm.v10i1.14811
Singh, S., Roy, P., Sahoo, N., Mallela, N., Gupta, H., Bhattacharyya, P., ... & Sengupta, S. (2022). Hollywood identity bias dataset: A context oriented bias analysis of movie dialogues. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, 5274–5285.
Solovev, K., & Pröllochs, N. (2023). Moralized language predicts hate speech on social media. PNAS Nexus, 2(1). https://doi.org/10.1093/pnasnexus/pgac281
Somerville, K. (2011). Violence, hate speech and inflammatory broadcasting in Kenya: The problems of definition and identification. Ecquid Novi: African Journalism Studies, 32(1), 82–101. https://doi.org/10.1080/02560054.2011.545568
Soral, W., Bilewicz, M., & Winiewski, M. (2018). Exposure to hate speech increases prejudice through desensitization. Aggressive Behavior, 44(2), 136–146. https://doi.org/10.1002/ab.21737
Straus, S. (2007). What is the relationship between hate radio and violence? Rethinking Rwanda’s “Radio Machete”. Politics & Society, 35(4), 609–637. https://doi.org/10.1177/0032329207308181
Tanev, H. (2024). JRC at ClimateActivism 2024: Lexicon-based Detection of Hate Speech. In Proceedings of the 7th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2024), 85–88.
Tariq, M., & Khan, M. A. (2017). Offensive advertising: A religion based Indian study. Journal of Islamic Marketing, 8(4), 656–668. https://doi.org/10.1108/JIMA-07-2015-0051
Teh, P. L., Cheng, C. B., & Chee, W. M. (2018). Identifying and categorising profane words in hate speech. In Proceedings of the 2nd international conference on compute and data analysis, 65–69. https://doi.org/10.1145/3193077.3193078
Unlu, A., Truong, S., Sawhney, N., Tammi, T., & Kotonen, T. (2025). From prejudice to marginalization: Tracing the forms of online hate speech targeting LGBTQ+ and Muslim communities. New Media & Society. https://doi.org/10.1177/14614448241312900
Vehovec, T., Kišjuhas, A., & Vehovec, R. (2016). Govor mržnje i verbalne agresije na internetu [Hate speech and verbal aggression on the internet]. Centar za nove medije Liber.
Vidgen, B., Harris, A., Nguyen, D., Tromble, R., Hale, S. A., & Margetts, H. (2019). Challenges and frontiers in abusive content detection. In Proceedings of the third workshop on abusive language online, 80–93. https://doi.org/10.18653/v1/W19-3509
von Boguszewski, N., Moin, S., Bhowmick, A., Yimam, S. M., & Biemann, C. (2021). How hateful are movies? A study and prediction on movie subtitles. In Proceedings of the 17th Conference on Natural Language Processing, 37–49. https://doi.org/10.48550/arXiv.2108.10724
Wajnryb, R. (2005). Expletive deleted $&#@*!: A good look at bad language. Free Press.
Waller, D. S., Fam, K. S., & Zafer Erdogan, B. (2005). Advertising of controversial products: A cross‐cultural study. Journal of Consumer Marketing, 22(1), 6–13. https://doi.org/10.1108/07363760510576509
Yin, W., & Zubiaga, A. (2021). Towards generalisable hate speech detection: A review on obstacles and solutions. PeerJ Computer Science, 7, e598. https://doi.org/10.7717/peerj-cs.598
Yoder, M. M., Ng, L. H. X., Brown, D. W., & Carley, K. M. (2022). How hate speech varies by target identity: A computational analysis. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL), 30–43. https://doi.org/10.18653/v1/2022.conll-1.3
Yoon, H., & Kelly, A. (2023). Brand blunders and race in advertising: Issues, implications, and potential actions from a macromarketing perspective. Journal of Macromarketing, 43(1), 106–123. https://doi.org/10.1177/02761467231178550
Yu, X., Blanco, E., & Hong, L. (2022). Hate speech and counter speech detection: Conversational context does matter. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 5918–5931. https://doi.org/10.48550/arXiv.2206.06423
Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., & Kumar, R. (2019). Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1, 1478–1488. https://doi.org/10.48550/arXiv.1902.09666