T6.1 Figurative language and pragmatics benchmarks
Figurative language, including metaphor, irony, and sarcasm, is a prominent feature of human communication. Although LLMs show remarkable capabilities in adapting to a particular style or creating innovative metaphors, they still perform poorly at comprehending sarcasm and irony (Yakura 2024) and struggle to understand or detect indirect requests and faux pas (Strachan et al. 2024). Along similar lines, LLMs significantly lag behind humans in pragmatic capabilities such as considering extra-linguistic context, speaker intentions, presuppositions, and implied meanings (Sravanthi et al. 2024). A major obstacle to assessing LLMs' capabilities in nuanced language understanding and generation is the lack of evaluation datasets and benchmarks. We aim to create an evaluation pipeline that will facilitate the evaluation and comparison of models with regard to figurative language and pragmatics.
We will construct and adapt several datasets and create a benchmarking pipeline for figurative language understanding, conversation handling, pragmatic reasoning, and associative behavior of LLMs. First, we will tackle metaphor identification and explanation and construct a benchmark validated and augmented with human annotations. The dataset will include valid and invalid paraphrases and explanations, allowing for evaluation in different setups, e.g., textual entailment, figurative language identification and interpretation, and question answering. For irony and sarcasm understanding, we will adapt and translate the Metaphor and Sarcasm Scenario Test (Adachi et al. 2004). The pragmatic understanding benchmark will cover implicature, presupposition, and conversation handling according to Grice's maxims, tested with conversational AI/chatbots (Sravanthi et al. 2024; Miehling et al. 2024). Finally, the associative behavior and association explanation benchmark will adapt WOW and the WAX dataset of association explanations (Liu et al. 2022).
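To illustrate the textual-entailment setup, the following is a minimal sketch of how benchmark items with valid and invalid paraphrases could be scored. The names (`FigurativeItem`, `entailment_accuracy`, the `predict` callable) are hypothetical illustrations, not part of the planned pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FigurativeItem:
    sentence: str    # sentence containing the figurative expression
    paraphrase: str  # candidate literal paraphrase or explanation
    valid: bool      # gold label: is the paraphrase a correct interpretation?

def entailment_accuracy(items: List[FigurativeItem],
                        predict: Callable[[str, str], bool]) -> float:
    """Score a model in the entailment setup: does the model judge each
    candidate paraphrase as entailed by the figurative sentence?"""
    correct = sum(predict(it.sentence, it.paraphrase) == it.valid
                  for it in items)
    return correct / len(items)
```

The same item structure supports the other setups mentioned above (identification, interpretation, question answering) by swapping the scoring function.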
Deliverables 6.1: Metaphor, irony, and sarcasm benchmark in Slovene (M12). Pragmatic and associative behavior explanation benchmark (M24).

T6.2 Spoken language understanding benchmark
While there are many written language understanding benchmarks, spoken language understanding benchmarks lag significantly behind. With the growing number of multimodal LLMs that support speech (Barrault et al. 2023; Hu et al. 2024; Fathullah et al. 2024), the need for such benchmarks is rising rapidly. The objective of this challenge is to develop a spoken language understanding benchmark for Slovene and establish it in the SloBENCH evaluation platform to evaluate the performance of current and future speech-enabled LLMs.
The benchmark will consist of 1) automatic speech recognition, 2) dialogue act identification, and 3) sentiment classification. All three tasks will be integrated into the SloBENCH platform, which currently hosts only one ASR task. The benchmark will contain both a verbatim transcript (“what has been said”) and a non-verbatim, normalized version, with various forms of disfluencies removed and standard language used. Multiple transcription variants will be given for numerals, acronyms, and abbreviations. Two evaluations will be run: a greedy one, which yields the best possible result in terms of word error rate and character error rate, and another that identifies the best consistent path through the variants given their labels. Another part of the speech recognition task will be a bias report, resulting from the experiments in T6.3. The remaining two tasks, dialogue act and sentiment classification, will build on the experiments and experience from T3.2.
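The greedy multi-reference evaluation can be sketched as follows: for each utterance, the hypothesis is scored against every transcript variant and the lowest word error rate is kept. This is a simplified illustration under assumed interfaces; the actual SloBENCH scoring and the label-consistent path search are not shown.

```python
def word_edit_distance(ref: list, hyp: list) -> int:
    """Levenshtein distance over token lists (substitutions,
    insertions, deletions), using a rolling one-row table."""
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + cost)
    return d[len(hyp)]

def greedy_multi_ref_wer(hypothesis: str, variants: list) -> float:
    """Greedy evaluation: the best (lowest) WER over all reference
    variants, e.g. a numeral spelled out vs. written as digits."""
    hyp = hypothesis.split()
    return min(word_edit_distance(v.split(), hyp) / max(len(v.split()), 1)
               for v in variants)
```

Character error rate follows the same pattern with character lists instead of token lists.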
\n
Deliverables 6.2: Speech dataset (4 hours), annotated with dialogue act and sentiment annotations (M30). Multi-reference ASR task, dialogue processing task, and sentiment in speech tasks (M36).

T6.3 Bias detection in LLMs and ASR
Assessing LLMs' inherent biases is an important and challenging task. One of the problems in bias evaluation for low-resource languages is dataset adaptation: common datasets are frequently culturally specific (e.g., EEC; Kiritchenko & Mohammad 2018), and machine translation alone is not adequate, so culture-specific adaptation should be considered. Moreover, it is important to detect bias not only in the textual modality but also in image generation tasks and speech recognition. The well-established performance bias of ASR systems (Feng et al. 2024) makes the technology significantly less accessible to specific demographic groups. One possibility is debiasing LLMs, as they underlie many downstream tasks; we aim to explore this path.

We will analyze the bias of LLMs on both unimodal and multimodal language and vision tasks and investigate a debiasing technique for reducing or controlling the level of bias in LLM outputs in debate settings. First, the English EEC bias evaluation dataset (Kiritchenko & Mohammad 2018) will be adapted to Slovene, and we will use it with zero-shot prompting and sentence continuation generation techniques to assess the bias of LLMs. In the debate setting, different LLMs balance each other out: the text generated during their debate is used for debiasing via fine-tuning and/or chain-of-thought modeling. The debiased models will be assessed on a case of political bias. For multimodal models, we will assess bias using existing classifiers for attributes such as gender and ethnicity in images (e.g., CLIP; Radford et al. 2021). Bias in speech recognition will be analyzed based on the demographic traits of the speakers, including age, gender, and regional background.
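The EEC-style probing can be illustrated with a minimal sketch: templates are filled with terms for two demographic groups, the completions are scored by some sentiment model, and the mean score gap serves as a bias estimate. The `score` callable here is a hypothetical stand-in for the actual LLM or sentiment classifier.

```python
from itertools import product
from statistics import mean

def eec_bias_gap(templates, group_a, group_b, score):
    """EEC-style probe: fill each {person} slot with terms from two
    demographic groups and report the mean sentiment-score gap.
    A gap far from zero suggests the model treats the groups differently."""
    scores_a = [score(t.format(person=p)) for t, p in product(templates, group_a)]
    scores_b = [score(t.format(person=p)) for t, p in product(templates, group_b)]
    return mean(scores_a) - mean(scores_b)
```

In the sentence-continuation variant, `score` would instead rate the sentiment of the model's generated continuation for each filled template.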
\n
Deliverables 6.3: Bias detection datasets for Slovene (M24). Debiasing approach for LLMs (M30). Spoken language bias detection analysis (M36).

T6.4 Knowledge-based explanation of LLMs
Transparency and robustness in LLMs are fundamental to building trust, ensuring fairness and ethical use, fostering innovation, and complying with regulatory standards. While partial solutions for explanation exist, mostly based on LLM self-explanation, they are far from satisfactory. In this research challenge, we will address the current lack of transparency in LLMs with a general methodology applied to specific challenging domains, such as common-sense reasoning, natural language inference, paraphrasing, and computational folkloristics.

The proposed methodology will use i) analysis of specific domain knowledge (e.g., existing ontologies, knowledge graphs, and annotation instructions), ii) semi-automatic generation of explainability datasets with the help of LLMs, and iii) training LLMs on the original problem accompanied by the specific explainability datasets, in both fine-tuning and retrieval-augmented modes.
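The pairing of a task instance with domain knowledge in the retrieval-augmented mode can be sketched roughly as follows, using a naive token-overlap retriever. This is illustrative only: the function name, the dictionary format, and the retrieval heuristic are assumptions, and the real pipeline would draw on ontologies and knowledge graphs with a proper retriever.

```python
def build_rag_example(question: str, answer: str,
                      knowledge: list, k: int = 2) -> dict:
    """Pair a task instance with the top-k knowledge facts (ranked by
    naive token overlap with the question) to form a single
    retrieval-augmented training/evaluation example."""
    q_tokens = set(question.lower().split())
    ranked = sorted(knowledge,
                    key=lambda fact: len(q_tokens & set(fact.lower().split())),
                    reverse=True)
    context = " ".join(ranked[:k])
    return {"input": f"Context: {context}\nQuestion: {question}",
            "target": answer}
```

The same pairing, serialized as input/target text, also yields the fine-tuning variant of the explainability dataset.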
\n
Deliverable 6.4: A novel knowledge-based explanation methodology for LLM explanation (M36).