T2.1 LLMs for effective lexicography
For the last thirty years, lexicographers have attempted to use various automated procedures to analyze language data and generate lexicographic descriptions (Atkins and Rundell 2008; Gantar et al. 2016). The latest developments in generative AI have triggered attempts to use these new tools for lexicographic purposes (Lew 2023; Rees et al. 2023; Jakubíček & Rundell 2023). However, these first attempts revealed a significant gap between the ability of LLMs to produce quality lexicographic content for English and for other languages, in particular less-resourced languages or those under-represented in LLM training data (de Schryver 2024).
The Digital Dictionary Database for Slovene (DDDS) will be improved at various levels of linguistic description using the models produced in T1.1. We will generate morphological and semantic data, focusing on 1) morphological paradigm generation, 2) word-sense discrimination, 3) generation of various types of definitions (semantic indicators, simplified, terminological, etc.), 4) improvement of collocations and examples of use, 5) attribution of labels (stylistic, normative, domain, genre, etc.), and 6) description of idiomatic, figurative and metaphorical language. The result will be a significantly improved DDDS, which will in turn be used to improve the models in T1.1. All versions of the DDDS will be released as publicly available datasets and via an open-access API.
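As an illustration of how such generation could be wired up, the sketch below prompts a model for sense indicators and simplified definitions. The local endpoint, the model name slovene-lex-llm and the prompt wording are our own assumptions for illustration, not part of the project specification; any OpenAI-compatible server hosting a T1.1 model could stand in.

```python
from openai import OpenAI

# Hypothetical local endpoint and model name; any OpenAI-compatible
# server (e.g. vLLM) hosting a T1.1 model could be substituted.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

PROMPT = """You are a lexicographer working on a Slovene dictionary.
For the headword "{lemma}", using the corpus examples below, propose:
1. a short semantic indicator (1-3 words) for each distinct sense;
2. a simplified definition for each sense.

Corpus examples:
{examples}"""

def draft_sense_entries(lemma: str, examples: list[str]) -> str:
    """Ask the model for draft sense indicators and definitions."""
    response = client.chat.completions.create(
        model="slovene-lex-llm",  # hypothetical name for a T1.1 model
        messages=[{
            "role": "user",
            "content": PROMPT.format(lemma=lemma, examples="\n".join(examples)),
        }],
        temperature=0.2,  # low temperature for more consistent drafts
    )
    return response.choices[0].message.content

# Two examples suggesting two senses of "jezik" (tongue / language):
print(draft_sense_entries("jezik", [
    "Jezik je mišičast organ v ustih.",
    "Slovenski jezik ima poleg ednine in množine tudi dvojino.",
]))
```

Drafts produced this way would of course be reviewed by lexicographers before entering the DDDS.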
Deliverables 2.2: DDDS with generated lexicographic data – first version (M24); DDDS with generated lexicographic data – final version (M36).

T2.2 Neural spell- and grammar checking
Particularly for less-resourced languages, the development of neural grammar correction often relies on synthetic data, such as generated examples of erroneous language use. While useful for addressing data sparsity, this approach lacks authenticity and contextual richness, leading to suboptimal performance in practical applications. The issue is especially problematic in educational settings, where accurate and contextually relevant corrections are essential for effective learning and user trust. To address this challenge, we propose methodologies that combine the strengths of synthetic and authentic language data for grammar handling.

We will use the error-annotated Lektor, KOST and Šolar corpora (the last of which distinguishes 180 different error types) for advanced LLM-based synthesis of examples with linguistic errors. For each error type, we will test different parameters, such as different types of input and prompt wordings, and experiment with different methods of error insertion. We will iteratively produce synthetic data, continuously refining our approach through linguistic evaluations and fine-tuning of Slovene grammar detectors to determine the configurations with the most realistic outcomes. Next, we will create high-quality reference evaluation datasets covering various types of Slovene texts. Besides school essays, we will cover other text genres to produce authentic open-source datasets with texts by adult L1 and L2 writers.
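To make the error-insertion step concrete, the sketch below shows one possible few-shot setup: authentic (correct, erroneous) pairs for a single error type are shown to the model, which is then asked to introduce the same error into a clean sentence. The endpoint, the model name slovene-gec-llm and the prompt wording are illustrative assumptions, not the project's final design.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def inject_error(clean_sentence: str, error_type: str,
                 authentic_pairs: list[tuple[str, str]]) -> str:
    """Few-shot prompt: show authentic (correct, erroneous) pairs for one
    error type from the annotated corpora, then ask the model to insert
    the same kind of error into a new clean sentence."""
    shots = "\n".join(f"Correct: {c}\nErroneous: {e}"
                      for c, e in authentic_pairs)
    prompt = (f"Error type: {error_type}\n"
              f"Examples of this error from a learner corpus:\n{shots}\n\n"
              f"Insert the same type of error into this sentence, "
              f"changing nothing else.\n"
              f"Correct: {clean_sentence}\nErroneous:")
    response = client.chat.completions.create(
        model="slovene-gec-llm",  # hypothetical generator model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # some variability across synthetic examples
    )
    return response.choices[0].message.content.strip()
```

Synthetic outputs produced for each configuration would then feed the iterative evaluation and fine-tuning loop described above.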
Deliverables 2.1: Synthetic language error datasets (M12). Grammar checking LLMs (M18). Authentic grammar checking evaluation datasets (M24).

T2.3 Advanced grammatical analysis of multilingual corpora
In recent decades, linguistics has seen a revolutionary transition from intuition-based research to data-driven approaches, fueled by the advent of large-scale corpora and advanced computational tools. This shift has led to significant new discoveries about language structure and use, particularly in the field of descriptive and comparative grammar analysis. However, traditional corpus-based methods remain labor-intensive and implicitly rely on pre-existing linguistic assumptions that guide the extraction of relevant patterns from corpora and their subsequent analysis. The emergence of LLMs with sophisticated reasoning capabilities offers a groundbreaking opportunity to enhance and expand these methods by streamlining and accelerating corpus-linguistic analysis, and potentially by uncovering previously unidentified patterns of language use. We will develop a novel approach to the grammatical analysis of multilingual corpora by fine-tuning state-of-the-art LLMs on the Universal Dependencies (UD) dataset, which provides large-scale, reliable morphosyntactic annotations for numerous world languages. We will systematically evaluate the potential of such LLMs, enhanced with explicit grammatical knowledge, to provide new insights into the grammar of the world's languages and to facilitate the linguistic analysis of language corpora in general.

We will develop and evaluate a new method for LLM-based grammatical analysis of multilingual corpora. First, we will fine-tune a massively multilingual LLM, such as Llama 3 or T5-XXL, on the massively multilingual UD dataset. Second, we will construct a multi-layered dataset of selected state-of-the-art linguistic findings for three typical corpus-linguistic tasks: data annotation, pattern extraction, and data summarization. Third, we will quantitatively and qualitatively evaluate the capabilities and limitations of the new multilingual linguistic LLM on these tasks, also accounting for different prompting strategies. The new model will provide novel linguistic insights into the world's languages encoded in the UD dataset, and support the grammatical analysis of language corpora in general.
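As a minimal sketch of the data-preparation step, the snippet below converts UD sentences from a CoNLL-U file into instruction-style fine-tuning pairs. The prompt template and JSONL schema are our own assumptions, and the Slovene SSJ treebank file name is used only as an example.

```python
import json
from conllu import parse_incr  # pip install conllu

def ud_to_instructions(conllu_path: str, out_path: str) -> None:
    """Convert each UD sentence into an (instruction, output) pair asking
    the model to annotate UPOS tags and dependency relations."""
    with open(conllu_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as out:
        for sentence in parse_incr(src):
            text = sentence.metadata.get(
                "text", " ".join(t["form"] for t in sentence))
            # Skip multiword-token lines, whose ids are tuples in CoNLL-U.
            answer = "\n".join(
                f'{t["form"]}\t{t["upos"]}\t{t["head"]}\t{t["deprel"]}'
                for t in sentence if isinstance(t["id"], int))
            record = {
                "instruction": "Annotate each token with its UPOS tag, "
                               "head index and dependency relation:\n" + text,
                "output": answer,
            }
            out.write(json.dumps(record, ensure_ascii=False) + "\n")

ud_to_instructions("sl_ssj-ud-train.conllu", "ud_instructions.jsonl")
```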
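For the evaluation step, a sketch along the following lines could compare prompting strategies on the annotation task. The model name ud-llm, the prompt format and the slash-separated tag convention are illustrative assumptions rather than the project's fixed protocol.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# A single illustrative demonstration for the few-shot condition.
FEW_SHOT = "Sentence: Vidim psa .\nTags: Vidim/VERB psa/NOUN ./PUNCT\n\n"

def tag_sentence(sentence: str, few_shot: bool) -> list[str]:
    """Ask the model for slash-separated UPOS tags, zero- or few-shot."""
    prompt = (FEW_SHOT if few_shot else "") + f"Sentence: {sentence}\nTags:"
    reply = client.chat.completions.create(
        model="ud-llm",  # hypothetical UD-fine-tuned model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output for evaluation
    ).choices[0].message.content
    return [tok.rsplit("/", 1)[-1] for tok in reply.split()]

def upos_accuracy(gold: list[tuple[str, list[str]]], few_shot: bool) -> float:
    """Token-level accuracy against gold (sentence, tag list) pairs."""
    correct = total = 0
    for sentence, tags in gold:
        pred = tag_sentence(sentence, few_shot)
        correct += sum(p == g for p, g in zip(pred, tags))
        total += len(tags)
    return correct / total
```

Running the same gold data through both conditions gives a direct quantitative comparison of the two prompting strategies.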
Deliverables 2.3: LLM with improved grammatical knowledge (M12). Dataset for evaluating grammatical knowledge of LLMs (M18). Multilingual and cross-lingual grammatical analyses (M36).