{"id":1137,"date":"2024-12-24T10:59:51","date_gmt":"2024-12-24T09:59:51","guid":{"rendered":"https:\/\/www.cjvt.si\/llm4dh\/?page_id=1137"},"modified":"2025-05-14T12:28:17","modified_gmt":"2025-05-14T10:28:17","slug":"challenge-1","status":"publish","type":"page","link":"https:\/\/www.cjvt.si\/llm4dh\/en\/work-packages\/work-package-1\/","title":{"rendered":"Challenge 1: Improving LLMs with linguistic resources and development of vision-language models"},"content":{"rendered":"<div class=\"flex_column av_one_full  no_margin flex_column_div av-zero-column-padding first  avia-builder-el-0  el_before_av_one_full  avia-builder-el-first  \" style='margin-top:0px; margin-bottom:30px; border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><h1><strong>Challenge 1: Improving LLMs with linguistic resources and development of vision-language models<\/strong><\/h1>\n<\/div><\/section><\/div>\n<div class=\"flex_column av_one_full  no_margin flex_column_div av-zero-column-padding first  avia-builder-el-2  el_after_av_one_full  el_before_av_tab_section  avia-builder-el-last  column-top-margin\" style='margin-top:0px; margin-bottom:30px; border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><p>LLMs require large amounts of high-quality textual data for their training and fine-tuning for specific tasks. High-quality lexicographic data can help pretrain LLMs by producing different types of data, in particular knowledge graphs and raw text. 
The information available in lexicographic resources includes semantic relations, sense distributions with definitions of word senses, cross-lingual connections, the identification and description of idiomatic or figurative expressions, etc.<\/p>\n<p>This trove of information is not yet adequately utilized by LLMs, but it could reduce their hallucinations, improve their language proficiency in complex situations and in less-resourced languages, and improve the fine-tuning of LLMs for specific important tasks, such as commonsense reasoning and natural language inference.<\/p>\n<\/div><\/section><\/div>\n<\/div><\/div><\/div><!-- close content main div --><\/div><\/div><div id='av-tab-section-1'  class='av-tab-section-container entry-content-wrapper main_color av-tab-no-transition   av-tab-above-content  avia-builder-el-4  el_after_av_one_full  avia-builder-el-last  submenu-not-first container_wrap fullsize' style=' '  ><div class='av-tab-section-outer-container'><div class='av-tab-section-tab-title-container avia-tab-title-padding-default ' ><a href='#task-1.1' data-av-tab-section-title='1' class='av-section-tab-title av-active-tab-title no-scroll av-tab-no-icon av-tab-no-image  '><span class='av-outer-tab-title'><span class='av-inner-tab-title'>Task 1.1<\/span><\/span><span class='av-tab-arrow-container'><span><\/span><\/span><\/a><a href='#task-1.2' data-av-tab-section-title='2' class='av-section-tab-title  av-tab-no-icon av-tab-no-image  '><span class='av-outer-tab-title'><span class='av-inner-tab-title'>Task 1.2<\/span><\/span><span class='av-tab-arrow-container'><span><\/span><\/span><\/a><a href='#task-1.3' data-av-tab-section-title='3' class='av-section-tab-title  av-tab-no-icon av-tab-no-image  '><span class='av-outer-tab-title'><span class='av-inner-tab-title'>Task 1.3<\/span><\/span><span class='av-tab-arrow-container'><span><\/span><\/span><\/a><a href='#yearly-reports' data-av-tab-section-title='4' class='av-section-tab-title  av-tab-no-icon av-tab-no-image  
'><span class='av-outer-tab-title'><span class='av-inner-tab-title'>Yearly reports<\/span><\/span><span class='av-tab-arrow-container'><span><\/span><\/span><\/a><\/div><div class='av-tab-section-inner-container avia-section-default' style='width:400vw; left:0%;'><span class='av_prev_tab_section av_tab_navigation'><\/span><span class='av_next_tab_section av_tab_navigation'><\/span>\n<div data-av-tab-section-content=\"1\" class=\"av-layout-tab av-animation-delay-container av-active-tab-content __av_init_open  avia-builder-el-5  el_before_av_tab_sub_section  avia-builder-el-first   \" style='vertical-align:middle; '  data-tab-section-id=\"task-1.1\"><div class='av-layout-tab-inner'><div class='container'><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><h3><span style=\"color: #000000;\"><strong><em>Task 1.1:\u00a0Improving LLMs with linguistic data<\/em><\/strong><\/span><\/h3>\n<\/div><\/section><br \/>\n<section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>The first challenge will address LLM improvements with monolingual lexicographic data in the form of knowledge graphs and raw texts.<\/p>\n<\/div>\n<\/section>\n<section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>This task will first develop a novel methodology, suitable for morphologically rich languages, for extracting knowledge graphs from monolingual linguistic knowledge sources such as digital dictionary databases. The task will then develop methods to generate large quantities of raw text from these sources. 
The methodology will be applied to the Digital Dictionary Database for Slovene (DDDS), the largest open-access lexical\/lexicographic resource for the Slovene language, and to Open Slovene WordNet (OSWN; \u010cibej et al. 2023), which is already linked with the DDDS.<\/p>\n<\/div>\n<\/section>\n<\/div><\/section><br \/>\n<div class=\"flex_column av_one_fifth  flex_column_div av-zero-column-padding first  avia-builder-el-8  el_after_av_textblock  el_before_av_four_fifth  column-top-margin\" style='border-radius:0px; '><span  class=\"av_font_icon avia_animate_when_visible avia-icon-animate  av-icon-style-  av-no-color avia-icon-pos-left \" style=\"\"><span class='av-icon-char' style='font-size:40px;line-height:40px;' aria-hidden='true' data-av_icon='\ue810' data-av_iconfont='entypo-fontello' ><\/span><\/span><\/div><div class=\"flex_column av_four_fifth  flex_column_div av-zero-column-padding   avia-builder-el-10  el_after_av_one_fifth  avia-builder-el-last  column-top-margin\" style='border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \"><\/div>\n<\/section>\n<section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p><strong><em>Deliverables 1.1: DDDS and OSWN datasets ready for training (M6). Initial improved LLM (M12). 
Final improved LLM (M24).<\/em><\/strong><\/p>\n<\/div>\n<\/section>\n<\/div><\/section><\/div><\/p>\n<\/div><\/div><\/div><div data-av-tab-section-content=\"2\" class=\"av-layout-tab av-animation-delay-container   avia-builder-el-12  el_after_av_tab_sub_section  el_before_av_tab_sub_section   \" style='vertical-align:middle; '  data-tab-section-id=\"task-1.2\"><div class='av-layout-tab-inner'><div class='container'><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><h3><span style=\"color: #000000;\"><strong><em>Task 1.2:\u00a0Improving LLMs with cross-lingual data<\/em><\/strong><\/span><\/h3>\n<\/div><\/section><br \/>\n<section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>Large quantities of multilingual and cross-lingual linguistic data in several languages offer an opportunity to improve LLMs with further pretraining and fine-tuning for specific tasks. The knowledge-enhanced multilingual LLMs will perform better in general, and particularly for languages with injected multilingual resources. They will improve cross-lingual transfer via fine-tuning and prompt engineering, including multilingual prompt engineering. Such transfer is highly relevant for less-resourced languages such as Slovene.<\/p>\n<\/div>\n<\/section>\n<section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>We will develop a novel methodology for extracting knowledge graphs from multilingual digital dictionary databases such as Wiktionary and BabelNet, and from other cross-lingual resources such as DBPedia and linked WordNets. The methodology will be suitable for morphologically rich languages. 
Further, the task will develop methods to generate large quantities of raw text from multilingual digital dictionary databases. The extracted knowledge graphs (KGs) and raw texts will be used in further pretraining of LLMs for general use. The instruction-following datasets produced by tasks in WP2 and WP5 will be used to adapt LLMs for specific cross-lingual and multilingual linguistic tasks.<\/p>\n<\/div>\n<\/section>\n<\/div><\/section><br \/>\n<div class=\"flex_column av_one_fifth  flex_column_div av-zero-column-padding first  avia-builder-el-15  el_after_av_textblock  el_before_av_four_fifth  column-top-margin\" style='border-radius:0px; '><span  class=\"av_font_icon avia_animate_when_visible avia-icon-animate  av-icon-style-  av-no-color avia-icon-pos-left \" style=\"\"><span class='av-icon-char' style='font-size:40px;line-height:40px;' aria-hidden='true' data-av_icon='\ue810' data-av_iconfont='entypo-fontello' ><\/span><\/span><\/div><div class=\"flex_column av_four_fifth  flex_column_div av-zero-column-padding   avia-builder-el-17  el_after_av_one_fifth  avia-builder-el-last  column-top-margin\" style='border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \"><\/div>\n<\/section>\n<section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p><strong><em>Deliverables 1.2: KG and raw-text datasets (M18). Initial improved LLMs (M24). 
Final improved LLMs (M30).<\/em><\/strong><\/p>\n<\/div>\n<\/section>\n<\/div><\/section><\/div><\/p>\n<\/div><\/div><\/div><div data-av-tab-section-content=\"3\" class=\"av-layout-tab av-animation-delay-container   avia-builder-el-19  el_after_av_tab_sub_section  el_before_av_tab_sub_section   \" style='vertical-align:middle; '  data-tab-section-id=\"task-1.3\"><div class='av-layout-tab-inner'><div class='container'><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><h3><span style=\"color: #000000;\"><em><strong>Task 1.3: Improving multimodal models<\/strong><\/em><\/span><\/h3>\n<\/div><\/section><br \/>\n<section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>Multimodal language models, such as vision-language models (VLMs), are rare and difficult to produce for less-resourced languages, as resources that allow matching different modalities are scarce, and obtaining them with machine translation is mostly inadequate. The challenge is to effectively create VLMs supporting less-resourced languages and domains, such as Slovene, and to advance methodologies for effective VLM training with fewer resources via different methods for aligning modalities, such as alternatives to contrastive learning (Radford et al., 2021), relative representations (Norelli et al., 2023), and novel methods for background knowledge grounding.<\/p>\n<\/div>\n<\/section>\n<section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>We will build a dataset for VLM construction, namely an image-text dataset containing images with Slovenian captions. 
We expect the open-source dataset to contain between 100,000 and 1 million image-text pairs, the minimum required for successful model training. Next, we will construct a VLM for Slovenian, focusing on effective VLM training methods, such as modality alignment methods, contrastive learning (Radford et al., 2021), relative representations (Norelli et al., 2023), and novel methods for background knowledge grounding.<\/p>\n<\/div>\n<\/section>\n<\/div><\/section><br \/>\n<div class=\"flex_column av_one_fifth  flex_column_div av-zero-column-padding first  avia-builder-el-22  el_after_av_textblock  el_before_av_four_fifth  column-top-margin\" style='border-radius:0px; '><span  class=\"av_font_icon avia_animate_when_visible avia-icon-animate  av-icon-style-  av-no-color avia-icon-pos-left \" style=\"\"><span class='av-icon-char' style='font-size:40px;line-height:40px;' aria-hidden='true' data-av_icon='\ue810' data-av_iconfont='entypo-fontello' ><\/span><\/span><\/div><div class=\"flex_column av_four_fifth  flex_column_div av-zero-column-padding   avia-builder-el-24  el_after_av_one_fifth  avia-builder-el-last  column-top-margin\" style='border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \"><\/div>\n<\/section>\n<section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p><strong><em>Deliverables 1.3: Slovene datasets for VLM training (M12). 
Slovene VLM model (M24).<\/em><\/strong><\/p>\n<\/div>\n<\/section>\n<\/div><\/section><\/div><\/p>\n<\/div><\/div><\/div><div data-av-tab-section-content=\"4\" class=\"av-layout-tab av-animation-delay-container   avia-builder-el-26  el_after_av_tab_sub_section  avia-builder-el-last   \" style='vertical-align:middle; '  data-tab-section-id=\"yearly-reports\"><div class='av-layout-tab-inner'><div class='container'><\/div><\/div><\/div><\/div><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":19,"featured_media":0,"parent":953,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_relevanssi_hide_post":"","_relevanssi_hide_content":"","_relevanssi_pin_for_all":"","_relevanssi_pin_keywords":"","_relevanssi_unpin_keywords":"","_relevanssi_related_keywords":"","_relevanssi_related_include_ids":"","_relevanssi_related_exclude_ids":"","_relevanssi_related_no_append":"","_relevanssi_related_not_related":"","_relevanssi_related_posts":"","_relevanssi_noindex_reason":"","inline_featured_image":false,"episode_type":"","audio_file":"","podmotor_file_id":"","podmotor_episode_id":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","filesize_raw":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","footnotes":""},"class_list":["post-1137","page","type-page","status-publish","hentry"],"acf":[]}