Abelardo Carlos Martinez Lorenzo

Lead AI Scientist building AI for development economics and evidence synthesis.

Publications

Selected publications

Publications by Abelardo Carlos Martinez Lorenzo focus on semantic parsing, multilingual NLP, and meaning representations, together with the broader research foundation behind current work on evidence synthesis and AI for development economics.

The summaries below are short notes based on the official abstracts in the ACL Anthology and AAAI proceedings. For the broader academic profile and citation history, see Google Scholar.

Featured papers

Representative work

These papers are the clearest entry points into the research themes behind the site: multilingual semantics, semantic parsing, and evidence-oriented AI systems.

ACL 2024

Mitigating Data Scarcity in Semantic Parsing across Languages with the Multilingual Semantic Layer and its Dataset.

Introduces the Multilingual Semantic Layer and a semi-automatic multilingual dataset creation pipeline, then shows that manually refining 1,100 sentences across 11 languages helps reduce the semantic parsing performance gap for low-resource languages.

NAACL 2024

MOSAICo: a Multilingual Open-text Semantically Annotated Interlinked Corpus.

Presents a large multilingual corpus with hundreds of millions of silver semantic annotations across four NLU tasks and five languages, aiming to make explicit semantic supervision more available across languages and tasks.

LREC-COLING 2024

Efficient AMR Parsing with CLAP: Compact Linearization with an Adaptable Parser.

Proposes a compact AMR linearization and a modular parser that cuts token counts by roughly 40 to 50 percent and reduces training and inference time by about 80 percent while preserving strong parsing quality.

AAAI 2022

BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers.

Presents BMR as a broad, fully semantic and language-independent meaning representation designed to connect text, images, audio, video, and logical forms across languages and AI applications.

Full list

All publications by year

2024

3 papers

ACL 2024

Mitigating Data Scarcity in Semantic Parsing across Languages with the Multilingual Semantic Layer and its Dataset.

Abstract summary: Introduces the Multilingual Semantic Layer and a semi-automatic multilingual dataset creation pipeline, then shows that manually refining 1,100 sentences across 11 languages helps reduce the semantic parsing performance gap for low-resource languages.

NAACL 2024

MOSAICo: a Multilingual Open-text Semantically Annotated Interlinked Corpus.

Abstract summary: Presents a large multilingual corpus with hundreds of millions of silver semantic annotations across four NLU tasks and five languages, aiming to make explicit semantic supervision more available across languages and tasks.

LREC-COLING 2024

Efficient AMR Parsing with CLAP: Compact Linearization with an Adaptable Parser.

Abstract summary: Proposes a compact AMR linearization and a modular parser that cuts token counts by roughly 40 to 50 percent and reduces training and inference time by about 80 percent while preserving strong parsing quality.

2023

3 papers

Findings of ACL 2023

Cross-lingual AMR Aligner: Paying Attention to Cross-Attention.

Abstract summary: Builds a cross-lingual AMR aligner that recovers span-to-graph alignments directly from transformer cross-attention, avoiding English-specific rules and improving alignment quality across multiple languages.

ACL 2023

AMRs Assemble! Learning to Ensemble with Autoregressive Models for AMR Parsing.

Abstract summary: Examines the weaknesses of existing AMR ensemble methods, adds validation for structural constraints, and proposes two transformer-based ensemble strategies that are both more robust and less computationally expensive.

Findings of ACL 2023

Incorporating Graph Information in Transformer-based AMR Parsing.

Abstract summary: Introduces LeakDistill, which injects graph structure into transformer representations through structural adapters and word-to-node alignment, leading to state-of-the-art AMR parsing without extra training data.

2022

2 papers

ACL 2022

Fully-Semantic Parsing and Generation: the BabelNet Meaning Representation.

Abstract summary: Introduces the BabelNet Meaning Representation as an interlingual formalism built on BabelNet and VerbAtlas, releases the BMR 1.0 dataset, and shows that the formalism supports strong multilingual parsing and generation.

AAAI 2022

BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers.

Abstract summary: Presents BMR as a broad, fully semantic and language-independent meaning representation designed to connect text, images, audio, video, and logical forms across languages and AI applications.