Two papers published at AACL-IJCNLP 2025

BamNLP publishes two papers at the AACL-IJCNLP conference in Mumbai, India.

Jiahui Li, Sean Papay, Roman Klinger. Are Humans as Brittle as Large Language Models?

Large language models are brittle – small changes to a prompt may change the output entirely. This is commonly seen as a major shortcoming of these models, but humans are also "brittle": depending on how a question is asked, their answers change as well. In this paper, we study how answer changes by humans compare to those by LLMs – what happens if there are typos in the answer options, or their order changes? We find that humans are indeed also brittle, but to a lesser extent than LLMs, and not susceptible to the same types of changes.


Yarik Menchaca Resendiz, Martin Kerwer, Anita Chasiotis, Marlene Bodemer, Kai Sassenberg, Roman Klinger: Supporting Plain Language Summarization of Psychological Meta-Analyses with Large Language Models.

In this demo paper, we present a system that we developed together with the Leibniz Institute for Psychology in the KlarPsy project. The KlarPsy team prepares meta-reviews on psychological topics in a way that laypeople can understand more easily. Our prompt-based system supports the "translation" into simpler language and the extraction of the most important facts.