LLanMER 2025

The First International Workshop on Large Language Model-Oriented Empirical Research

Trondheim, Norway

held in conjunction with the ACM International Conference on the Foundations of Software Engineering (FSE), June 23-27, 2025

Introduction

Large language models (LLMs) are very large deep learning models pre-trained on vast amounts of data. They can answer questions and assist users with tasks such as composing emails, essays, and code. Since ChatGPT's launch in November 2022, researchers have conducted various studies or created tools to integrate LLMs into (1) the current practices of software development and maintenance, (2) the training of next-generation software engineers, and (3) human-computer interactions to facilitate better software/hardware usage.

Many research questions remain open regarding the methodologies for conducting empirical research with LLMs. For instance, what is the best usage of LLMs in different scenarios, what are rigorous measurements for LLM results, and what are the potential impacts of LLM-oriented research on ethics, the economy, energy, and the environment? All of these questions are critical for conducting responsible, reliable, and reproducible empirical research. We therefore organize this workshop around methodologies for conducting empirical research with LLMs. The workshop intends to provide a venue for researchers and practitioners to share ideas, discuss obstacles and challenges in LLM-oriented empirical research, brainstorm solutions, and establish collaborations to define reliable LLM-oriented empirical methodologies in cost-efficient ways. To achieve that goal, the workshop will include a keynote talk, paper presentations, and a panel.

Areas of interest include but are not restricted to:

  • Methodologies: How should we leverage LLMs to solve real-world problems? In the problem-solution procedure, how can we reveal, measure, and address the hallucination issues of LLMs?
  • Measurements: How do we precisely measure the effectiveness of LLM usage and evaluate results? How can we ensure the reproducibility and representativeness of evaluation results? How can we evaluate LLM-based approaches in a scalable way?
  • Analytics: How do we compare different usages of LLMs? How do we ensure a fair comparison given the randomness and hallucination issues of LLMs?
  • Ethical Aspect: What approaches can we take to ensure that LLM-oriented empirical research does not violate ethical regulations or raise ethical concerns?
  • Economic Aspect: What is the cost comparison between different usages of LLMs? How do LLM-based approaches compare with non-LLM-based approaches in terms of effectiveness, performance, runtime cost, and financial cost?
  • Energy Aspect: What is the energy consumption of different usages of LLMs? What kinds of LLM-oriented approaches are energy-saving or energy-consuming solutions? How can we optimize the energy consumption of distinct LLM usages without significantly compromising effectiveness?
  • Environmental Aspect: What potential impacts can LLM usage have on our environment or society? How does it affect personal privacy, intellectual property, technology accessibility, and copyright?

Important dates

Paper submissions: February 25th, 2025
Paper notifications: March 25th, 2025
Paper camera-ready: April 24th, 2025
Workshop date: June 26th, 2025

Submission details

Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options). A minimal LaTeX skeleton is sketched after the list of submission types below. We call for two types of submissions to the workshop:

  • Long (up to 6 pages) or Short (up to 3 pages) Research Papers, plus at most 2 pages for references and well-marked appendices. These papers present research work at an early stage. Position papers with exceptional visions will also be considered.
  • Long (up to 6 pages) or Short (up to 3 pages) Experience Papers, plus at most 2 pages for references and well-marked appendices. These submissions should report experience with the application or assessment of LLMs in a non-trivial setting.
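For LaTeX users, the following is a minimal sketch of a document skeleton consistent with the formatting requirement above; the title, author block, and body text are placeholders, and the author block is anonymized in line with the review process described below.

  % Minimal sketch, assuming the standard IEEEtran class in conference mode;
  % all titles, names, and text below are placeholders.
  \documentclass[10pt,conference]{IEEEtran}  % do not add the compsoc or compsocconf options

  \begin{document}

  \title{Paper Title}
  % Keep the author block anonymous to honor the double-anonymous review process.
  \author{\IEEEauthorblockN{Anonymous Author(s)}}
  \maketitle

  \begin{abstract}
  Abstract text.
  \end{abstract}

  \section{Introduction}
  Body text.

  \end{document}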

Submissions must not have been published elsewhere or be under review elsewhere while being considered for LLanMER 2025. Similar to FSE, LLanMER will employ a double-anonymous review process. Thus, no submission may reveal its authors' identities, and authors must make every effort to honor the double-anonymous review process.

Please submit your papers through the following EasyChair link:

https://easychair.org/conferences/?conf=llanmer2025

For accepted papers (except talk abstracts), authors are required to prepare their camera-ready versions for the workshop proceedings based on the reviewers' suggestions, and one author is expected to attend the workshop and present the paper.

Organizers

Technical Program Committee