Large language models (LLMs) are deep learning models pre-trained on vast amounts of data. They can answer questions and assist users with tasks such as composing emails, essays, and code. Since ChatGPT's launch in November 2022, researchers have conducted studies and built tools to integrate LLMs into (1) current practices of software development and maintenance, (2) the training of next-generation software engineers, and (3) human-computer interaction to facilitate better software/hardware usage.
Many research questions remain open regarding the methodologies for conducting empirical research with LLMs. For instance, how can LLMs best be used in different scenarios, how can LLM results be measured rigorously, and what are the potential impacts of LLM-oriented research on ethics, the economy, energy, and the environment? Answering these questions is critical for conducting responsible, reliable, and reproducible empirical research. We therefore organize this workshop to focus on methodologies for conducting empirical research with LLMs. The workshop intends to provide a venue for researchers and practitioners to share ideas, discuss obstacles and challenges in LLM-oriented empirical research, brainstorm solutions, and establish collaborations to define reliable LLM-oriented empirical methodologies in cost-efficient ways. To achieve that goal, the workshop will include a keynote talk, paper presentations, and a panel.
Areas of interest include but are not restricted to:
Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines: title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without the compsoc or compsocconf options (a minimal skeleton is sketched below). We call for two types of submissions to the workshop:
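As a reference for the formatting requirement above, a minimal LaTeX skeleton consistent with these instructions might look as follows (the title and body text are placeholders; IEEEtran's conference mode sets the title and body at the required sizes):

    \documentclass[10pt,conference]{IEEEtran} % 10pt body; do NOT add the compsoc or compsocconf options
    \begin{document}

    \title{Paper Title} % conference mode renders the title at the required 24pt size
    \author{\IEEEauthorblockN{Anonymous Author(s)}} % double-anonymous review: do not reveal identities
    \maketitle

    \begin{abstract}
    Abstract text.
    \end{abstract}

    \section{Introduction}
    Body text is set in 10pt type by the document class.

    \end{document}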
Submissions must not have been published elsewhere or be under review elsewhere while under consideration for LLanMER 2025. Similar to FSE, LLanMER will employ a double-blind review process; thus, no submission may reveal its authors’ identities. Authors must make every effort to honor the double-anonymous review process.
Please submit your papers through the following EasyChair link:
https://easychair.org/conferences/?conf=llanmer2025
For accepted papers (except talk abstracts), authors are required to prepare the final versions for the workshop proceedings based on the reviewers' suggestions, and at least one author is expected to attend the workshop and present the paper.