Special Issue

Vol. 12, No. 3 (2025): "Supercomputing for Creating, Fine-tuning and Application of Large Language Models"

Submission Deadline: 1 September 2025

In recent years, Large Language Models (LLMs) such as GPT, LLaMA, Gemini, and others have revolutionized a wide range of domains, from natural language understanding and generation to scientific discovery, coding, and education. Training and fine-tuning these models require massive computational resources and innovative algorithmic and architectural solutions. High-performance computing (HPC) environments play a pivotal role in overcoming the challenges posed by the scale, complexity, and resource intensity of LLMs.

This special issue aims to bring together researchers, practitioners, and industry leaders at the intersection of AI, machine learning, and supercomputing to share their latest advances, methodologies, and experiences related to LLMs.

This Special Issue on Supercomputing for Creating, Fine-tuning and Application of Large Language Models (No. 3, 2025) of the scientific journal "Supercomputing Frontiers and Innovations" (indexed in the ACM Digital Library and Scopus) is an open call for original, unpublished papers presenting scientific contributions in the field of high-performance computing.

Topics of interest include, but are not limited to:

  • Architectural Innovations: Novel HPC architectures, quantum-classical hybrid systems, and specialized hardware (e.g., TPUs, GPUs) for LLM training and inference.
  • Distributed Training & Optimization: Scalable algorithms, parallel computing techniques, and frameworks for efficient large-scale model training.
  • Model Compression & Efficiency: Quantization, pruning, knowledge distillation, and other methods to reduce computational costs while maintaining performance.
  • Fine-tuning & Adaptation: Techniques for task-, language-, and domain-specific customization, as well as continual learning on resource-constrained platforms.
  • Ethical & Societal Implications: Bias mitigation, fairness, transparency, and sustainability in LLM development.
  • Applications in Science & Industry: Case studies on LLMs in healthcare, climate modeling, finance, and other domains leveraging HPC.
  • Energy Efficiency & Sustainability: Green computing strategies, carbon footprint analysis, and eco-friendly model deployment.
  • Resources and Evaluation: Benchmarks and performance evaluation of LLMs in training and inference.
  • Case Studies: Applications of LLMs in science, healthcare, engineering, and other domains.
  • Multimodal Language Models: Training and deployment at scale.
  • Robustness & Interpretability: Methods for analyzing the robustness and interpretability of LLMs.

Papers should be 10–16 pages long and present the results of a completed scientific study. All articles will be peer reviewed and accepted based on quality, originality, novelty, and relevance to the special issue.

Contributions should be written in good English and must meet the Supercomputing Frontiers and Innovations standards. Please prepare your article according to the Author Guidelines before submitting a manuscript, and use JSFI's online system to submit your manuscript (registration in the system is required). Please do not forget to leave a comment for the Editorial Board stating that the paper is submitted for the special issue 3/2025.

Program Committee:

  • Dr. Natalia Loukachevitch, Leading Researcher, Research Computing Center, and Professor, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Russia
  • Dr. Mikhail Tikhomirov, Researcher, Research Computing Center, Lomonosov Moscow State University, Russia

All questions about submissions should be emailed to Dr. Natalia Loukachevitch (louk_nat@rcc.msu.ru).