The rapid advancement of Large Language Models (LLMs) and the continuing evolution of Evolutionary Computation (EC) are shaping a new frontier in artificial intelligence. LLMs demonstrate remarkable abilities in natural language understanding, reasoning, and generative tasks, while EC provides a powerful global optimization framework for complex, high-dimensional, and multi-objective problems. Although the two paradigms originate from different traditions, their complementarity is increasingly evident: LLMs can supply domain knowledge, interpretable heuristics, and code generation for evolutionary algorithms, while EC can optimize prompts, fine-tune architectures, and improve deployment strategies of LLMs under constrained or black-box settings.
LLM-enhanced EC and EC-enhanced LLMs open up promising directions across multiple domains. On the one hand, LLMs can act as intelligent operators, problem encoders, and algorithm generators that accelerate evolutionary search, reducing reliance on handcrafted design. On the other hand, EC enables systematic search over prompts, instructions, and architectures, improving the efficiency, robustness, and safety of LLMs at scale. Furthermore, integrated EC-LLM frameworks have shown potential in neural architecture search, code synthesis, software engineering, automated scheduling, text-to-X generation, and scientific discovery. The convergence of EC and LLMs is not only a methodological advance but also a paradigm shift toward more autonomous, interpretable, and adaptive AI systems.
This special issue is well aligned with the thematic scope of Applied and Computational Mathematics, particularly in advancing the synergy between optimization and large-scale language-driven intelligence. By showcasing cutting-edge research at the intersection of EC and LLMs, this issue aims to foster the development of scalable, robust, and interpretable AI systems. The anticipated impact is twofold: deepening the theoretical foundations of LLM-enhanced evolutionary optimization and EC-enhanced LLM adaptation, and accelerating their deployment in diverse real-world domains, including neural architecture search, automated scheduling, software engineering, generative design, and scientific discovery. Ultimately, this special issue will promote the emergence of more autonomous, adaptive, and trustworthy intelligent systems that can address pressing industrial, societal, and scientific challenges.
Types of articles welcomed: Original research articles.
Through this special issue, we aim to catalyze research at the intersection of LLMs and EC, driving new methods and applications toward autonomous and interpretable AI. We welcome researchers from various disciplines to provide interdisciplinary perspectives on Evolutionary Computation Meets Large Language Models: Theory, Methods, and Applications. Your contributions will play a crucial role in advancing knowledge in this field.
Potential topics include, but are not limited to:
- LLM-driven evolutionary operators and pipelines
- Evolutionary prompt and instruction optimization
- EC-based tuning of black-box LLM parameters
- LLM-assisted neural architecture search
- Resource-efficient EC-LLM co-design and deployment
- Interpretable EC enhanced by LLM-derived heuristics
- Evolutionary program synthesis and automated code repair
- Multi-agent EC frameworks with LLM collaboration
- Benchmarks and evaluation protocols for EC-LLM integration