Memory-Augmented LLM Agents
The rise of Large Language Models (LLMs) such as GPT-5, Claude, and Gemini has opened new frontiers for developing intelligent agents capable of performing complex tasks via natural language interfaces. These agents exhibit remarkable generalization capabilities but often suffer from limited memory, inconsistent context retention, and a lack of interpretability, especially in evolving, knowledge-intensive environments.
To address these challenges, recent advances have explored explicit memory augmentation, agent-based architectures, and neuro-symbolic integration as key building blocks toward building adaptive, trustworthy, and knowledge-grounded LLM agents. Notably, integrating structured knowledge representations such as knowledge graphs or symbolic memory stores allows LLMs to better retain long-term factual knowledge, support continual adaptation, and enhance agent behavior in dynamic real-world settings.
This special issue aims to bring together cutting-edge research at the intersection of LLMs, memory architectures, agent systems, and knowledge engineering, to address foundational and applied challenges in developing memory-augmented LLM agents. We invite original research papers, applied studies, and open-source resources (e.g. tools, benchmarks, datasets) that focus on improving the memory, reasoning, and adaptability of LLM-based agents through structured knowledge, continual learning, and multi-agent coordination.
Topics of interest include, but are not limited to:
1. Memory-augmented architectures for LLM-based agents
2. Symbolic memory integration for long-term reasoning and planning
3. Knowledge mechanisms and mechanistic interpretability in LLMs and agents
4. Causal learning for LLMs and causal agents
5. Causal interpretability in LLMs and agents
6. Knowledge editing, injection, and refinement for LLM memory alignment
7. Continual and lifelong learning for LLM agents in dynamic environments
8. Explicit memory retrieval and routing in multimodal or multi-agent setups
9. Controllable and verifiable memory behavior in LLM-based systems
10. Temporal alignment of structured knowledge and agent memory
11. Emergent behavior and topology optimization in memory-driven agents
12. Neuro-symbolic models for agent communication and coordination
13. Domain-specific LLM adaptation with structured memory (e.g. scientific discovery, code generation, healthcare, finance)
14. Benchmarks, tools, and datasets for memory-enhanced LLM agents
15. Explainability and trustworthiness via symbolic knowledge traces
16. Open-world adaptation, knowledge forgetting, and memory editing
17. Cross-lingual or low-resource agent memory modelling
Guest Editors
Dr. Kun Kuang, Zhejiang University, China
Dr. Ningyu Zhang, Zhejiang University, China
Dr. Hang Yu, Shanghai University, China
Dr. Tongtong Wu, Monash University, Australia
Deadline
The deadline for manuscript submissions is 15 December 2026, though extensions can be accommodated on a case-by-case basis. Manuscripts submitted before the deadline are subject to an article processing charge (APC) of USD 2,750. All accepted papers will be published online.
Submission Instructions
Please submit the full manuscript to The Knowledge Engineering Review via our Online Submission System. All manuscripts are thoroughly refereed through a single-anonymized peer-review process. A guide for submitting manuscripts is available on the For Authors page.
Additionally, please select the topic of this Special Issue when submitting and specify it in your cover letter. For further inquiries, please contact the Guest Editors:
Kun Kuang (kunkuang@zju.edu.cn)
Ningyu Zhang (zhangningyu@zju.edu.cn)
Hang Yu (yuhang@shu.edu.cn)
Tongtong Wu (tongtong.wu@monash.edu)