CoG-MeM: A Cognitive-Behavior-Inspired and Logic-Aligned Design for Memory Encoding, Retrieval, and Synthesis
DOI:
https://doi.org/10.31224/6547
Keywords:
Large Language Models, Memory, Continual Learning
Abstract
We propose CoG-MeM, a cognitive-behavior-inspired memory design for LLMs that extends beyond traditional RAG via a logic-aligned pipeline. CoG-MeM features: (1) Logical Encoding, using SFT and DPO to compress dialogues into high-fidelity "logical chunks" that aim to preserve core axioms; (2) End-to-End Retrieval, fine-tuning the LLM to map queries directly to memory entries; and (3) Logical Arbitration, a reasoning mechanism that prioritizes non-parametric memory over parametric priors when the two conflict. Our results show that CoG-MeM allows models to adopt counterfactual rules through memory injection, without weight updates. As a proof of concept, this design demonstrates promising logical adaptability and potential for data-efficient, non-parametric continual learning in smaller LLMs.
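The three-stage pipeline named in the abstract can be sketched in miniature. The sketch below is illustrative only: all class and method names are hypothetical, the learned components (SFT/DPO encoding, fine-tuned retrieval) are replaced with trivial stand-ins, and only the control flow of encode → retrieve → arbitrate reflects the abstract.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LogicalChunk:
    """A compressed memory entry intended to preserve a dialogue's core axioms."""
    axioms: frozenset  # distilled rules, e.g. {"penguins can fly"}
    source: str        # the dialogue the chunk was distilled from

@dataclass
class CoGMeM:
    """Toy stand-in for the encode -> retrieve -> arbitrate pipeline."""
    memory: list = field(default_factory=list)

    def encode(self, dialogue: str, axioms: set) -> None:
        # Stage 1 (Logical Encoding): in the paper this compression is
        # learned via SFT and DPO; here the axioms are supplied directly.
        self.memory.append(LogicalChunk(frozenset(axioms), dialogue))

    def retrieve(self, query: str) -> list:
        # Stage 2 (End-to-End Retrieval): the paper fine-tunes the LLM to
        # map queries to entries; naive keyword overlap stands in for that.
        q = set(query.lower().split())
        return [c for c in self.memory
                if q & {w for a in c.axioms for w in a.lower().split()}]

    def arbitrate(self, query: str, parametric_answer: str) -> str:
        # Stage 3 (Logical Arbitration): when a relevant chunk exists,
        # prefer non-parametric memory over the model's parametric prior.
        hits = self.retrieve(query)
        return next(iter(hits[0].axioms)) if hits else parametric_answer

mem = CoGMeM()
mem.encode("User: assume penguins can fly.", {"penguins can fly"})
print(mem.arbitrate("can penguins fly", "penguins cannot fly"))
# -> penguins can fly (memory overrides the parametric prior)
```

The key design point the sketch preserves is that the counterfactual rule is injected purely as a memory entry; no model weights change, and arbitration decides at query time which source of knowledge wins.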
Downloads
Versions
- 2026-04-24 (4)
- 2026-04-16 (3)
- 2026-03-12 (2)
- 2026-03-03 (1)
License
Copyright (c) 2026 Zhiqiang Gan

This work is licensed under a Creative Commons Attribution 4.0 International License.