Preprint / Version 1

CoG-MeM: A Cognitive-Behavior-Inspired and Logic-Aligned Design for Memory Encoding, Retrieval, and Synthesis

Authors

  • Zhiqiang Gan, Independent Researcher

DOI:

https://doi.org/10.31224/6547

Keywords:

Large Language Models, Memory, Continual Learning

Abstract

We propose CoG-MeM, a cognitive-behavior-inspired memory design for LLMs that extends beyond traditional RAG via a logic-aligned pipeline. CoG-MeM features: (1) Logical Encoding, which uses SFT and DPO to compress dialogues into high-fidelity "logical chunks" that aim to preserve core axioms; (2) End-to-End Retrieval, which fine-tunes the LLM to map queries directly to memory entries; and (3) Logical Arbitration, a reasoning mechanism that prioritizes non-parametric memory over parametric priors during logic conflicts. Our results show that CoG-MeM allows models to adopt counterfactual rules through memory injection without weight updates. As a proof of concept, this design demonstrates promising logical adaptability and potential for data-efficient, non-parametric continual learning in smaller LLMs.
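
Since only the abstract is available here, the following is a minimal, runnable Python sketch of how the three stages might compose, not the paper's implementation. All names (LogicalChunk, CoGMeMSketch, encode/retrieve/answer) are illustrative assumptions, and trivial stand-ins (verbatim storage, token overlap) replace the SFT+DPO-tuned encoder and the fine-tuned retriever described in the abstract.

from dataclasses import dataclass

@dataclass
class LogicalChunk:
    """A compressed memory entry: a rule distilled from a dialogue (stage 1)."""
    topic: str
    rule: str

class CoGMeMSketch:
    """Toy three-stage pipeline: encode -> retrieve -> arbitrate."""

    def __init__(self) -> None:
        self.memory: list[LogicalChunk] = []

    def encode(self, dialogue: str, topic: str) -> None:
        # Stage 1 (Logical Encoding): the paper distills dialogues into
        # "logical chunks" with an SFT+DPO-tuned model; this stub simply
        # stores the dialogue verbatim as the rule.
        self.memory.append(LogicalChunk(topic=topic, rule=dialogue))

    def retrieve(self, query: str) -> LogicalChunk | None:
        # Stage 2 (End-to-End Retrieval): the paper fine-tunes the LLM to map
        # queries directly to memory entries; token overlap stands in here.
        def score(chunk: LogicalChunk) -> int:
            return len(set(query.lower().split()) & set(chunk.topic.lower().split()))
        best = max(self.memory, key=score, default=None)
        return best if best is not None and score(best) > 0 else None

    def answer(self, query: str, parametric_prior: str) -> str:
        # Stage 3 (Logical Arbitration): on a logic conflict, the retrieved
        # (non-parametric) rule overrides the model's parametric prior.
        chunk = self.retrieve(query)
        if chunk is not None and chunk.rule != parametric_prior:
            return chunk.rule
        return parametric_prior

# Inject a counterfactual rule as memory; no weight update is needed.
mem = CoGMeMSketch()
mem.encode("In this setting, water boils at 50 degrees Celsius.",
           topic="water boiling point")
print(mem.answer("At what temperature does water boil?",
                 parametric_prior="Water boils at 100 degrees Celsius."))
# Prints the injected counterfactual rule rather than the parametric prior.

The arbitration step makes the key design choice explicit: when memory and prior disagree, memory wins, which is what allows counterfactual rules to take effect without touching model weights.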

Downloads

Download data is not yet available.

Posted

2026-03-03
