Preprint / Version 2

CoG-MeM: A Cognitive-Behavior-Inspired and Logic-Aligned Design for Memory Encoding, Retrieval, and Synthesis

Authors

  • Zhiqiang Gan, Independent Researcher

DOI:

https://doi.org/10.31224/6547

Keywords:

Large Language Models, Memory, Continual Learning

Abstract

We propose CoG-MeM, a cognitive-behavior-inspired memory design for LLMs that extends beyond traditional RAG via a logic-aligned pipeline. CoG-MeM features: (1) Logical Encoding, which uses SFT and DPO to compress dialogues into high-fidelity "logical chunks" that aim to preserve core axioms; (2) End-to-End Retrieval, which fine-tunes the LLM to map queries directly to memory entries; and (3) Logical Arbitration, a reasoning mechanism that prioritizes non-parametric memory over parametric priors when logical conflicts arise. Our results show that CoG-MeM allows models to adopt counterfactual rules through memory injection without weight updates. As a proof of concept, this design demonstrates promising logical adaptability and potential for data-efficient, non-parametric continual learning in smaller LLMs.
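To make the three-stage pipeline concrete, here is a minimal, hypothetical sketch of how logical chunks, index-based retrieval, and arbitration-prompted generation could fit together. All names (LogicalChunk, retrieve_index, build_arbitration_prompt) and the token-overlap retriever are illustrative stand-ins, not the paper's implementation; in CoG-MeM itself, encoding and retrieval are learned via SFT/DPO and fine-tuning rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class LogicalChunk:
    """One compressed memory entry: a core axiom distilled from dialogue.
    In CoG-MeM this compression is learned (SFT + DPO); here it is given."""
    index: int
    axiom: str  # e.g. a counterfactual rule injected via memory

MEMORY = [
    LogicalChunk(0, "In this world, water boils at 50 degrees Celsius."),
    LogicalChunk(1, "Contracts signed on Sundays are void."),
]

def retrieve_index(query: str) -> int:
    """Stand-in for the fine-tuned end-to-end retriever, which maps a
    query directly to a memory index. Here: naive token overlap."""
    q = set(query.lower().split())
    def overlap(chunk: LogicalChunk) -> int:
        return len(q & set(chunk.axiom.lower().split()))
    return max(MEMORY, key=overlap).index

def build_arbitration_prompt(query: str) -> str:
    """Logical arbitration: instruct the model to prefer the retrieved
    (non-parametric) memory over its parametric priors on conflict."""
    chunk = MEMORY[retrieve_index(query)]
    return (
        f"Memory rule: {chunk.axiom}\n"
        "If this rule conflicts with your prior knowledge, "
        "follow the memory rule.\n"
        f"Question: {query}"
    )

print(build_arbitration_prompt("At what temperature does water boil?"))
```

Running the sketch retrieves chunk 0 and produces a prompt that carries the counterfactual boiling-point rule alongside the question, illustrating how a rule can be adopted through memory injection alone, without any weight update.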

Downloads

Download data is not yet available.

Posted

2026-03-03 — Updated on 2026-03-12

Version justification

  • Expanded Training & Evaluation: The dataset for the structural matching stage has been expanded to 937 training samples and 155 test samples, allowing a more robust assessment of the model's ability to map queries to precise memory indices across six distinct cognitive scenarios.
  • Enhanced Demonstration Suite: The number of interactive case studies has been increased to 10 demos (up from 2). These now span five domains (Physics, Chemistry, Mathematics, Law, and Etiquette) to demonstrate the model's generalized capability in handling varied counterfactual rules and logical constraints.