Adaptive Memory for LLM-Based Time Series Analysis
A Case Study on Bitcoin Regime Detection
DOI:
https://doi.org/10.31224/6603

Keywords:
large language models, adaptive memory, regime detection, time series analysis, financial forecasting, market regimes, LSTM

Abstract
Static models trained on historical data fail silently when underlying market dynamics shift, a phenomenon known as concept drift. We investigate whether large language models (LLMs) equipped with structured adaptive memory can detect and adapt to regime changes in financial time series. Using seven years of hourly Bitcoin OHLCV data (2017--2024) across six labeled market regimes, we benchmark four memory architectures (regime context injection, news-weighted memory, cosine similarity-based historical matching, and rolling self-feedback) against an LSTM baseline and a memory-free LLM. For 24-hour price direction prediction, all methods perform near chance (49--51% accuracy), confirming that short-term Bitcoin forecasting remains an open challenge regardless of model architecture. For regime change detection, the primary contribution of this work, the LLM identifies 3 of 6 ground-truth transitions (50%) with a 0% false positive rate and generates structured evidence for each detection, a capability absent from all statistical baselines (CUSUM: 83% detection but no explanations; BinSeg: 33%; Bollinger Bands: 17%). We release all code, data, and prompts to enable full reproducibility. Our findings indicate that LLMs contribute not through superior predictive accuracy, but through explainable drift attribution, a qualitative advantage with practical implications for high-stakes decision-making.
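Among the statistical baselines cited above, CUSUM achieved the highest detection rate (83%) but offers no explanations. As background, CUSUM flags a change point when cumulative standardized deviations from a reference level exceed a decision threshold. A minimal two-sided sketch follows; the function name and the slack/threshold defaults (`k`, `h`) are illustrative choices, not the paper's actual configuration:

```python
def cusum_detect(series, target, sigma, k=0.5, h=5.0):
    """Two-sided CUSUM change detection.

    Accumulates standardized deviations of `series` from `target`
    (scaled by `sigma`), minus a slack `k`; an alarm fires when
    either one-sided statistic exceeds the threshold `h`.
    """
    s_hi = s_lo = 0.0
    alarms = []
    for i, x in enumerate(series):
        z = (x - target) / sigma
        s_hi = max(0.0, s_hi + z - k)  # upward-shift statistic
        s_lo = max(0.0, s_lo - z - k)  # downward-shift statistic
        if s_hi > h or s_lo > h:
            alarms.append(i)
            s_hi = s_lo = 0.0  # restart accumulation after each alarm
    return alarms

# Synthetic series: flat at 0, then a mean shift to 3 at index 100.
series = [0.0] * 100 + [3.0] * 100
alarms = cusum_detect(series, target=0.0, sigma=1.0)
```

On this synthetic series the first alarm lands shortly after the true change point at index 100, illustrating why CUSUM detects shifts reliably while saying nothing about *why* the regime changed, which is the explanatory gap the paper's LLM-based detector targets.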
License
Copyright (c) 2026 Manas Mudbari, Chandan Bhagat

This work is licensed under a Creative Commons Attribution 4.0 International License.