Preprint / Version 1

An Approach to AI High-Velocity Development Through Systematic Context Engineering: A Case Study

Authors:

  • Francesco Bisardi, Independent Researcher

DOI:

https://doi.org/10.31224/5863

Keywords:

Large Language Models (LLMs), Case Study, Generative AI

Abstract

AI-assisted development promises substantial velocity gains, yet teams routinely fall short due to context loss and fragmented workflows. This paper introduces a two-layer context engineering architecture that treats context as a first-class system: (1) a declarative rule layer that encodes stable invariants, and (2) a programmatic Model Context Protocol (MCP) layer that exposes live project structure to AI agents. This architecture sits within a broader four-pillar synthesis consisting of an AI-optimized zero-friction stack, disciplined prompt workflows, the two-layer context system, and agentic orchestration patterns. We evaluate the approach through the development of a production-grade, multi-tenant SaaS platform (220k+ LoC) built by two part-time developers in fifteen weeks. Analysis of 3,676 AI-assisted sessions shows that Context Reuse Efficiency rose from 90.1% to 92.2% and Generative Amplification declined from 5.4% to 4.6%, consistent with a shift from generative substitution to retrieval-augmented reuse as context became systematized. Using a Design Science Research framework, we show that the two-layer pattern is feasible at production scale and produces measurable effects on development velocity. While based on a single case, the architecture and metrics establish a foundation for practitioners and researchers to validate and extend these practices across diverse contexts.
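The two-layer architecture summarized in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the `RULES` dict stands in for the declarative rule layer (its contents are invented here), and `project_tree` stands in for an MCP-style tool that exposes live project structure to an agent.

```python
import json
from pathlib import Path

# Layer 1: declarative rules -- stable invariants an AI agent must respect.
# (Hypothetical content; the paper does not publish its rule set here.)
RULES = {
    "tenancy": "every query must be scoped by tenant_id",
    "style": "all new modules require type hints and docstrings",
}

def project_tree(root: str, max_depth: int = 2) -> dict:
    """Layer 2: expose live project structure, as an MCP-style tool might.

    Returns a nested dict of directories and files up to max_depth;
    files map to None, directories to their own nested dict.
    """
    def walk(path: Path, depth: int):
        if depth > max_depth or not path.is_dir():
            return None
        return {
            child.name: walk(child, depth + 1)
            for child in sorted(path.iterdir())
            if not child.name.startswith(".")
        }
    return {Path(root).name: walk(Path(root), 1)}

def build_context(root: str) -> str:
    """Combine both layers into one context payload handed to an agent."""
    return json.dumps({"rules": RULES, "structure": project_tree(root)},
                      indent=2)
```

In this sketch the rule layer is static data while the structure layer is recomputed on each call, mirroring the paper's distinction between stable invariants and live project state.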

Posted

2025-11-25