Boundary as an Execution-Time Primitive for AI-Assisted Software Development Governance
Why AI-Assisted Development Fails Before Governance Begins
DOI: https://doi.org/10.31224/6583

Keywords: AI-assisted software development, AI governance, Task execution boundary, Boundary evidence, LLM agents, Software engineering governance, Runtime constraints, Decidable governance, Engineering accountability, Prompt engineering

Abstract
Loss of control, behavioral drift, and non-auditability in AI-assisted software development are commonly attributed to model misalignment, hallucination, or insufficient guardrails. This paper argues that such diagnoses overlook a fundamental category distinction.
We distinguish between model alignment boundaries, established at training time through data distributions, RLHF, and safety fine-tuning, and task execution boundaries, which must be explicitly constructed at execution time for a specific engineering task. While the former provides general, statistical safety tendencies, it does not, and cannot, automatically supply the concrete, task-specific constraints required for engineering governance.
We show that many widely reported failures, including insecure yet functional code generation, arise not from deficient model alignment but from the absence of a decidable task execution boundary at runtime. When such a boundary is missing, drift and violation become epistemically undecidable, and model preferences fill the resulting vacuum.
We formalize task execution boundaries as the resolution of visible scope and explicit prohibitive constraints, introduce boundary evidence as the minimal auditable unit, and demonstrate through engineering scenarios that governance mechanisms operating without this primitive rest on interpretive rather than decidable foundations.
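To make the formalization concrete, the following minimal sketch shows one way a task execution boundary could be rendered decidable for a file-scoped coding task: a boundary resolves a visible scope and a list of explicit prohibitions, and every check emits a piece of boundary evidence. All names here (TaskExecutionBoundary, BoundaryEvidence, the example scope and prohibition) are hypothetical illustrations, not the paper's implementation.

    from dataclasses import dataclass
    from fnmatch import fnmatch
    from typing import Callable

    @dataclass(frozen=True)
    class BoundaryEvidence:
        """Minimal auditable unit: one action, one verdict, one reason."""
        action: str
        allowed: bool
        reason: str

    @dataclass
    class TaskExecutionBoundary:
        """Resolution of visible scope and explicit prohibitive constraints."""
        visible_scope: list[str]  # glob patterns the task may touch
        prohibitions: list[tuple[str, Callable[[str], bool]]]  # (label, predicate)

        def check(self, path: str) -> BoundaryEvidence:
            # Explicit prohibitions are checked first: a stated "no" overrides scope.
            for label, forbids in self.prohibitions:
                if forbids(path):
                    return BoundaryEvidence(path, False, f"prohibited: {label}")
            # Outside the visible scope, drift is a decidable violation,
            # not a matter of interpretation.
            if not any(fnmatch(path, pattern) for pattern in self.visible_scope):
                return BoundaryEvidence(path, False, "outside visible scope")
            return BoundaryEvidence(path, True, "within boundary")

    boundary = TaskExecutionBoundary(
        visible_scope=["src/payments/*.py", "tests/payments/*.py"],
        prohibitions=[("auth module is off-limits",
                       lambda p: p.startswith("src/auth/"))],
    )

    for path in ["src/payments/refund.py", "src/auth/tokens.py", "docs/index.md"]:
        ev = boundary.check(path)
        print(f"{ev.action}: {'ALLOW' if ev.allowed else 'DENY'} ({ev.reason})")

Because every check returns evidence rather than a bare boolean, each allow or deny decision carries its justifying rule, which is what makes drift and violation auditable facts rather than interpretive judgments.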
License
Copyright (c) 2026 Spark Tsai

This work is licensed under a Creative Commons Attribution 4.0 International License.