Systematic Prompt Optimization for LLM-Based Backend API Generation: An Empirical Study in NestJS
DOI:
https://doi.org/10.31224/6363

Keywords:
Artificial Intelligence, Large Language Models (LLMs), Prompt Engineering

Abstract
Large Language Models (LLMs) are increasingly used as developer productivity tools for backend application programming interface (API) generation. However, prompt engineering is typically performed in an ad hoc manner, limiting reliability and code quality. This study systematically evaluates prompt design strategies for NestJS-based API endpoint generation across five realistic backend tasks. We compared baseline prompting against persona-based, structured reasoning, constraint-driven, and self-review strategies using automated functional, security, architectural, and completeness metrics. Our results show that structured and reflective prompting significantly improves code quality, achieving up to 24% relative improvement over baseline prompts. These findings demonstrate that prompt design is a critical engineering lever for production-ready, AI-assisted software development.
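The five prompting strategies named above can be sketched as simple prompt templates. Note that the strategy wording and the example task below are illustrative assumptions for exposition; the study's actual prompts are not given in this abstract.

```typescript
// Illustrative sketch only: prompt wording below is an assumption,
// not the prompts evaluated in the study.

type Strategy = "baseline" | "persona" | "structured" | "constraint" | "selfReview";

const TEMPLATES: Record<Strategy, (task: string) => string> = {
  // Baseline: the task description alone.
  baseline: (task) => task,
  // Persona-based: prefix a role instruction.
  persona: (task) =>
    `You are a senior NestJS backend engineer.\n${task}`,
  // Structured reasoning: ask for an explicit plan before code.
  structured: (task) =>
    `${task}\nFirst outline the controller, service, and DTOs, then write the code.`,
  // Constraint-driven: enumerate explicit quality constraints.
  constraint: (task) =>
    `${task}\nConstraints: validate all inputs with class-validator; ` +
    `return correct HTTP status codes; do not hard-code secrets.`,
  // Self-review: ask the model to critique and revise its own output.
  selfReview: (task) =>
    `${task}\nAfter generating the code, review it for security and completeness, ` +
    `then output the revised version.`,
};

// Hypothetical example task.
const prompt = TEMPLATES.constraint(
  "Generate a NestJS endpoint: POST /users that creates a user."
);
console.log(prompt);
```

Each template wraps the same task description, so the strategies differ only in the framing added around it, which is what makes an automated comparison across functional, security, architectural, and completeness metrics possible.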
License
Copyright (c) 2026 Himanshu Sharma

This work is licensed under a Creative Commons Attribution 4.0 International License.