Preprint / Version 1

Knowledge-Guided 3D CT Generation: A Conditioning-Centric Taxonomy

DOI:

https://doi.org/10.31224/6313

Keywords:

3D computer vision, Multimodal learning, Deep learning architectures, Generative models, Biomedical image analysis

Abstract

Controllable generation guided by external knowledge is a key requirement in modern generative deep learning applications, enabling the synthesis of samples with explicit constraints on semantic content, structural properties, and variability. In 3D Computed Tomography (CT), such control is essential for clinical applications, including data augmentation, privacy-preserving data sharing, and the simulation of specific anatomical or pathological scenarios. While research on conditional 3D CT generation has expanded rapidly, the diversity of existing approaches makes systematic comparison difficult and obscures fundamental design choices. In this survey, we propose a conditioning-centric taxonomy that organizes the literature along three orthogonal dimensions: the type of external knowledge (K), the knowledge integration paradigm (I), and the generative architecture (A). This factorization defines an explicit design space (K × I × A) that provides a unified perspective on prior work. Using this framework, we systematize existing methods, identify dominant trends and recurring design patterns, and highlight underexplored regions of the design space that point toward promising directions for future research.
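The factorized design space described above can be sketched programmatically: each surveyed method becomes a point in K × I × A, and unoccupied cells mark underexplored regions. The axis values and method names below are illustrative placeholders, not the survey's actual categories, which are not enumerated in this abstract.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical axis values -- the survey's real categories are not listed
# in the abstract, so these labels are stand-ins for illustration only.
class Knowledge(Enum):        # K: type of external knowledge
    TEXT = auto()
    SEGMENTATION_MASK = auto()
    CLINICAL_ATTRIBUTES = auto()

class Integration(Enum):      # I: knowledge integration paradigm
    CONCATENATION = auto()
    CROSS_ATTENTION = auto()

class Architecture(Enum):     # A: generative architecture
    GAN = auto()
    DIFFUSION = auto()

@dataclass(frozen=True)
class Method:
    """A surveyed method, located by its coordinates in K x I x A."""
    name: str
    k: Knowledge
    i: Integration
    a: Architecture

# Systematize a (hypothetical) corpus as points in the design space.
corpus = [
    Method("ExampleTextDiff", Knowledge.TEXT,
           Integration.CROSS_ATTENTION, Architecture.DIFFUSION),
    Method("ExampleMaskGAN", Knowledge.SEGMENTATION_MASK,
           Integration.CONCATENATION, Architecture.GAN),
]

# Underexplored regions = cells of K x I x A with no method assigned.
occupied = {(m.k, m.i, m.a) for m in corpus}
total_cells = len(Knowledge) * len(Integration) * len(Architecture)
print(f"occupied {len(occupied)} of {total_cells} design-space cells")
```

Organizing methods this way makes the survey's gap analysis mechanical: enumerating the Cartesian product and subtracting the occupied cells yields the candidate directions for future work.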

Posted

2026-01-20