AI Policy
Acceptable Usage
Acceptable uses of LLMs, AI, and machine generation in submissions are listed below. This usage must be disclosed and documented. Failure to disclose, or an implausible disclosure, is grounds for rejection.
- Language translation
- Pre-writing work such as literature searches, idea generation and organization, when the paper also reports on original research and the author performs human supervision (e.g., verifying sources).
- Copy-editing and formatting
- Machine-assisted content or data analysis
- Dictation software
Unacceptable Usage
- Generating text which is used verbatim (including whole paragraphs and sections)
- Generating fake data
- LLMs treated as co-authors, or as interlocutors as if human (e.g., “interviewing” or “dialogue” with LLMs)
- Generating false information to mislead moderators (or readers)
- Submitting AI-generated content which the author has not thoroughly reviewed and confirmed (this includes reviewing the cited sources to confirm they exist and are accurately characterized, and verifying images)
- Entire papers produced by AI generation with no human-generated components beyond prompts
Other work we do not allow, whether machine- or human-generated
- Technical assessment or design of AI programs or algorithms performed by LLMs or similar programs
- Superficial, high-level theory synthesis and systematic reviews (e.g., “proposed framework” papers)
- Overviews of prior research or theory with no contribution to scholarship (comparable to undergraduate course assignments)
- Exceptions to the rules in this category may be made for work that has been peer-reviewed, or based on individual appeal.
Evaluation criteria which may contribute to our decision
- Moderator assessment of minimal contribution to engineering
- Identification of the author as a human with a consistent identity (as marked by ORCID or other sources)
- Moderator detection of likely-AI slop, with the following red flags:
- Literature reviews or synthesis without substantive contribution
- Unsubstantiated grand theories, which may have:
- Fanciful equations
- Integration of multiple major theories into definitive new “paradigms,” “proposed frameworks,” or similar, without substantive engagement
- Papers written in many short sections, often including bullet-point lists and numbered paragraphs
- Fake empirical research
- Papers with:
- High truth-indifference or phoniness (e.g., no realistic grounding)
- Enterprise misrepresentation (e.g., false or misleading description of the research project)
- Constraint evasion (e.g., implausible or facile assumptions without evidence)
- Low falsifiability (e.g., untestable claims without justification)
- Absence of methodological accountability (e.g., extensive methods and data discussion without any research output)
Under the above policies, submissions that fit these criteria may require further examination and may therefore be held longer before posting. We allow appeals by email to director@engrxiv.org; however, appeal decisions are at our discretion. Posting on engrXiv is a privilege, not a right. We ban authors for fraud, plagiarism (including LLM-generated plagiarism), abuse, misrepresentation, or repeated violations.
Adopted and adapted from AI Policy – SocArXiv