DOI of the published article: 10.1109/ICIPTM69057.2026.11465617 (https://ieeexplore.ieee.org/document/11465617)
Bounding the Long Tail: AI Norms for Decision-Making under Negligible Probabilities
DOI: https://doi.org/10.31224/6167

Abstract
This paper recasts a long-standing decision-theory dilemma as an AI agent-design problem, proposing a principled cutoff for ultra-low-probability, extreme-utility outcomes to prevent exploitability in autonomous systems. By characterizing a vulnerability class for expected-utility maximizers and introducing a rationally negligible probability threshold grounded in cognitive skepticism, the framework preserves dominance and tractability while blocking adversarial gambles such as Pascal-type offers. Formal analysis motivates design norms for AI agents (utility bounding, calibrated priors, and epsilon-screening), together with guidance on selecting context-sensitive thresholds that maintain preference stability. This positions the proposal as a safety-centric inductive bias for rational AI decision-makers, aligning theoretical desiderata with implementable policy constraints in high-stakes, low-signal environments.
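To make the abstract's design norms concrete, here is a minimal hypothetical sketch (not taken from the paper) of an expected-utility evaluator that combines two of the norms named above: utility bounding and epsilon-screening of negligible probabilities. The function name, the threshold `epsilon`, and the bound `u_max` are illustrative assumptions.

```python
def screened_expected_utility(outcomes, epsilon=1e-9, u_max=1e6):
    """Compute expected utility over (probability, utility) pairs.

    Illustrative sketch: probabilities below epsilon are treated as
    rationally negligible and screened out; utilities are clamped to
    [-u_max, u_max] (utility bounding).
    """
    total = 0.0
    for p, u in outcomes:
        if p < epsilon:
            continue  # rationally negligible branch: dropped entirely
        total += p * max(-u_max, min(u, u_max))  # bounded utility
    return total

# A Pascal-type offer: an astronomically small chance of an astronomical
# payoff, paired with a near-certain small loss. Under naive expected
# utility the tiny branch dominates (1e-30 * 1e40 = 1e10); after
# screening, the gamble is correctly rejected.
mugging = [(1e-30, 1e40), (1 - 1e-30, -1.0)]
print(screened_expected_utility(mugging))  # ≈ -1.0 rather than ≈ 1e10
```

The context-sensitive threshold guidance in the paper would correspond here to choosing `epsilon` per deployment domain rather than fixing it globally.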
License
Copyright (c) 2026 Vipul Razdan

This work is licensed under a Creative Commons Attribution 4.0 International License.