This preprint has been published as a journal article.
DOI of the published article: 10.1109/ICIPTM69057.2026.11465617 (https://ieeexplore.ieee.org/document/11465617)
Preprint / Version 1

Bounding the Long Tail: AI Norms for Decision-Making under Negligible Probabilities


DOI:

https://doi.org/10.31224/6167

Abstract

This paper recasts a long-standing decision-theory dilemma as an AI agent-design problem, proposing a principled cutoff for ultra-low-probability, extreme-utility outcomes to prevent exploitability in autonomous systems. By characterizing a vulnerability class for expected-utility maximizers and introducing a rationally negligible probability threshold grounded in cognitive skepticism, the framework preserves dominance and tractability while blocking adversarial gambles such as Pascal-type offers. Formal analysis motivates design norms for AI agents (utility bounding, calibrated priors, and epsilon-screening) together with guidance on selecting context-sensitive thresholds that maintain preference stability. This positions the proposal as a safety-centric inductive bias for rational AI decision-makers, aligning theoretical desiderata with implementable policy constraints in high-stakes, low-signal environments.
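The norms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the threshold value, utility bound, and function names below are hypothetical, chosen only to show how epsilon-screening and utility bounding would alter an expected-utility calculation on a Pascal-type offer.

```python
# Illustrative sketch (assumptions, not the paper's method): an expected-utility
# evaluator that applies two of the abstract's norms -- utility bounding and an
# epsilon probability screen. EPSILON and U_BOUND are hypothetical values.

EPSILON = 1e-9   # hypothetical "rationally negligible" probability cutoff
U_BOUND = 1e6    # hypothetical utility bound clamping extreme payoffs

def screened_expected_utility(gamble):
    """Expected utility over (probability, utility) pairs, dropping outcomes
    whose probability falls below EPSILON and clamping utilities to
    [-U_BOUND, U_BOUND]."""
    total = 0.0
    for p, u in gamble:
        if p < EPSILON:
            continue  # epsilon-screening: ignore negligible-probability outcomes
        u = max(-U_BOUND, min(U_BOUND, u))  # utility bounding
        total += p * u
    return total

# A Pascal-type offer: a vanishingly small chance of an astronomical payoff,
# otherwise a small loss. A naive maximizer would accept it; the screened
# evaluator drops the negligible branch and sees only the loss.
pascal_offer = [(1e-20, 1e30), (1.0 - 1e-20, -1.0)]
mundane_option = [(0.5, 2.0), (0.5, 0.0)]

print(screened_expected_utility(pascal_offer))    # ~ -1.0
print(screened_expected_utility(mundane_option))  # 1.0
```

Under unscreened expected utility the offer's value would be dominated by the 1e30 term; after screening, the mundane option is preferred, which is the exploit-blocking behavior the abstract describes.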


Posted

2026-01-07