In catastrophe risk and probabilistic hazard evaluations, one overarching issue is how to account for uncertainties. One influential school of thought draws a sharp distinction between aleatory uncertainty (pertaining to randomness) and epistemic uncertainty (pertaining to knowledge), and uses this distinction to derive total uncertainty in risk and probabilistic hazard evaluations. This paper first critiques two quantitative versions of this sharp distinction: the first applies only to single models; the second applies to a weighted combination of rival models. The paper shows that serious contradictions and biases arise for each version as elaborated by its advocates. Given these contradictions and the need to address the overarching issue of uncertainties in risk evaluations, the paper then sketches an alternative approach, called robust simulation, that applies to catastrophe risk and probabilistic hazard results. Robust simulation first develops results from a single preferred model, and then finds and evaluates results from the most divergent yet credible rival models. In addition to producing many simulations from these models, the approach tests the resulting distributions for stability.

Taylor, C., Murnane, R., Graf, W., and Lee, Y. (2013). "Epistemic Uncertainty, Rival Models, and Closure." Nat. Hazards Rev., 14(1), 42–51.
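The robust-simulation workflow the abstract outlines — simulate from a single preferred model, then from divergent yet credible rivals, then check the stability of the resulting distributions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the lognormal loss models and their parameters are hypothetical stand-ins for real catastrophe models, and the stability test shown (the relative spread of a tail quantile across repeated runs) is just one plausible choice.

```python
import random
import statistics

random.seed(42)

# Hypothetical loss models: a preferred model plus two divergent rivals.
# Real catastrophe models would replace these one-line samplers.
def preferred_model():
    return random.lognormvariate(0.0, 1.0)

def rival_heavy_tail():
    return random.lognormvariate(0.3, 1.2)   # more pessimistic rival

def rival_light_tail():
    return random.lognormvariate(-0.2, 0.8)  # more optimistic rival

def simulate(model, n=10_000):
    """Draw n annual-loss samples from a model, sorted for quantile lookup."""
    return sorted(model() for _ in range(n))

def quantile(samples, q):
    """Empirical quantile from a sorted sample."""
    return samples[int(q * (len(samples) - 1))]

# Step 1: results from the single preferred model.
base = simulate(preferred_model)

# Step 2: results from the most divergent yet credible rival models.
rivals = {"heavy": simulate(rival_heavy_tail),
          "light": simulate(rival_light_tail)}
for name, samples in rivals.items():
    print(f"rival {name}: 99th-percentile loss = {quantile(samples, 0.99):.2f}")

# Step 3: stability check — do repeated simulation runs give
# similar tail quantiles, or is the estimate still unstable?
reps = [quantile(simulate(preferred_model), 0.99) for _ in range(10)]
spread = statistics.pstdev(reps) / statistics.mean(reps)
print(f"relative spread of 99th percentile across runs: {spread:.3f}")
```

Comparing the rivals' tail quantiles against the preferred model's shows how far credible model divergence moves the risk result, while the spread across repeated runs indicates whether enough simulations were drawn for the distribution to be stable.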