Group 4—Uncertainty, ignorance and partial knowledge

Chair: Professor Mark Colyvan

In many risk situations, agents are forced to make decisions despite the large uncertainties typically involved. Fortunately there is a fairly standard account of how such decisions are made: expected utility theory. This theory counsels the agent to calculate the expected utility for each act under consideration, then choose the act with the highest expected utility (if there is such an act). This is all well and good if the uncertainties in question are reasonably well behaved, but this is not always the case. In order to use expected utility theory, the agent must have precise probability and utility assignments for all the outcomes under consideration. But we can be uncertain about these for a variety of reasons. For example, in environmental policy decisions, there can be a great deal of disagreement about the appropriate utilities and this gives rise to uncertainty about which utilities ought to be used in the expected utility calculations. On the probability side, we can have uncertainty about the appropriate statistical model. In such cases, precise probability and utility assignments to each outcome are at best an idealisation and at worst a serious misrepresentation that hides the extent of our ignorance. Worse still, there are arguably cases where probabilities are not even the right tools (when the uncertainty arises from linguistic sources, for instance). In short, we very often face meta-uncertainty: uncertainty about the extent and nature of the uncertainty we face.
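The choice rule described above can be sketched in a few lines. The acts, probabilities, and utilities below are entirely hypothetical stand-ins for a stylised environmental policy decision; the point is only the mechanics of the expected utility calculation.

```python
# A minimal sketch of expected utility theory: each act is a list of
# (probability, utility) pairs over its possible outcomes, and the agent
# chooses the act with the highest expected utility.
# All numbers are illustrative, not drawn from any real case.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one act."""
    return sum(p * u for p, u in outcomes)

# Hypothetical acts for a stylised flood-management decision.
acts = {
    "build_levee": [(0.9, 50), (0.1, -200)],  # likely gain, small chance of large loss
    "do_nothing":  [(0.7, 0), (0.3, -100)],
    "relocate":    [(1.0, -30)],              # certain moderate cost
}

eus = {act: expected_utility(outcomes) for act, outcomes in acts.items()}
best = max(eus, key=eus.get)
```

Note that the calculation presupposes exactly the precise probability and utility assignments whose availability the rest of this description calls into question.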

There are various formal tools that can help with some of these uncertainties. These include imprecise probabilities and preference aggregation functions. But these tools have their limits. In this subgroup we will investigate existing and new formal methods for quantifying meta-uncertainty as well as more qualitative approaches such as qualitative probability and utility assignments, precautionary reasoning, and maximin decision making.
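Two of the tools just mentioned, imprecise probabilities and maximin, can be combined in the so-called Gamma-maximin rule: represent belief by a set of probability distributions rather than a single one, then choose the act whose worst-case expected utility across that set is highest. The sketch below uses invented numbers; it is one illustration of the approach, not a complete treatment.

```python
# Imprecise probabilities: belief is a *set* of candidate distributions
# (a credal set), here reflecting disagreement about the right model.
# Gamma-maximin picks the act maximising the minimum expected utility
# over that set. All numbers are illustrative.

# Two possible outcomes; each act assigns a utility to each outcome.
utilities = {
    "act_A": [10, -5],
    "act_B": [4, 2],
}

# Three rival distributions over the two outcomes.
credal_set = [
    [0.8, 0.2],
    [0.6, 0.4],
    [0.4, 0.6],
]

def eu(us, ps):
    """Expected utility of one act under one distribution."""
    return sum(p * u for p, u in zip(ps, us))

def gamma_maximin(utilities, credal_set):
    # For each act, its worst expected utility over the credal set;
    # choose the act whose worst case is best.
    worst = {a: min(eu(us, ps) for ps in credal_set)
             for a, us in utilities.items()}
    return max(worst, key=worst.get), worst

choice, worst = gamma_maximin(utilities, credal_set)
```

Here the cautious act_B is chosen even though act_A has the higher expected utility under some individual distributions, which is exactly the kind of divergence from standard expected utility theory that makes these rules both attractive and contested.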

Questions to get you thinking:

  1. Is all uncertainty amenable to probabilistic treatment? If some uncertainty is not amenable to probabilistic treatment, what does the corresponding decision theory look like?
  2. How can we represent uncertainty about uncertainty (e.g. uncertainty about a probability distribution)?
  3. Is precautionary reasoning any help in decision making under massive uncertainty?
  4. When there is disagreement over the value of outcomes, how can these values be aggregated?
  5. How can we take these fairly complex mathematical and conceptual issues and make them accessible to the general public and politicians?
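On question 2, one familiar (though contested) representation is second-order probability: assign weights to rival candidate distributions. The weights collapse into a single mixture distribution, yet the per-model spread still records how much the models disagree. The models, weights, and utilities below are purely illustrative assumptions.

```python
# Uncertainty about a probability distribution, represented by
# second-order weights over candidate models. Illustrative numbers only.

models = {                       # rival distributions over two outcomes
    "optimistic":  [0.9, 0.1],
    "pessimistic": [0.5, 0.5],
}
weights = {"optimistic": 0.6, "pessimistic": 0.4}  # second-order beliefs
utilities = [100, -100]

def eu(ps, us):
    return sum(p * u for p, u in zip(ps, us))

# Expected utility under each model, and under the weighted mixture.
per_model = {m: eu(ps, utilities) for m, ps in models.items()}
mixture_eu = sum(weights[m] * per_model[m] for m in models)

# The spread across models is hidden by the single mixture figure.
spread = max(per_model.values()) - min(per_model.values())
```

The mixture gives one tidy number, but the spread shows why critics worry that collapsing second-order uncertainty into a single distribution hides the extent of our ignorance.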

Recommended Reading

Bradley S (2015) Imprecise Probabilities. In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2015 Edition)

Briggs R (2015) Normative Theories of Rational Choice: Expected Utility. In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2015 Edition)

Possingham HP, Wilson KA (2005) Biodiversity—Turning Up the Heat on Hotspots. Nature 436: 919–920.

Regan HM, Colyvan M, Burgman MA (2002) A Taxonomy and Treatment of Uncertainty for Ecology and Conservation Biology. Ecological Applications 12(2): 618–628.

Steele KS (2006) The Precautionary Principle: A New Approach to Public Decision-Making? Law, Probability and Risk 5 (1): 19–31.

Further (in depth) Reading

Burgman MA (2005) Risks and Decisions for Conservation and Environmental Management. Cambridge University Press, Cambridge.

Paris JB (1994) The Uncertain Reasoner’s Companion: A Mathematical Perspective. Cambridge University Press, Cambridge.

Shafer G (1976) A Mathematical Theory of Evidence. Princeton University Press. Princeton, NJ.

Walley P (1991) Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London.

© 2024 Australian Academy of Science