When are there too many experts?
F. Heuff (TU Delft - Electrical Engineering, Mathematics and Computer Science)
GF Nane – Mentor (TU Delft - Applied Probability)
V.N.S.R. Dwarka – Graduation committee member (TU Delft - Numerical Analysis)
Abstract
In some areas of science, such as risk analysis or public health, there is often not enough data to base decisions on. In such cases, we rely on experts: we assemble panels of experts and ask them to assess quantiles of the variables of interest. But how many experts do we actually need? And when are there too many experts? These are the main questions of my thesis. To answer them, I used the Classical Model (CM), which evaluates each expert based on two performance scores: the calibration score, which measures the expert's statistical accuracy, and the information score, which measures how precise the expert's uncertainty assessments are. These scores are used to assign weights when combining expert opinions, and several weighting approaches are examined and compared in this thesis. Using data from the National Institute for Public Health and the Environment (RIVM), this study analyzes how the number of experts in a panel affects the performance of the different Decision Makers (the aggregated distributions). Special attention is given to the role of experts who consistently underestimate outcomes, and to whether their uncertainty assessments affect both their individual performance and the performance of the aggregated distribution. This research contributes to understanding how to construct effective expert panels for crucial decisions.
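To make the two performance scores concrete, the sketch below shows one common way the Classical Model's calibration score, information score, and global weights can be computed, assuming each expert provides 5%, 50%, and 95% quantiles for a set of calibration questions with known realizations. This is a minimal illustration of the general method; the function names, the uniform background measure, and the cutoff parameter `alpha` are assumptions for the example and are not taken from the thesis or the RIVM study.

```python
# Minimal sketch of Cooke's Classical Model performance-based weighting,
# assuming three elicited quantiles (5%, 50%, 95%) per calibration question.
import numpy as np
from scipy.stats import chi2

QUANTILES = np.array([0.05, 0.50, 0.95])
BIN_PROBS = np.array([0.05, 0.45, 0.45, 0.05])  # expected mass per inter-quantile bin


def calibration_score(assessments, realizations):
    """Calibration: p-value of the likelihood-ratio statistic 2*N*KL(s || p),
    where s is the empirical distribution of the realizations over the bins."""
    n_items = len(realizations)
    counts = np.zeros(len(BIN_PROBS))
    for quants, x in zip(assessments, realizations):
        counts[np.searchsorted(quants, x)] += 1  # bin in which the realization falls
    s = counts / n_items
    mask = s > 0
    kl = np.sum(s[mask] * np.log(s[mask] / BIN_PROBS[mask]))
    return chi2.sf(2 * n_items * kl, df=len(BIN_PROBS) - 1)


def information_score(assessments, lo, hi):
    """Information: mean relative information of the expert's bins with respect
    to a uniform background measure on each question's intrinsic range [lo, hi]."""
    scores = []
    for quants, l, h in zip(assessments, lo, hi):
        edges = np.concatenate(([l], quants, [h]))
        background = np.diff(edges) / (h - l)  # uniform mass in each bin
        scores.append(np.sum(BIN_PROBS * np.log(BIN_PROBS / background)))
    return float(np.mean(scores))


def global_weights(experts, realizations, lo, hi, alpha=0.0):
    """Global CM weight: calibration * information, with experts below the
    calibration cutoff alpha receiving zero weight; weights are normalized."""
    weights = {}
    for name, assessments in experts.items():
        c = calibration_score(assessments, realizations)
        i = information_score(assessments, lo, hi)
        weights[name] = c * i if c >= alpha else 0.0
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()} if total > 0 else weights
```

Under this scheme, an expert who consistently underestimates outcomes places most realizations in the upper bins, which drives the calibration p-value toward zero and can leave that expert with little or no weight in the aggregated Decision Maker.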