In some areas of science, such as risk analysis or public health, we often lack sufficient data to base decisions on. In such cases, we rely on experts: we assemble expert panels and ask them to assess quantiles of the quantities of interest. But how many experts do we actually need? And when are there too many? These are the central questions of my thesis. To answer them, I used the Classical Model (CM), which evaluates each expert on two performance scores: the calibration score, which measures the expert's statistical accuracy, and the information score, which measures how precise the expert's uncertainty assessments are. These scores are used to assign weights when combining expert opinions, and several weighting approaches are examined and compared in this thesis. Using data from the National Institute for Public Health and the Environment (RIVM), this study analyzes how the number of experts in a panel affects the performance of the resulting Decision Makers. Special attention is given to experts who consistently underestimate outcomes, and to whether their uncertainty assessments affect both their individual performance and the performance of the aggregated distribution. This research contributes to understanding how to construct effective expert panels for crucial decisions.
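The weighting idea described above can be illustrated with a minimal sketch of the Classical Model's global-weights scheme: each expert's weight is taken proportional to the product of their calibration and information scores, set to zero below a calibration cutoff, and then normalized. The function name, cutoff parameter, and all numeric scores below are hypothetical, chosen only to show the mechanics, not results from this study.

```python
def cm_weights(calibration, information, alpha=0.0):
    """Normalized Classical Model weights for a panel of experts.

    calibration -- statistical-accuracy scores, one per expert
    information -- mean relative information scores, one per expert
    alpha       -- calibration cutoff; experts below it get weight 0
    """
    # Unnormalized weight: calibration times information,
    # zeroed out for experts failing the cutoff.
    raw = [c * i if c >= alpha else 0.0
           for c, i in zip(calibration, information)]
    total = sum(raw)
    if total == 0.0:
        raise ValueError("no expert passes the calibration cutoff")
    # Normalize so the weights sum to one; the Decision Maker is
    # then the weighted mixture of the experts' distributions.
    return [w / total for w in raw]

# Hypothetical scores for a three-expert panel:
calibration = [0.45, 0.05, 0.30]   # e.g. chi-square p-values
information = [1.2, 2.0, 0.8]      # mean relative information
print(cm_weights(calibration, information, alpha=0.10))
```

With these illustrative numbers, the second expert fails the cutoff and receives zero weight, so the Decision Maker combines only the distributions of the first and third experts.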