Predict+optimize for combinatorial optimization with uncertainty in the constraints

Abstract

This work investigates whether the predict+optimize framework can be applied to combinatorial optimization problems with a linear objective and uncertainty in the constraint parameters, such that it outperforms prediction-error-based training. To this end, a predict+optimize formulation of the 0-1 knapsack problem, an NP-hard combinatorial optimization problem, is used. This formulation was chosen because it has many real-world applications in which the constraint parameters are uncertain. Here, the challenge is to predict the weights of the knapsack items from feature data and then use these predictions to solve the knapsack problem. The key difficulty is that the item weights are uncertain and must therefore be estimated, while the quality of a solution is evaluated with respect to the true weights. This uncertainty in the weights can lead to solutions that are infeasible under the true weights. Standard predict+optimize techniques do not account for such infeasibility of candidate solutions, which makes them largely inapplicable to problems with uncertainty in the constraint parameters. To overcome this shortcoming, several correction methods for evaluating infeasible solutions during training are compared. Crucially, these correction methods can differ between training and evaluation. By penalizing infeasible solutions linearly in the weight of the overweight items during training, predict+optimize outperforms standard prediction-error-based techniques. This performance improves, up to a point, with larger penalization factors, leading to more stable and more optimal results on the validation set.
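The linear-penalty correction described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function name, the toy data, and the choice to penalize the excess weight beyond capacity (one plausible reading of "linear in the weight of the overweight items") are all assumptions for illustration.

```python
def penalized_value(solution, values, true_weights, capacity, penalty_factor):
    """Evaluate a 0-1 knapsack solution against the true weights.

    solution       -- 0/1 item decisions made under *predicted* weights
    values         -- item values (the linear objective is certain)
    true_weights   -- realized item weights (uncertain at decision time)
    capacity       -- knapsack capacity
    penalty_factor -- multiplier on the amount by which capacity is exceeded
    """
    total_value = sum(v for v, x in zip(values, solution) if x)
    total_weight = sum(w for w, x in zip(true_weights, solution) if x)
    overweight = max(0.0, total_weight - capacity)
    # Feasible solutions are unaffected; infeasible ones are penalized
    # linearly in the excess weight.
    return total_value - penalty_factor * overweight


# Toy usage: the predicted weights underestimated item 2, so the chosen
# solution exceeds the true capacity of 8 and is penalized.
values = [10, 7, 9]
true_weights = [4, 3, 6]
solution = [1, 0, 1]  # total true weight 10 > capacity 8
print(penalized_value(solution, values, true_weights, 8, 2.0))  # → 15.0
```

During training, a larger `penalty_factor` pushes the learned predictor toward weight estimates whose induced solutions remain feasible under the true weights; the abstract reports that this improves stability and solution quality up to a point.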