New Primal-dual Interior-point Methods Based on Kernel Functions

Abstract

Two important classes of polynomial-time interior-point methods (IPMs) are small- and large-update methods, respectively. The theoretical complexity bound for large-update methods is a factor $\sqrt{n}$ worse than the bound for small-update methods, where $n$ denotes the number of (linear) inequalities in the problem. In practice the situation is the opposite: implementations of large-update methods are much more efficient than those of small-update methods. This so-called irony of IPMs motivated the present work. Recently J. Peng, C. Roos, and T. Terlaky were able to design new IPMs with large updates whose complexity is only a factor $\log n$ worse than that of small-update methods. This means that the factor $\sqrt{n}$ was reduced to $\log n$, significantly narrowing the gap between the theoretical behavior of large- and small-update methods. They made use of so-called self-regular barrier (or proximity) functions; each such barrier function is determined by its (univariate) self-regular kernel function. In this thesis we introduce a new class of kernel functions that differs from the class of self-regular kernel functions. The class is defined by a few simple conditions on the kernel function concerning its growth and its barrier behavior. These properties enable us to derive many new and tight estimates that greatly simplify the analysis of IPMs based on these kernel functions. In Chapter 2 we consider ten specific (classes of) kernel functions belonging to the new class and, using the new estimates, present a complete complexity analysis for each of these functions. Some of these functions are self-regular and others are not. Iteration bounds for both large- and small-update methods are derived. It is shown that small-update methods based on the new kernel functions all have the same complexity as the classical primal-dual IPM, namely $O(\sqrt{n}\,\log\frac{n}{\epsilon})$.
For large-update methods the best bound obtained is $O(\sqrt{n}\,(\log n)\,\log\frac{n}{\epsilon})$, which is up to now the best known bound for such methods. The results of Chapter 2 for linear optimization (LO) are extended to semidefinite optimization in Chapter 3, where it is shown that at some point the analysis boils down to exactly the same analysis as in the LO case. In Chapter 4 some numerical results are presented. These results show that one of the new kernel functions, with a finite barrier term and with the best possible theoretical complexity, performs surprisingly well in our experiments.
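To make the role of a kernel function concrete, the following sketch shows the classical logarithmic kernel $\psi(t) = \frac{t^2-1}{2} - \log t$ and the barrier (proximity) function it induces, $\Psi(v) = \sum_i \psi(v_i)$, on the vector of scaled variables $v$. This is an illustrative example of the general construction only, not one of the ten kernel functions analyzed in the thesis; the function names `psi` and `barrier` are our own.

```python
import math

def psi(t: float) -> float:
    """Classical logarithmic kernel function, defined for t > 0.

    It attains its minimum value 0 at t = 1, grows like t^2/2 as
    t -> infinity (growth behavior) and tends to +infinity as
    t -> 0 (barrier behavior).
    """
    return (t * t - 1.0) / 2.0 - math.log(t)

def barrier(v: list[float]) -> float:
    """Barrier (proximity) function induced by the kernel psi.

    Psi(v) = sum_i psi(v_i); it vanishes exactly on the central
    path, where every scaled variable v_i equals 1.
    """
    return sum(psi(t) for t in v)

# On the central path all v_i = 1, so the proximity measure is zero.
print(barrier([1.0, 1.0, 1.0]))  # -> 0.0
```

In a primal-dual IPM, the value $\Psi(v)$ measures how far the current iterate is from the central path, and the analysis of the method reduces to properties of the univariate function $\psi$; the conditions on growth and barrier behavior mentioned above are stated directly in terms of $\psi$ and its derivatives.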