Effects of Fitness on Generative Graph Models with Preferential Attachment

Abstract

Preferential attachment models offer an explanation for why power laws are so common in real-world data. In these models, we start from an initial network and add nodes one at a time, connecting each new node to m existing nodes. If the probability of attaching to a vertex is proportional to its degree, we obtain the Barabási–Albert (BA) model, whose degree distribution has a tail that decays like a power law. The BA model can be generalized by letting the attachment probability depend not only on the degree of a vertex but also on an additional per-vertex parameter called fitness, assigned at random from a fitness distribution. If the attachment probability is proportional to the product of degree and fitness, we obtain the Bianconi–Barabási (BB) model, which exhibits three qualitative behaviors. For flat fitness distributions, the model behaves like the BA model: the old get richer. When fitness values differ, the expected degree of a vertex grows sublinearly, with a larger growth exponent for larger fitness. And if the tail of the fitness distribution is light enough, a larger than expected proportion of half-edges connects to vertices with very high fitness values, a phenomenon known as condensation.

In this thesis we compare the qualitative behaviors of various preferential attachment models, chiefly models with vertex fitness, and we study how the fitness distribution affects qualitative behaviors such as condensation. We first review known results about the degree distributions of these graphs; the degree distribution is an accessible and interpretable property related to the topology of the network. We do this for the model without vertex fitness, for models with multiplicative fitness, and for models with additive fitness, and we delineate the regimes of their qualitative behavior. For these results, we clarify the behavior and highlight the principles employed in the proofs. In doing so, we collect the commonalities among these proofs and provide intuition about their underlying principles by emphasizing the recurring themes.
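To make the attachment rules above concrete, the following is a minimal Python sketch of one growth step of the BB model under simplifying assumptions: fitness values are drawn uniformly from [0, 1], multi-edges are allowed, and the names (bb_step, degrees, fitness) are illustrative rather than taken from the thesis. Setting every fitness value to 1 recovers the BA attachment rule.

```python
import random

def bb_step(degrees, fitness, m, fitness_dist=random.random):
    """Add one vertex to a Bianconi-Barabasi graph.

    degrees: current degree of each existing vertex.
    fitness: fitness of each existing vertex (same length).
    m: number of edges the new vertex creates.

    The probability of attaching to vertex v is proportional to
    fitness[v] * degrees[v]; with all fitness values equal to 1,
    this reduces to the Barabasi-Albert rule.
    """
    weights = [f * d for f, d in zip(fitness, degrees)]
    # Sampling with replacement, so repeated targets (multi-edges)
    # can occur; this keeps the sketch simple.
    targets = random.choices(range(len(degrees)), weights=weights, k=m)
    for t in targets:
        degrees[t] += 1              # existing endpoints gain degree
    degrees.append(m)                # the new vertex starts with degree m
    fitness.append(fitness_dist())   # draw its fitness from the distribution
    return targets

# Usage: grow from a small seed graph (a triangle) with uniform fitness.
degrees = [2, 2, 2]
fitness = [random.random() for _ in degrees]
for _ in range(1000):
    bb_step(degrees, fitness, m=2)
```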