Community structure is observed in many real-world networks, such as (online) social networks, where the friends of a given person are often also friends of each other. Newman's modularity has become an important quantitative metric for the detection of communities and clusters in networks. We present new expressions and bounds for the modularity. These expressions reveal conditions for, and properties of, the maximum modularity of a network in both the topological and the spectral domain. Finding the maximum modularity of a given graph has been proven to be NP-complete; therefore, several heuristic algorithms have been proposed in the past. We investigate the problem of finding the maximum modularity over classes of graphs that have the same number of links and/or nodes, and determine analytical upper bounds. Moreover, from the set of all connected graphs with a fixed number of links and/or nodes, we construct graphs that attain the maximum modularity, named maximum modular graphs. The maximum modularity is shown to depend on the residue obtained when the number of links is divided by the number of partitions. The modularity depends on the chosen partitioning of the network into communities, which makes finding the specific partition that leads to the maximum modularity a hard problem. In this thesis, we prove that deciding whether a graph with a given number of links, number of communities, and modularity exists is NP-complete, and we subsequently propose a heuristic algorithm for generating random graphs with a given modularity and number of links and with different topological properties. The generator can be used in the broad field of modeling and analyzing clustered social or organizational networks. We also propose a model that randomly generates simple graphs which are line graphs of other simple graphs, employing an iterative merging procedure.
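As a concrete reference point, Newman's modularity of a given partition can be evaluated directly from its definition, Q = Σ_c (L_c/L − (d_c/2L)²), where L is the total number of links, L_c the number of links inside community c, and d_c the total degree of community c. The sketch below (plain Python; the two-triangle example graph is our own illustration, not taken from the thesis) computes Q for a graph of two triangles joined by a single bridge link:

```python
# Newman's modularity: Q = sum over communities c of (L_c/L - (d_c/(2L))^2),
# where L is the total number of links, L_c the links inside community c,
# and d_c the sum of the degrees of the nodes in community c.

def modularity(edges, partition):
    L = len(edges)
    community = {node: cid for cid, nodes in enumerate(partition) for node in nodes}
    intra = [0] * len(partition)       # L_c: links with both endpoints in c
    degree_sum = [0] * len(partition)  # d_c: total degree of community c
    for u, v in edges:
        degree_sum[community[u]] += 1
        degree_sum[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(lc / L - (dc / (2 * L)) ** 2 for lc, dc in zip(intra, degree_sum))

# Illustration (not from the thesis): two triangles {0,1,2} and {3,4,5}
# connected by the single bridge link (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
Q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
print(round(Q, 4))  # the natural two-community split gives Q = 5/14 ≈ 0.3571
```

Note that the trivial partition placing all nodes in one community always yields Q = 0, which is why the partition achieving the maximum is the object of interest.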
These graphs possess interesting properties; for example, if the cliques are all of the same size, the assortativities of the line graphs in each step are close to 0, while the assortativities of the corresponding root graphs increase linearly from -1 to 0 with the steps of the nodal merging process. Due to their importance to society, communication systems, represented as complex networks, should be built and operated to withstand failures. We define robustness as the maintenance of functionality under node or link removal, where functionality is measured by several graph metrics. We study the robustness of both static and time-varying networks under node removal, considering random node failures as well as targeted node attacks based on network centrality measures. In static networks, targeted and random failures have been studied in the literature; however, existing approaches tend to characterize random failure only by its average-case behavior, giving no indication of how badly network performance can degrade purely by chance. Instead of considering average network performance under random failure, we compute probability density functions of network performance as a function of the fraction of removed nodes. We show that many centrality measures produce similar targeted attacks in both static and time-varying networks, and that a combination of degree centrality and eigenvector centrality may be enough to evaluate the worst-case behavior of static networks: even small subsets of highly connected nodes act as a bottleneck in the static or temporal information flow, becoming critical weak points of the entire system. We also study the robustness envelope and targeted attack responses of static networks that are rewired to have high and low degree assortativities, discovering that moderate increases in assortativity confer more robustness against targeted attacks, whilst moderate decreases confer more robustness against random uniform attacks.
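The shift from average-case to distributional analysis can be sketched as follows: repeated random-removal trials yield, for each removal fraction, an empirical distribution of a performance metric rather than a single mean value. In this illustrative sketch (the ring graph, the metric choice of relative largest-component size, and the trial count are our own assumptions, not the thesis setup), the spread between the best and worst samples is exactly the information an average would hide:

```python
import random

def largest_cc_size(nodes, adj):
    """Size of the largest connected component among the surviving `nodes`."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        stack, comp = [s], 0
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def robustness_samples(adj, fraction, trials, rng):
    """Empirical samples of the relative largest-component size after
    removing `fraction` of the nodes uniformly at random."""
    all_nodes = list(adj)
    n = len(all_nodes)
    k = int(round(fraction * n))
    samples = []
    for _ in range(trials):
        survivors = set(rng.sample(all_nodes, n - k))
        samples.append(largest_cc_size(survivors, adj) / n)
    return samples  # a distribution, not just a mean

# Illustration (arbitrary toy topology): a ring of 20 nodes, 20% removal.
adj = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
rng = random.Random(42)
samples = robustness_samples(adj, fraction=0.2, trials=200, rng=rng)
print(min(samples), max(samples))  # the spread the average alone would hide
```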
In time-varying randomly generated networks, where all the nodes have similar properties, we show that random errors and intelligent attacks exhibit similar behavior. However, cost considerations make network providers less inclined to take robustness measures against failures that are unlikely to manifest, such as several failures coinciding simultaneously. Considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region - a part of the network that can be enclosed by a given elementary figure of predetermined size - whose destruction would lead to the highest network disruption. We determine that the problem is polynomially solvable and propose appropriate algorithms. In addition, we consider region-aware network augmentation to decrease the impact of a regional failure. We subsequently address the region-disjoint paths problem, which asks for two paths with minimum total weight between a source and a destination that cannot both be cut by a single regional failure of a given diameter (unless that failure includes the source or the destination). We prove that deciding whether region-disjoint paths exist is NP-hard and propose a heuristic region-disjoint paths algorithm. Defining an optimal protection strategy against viruses, spam propagation, or any other kind of contamination process is an important consideration in the design of new networks and architectures. Our first approach considers network adaptation, the interplay between the disease dynamics on a network and the dynamics of the topology itself. A continuous-time adaptive Susceptible-Infectious-Susceptible (ASIS) model is introduced to investigate this interaction, in which a susceptible node avoids infections by breaking its links to infected neighbors, while it enhances its connections with other susceptible nodes by creating links to them.
When the initial topology of the network is a complete graph, an exact solution for the average fraction of infected nodes in the metastable state is derived without resorting to any mean-field approximation. A linear scaling law of the epidemic threshold as a function of the effective link-breaking rate is found. The metastable-state topology shows high connectivity and low modularity in two cases: (i) a ``strongly adaptive'' region with a very high effective spreading rate, and (ii) a ``weakly adaptive'' region with a very low effective spreading rate. These two regions are separated, in a contour-line-like way, from the remaining half-open elliptical-like regions of low connectivity and high modularity. Our results indicate that the adaptation of the topology in response to disease dynamics suppresses the infection, while it drives the network towards a topology that exhibits assortative mixing, modular structure, and a binomial-like degree distribution. In the second approach, we consider decentralized optimal protection strategies when a virus propagates over a network through an SIS epidemic process. By assuming that each node in the network can decide to protect itself from infection at a constant cost, we model our system using a game-theoretic framework. We find pure and mixed equilibria and the Price of Anarchy (PoA) in several network topologies, and propose algorithms to compute a pure equilibrium.
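The decentralized protection setting can be illustrated with a heavily simplified, discrete protection game in the style of inoculation games (this simplification, including the cost function below, is our own assumption and not the exact SIS game of the thesis): each node either pays a fixed protection cost C or remains unprotected, and an unprotected node's expected cost is taken as the relative size of its unprotected component, i.e. the chance that a uniformly seeded virus reaches it through unprotected nodes. Best-response dynamics then converges to a pure equilibrium on this instance:

```python
# Simplified protection game (an illustrative assumption, not the thesis model):
# strategy s[i] = 1 means node i pays a fixed protection cost C; an
# unprotected node's expected cost is |its unprotected component| / n.

def component_size(i, s, adj):
    """Size of the unprotected component containing node i (requires s[i] == 0)."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if not s[v] and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def cost(i, s, adj, C, n):
    return C if s[i] else component_size(i, s, adj) / n

def best_response_dynamics(adj, C):
    n = len(adj)
    s = {i: 0 for i in adj}  # start with nobody protected
    changed = True
    while changed:
        changed = False
        for i in adj:
            trial = dict(s)
            trial[i] = 1 - s[i]  # unilateral deviation: flip i's strategy
            if cost(i, trial, adj, C, n) < cost(i, s, adj, C, n) - 1e-12:
                s, changed = trial, True
    return s

# Illustration: a star with hub 0 and six leaves, protection cost C = 0.5.
adj = {0: [1, 2, 3, 4, 5, 6], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0], 6: [0]}
s = best_response_dynamics(adj, C=0.5)
# Only the hub protects: each leaf then risks just 1/7 < 0.5, and no node
# can lower its own cost by deviating - a pure Nash equilibrium.
```

The hub-only outcome also hints at why centrality matters for protection: the equilibrium concentrates the protection cost on the node whose infection would connect everyone else.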