1 

Assessing robustness and quantifying uncertainty in level set segmentation of the bowel lumen
This thesis explores methods to assess the robustness of an algorithm used to determine the severity of Crohn's disease, which segments the bowel lumen by minimizing a total variation based energy functional similar to the Local Gaussian Distribution Fitting energy functional. By comparing the local solution of this minimization problem, obtained using gradient descent, with the global solution of a convex formulation of the optimization problem, we conclude that the sought solution is not the global one, which emphasizes the strong dependence of the algorithm on its initial condition. The dependence of the algorithm on its input is also tested by slightly varying the input parameters or by artificially removing input data and measuring the influence on the output solution.
Applications of the methods discussed in this thesis to test the robustness of this algorithm include a justification for measuring the input data less accurately, which is both cheaper and faster; the necessity of a centerline to serve as an initial condition; and a technique to determine the most important parts along the centerline, that is, the parts that have the greatest influence on the output solution.

[PDF]
[Abstract]

2 

Increasing robustness of Software-Defined Networks
In this thesis, an overview of performed research is given to investigate possible enhancements and solutions to enable SDN as a future network paradigm. Currently, besides robustness, problems exist with scalability and security in the application of SDN to current network infrastructures. On robustness, current research does not provide the necessary solutions to detect failures and activate protection schemes to fail over to preconfigured backup paths within the set failover requirements. We attempt to reduce failover times on Ethernet IP networks with the application of active link monitoring and advanced capabilities of the OpenFlow protocol. To enable the protection scheme, a routing algorithm is required that provides link-based protection. We propose a protection algorithm that guarantees protection, minimizes path cost on the primary path and discovers protection paths for intermediate switches on the primary path, with the main purpose to minimize failover times, optimize network traffic and reduce the need for crankback routing. In short, we provide a complete solution to increase the robustness of Ethernet IP networks to the level of carrier-grade and industrial networks with the application of a link-based protection scheme and an optimal routing algorithm, combined into a Software-Defined Networking solution.

[PDF]
[Abstract]

3 

Stability and robustness analysis of acyclic timetables
To satisfy the growing passenger transportation demands and improve the service quality in railway systems, a more stable and robust timetable needs to be designed while considering highly utilized capacity. Acyclic timetables are extensively applied in large railway networks. In order to assess the quality of a timetable, the analytical timetable stability analysis software PETER (Performance Evaluation of Timed Events in Railways) is used to analyse timetable stability and robustness in terms of delay impact, delay sensitivity and delay propagation. The method has been applied to the Yangtze River Delta of the Chinese railway network.

[PDF]
[Abstract]

4 

Robustness Envelopes of Networks
We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case behavior, giving no idea of how badly network performance can degrade purely by chance. Instead of considering average network performance under random failure, we compute approximate network performance probability density functions as functions of the fraction of nodes removed. We find that targeted attacks based on centrality measures give a good indication of the worst-case behavior of a network. We show that many centrality measures produce similar targeted attacks and that a combination of degree centrality and eigenvector centrality may be enough to evaluate the worst-case behavior of networks. Finally, we study the robustness envelope and targeted attack responses of networks that are rewired to have high and low degree assortativities, discovering that moderate assortativity increases confer more robustness against targeted attacks whilst moderate decreases confer more robustness against random uniform attacks.
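The envelope idea described above can be sketched in a few lines: for each removal fraction, sample many random node-failure realizations and record the spread of a performance measure, not just its mean. The following is a minimal illustration, not the authors' code; the function names and the choice of largest-connected-component fraction as the performance measure are assumptions.

```python
import random
from collections import deque

def largest_cc_fraction(adj, removed):
    """Fraction of the original nodes in the largest connected component
    after deleting the node set `removed` (BFS over surviving nodes)."""
    seen, best = set(), 0
    for s in adj:
        if s in removed or s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best / len(adj)

def robustness_envelope(adj, fractions, trials=200, seed=0):
    """For each removal fraction, sample random node failures and return
    (min, mean, max) of the performance measure: an empirical envelope
    showing how badly performance can degrade purely by chance."""
    rng = random.Random(seed)
    nodes = list(adj)
    envelope = {}
    for f in fractions:
        k = int(f * len(nodes))
        samples = [largest_cc_fraction(adj, set(rng.sample(nodes, k)))
                   for _ in range(trials)]
        envelope[f] = (min(samples), sum(samples) / trials, max(samples))
    return envelope
```

A targeted attack would replace the random sample by the k nodes with the highest centrality; comparing that curve with the envelope's lower boundary is the comparison the abstract describes.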

[PDF]
[Abstract]

5 

Robustness of networks
Our society depends more strongly than ever on large networks such as transportation networks, the Internet and power grids. Engineers are confronted with fundamental questions such as “how to evaluate the robustness of networks for a given service?” and “how to design a robust network?”, because networks always affect the functioning of a service. Robustness is an important issue for many complex networks, on which various dynamic processes or services take place. In this work, we define robustness as follows: a network is more robust if the service on the network performs better, where the performance of the service is assessed when the network is either (a) in a conventional state or (b) under perturbations, e.g. failures or virus spreading. In this thesis, we survey a particular line of network robustness research within our general framework: robustness quantification, optimization and the interplay between service and network.
Significant progress has been made in understanding the relationship between the structural properties of networks and the performance of the dynamics or services taking place on these networks. We assume that network robustness can be quantified by a topological measure of the network. A brief overview of the topological measures is presented. Each measure may represent the robustness of a network with respect to a certain performance aspect of a service. We focus on the measure known as algebraic connectivity. Evidence collected from the literature shows that the algebraic connectivity characterizes network robustness with respect to synchronization of dynamic processes at nodes, random walks on graphs and the connectivity of a network. Moreover, we illustrate that, for a given diameter, graphs with large algebraic connectivity tend to be dense in the core and sparse at the border. Such structures distribute traffic homogeneously and are thus robust in terms of traffic engineering.
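The metric discussed above is the second-smallest eigenvalue of the graph Laplacian L = D − A. A minimal sketch of how it is computed (the adjacency-list representation and the function name are our own, not from the thesis):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the Laplacian L = D - A of an
    undirected graph given as an adjacency-list dict. It is positive
    iff the graph is connected; larger values indicate a
    better-connected, more robust topology."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    L = np.zeros((n, n))
    for v, nbrs in adj.items():
        L[idx[v], idx[v]] = len(nbrs)      # degree on the diagonal
        for w in nbrs:
            L[idx[v], idx[w]] = -1.0       # -1 for each edge
    eig = np.linalg.eigvalsh(L)            # eigenvalues in ascending order
    return eig[1]                          # eig[0] is always ~0
```

For example, the complete graph on N nodes attains the maximal value N, consistent with the design discussion that follows, while any disconnected graph yields 0.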
How do we design a robust network with respect to the metric algebraic connectivity? First, the complete graph has the maximal algebraic connectivity, while its high link density makes it impractical to use due to the cost of constructing links. Constraints on other network features are usually set up to incorporate realistic requirements. For example, a constraint on the diameter may guarantee certain end-to-end quality of service levels such as the delay. We propose a class of clique chain structures which optimize the algebraic connectivity and many other robust features among all graphs with diameter D and size N. The optimal graph within the class can be determined either analytically or numerically. Second, complete replacement of an existing infrastructure is expensive. Thus, we design strategies for robustness optimization using minor topological modifications. These strategies are evaluated in various classes of graphs. The robustness quantification, or equivalently, the association of the performance of a service with a topological measure, may be implicit. In this case, we explore the interplay between topology and service in determining the overall performance.
Many services on communication and transportation networks are based on shortest path routing. The weight of a link, such as delay or bandwidth, is generally a metric optimized via shortest path routing. Thus, link weight tuning, a mechanism to control traffic, is also considered part of the service. The interplay between service (shortest path routing and link weight tuning) and topology is investigated for the following performance aspects: (a) the structure of the transport overlay network, which is the union of shortest paths between all node pairs, and (b) the traffic distribution in the overlay network. Important new findings are (i) the universal phase transition in overlay structures as we tune the link weight structure over different classes of networks and (ii) the power law traffic distribution in the overlay networks when link weights vary strongly in various classes of networks.
Furthermore, we consider the service that measures a network topology as the union of shortest paths among a set of test boxes (nodes). The measured topology is a subgraph of the overlay network, which is again a subgraph of the actual network. The performance in terms of the sampling bias of measuring a network topology is investigated. Our work contributes substantially to a better understanding of the effect of the service (test-box selection) and the actual network structure on the performance with respect to sampling bias. Our investigations on the interplay between service and network again reveal the association between the performance of a service and a certain topological feature, and thus contribute to the quantification of network robustness. The multidisciplinary nature of this research lies not only in the presence of robustness issues in many complex networks, but also in the fact that advances in other disciplines such as graph theory, combinatorics, linear algebra and statistical physics are widely applied throughout the thesis to study optimization problems and the performance of large networks.

[PDF]
[Abstract]

6 

Measuring Robustness of Complex Networks
Especially over the last ten years, societies have come to depend more strongly than ever on networked communication systems. This dependency has grown to the point where our well-being depends on these networked critical infrastructures. Improving our understanding of such complex networks and their robustness is at the cutting edge of graph theory, of which this thesis is a part.

[PDF]
[Abstract]

7 

Robust Solutions for the Resource-Constrained Project Scheduling Problem: Understanding and Improving Robustness in Partial Order Schedules produced by the Chaining algorithm
Robustness is essential for schedules that are executed under uncertain conditions. In this thesis we research robustness in Partial Order Schedules, which represent sets of solutions for instances of the Resource-Constrained Project Scheduling Problem. They can be generated using a greedy procedure called Chaining, which can easily be adapted to use various heuristics. We use an empirical method to gain understanding of robustness and of how the chaining procedure contributes to it.
Based on the findings of an exploratory study we develop three models, each capturing aspects of robustness on a different level. The first model describes how a single activity is affected by various disturbances. The second model predicts how structural properties of Partial Order Schedules can reduce the effect of these disturbances. The third model describes how heuristics for the chaining procedure can influence these properties.
Using experimental evaluation, we found that the models are not complete. Experimental results conformed to the expectations set by the third model, but not to those of the second model. We therefore suspect that the second model is too simplistic for accurate predictions, but since it does match earlier observations we believe it is a good starting point for further understanding.

[PDF]
[Abstract]

8 

Facility Location Models in Emergency Medical Service: Robustness and Approximations
In emergency medical services (EMS), the use of optimisation models and Operations Research techniques is becoming more common. EMS providers incorporate facility location models and simulation software packages into decision support tools, allowing the extensive evaluation of 'what-if' scenarios. We give a literature survey of facility location models applied to EMS and analyse the properties of one EMS model in particular, namely the Maximal Covering Location Problem (MCLP).
We analyse the sensitivity of the MCLP to changes in the parameters and design approaches to construct insensitive solutions. Furthermore, we prove performance guarantees for two heuristic solution methods for the MCLP: the Greedy Search and the Swap Local Search. All solution methods are numerically evaluated using generated instances and realistic instances based on The Netherlands.
Our main research contributions are as follows. First, we apply Robust Optimisation to EMS optimisation models. We derive and analyse a Robust Counterpart formulation for a general linear constraint under the assumption of a certain polytopal uncertainty structure. Second, we present a constructive proof of the tight performance guarantee for the Swap Local Search. The proof explicitly derives the family of worst-case MCLP instances, which have a certain symmetry. Finally, we perform a thorough computational study for the described methods.
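The Greedy Search mentioned above is, in spirit, the classical greedy heuristic for maximal coverage: repeatedly open the site that covers the most still-uncovered demand. The sketch below is a hedged illustration under an assumed set-based data layout, not the thesis' implementation; all names are our own.

```python
def greedy_mclp(cover, demand, p):
    """Greedy heuristic for the Maximal Covering Location Problem.
    `cover[s]` is the set of demand points that a facility at site s
    would cover, `demand[i]` the weight of demand point i, and `p` the
    number of facilities to open. Returns the chosen sites and the
    total covered demand."""
    open_sites, covered = [], set()
    for _ in range(p):
        best, gain = None, 0
        for s, pts in cover.items():
            if s in open_sites:
                continue
            g = sum(demand[i] for i in pts - covered)  # marginal gain
            if g > gain:
                best, gain = s, g
        if best is None:        # nothing left to gain
            break
        open_sites.append(best)
        covered |= cover[best]
    return open_sites, sum(demand[i] for i in covered)
```

For this greedy scheme on a coverage objective, the standard submodularity argument gives a 1 − 1/e approximation guarantee; the Swap Local Search analysed in the thesis is a different heuristic with its own tight guarantee.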
This research is performed to conclude the Master in Applied Mathematics at the Delft University of Technology in The Netherlands. It is a cooperative project with CWI in Amsterdam, as part of the REPRO research project on ambulance logistics. CWI is the national research institute for mathematics and computer science in The Netherlands. For more information on the REPRO project, see repro.project.cwi.nl.

[PDF]
[Abstract]

9 

Robust public transport from a passenger perspective: A study to evaluate and improve the robustness of multilevel public transport networks
Disturbances in public transport are an important issue for passengers, public transport operators and infrastructure managers. After the occurrence of large disturbances, there is often a strong call from passengers and society to make the public transport network less vulnerable – and therefore more robust – against these types of events.
In this study, a methodology is developed which enables the evaluation of the current robustness of multilevel public transport networks, as well as the evaluation of proposed robustness measures. The case study shows that it is worthwhile to consider another network level as a backup in case a certain network level is blocked. The result of the case study indicates that, from a societal point of view, there is still room to improve the robustness of multilevel public transport networks.

[PDF]
[Abstract]

10 

Design of an Ecological Flow-based Interface for 4D Trajectory Management in Air Traffic Control
The concept of Trajectory Based Operations as proposed by the SESAR and NextGen projects seeks to enhance efficiency and increase airspace capacity by the explicit use of time as the control variable. Nevertheless, the Air Traffic Control system has too many unforeseen events to make it entirely automatic, and adequate decision-support tools will be required to support the human Air Traffic Controller with control operations. In previous research, following the Ecological Interface Design paradigm, one such constraint-based decision-support tool was developed. A human-in-the-loop experiment showed that the system’s robustness could be affected if control choices were not made adequately. The goal of this study has been to integrate the observed attributes of expert control into a graphical representation in order to enable novices to perform better rule-based shortcuts that ensure the long-term stability of the system. For this purpose, the elements taken into consideration by expert operators during decision-making processes were analyzed. As a result, three different types of abstractions have been identified which influence the operator’s cognitive activities. A metric for the evaluation of robustness has been adapted and its graphical representation integrated into a prototype in order to promote control behavior changes in novice controllers. A future experiment is foreseen to validate the design presented here and to evaluate its effectiveness.

file embargo until: 2020-06-08
[Abstract]

11 

The Influence of Spillback Modelling when Assessing Consequences of Blockings in a Road Network
Robustness of a network is a main objective for road network managers these days, and has therefore become an important study area for transportation scientists. This article discusses one specific aspect of assessing road network robustness: the consequences of the closure of a link. The resulting spillback effects have been examined in a dedicated traffic simulator in which the representation of spillback can be switched on and off. The impacts are studied in a simulation study of a road network of regional size in which links are sequentially blocked. Two scenarios for route choice are considered: a fixed route choice based on a daily congestion pattern and a route choice adapted to the actual congestion caused by the closure. The study has also shown the influence of information which makes travellers adapt their routes. Road network robustness and characteristics of vulnerable links are evaluated for both the spillback and non-spillback cases. It is found that valid spillback modelling is a prerequisite for correctly analysing the robustness of the network as a whole, as well as for correctly indicating the locations in the network where a closure causes the largest delays. Furthermore, without simulating spillback, it is not possible to correctly identify the links most vulnerable for network performance.

[PDF]
[Abstract]

12 

A New Metric for Robustness with Respect to Virus Spread (Work in Progress)
The robustness of a network depends on the type of attack we are considering. In this paper we focus on the spread of viruses on networks. It is common practice to use the epidemic threshold as a measure for robustness. Because the epidemic threshold is inversely proportional to the largest eigenvalue of the adjacency matrix, it seems easy to compare the robustness of two networks. We show in this paper that the comparison of the robustness with respect to virus spread for two networks actually depends on the value of the effective spreading rate τ. For this reason we propose a new metric, the viral conductance, which takes into account the complete range of values that τ can attain. In this paper we determine the viral conductance of regular graphs, complete bipartite graphs and a number of realistic networks.
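Since the epidemic threshold is inversely proportional to the largest adjacency eigenvalue λ1, it can be estimated with plain power iteration. A minimal sketch under our own representation and naming (for regular graphs the all-ones starting vector is already the dominant eigenvector, so the estimate is exact there):

```python
def epidemic_threshold(adj, iters=500):
    """Estimate the SIS epidemic threshold tau_c = 1 / lambda_1, with
    lambda_1 the spectral radius of the adjacency matrix, via power
    iteration on an adjacency-list dict (max-norm rescaling)."""
    x = {v: 1.0 for v in adj}
    lam = 1.0
    for _ in range(iters):
        y = {v: sum(x[w] for w in adj[v]) for v in adj}  # y = A x
        lam = max(abs(val) for val in y.values()) or 1.0
        x = {v: val / lam for v, val in y.items()}       # renormalize
    return 1.0 / lam
```

The point of the abstract is precisely that this single number is not the whole story: two networks can have the same threshold yet behave differently over the range of spreading rates τ, which the viral conductance integrates over.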

[PDF]
[Abstract]

13 

ELASTICITY: Topological Characterization of Robustness in Complex Networks

[PDF]

14 

Robustness Analysis of Road Networks: a Framework with Combined DTA Models
Network robustness is the ability of a road network to function properly when facing unpredictable and exceptional incidents. A systematic framework with combined dynamic traffic assignment (DTA) models is designed for the analysis of road network robustness. With this framework, network performance, considering travelers' (route) choice behavior, can be properly described under both normal traffic conditions and incident conditions, so that the robustness of road networks against incidents can be analyzed comprehensively. Based on such knowledge, road network robustness can be improved from the planning stage or with suitable measures.

[PDF]
[Abstract]

15 

The N-intertwined SIS epidemic network model
Serious epidemics, both in cyberspace and in the real world, are expected to occur with high probability, which justifies investigations of virus spread models in (contact) networks. The N-intertwined virus spread model of the SIS type is introduced as a promising and analytically tractable model, of which the steady-state behavior is fairly completely determined. Compared to the exact SIS Markov model, the N-intertwined model makes only one approximation, of a mean-field kind, that results in upper-bounding the exact model for finite network size N, and it improves in accuracy with N. We review many properties theoretically, thereby showing, besides the flexibility to extend the model to an entirely heterogeneous setting, that much insight can be gained that is hidden in the exact Markov model.
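In the steady state, the N-intertwined (mean-field) infection probabilities satisfy v_i = τ·s_i / (1 + τ·s_i) with s_i = Σ_j a_ij v_j and effective spreading rate τ. A minimal fixed-point sketch of this relation (the iteration scheme and names are illustrative, not from the paper):

```python
def nimfa_steady_state(adj, tau, iters=1000):
    """Fixed-point iteration for the steady-state infection probabilities
    v_i of the N-intertwined mean-field SIS model:
        v_i = tau * s_i / (1 + tau * s_i),   s_i = sum over neighbors j of v_j
    with effective spreading rate tau. Returns a dict node -> v_i."""
    v = {i: 0.5 for i in adj}              # start half-infected
    for _ in range(iters):
        s = {i: sum(v[j] for j in adj[i]) for i in adj}
        v = {i: tau * s[i] / (1.0 + tau * s[i]) for i in adj}
    return v
```

On the complete graph K_N this reproduces the known closed form v = 1 − 1/(τ(N−1)) above the threshold τ = 1/(N−1), and the iteration collapses to v = 0 below it.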

[PDF]
[Abstract]

16 

Modular Structures, Robustness and Protection of Complex Networks: Theory, Complexity and Algorithms
Community structure is observed in many real-world networks, such as (online) social networks, where groups of friends of a certain person are often also friends of each other. Newman's modularity has been explored as an important quantitative metric for community and cluster detection in networks.
We present new expressions and bounds for the modularity. These expressions reveal conditions for or properties of the maximum modularity of a network in both the topological and spectral domains. Finding the maximum modularity of a given graph has been proven to be NP-complete and therefore, several heuristic algorithms have been proposed in the past. We investigate the problem of finding the maximum modularity of classes of graphs that have the same number of links and/or nodes and determine analytical upper bounds. Moreover, from the set of all connected graphs with a fixed number of links and/or number of nodes, we construct graphs that can attain maximum modularity, named maximum modular graphs. The maximum modularity is shown to depend on the residue obtained when the number of links is divided by the number of partitions. The modularity depends on the chosen partitioning of the network into communities, which makes finding the specific partition that leads to the maximum modularity a hard problem. In this thesis, we prove that deciding whether a graph with a given number of links, number of communities, and modularity exists is NP-complete and subsequently propose a heuristic algorithm for generating random graphs with a given modularity and number of links with different topological properties. The generator can be used in the broad field of modeling and analyzing clustered social or organizational networks.
We also propose a model which can randomly generate simple graphs which are line graphs of other simple graphs, employing an iterative merging procedure. These graphs possess interesting properties; for example, if the cliques are all of the same size, the assortativities of the line graphs in each step are close to 0, and the assortativities of the corresponding root graphs increase linearly from −1 to 0 with the steps of the nodal merging process.
Due to their importance to society, communication systems, represented as complex networks, should be built and operated to withstand failures. We define robustness as the maintenance of functionality under node or link removal. In this context, the functionality is measured by several graph metrics.
We study the robustness of both static and time-varying networks under node removal, considering random node failure as well as targeted node attacks based on network centrality measures. In static networks, targeted and random failures have been studied in the literature; however, existing approaches tend to study random failure in terms of average-case behavior, giving no idea of how badly network performance can degrade purely by chance. Instead of considering average network performance under random failure, we compute the network performance probability density functions as functions of the fraction of nodes removed.
We show that many centrality measures produce similar targeted attacks in both static and time-varying networks and that a combination of degree centrality and eigenvector centrality may be enough to evaluate the worst-case behavior of static networks: even small subsets of highly connected nodes act as a bottleneck in the static or temporal information flow, becoming critical weak points of the entire system. We also study the robustness envelope and targeted attack responses of static networks that are rewired to have high and low degree assortativities, discovering that moderate assortativity increases confer more robustness against targeted attacks, whilst moderate decreases confer more robustness against random uniform attacks. In time-varying randomly generated networks, where all the nodes have similar properties, we show that random errors and intelligent attacks exhibit similar behavior. However, cost considerations make network providers less inclined to take robustness measures against failures that are unlikely to manifest, such as several failures coinciding simultaneously.
Considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region, that is, a part of the network that can be enclosed by a given elementary figure of predetermined size, whose destruction would lead to the highest network disruption. We determine that the problem is polynomially solvable and propose appropriate algorithms. In addition, we consider region-aware network augmentation to decrease the impact of a regional failure. We subsequently address the region-disjoint paths problem, which asks for two paths with minimum total weight between a source and a destination that cannot both be cut by a single regional failure of a given diameter (unless that failure includes the source and the destination). We prove that deciding whether region-disjoint paths exist is NP-hard and propose a heuristic region-disjoint paths algorithm.
Defining an optimal protection strategy against viruses, spam propagation or any other kind of contamination process is an important feature for designing new networks and architectures. The first approach is network adaptation, the interplay between disease dynamics on a network and the dynamics of the topology. A continuous-time adaptive Susceptible-Infectious-Susceptible (ASIS) model is introduced in order to investigate this interaction, where a susceptible node avoids infections by breaking its links to its infected neighbors, while it enhances the connections with other susceptible nodes by creating links to them.
When the initial topology of the network is a complete graph, an exact solution for the average metastable-state fraction of infected nodes is derived without resorting to any mean-field approximation. A linear scaling law of the epidemic threshold as a function of the effective link-breaking rate is found. The metastable-state topology shows high connectivity and low modularity in two cases: (i) a "strongly adaptive" region with a very high effective spreading rate, and (ii) a "weakly adaptive" region with a very low effective spreading rate. These two regions are separated from the other half-open, elliptical-like regions of low connectivity and high modularity in a contour-line-like way. Our results indicate that the adaptation of the topology in response to disease dynamics suppresses the infection, while it promotes the evolution of the network towards a topology that exhibits assortative mixing, a modular structure and a binomial-like degree distribution. In the second approach, we consider decentralized optimal protection strategies when a virus is propagating over a network through a SIS epidemic process. By assuming that each node in the network can decide to protect itself from infection at a constant cost, we model our system using a game-theoretic framework. We find pure and mixed equilibria, and the Price of Anarchy (PoA), in several network topologies and propose algorithms to compute a pure equilibrium.

[PDF]
[Abstract]

17 

A framework for robustness analysis of road networks for short-term variations in supply
There is a growing awareness that road networks are becoming more and more vulnerable to unforeseen disturbances like incidents, and that measures need to be taken in order to make road networks more robust. In order to do this, the following questions need to be addressed: How is robustness defined? Against which disturbances should the network be made robust? Which factors determine the robustness of a road network? What is the relationship between robustness, travel times and travel time reliability? Which indicators can be used to quantify robustness? How can these indicators be computed? This paper addresses these questions by developing a consistent framework for robustness in which a definition, terms related to robustness, indicators and an evaluation method are included. By doing this, policy makers and transportation analysts are offered a framework to discuss issues related to road network robustness and vulnerability which goes beyond the disconnected definitions, indicators and evaluation methods used so far in the literature. Furthermore, the evaluation method presented for evaluating the robustness of the road network against short-term variations in supply (like incidents) contributes to the problem of designing robust road networks, because it has a relatively short computation time and it takes spillback effects and alternative routes into account.

[PDF]
[Abstract]

18 

The vulnerability of road networks: Now and in the future
Transport networks in major urban areas are becoming more and more vulnerable to unforeseen disturbances, such as incidents. For the near future, we expect an increasing number of incidents with a large impact due to the overall increase in traffic load. In this paper, the hypothesis is tested that, if no measures are taken, the impact of incidents will increase in the future and, therefore, the vulnerability of the road network will increase. It is shown that the current network of The Hague-Rotterdam area in the Netherlands is already vulnerable. If demand increases, the increase in total travel time is more than linear with the increase in demand in the situation without an incident. The impact of incidents also increases when the level of demand increases. This results in the overall conclusion that it is necessary to make the road network more robust.

[PDF]
[Abstract]

19 

Topology of molecular networks

[PDF]

20 

Quantifying the full reliability benefits of road network improvements

[PDF]
