1 

Availability of communication networks: redundancy increases availability, but the cost remains unclear
In recent years, the media have reported ever more frequently on outages and vulnerabilities of (mobile) communication networks. In this article we sketch several developments that make it challenging to keep our communication networks sufficiently available and robust. Using a worked example, we show how redundancy can increase the availability of communication networks.


2 

Graph measures and network robustness
Network robustness research aims at finding a measure to quantify network robustness. Once such a measure has been established, we will be able to compare networks, to improve existing networks and to design new networks that continue to perform well when they are subject to failures or attacks. In this paper we survey a large number of robustness measures on simple, undirected and unweighted graphs, in order to offer network administrators a tool to evaluate and improve the robustness of their networks. The measures discussed in this paper are based on the concepts of connectivity (including reliability polynomials), distance, betweenness and clustering. Some other measures are notions from spectral graph theory; more precisely, they are functions of the Laplacian eigenvalues. In addition to surveying these graph measures, the paper also contains a discussion of their suitability as measures of topological network robustness.
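As a minimal illustration of the spectral measures surveyed above, the following sketch (illustrative, not code from the paper) computes the Laplacian eigenvalues of a small graph with NumPy; the second-smallest eigenvalue, the algebraic connectivity, is positive exactly when the graph is connected.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the Laplacian L = D - A, sorted ascending."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

# 4-node ring (cycle C4): the Laplacian eigenvalues are 0, 2, 2, 4.
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
mu = laplacian_spectrum(ring)
algebraic_connectivity = mu[1]   # second-smallest eigenvalue
print(algebraic_connectivity)    # > 0 iff the graph is connected
```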


3 

Impact of Gaming during Channel Zapping on Quality of Experience
This paper belongs to a research program started in 2006, dealing with Quality of Experience (QoE) aspects of channel zapping. The program, which has relevance for the broader topics of Quality of Service and Next Generation Networks, started with quantifying the QoE, expressed as a so-called Mean Opinion Score (MOS), for situations in which a black screen is visible during channel zapping. Based upon the observation that the QoE is (possibly) increased by showing information while the user waits for the target channel to appear, the program continued with assessing the QoE in case advertisements are shown during channel switching. In this paper, we quantify the impact on QoE when offering users an interactive game during channel zapping. Our subjective experiment shows that for zapping times greater than 2.25 seconds, offering games during zapping instead of the usual black screen leads to a better QoE. For zapping times larger than about 3 seconds, the MOS for the ‘game’ scenario is larger than 3.5, indicating the onset of acceptable quality. For zapping times below 1 second, the MOS for the ‘game’ scenario is very low, indicating that it is a bad idea to show games for such short zapping times. The scenario in which advertisements are shown during zapping is always outperformed by either the ‘black screen’ or ‘game’ scenario. For small zapping times, the ease of play is much too low. For instance, for a zapping time of 1 second, only 18% of the test subjects managed to play the game in time. For zapping times of 3 seconds the ease of play becomes higher than 80%. At zapping times of 5 seconds, the game score is as high as 82%.


4 

Sales bottlenecks and their effect on profit
This study introduces the term sales bottleneck, defined as a stage in a total production or service delivery process that limits sales. After analyzing the suitability of traditional methods to find sales bottlenecks, the study proposes the bottleneck accounting model as a method to determine sales bottlenecks and calculate the effect of each of these bottlenecks on profit. Analytical verification shows that this method finds all sales bottlenecks and determines the exact effect on profit. Finally, the method’s effectiveness is tested in practice by means of a case study performed within the Rabobank.


5 

The minimal spectral radius of graphs with a given diameter
The spectral radius of a graph (i.e., the largest eigenvalue of its adjacency matrix) plays an important role in modeling virus propagation in networks. In fact, the smaller the spectral radius, the larger the robustness of a network against the spread of viruses. Among all connected graphs on n nodes, the path Pn has minimal spectral radius. However, its diameter D, i.e., the maximum number of hops between any pair of nodes in the graph, is the largest possible, namely D = n − 1. In general, communication networks are designed such that the diameter is small, because the larger the number of nodes traversed on a connection, the lower the quality of the service running over the network. This leads us to state the following problem: which connected graph on n nodes and a given diameter D has minimal spectral radius? In this paper we solve this problem explicitly for graphs with diameter D ∈ {1, 2, ⌊n/2⌋, n − 3, n − 2, n − 1}. Moreover, we solve the problem for almost all graphs on at most 20 nodes by a computer search. © 2007 Elsevier Inc. All rights reserved.
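The closed-form spectral radius of the path Pn, namely 2 cos(π/(n+1)), is easy to check numerically; the sketch below (illustrative, not from the paper) does so with NumPy.

```python
import math
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the path graph P_n (diameter n - 1)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

n = 10
rho = max(abs(np.linalg.eigvalsh(path_adjacency(n))))
# Known closed form for the path: rho(P_n) = 2 cos(pi / (n + 1)) < 2
print(rho, 2 * math.cos(math.pi / (n + 1)))
```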


6 

Context-independent centrality measures underestimate the vulnerability of power grids
Power grid vulnerability is a key issue in society. A component failure may trigger cascades of failures across the grid and lead to a large blackout. Within complex network analysis, structural vulnerabilities of power grids have been studied mostly using purely topological approaches, which assume that the flow of power is dictated by shortest paths. However, this fails to capture the real flow characteristics of power grids. We have proposed a flow redistribution mechanism that closely mimics the flow in power grids using the power transfer distribution factor (PTDF). We apply the model to the European high-voltage grid to carry out a comparative study of a number of centrality measures. 'Centrality' gives an indication of the criticality of network components. We also briefly compare the results with other power grid systems to generalise the findings.
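For illustration only, a textbook DC-power-flow PTDF can be computed from the pseudoinverse of the weighted Laplacian of the grid; the sketch below is a generic version of this standard construction, not the paper's model or code.

```python
import numpy as np

def ptdf(num_nodes, lines, susceptances, line, source, sink):
    """DC-power-flow PTDF: fraction of a unit injection at `source`,
    withdrawn at `sink`, that flows over `line` = (i, j)."""
    B = np.zeros((num_nodes, num_nodes))
    for (i, j), b in zip(lines, susceptances):
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    X = np.linalg.pinv(B)           # pseudoinverse of the weighted Laplacian
    i, j = line
    b = susceptances[lines.index(line)]
    return b * (X[i, source] - X[i, sink] - X[j, source] + X[j, sink])

# Triangle with unit susceptances: a unit transfer 0 -> 1 splits 2/3 over the
# direct line and 1/3 over the detour via node 2.
lines = [(0, 1), (0, 2), (2, 1)]
print(ptdf(3, lines, [1.0, 1.0, 1.0], (0, 1), 0, 1))
```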


7 

Perceived quality of channel zapping
The end user experience of service quality is critical to the success of a service provider's IPTV deployment program. A key element in validating IPTV quality of experience (QoE) is how quickly and reliably users can change TV channels, often referred to as channel zapping. Currently there is no knowledge about the explicit relation between zapping time and the user-perceived quality expressed as a Mean Opinion Score (MOS). We have proposed a model in which the MOS depends on the zapping time on a logarithmic scale. In order to validate our model we conducted a number of subjective tests to gain insight into this relation. The tests were performed by 21 test subjects at TNO and at Acreo. It turns out that the correlation between the subjective data and the perceptive model is very high (0.99). Therefore we conclude that our perceptive model is very useful for assessing the perceived quality of zapping. The result is not limited to IPTV but applies to any type of TV service with channel switching times that are not instantaneous, regardless of whether it is terrestrial or satellite based.
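To make the shape of such a logarithmic MOS model concrete, here is an illustrative sketch; the coefficients a and b are placeholders, not the fitted values from the paper, and the MOS is clipped to the usual 1–5 opinion scale.

```python
import math

def mos_zapping(zap_time_s, a=4.0, b=1.0):
    """Illustrative logarithmic MOS model: perceived quality falls with
    the logarithm of the zapping time. a and b are placeholder
    coefficients, not fitted values from the paper."""
    mos = a - b * math.log2(max(zap_time_s, 1e-3))
    return min(5.0, max(1.0, mos))      # clip to the 1..5 MOS scale

for t in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(t, round(mos_zapping(t), 2))
```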


8 

Viral conductance: Quantifying the robustness of networks with respect to spread of epidemics
In this paper, we propose a novel measure, viral conductance (VC), to assess the robustness of complex networks with respect to the spread of SIS epidemics. In contrast to classical measures that assess the robustness of networks based on the epidemic threshold above which an epidemic takes place, the new measure incorporates the fraction of infected nodes at steady state for all possible effective infection strengths. Through examples, we show that VC provides more insight into the robustness of networks than the epidemic threshold does. We also address the paradoxical robustness of Barabási–Albert preferential attachment networks. Even though this class of networks is characterized by a vanishing epidemic threshold, the epidemic requires a high effective infection strength to cause a major outbreak. On the contrary, in homogeneous networks the effective infection strength does not need to be much beyond the epidemic threshold to cause a major outbreak. To overcome computational complexities, we propose a heuristic to compute the VC for large networks with high accuracy. Simulations show that the heuristic gives an accurate approximation of the exact value of the VC. Moreover, we derive upper and lower bounds of the new measure. We also apply the new measure to assess the robustness of different types of network structures, i.e. Watts–Strogatz small-world, Barabási–Albert, correlated preferential attachment, Internet AS-level, and social networks. The extensive simulations show that in Watts–Strogatz small-world networks, an increase in the rewiring probability decreases the robustness of networks. Additionally, VC confirms that irregularity in node degrees decreases the robustness of the network. Furthermore, the new measure reveals insights about design and mitigation strategies for infrastructure and social networks.
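For the special case of a k-regular graph under the mean-field (NIMFA) approximation, the steady-state infected fraction is 1 − s/k for effective curing rate s ≤ k, so the viral conductance (the area under this curve) evaluates to k/2. The sketch below is an assumption-laden illustration of that special case, not the paper's heuristic.

```python
def vc_regular(k, steps=100_000):
    """Viral conductance of a k-regular graph under the NIMFA mean-field
    approximation: integrate the steady-state infected fraction
    y(s) = 1 - s/k over the effective curing rate s in [0, k]
    (midpoint rule). The closed form is k/2."""
    ds = k / steps
    return sum((1 - (i + 0.5) * ds / k) * ds for i in range(steps))

print(vc_regular(4))   # close to 4 / 2 = 2
```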


9 

Virus spread in complete bipartite graphs
In this paper we study the spread of viruses on the complete bipartite graph Km,n. Using mean field theory we first show that the epidemic threshold for this type of graph satisfies Tc = 1/√(mn), hence confirming previous results from the literature. Next, we find an expression for the average number of infected nodes in the steady state. In addition, our model is improved by the introduction of an infection delay. We validate our models by means of simulations. Inspired by simulation results, we analyze the probability distribution of the number of infected nodes in the steady state for the case without infection delay. The mathematical model we obtain predicts the probability distribution very well, in particular for large values of the effective spreading rate. It is also shown that the probabilistic analysis and the mean field theory predict the same average number of infected nodes in the steady state. Finally, we present a heuristic for the prediction of the extinction probability in the first phase of the infection. Simulations show that, for the case without infection delay, this time-dependent heuristic is quite accurate. © 2007 ICST.
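The threshold claim is easy to check numerically: the mean-field epidemic threshold is the reciprocal of the spectral radius, and for Km,n the spectral radius is √(mn). An illustrative sketch (not the paper's code):

```python
import math
import numpy as np

def bipartite_adjacency(m, n):
    """Adjacency matrix of the complete bipartite graph K_{m,n}."""
    A = np.zeros((m + n, m + n))
    A[:m, m:] = 1.0
    A[m:, :m] = 1.0
    return A

m, n = 3, 5
rho = max(abs(np.linalg.eigvalsh(bipartite_adjacency(m, n))))
# Mean-field threshold Tc = 1 / rho, with rho(K_{m,n}) = sqrt(m * n)
print(rho, math.sqrt(m * n), 1 / rho)
```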


10 

Decentralized planning of energy demand for the management of robustness and discomfort
The robustness of smart grids is challenged by unpredictable power peaks or temporal demand oscillations that can cause blackouts and increase supply costs. Planning of demand can mitigate these effects and increase robustness. However, the discomfort consumers experience as a result of improving robustness is usually neglected. This paper introduces a decentralized agent-based approach that quantifies and manages the trade-off between robustness and discomfort under demand planning. Eight plan-selection functions are experimentally evaluated using real data from two operational smart grids. These functions can provide different quality-of-service levels for demand-side energy self-management that capture both robustness and discomfort criteria.


11 

Modeling region-based interconnection for interdependent networks
Various real-world networks interact with and depend on each other. The design of the interconnection between interacting networks is one of the main challenges in achieving a robust interdependent network. Due to cost considerations, network providers are inclined to interconnect nodes that are geographically close. Accordingly, we propose two topologies, the random geographic graph and the relative neighborhood graph, for the design of interconnections in interdependent networks that incorporate the geographic location of nodes. Differing from the one-to-one interconnection studied in the literature, one node in one network can depend on an arbitrary number of nodes in the other network. We derive the average number of interdependent links for the two topologies, which enables their comparison. For the two topologies, we evaluate the impact of the interconnection structure on the robustness of interdependent networks against cascading failures. The two topologies are assessed on the real-world coupled Italian Internet and electric transmission network. Finally, we propose the derivative of the largest mutually connected component with respect to the fraction of failed nodes as a robustness metric. This robustness metric quantifies the damage introduced by a small fraction of initial failures, well before the critical fraction of failures at which the whole network collapses. © 2016 American Physical Society.
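As an illustration of the second topology: the relative neighborhood graph connects two nodes unless some third node is closer to both of them than they are to each other. The brute-force sketch below is a generic O(n³) implementation of that standard definition, not the paper's code.

```python
import math

def relative_neighborhood_graph(points):
    """Relative neighborhood graph: keep edge (i, j) unless a third
    point k is closer to both endpoints than they are to each other."""
    def d(a, b):
        return math.dist(a, b)
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            blocked = any(
                max(d(points[i], points[k]), d(points[j], points[k]))
                < d(points[i], points[j])
                for k in range(n) if k not in (i, j))
            if not blocked:
                edges.append((i, j))
    return edges

# Three collinear points: the middle one blocks the long edge (0, 2).
print(relative_neighborhood_graph([(0, 0), (1, 0), (2, 0)]))  # [(0, 1), (1, 2)]
```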


12 

Structural vulnerability assessment of electric power grids
Cascading failures are the typical cause of blackouts in power grids. The grid topology plays an important role in determining the dynamics of cascading failures in power grids. Measures for vulnerability analysis are crucial to assure a higher level of robustness of power grids. Metrics from complex networks are widely used to investigate grid vulnerability. Yet, these purely topological metrics fail to capture the real behaviour of power grids. This paper proposes a metric, the effective graph resistance, as a vulnerability measure to determine the critical components in a power grid. Unlike existing purely topological measures, the effective graph resistance accounts for the electrical properties of power grids, such as power flow allocation according to Kirchhoff's laws. To demonstrate the applicability of the effective graph resistance, a quantitative vulnerability assessment of the IEEE 118-bus power system is performed. The simulation results verify the effectiveness of the effective graph resistance in identifying the critical transmission lines in a power grid. © 2014 IEEE.
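The effective graph resistance (the sum of pairwise effective resistances) can be computed from the non-zero Laplacian eigenvalues as R_G = N · Σ 1/μ_i; a minimal NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def effective_graph_resistance(adj):
    """R_G = N * sum(1 / mu_i) over the non-zero Laplacian eigenvalues,
    assuming a connected graph (exactly one zero eigenvalue)."""
    A = np.asarray(adj, dtype=float)
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    mu = np.sort(np.linalg.eigvalsh(L))[1:]   # drop the single zero eigenvalue
    return N * np.sum(1.0 / mu)

# 4-node cycle: Laplacian eigenvalues 0, 2, 2, 4, so R_G = 4 * (1/2 + 1/2 + 1/4) = 5
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
print(effective_graph_resistance(ring))   # 5.0
```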


13 

Measuring and controlling unfairness in decentralized planning of energy demand
Demand-side energy management improves robustness and efficiency in Smart Grids. Load adjustment and load shifting are performed to match demand to available supply. These operations come at a discomfort cost for consumers, as their lifestyle is affected when they adjust their demand or shift it in time. Work on the performance of demand-side energy management mainly concerns how robustness is maximized or discomfort is minimized. However, measuring and controlling the distribution of discomfort as perceived by different consumers provides an enriched notion of fairness in demand-side energy management that is missing in current approaches. This paper defines unfairness in demand-side energy management and shows how unfairness can be measured and controlled by software agents that plan energy demand in a decentralized fashion. Experimental evaluation using real demand and survey data from two operational Smart Grid projects confirms these findings. © 2014 IEEE.


14 

Inflection points for network reliability
Given a finite, undirected graph G (possibly with multiple edges), we assume that the vertices are operational, but the edges are each independently operational with probability p. The (all-terminal) reliability, Rel(G,p), of G is the probability that the spanning subgraph of operational edges is connected. It has been conjectured that reliability functions have at most one point of inflection in (0,1). We show that the all-terminal reliability of almost every simple graph of order n has a point of inflection, and that there are indeed infinite families of graphs (both simple and otherwise) with more than one point of inflection. © 2013 Springer Science+Business Media New York.
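For very small graphs, Rel(G,p) can be evaluated directly from its definition by enumerating edge subsets and checking connectivity; an illustrative brute-force sketch (not the authors' method):

```python
from itertools import combinations

def all_terminal_reliability(n, edges, p):
    """Brute-force Rel(G, p): sum the probability of every edge subset
    whose spanning subgraph is connected. Exponential in |edges|,
    so only suitable for tiny graphs."""
    def connected(subset):
        parent = list(range(n))           # union-find over the vertices
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in subset:
            parent[find(u)] = find(v)
        return len({find(i) for i in range(n)}) == 1

    m = len(edges)
    rel = 0.0
    for k in range(m + 1):
        for subset in combinations(edges, k):
            if connected(subset):
                rel += p ** k * (1 - p) ** (m - k)
    return rel

# Triangle K3: Rel(K3, p) = 3 p^2 (1 - p) + p^3 = 3 p^2 - 2 p^3
print(all_terminal_reliability(3, [(0, 1), (1, 2), (0, 2)], 0.5))  # 0.5
```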


15 

Analysing and modelling the interconnected cyber space
The security environment is rapidly changing. Due to increasing dependency on complex critical communication and information systems, society as a whole, but defence organisations in particular, have to rethink, redesign and adapt their defences to be able to confront the challenges of cyber warfare and crime. The literature abounds with examples of recent cyber attacks; we mention only the cyber attack on Estonia in 2007, the distributed DoS attacks following WikiLeaks and the attack of the Stuxnet worm in Iran. These examples clearly underline the need to adapt our (defensive and offensive) strategy to these changes.


16 

Monitoring security and trust
Objective: To gain more insight into the risks involved in the use of ICT services and their relation to the trust of consumers and businesses, it was decided to develop a 'Trust in ICT' monitor. This study aims to take the first necessary step: building a monitoring foundation that makes trust in ICT services measurable over a longer period. In addition to the development of a monitoring framework, this study also aims to provide insight into trust in e-commerce. These results make clear which information is still missing in order to extend the monitor to other cases. The first results can also point to priority areas where extra attention from the government is desirable.


17 

Reliability polynomials crossing more than twice
In this paper we study all-terminal reliability polynomials of networks having the same number of nodes and the same number of links. First we show that the smallest pair of networks whose reliability polynomials allow two crossings has seven nodes and fifteen edges. Then we propose a construction of pairs of graphs whose reliability polynomials exhibit an arbitrary number of crossings. The construction does not depend on multigraphs. We also give concrete examples of pairs of graphs whose reliability polynomials have three, four and five crossings, respectively, and provide the first example of a graph with more than one point of inflection in (0,1).


18 

Heterogeneous protection in regular and complete bipartite networks
We examine the influence of heterogeneous curing rates for an SIS model, used for malware spreading on the Internet, information dissemination in unreliable networks, and propagation of failures in networks. The topology structures considered are the regular graph, which represents homogeneous network structures, and the complete bipartite graph, which represents hierarchical network structures. We find the threshold in a regular graph with m different curing rates. Further, we consider a complete bipartite graph with two curing rates and find the threshold for any distribution of curing rates among nodes. In addition, we consider the optimization problem and show that the minimum sum of the curing rates that satisfies the threshold equation is equal to the number of links in the graph. The optimization problem is simplified by assuming fixed curing rates δ1, δ2 and optimizing the distribution of curing rates among different sets of nodes. © IFIP International Federation for Information Processing 2009.


19 

TCP and web browsing performance in case of bidirectional packet loss
Performance modeling of the Transmission Control Protocol (TCP) has received a lot of attention over the past few years. The most commonly quoted results are approximate formulas for TCP throughput (Padhye et al. (2000) [1]) and document download times (Cardwell et al. (2000) [2]), which are used for the dimensioning of IP networks. However, the existing modeling approaches unanimously assume that packet loss only occurs for packets from the server to the client, whereas in reality packets in the direction from the client to the server may also be dropped. Our simulations with NS-2 show that this bidirectional packet loss may indeed have a strong impact on TCP performance. Motivated by this, we refine the models in Padhye et al. (2000) [1] and Cardwell et al. (2000) [2] by including bidirectional packet loss, also accounting for correlations between packet loss occurrences. Simulations show that the proposed model leads to strong improvements in the accuracy of the TCP performance predictions. In addition we show how our model can be used to predict the quality of experience for web browsing sessions. © 2010 Elsevier B.V. All rights reserved.
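For context, the classical approximation from Padhye et al. [1], which assumes loss only in the server-to-client direction (the very assumption the paper above relaxes), can be sketched as follows; the parameter defaults are illustrative, not taken from the paper.

```python
import math

def tcp_throughput(mss, rtt, p, t0=1.0, b=2):
    """Approximate steady-state TCP throughput (bytes/s), Padhye et al. (2000):
    mss: segment size (bytes), rtt: round-trip time (s),
    p: loss probability (unidirectional, server -> client),
    t0: initial retransmission timeout (s), b: packets per ACK."""
    if p <= 0:
        raise ValueError("formula is only valid for p > 0")
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return mss / denom

print(tcp_throughput(mss=1460, rtt=0.1, p=0.01))
```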


20 

Graphs with given diameter maximizing the algebraic connectivity
We propose a class of graphs G<sub>D</sub>(n<sub>1</sub>,n<sub>2</sub>,⋯,n<sub>D+1</sub>), consisting of a chain of D+1 cliques K<sub>n<sub>1</sub></sub>,K<sub>n<sub>2</sub></sub>,⋯,K<sub>n<sub>D+1</sub></sub>, where neighboring cliques are fully interconnected. The class of graphs has diameter D and size N=∑<sub>1≤i≤D+1</sub>n<sub>i</sub>. We prove that this class of graphs can achieve the maximal number of links, the minimum average hop count and, more interestingly, the maximum of any Laplacian eigenvalue among all graphs with N nodes and diameter D. The algebraic connectivity is the Laplacian eigenvalue that has been studied most, because it features many interesting properties. We determine the graph with the largest algebraic connectivity among graphs with N nodes and diameter D≤4. For other diameters, numerically searching for the maximum of any eigenvalue is feasible, because (a) the search space within the class G<sub>D</sub>(n<sub>1</sub>,n<sub>2</sub>,⋯,n<sub>D+1</sub>) is much smaller than within all graphs with N nodes and diameter D; (b) we reduce the calculation of the Laplacian spectrum from an N×N to a (D+1)×(D+1) matrix. The maximum of any Laplacian eigenvalue, obtained either theoretically or by numerical searching, is applied to (1) investigate the topological features of graphs that maximize different Laplacian eigenvalues; (2) study the correlation between the maximum algebraic connectivity α<sub>max</sub>(N,D) and N as well as D; and (3) evaluate two upper bounds of the algebraic connectivity that have been proposed in the literature. © 2010 Elsevier Inc. All rights reserved.

