1 

Graphs with given diameter maximizing the algebraic connectivity
We propose a class of graphs G<sub>D</sub>(n<sub>1</sub>, n<sub>2</sub>, …, n<sub>D+1</sub>), consisting of a chain of D+1 cliques K<sub>n<sub>1</sub></sub>, K<sub>n<sub>2</sub></sub>, …, K<sub>n<sub>D+1</sub></sub>, where neighboring cliques are fully interconnected. This class of graphs has diameter D and size N = ∑<sub>1≤i≤D+1</sub> n<sub>i</sub>. We prove that this class of graphs can achieve the maximal number of links, the minimum average hopcount and, more interestingly, the maximum of any Laplacian eigenvalue among all graphs with N nodes and diameter D. The algebraic connectivity is the most studied eigenvalue of the Laplacian, because it features many interesting properties. We determine the graph with the largest algebraic connectivity among graphs with N nodes and diameter D ≤ 4. For other diameters, numerically searching for the maximum of any eigenvalue is feasible, because (a) the search space within the class G<sub>D</sub>(n<sub>1</sub>, n<sub>2</sub>, …, n<sub>D+1</sub>) is much smaller than within all graphs with N nodes and diameter D; and (b) we reduce the calculation of the Laplacian spectrum from an N×N matrix to a (D+1)×(D+1) matrix. The maximum of any Laplacian eigenvalue, obtained either theoretically or by numerical search, is applied to (1) investigate the topological features of graphs that maximize different Laplacian eigenvalues; (2) study the correlation between the maximum algebraic connectivity a<sub>max</sub>(N, D) and both N and D; and (3) evaluate two upper bounds on the algebraic connectivity that have been proposed in the literature. © 2010 Elsevier Inc. All rights reserved.

[Abstract]
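To make the construction concrete, the following is a minimal sketch (not from the paper) that builds the Laplacian of a clique chain with the stated interconnection rule and returns its algebraic connectivity, i.e. the second-smallest Laplacian eigenvalue. The helper names are hypothetical; only the graph definition follows the abstract.

```python
import numpy as np

def clique_chain_laplacian(sizes):
    """Laplacian of G_D(n_1, ..., n_{D+1}): a chain of cliques of the
    given sizes, with neighboring cliques fully interconnected."""
    N = sum(sizes)
    # group (clique) index of each node
    groups = [g for g, n in enumerate(sizes) for _ in range(n)]
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            # edge iff the nodes lie in the same or in adjacent cliques
            if abs(groups[i] - groups[j]) <= 1:
                A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(sizes):
    eig = np.linalg.eigvalsh(clique_chain_laplacian(sizes))
    return eig[1]  # second-smallest Laplacian eigenvalue
```

As a sanity check, taking every n<sub>i</sub> = 1 yields the path graph on D+1 nodes, and a single clique (D = 0) yields K<sub>N</sub>, whose algebraic connectivity is N.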

2 

Rate adaptation using acknowledgement feedback: Throughput upper bounds
We consider packet-by-packet rate adaptation to maximize the throughput over a finite-state Markov channel. To limit the amount of feedback data, we use past packet acknowledgements (ACKs) and past rates as channel state information. It is known that the maximum achievable throughput is computationally prohibitive to determine. Thus, in this paper we derive two upper bounds on the maximum achievable throughput, which are tighter than previously known ones. We compare the upper bounds with a known myopic rate-adaptation policy. Numerical studies over a wide range of SNR suggest that the myopic rate-adaptation policy is close to the upper bounds and may be adequate in slowly time-varying channels. © 2008 IEEE.

[Abstract]
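The myopic policy mentioned above can be illustrated on a toy two-state (Gilbert–Elliott) channel: maintain a belief that the channel is in the good state, pick the rate maximizing expected immediate throughput, and update the belief from the ACK/NACK. All channel parameters below are hypothetical, chosen only to make the sketch runnable; they are not from the paper.

```python
import random

# hypothetical two-state channel (good = 0, bad = 1)
P = [[0.95, 0.05], [0.10, 0.90]]               # state transition matrix
RATES = [1.0, 2.0]                             # available transmission rates
SUCC = {1.0: (0.99, 0.70), 2.0: (0.90, 0.10)}  # success prob in (good, bad)

def myopic_rate(b):
    # choose the rate maximizing expected immediate throughput given belief b
    return max(RATES, key=lambda r: r * (b * SUCC[r][0] + (1 - b) * SUCC[r][1]))

def update_belief(b, rate, ack):
    # Bayes update from the ACK/NACK, then one channel transition step
    pg = SUCC[rate][0] if ack else 1 - SUCC[rate][0]
    pb = SUCC[rate][1] if ack else 1 - SUCC[rate][1]
    post = b * pg / (b * pg + (1 - b) * pb)
    return post * P[0][0] + (1 - post) * P[1][0]

def simulate(T=10000, seed=0):
    rng = random.Random(seed)
    state, b, thr = 0, 0.5, 0.0
    for _ in range(T):
        r = myopic_rate(b)
        ack = rng.random() < SUCC[r][state]
        thr += r if ack else 0.0
        b = update_belief(b, r, ack)
        state = 0 if rng.random() < P[state][0] else 1
    return thr / T
```

The policy is myopic because it ignores how the chosen rate affects the informativeness of future ACK feedback; the abstract's point is that, numerically, this loss appears small in slowly varying channels.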

3 

M/M/∞ transience: Tail asymptotics of congestion periods
The c-congestion period, defined as a time interval in which the number of customers is larger than c all the time, is a key quantity in the design of communication networks. Particularly in the setting of M/M/∞ systems, the analysis of the duration of the congestion period, D<sub>c</sub>, has attracted substantial attention; related quantities have been studied as well, such as the total area A<sub>c</sub> above c, and the number of arrived customers N<sub>c</sub> during a congestion period. The Laplace transforms of these three random variables are known, as are explicit formulae for their moments; this article addresses the corresponding tail asymptotics. Our work addresses the following topics. In the so-called many-flows scaling, we show that the tail asymptotics are essentially exponential in the scaling parameter. The proof techniques stem from large-deviations theory; we also identify the most likely way in which the event under consideration occurs. In the same scaling, we approximate the model by its Gaussian counterpart. Specializing to our specific model, we show that the (fairly abstract) sample-path large-deviations theorem for Gaussian processes, viz. the generalized version of Schilder's theorem, can be written in a considerably more explicit way. Relying on this result, we derive the tail asymptotics for the Gaussian counterpart. Then we use change-of-measure arguments to find upper bounds, uniform in the model parameters, on the probabilities of interest. These changes of measure are applied to devise a number of importance-sampling schemes for fast simulation of rare-event probabilities. They turn out to yield a substantial speed-up in simulation effort, compared to naive, direct simulations.

[Abstract]
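For orientation, a direct (naive) simulation of D<sub>c</sub>, the quantity whose tail the article analyzes, is straightforward: simulate the M/M/∞ birth–death chain and record excursions above level c. The parameters below are illustrative, not from the article; this is exactly the kind of direct simulation that the importance-sampling schemes are designed to speed up.

```python
import random

def mean_congestion_period(lam=10.0, mu=1.0, c=12, n_periods=2000, seed=1):
    """Estimate E[D_c] for an M/M/infinity queue (arrival rate lam,
    per-customer service rate mu) by direct simulation of the chain."""
    rng = random.Random(seed)
    n, t = 0, 0.0                 # current population, current time
    start, durations = None, []
    while len(durations) < n_periods:
        rate = lam + n * mu       # total event rate in state n
        t += rng.expovariate(rate)
        if rng.random() < lam / rate:
            n += 1                # arrival
            if n == c + 1 and start is None:
                start = t         # congestion period begins: upcrossing of c
        else:
            n -= 1                # departure (service rate is n * mu)
            if n == c and start is not None:
                durations.append(t - start)  # period ends: back at level c
                start = None
    return sum(durations) / len(durations)
```

Direct simulation works here because with these parameters congestion above c is not rare; for genuinely rare events (large c, or the many-flows scaling), the hit rate collapses and the change-of-measure-based importance sampling of the article becomes necessary.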
