Using wavelet transform self-similarity for effective multiple description video coding

Abstract

Video streaming over unreliable networks requires preventive measures to avoid quality deterioration in the presence of packet losses. These measures introduce redundancy into the transmitted data, which the receiver uses to estimate lost packets from the delivered portions. In this paper, we use the self-similarity property of the discrete wavelet transform (DWT) to minimize this redundancy and improve the fidelity of the delivered video streams in the presence of data loss. Our proposed method decomposes the video into multiple descriptions after applying the DWT. The descriptions are organized so that when one of them is lost during transmission, it can be estimated from the delivered descriptions by exploiting the self-similarity between DWT coefficients. In our experiments, we compare video reconstruction in the presence of data loss affecting one or two descriptions. The experimental results show that our self-similarity-based estimation of missing coefficients improves video quality by 2.14 dB and 7.26 dB when one and two descriptions are lost, respectively. Moreover, the proposed method outperforms the state-of-the-art Forward Error Correction (FEC) approach at higher bit rates.
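
To give a concrete sense of how a lost description might be recovered from delivered DWT coefficients, the following sketch illustrates the general idea under simplified assumptions: a two-level Haar DWT, a hypothetical two-description split into coarse and fine subbands, and a simple replication-based parent-to-child predictor standing in for cross-scale self-similarity. It is an illustrative toy example, not the paper's exact decomposition or estimation scheme.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def estimate_lost_details(parent_band, attenuation=0.5):
    """Estimate a lost finer-scale detail band from its delivered coarser-scale
    'parent' band via cross-scale self-similarity: each parent coefficient
    predicts its 2x2 block of children (here by simple replication scaled by a
    hypothetical attenuation factor)."""
    return np.kron(parent_band, np.ones((2, 2))) * attenuation

# Toy frame and two-level decomposition
frame = np.random.rand(64, 64)
LL1, LH1, HL1, HH1 = haar_dwt2(frame)    # level-1 subbands
LL2, LH2, HL2, HH2 = haar_dwt2(LL1)      # level-2 subbands

# Hypothetical split: Description A carries LL2 and the level-2 details,
# Description B carries the level-1 details. If Description B is lost,
# the level-1 details are estimated from their level-2 parents.
LH1_hat = estimate_lost_details(LH2)
HL1_hat = estimate_lost_details(HL2)
HH1_hat = estimate_lost_details(HH2)
```

In this toy split, the receiver can always rebuild a coarse frame from Description A alone; the self-similarity predictor then fills in an approximation of the missing fine detail, which is the role the paper's redundancy-minimizing estimation plays for lost descriptions.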